qid | question | author | author_id | answer
---|---|---|---|---
265,509 |
<p>Given a (possibly disjoint) region, defined by a discrete set of points, how can I use <code>ListContourPlot[]</code> together with <code>Mesh</code> to highlight a specific area of the plot? For instance, how can I mesh the region where the points are smaller than a certain value?</p>
<p>Here I construct a minimal example where I try to highlight the area where the values of a discrete sample of the function <span class="math-container">$f(x,y) = e^{x^2 - y^2}$</span> are smaller than one.</p>
<pre><code>data = Table[Exp[x^2 - y^2], {x, -1, 1, .01}, {y, -1, 1, .01}];
ListContourPlot[
data
, Contours -> {1.0}
, ContourStyle -> Transparent
, Mesh -> 25
, MeshFunctions -> {#1 + #2 &}
, MeshStyle -> Thick
]
</code></pre>
<p>I also tried using <code>MeshFunctions -> {Piecewise[{{#1 + #2 &, #3 <= 1 &}, {None, #3 > 0 &}}]}</code>, but I had no luck.</p>
<p>I am aware that this can be done for symbolic functions through <code>RegionPlot[]</code>, however I am not sure how to extend this to numerical data.</p>
<p><a href="https://i.stack.imgur.com/RF0im.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RF0im.png" alt="Mesh in ListContourPlot" /></a></p>
|
bmf
| 85,558 |
<p>Excellent replies as usual by @cvgmt and @Bob Hanlon, but I'd like to give it a go using <code>Piecewise</code></p>
<p>With</p>
<pre><code>data = Table[Exp[x^2 - y^2], {x, -1, 1, .01}, {y, -1, 1, .01}];
</code></pre>
<p>The following works nicely I think</p>
<pre><code>ListContourPlot[data,
Contours -> {1.0},
ContourStyle -> Transparent,
Mesh -> 25,
MeshFunctions -> {Piecewise[{{#1 + #2, #3 <= 1}}] &,
Piecewise[{{None, #1 + #2 >= 1}}] &},
MeshStyle -> Thick]
</code></pre>
<p><a href="https://i.stack.imgur.com/dA4aK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dA4aK.png" alt="piecewise" /></a></p>
|
2,698,903 |
<p>So this has caused some confusion. In one exercise one was asked to prove that</p>
<p>$$\lim_{k \rightarrow \infty} \int_A \cos(kt)dt=0$$</p>
<p>where $A \subset [-\pi, \pi]$ is a measurable set.</p>
<p>My initial idea was to take any $a,b \in A$ and then show that:</p>
<p>$$\lim_{k \rightarrow \infty} \int_a^b \cos(kt)\,dt= \lim_{k \rightarrow \infty} \frac{\sin(bk)-\sin(ak)}{k} =0$$</p>
<p>But my instructor said that this wouldn't work out because the lengths of the partitions on $A$ would make my computation impossible. I'm not sure about this.</p>
|
copper.hat
| 27,978 |
<p>Let $f=1_A$, and note that $f \in L^2[-\pi,\pi]$.</p>
<p>Let $\hat{f}_n = {1 \over 2 \pi} \int_{-\pi}^\pi f(t) e^{-int} dt$.</p>
<p>Bessel's inequality gives
$\sum_n |\hat{f}_n|^2 \le \|f\|_2^2$, and so $\hat{f}_n \to 0$.</p>
<p>Another approach would be the Riemann Lebesgue lemma.</p>
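The decay of these integrals is easy to observe numerically. A sketch (the set $A$ here is a union of two intervals, an illustrative assumption; the lemma covers arbitrary measurable sets):

```python
import math

def integral_cos(intervals, k):
    """Exact value of the integral of cos(k t) over a union of disjoint intervals."""
    return sum((math.sin(k * b) - math.sin(k * a)) / k for a, b in intervals)

# A sample measurable set A inside [-pi, pi] (chosen for illustration).
A = [(-1.0, 0.5), (1.0, 2.0)]

for k in (1, 10, 100, 1000):
    print(k, integral_cos(A, k))
# The magnitudes decay like O(1/k), as the Riemann-Lebesgue lemma predicts.
```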
|
1,545,045 |
<p>Is it possible to cut a pentagon into two equal pentagons?</p>
<p>The pentagons are not necessarily convex, as otherwise it would be trivially impossible.</p>
<p>I was given this problem at a contest but cannot figure the solution out, can somebody help?</p>
<p>Edit: By "cut" I mean "describe as a union of two pentagons which intersect only on the boundaries".</p>
|
Empiromancer
| 164,385 |
<p>Imagine pairing people off one group at a time. For the first group, you've got $28$ choose $2$ possible ways to pick a team of $2$ from $28$ people. After you've selected that team, you've got to pick another team of $2$ from the remaining $26$ people, so $26$ choose $2$. Continue on in that manner until you've assigned all the teams, and the total number of possibilities so far is: $${28 \choose 2} \cdot {26 \choose 2} \cdots {2 \choose 2} = \frac{28!}{2! \cdot 26!} \cdot \frac{26!}{2!\cdot 24!} \cdots \frac{2!}{2! \cdot 0!} = \frac{28!}{2^{14}}.$$</p>
<p>This gives you all the different ways you could pick teams if the order you were picking them mattered (i.e., if you were picking a "team 1", "team 2", etc.). Since it doesn't, you want to divide this result by the number of labellings you can give to the same set of $14$ teams, which is $14!$. Hence, the number of ways of splitting 28 people into teams of two is $${28 \choose 2} \cdot {26 \choose 2} \cdots {2 \choose 2} \cdot \frac{1}{14!} = \frac{28! \cdot 26! \cdots 2!}{2^{14} \cdot 26! \cdot 24! \cdots 2! \cdot 14!} = \frac{28!}{2^{14} \cdot 14!}$$</p>
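The two counts can be cross-checked in a few lines (a sketch using Python's `math.comb`):

```python
from math import comb, factorial

def pairings(n):
    """Number of ways to split n people (n even) into unordered teams of two."""
    ordered, left = 1, n
    while left > 0:
        ordered *= comb(left, 2)        # choose the next team of 2
        left -= 2
    return ordered // factorial(n // 2)  # discard the labelling of the teams

# Closed form from the answer: 28! / (2^14 * 14!)
closed = factorial(28) // (2 ** 14 * factorial(14))
print(pairings(28), closed)
```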
|
109,973 |
<p>I am stuck on this problem of financial mathematics, which comes down to this equation:
$$\frac{(1+x)^{8}-1}{x}=11$$</p>
<p>Expanding, I arrive at this eighth-degree equation:
$$x^{8}+8x^{7}+28x^{6}+56x^{5}+70x^{4}+56x^{3}+28x^{2}+9x-11=0$$</p>
<p>But I cannot continue past this. It quickly led me to look for a general way of solving equations of any degree, but I couldn't find anything serious on the internet.</p>
<p>Do you know any simple methods to solve equations of any degree?</p>
|
bgins
| 20,321 |
<p>In financial mathematics, you're looking at a problem of <a href="http://en.wikipedia.org/wiki/Compound_interest" rel="nofollow">compound interest</a> rates (over $\mathbf{n}$ <strong>compoundment periods</strong>), where $100x$ is the percentage interest rate or yield and $\mathbf{x}$ the relative accrual rate <strong>per period</strong>. When $n$ is the number of compoundment periods per year, $\mathbf{nx}$ is the <a href="http://en.wikipedia.org/wiki/Nominal_interest_rate" rel="nofollow">nominal annual interest rate</a> and $\mathbf{f(x)}$ is a unitless quantity <strong>comparing</strong> the <strong>growth over</strong> $\mathbf{n}$ <strong>periods</strong> with the growth over just one period, and is applicable for example on a fixed-rate mortgage when $n=12$. In this context, $\mathbf{x \cdot f(x)}$ is the equivalent yearly rate, also known as annual percentage rate (<strong>APR</strong>), annual equivalent rate (<strong>AER</strong>), and other various combinations of
$$
\text{annual/-}
\quad
\text{effective/equivalent/-}
\quad
\text{interest/percentage/-}
\quad
\text{rate/yield}
$$
(<em>i.e. about 42 different wordings if we require at least one of the words 'annual', 'effective' or 'equivalent', to be clear what we're talking about, but allow their order to be interchanged if using two</em>).</p>
<p>Usually, the problem you are given does not have the $x$ in the denominator, making the problem easily solvable with <a href="http://en.wikipedia.org/wiki/Compound_interest#Simplified_calculation" rel="nofollow">logarithms</a> and a ratio of future and present values. Most financial calculators have a way to solve this directly. There are surely good web resources as well. One could program this in Javascript for use offline; someone probably already has.
Without such a resource, or if programming it yourself, here's what you'd <strong>need</strong> (<em>and don't need</em>) to know about $f(x)$.</p>
<p><strong>First</strong> some of the many alternate algebraic forms
(for $x\in\mathbb{R}$ and $n\in\mathbb{Z}$ both positive):
$$
\eqalign{
f(x)
&=\frac{(1+x)^n-1}{x}
=\sum_{k=1}^n
\href{http://en.wikipedia.org/wiki/Binomial_coefficient}{\binom{n}{k}}
x^{k-1}
}
$$
and factorizations (<em>also perhaps of interest to math majors</em>):
$$
\eqalign{
f(x)
&=\frac{(1+x)^n-1}{(1+x)-1}
=\prod_{k=1}^{n-1}
\left(x+1-
\href{http://en.wikipedia.org/wiki/Root_of_unity}{e^{\frac{2\pi ik}{n}}}
\right)
=\prod_{1<\href{http://en.wikipedia.org/wiki/Divisibility}{d|n}}
\href{http://en.wikipedia.org/wiki/Cyclotomic_polynomial}{\Phi_d}
\left(x+1\right)
\cr
&=\left(x+2\right)^{
\left\lfloor\frac{ n }2\right\rfloor
-\left\lfloor\frac{n-1}2\right\rfloor}
\prod_{k=1}^{\href{http://en.wikipedia.org/wiki/Floor_function}{
\left\lfloor\frac{n-1}2\right\rfloor}}
\left(x^2+\left(2\sin\tfrac{k\pi}{n}\right)^2(x+1)\right)
\cr
}
$$
<strong>The first line</strong> gives the same form you gave as the starting point for this problem, i.e. the relative growth from $n$ compounding periods compared with the growth from one period. The second formula gives an expression for $f(x)$ as a <a href="http://en.wikipedia.org/wiki/Monic_polynomial" rel="nofollow">monic</a> <a href="http://en.wikipedia.org/wiki/Polynomial" rel="nofollow">polynomial</a> of <a href="http://en.wikipedia.org/wiki/Degree_of_a_polynomial" rel="nofollow">degree</a> $n-1$, with the multipliers of the various powers of $x$ known as <a href="http://en.wikipedia.org/wiki/Binomial_coefficient" rel="nofollow">binomial coefficients</a>.</p>
<p><em>The second line</em> gives some alternate representations which are quite meaningful in the <a href="http://en.wikipedia.org/wiki/Complex_number#Complex_plane" rel="nofollow">complex plane</a> and in elementary number theory. These equations say that the roots of the polynomial $f(x)$ lie on a unit circle centered at $-1$ and are equally spaced to form a regular $n$-gon, with the exception of the removed root at $x=0$ in the denominator on the LHS. Each root $e^{\frac{2\pi ik}{n}}-1$ makes an angle of $\frac{2\pi k}{n}$ with the positive real axis as it extends radially from $-1$. The formula on the RHS says that these roots can be grouped by the denominator of $\frac{k}{n}$ in lowest terms.</p>
<p><em>The third line</em> performs a different grouping, into quadratic factors corresponding to pairs of complex conjugate roots, except for a linear factor $x+2$ for the root at $-2$ in case $n$ is even; this last optional bit is performed with the expression
$\left\lfloor\frac{ n }2\right\rfloor
-\left\lfloor\frac{n-1}2\right\rfloor
=\href{http://en.wikipedia.org/wiki/Floor_function#Mod_operator}{(n-1)-
2\left\lfloor\frac{n-1}2\right\rfloor}$,
which gives the remainder of $n-1$ modulo $2$, i.e. it is $0$ or $1$ depending on whether $n$ is odd or even. All coefficients here are again real, and all factors are irreducible over real fields.
An interesting consequence of this, looking at $f(0)$, is that
$$
\prod_{k=1}^{\left\lfloor\frac{n-1}2\right\rfloor}
\sin^2\tfrac{k\pi}{n}
=\frac{n}{2^{n-1}}.
$$</p>
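That product identity is easy to verify numerically; a quick sketch checking several values of $n$:

```python
import math

def sin_sq_product(n):
    """Product of sin^2(k*pi/n) over k = 1 .. floor((n-1)/2)."""
    return math.prod(math.sin(k * math.pi / n) ** 2
                     for k in range(1, (n - 1) // 2 + 1))

for n in range(2, 12):
    print(n, sin_sq_product(n), n / 2 ** (n - 1))
# The two columns agree to machine precision.
```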
<p><strong>Next</strong>, $f$ is increasing for $x>0$ and $n>1$,
as can be seen from its derivative:
$$
\eqalign{
f\,'(x)
&=\frac{(x+1)^{n-1}\big((n-1)x-1\big)+1}{x^2}
=\sum_{k=2}^n
\href{http://en.wikipedia.org/wiki/Binomial_coefficient}{\binom{n}{k}}
(k-1)x^{k-2}
}
$$</p>
<p><strong>Extra notes</strong>:</p>
<ul>
<li>$f(x)<\frac{e^{nx}-1}{x}$ for $x>0$; both sides approach $n$ as $x \rightarrow 0$. This follows from observing that $\ln(1+x)$ is increasing and concave with tangent line $y=x$ at the origin, so that $(1+x)^n < e^{nx}$ for $x \ne 0$.</li>
<li>Defining $f(0)=n$ (or taking the binomial power sum as our definition) would make $f$ continuous at $0$, extending the domain to $(-1,\infty)$, since $f(x)$ approaches the derivative of the power function with power $n$ as $x \rightarrow 0$.</li>
<li>For $n=1$, $f(x)$ is also $1$.</li>
<li>$\frac{x}{n}f(\frac{x}{n}) \rightarrow e^x-1$ as $n \rightarrow \infty$.</li>
<li>With $f(0)=n$, the derivative $f\,'(0)=\frac{n(n-1)}{2}$ exists and is continuous.</li>
<li>$f\,'(x)=1$ for $n=2$.</li>
<li>Solving $y=f(x)$ for $x$ given $y$ will have one solution for $y>n$ ($y\ge n$ if we allow $x=0$), and no solutions for $y<n$. There are a host of methods for <a href="http://en.wikipedia.org/wiki/Root-finding_algorithm" rel="nofollow">rootfinding</a>, and some <a href="http://en.wikipedia.org/wiki/Root-finding_algorithm#Finding_roots_of_polynomials" rel="nofollow">specialized</a> ones for polynomials. <a href="http://en.wikipedia.org/wiki/Newton%27s_method" rel="nofollow">Newton's method</a>, the simpler <a href="http://en.wikipedia.org/wiki/Bisection_method" rel="nofollow">bisection method</a>, a combination of bisection and <a href="http://en.wikipedia.org/wiki/Secant_method" rel="nofollow">secant</a> resulting in a simplified version of <a href="http://en.wikipedia.org/wiki/Brent%27s_method" rel="nofollow">Brent's method</a> (as @JM pointed out), or a <a href="http://en.wikipedia.org/wiki/Companion_matrix" rel="nofollow">matrix</a> <a href="http://en.wikipedia.org/wiki/Eigenvalue_algorithm" rel="nofollow">method</a> such as @NickAlger mentions could be used to find $x$. For Newton's method, you would use the target function $F(x)=f(x)-y$, and a starting value such as
$$x_0=\frac3{2n}\left(\sqrt{1+\frac83\left(\frac{y}{n}-1\right)}-1\right)$$
to iteratively find the root of $F$ by repeatedly setting $x_{n+1}=x_n+\Delta x_n$ for
$$
-\Delta x_n=\frac{F(x)}{F\,'(x)}=\frac{f(x)-y}{f\,'(x)}=
\frac{x\left[(1+x)^n-(1+xy)\right]}{1-(1+x)^{n-1}\left[1-(n-1)x\right]}
$$
(where I have omitted the $n$ subscript after each $x$ and
encapsulated the negative sign of $\Delta x_n$ on the LHS for brevity).
This may seem more complicated than bisection, but convergence is much more rapid, meaning fewer iterations are needed to get a precise answer. (<em>The value of $x_0$ above comes from using the upper bound $\frac{e^{nx}-1}{x}$ as an estimate for $f(x)$, expanding $e^{nx}$ in a Taylor series, simplifying, and using the quadratic equation on the lowest remaining terms.</em>)</li>
</ul>
<p><strong>Finally</strong>, in your problem, $n=8$, so $f(x)=y=11>8$ has solution $x \approx 0.08928634$, or $8.93$%.</p>
<p>Here is an example solution using Newton's method and <a href="http://www.sagemath.org/" rel="nofollow">sage</a> (<a href="http://sagenb.org/" rel="nofollow">online</a>), which is correct to about ten places after the third iteration (and to about a hundred after the seventh!):</p>
<pre><code>n = 8
y = 11
x = var('x')  # F and f are assigned below
f = ((1+x)^n - 1) / x
F = x - x * ((1+x)^n - (1+x*y)) / (1 - (1+x)^(n-1) * (1-(n-1)*x))
x = 3/2/n * (sqrt(1 + 8/3*(y/n - 1)) - 1) # x_0
x = x.n(digits=100)
for i in range(10):
    e = (f(x) - y).n(digits=100) # error
    print(i, x.n(), e.n())
    x = (F(x)).n(digits=100) # next estimate
# i x e:
0 0.0776650429449553 -0.452683820096624
1 0.0895542025788610 0.0106779696847534
2 0.0892864798024252 5.56813083089398e-6
3 0.0892863400500602 1.51628362736157e-12
4 0.0892863400500221 1.12440509060997e-25
5 0.0892863400500221 6.18311776700343e-52
6 0.0892863400500221 -1.71448108692341e-99
7 0.0892863400500221 0.000000000000000
8 0.0892863400500221 -2.28597478256455e-100
9 0.0892863400500221 9.14389913025820e-100
</code></pre>
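For readers without sage, here is a plain-Python sketch of the same Newton iteration (ordinary double precision, so it converges to about sixteen digits rather than a hundred):

```python
import math

def f(x, n=8):
    """f(x) = ((1+x)^n - 1) / x, the growth-comparison quantity."""
    return ((1 + x) ** n - 1) / x

def f_prime(x, n=8):
    """Derivative of f, in the closed form quoted in the answer."""
    return ((x + 1) ** (n - 1) * ((n - 1) * x - 1) + 1) / x ** 2

def solve(y, n=8, iters=20):
    """Newton's method for f(x) = y, using the quadratic-based starting value."""
    x = 3 / (2 * n) * (math.sqrt(1 + 8 / 3 * (y / n - 1)) - 1)
    for _ in range(iters):
        x -= (f(x, n) - y) / f_prime(x, n)
    return x

print(solve(11))   # ~0.0892863400500221
```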
|
109,973 |
<p>I am stuck on this problem of financial mathematics, which comes down to this equation:
$$\frac{(1+x)^{8}-1}{x}=11$$</p>
<p>Expanding, I arrive at this eighth-degree equation:
$$x^{8}+8x^{7}+28x^{6}+56x^{5}+70x^{4}+56x^{3}+28x^{2}+9x-11=0$$</p>
<p>But I cannot continue past this. It quickly led me to look for a general way of solving equations of any degree, but I couldn't find anything serious on the internet.</p>
<p>Do you know any simple methods to solve equations of any degree?</p>
|
NoChance
| 15,180 |
<p>I don't know of a universal method to solve any equation. If you have the luxury of choosing any method, and you are not particularly interested in advanced math, you could try plotting the equation, either by hand or with free software on the net, to give you at least a starting solution.</p>
<p>For example, your equation </p>
<p>$\frac{(1+x)^{8}-1}{x}=11$</p>
<p>looks like this when plotted. <img src="https://i.stack.imgur.com/7sFAX.jpg" alt="enter image description here"></p>
<p>Using plots of this sort only shows real solutions (not complex ones). In this case $x=0.089311$ is an approximate solution to the equation (good to 3 digits after the decimal point).</p>
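The plot-then-read-off approach can be mimicked programmatically: scan a grid for a sign change (the "eyeballing" step), then bisect. A sketch:

```python
def g(x):
    """Left side minus right side: ((1+x)^8 - 1)/x - 11."""
    return ((1 + x) ** 8 - 1) / x - 11

# Grid scan stands in for reading the plot: walk until the sign flips.
step = (1.0 - 0.01) / 1000
a = 0.01
while g(a) * g(a + step) > 0:
    a += step
b = a + step

for _ in range(60):            # bisection refines the bracket
    mid = (a + b) / 2
    if g(a) * g(mid) <= 0:
        b = mid
    else:
        a = mid
print((a + b) / 2)   # ~0.0892863, consistent with the plotted estimate
```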
|
2,683,788 |
<blockquote>
<p>An urn contains $nr$ balls numbered $1,2..,n$ in such a way that $r$
balls bear the same number $i$ for each $i=1,2,...n$. $N$ balls are drawn at random without replacement. Find the probability that exactly $m$ of the numbers will appear in the sample.</p>
</blockquote>
<p>Any hints would be great, I tried solving it, finally relented and checked the solution given in the text, I can't seem to understand the working though I get the idea that inclusion-exclusion is the key to solving the problem.</p>
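As a hint-level check, here is a sketch that cross-checks the inclusion-exclusion count (the formula used is the standard textbook form, stated here as an assumption, not quoted from the text) against brute-force enumeration for one small case:

```python
from math import comb
from itertools import combinations

def p_exact(n, r, N, m):
    """P(exactly m distinct numbers appear) via inclusion-exclusion (assumed form)."""
    s = sum((-1) ** k * comb(m, k) * comb((m - k) * r, N) for k in range(m + 1))
    return comb(n, m) * s / comb(n * r, N)

def p_brute(n, r, N, m):
    """Enumerate every draw of N balls and count those showing exactly m numbers."""
    balls = [i for i in range(1, n + 1) for _ in range(r)]
    draws = list(combinations(range(n * r), N))
    hits = sum(1 for d in draws if len({balls[i] for i in d}) == m)
    return hits / len(draws)

print(p_exact(4, 3, 5, 2), p_brute(4, 3, 5, 2))   # both 1/22
```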
|
Parcly Taxel
| 357,390 |
<p>The "trick" of the question's title is probably in specifying the numbers of socks as pairs: Rihanna really has 18 blue and 16 grey socks with no distinction between "left" and "right" of a pair. Now the probability she picks up at least two blue socks in her selection of three <em>without replacement</em> is given by the hypergeometric distribution as
$$\frac{\binom{18}2\binom{16}1+\binom{18}3\binom{16}0}{\binom{34}3}=\frac6{11}$$
which matches the given answer.</p>
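The hypergeometric computation checks out exactly with rational arithmetic; a quick sketch:

```python
from math import comb
from fractions import Fraction

# At least two blue among 3 socks drawn from 18 blue + 16 grey, no replacement.
p = Fraction(comb(18, 2) * comb(16, 1) + comb(18, 3) * comb(16, 0),
             comb(34, 3))
print(p)   # 6/11
```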
|
2,294,585 |
<p>I can't figure out why $((p → q) ∧ (r → ¬q)) → (p ∧ r)$ isn’t a tautology. </p>
<p>I tried solving it like this: </p>
<p>$$((p ∧ ¬p) ∨ (r ∧ q)) ∨ (p ∧ r)$$
resulting in $(T) ∨ (p ∧ r)$ in the end that should result in $T$. What am I doing wrong?</p>
|
Community
| -1 |
<p>Using $a\to b=\lnot a\lor b$,</p>
<p>$$\lnot((\lnot p\lor q)\land(\lnot r\lor \lnot q))\lor(p\land r)$$
$$((p\land \lnot q)\lor(r\land q))\lor(p\land r)$$
$$(p\land(\lnot q\lor r))\lor(r\land q)$$</p>
<p>is falsified with all false.</p>
<hr>
<p>Using bitwise operations and the hexadecimal representation,</p>
<p>$$p=00001111_b=0F_h, q=00110011_b=33_h, r=01010101_b=55_h$$</p>
<p>$$p\to q=F3$$
$$r\to\lnot q=EE$$
$$(p\to q)\land(r\to\lnot q)=E2$$
$$p\land r=05$$
$$(p\to q)\land(r\to\lnot q)\to(p\land r)=1D\ne FF.$$</p>
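Both halves of the answer can be checked mechanically (a sketch; the 8-bit masks mirror the answer's hexadecimal encoding):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Brute-force truth table: the formula is not a tautology.
falsifying = [(p, q, r) for p, q, r in product([False, True], repeat=3)
              if not implies(implies(p, q) and implies(r, not q), p and r)]
print(falsifying)   # includes (False, False, False), the all-false row

# The same check via bitwise operations on 8-bit truth-table masks.
P, Q, R, ALL = 0x0F, 0x33, 0x55, 0xFF
imp = lambda a, b: (~a & ALL) | b
result = imp(imp(P, Q) & imp(R, ~Q & ALL), P & R)
print(hex(result))   # 0x1d, not 0xff
```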
|
3,911,297 |
<p>Chapter 12 - Problem 26)</p>
<blockquote>
<p>Suppose that <span class="math-container">$f(x) > 0$</span> for all <span class="math-container">$x$</span>, and that <span class="math-container">$f$</span> is decreasing. Prove that there is a <em>continuous</em> decreasing function <span class="math-container">$g$</span> such that <span class="math-container">$0 < g(x) \le f(x)$</span> for all <span class="math-container">$x$</span>.</p>
</blockquote>
<p>So this question has already been asked and "solved" on MSE 9 years ago <a href="https://math.stackexchange.com/questions/30777/spivaks-calculus-chapter-12-problem-26">here</a>, but the accepted answer isn't very detailed, and I think it is in fact flawed (or I've just misunderstood it). I tried commenting to open up the question again, but it seems pretty dead now, hence why I'm making this follow up.</p>
<p>The answer says to <em>"make <span class="math-container">$g$</span> piecewise linear with <span class="math-container">$g(n) = f(n+1)$</span>"</em>. Can someone explain what this means exactly? I will write my thoughts below, but it's a lot, so feel free to skip.</p>
<p>My thoughts: Notice that if we try to simply let <span class="math-container">$g(x) = f(x+1)$</span>, then it works perfectly except for the fact that <span class="math-container">$g$</span> may not be continuous, because <span class="math-container">$f$</span> need not be continuous (otherwise we could just let <span class="math-container">$g(x) = f(x)$</span> in that case!). So if we could just modify this <span class="math-container">$g$</span> to make it continuous somehow then we're done.</p>
<p>Fortunately, <span class="math-container">$f$</span> is decreasing on <span class="math-container">$\mathbb{R}$</span>, which means the left and right limits do exist, however they might disagree. This means <span class="math-container">$f$</span> can only have jump discontinuities that jump downwards.</p>
<p>So what if we took all the points in <span class="math-container">$\mathbb{R}$</span> where <span class="math-container">$f$</span> has a jump discontinuity, and just joined lines between them? (I think this is what the answer meant by piecewise linear function?) This would guarantee that <span class="math-container">$g$</span> is continuous, however, this approach has some fixable flaws.</p>
<p>First flaw, for starters, it isn't necessarily true that this <span class="math-container">$g$</span> would be always smaller than <span class="math-container">$f$</span>! For example, consider this picture, where <span class="math-container">$f$</span> is the red function, and <span class="math-container">$g$</span> is the black function:</p>
<p><a href="https://i.stack.imgur.com/WrfB5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WrfB5.png" alt="" /></a></p>
<p>Sure <span class="math-container">$g$</span> is continuous now, but we've lost the <span class="math-container">$g(x) \leq f(x)$</span> property! We can fix this easily by letting <span class="math-container">$g$</span> be the smaller of the piecewise linear function and <span class="math-container">$f$</span>. Then the picture becomes like this:</p>
<p><a href="https://i.stack.imgur.com/2if25.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2if25.png" alt="" /></a></p>
<p>To make this a bit more rigorous, first we need the set of all the points where <span class="math-container">$f$</span> is discontinuous:</p>
<p><span class="math-container">$S = \big\{x: \lim_{y \rightarrow x^-} f(y) \neq \lim_{y \rightarrow x^+} f(y) \big\}$</span></p>
<p>Then let <span class="math-container">$l(x)$</span> be the piecewise linear function joining all points <span class="math-container">$\big(x, \lim_{y \rightarrow x^+} [f(y)] \big)$</span>, where <span class="math-container">$x \in S$</span>.</p>
<p>Then finally let <span class="math-container">$g(x) = \text{Min}\big(f(x), l(x)\big)$</span>.</p>
<p>Now this would work fine, so long as <span class="math-container">$l(x)$</span> is well defined. But must it necessarily be so? I'm not sure, and this is where I'm stuck. For example, what if the set <span class="math-container">$S$</span> contains not isolated points, but an entire interval of points? For example, what if <span class="math-container">$f$</span> is a function that has a jump discontinuity at every point in <span class="math-container">$[0,1]$</span>? Then to construct <span class="math-container">$l(x)$</span>, we'd need to join all these jump discontinuity points in <span class="math-container">$[0,1]$</span>, and it isn't at all obvious that we can do that.</p>
<p>Now you might say that an interval of jump discontinuities is impossible, and you'd be right. However the proof of that comes much much later in the book and is certainly beyond the knowledge of this chapter. But more importantly, even if <span class="math-container">$f$</span> doesn't have an interval of jump discontinuities, there are other ways <span class="math-container">$l(x)$</span> can be questionable.</p>
<p>Consider this monstrous example: <span class="math-container">$f(x) =
\begin{cases}
1-\frac{x}{2^{\lfloor 1 - \log_2(|x|)-1 \rfloor}} & :x \leq \frac{1}{2} \\
\frac{6}{6x+5} & :x > \frac{1}{2}
\end{cases} \Biggr\}$</span></p>
<p>Looks something like this (click <a href="https://www.desmos.com/calculator/j13mpqs9io" rel="nofollow noreferrer">here</a> to view in Desmos): <a href="https://i.stack.imgur.com/3dPdp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3dPdp.png" alt="" /></a></p>
<p>As it turns out, this <span class="math-container">$f$</span> satisfies the question's premises, but it also has the cool property that it contains an infinite number of jump discontinuities in any neighbourhood around 0! As such, in order to construct the piecewise linear function for it, you'd have to join lines between an infinite number of points, and still have a function, which might be possible? But for sure it isn't obvious that it is so...</p>
<p>Those are my thoughts on the problem. So Q1) Is my approach so far in the right direction? Or could it be that I've missed some super simple trick that will make the problem trivial and everything I've said above redundant? Q2) If I am in the right direction, how can I justify taking a linear piecewise function for an infinite number of points in a given interval?</p>
|
Misha Lavrov
| 383,078 |
<p>Well, we could do the same thing more directly: write <span class="math-container">$a$</span> as <span class="math-container">$\frac{(a^{12})^3}{(a^7)^5} = \frac{a^{36}}{a^{35}}$</span>, or as <span class="math-container">$\frac{(a^7)^7}{(a^{12})^4} = \frac{a^{49}}{a^{48}}$</span>, and conclude that it's in <span class="math-container">$\mathbb Q$</span>. Your work probably simplifies to one of these after some substitutions.</p>
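A numerical sanity check of the two rewritings (a sketch; floats, so compare with a tolerance, and the base $a = 1.7$ is an arbitrary choice):

```python
# Both expressions should reproduce a itself: a^36/a^35 = a and a^49/a^48 = a.
a = 1.7
first = (a ** 12) ** 3 / (a ** 7) ** 5    # a^36 / a^35
second = (a ** 7) ** 7 / (a ** 12) ** 4   # a^49 / a^48
print(first, second)   # both ~1.7
```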
|
825,848 |
<p>I'm trying to show that the minimal polynomial of a linear transformation $T:V \to V$ over some field $k$ has the same irreducible factors as the characteristic polynomial of $T$. So if $m = {f_1}^{m_1} ... {f_n}^{m_n}$ then $\chi = {f_1}^{d_1} ... {f_n}^{d_n}$ with $f_i$ irreducible and $m_i \le d_i$.</p>
<p>Now I've managed to prove this using the primary decomposition theorem and then restricting $T$ to $ker({f_i}^{m_i})$ and then using the fact that the minimal polynomial must divide the characteristic polynomial (Cayley-Hamilton) and then the irreducibility of $f_i$ gives us the result.</p>
<p>However I would like to be able to prove this directly using facts about polynomials/fields without relying on the primary decomposition theorem for vector spaces. Is this fact about polynomials true in general?</p>
<p>We know that $m$ divides $\chi$ and so certainly $\chi = {f_1}^{m_1} ... {f_n}^{m_n} \times g$ but then how do we show that $g$ must have only the $f_i$ as its factors? I'm guessing I need to use the fact that they share the same roots. And I'm also guessing that it depends on $k$, i.e. if $k$ is algebraically closed then it is easy because the polynomials split completely into linear factors.</p>
<p>Help is much appreciated,
Thanks</p>
|
xbh
| 514,490 |
<h3>Motivation</h3>
<p><span class="math-container">$ \newcommand{\proofstart}{\mathbf{Proof.}\blacktriangleleft} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\lcm}{lcm}
\newcommand{\bm}{\boldsymbol}
\newcommand {\F}{\Bbb F} $</span>
I was doing a problem and I need the claim in the OP, so i tried to prove this. My reference used the fact that <span class="math-container">$\F$</span> always have a field extension which is algebraic closed, but I have not learned field theory systematically, so I try to use the tools only from basic undergrad math and here is my proof only within the range of <strong>linear algebra</strong> and the theory of <strong>univariate polynomial ring over a field</strong> <span class="math-container">$\F$</span>.</p>
<h3>The proof</h3>
<p><span class="math-container">$\proofstart$</span> We first prove that if <span class="math-container">$m = p^r$</span> where <span class="math-container">$p\in \F[x]$</span> is irreducible over <span class="math-container">$\F$</span>, then <span class="math-container">$\chi= p^n$</span> for some <span class="math-container">$n\geqslant r$</span>. By Cayley-Hamilton theorem, <span class="math-container">$m \mid \chi$</span>. Thus <span class="math-container">$\chi = p^n g$</span> for some <span class="math-container">$n\geqslant r$</span>. If <span class="math-container">$\deg(g)>0$</span> and <span class="math-container">$p \nmid g$</span> then <span class="math-container">$V = \Ker(\chi(\mathcal T)) = \Ker(p(\mathcal T)^n) \oplus \Ker(g(\mathcal T))$</span> by the fact #2. Let these subspaces be <span class="math-container">$V_1,V_2$</span> respectively, then <span class="math-container">$\mathcal T|_{V_1}$</span> is annihilated by <span class="math-container">$p^n$</span>, <span class="math-container">$\mathcal T|_{V_2}$</span> is annihilated by <span class="math-container">$g$</span>. Thus the minimal polynomial of <span class="math-container">$\mathcal T|_{V_1}$</span> is <span class="math-container">$p^k$</span> for some <span class="math-container">$k\leqslant n$</span>, and that of <span class="math-container">$\mathcal T|_{V_2}$</span> is some <span class="math-container">$d$</span> that divides <span class="math-container">$g$</span>. Thus <span class="math-container">$m = p^k d$</span> by the fact #4. Now by the fact #1, <span class="math-container">$m=p^n$</span> is the uniqueness decomposition, so we have <span class="math-container">$p \mid d$</span>, then <span class="math-container">$p\mid g$</span> which is a contradiction. Therefore <span class="math-container">$\deg(g) =0$</span>. 
Since <span class="math-container">$m, \chi$</span> are both monic, <span class="math-container">$g = 1$</span>, then <span class="math-container">$\chi = p^n$</span>.</p>
<p>Now for the general case, suppose <span class="math-container">$\chi =\prod _1^s p_j^{n_j}$</span> where <span class="math-container">$p_j$</span> are distinct irreducible monic polynomials. Then by the C-H theorem and the fact #3
<span class="math-container">$$
V = \Ker(\chi(\mathcal T)) = \bigoplus_1^s \Ker(p_j(\mathcal T)^{n_j}).
$$</span>
Let <span class="math-container">$V_j = \Ker(p_j(\mathcal T)^{n_j})$</span>. Then <span class="math-container">$V_j$</span> is invariant under <span class="math-container">$\mathcal T$</span>. Then <span class="math-container">$m_j$</span>, the minimal polynomial of <span class="math-container">$\mathcal T|_{V_j}$</span> is <span class="math-container">$p_j^{r_j}$</span> for some <span class="math-container">$r_j \leqslant n_j$</span>, since <span class="math-container">$p_j$</span> is irreducible and <span class="math-container">$m_j \mid p_j^{n_j}$</span>. Thus by #4, the minimal polynomial <span class="math-container">$m$</span> of <span class="math-container">$\mathcal T$</span> is <span class="math-container">$m = \lcm(p_j^{r_j})_1^s = \prod_1^s p_j^{r_j}$</span>.</p>
<p>For the converse, suppose <span class="math-container">$m = \prod_1^s p_j^{r_j}$</span>, then by the fact #3
<span class="math-container">$$
V = \Ker(m(\mathcal T)) = \bigoplus_1^s \Ker(p_j(\mathcal T)^{r_j}).
$$</span>
Let <span class="math-container">$W_j = \Ker(p_j(\mathcal T)^{r_j})$</span>, then <span class="math-container">$W_j$</span> is invariant under <span class="math-container">$\mathcal T$</span>. Then the minimal polynomial of <span class="math-container">$\mathcal T|_{W_j}$</span> is <span class="math-container">$\mu_j = p_j^{k_j}$</span> for some <span class="math-container">$k_j \leqslant r_j$</span>. Now apply the claim at the very beginning to <span class="math-container">$\mathcal T|_{W_j}$</span> and <span class="math-container">$\mu_j$</span>, we have that the characteristic polynomial <span class="math-container">$\chi_j$</span> of <span class="math-container">$\mathcal T|_{W_j}$</span> is <span class="math-container">$\chi_j = p_j^{n_j}$</span> for some <span class="math-container">$n_j \geqslant k_j$</span>. Noe use the fact #5, we have <span class="math-container">$\chi =\prod_1^s p_j^{n_j}$</span>.</p>
<p>Conclusively, <span class="math-container">$\chi$</span> and <span class="math-container">$m$</span> always share the same irreducible factors. <span class="math-container">$\blacktriangleright$</span></p>
<h3>Appendices: the fact I used</h3>
<p>First we should admit the following fact:</p>
<blockquote>
<p>1.[<strong>Uniqueness Decomposition of Polynomials</strong>] In <span class="math-container">$\Bbb F [x]$</span> where <span class="math-container">$\Bbb F $</span> is a field, every <span class="math-container">$f \in \Bbb F[x]$</span> of degree <span class="math-container">$\geqslant 1$</span> could be uniquely decomposed as a product of finitely many irreducible polynomials in <span class="math-container">$\F[x]$</span>.</p>
</blockquote>
<p>Now we need a decomposition theorem.</p>
<blockquote>
<p>2.Suppose <span class="math-container">$f=gh$</span> where <span class="math-container">$f ,g,h \in \F[x]$</span> and <span class="math-container">$g,h$</span> are coprime and of degrees both <span class="math-container">$>0$</span>, then given an <span class="math-container">$\F$</span>-linear space <span class="math-container">$V$</span>, for all <span class="math-container">$\mathcal T\in \mathcal L(V) [=\mathrm {End}(V)]$</span>, <span class="math-container">$$ \Ker(f(\mathcal T)) = \Ker(g(\mathcal T)) \bigoplus \Ker(h(\mathcal T)). $$</span></p>
</blockquote>
<p><span class="math-container">$\proofstart$</span> Clearly <span class="math-container">$\Ker(g(\mathcal T )), \Ker(h(\mathcal T)) \subseteq \Ker(f(\mathcal T ))$</span>, hence <span class="math-container">$\Ker(g(\mathcal T)) + \Ker(h(\mathcal T)) \subseteq \Ker(f(\mathcal T))$</span>. For the converse, since <span class="math-container">$g,h$</span> are coprime, there exists <span class="math-container">$u,v\in \F[x]$</span> s.t. <span class="math-container">$ug+vh =1$</span>. Thus <span class="math-container">$u(\mathcal T)g(\mathcal T) + v(\mathcal T)h(\mathcal T) = \mathcal I$</span>. Then for every <span class="math-container">$\alpha \in \Ker(f(\mathcal T))$</span>,
<span class="math-container">$$
\alpha = u(\mathcal T)g(\mathcal T)\alpha + v(\mathcal T)h(\mathcal T)\alpha.
$$</span>
Since <span class="math-container">$u(\mathcal T) g (\mathcal T)\alpha \in \Ker (h(\mathcal T))$</span>, <span class="math-container">$v(\mathcal T)h(\mathcal T)\alpha \in \Ker(g(\mathcal T))$</span>, we have
<span class="math-container">$$
\Ker(f(\mathcal T)) = \Ker(g(\mathcal T)) + \Ker(h(\mathcal T)).
$$</span>
To prove the sum is direct, suppose <span class="math-container">$g(\mathcal T)\beta = h(\mathcal T)\beta = 0$</span>, then
<span class="math-container">$$
0 = u(\mathcal T) g(\mathcal T) \beta + v(\mathcal T)h(\mathcal T)\beta = \beta,
$$</span>
thus the sum is direct. <span class="math-container">$\blacktriangleright$</span></p>
<p>This could be generalized as:</p>
<blockquote>
<p>3.Suppose <span class="math-container">$f\in \F[x]$</span> has a decomposition <span class="math-container">$f = \prod_1^s f_j$</span> where <span class="math-container">$(f_j)$</span> are pairwise coprime, then for <span class="math-container">$\mathcal T\in \mathcal L(V)$</span>, <span class="math-container">$$ \Ker(f(\mathcal T)) = \bigoplus_1^s \Ker(f_j (\mathcal T)).$$</span></p>
</blockquote>
<p>This could be proved by induction on <span class="math-container">$s$</span>.</p>
<p>Now for the minimal polynomial, we have</p>
<blockquote>
<p>4.Suppose <span class="math-container">$V$</span> is an <span class="math-container">$\F$</span>-linear space, <span class="math-container">$\mathcal T \in \mathcal L(V)$</span>. If <span class="math-container">$V$</span> could be decomposed as <span class="math-container">$V = \bigoplus_1^s V_j$</span> where every <span class="math-container">$V_j$</span> is invariant under <span class="math-container">$\mathcal T$</span>, then the minimal polynomial <span class="math-container">$m\in \F[x]$</span> of <span class="math-container">$\mathcal T$</span> and the minimal polynomials <span class="math-container">$m_j$</span> of <span class="math-container">$\mathcal T|_{V_j}$</span> satisfy
<span class="math-container">$$
m = \mathrm {lcm}(m_1, \dots, m_s).
$$</span></p>
</blockquote>
<p><span class="math-container">$\proofstart$</span> Suppose <span class="math-container">$p(\mathcal T) = \mathcal O$</span> for <span class="math-container">$p \in \F[x]$</span>. Then <span class="math-container">$p(\mathcal T|_{V_j}) = \mathcal O$</span> as well. Therefore <span class="math-container">$m_j\mid p$</span> for all <span class="math-container">$j$</span>, thus <span class="math-container">$g = \lcm(m_j)_1^s$</span> satisfies <span class="math-container">$g \mid p$</span>. Specifically <span class="math-container">$m(\mathcal T) = \mathcal O$</span>, so <span class="math-container">$g \mid m$</span>. Let <span class="math-container">$g = m_j d_j$</span> for <span class="math-container">$j =1, \dots, s$</span>. By the direct sum expression, each <span class="math-container">$\alpha$</span> is actually <span class="math-container">$\alpha = \sum_1^s \alpha_j$</span> for <span class="math-container">$\alpha_j \in V_j$</span>. Thus
<span class="math-container">$$
g(\mathcal T)\alpha = \sum_1^s(d_j \cdot m_j)(\mathcal T) \alpha_j = \sum_1^s (d_j\cdot m_j) (\mathcal T|_{V_j}) \alpha_j =0,
$$</span>
i.e. <span class="math-container">$g(\mathcal T) = \mathcal O$</span>. Thus <span class="math-container">$m \mid g$</span> by the minimal property of <span class="math-container">$m$</span>. Since <span class="math-container">$m,g$</span> are monic, <span class="math-container">$m =g.\blacktriangleright$</span></p>
<blockquote>
<p>5.Under the same assumption of #4, suppose <span class="math-container">$\chi_j$</span> is the characteristic polynomial of <span class="math-container">$\mathcal T|_{V_j}$</span>, then <span class="math-container">$\chi = \prod_1^s \chi_j$</span>.</p>
</blockquote>
<p><span class="math-container">$\proofstart$</span> By the decomposition of <span class="math-container">$V$</span>, we could find a basis of <span class="math-container">$V$</span> under which <span class="math-container">$\mathcal T$</span> has a block diagonal matrix <span class="math-container">$\bm T =\mathrm {diag}(\bm T_1, \dots, \bm T_s)$</span>, where <span class="math-container">$\bm T_j$</span> is a <span class="math-container">$\dim(V_j) \times \dim(V_j)$</span> matrix over <span class="math-container">$\F$</span>. Then we actually have
<span class="math-container">$$
\chi(x) = \det (x\bm I - \bm T) = \prod_1^s \det(x \bm I - \bm T_j) = \prod_1^s \chi_j(x),
$$</span>
as desired. <span class="math-container">$\blacktriangleright$</span></p>
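<p>As a concrete sanity check of #5 (an illustration, not part of the proof), one can verify numerically that the characteristic polynomial of a block-diagonal matrix is the product of the blocks' characteristic polynomials. The particular matrices below are arbitrary choices of mine:</p>

```python
import numpy as np

# Two diagonal blocks with known characteristic polynomials:
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])   # chi_A(x) = x^2 - 3x + 2
B = np.array([[5.0]])         # chi_B(x) = x - 5

# Assemble the block-diagonal matrix T = diag(A, B)
T = np.block([[A, np.zeros((2, 1))],
              [np.zeros((1, 2)), B]])

# np.poly returns the coefficients of the monic characteristic polynomial
chi_T = np.poly(T)
chi_prod = np.polymul(np.poly(A), np.poly(B))
assert np.allclose(chi_T, chi_prod)   # chi = chi_1 * chi_2, as in #5
print(chi_T)
```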
|
1,139,723 |
<p>Given the differential equation $\frac{dy}{dt}+a(t)y=f(t)$ </p>
<p>with a(t) and f(t) continuous for:</p>
<p>$-\infty<t<\infty$</p>
<p>$a(t) \ge c>0$</p>
<p>$\lim_{t\to\infty}f(t)=0$</p>
<p>Show that every solution tends to 0 as t approaches infinity.</p>
<p>$$\frac{dy}{dt}+a(t)y=f(t)$$</p>
<p>$$μ(t)=e^{\int a(t)dt}$$
$$μ(t)=e^{\frac{1}{2}a^2(t)}$$</p>
<p>Multiplying both sides of the equation by $μ(t)$:</p>
<p>$$\frac{dy}{dt}+a(t)y=f(t)$$
$$μ(t)[\frac{dy}{dt}+a(t)y]=μ(t)[f(t)]$$
$$e^{\frac{1}{2}a^2(t)}[\frac{dy}{dt}+a(t)y]=e^{\frac{1}{2}a^2(t)}[f(t)]$$
$$\frac{d}{dt}e^{\frac{1}{2}a^2(t)}y=e^{\frac{1}{2}a^2(t)}[f(t)]$$
$$e^{\frac{1}{2}a^2(t)}y=\int e^{\frac{1}{2}a^2(t)}[f(t)]dt$$ </p>
<p>Did I make a mistake in the integrating factor step? I am not sure how to proceed..</p>
|
Winther
| 147,873 |
<p>I will give a solution for the case $f \geq 0$. First we note that the solution can be written (since $y'+ay = f \implies (ye^{\mu})' = fe^{\mu}$ and $y(0)=0$)</p>
<p>$$y(t) = \int_0^t f(t')e^{\mu(t')-\mu(t)} dt'$$</p>
<p>where $\mu(t) = \int_0^t a(t')dt'$. This shows that $y(t)\geq 0$ for all $t\geq 0$. Now using $a(t)\geq c > 0$ we find</p>
<p>$$\mu(t) = \int_{0}^t a(t')dt' \geq ct$$</p>
<p>Using this we can write the ODE as</p>
<p>$$y' = f - ay \leq f - ct y$$</p>
<p>and apply <a href="http://en.wikipedia.org/wiki/Gronwall%27s_inequality" rel="nofollow">Gronwall's inequality</a> to get</p>
<p>$$y(t) \leq \left(\int_0^t f(t')dt'\right)e^{\int_0^t (-ct')dt'} = \left(\int_0^t f(t')dt'\right) e^{-ct^2/2}$$</p>
<p>Since $f$ is continuous and $\lim_{t\to \infty} f(t) = 0$ we have that $f$ is bounded (by $M$) on $\mathbb{R}$ and it follows that</p>
<p>$$0 \leq \lim_{t\to\infty}y(t) \leq \lim_{t\to\infty}Mt e^{-ct^2/2} = 0$$</p>
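<p>To see the conclusion numerically (an illustration only, not part of the proof), take the hypothetical choices $a(t)=1$ (so $c=1$) and $f(t)=e^{-t}$, both satisfying the hypotheses; with $y(0)=0$ the exact solution is $y(t)=te^{-t}$, which indeed tends to $0$. A simple explicit Euler sketch:</p>

```python
import math

def euler(a, f, t_end, h=1e-3):
    """Explicit Euler for y' = f(t) - a(t)*y with y(0) = 0."""
    t, y = 0.0, 0.0
    for _ in range(int(t_end / h)):
        y += h * (f(t) - a(t) * y)
        t += h
    return y

a = lambda t: 1.0            # satisfies a(t) >= c = 1 > 0
f = lambda t: math.exp(-t)   # continuous with f(t) -> 0 as t -> infinity

for T in (1.0, 5.0, 10.0, 20.0):
    print(T, euler(a, f, T), T * math.exp(-T))  # numeric vs exact t*e^{-t}
```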
|
1,845,448 |
<p>I know that $(AB)^T$ = $B^TA^T$ and that $(A^T)^{-1}= (A^{-1})^T$ but couldn't reach any convincing proof. Can someone demonstrate these identities?</p>
|
GAVD
| 255,061 |
<p>One way to see this is to compare the entries of the two matrices $(AB)^t$ and $B^tA^t$.</p>
<p>Firstly, we have $$(AB)_{ij} = \sum_{k} a_{ik}b_{kj}.$$</p>
<p>When we transpose a matrix, we switch the rows and the columns:
$$(AB)^t_{ij} = (AB)_{ji} = \sum_{k} a_{jk}b_{ki}.$$</p>
<p>Now let us look at the $(i,j)$ entry of $B^tA^t$:</p>
<p>$$(B^tA^t)_{ij} = \sum_{k} B^t_{ik}A^t_{kj} = \sum_{k} b_{ki}a_{jk}.$$</p>
<p>The two sums agree for all $i,j$, so $(AB)^t = B^tA^t$.</p>
<p>Finally, you can apply this to see $(A^t)^{-1} = (A^{-1})^t$.</p>
<p>$$I = I^t = (AA^{-1})^t = (A^{-1})^tA^t.$$ </p>
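<p>For intuition, both identities are easy to spot-check numerically (a check on random matrices, not a proof):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))   # random Gaussian matrices are invertible almost surely
B = rng.normal(size=(3, 3))

# (AB)^t = B^t A^t
assert np.allclose((A @ B).T, B.T @ A.T)

# (A^t)^{-1} = (A^{-1})^t
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)
print("both identities hold on this sample")
```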
|
3,843,352 |
<p>It’s called a Diophantine equation, and this particular one is sometimes known as the “sums of three cubes” problem.</p>
<blockquote>
<p>A Diophantine equation is a polynomial equation, usually in two or
more unknowns, such that only the integer solutions are sought or
studied (an integer solution is such that all the unknowns take
integer values).</p>
</blockquote>
<p>It seems easy for <span class="math-container">$x^3+y^3+z^3=8$</span>.
<span class="math-container">$x=1$</span>, <span class="math-container">$y=-1$</span> and <span class="math-container">$z=2.$</span></p>
<p>But what for higher values of <span class="math-container">$k$</span>?</p>
|
poetasis
| 546,655 |
<p>Using positive integers only for <span class="math-container">$x,y,z$</span>, there are no solutions for <span class="math-container">$k<3$</span>, but here are those for <span class="math-container">$3\le k \le 100$</span>. Solutions using non-positive integers follow afterwards.</p>
<pre><code> (x,y,z,k)
(1,1,1,3)
(1,1,2,10)
(1,2,2,17)
(2,2,2,24)
(1,1,3,29)
(1,2,3,36)
(2,2,3,43)
(1,3,3,55)
(2,3,3,62)
(1,1,4,66)
(1,2,4,73)
(2,2,4,80)
(3,3,3,81)
(1,3,4,92)
(2,3,4,99)
</code></pre>
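<p>Tables like the one above can be reproduced with a short brute-force search (a sketch; the function and variable names are my own):</p>

```python
def three_cubes_positive(kmax):
    """All (x, y, z, k) with 1 <= x <= y <= z and k = x^3 + y^3 + z^3 <= kmax."""
    zmax = round(kmax ** (1 / 3)) + 1   # no coordinate can exceed kmax^(1/3)
    sols = []
    for x in range(1, zmax + 1):
        for y in range(x, zmax + 1):
            for z in range(y, zmax + 1):
                k = x**3 + y**3 + z**3
                if k <= kmax:
                    sols.append((x, y, z, k))
    return sorted(sols, key=lambda t: t[3])

for s in three_cubes_positive(100):
    print(s)
```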
<p>It appears that at least one of x,y,z must be positive if k is positive.</p>
<pre><code> (x,y,z,k)
(-100,1,100,1)
(-99,1,99,1)
(-98,1,98,1)
(-97,1,97,1)
(-96,1,96,1)
(-95,1,95,1)
(-94,1,94,1)
(-93,1,93,1)
(-92,1,92,1)
(-91,1,91,1)
(-90,1,90,1)
(-89,1,89,1)
(-88,1,88,1)
(-87,1,87,1)
(-86,1,86,1)
(-85,1,85,1)
(-84,1,84,1)
(-83,1,83,1)
(-82,1,82,1)
(-81,1,81,1)
(-80,1,80,1)
(-79,1,79,1)
(-78,1,78,1)
(-77,1,77,1)
(-76,1,76,1)
(-75,1,75,1)
(-74,1,74,1)
(-73,1,73,1)
(-72,1,72,1)
(-71,1,71,1)
(-70,1,70,1)
(-69,1,69,1)
(-68,1,68,1)
(-67,1,67,1)
(-66,1,66,1)
(-65,1,65,1)
(-64,1,64,1)
(-63,1,63,1)
(-62,1,62,1)
(-61,1,61,1)
(-60,1,60,1)
(-59,1,59,1)
(-58,1,58,1)
(-57,1,57,1)
(-56,1,56,1)
(-55,1,55,1)
(-54,1,54,1)
(-53,1,53,1)
(-52,1,52,1)
(-51,1,51,1)
(-50,1,50,1)
(-49,1,49,1)
(-48,1,48,1)
(-47,1,47,1)
(-46,1,46,1)
(-45,1,45,1)
(-44,1,44,1)
(-43,1,43,1)
(-42,1,42,1)
(-41,1,41,1)
(-40,1,40,1)
(-39,1,39,1)
(-38,1,38,1)
(-37,1,37,1)
(-36,1,36,1)
(-35,1,35,1)
(-34,1,34,1)
(-33,1,33,1)
(-32,1,32,1)
(-31,1,31,1)
(-30,1,30,1)
(-29,1,29,1)
(-28,1,28,1)
(-27,1,27,1)
(-26,1,26,1)
(-25,1,25,1)
(-24,1,24,1)
(-23,1,23,1)
(-22,1,22,1)
(-21,1,21,1)
(-20,1,20,1)
(-19,1,19,1)
(-18,1,18,1)
(-17,1,17,1)
(-16,1,16,1)
(-15,1,15,1)
(-14,1,14,1)
(-13,1,13,1)
(-12,1,12,1)
(-12,9,10,1)
(-11,1,11,1)
(-10,1,10,1)
(-9,1,9,1)
(-8,-6,9,1)
(-8,1,8,1)
(-7,1,7,1)
(-6,1,6,1)
(-5,1,5,1)
(-4,1,4,1)
(-3,1,3,1)
(-2,1,2,1)
(-1,1,1,1)
(0,0,1,1)
(-47,-24,49,2)
(-6,-5,7,2)
(0,1,1,2)
(-5,4,4,3)
(1,1,1,3)
(-58,-43,65,6)
(-1,-1,2,6)
(-1,0,2,7)
(-100,2,100,8)
(-99,2,99,8)
(-98,2,98,8)
(-97,2,97,8)
(-96,2,96,8)
(-95,2,95,8)
(-94,2,94,8)
(-93,2,93,8)
(-92,2,92,8)
(-91,2,91,8)
(-90,2,90,8)
(-89,2,89,8)
(-89,41,86,8)
(-88,2,88,8)
(-87,2,87,8)
(-86,2,86,8)
(-85,2,85,8)
(-84,2,84,8)
(-83,2,83,8)
(-82,2,82,8)
(-81,2,81,8)
(-80,2,80,8)
(-79,2,79,8)
(-78,2,78,8)
(-77,2,77,8)
(-76,2,76,8)
(-75,2,75,8)
(-74,2,74,8)
(-73,2,73,8)
(-72,2,72,8)
(-71,2,71,8)
(-70,2,70,8)
(-69,2,69,8)
(-68,2,68,8)
(-67,2,67,8)
(-66,2,66,8)
(-65,2,65,8)
(-64,2,64,8)
(-63,2,63,8)
(-62,2,62,8)
(-61,2,61,8)
(-60,2,60,8)
(-59,2,59,8)
(-58,2,58,8)
(-57,2,57,8)
(-56,2,56,8)
(-55,2,55,8)
(-54,2,54,8)
(-53,2,53,8)
(-52,2,52,8)
(-51,2,51,8)
(-50,2,50,8)
(-49,2,49,8)
(-48,2,48,8)
(-47,2,47,8)
(-46,2,46,8)
(-45,2,45,8)
(-44,2,44,8)
(-43,2,43,8)
(-42,2,42,8)
(-41,2,41,8)
(-40,-17,41,8)
(-40,2,40,8)
(-39,2,39,8)
(-38,2,38,8)
(-37,2,37,8)
(-36,2,36,8)
(-35,2,35,8)
(-34,2,34,8)
(-34,15,33,8)
(-33,2,33,8)
(-32,2,32,8)
(-31,2,31,8)
(-30,2,30,8)
(-29,2,29,8)
(-28,2,28,8)
(-27,2,27,8)
(-26,2,26,8)
(-25,2,25,8)
(-24,2,24,8)
(-24,18,20,8)
(-23,2,23,8)
(-22,2,22,8)
(-21,2,21,8)
(-20,2,20,8)
(-19,2,19,8)
(-18,2,18,8)
(-17,2,17,8)
(-16,-12,18,8)
(-16,2,16,8)
(-16,9,15,8)
(-15,2,15,8)
(-14,2,14,8)
(-13,2,13,8)
(-12,2,12,8)
(-11,2,11,8)
(-10,2,10,8)
(-9,2,9,8)
(-8,2,8,8)
(-7,2,7,8)
(-6,2,6,8)
(-5,2,5,8)
(-4,2,4,8)
(-3,2,3,8)
(-2,2,2,8)
(-1,1,2,8)
(0,0,2,8)
(0,1,2,9)
(-3,-3,4,10)
(1,1,2,10)
(-2,-2,3,11)
(-11,7,10,12)
(-46,23,44,15)
(-1,2,2,15)
(-94,-48,98,16)
(-12,-10,14,16)
(0,2,2,16)
(-52,25,50,17)
(1,2,2,17)
(-2,-1,3,18)
(-95,47,91,19)
(-77,26,76,19)
(-16,-14,19,19)
(-2,0,3,19)
(-56,21,55,20)
(-2,1,3,20)
(-86,28,85,21)
(-14,-11,16,21)
(-10,8,8,24)
(2,2,2,24)
(-1,-1,3,25)
(-1,0,3,26)
(-100,3,100,27)
(-99,3,99,27)
(-98,3,98,27)
(-97,3,97,27)
(-96,3,96,27)
(-95,3,95,27)
(-94,3,94,27)
(-93,3,93,27)
(-92,3,92,27)
(-91,3,91,27)
(-90,3,90,27)
(-89,3,89,27)
(-88,3,88,27)
(-87,3,87,27)
(-86,3,86,27)
(-85,3,85,27)
(-84,3,84,27)
(-83,3,83,27)
(-82,3,82,27)
(-81,3,81,27)
(-80,3,80,27)
(-79,3,79,27)
(-78,3,78,27)
(-77,3,77,27)
(-76,3,76,27)
(-75,3,75,27)
(-74,3,74,27)
(-73,3,73,27)
(-72,3,72,27)
(-71,3,71,27)
(-70,3,70,27)
(-69,3,69,27)
(-68,3,68,27)
(-67,3,67,27)
(-66,3,66,27)
(-65,3,65,27)
(-64,3,64,27)
(-63,3,63,27)
(-62,3,62,27)
(-61,3,61,27)
(-60,3,60,27)
(-60,22,59,27)
(-59,3,59,27)
(-58,3,58,27)
(-57,3,57,27)
(-56,3,56,27)
(-55,3,55,27)
(-54,3,54,27)
(-53,3,53,27)
(-52,3,52,27)
(-51,3,51,27)
(-50,3,50,27)
(-49,3,49,27)
(-48,3,48,27)
(-47,3,47,27)
(-46,3,46,27)
(-45,3,45,27)
(-44,3,44,27)
(-43,3,43,27)
(-42,3,42,27)
(-41,3,41,27)
(-40,3,40,27)
(-39,3,39,27)
(-38,3,38,27)
(-37,-36,46,27)
(-37,3,37,27)
(-36,3,36,27)
(-36,27,30,27)
(-35,3,35,27)
(-34,3,34,27)
(-33,3,33,27)
(-32,3,32,27)
(-31,3,31,27)
(-30,3,30,27)
(-29,3,29,27)
(-28,3,28,27)
(-27,3,27,27)
(-26,3,26,27)
(-25,3,25,27)
(-24,-18,27,27)
(-24,3,24,27)
(-23,3,23,27)
(-22,3,22,27)
(-21,3,21,27)
(-20,3,20,27)
(-19,3,19,27)
(-18,-10,19,27)
(-18,3,18,27)
(-17,3,17,27)
(-16,3,16,27)
(-15,3,15,27)
(-14,3,14,27)
(-13,3,13,27)
(-12,3,12,27)
(-11,3,11,27)
(-10,3,10,27)
(-9,3,9,27)
(-8,3,8,27)
(-7,3,7,27)
(-6,3,6,27)
(-5,-4,6,27)
(-5,3,5,27)
(-4,3,4,27)
(-3,3,3,27)
(-2,2,3,27)
(-1,1,3,27)
(0,0,3,27)
(-59,31,56,28)
(-17,13,14,28)
(0,1,3,28)
(-20,13,18,29)
(-3,-2,4,29)
(1,1,3,29)
(-6,5,5,34)
(-4,-3,5,34)
(-1,2,3,34)
(-13,-8,14,35)
(0,2,3,35)
(-75,40,71,36)
(-3,-1,4,36)
(1,2,3,36)
(-56,37,50,37)
(-3,0,4,37)
(-27,16,25,38)
(-3,1,4,38)
(-52,20,51,43)
(-13,8,12,43)
(-7,-7,9,43)
(2,2,3,43)
(-7,-5,8,44)
(-3,2,4,45)
(-29,19,26,46)
(-2,3,3,46)
(-50,-50,63,47)
(-30,-14,31,47)
(-8,6,7,47)
(-26,-23,31,48)
(-2,-2,4,48)
(-4,-2,5,53)
(-1,3,3,53)
(-18,-15,21,54)
(-11,-7,12,54)
(0,3,3,54)
(-23,-23,29,55)
(-9,-6,10,55)
(-2,-1,4,55)
(1,3,3,55)
(-47,31,42,56)
(-21,-11,22,56)
(-2,0,4,56)
(-38,25,34,57)
(-2,1,4,57)
(-4,-1,5,60)
(-4,0,5,61)
(-43,22,41,62)
(-34,27,27,62)
(-4,1,5,62)
(-1,-1,4,62)
(2,3,3,62)
(-63,-37,67,63)
(-58,-38,63,63)
(-6,-4,7,63)
(-1,0,4,63)
(-100,4,100,64)
(-99,4,99,64)
(-98,4,98,64)
(-97,4,97,64)
(-96,4,96,64)
(-95,4,95,64)
(-94,4,94,64)
(-93,4,93,64)
(-92,4,92,64)
(-91,4,91,64)
(-90,4,90,64)
(-89,4,89,64)
(-88,4,88,64)
(-87,4,87,64)
(-86,4,86,64)
(-85,4,85,64)
(-84,4,84,64)
(-83,4,83,64)
(-82,4,82,64)
(-81,4,81,64)
(-80,-34,82,64)
(-80,4,80,64)
(-79,4,79,64)
(-78,4,78,64)
(-77,4,77,64)
(-76,4,76,64)
(-75,4,75,64)
(-74,4,74,64)
(-73,4,73,64)
(-72,4,72,64)
(-71,4,71,64)
(-70,4,70,64)
(-69,4,69,64)
(-68,4,68,64)
(-68,30,66,64)
(-67,4,67,64)
(-66,4,66,64)
(-65,4,65,64)
(-64,4,64,64)
(-63,4,63,64)
(-62,4,62,64)
(-61,4,61,64)
(-60,4,60,64)
(-59,4,59,64)
(-58,4,58,64)
(-57,4,57,64)
(-56,4,56,64)
(-55,4,55,64)
(-54,4,54,64)
(-53,4,53,64)
(-52,4,52,64)
(-51,4,51,64)
(-50,4,50,64)
(-49,4,49,64)
(-48,4,48,64)
(-48,36,40,64)
(-47,4,47,64)
(-46,4,46,64)
(-45,4,45,64)
(-44,4,44,64)
(-43,4,43,64)
(-42,4,42,64)
(-41,4,41,64)
(-40,4,40,64)
(-39,4,39,64)
(-38,4,38,64)
(-37,4,37,64)
(-36,4,36,64)
(-35,4,35,64)
(-34,4,34,64)
(-33,4,33,64)
(-32,-24,36,64)
(-32,4,32,64)
(-32,18,30,64)
(-31,4,31,64)
(-30,4,30,64)
(-29,4,29,64)
(-28,4,28,64)
(-27,4,27,64)
(-26,4,26,64)
(-25,4,25,64)
(-24,4,24,64)
(-23,4,23,64)
(-22,-17,25,64)
(-22,4,22,64)
(-21,4,21,64)
(-20,4,20,64)
(-19,4,19,64)
(-18,4,18,64)
(-17,4,17,64)
(-16,4,16,64)
(-15,4,15,64)
(-14,4,14,64)
(-13,4,13,64)
(-12,4,12,64)
(-11,4,11,64)
(-10,4,10,64)
(-9,4,9,64)
(-8,4,8,64)
(-7,4,7,64)
(-6,4,6,64)
(-5,-3,6,64)
(-5,4,5,64)
(-4,4,4,64)
(-3,3,4,64)
(-2,2,4,64)
(-1,1,4,64)
(0,0,4,64)
(0,1,4,65)
(1,1,4,66)
(-22,-19,26,69)
(-4,2,5,69)
(-64,23,63,70)
(-21,11,20,70)
(-33,-22,36,71)
(-24,12,23,71)
(-20,-16,23,71)
(-3,-3,5,71)
(-1,2,4,71)
(-27,-13,28,72)
(-10,7,9,72)
(0,2,4,72)
(-47,29,43,73)
(-24,-12,25,73)
(1,2,4,73)
(-55,26,53,78)
(-66,-49,74,79)
(-33,-19,35,79)
(-6,-6,8,80)
(2,2,4,80)
(-18,10,17,81)
(-15,12,12,81)
(3,3,3,81)
(-11,-11,14,82)
(-36,-32,43,83)
(-29,22,24,83)
(-23,-15,25,83)
(-5,-2,6,83)
(-2,3,4,83)
(-16,-9,17,88)
(-4,-4,6,88)
(-4,3,5,88)
(-7,6,6,89)
(-100,31,99,90)
(-73,-32,75,90)
(-51,-35,56,90)
(-48,19,47,90)
(-27,13,26,90)
(-9,-8,11,90)
(-5,-1,6,90)
(-3,-2,5,90)
(-1,3,4,90)
(-5,0,6,91)
(0,3,4,91)
(-8,-5,9,92)
(-5,1,6,92)
(1,3,4,92)
(-5,-5,7,93)
(-22,14,20,96)
(-22,17,18,97)
(-3,-1,5,97)
(-15,9,14,98)
(-3,0,5,98)
(-37,16,36,99)
(-5,2,6,99)
(-3,1,5,99)
(2,3,4,99)
(-6,-3,7,100)
</code></pre>
|
175,775 |
<p>To clarify the terms in the question above:</p>
<p>The symmetric group Sym($\Omega$) on a set $\Omega$ consists of all bijections from $\Omega$ to $\Omega$ under composition of functions. A generating set $X \subseteq \operatorname{Sym}(\Omega)$ is minimal if no proper subset of $X$ generates Sym($\Omega$).</p>
<p>This might be a difficult question, but perhaps the answer is known already?</p>
|
Igor Rivin
| 11,142 |
<p>The canonical paper on the subject (don't be fooled by the publication date, it had been around for years before then) is George Bergman's gem:</p>
<p><em>Bergman, George M.</em>, <a href="http://dx.doi.org/10.1112/S0024609305018308" rel="nofollow noreferrer"><strong>Generating infinite symmetric groups.</strong></a>, Bull. Lond. Math. Soc. 38, No. 3, 429-440 (2006). <a href="https://zbmath.org/?q=an:1103.20003" rel="nofollow noreferrer">ZBL1103.20003</a>.</p>
|
1,711,673 |
<p>Why are the inverses of 2-cycles of the symmetric permutation group $S_3$ the elements themselves rather than their reverses?</p>
<p>E.g. why is $(1,2)^{-1}=(1,2)$ and not $(2,1)$?</p>
|
C. Dubussy
| 310,801 |
<p>Why not simply extending continuously the function at $x=0$ ? Thanks to l'Hospital's rule, we have $$\lim_{x \to 0}\frac{1-(\cos(x))^3}{x^2}=\lim_{x \to 0}\frac{3\cos^2(x)\sin(x)}{2x}=\frac{3}{2}.$$</p>
|
1,711,673 |
<p>Why are the inverses of 2-cycles of the symmetric permutation group $S_3$ the elements themselves rather than their reverses?</p>
<p>E.g. why is $(1,2)^{-1}=(1,2)$ and not $(2,1)$?</p>
|
Claude Leibovici
| 82,404 |
<p>If you want to compute the value of the expression when $x$ is very close to $0$, just use Taylor series of $\cos(x)$ to get $$\frac{1-\cos^3(x)}{x^2}=\frac{3}{2}-\frac{7 x^2}{8}+\frac{61 x^4}{240}+O\left(x^5\right)$$ which shows the limit and how it is approched.</p>
<p>For example, if $x=10^{-8}$, the above formula would give $$\frac{35999999999999997900000000000000061}{24000000000000000000000000000000000}\approx 1.499999999999999912500000$$ which is the value you would obtain using illimited precision.</p>
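<p>The remark about "illimited precision" can be made concrete: in ordinary double-precision floating point, $\cos(10^{-8})$ rounds to exactly $1$, so the naive formula collapses through cancellation, while the truncated series stays accurate (a small illustration, assuming IEEE doubles):</p>

```python
import math

x = 1e-8
naive = (1 - math.cos(x) ** 3) / x**2          # catastrophic cancellation
series = 1.5 - 7 * x**2 / 8 + 61 * x**4 / 240  # Taylor expansion above

print("naive :", naive)    # typically 0.0 in double precision
print("series:", series)   # 1.5 to double precision, matching the limit
```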
|
4,274,044 |
<p>I am trying to prove the following result.</p>
<blockquote>
<p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be sets and <span class="math-container">$\sim$</span> an equivalence relation on <span class="math-container">$X$</span>. Let <span class="math-container">$f: X \to Y$</span> be a function which preserves <span class="math-container">$\sim$</span>, and let <span class="math-container">$\pi$</span> denote the projection <span class="math-container">$X \to X/\sim$</span>. Prove that there exists a unique function <span class="math-container">$\tilde{f}: X/\sim \to Y$</span> such that <span class="math-container">$f = \tilde{f} \circ \pi$</span>.</p>
</blockquote>
<p>From the wording of the problem, it seems that I need to prove both existence and uniqueness, though I don't know how to prove uniqueness because that would require giving some sort of "formula" for <span class="math-container">$\tilde{f}$</span>, but if I can give a formula for it, it must be unique. What the map is doing is:
<span class="math-container">$$
x \stackrel{\pi}{\to} [x] \stackrel{\tilde{f}}{\to} Y.
$$</span>
Some versions of this decomposition I've seen send <span class="math-container">$[x]$</span> to an element of <span class="math-container">$\mathrm{Im}(f)$</span> and then inject into <span class="math-container">$Y$</span>. I'm not sure if I need to include that because then I'd be adding an extra map. The <span class="math-container">$f$</span>, as defined here, is only the composition of two maps.</p>
<p>As for proving uniqueness, it seems to me that this follows from the fact that a function is surjective if and only if it admits a right inverse. Suppose that <span class="math-container">$\tilde{f}$</span> and <span class="math-container">$\tilde{f}'$</span> have the property that <span class="math-container">$f = \tilde{f} \circ \pi = \tilde{f}' \circ \pi$</span>. Cancelling on the right gives <span class="math-container">$\tilde{f} = \tilde{f}'$</span>.</p>
<p>What am I missing?</p>
<p>Another thing I'm having trouble understanding is the exact meaning of a projection map. The map sending <span class="math-container">$x \to [x]$</span> is surely a projection, but the definition I'd read is that <span class="math-container">$p$</span> is a projection if <span class="math-container">$p^2 = p$</span>. I can't even compose <span class="math-container">$\pi$</span> with itself because <span class="math-container">$X \neq X/\sim$</span>. Is there another, more general definition? Is it a shorthand for a surjective map?</p>
|
SV-97
| 960,107 |
<p>I think a lot of your pain points can be remedied by recognizing that we often times identify equivalence classes by representatives. This makes <span class="math-container">$\pi : X \to (X/\sim) \simeq X$</span> at which point you can apply the projection definition you know (up to isomorphism).</p>
<p>By choosing a set of representatives (in the worst case by invoking the axiom of choice) we have (at least one) map <span class="math-container">$$g : X / \sim \to X$$</span> satisfying <span class="math-container">$\pi \circ g = id_{X / \sim}$</span> - precisely the right inverse you described. Using this we may define <span class="math-container">$\bar{f} : X / \sim \to Y, \bar{f} := f \circ g$</span> and we can show quite easily that this does satisfy the needed equality. Uniqueness is indeed trivial since <span class="math-container">$f=\bar{f}\circ \pi \iff \bar{f} = f \circ g$</span>.</p>
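<p>The construction can even be played out computationally on a finite example (a toy sketch; the choices of $X$, $\sim$ and $f$ are mine): the classes are built from the relation, $g$ picks a representative of each class, and $\bar f := f\circ g$ then agrees with $f$ on all of $X$.</p>

```python
X = range(12)

def f(x):
    return (x % 3) ** 2          # f respects the relation x ~ y iff x == y (mod 3)

# the quotient X/~ as a set of frozensets (the equivalence classes)
classes = {frozenset(y for y in X if y % 3 == r) for r in range(3)}
pi = {x: next(c for c in classes if x in c) for x in X}   # projection x -> [x]

g = {c: min(c) for c in classes}          # a choice of representatives
fbar = {c: f(g[c]) for c in classes}      # fbar := f o g

# f = fbar o pi holds on all of X (fbar is well defined because f respects ~)
assert all(fbar[pi[x]] == f(x) for x in X)
print("f factors through the quotient")
```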
|
2,707,299 |
<p>Suppose that the conditional distribution of $Y$ given $X=x$ is Poisson with mean $E(Y|x)=x$, $Y|x\sim POI(x)$, and that $X\sim EXP(1)$</p>
<p>Find $V[Y]$</p>
<p>We know that</p>
<p>$$V[Y]=E_x[Var(Y|X)] + Var_x[E(Y|X)]$$</p>
<p>we also know that </p>
<p>$$V(Y|x)=E(Y^2|x)-[E(Y|x)]^2$$</p>
<p>I know we can find $V[Y]$ with those 2 formulas, but I am getting confused with so many expectations and variances around each other. Can anyone explain to me how to solve this?</p>
|
userNoOne
| 518,028 |
<p>We have been given that $Y|x$ is a Poisson variate with parameter $x$. We know that in the Poisson distribution, the mean and the variance are both equal to the parameter. Hence, the following will hold:</p>
<p>$E[Y|x]=V[Y|x]=x$.</p>
<p>Now, to obtain the variance of $Y$,</p>
<p>$V(Y)=E(V[Y|x])+V(E[Y|x]) = E(x)+V(x)$</p>
<p>Now, $X$ itself is an exponential variate with parameter $1$, so its expectation and variance are both $1$. Hence, the required value of the variance of $Y$, after plugging in the appropriate values, is:</p>
<p>$V(Y)=E(x)+V(x)=1+1=2$</p>
<p>Note: The mean and variance of $X \sim \mathrm{Exp}(\beta)$ are:</p>
<p>$E(X)=\frac{1}{\beta}$,
$V(X)=\frac{1}{\beta^2}$</p>
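<p>A quick Monte Carlo check of $V(Y)=2$ (an illustration only; the Poisson sampler below is the simple Knuth inversion method, since the standard library has none):</p>

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Knuth's method: count uniform draws until the running product drops below e^{-lam}."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(0)
samples = []
for _ in range(200_000):
    x = rng.expovariate(1.0)         # X ~ Exp(1)
    samples.append(poisson(x, rng))  # Y | X = x ~ Poisson(x)

print("mean ~", statistics.fmean(samples))      # should be close to E[Y] = 1
print("var  ~", statistics.pvariance(samples))  # should be close to V[Y] = 2
```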
|
507,975 |
<p>I'm new here and unsure if this is the right way to format a problem, but here goes nothing. I'm currently trying to solve an inequality proof to show that $n^3 > 2n+1$ for all $n \geq 2$.</p>
<p>I proved the first step $(P(2))$, which comes out to $8>5$, which is true.</p>
<p>In the next step we assume that for some $k \geq 2$, $k^3 > 2k+1$.
Then we consider the quotient $\frac{f(k+1)}{f(k)} > \frac{g(k+1)}{g(k)}$. </p>
<p>I so far have simplified it to the following:</p>
<p>$$\begin{align*}
\frac{(k+1)^3}{k^3} &> \frac{2(k+1)+1}{2k+1}\\
\frac{k^3+3k^2+3k+1}{k^3} &> \frac{2k+3}{2k+1}\\
1 + \frac{3}{k} + \frac{3}{k^2} + \frac{1}{k^3} &> \frac{2k+3}{2k+1}
\end{align*}$$</p>
<p>I don't know how to simplify the right side anymore (my algebra is terrible). I know that I have to simplify that inequality and multiply it by our previous assumption. </p>
<p>I should end up with some variant of $(k+1)^3 > 2(k+1)+1$. (This is $P(k+1)$). I just need help simplifying. Thanks!</p>
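<p>As an aside, the claim itself is easy to spot-check numerically before attempting the induction (a check over a finite range, not a proof):</p>

```python
# Spot check of n^3 > 2n + 1 for n >= 2 over a finite range
assert all(n**3 > 2 * n + 1 for n in range(2, 10_000))
print("n^3 > 2n + 1 holds for 2 <= n < 10000")
```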
|
Arash
| 92,185 |
<p>First observe that if $u=\cos x$ (and note that $\sin x\to 1$ as $x\to\pi/2$, so replacing $\sec x\tan x=\frac{\sin x}{\cos^2 x}$ by $\frac{1}{\cos^2 x}$ does not change the limit): </p>
<p>$$
\lim_{x\to \pi/2} (\sec x \tan x)^{\cos x}=\lim_{x\to \pi/2} \left(\frac{1}{\cos^2x}\right)^ {\cos x}=\lim_{u\to 0} \left(\frac{1}{u^2}\right)^ {u}
$$
Now just assume that $L=\lim_{u\to 0} \left(\frac{1}{u}\right)^ {2u}$. You get:
$$
\ln L=\lim_{u\to 0} 2u\ln\left(\frac{1}{u}\right)=0 \implies L=1
$$</p>
<hr>
<p><strong>Remark:</strong> The last limit can be proved also without Hopital rule! To see this assume $y=\ln\left(\frac{1}{u}\right)$, then the limit turns out to be:
$$
\lim_{y\to\infty}2ye^{-y}
$$
To prove that the last limit is zero, we use the following inequality:</p>
<p>$$
0<2ye^{-y}\leq \frac{2y}{1+y+y^2/2}
$$
The limit of RHS goes to zero as $y\to\infty$.</p>
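<p>One can also watch the limit numerically (a floating-point sanity check):</p>

```python
# (1/u^2)^u should approach 1 as u -> 0+
for u in (1e-1, 1e-3, 1e-5, 1e-7):
    print(u, (1 / u**2) ** u)
```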
|
1,678,922 |
<p>Let B be a commutative unital real algebra and C its complexification,
viewed as the cartesian product of B with itself.
If M is a maximal ideal in B, is the cartesian product of M with itself
a maximal ideal in C? </p>
|
Eric Wofsey
| 86,856 |
<p>Not necessarily. Try taking $B=\mathbb{C}$ and $M=0$.</p>
|
1,133,581 |
<p>$\lim\limits_{(x,y) \to (0,0)} \frac{\log(1-x^2y^2)}{x^2y^2}$</p>
<p>in polar coordinates:</p>
<p>$\lim\limits_{r \to 0} \frac{\log(1-r^2\cos^2t \sin^2t)}{r^2\cos^2t \sin^2t}$</p>
<p>The first limit exists if the second is independent of $t$. But if $t$ is $k \pi/2$, that fraction is not defined. So I could argue that the limit does not exist. But that is wrong: the limit is $-1$. Where is the mistake?</p>
<p><strong>EDIT</strong></p>
<p>I know I can use another way to reach the result. My question is strictly related to this method (polar coordinates). Since the second limit does not exist for some $t$ I should think that the multivar limit does not exist. Perhaps are there some $t$ that I shouldn't care about?</p>
|
Qidi
| 212,981 |
<p>You can take the open cover $\{(\frac{1}{n}, 1-\frac{1}{n})\mid n>2\}$. It has no finite subcover: if there were one, take the largest $n$ occurring in it; then the point $\frac{1}{2n}$ is not covered.</p>
|
1,133,581 |
<p>$\lim\limits_{(x,y) \to (0,0)} \frac{\log(1-x^2y^2)}{x^2y^2}$</p>
<p>in polar coordinates:</p>
<p>$\lim\limits_{r \to 0} \frac{\log(1-r^2\cos^2t \sin^2t)}{r^2\cos^2t \sin^2t}$</p>
<p>The first limit exists if the second is independent of $t$. But if $t$ is $k \pi/2$, that fraction is not defined. So I could argue that the limit does not exist. But that is wrong: the limit is $-1$. Where is the mistake?</p>
<p><strong>EDIT</strong></p>
<p>I know I can use another way to reach the result. My question is strictly related to this method (polar coordinates). Since the second limit does not exist for some $t$ I should think that the multivar limit does not exist. Perhaps are there some $t$ that I shouldn't care about?</p>
|
graydad
| 166,967 |
<p>Since you have the real analysis tag, I'm assuming you are using the standard topology on $\Bbb{R}$, so open sets look like intervals of the form $(a,b)$. An open cover of $(0,1)$ could look something like $$\bigcup_{i=1}^\infty \left(0,1-\frac{1}{i} \right)$$ But you need to verify this is an open cover. That is, show $$(0,1) \subseteq \bigcup_{i=1}^\infty \left(0,1-\frac{1}{i} \right)$$ and further show that if $K \subset \Bbb{N}$ is any finite subset of $\Bbb{N}$ that $$(0,1) \not\subseteq \bigcup_{i\in K} \left(0,1-\frac{1}{i} \right)$$ which demonstrates that the open cover has no finite subcover.</p>
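<p>The "take the largest index" step can be checked mechanically for any particular finite subfamily (a toy illustration using exact rational arithmetic; the index set $K$ is an arbitrary example of mine):</p>

```python
from fractions import Fraction

# A finite subfamily {(0, 1 - 1/i) : i in K} of the cover of (0, 1)
K = [3, 7, 10]
m = max(K)
p = 1 - Fraction(1, 2 * m)   # a point of (0,1) beyond every interval in the subfamily

assert 0 < p < 1                                         # p really lies in (0,1)
assert all(not (0 < p < 1 - Fraction(1, i)) for i in K)  # no interval contains p
print(p, "is missed by the finite subfamily")
```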
|
42,016 |
<p>I am looking for algorithms on how to find rational points on an <a href="http://en.wikipedia.org/wiki/Elliptic_curve">elliptic curve</a> $$y^2 = x^3 + a x + b$$ where $a$ and $b$ are integers. Any sort of ideas on how to proceed are welcome. For example, how to find solutions in which the numerators and denominators are bounded, or how to find solutions with a randomized algorithm. Anything better than brute force is interesting.</p>
<p>Background: a student worked on the <a href="http://en.wikipedia.org/wiki/Mordell-Weil_theorem">Mordell-Weil theorem</a> and illustrated it on some simple examples of elliptic curves. She looked for rational points by brute force (I really mean <em>brute</em>, by enumerating all possibilities and trying them). As a continuation of the project she is now interested in smarter algorithms for finding rational points. A cursory search on Math Reviews did not find much.</p>
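<p>For reference, the brute force the student used can be organized as a bounded-height search: enumerate $x=p/q$ with $|p|$ and $q$ up to a bound, and test whether $x^3+ax+b$ is a rational square (which, in lowest terms, happens exactly when numerator and denominator are perfect squares). A sketch — exponentially far from the smarter methods being asked about:</p>

```python
from fractions import Fraction
from math import gcd, isqrt

def rational_points(a, b, hmax):
    """Naive search for rational points on y^2 = x^3 + a*x + b with
    x = p/q, |p| <= hmax, 1 <= q <= hmax (illustration only)."""
    points = []
    for q in range(1, hmax + 1):
        for p in range(-hmax, hmax + 1):
            if gcd(p, q) != 1:          # visit each rational x exactly once
                continue
            x = Fraction(p, q)
            rhs = x**3 + a * x + b
            if rhs < 0:
                continue
            n, d = rhs.numerator, rhs.denominator   # already in lowest terms
            rn, rd = isqrt(n), isqrt(d)
            if rn * rn == n and rd * rd == d:       # rhs is a rational square
                y = Fraction(rn, rd)
                points.append((x, y))
                if y != 0:
                    points.append((x, -y))
    return points

print(rational_points(-2, 0, 3))   # points of small height on y^2 = x^3 - 2x
```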
|
Charles Matthews
| 6,153 |
<p>It is misleading to say "algorithm", really. There are probabilistic algorithms, and then hard cases (evidence of a point of infinite order that is hard to find). See for example <a href="http://www.jstor.org/pss/2152939" rel="nofollow">http://www.jstor.org/pss/2152939</a> . I think of Andrew Bremner as one of the experts on the highly numerical side, and you should add his name to John Cremona's.</p>
|
47,181 |
<p>I'll ask you to consider a situation wherein one has a series of edges for a graph, <span class="math-container">$(e_1, e_2, ..., e_N) \in E$</span>, each with a specifiable length <span class="math-container">$(l_1, l_2, ..., l_N) \in L$</span>, and the goal is to ensure that the connected graph has a unique topology in 3-space. More specifically, I'm interested in ensuring that some graph with the connectivity of a polytope can only be drawn as the skeleton of that particular polytope - that there should be no crossed edges or knots possible for the specified edge lengths. </p>
<p>To provide a physical example: </p>
<p>I use a group of rods to represent the edges of the desired graph (with pencils or the like) and color/symbol-encode their ends to represent vertex-assignments. I want to choose rod lengths in such a way that if I hand them to a naive-constructor (i.e. a 3-year old or a computer-controlled robot), and tell him/her/it to connect the ends of the rods together that have the same color or symbol, after waiting an arbitrarily long time there will only be a unique geometry satisfying the connectivity constraints of the graph I originally had in mind. </p>
<p>Is there a known computational complexity for this problem? Is there even a solution in the general case, or in the case where we apply the restriction that the specified polytope is convex?</p>
<p>I appreciate any feedback!</p>
<p>EDIT 1: The edges of the graph must be straight lines in 3-space, they cannot be bent to accommodate a particular edge length. </p>
<p>EDIT 2: Does the problem become easier if one assumes some physical diameter for the edges?</p>
|
Roland Bacher
| 4,556 |
<p>A matrix realizing Colin de Verdière's $\mu$-invariant yields an answer
if you accept that you get lengths at the end, not at the beginning.</p>
<p>A planar graph $G$ arising as the $1-$skeleton of a polytope has always $\mu$-invariant
equal to $3$. There thus exists a combinatorial Schrödinger operator on
$G$ whose second largest eigenvalue has multiplicity $3$ (and the eigenspace
satisfies a stability condition). Choose a basis of $3$ eigenvectors for such an operator. Interpret these eigenvectors as $x,y$ and $z$ coordinates of points in $\mathbb R^3$, indexed by vertices of
$G$. This yields a set of points in $\mathbb R^3$ which are extremal vertices of a polytope
realizing $G$ (and the realization is of course the obvious one, vertices are
already labeled by vertices of $G$).</p>
<p>Moreover, all convex polytopes realizing $G$ can be constructed in this way.
The "moduli space" of such polytopes is thus (up to the choice of a basis) in
bijection with Schrödinger operators realizing the $\mu$ invariant.</p>
|
1,920,776 |
<p>As I know there is a theorem, which says the following: </p>
<blockquote>
<p>The prime ideals of $B[y]$, where $B$ is a PID, are $(0), (f)$, for irreducible $f \in B[y]$, and all maximal ideals. Moreover, each maximal ideal is of the form $m = (p,q)$, where $p$ is an irreducible element in $B$ and $q$ is an irreducible element in $(B/(p))[y]$.</p>
</blockquote>
<p>I managed to prove that any prime non-principal ideal $m \subset B[y]$ is maximal. This gives the first part of the theorem. Now I want to prove that $m = (p,q)$. I can show that $m \cap B \neq (0)$. Thus $m$ contains a non-zero element $a$ from $B$. Since $B$ is a PID and $m$ is prime, $m$ must contain an irreducible element from $B$ (we can take an irreducible factor $p$ of $a$). </p>
<p>As I understand it, now I need to show that $m$ contains an element $q$ (see the theorem above). Then it follows that $(p,q) \subseteq m$. And after that I have to prove that $m \subseteq (p,q)$. How do I make these last steps?</p>
<p>Yes, I have seen similar questions with $B = k[x]$ or $\mathbb{Z}$, but I still cannot prove the last part.</p>
|
Ben Grossmann
| 81,360 |
<p>I'll stick to 8-bit quantities, but the same applies in general.</p>
<p>The key to understanding two's complement is to note that we have a set of finitely many (in particular, <span class="math-container">$2^8$</span>) values in which there is a sensible notion of addition by <span class="math-container">$1$</span> that allows us to cycle through all of the numbers. In particular, we have a system of <a href="https://en.wikipedia.org/wiki/Modular_arithmetic" rel="noreferrer">modular arithmetic</a>, in this case modulo <span class="math-container">$2^8 = 256$</span>.</p>
<hr />
<p>Intuitively, arithmetic modulo <span class="math-container">$n$</span> is a system of addition (and subtraction) in which overflow and underflow cause you to "cycle back" to a value from <span class="math-container">$0$</span> to <span class="math-container">$n-1$</span>. A classic example is the usual "clock arithmetic", which is to say arithmetic modulo <span class="math-container">$12$</span>.</p>
<p>For example, if it is <span class="math-container">$11\!:\!00$</span> now, then three hours later it will be <span class="math-container">$2\!:\!00$</span>, since
<span class="math-container">$$
11 + 3 = 14 \equiv 2 \pmod {12}
$$</span>
and similarly, if it is <span class="math-container">$1\!:\!00$</span>, then <span class="math-container">$4$</span> hours ago it was <span class="math-container">$9$</span> since
<span class="math-container">$$
1 - 4 = -3 \equiv 9 \pmod{12}
$$</span>
Notice that subtracting <span class="math-container">$4$</span> hours on the clock is the same as <em>adding</em> <span class="math-container">$12 - 4 = 8$</span> hours. In particular, we could have computed the above as follows:
<span class="math-container">$$
1 - 4 \equiv 1 + 8 = 9 \pmod{12}
$$</span>
That is: when performing arithmetic modulo <span class="math-container">$n$</span>, we can <em>subtract <span class="math-container">$x$</span> by adding <span class="math-container">$n-x$</span></em>.</p>
<hr />
<p>Now, let's apply this idea modulo <span class="math-container">$256$</span>. How do you subtract <span class="math-container">$3$</span>? Well, by the above logic, this is the same as adding <span class="math-container">$256 - 3 = 253$</span>. In binary notation, we could say that subtracting <span class="math-container">$00000011$</span> is the same as adding
<span class="math-container">$$
1\overbrace{00000000}^8 - 00000011 =
1 + \overbrace{11111111}^8 - 00000011 = 11111101
$$</span>
and there's your two's complement: the calculation <span class="math-container">$(11111111 - 00000011)$</span> "flips the bits" of <span class="math-container">$00000011$</span>, and we add <span class="math-container">$1$</span> to this result.</p>
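The flip-the-bits-and-add-one recipe above can be checked directly; here is a short sketch (the width of $8$ bits is just a parameter):

```python
# Two's complement of x in n bits: flip the bits, then add 1 (arithmetic mod 2^n).
def twos_complement(x, bits=8):
    mask = (1 << bits) - 1          # 0b11111111 for 8 bits
    return ((x ^ mask) + 1) & mask  # flip all bits, add 1, stay within n bits

c = twos_complement(0b00000011)     # complement of 3
print(format(c, '08b'))             # 11111101, i.e. 253 = 256 - 3
print((5 + c) & 0xFF)               # 2: adding the complement subtracts 3 mod 256
```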
<hr />
<p><strong>Note 1:</strong> In the context of arithmetic with <a href="https://en.wikipedia.org/wiki/Signed_number_representations" rel="noreferrer">signed integers</a>, we don't think of <span class="math-container">$11111101$</span> as being <span class="math-container">$253$</span> in our <span class="math-container">$8$</span>-bit system, we instead consider it to represent the number <span class="math-container">$-3$</span>. Rather than having our numbers go from <span class="math-container">$0$</span> to <span class="math-container">$255$</span> around a clock, we have them go from <span class="math-container">$-128$</span> to <span class="math-container">$127$</span>, where <span class="math-container">$-x$</span> occupies the same spot that <span class="math-container">$n - x$</span> would occupy for values of <span class="math-container">$x$</span> from <span class="math-container">$1$</span> to <span class="math-container">$128$</span>.</p>
<p>Succinctly, this amounts to saying that a number with 8 binary digits is deemed negative if and only if its leading digit (its "most significant" digit) is a <span class="math-container">$1$</span>. For this reason, the leading digit is referred to as the "sign bit" in this context.</p>
<p><strong>Note 2:</strong> An interesting infinite analog to the two's complement system of subtraction is that of infinite series <a href="https://en.wikipedia.org/wiki/P-adic_number" rel="noreferrer">2-adic numbers</a>. In particular, we can say something strange like
<span class="math-container">$$
\dots 11111 = -1
$$</span>
since <span class="math-container">$\dots 11111$</span> is the "infinite two's complement" of <span class="math-container">$1$</span>.</p>
|
2,954,991 |
<p>How do I prove that <span class="math-container">$\forall p, b, c \in \mathbb{Z}^+, (\text{Prime}(p) \wedge p^2 + b^2 = c^2) \Rightarrow p^2 = c+b$</span> ?
I know this much;
<span class="math-container">$$p^2 + b^2 = c^2\\
p^2= c^2 - b^2\\
p^2=(c-b)(c+b)
$$</span>
I know the next part has something to do with the fact that every number that is not prime can be expressed as a product of primes that is unique(up to the orders of the factor?). And using that, somehow, <span class="math-container">$c-b=1$</span> and, therefore, <span class="math-container">$p^2 = c+b$</span>. If someone could please offer me a rigorous explanation for this it would be appreciated.</p>
|
rtybase
| 22,583 |
<p><strong>Part 1.</strong> It's easy to see that <span class="math-container">$p > c-b$</span>, otherwise, if we assume <span class="math-container">$p \leq c-b \Rightarrow p+2b\leq c+b$</span> and <span class="math-container">$p(p+2b)\leq(c-b)(c+b)=p^2$</span>, which is possible only when <span class="math-container">$b=0$</span>.</p>
<p><strong>Part 2.</strong> From <span class="math-container">$p^2=(c-b)(c+b) \Rightarrow p \mid (c-b)(c+b)$</span>. Applying <a href="https://en.wikipedia.org/wiki/Euclid%27s_lemma" rel="nofollow noreferrer">Euclid's lemma</a> we have <span class="math-container">$p \mid (c-b)$</span> or <span class="math-container">$p \mid (c+b)$</span>. But, due to <strong>part 1</strong>, <span class="math-container">$p \mid (c-b)$</span> doesn't make sense for <span class="math-container">$c-b>0$</span> (and <span class="math-container">$c-b=0$</span> would lead to <span class="math-container">$p=0$</span>), thus <span class="math-container">$p \mid (c+b)$</span>.</p>
<p><strong>Part 3.</strong> Applying <a href="https://en.wikipedia.org/wiki/Euclid%27s_lemma" rel="nofollow noreferrer">Euclid's lemma</a> and <strong>Part 2</strong> again, <span class="math-container">$p^2 \mid (c+b)$</span> or <span class="math-container">$\exists q\in\mathbb{Z^+}: c+b=p^2q$</span> or <span class="math-container">$p^2=(c-b)(c+b)=(c-b)p^2q \Rightarrow 1=q(c-b)$</span> or <span class="math-container">$c-b \mid 1$</span> which is possible only when <span class="math-container">$c-b=1$</span> and <span class="math-container">$q=1$</span>, thus <span class="math-container">$p^2=c+b$</span>.</p>
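As a sanity check (my own sketch, not part of the argument above), one can enumerate small triples with a prime leg and confirm the conclusion:

```python
import math

# For every Pythagorean triple p^2 + b^2 = c^2 with prime leg p, check p^2 == c + b.
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

found = 0
for p in range(2, 60):
    if not is_prime(p):
        continue
    for b in range(1, 2000):
        c = math.isqrt(p * p + b * b)
        if c * c == p * p + b * b:
            assert p * p == c + b, (p, b, c)
            found += 1
print(found)  # 16: each odd prime p < 60 gives exactly b = (p^2-1)/2, c = (p^2+1)/2
```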
|
1,624,888 |
<h2>Question</h2>
<p>In the following expression can $\epsilon$ be a matrix?</p>
<p>$$ (H + \epsilon H_1) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) = (E |m\rangle + \epsilon E|m_1\rangle + \epsilon^2 E_2 |m_2\rangle + \dots) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) $$</p>
<h2>Background</h2>
<p>So in quantum mechanics we generally have a solution $|m\rangle$ to a Hamiltonian:</p>
<p>$$ H | m\rangle = E |m\rangle $$</p>
<p>Now using perturbation theory:</p>
<p>$$ (H + \epsilon H_1) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) = (E |m\rangle + \epsilon E|m_1\rangle + \epsilon^2 E_2 |m_2\rangle + \dots) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) $$</p>
<p>I was curious and substituted $\epsilon$ as a matrix:</p>
<p>$$ \epsilon =
\left( \begin{array}{cc}
0 & 0 \\
1 & 0 \end{array} \right) $$</p>
<p>where $\epsilon$ now, is the nilpotent matrix, we get:</p>
<p>$$ \left( \begin{array}{cc}
H | m \rangle & 0 \\
H_1 |m_1 \rangle + H | m\rangle & H |m_1 \rangle \end{array} \right) = \left( \begin{array}{cc}
E | m \rangle & 0 \\
E_1 |m_1 \rangle + E | m\rangle & E |m_1 \rangle \end{array} \right)$$</p>
<p>Which is what we'd expect if we compared powers of $\epsilon$'s. All this made me wonder if $\epsilon$ could be a matrix? Say something like $| m_k\rangle \langle m_k |$ ? Say we chose $\epsilon \to \hat I \epsilon$</p>
<p>then there exists a radius of convergence. What is the radius of convergence in a general case of any matrix?</p>
|
BenSmith
| 238,248 |
<p>$\epsilon$ is not generally used as a matrix. When matrices are needed, as you will notice from your example, epsilon is usually set beside the matrices of concern as a scalar which indicates that one is infinitesimally close to the origin (or zero) in whichever space you seek to be perturbing. </p>
|
202,043 |
<p>A <a href="http://en.wikipedia.org/wiki/Square-free_word">square-free word</a>
is a string of symbols (a "word") that avoids the pattern $XX$, where $X$ is any
consecutive sequence of symbols in the string.
For alphabets of two symbols, the longest square-free word has length $3$.
But for alphabets of three symbols, there are infinitely long square-free words,
due to Thue.</p>
<p>Define a <em>three-halves-free word</em> as one that avoids the pattern $XYX$, where $|X|=|Y|$.
Another view is that these words avoid $Z(\frac{1}{2}Z)$, i.e., $Z=XY$ with $X$ and $Y$ the
same length. So $Z$ has even length, and we want to avoid immediately
following with the first half of $Z$.</p>
<p>For an alphabet of two symbols, $0$ & $1$, here are the three-halves-free words
of length $5$:
\begin{eqnarray*}
&(0, 1, 1, 0, 0)\\
&(0, 0, 1, 1, 0)\\
&(1, 1, 0, 0, 1)\\
&(1, 0, 0, 1, 1)
\end{eqnarray*}
All the $28$ other length-$5$ words fail.
E.g., $(0,1,0,1,0)$ fails because $(0,1,0)$ matches with $XYX=010$.
But there are no three-halves-free words of length $\ge 6$.
For example, extending $(0, 1, 1, 0, 0)$ with $0$ gives $(0, 1, 1, 0, 0, 0)$, matching
$XYX=000$, and extending with $1$ gives $(0, 1, 1, 0, 0, 1)$, matching with $X=01$.</p>
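The small cases above are easy to confirm by brute force (a sketch; words are encoded as tuples of the characters 0 and 1):

```python
from itertools import product

def three_halves_free(w):
    # reject any factor XYX with |X| = |Y| = k >= 1
    n = len(w)
    return not any(w[i:i+k] == w[i+2*k:i+3*k]
                   for k in range(1, n // 3 + 1)
                   for i in range(n - 3*k + 1))

words5 = [w for w in product("01", repeat=5) if three_halves_free(w)]
words6 = [w for w in product("01", repeat=6) if three_halves_free(w)]
print(len(words5), len(words6))  # 4 0
```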
<blockquote>
<p><strong><em>Q</em></strong>. Are there infinitely long words in three-letter alphabets that are three-halves-free?</p>
</blockquote>
<p>If a word has a square $ZZ$ with $|Z|$ even, then it has a three-halves pattern.
If a word is square-free, it may not be three-halves-free.
For example, both $(1,0,1)$ and $(0,1,2,1,0,1)$ are square-free.
And the square-free infinite word <a href="http://oeis.org/A029883">A029883</a>
$$
(1, 0, -1, 1, -1, 0, 1, 0, -1, 0, 1, -1, 1, 0, -1, \ldots )
$$
is not three-halves-free: e.g., $(-1,1,-1)$ and $(-1,0,1,0,-1,0)$
are three-halves patterns.</p>
|
Bjørn Kjos-Hanssen
| 4,600 |
<p><em>Also not an answer, but may be useful.</em></p>
<p>A somewhat similar kind of word is mentioned at the end of section 1.5 of <a href="http://www.math.ucla.edu/~pak/hidden/Courses/Salomaa-Ch1.pdf">Salomaa: <em>Jewels of Formal Language Theory</em></a>:</p>
<p>There is an infinite word over a 3-letter alphabet such that every "sandwich" $XYX$ (with $X$ nonempty) in the word is fairly meaty, viz., $|Y|\ge\frac13|X|$. Also, $1/3$ is sharp here as a level of meatiness: starting at length 39 every word must contain a sandwich no more meaty than that (which incidentally is anecdotal evidence that we have to look at fairly long words before drawing any conclusions in this area).</p>
<p>So while you're asking whether there is an infinite <strong>imbalanced sandwich</strong> word ("balanced" meaning that the meat is just as thick as each bun), this shows there is an infinite <strong>meaty sandwich</strong> word.</p>
|
3,299,072 |
<p>I want to prove that
<span class="math-container">$$\frac{(x-y)(x-z)}{x^2}+\frac{(y-x)(y-z)}{y^2}+\frac{(z-x)(z-y)}{z^2}\geq 0$$</span>
for positive numbers <span class="math-container">$x,y,z$</span>.
I don't know how to even begin. I must say I'm not 100% certain the inequality always holds.</p>
<p>I tried the sort of factoring involved in proving schur's inequality, but it doesn't seem to work here. I also tried to distribute the denominators to obtain terms of form (1-y/x)(1-z/x) and then maybe substituting x/y=a, y/z=b, z/x=a etc </p>
|
Mahdi Aghaee
| 384,279 |
<p>Notice, that
<span class="math-container">$$f(x,\,y,\,z)=\frac{(x-y)(x-z)}{x^2}+\frac{(y-x)(y-z)}{y^2}+\frac{(z-x)(z-y)}{z^2}= \frac{z^2y^2(x-y)(x-z)+x^2z^2(y-x)(y-z)+x^2y^2(z-x)(z-y)}{x^2y^2z^2}$$</span>
Consider two case</p>
<ol>
<li>If <span class="math-container">$x=y=z$</span>, then it is trivial that <span class="math-container">$f(x,\,y,\,z)=0$</span>.</li>
<li>If <span class="math-container">$x\geq y\geq z$</span> (without loss of generality, since <span class="math-container">$f$</span> is symmetric in <span class="math-container">$x,y,z$</span>), then <span class="math-container">$z^2y^2(x-y)(x-z) \geq 0$</span> and <span class="math-container">$x^2y^2(z-x)(z-y)\geq 0$</span>, while <span class="math-container">$x^2z^2(y-x)(y-z)\leq 0$</span>. But <span class="math-container">$|x^2y^2(z-x)(z-y)|\geq|x^2z^2(y-x)(y-z)|$</span>, since <span class="math-container">$y^2(x-z)\geq z^2(x-y)$</span>. Therefore the sum is non-negative.</li>
</ol>
|
2,263,431 |
<p>Let $$v=\sqrt{a^2\cos^2(x)+b^2\sin^2(x)}+\sqrt{b^2\cos^2(x)+a^2\sin^2(x)}$$</p>
<p>Then find difference between maximum and minimum of $v^2$.</p>
<p>I understand both of them are distance of a point on ellipse from origin, but how do we find maximum and minimum?</p>
<p>I tried guessing, and got maximum $v$ when $x=45^\circ$ and minimum when $x=0$, but how do we justify this?</p>
|
lab bhattacharjee
| 33,337 |
<p>Let $$p=\sqrt{a^2\cos^2(x)+b^2\sin^2(x)},q=\sqrt{b^2\cos^2(x)+a^2\sin^2(x)}$$</p>
<p>Now $p^2+q^2=a^2+b^2$</p>
<p>and $$2(p^2+q^2)-(p+q)^2=(p-q)^2\ge0\iff(p+q)^2\le2(p^2+q^2)=?$$</p>
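A numeric sketch of my own (with $a,b$ chosen arbitrarily), confirming that the maximum $2(a^2+b^2)$ of $v^2$ is attained at $x=\pi/4$ and the minimum $(a+b)^2$ at $x=0$, so the difference of the extremes is $(a-b)^2$:

```python
import math

def v2(x, a, b):
    # v^2 = (p + q)^2 with p, q as defined above
    p = math.sqrt(a*a*math.cos(x)**2 + b*b*math.sin(x)**2)
    q = math.sqrt(b*b*math.cos(x)**2 + a*a*math.sin(x)**2)
    return (p + q)**2

a, b = 3.0, 1.0
vals = [v2(i * math.pi / 20000, a, b) for i in range(20001)]  # grid over [0, pi]
print(max(vals), 2*(a*a + b*b))   # maximum 2(a^2+b^2) = 20, hit at x = pi/4
print(min(vals), (a + b)**2)      # minimum (a+b)^2 = 16, hit at x = 0
# difference of extremes: 2(a^2+b^2) - (a+b)^2 = (a-b)^2 = 4
```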
|
535,948 |
<p>How do I prove that $(1+\frac{1}{2})^{n} \ge 1 + \frac{n}{2}$ for every $n \ge 1$</p>
<p>My base case is $n=1$ <br/>
Inductive hypothesis: the claim holds for $n=k$ <br/>
Inductive step: show it for $n=k+1$<br/></p>
<p>$(\frac{3}{2})^{k} \times \frac{3}{2} \ge (1 + \frac{k+1}{2})$</p>
<p>I'm not sure how to proceed.</p>
|
user35603
| 84,245 |
<p>Hint: Use binomial formula: $(a+b)^n=...$ and take first two terms.</p>
|
16,158 |
<p>I have a Manipulate control, the main element of which is a single slider. Moving the <em>one</em> slider triggers calls to <em>two</em> functions - one which is evaluated very quickly, the other more slowly. I'd like to see 'live' update ( i.e. ContinuousAction->True ) for the fast one, but would only like to update the slow one when the mouse is released. Is there a simple way to achieve this?</p>
<p>Thanks in advance!</p>
|
Stefan
| 2,448 |
<p>What you could do as well to indicate progress whilst your slow function is running:</p>
<pre><code>Manipulate[ControlActive["calculating...", Pause[.5]; x], {x, 0, 1} ]
</code></pre>
<p>I did put here Pause[] as a replacement.</p>
<p>A more elaborate example would be the color curve of a complex phase (although this is no problem for Mathematica), but it shows the idea:</p>
<pre><code>Manipulate[
 ControlActive["calculating...",
Plot[Abs[Zeta[1/2 + I a*t]], {t, -20, 20},
ColorFunction -> Function[{t}, Hue[Rescale[Arg[Zeta[1/2 + I t]], {-Pi, Pi}]]],
ColorFunctionScaling -> False, Filling -> Axis]], {a, 1, 2}]
</code></pre>
|
2,889,482 |
<blockquote>
<p><strong>1.28</strong>
In Section 1.6, we introduced the idea of entropy $h(x)$ as the information gained on observing the value of a random variable $x$ having distribution $p(x)$.
We saw that, for independent variables $x$ and $y$ for which $p(x,y) = p(x) p(y)$, the entropy functions are additive, so that $h(x,y) = h(x) + h(y)$.
In this exercise, we derive the relation between $h$ and $p$ in the form of a function $h(p)$.
First show that $h(p^2) = 2 h(p)$, and hence by induction that $h(p^n) = n h(p)$ where $n$ is a positive integer.
Hence show that $h(p^{n/m}) = (n/m) h(p)$ where $m$ is also a positive integer.
This implies that $h(p^x) = x h(p)$ where $x$ is a positive rational number, and hence by continuity when it is a positive real number.
Finally, show that this implies $h(p)$ must take the form $h(p) \propto \ln p$.</p>
<p>(Original image <a href="https://i.stack.imgur.com/Hj5U2.png" rel="nofollow noreferrer">here</a>.)</p>
</blockquote>
<p>I am having a few problems with this exercise, I will try to list them and perhaps you can help on some of them:</p>
<ol>
<li>Essentially, I don't understand what kind of function is $h$, because the first two lines of the problem definition make me think that its input is a random variable $x$ and its ouput a real number $h(x)$. But next, the problem uses the notation $h(x,y)$, implying that now $h$ is a function of 2 random variables $x$ and $y$, yielding a real number $h(x,y)$ as output. So what would be the correct or formal way to define $h$?</li>
<li>In connection with the first item, now in the 5th line it reads $h(p)$... so if switching between 1 and 2 random variables as inputs wasn't enough, now the input is a probability density? $h$ must be a very robust function for sure... ok I guess that so far the question is the same as in item 1, i.e., what is $h$?</li>
<li>The first step of the problem definition is to show $h(p^2) = 2 \, h(p)$ - ignoring the issues I described previously - I can't see how this is achieved by using those two hypothesis about independent variables. A have a feeling that the idea is to do something like (warning: the following is nonsense): $h(p^2) = h(p,p) = h(p) + h(p) = 2\, h(p)$ the first equality being wrong because I magically split the $p^2$ into two input variables, the second one being also wrong because I apply a hypothesis given for <em>independent random variables</em> on <em>probability densities of dependent random variables</em> and the last one is ok (!!!). So, what would be a nicer way to fulfill this first step?</li>
<li>Finally, is it really the logarithm's family the only solutions that match these requirements? I wonder, for example, what would happen if the argument function was used (like arg$(z)$ in complex analysis) since it satisfies the same properties as the logarithm regarding the product and power of its inputs.</li>
</ol>
<p>Help in any of these items will be appreciated!</p>
<p>In case you are interested, there are many online sources of the book, e.g. <a href="http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf" rel="nofollow noreferrer">here</a>, the topic is covered in page 48 of the book (68 of the pdf) and the problem definition - picture uploaded - is in page 64 of the book (84 of the pdf).</p>
<p>EDIT: I've found <a href="https://en.wikipedia.org/wiki/Self-information" rel="nofollow noreferrer">this</a> article on Wikipedia, in which my first two items are answered, via a more clear notation for the functions and variables involved in the problem. Still looking for 3 and 4...</p>
|
masa53
| 680,298 |
<p>We can write <span class="math-container">$h(x) = F(p(x))$</span> because <span class="math-container">$h(x)$</span> is a function of <span class="math-container">$p(x)$</span>. Note that <span class="math-container">$h(x,y) = F(p(x,y))$</span>. </p>
<p><span class="math-container">$h(x,x) = F(p(x,x))=F(p(x) * p(x))=F(p^2)$</span> and <span class="math-container">$h(x,x) = h(x) + h(x) = F(p(x)) + F(p(x)) = 2*F(p(x))$</span>.</p>
<p>Hence, we get <span class="math-container">$F(p^2) = 2*F(p(x))$</span>. </p>
|
4,064,618 |
<p><span class="math-container">$10$</span> individuals including <span class="math-container">$x$</span> and <span class="math-container">$y$</span> seat at random at a round table. All possible orders are equally likely. Let <span class="math-container">$W$</span> be the event that there are exactly two people seated between <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. What is the probability of this event?</p>
|
michalis vazaios
| 540,891 |
<p>Let's say <span class="math-container">$x$</span> sits somewhere (it doesn't matter where). There are <span class="math-container">$9$</span> seats left. There are <span class="math-container">$2$</span> of these seats where <span class="math-container">$y$</span> can seat so that there are two people between them. So the answer is <span class="math-container">$2/9$</span>. In the image bellow these would be the red seats:</p>
<p><a href="https://i.stack.imgur.com/1bXuh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1bXuh.png" alt="enter image description here" /></a></p>
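A Monte Carlo sanity check of the $2/9$ (a sketch; persons $0$ and $1$ play the roles of $x$ and $y$):

```python
import random

random.seed(1)

def trial():
    seats = random.sample(range(10), 10)   # random seating: seats[pos] = person at position pos
    i, j = seats.index(0), seats.index(1)  # positions of x and y around the table
    d = min(abs(i - j), 10 - abs(i - j))   # circular seat distance
    return d == 3                          # exactly two people between them

n = 200_000
est = sum(trial() for _ in range(n)) / n
print(est)  # close to 2/9 ~ 0.2222
```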
|
2,094,698 |
<p>I am trying to figure out how to create a formula to solve this problem I have.</p>
<p>OK, the problem</p>
<p>I need to work out how much money to give people based on a percentage of a total value, but the problem is that the percentage varies from person to person.</p>
<p>Ie,</p>
<p>Total amount = $1,000,000</p>
<p>Total People = 100</p>
<p>90 people get 100% of $1,000,000 / 100 people</p>
<p>5 people get 80% of $1,000,000 / 100 people</p>
<p>3 people get 60% of $1,000,000 / 100 people</p>
<p>2 people get 40% of $1,000,000 / 100 people</p>
<p>Now the issue I am having is that not even one cent can be left over.
Do I just work out what each person would get based on $1,000,000 / 100 people and then just divide the remainder?</p>
<p>ie,</p>
<p>$1,000,000 / 100 = 10,000</p>
<p>10,000 * 100% = $10,000 </p>
<p>10,000 * 80% = $8,000</p>
<p>10,000 * 60% = $6,000</p>
<p>10,000 * 40% = $4,000</p>
<hr>
<p>And the total of that</p>
<p>90 x 10,000 = $900,000</p>
<p>5 x 8,000 = $40,000</p>
<p>3 x 6,000 = $18,000</p>
<p>2 x 4,000 = $8,000</p>
<p>Total of $966,000, remaining 34,000 dollars to be divided by 100 and added to each persons figure.</p>
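The arithmetic above as a quick sketch in code (exact integers; cents aren't modeled because everything here divides evenly):

```python
total_pot = 1_000_000
full_share = total_pot // 100                    # 10,000 per full share
groups = [(90, 100), (5, 80), (3, 60), (2, 40)]  # (people, percent of a full share)

paid = sum(n * pct * full_share // 100 for n, pct in groups)
remainder = total_pot - paid
print(paid, remainder)   # 966000 34000
print(remainder // 100)  # 340 extra per person if the leftover is split evenly
```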
<p>Or is there a better way to go about this? </p>
<p>EDIT: Another way I thought of doing this would be to have anyone who gets less than a 100% share get the desired percentage of a full share. </p>
<p>i.e., say a full share worked out to be
10,340 dollars; have someone who gets an 80% share get 8,272 dollars, and so on.</p>
<p>I just need to know how to get this worked out in a simple equation.</p>
<p>Thanks for the help. And could someone edit the tags? I have no idea what to tag this with.</p>
|
John Hughes
| 114,036 |
<p>The problem is that, as you observe, the numbers don't add up. </p>
<p>This can be restated in a simpler form: "I have to distribute all of \$100 to two people. One gets 40%; the other gets 45%. How much should I give to each?" Well, if I give one \$40 and the other \$45, I've split it 40%/45%, but haven't given away the last \$15. If I give either one of them part of the \$15, they won't be getting \$40 and \$45, so I won't be following the rules. </p>
<p>So the answer to your problem is "There's no distribution of money that satisfies the rules given." Because there's only one way for each person to get the amounts specified (which you've written out!) and that allocation does not use up all the money. </p>
<p>It really is as if you were told "Give all this money to Joe. And make sure he gets exactly 83% of it." It's not your fault that you cannot carry out this instruction; the instruction itself is internally contradictory. </p>
|
2,309,900 |
<p>EDIT: How many ways are there to distribute $0$s and $1$s in a vector of length $N$ such that the number of zeros and the number of ones are the same? (Suppose $N$ is even.) I'm guessing it's $\binom{N}{2}$, but I'm not sure.</p>
|
Henrik supports the community
| 193,386 |
<p>After your edit (Zubin's answer, before his edit, was correct for the question before your edit).</p>
<p>You have to select the $\frac{N}{2}$ places that should be $1$ (or $0$) and then the rest will just have to be $0$ (or $1$), you can do that in:
$$
\binom{N}{N/2}
$$
ways.</p>
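For a concrete value such as $N=10$, a quick enumeration check (a sketch):

```python
from math import comb
from itertools import product

N = 10
# count binary vectors of length N with exactly N/2 ones
count = sum(1 for w in product((0, 1), repeat=N) if sum(w) == N // 2)
print(count, comb(N, N // 2))  # 252 252
```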
|
38,367 |
<p>Notation: Let $M$ be a smooth, closed manifold, $S$ any submanifold of $M$, $Diff(M)$ the group of diffeomorphisms of $M$ and $Imb(S, M)$ the group of smooth imbeddings of $S$ into $M$.</p>
<p>A classical result of R. Palais from the 1960 paper <em>Local triviality of the restriction map for embeddings</em> says that the map $Diff(M)$ $\rightarrow$ $Imb(S, M)$ given by restriction is a fibration.</p>
<p>I feel like I've heard during numerous teas that there are various refinements and generalizations of this due to J. Cerf and (possibly) others.</p>
<p>(1) Can anyone summarize what else is known in this direction beyond the theorem of Palais? </p>
<p>(2) Is there a way to see Palais' result easily? [added: from the responses it sounds like the original paper is still a great way to see this result. But see the answers of Randal-Williams and Palais for an alternate route.]</p>
|
Ryan Budney
| 1,465 |
<p>It's not clear what you mean by "various refinements and generalizations". Cerf has a huge paper published by IHES "Topologie de certains espaces de plongements" which goes into many related details. In a way it's more of a ground-up collection of basic information on the topology of function spaces.</p>
<p>Regarding your 2nd question, if instead of demanding a fibre bundle you ask for a Serre fibration, the proof is relatively simple. It's just the isotopy extension theorem with parameters, and the proof is pretty much verbatim Hirsch's proof of isotopy extension in his "Differential Topology" text plus the observation that solutions depend smoothly on the initial conditions. </p>
<p>Regarding your 2nd question, yes of course. Palais's paper is quite nice. If you haven't had a look at it, you might as well try -- it's only 7 pages long. If you want to discover the proof on your own I'd start with the case $S$ a finite set. Then move up to $S$ a positive-dimensional submanifold. You'll want to be comfortable with things like the proof of the tubular neighbourhood theorem, the concept of injectivity radius, etc. </p>
|
550,757 |
<p>I don't see how $(5, 1 - 8i)$ would generate the element $5$. I think that $5$ and $1 - 8i$ would have to have a common factor but I can't find any that divide both.</p>
|
rfauffar
| 12,158 |
<p>I think I may understand the question. We want to see that $(5,1-8i)$ is principal. We see that $5=(1+2i)(1-2i)$ and $1-8i=(1+2i)(-3-2i)$ (note that $-3-2i$ is a unit multiple of $3+2i$), and so $(5,1-8i)=(1+2i)$, since $1+2i$ is a greatest common divisor of $5$ and $1-8i$.</p>
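These identities are easy to sanity-check with Python's built-in complex numbers (a sketch; recall $-3-2i=-(3+2i)$, so either associate generates the same ideal):

```python
# verify the Gaussian-integer identities with Python's complex type
assert (1 + 2j) * (1 - 2j) == 5
assert (1 + 2j) * (-3 - 2j) == 1 - 8j    # so (1+2i) divides 1-8i
assert (1 + 2j) * (3 + 2j) == -(1 - 8j)  # the associate -1+8i

# norm of 1+2i is 1^2 + 2^2 = 5, a prime in Z, so 1+2i is irreducible in Z[i]
z = 1 + 2j
assert z.real**2 + z.imag**2 == 5.0
print("factorizations check out")
```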
|
4,227,776 |
<p>Let <span class="math-container">$R$</span> be a commutative unital ring, and let <span class="math-container">$p$</span> be a prime number.</p>
<p><strong>Question 1</strong>: If <span class="math-container">$R$</span> is a <em>local</em> ring (with unique maximal ideal <span class="math-container">$m$</span>) of characteristic <span class="math-container">$0$</span>, <span class="math-container">$p \in m$</span>, and <span class="math-container">$R/p$</span> is finite, does it follow that <span class="math-container">$R$</span> is noetherian?</p>
<p><strong>Question 2</strong>: In case the answer to 1 is "no", does it hold if we additionally assume <span class="math-container">$R$</span> to be a domain?</p>
<p>Motivation: This came out of discussions to the question <a href="https://math.stackexchange.com/q/4225280/96384">$\mathcal{O}_{\mathbb{C}_p}/ p \mathcal{O}_{\mathbb{C}_p}$ infinite</a>. I have come to suspect that the answer is no to both, but I have not suceeded in cooking up counterexamples. I'm afraid it's been too long since I studied all the local ring regularity properties, and just hope there is no easy proof with Nakayama or something that I'm missing.</p>
|
Eric Wofsey
| 86,856 |
<p>Let <span class="math-container">$S\subset\mathbb{Q}[x]$</span> be the subring consisting of polynomials with integer constant term. Note that <span class="math-container">$(p)$</span> is a maximal ideal of <span class="math-container">$S$</span> with <span class="math-container">$S/(p)\cong\mathbb{F}_p$</span>. Let <span class="math-container">$R$</span> be the localization <span class="math-container">$S_{(p)}$</span>. It is clear that <span class="math-container">$R$</span> is a local domain of characteristic <span class="math-container">$0$</span> with maximal ideal generated by <span class="math-container">$p$</span> and <span class="math-container">$R/(p)\cong \mathbb{F}_p$</span> is finite.</p>
<p>I claim, however, that <span class="math-container">$R$</span> is not Noetherian. Indeed, consider the chain of ideals <span class="math-container">$(x)\subset(x/p)\subset(x/p^2)\subset\dots$</span>. These inclusions are all strict since <span class="math-container">$1/p\not\in R$</span> so <span class="math-container">$x/p^{n+1}$</span> is not a multiple of <span class="math-container">$x/p^n$</span> for any <span class="math-container">$n$</span>.</p>
<hr />
<p>Here is a very general principle along these lines. Let <span class="math-container">$R$</span> be any non-Artinian ring. Then I claim <span class="math-container">$R$</span> has an elementary extension (over the first-order language of rings) which is not Noetherian. Indeed, add a sequence of constant symbols <span class="math-container">$x_0,x_1,x_2,\dots$</span> to the language and axioms saying <span class="math-container">$x_n$</span> is not in the ideal generated by <span class="math-container">$x_0,\dots,x_{n-1}$</span> for each <span class="math-container">$n$</span>. These axioms are finitely satisfiable in <span class="math-container">$R$</span>, since <span class="math-container">$R$</span> is not Artinian so there are arbitrarily long finite chains of ideals in <span class="math-container">$R$</span>. So by compactness, there is an elementary extension of <span class="math-container">$R$</span> that satisfies all of them, and thus is not Noetherian.</p>
<p>In particular, applying this with (say) <span class="math-container">$R=\mathbb{Z}_{(p)}$</span> this gives an elementary extension of <span class="math-container">$\mathbb{Z}_{(p)}$</span> that is not Noetherian. The properties of being local, characteristic <span class="math-container">$0$</span>, <span class="math-container">$p$</span> being in the maximal ideal, a domain, and <span class="math-container">$R/(p)$</span> having <span class="math-container">$p$</span> elements are all first-order, and thus are still true of this elementary extension.</p>
|
286,852 |
<p>I'm currently trying to understand time continuous Fourier transformations for my Signal Analysis class (Electrical Engineering major). In my Textbook we have a set of "useful FT-pairs", one of which being
$x(t)<=FT=>X(\omega)$</p>
<p>$\sin(\omega_0 t)<=>i\pi[\delta(\omega+\omega_0)-\delta(\omega-\omega_0)]$</p>
<p>I'm trying to derive this on my own, but I keep running into the same dead end. I replace the sine with $\frac 1{2i}(e^{i\omega_0 t}-e^{-i\omega_0 t})$ and drop that into $\int f(t) e^{-i\omega t}\,dt$. Multiplying the functions together by adding the exponents and then finding the anti-derivative isn't hard, but I don't understand how you get from that to $\delta(x)$. I played around on Wolfram Alpha and found out that apparently $\int_{-\infty}^{\infty} e^{it(\omega + \omega_0)}\,dt$ doesn't converge except at $\omega = -\omega_0$, where it diverges to infinity. This seems close to the behavior of the delta function, but I don't think it's enough for me to replace them?</p>
<p>Could <em>really</em> use some help understanding this.</p>
|
dexter04
| 48,371 |
<p>You should understand that the Dirac delta $\delta(\omega)$ is not a conventional function. It is a generalized function or a distribution. The mathematical theory behind it is really involved, and the man who explained everything (Laurent Schwartz) got a Fields Medal for it.</p>
<p>The best way to go around it is to find out the inverse Fourier Transform of $i\pi[\delta(\omega+\omega_0)-\delta(\omega-\omega_0)]$. It comes out to be $\sin(\omega_0t)$ without any problems.</p>
<p>This gives you the mathematical closure over an intuitive understanding.</p>
<p>The wiki page on Dirac Delta mentions the same integral that you are asking, but the IFT approach looks more natural to me.</p>
|
374,694 |
<p>I believe I'm looking for a function $f(x) : \mathbb N \mapsto \mathbb N^2$ and its inverse $f^{-1}(x) : \mathbb N^2 \mapsto \mathbb N$, a known mapping that can take any positive integer and map it to a unique 2D integer point (and the inverse mapping as well). </p>
<p>Like the title says, the conceptual idea is to take an infinitely long piece of rope and, starting at the origin (0,0), map the rope to the 2D plane in such a way as to fill the space efficiently, while allowing a simple algorithm to go from (d) -> (x,y) and from (x,y) -> (d).</p>
<p>The mapping somewhat resembles a Hilbert space-filling curve; however, here the space being filled is infinite.</p>
<p>I apologize in advance, it's been years since I've done high level mathematics (I'm CS now) and the correct jargon escapes me.</p>
|
Henrik Finsberg
| 67,299 |
<p>Cantor's pairing function would probably do the job: <a href="http://en.wikipedia.org/wiki/Cantor_pairing_function#Cantor_pairing_function" rel="noreferrer">http://en.wikipedia.org/wiki/Cantor_pairing_function#Cantor_pairing_function</a></p>
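<p>(An illustration I'm adding, not part of the original answer: a minimal Python sketch of the Cantor pairing function and its inverse. The names <code>pair</code> and <code>unpair</code> are my own.)</p>

```python
import math

def pair(x: int, y: int) -> int:
    # Cantor pairing: a bijection from N x N to N, walking the diagonals
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple:
    # Invert by first recovering the diagonal index w = x + y
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return (w - y, y)
```

<p>Round-tripping <code>unpair(pair(x, y)) == (x, y)</code> holds for all non-negative integers, which is exactly the (d) -> (x,y) and (x,y) -> (d) behavior asked for.</p>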
|
4,220,751 |
<p>In the example in the following slide, we follow the highlighted formula. With regard to the highlight, I'm confused about why the number is greater than <strong>or equal</strong> to <span class="math-container">$2^{n-1}$</span>, while it only needs to be less than <span class="math-container">$2^n$</span> (not less than <strong>or equal to</strong> <span class="math-container">$2^n$</span>)?
<a href="https://i.stack.imgur.com/YZ9He.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YZ9He.png" alt="enter image description here" /></a></p>
|
José Carlos Santos
| 446,262 |
<p>It is easy to show by induction that<span class="math-container">$$(\forall i\in\Bbb N):A^i=\begin{bmatrix}1&i&\frac{i(i+1)}2\\0&1&i\\0&0&1\end{bmatrix}.$$</span>So,<span class="math-container">$$M=\begin{bmatrix}20&\sum_{i=1}^{20}i&\frac12\left(\sum_{i=1}^{20}i^2+\sum_{i=1}^{20}i\right)\\0&20&\sum_{i=1}^{20}i\\0&0&20\end{bmatrix}=\begin{bmatrix}20&210&1\,540\\0&20&210\\0&0&20\end{bmatrix}.$$</span>Can you take it from here?</p>
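<p>(A quick numerical check I'm adding, not part of the answer. The closed form for <span class="math-container">$A^i$</span> implies that <span class="math-container">$A$</span> is the upper unitriangular matrix of ones, which is assumed below.)</p>

```python
def matmul(X, Y):
    # 3x3 integer matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(X, i):
    R = [[1 if r == c else 0 for c in range(3)] for r in range(3)]
    for _ in range(i):
        R = matmul(R, X)
    return R

A = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]

def closed_form(i):
    # the formula proved by induction in the answer
    return [[1, i, i * (i + 1) // 2], [0, 1, i], [0, 0, 1]]

# M = A + A^2 + ... + A^20
M = [[sum(mat_pow(A, i)[r][c] for i in range(1, 21)) for c in range(3)]
     for r in range(3)]
```
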
|
2,253,676 |
<p>$f(x)$ continuous in $[0,\infty)$. Both $\int^\infty_0f^4(x)dx, \int^\infty_0|f(x)|dx$ converge. </p>
<p>I need to prove that $\int^\infty_0f^2(x)dx$ converges.</p>
<p>Knowing that $\int^\infty_0|f(x)|dx$ converges, I can tell that $\int^\infty_0f(x)dx$ converges.</p>
<p>Other than that, I don't know anything. I tried to maybe integrate by parts but I got to nowhere</p>
<p>$\int^\infty_0f^4(x)dx = \int^\infty_0f^2(x)|f(x)|\cdot |f(x)| dx$.. </p>
|
TBTD
| 175,165 |
<p>Here is a proof. Let
$$
A \triangleq \{x\in[0,\infty) : |f(x)|\leq 1\}, \ \text{and} \ A^c \triangleq \{x\in[0,\infty) : |f(x)|>1\}.
$$</p>
<p>Write,
$$
\int_0^{\infty}f^2(x) dx = \underbrace{\int_Af^2(x) dx}_{\triangleq M} + \underbrace{\int_{A^c}f^2 (x) dx}_{\triangleq N}.
$$
Note that $M$ is bounded, since
$$
\int_A f^2(x) dx \leq \int_A |f(x)| dx \leq \int_{[0,\infty)}|f(x)|dx<\infty.
$$
For $N$, we'll use Cauchy-Schwarz inequality:
\begin{align}
\left(\int_{[0,\infty)}f^{4}(x)dx\right)\left(\int_{[0,\infty)}|f(x)|dx\right) & \geq \left(\int_{[0,\infty)}|f(x)|^{\frac{5}{2}}dx\right)^2 \\ & \geq \left(\int_{A^c}|f(x)|^{\frac{5}{2}}dx\right)^2 \\ &\geq \left(\int_{A^c}|f(x)|^2 dx\right)^2
\end{align}
hence, $N$ is also bounded. Therefore, we are done.</p>
<p><strong>Edit</strong></p>
<p>To address Open Ball's comment below, let $f \in L^p(X) \cap L^q(X)$ on some measure space $(X,\mathcal{F},\mu)$. Then note that,
$$
|f|^{\lambda p + (1-\lambda)q} \leq \lambda|f|^p + (1-\lambda)|f|^q,
$$
using the convexity of the map $x \mapsto a^x$ ($a>0$). With this, one can obtain weight $\lambda$ to express any $r \in [p,q]$ as $\lambda p + (1-\lambda)q$. Taking the integral of the inequality above with respect to underlying measure $\mu$, and keeping in mind the monotonicity of integral, we get
$$
\int |f|^r d\mu \leq \lambda \int |f|^p d\mu + (1-\lambda) \int |f|^q d\mu \implies ||f||_{L^r(X)}^r \leq \lambda ||f||_{L^p(X)}^p + (1-\lambda)||f||_{L^q(X)}^q.
$$</p>
<p>Hence, if $f \in L^p(X) \cap L^q(X)$, then for every $p\leq r \leq q$, $f \in L^r(X)$. </p>
|
2,253,676 |
<p>$f(x)$ continuous in $[0,\infty)$. Both $\int^\infty_0f^4(x)dx, \int^\infty_0|f(x)|dx$ converge. </p>
<p>I need to prove that $\int^\infty_0f^2(x)dx$ converges.</p>
<p>Knowing that $\int^\infty_0|f(x)|dx$ converges, I can tell that $\int^\infty_0f(x)dx$ converges.</p>
<p>Other than that, I don't know anything. I tried to maybe integrate by parts but I got to nowhere</p>
<p>$\int^\infty_0f^4(x)dx = \int^\infty_0f^2(x)|f(x)|\cdot |f(x)| dx$.. </p>
|
Community
| -1 |
<p><strong>Disclaimer</strong>: this is an overkill.</p>
<p>Here's a general fact: if $(X, \Sigma, \mu)$ is a measure space, we have for $1\le p<q \le \infty$, $L^p(X) \cap L^q(X) \subset L^r(X)$ for all $p \le r \le q$.</p>
<p>Here, $f \in L^1([0,\infty)) \cap L^4([0,\infty))$, so it is in $L^r([0,\infty))$ for each $r \in [1,4]$.</p>
|
3,497,919 |
<p><span class="math-container">$$\sum_{k=0}^{\Big\lfloor \frac{(n-1)}{2} \Big\rfloor} (-1)^k {n+1 \choose k} {2n-2k-1 \choose n} = \frac{n(n+1)}{2} $$</span></p>
<p>So I feel like <span class="math-container">$(-1)^k$</span> is almost designed for the inclusion-exclusion principle. And the left-hand side looks like some sort of pairing, so I am interested in some combinatorics proof like below-linked question. But using a generating function is always helpful. </p>
<p>[EDIT] now I am probably equally, if not more interested in a generating function solution now that I see below answer that completely makes sense to me, but with some issues in signs..</p>
<p><a href="https://math.stackexchange.com/questions/897948/evaluation-of-a-sum-of-1k-n-choose-k-2n-2k-choose-n1">Evaluation of a sum of $(-1)^{k} {n \choose k} {2n-2k \choose n+1}$</a></p>
|
aryan bansal
| 698,119 |
<p>Coefficient of <span class="math-container">$x^k$</span> in <span class="math-container">$(1-x)^{n+1}$</span> is <span class="math-container">$(-1)^k{n+1 \choose k}$</span> </p>
<hr>
<p>The coefficient of <span class="math-container">$x^{\frac{n-2k-1}{2}}$</span> in <span class="math-container">$(1- \sqrt{x})^{-(n+1)}$</span> is <span class="math-container">${{2n-2k-1} \choose n}$</span>.</p>
<hr>
<p>Multiplying the previous two series, what we want is the coefficient of <span class="math-container">$x^{\frac{n-1}{2}}$</span> in <span class="math-container">$(1 +\sqrt{x})^{n+1}$</span>, which is nothing but <span class="math-container">${{n+1} \choose 2}$</span>.</p>
<hr>
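<p>(A brute-force numerical check of the identity, added for illustration; it is not part of the generating-function argument.)</p>

```python
from math import comb

def lhs(n):
    # sum_{k=0}^{floor((n-1)/2)} (-1)^k C(n+1, k) C(2n-2k-1, n)
    return sum((-1) ** k * comb(n + 1, k) * comb(2 * n - 2 * k - 1, n)
               for k in range((n - 1) // 2 + 1))
```
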
|
2,447,677 |
<p>Consider the parametric equations given by \begin{align*} x(t)&=\sin{t}-t,\\ y(t) & = 1-\cos{t}.\end{align*}</p>
<p>I want to write these parametric equations in Cartesian form. </p>
<p>In order to eliminate the sine and cosine terms I think I probably need to consider some combination of $x(t),y(t), x(t)^2$ and $y(t)^2$ but I can't see exactly how to do this. </p>
|
Guy Fsone
| 385,707 |
<p>$$ y(t) = 1-\cos{t}\implies t= \arccos(1-y) \implies x= \sin(\arccos(1-y)) -\arccos(1-y)$$
that is </p>
<p>$$x= \sqrt{1-(1-y)^2} -\arccos(1-y)$$ (valid on the branch $0\le t\le\pi$, where $\sin t\ge 0$)</p>
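<p>(A numerical sanity check I'm adding, not part of the answer: the Cartesian form matches the parametrization on the branch $0\le t\le\pi$.)</p>

```python
import math

def x_of_t(t):
    return math.sin(t) - t

def y_of_t(t):
    return 1 - math.cos(t)

def x_of_y(y):
    # valid on the branch 0 <= t <= pi, where sin(t) >= 0
    c = min(1.0, max(-1.0, 1.0 - y))  # clamp tiny floating-point overshoot
    return math.sqrt(1.0 - c * c) - math.acos(c)
```
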
|
805,596 |
<p>Evaluate $$\lim_{x\to\infty}\dfrac{5^{x+1}+7^{x+2}}{5^x + 7^{x+1}}$$</p>
<p>I'm getting part of the way there, but not the exact result.
I got $$\dfrac{5\cdot\dfrac{5^x}{7^x} +49}{\dfrac{5^x}{7^x} + 7}.$$
I know the result is $7$, but I cannot figure out the remaining steps.</p>
|
lab bhattacharjee
| 33,337 |
<p>$$\frac{5^{x+1}+7^{x+2}}{5^x+7^{x+1}}=\frac{5\left(\frac57\right)^x+7^2}{\left(\frac57\right)^x+7}$$</p>
<p>Now $\displaystyle\lim_{n\to\infty}r^n=0$ if $|r|<1$</p>
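<p>(An added illustration, not part of the answer: evaluating the rewritten expression numerically shows the convergence to $7$ as the $\left(\frac57\right)^x$ term dies off.)</p>

```python
def ratio(x):
    # the rewritten form: (5 r + 49) / (r + 7) with r = (5/7)^x
    r = (5 / 7) ** x
    return (5 * r + 49) / (r + 7)
```
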
|
3,301,387 |
<p>I don't understand what it is about complex numbers that allows this to be true.</p>
<p>I mean, if I take the sixth root of both sides I end up with $z=z+1$. Why is this a bad first step when dealing with complex numbers?</p>
|
Geoffrey Trang
| 684,071 |
<p>Even in <span class="math-container">$\mathbb{R}$</span>, there is still a solution, namely <span class="math-container">$z = -\frac{1}{2}$</span>. It is then clear that <span class="math-container">$(-\frac{1}{2} + 1)^6 = (\frac{1}{2})^6 = (-\frac{1}{2})^6$</span>. In fact, for any 2 real numbers <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and any positive integer <span class="math-container">$n$</span>, <span class="math-container">$x^n = y^n$</span> if and only if either <span class="math-container">$x = y$</span>, or <span class="math-container">$x = -y$</span> and <span class="math-container">$n$</span> is even.</p>
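<p>(A numerical illustration I'm adding, not part of the answer above: in $\mathbb{C}$ the equation $(z+1)^6 = z^6$ has five solutions $z = 1/(\zeta - 1)$, one for each sixth root of unity $\zeta \ne 1$, and all of them lie on the line $\operatorname{Re} z = -\frac{1}{2}$.)</p>

```python
import cmath

# sixth roots of unity other than 1
zetas = [cmath.exp(2j * cmath.pi * k / 6) for k in range(1, 6)]
# (z+1)/z = zeta  (z = 0 is not a solution)  =>  z = 1/(zeta - 1)
solutions = [1 / (z - 1) for z in zetas]
```
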
|
961,603 |
<p>Show that every positive integer $N$ less than or equal to $n$ factorial, is the sum of at most $n$ distinct positive integers, each of which divides $n!$.</p>
|
abiessu
| 86,846 |
<p>Step 1:</p>
<p>Show that the statement holds for $n=1$. For this value of $n$ we have $1!=1$, and thus every positive integer less than or equal to $1!$ is the set $\{1\}$. Since $n=1$ we must find a sum of at most one positive integer which divides $1!$ and adds up to each of the numbers in our given set, and indeed we have</p>
<p>$$\sum_{i=1}^11=1$$</p>
<p>This proves the case for $n=1$.</p>
<p>The second step is to assume that the proposition is true for $n=k\ge 1$. We now adopt this assumption, namely that every positive integer less than or equal to $n!=k!\ge 1!$ is the sum of at most $n=k$ distinct positive integers, each of which divides $n!=k!$. </p>
<p>The last step is to prove that this assumption leads to the conclusion that the proposition holds for the case $n=k+1$. One way to start is to take a number which is considered "arbitrary" from the range associated with this case; another approach would be to do a secondary induction over the set of numbers associated with this case.</p>
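<p>(Not part of the induction outline above: a brute-force check of the claim for small $n$, which can be reassuring before attempting the inductive step.)</p>

```python
from itertools import combinations
from math import factorial

def claim_holds(n):
    # every N <= n! is a sum of at most n distinct divisors of n!
    f = factorial(n)
    divs = [d for d in range(1, f + 1) if f % d == 0]
    reachable = set()
    for k in range(1, n + 1):
        for combo in combinations(divs, k):
            reachable.add(sum(combo))
    return all(N in reachable for N in range(1, f + 1))
```
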
|
163,889 |
<p>Find a closed form for $a_n:=\sum_{k=0}^{n}\binom{n}{k}(n-k)^n(-1)^k$ using generating functions.</p>
|
Marko Riedel
| 44,883 |
<p>Suppose we are trying to evaluate
$$\sum_{k=0}^n {n\choose k} (n-k)^n (-1)^k.$$</p>
<p>Observe that
$$(n-k)^n =
\frac{n!}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{n+1}} \exp((n-k)z) \; dz.$$</p>
<p>This gives for the sum the integral
$$\frac{n!}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{n+1}}
\sum_{k=0}^n {n\choose k} (-1)^k \exp((n-k)z) \; dz
\\ = \frac{n!}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{n+1}}
\left(-1+\exp(z)\right)^n \; dz.$$</p>
<p>Now we have
$$[z^n] \left(-1+\exp(z)\right)^n = 1$$
by inspection, which then gives
$$n!$$
for the sum.</p>
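<p>(A quick numerical confirmation of the result, added for illustration.)</p>

```python
from math import comb, factorial

def alternating_sum(n):
    # sum_{k=0}^{n} (-1)^k C(n, k) (n - k)^n
    return sum((-1) ** k * comb(n, k) * (n - k) ** n for k in range(n + 1))
```
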
|
16,794 |
<p>In this joint paper that I should be working on, we make significant use of a certain generalization of a triangulated disk. Many of our important examples are triangulated disks, but we are also interested in certain simplicial complexes that are singular disks, or even more generally singular disks tiled by polygons. It is easy to describe the disks of interest to us as simplicial complexes which are contractible, compact subsets of the plane. The embedding in the plane only matters at all up to isotopy, and it also does not matter all that much; the most important condition is that the complex is contractible and at most two triangles meet at every edge. For instance, you can have two triangulated disks that meet at a vertex, trees, "barbells", etc.</p>
<p>One could more generally look at those topological spaces in the plane that are an intersection of nested, closed disks, or maybe those that are locally connected. For instance, the Mandelbrot set is one. We do not need them for what we are doing if they are not simplicial complexes, but this is an interesting class of topological spaces that should have a good name. </p>
<p>The following names have been proposed:</p>
<ol>
<li>Van Kampen diagram - standard but ugly; very close to what we use although our edges are not labeled by group elements.</li>
<li>Contractible plane continuum - descriptive but clumsy.</li>
<li>Diskoid - a good name, by analogy with a dendroid which is the 1-dimensional case, but it doesn't seem to be standard.</li>
<li>Cactus - also a good name, by analogy with trees in graph theory, but it seems non-standard.</li>
<li>Singular disk - livable but not specific enough.</li>
</ol>
<p>The reason that I want a short name is that the object X itself is used to make a moduli space or an algebraic variety. The moduli space is easy to describe: Assign fixed lengths to the edges of X and look at its rigid embeddings into a metric space. So we wouldn't want to say "contractible plane continuum variety". Also, for no particular reason I've been thinking of the triangulation as an extra decoration and instead name the underlying topological space.</p>
<p>I can certainly think of livable names, but the general idea appears in several places in mathematics and I would prefer a good name. Could someone with a good sense of the literature argue for a particular term, not necessarily one of the ones listed above? Or any useful opinion would be welcome.</p>
|
Sam Nead
| 1,650 |
<p>Prickly pear cactus is very pretty. Lots of cut points!</p>
<p>I believe that your cacti are "tree-graded spaces" whose "pieces" are points or disks. See Drutu, C., Sapir, M.: Tree-graded spaces and asymptotic cones of groups. Topology 44, 959–1058 (2005).</p>
|
16,794 |
<p>In this joint paper that I should be working on, we make significant use of a certain generalization of a triangulated disk. Many of our important examples are triangulated disks, but we are also interested in certain simplicial complexes that are singular disks, or even more generally singular disks tiled by polygons. It is easy to describe the disks of interest to us as simplicial complexes which are contractible, compact subsets of the plane. The embedding in the plane only matters at all up to isotopy, and it also does not matter all that much; the most important condition is that the complex is contractible and at most two triangles meet at every edge. For instance, you can have two triangulated disks that meet at a vertex, trees, "barbells", etc.</p>
<p>One could more generally look at those topological spaces in the plane that are an intersection of nested, closed disks, or maybe those that are locally connected. For instance, the Mandelbrot set is one. We do not need them for what we are doing if they are not simplicial complexes, but this is an interesting class of topological spaces that should have a good name. </p>
<p>The following names have been proposed:</p>
<ol>
<li>Van Kampen diagram - standard but ugly; very close to what we use although our edges are not labeled by group elements.</li>
<li>Contractible plane continuum - descriptive but clumsy.</li>
<li>Diskoid - a good name, by analogy with a dendroid which is the 1-dimensional case, but it doesn't seem to be standard.</li>
<li>Cactus - also a good name, by analogy with trees in graph theory, but it seems non-standard.</li>
<li>Singular disk - livable but not specific enough.</li>
</ol>
<p>The reason that I want a short name is that the object X itself is used to make a moduli space or an algebraic variety. The moduli space is easy to describe: Assign fixed lengths to the edges of X and look at its rigid embeddings into a metric space. So we wouldn't want to say "contractible plane continuum variety". Also, for no particular reason I've been thinking of the triangulation as an extra decoration and instead name the underlying topological space.</p>
<p>I can certainly think of livable names, but the general idea appears in several places in mathematics and I would prefer a good name. Could someone with a good sense of the literature argue for a particular term, not necessarily one of the ones listed above? Or any useful opinion would be welcome.</p>
|
Greg Kuperberg
| 1,450 |
<p>In the end, we (Joel Kamnitzer, his student Bruce Fontaine, and I) agreed on the term "diskoid". In the abstract, the word "cactus" seemed too clever by half. When I actually wrote it into the paper, it was awkward and it didn't emphasize the main property of interest, that our shapes are meant as a mild generalization of disks. Moreover, there have been papers that used the term "cactus" in similar but slightly different ways, and the mathematical etymology of this term is already somewhat confused. Actually, I don't have a good explanation of why the term that we picked seems good (or least bad), but somehow that is the feeling.</p>
|
4,219,193 |
<p>The <code>p implies q</code> statement is often described in various ways including:<br />
(1) <code>if p then q</code> (i.e. whenever p is true, q is true)<br />
(2) <code>p only if q</code> (i.e. whenever q is false, p is false)</p>
<p>I see the truth table for (1) as</p>
<pre><code>p | q | if p then q
-------------------
T | T | T
T | F |
F | T |
F | F |
</code></pre>
<p>I see the truth table for (2) as</p>
<pre><code>p | q | p only if q
-------------------
T | T |
T | F | F
F | T |
F | F | T
</code></pre>
<p>How are the two statements the same? What is wrong with my understanding?</p>
<hr />
<p><strong>Addenda</strong><br />
1). There are some excellent answers/suggestions here, but what really worked for me was the <a href="https://math.stackexchange.com/a/4219227/956241">following tip</a>:</p>
<blockquote>
<p>I think the intuitive way to think of this is if something is
contradicted then it is false but if nothing can be contradicted it is
by default true.</p>
</blockquote>
<p>2). I have now learned that a conditional statement that is true by virtue of the fact that its hypothesis is false (i.e. <code>true by default</code>) is called <code>vacuously true</code>.</p>
|
fleablood
| 280,126 |
<p>I think the intuitive way to think of this is if something is contradicted then it is false but if nothing can be contradicted it is by default true.</p>
<p><span class="math-container">$p\to q$</span> means whenever <span class="math-container">$p$</span> is true we'll have <span class="math-container">$q$</span> true. So two rows of the table are</p>
<p><span class="math-container">$\begin{matrix}p&q&p\to q\\T&T&T&\text{whenever p is true we must have q true... and we do}\\T&F&F&\text{whenever p is true we must have q true... and we don't}\end{matrix}$</span></p>
<p>But what of</p>
<p><span class="math-container">$\begin{matrix}p&q&p\to q\\F&T&\\F&F\end{matrix}$</span></p>
<p>Well, in these cases our condition <span class="math-container">$p$</span> is true is not met so we are not obligated to anything about <span class="math-container">$q$</span>. We can't contradict <span class="math-container">$p\to q$</span> as we can only test that if <span class="math-container">$p$</span> is true. And if these statements can't be contradicted.... they must be assumed to be true. So</p>
<p><span class="math-container">$\begin{matrix}p&q&p\to q\\T&T&T&\text{whenever p is true we must have q true... and we do}\\T&F&F&\text{whenever p is true we must have q true... and we don't}\\F&T&T&\text{whenever p is true we must have q true, but we don't have p is true so all bets are off}\\F&F&T&\text{whenever p is true we must have q true, but we don't have p is true so all bets are off}\end{matrix}$</span></p>
<p>And the truth table for <span class="math-container">$p\text{ only if }q$</span> is similar</p>
<p><span class="math-container">$\begin{matrix}p&q&p\text{ only if }q\\T&T&T&\text{whenever q is false we must have p false, but we don't have q is false so all bets are off}\\T&F&F&\text{whenever q is false we must have p false... and we don't}\\F&T&T&\text{whenever q is false we must have p false, but we don't have q is false so all bets are off}\\F&F&T&\text{whenever q is false we must have p false... and we do}\end{matrix}$</span></p>
<p>Same values.</p>
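<p>(An added sanity check, not in the original answer: enumerating all four rows mechanically confirms that the two readings produce identical truth tables.)</p>

```python
from itertools import product

def if_then(p, q):
    # "if p then q": the only failing row is p true, q false
    return (not p) or q

def only_if(p, q):
    # "p only if q": whenever q is false, p must be false
    return not (p and not q)
```
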
|
4,219,193 |
<p>The <code>p implies q</code> statement is often described in various ways including:<br />
(1) <code>if p then q</code> (i.e. whenever p is true, q is true)<br />
(2) <code>p only if q</code> (i.e. whenever q is false, p is false)</p>
<p>I see the truth table for (1) as</p>
<pre><code>p | q | if p then q
-------------------
T | T | T
T | F |
F | T |
F | F |
</code></pre>
<p>I see the truth table for (2) as</p>
<pre><code>p | q | p only if q
-------------------
T | T |
T | F | F
F | T |
F | F | T
</code></pre>
<p>How are the two statements the same? What is wrong with my understanding?</p>
<hr />
<p><strong>Addenda</strong><br />
1). There are some excellent answers/suggestions here, but what really worked for me was the <a href="https://math.stackexchange.com/a/4219227/956241">following tip</a>:</p>
<blockquote>
<p>I think the intuitive way to think of this is if something is
contradicted then it is false but if nothing can be contradicted it is
by default true.</p>
</blockquote>
<p>2). I have now learned that a conditional statement that is true by virtue of the fact that its hypothesis is false (i.e. <code>true by default</code>) is called <code>vacuously true</code>.</p>
|
ryang
| 21,813 |
<p>Further to the other comments and answers that have dealt with your motivating query (my additional take <a href="https://math.stackexchange.com/a/4217336/21813">here</a>), I'd like to address your question and follow-up comment “<em>I could be wrong but I don't believe the statements address the remaining entries in the truth table (the ones that I have left out)</em>” at face value.</p>
<ul>
<li>Think of <span class="math-container">$\;\rightarrow\;$</span> (the <strong>material conditional</strong>) in symbolic
logic as a logical operator—analogous to <span class="math-container">$\div$</span> being an
arithmetic operator—that outputs a truth value depending on the
truth values of its operands <span class="math-container">$P$</span> and <span class="math-container">$Q.$</span> Operators operate according
to predefined rules; here, how <span class="math-container">$\;\rightarrow\;$</span> operates is actually
<em>defined</em> by its truth table. The sentence <span class="math-container">$$P\rightarrow Q$$</span> (“if <span class="math-container">$P$</span> then <span class="math-container">$Q$</span>”) is <em>agnostic</em> to the truth values of <span class="math-container">$P$</span> and <span class="math-container">$Q;$</span> it doesn't care whether its output is T or F.</li>
<li>On the other hand, the <strong>assertion</strong> <span class="math-container">$$P\implies Q$$</span>
(“<span class="math-container">$P$</span> implies <span class="math-container">$Q$</span>”) encountered in non-formal logic (proofs or
arguments) claims that the sentence ‘if <span class="math-container">$P$</span> then <span class="math-container">$Q$</span>’ <em>is</em> true. It is asserting
that <span class="math-container">$Q$</span> is a consequence of (can be concluded from) <span class="math-container">$P$</span> being true.</li>
</ul>
|
2,629,115 |
<blockquote>
<p>We will define the convolution (slightly unconventionally to match Rudin's proof) of $f$ and $g$ as follows:
$$(f\star g)(x)=\int_{-1}^1f(x+t)g(t)\,dt\qquad(0\le x\le1)$$</p>
<ol>
<li>Let $\delta_n(x)$ be defined as $\frac n2$ for $-\frac1n<x<\frac1n$ and 0 for all other $x$. Let $f(x)$ be defined as $x$ for $.4<x<.7$ and let $f(x)=0$ for all other $x$. Find a piecewise algebraic expression for $f\star\delta_{10}$ and graph $f\star\delta_{10}$. Repeat the exercise for $f\star\delta_{20}$. In what sense does $f\star\delta_n$ converge on $[0,1]$ and to what function does it converge?</li>
</ol>
</blockquote>
<p>Hello everyone,
I just need help finding a piecewise algebraic expression for $f* \delta_{ 10}$. I think I should be able to figure out everything else in the question once I know how to do this. </p>
<p>Thoughts/things I know (how to do):
Delta 10 is defined as $5$ for $-1/10<x<1/10$ and $0$ otherwise.
$f(x)=x$ for $0.4<x<0.7$ and $0$ otherwise. I know what both graphs look like. I guess you could say there is a jump in the graph of $f(x)$ at $0.4$ and $0.7$ and a jump in the graph of $\delta_{10}$ at $-1/10$ and $1/10$.</p>
<p>Now my confusions: I believe that perhaps I will have to split up the problem into several cases/integrals, as both $f(x)$ and delta are piecewise. The main problem in this question is that I don't understand what the $t$ represents, so I don't know how to set up my bounds, which are in terms of $t$ (as we integrate with respect to $t$). I believe that, from the definition, the integral is only valid for $0\leq x\leq1$ inclusive. Also, if I'm doing $f* \delta_{10}$, let's say for $x$ between $0.4<x<0.7$, then wouldn't $f(x+t)=x+t$ and $g(t)=0$? Overall, I think I'm confused, so I could really use some guidance on this problem for the first case $f*\delta_{10}$; then I believe I could figure out the rest.
Thank you!</p>
|
Doug M
| 317,162 |
<p>$f\star\delta_{10}(x) = \int_{-1}^1 f(x + t)\delta_{10}(t)\ dt$</p>
<p>but $\delta_{10}(t) = 0$ for most of this interval</p>
<p>$f\star\delta_{10}(x) = \int_{-\frac {1}{10}}^{\frac {1}{10}} 5(x + t)\, dt$</p>
<p>$x\in [0.5, 0.6]$</p>
<p>But if $x$ is outside this interval:</p>
<p>$x\in [0.3, 0.5]$</p>
<p>$f\star \delta_{10}(x) = \int_{0.4-x}^{0.1} 5(x + t)\, dt$</p>
<p>$x\in [0.6, 0.8]$</p>
<p>$f\star \delta_{10}(x) = \int_{-0.1}^{0.7-x} 5(x + t)\, dt$</p>
<p>$f\star\delta_{10} = \begin {cases} \frac 52 (x+0.1)^2 - 0.4 &x\in[0.3,0.5]\\x&x\in[0.5,0.6]\\
-\frac 52x^2 + 0.5x + 1.2& x\in [0.6,0.8]\\0&\text {elsewhere}\end{cases}$</p>
<p>The delta function puts a small blur on the edges that sharpens as $n$ gets big.</p>
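<p>(A numerical cross-check I'm adding, not part of the answer: approximating the problem's convolution integral $\int_{-1}^1 f(x+t)\delta_{10}(t)\,dt$ by a Riemann sum and comparing it with the piecewise formula above at several sample points.)</p>

```python
def f(x):
    return x if 0.4 < x < 0.7 else 0.0

def delta10(t):
    return 5.0 if -0.1 < t < 0.1 else 0.0

def conv(x, n=50000):
    # midpoint Riemann sum of the convolution integral over t in [-1, 1]
    h = 2.0 / n
    return h * sum(f(x + (-1.0 + (i + 0.5) * h)) * delta10(-1.0 + (i + 0.5) * h)
                   for i in range(n))

def piecewise(x):
    if 0.3 <= x <= 0.5:
        return 2.5 * (x + 0.1) ** 2 - 0.4
    if 0.5 < x < 0.6:
        return x
    if 0.6 <= x <= 0.8:
        return -2.5 * x ** 2 + 0.5 * x + 1.2
    return 0.0
```
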
|
2,411,081 |
<p>How can I show (with one of the tests for convergence , <strong>not by solving</strong>) that the integral $$\int _{1}^\infty\frac{\ln^5(x)}{x^2}dx$$ converges?</p>
|
G Tony Jacobs
| 92,129 |
<p>Since $\ln^5(x)$ eventually grows more slowly than $x^{1/2}$, you can use a comparison test. For sufficiently large $a$, we have:</p>
<p>$$\int_a^\infty \frac{\ln^5(x)}{x^2} dx \leq \int_a^\infty \frac{x^{1/2}}{x^2} dx = \int_a^\infty \frac{dx}{x^{3/2}}$$</p>
<p>You should know that the last expression converges by the $p$-test (here $p=3/2>1$), and the remaining piece $\int_1^a$ is finite because the integrand is continuous on $[1,a]$.</p>
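<p>(An added numerical aside, not needed for the comparison-test argument: the substitution $u = \ln x$ turns the original integral into $\int_0^\infty u^5 e^{-u}\,du = \Gamma(6) = 5! = 120$, which is easy to confirm numerically.)</p>

```python
import math

def g(u):
    # integrand after the substitution u = ln(x), dx = e^u du
    return u ** 5 * math.exp(-u)

def simpson(fn, a, b, n=2000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    total = fn(a) + fn(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * fn(a + i * h)
    return total * h / 3

# the tail beyond u = 50 is negligibly small
value = simpson(g, 0.0, 50.0)
```
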
|
72,943 |
<p>I want to find the minima of a (multivariable) function under a constraint that has to be fulfilled on a whole interval, say
$$
\nabla f (\underline x) = 0 \ \\ \ c(\underline x,s)\geq0\ \forall s\in [0,1].
$$
How do I implement such a condition into the <code>Minimize[{f[x1,x2,...,xn],c[x1,...,xn,s]>=0 ?},{x1,x2,...,xn}]</code> function?
Thanks in advance!</p>
<p>Edit: OK, small mistake. I wanted the condition to be an inequality. If I just change that in the proposed example, it states that these are not valid constraints:</p>
<pre><code>f[x_, y_] := x^2 + y^2;
c[x_, y_, s_] = 2 x + 3 y + s;
NMinimize[{f[x, y], c[x, y, s] >= 0, 1 >= s >= 0}, {x, y}]
</code></pre>
|
bill s
| 1,783 |
<p>Here's a two dimensional example:</p>
<pre><code>f[x_, y_] := x^2 + y^2;
c[x_, y_, s_] = 2 x + 3 y + s;
Minimize[{f[x, y], c[x, y, s] == 0, 1 >= s >= 0}, {x, y}]
</code></pre>
<p>This gives an answer that depends on the value of <code>s</code>, as is plausible. For your revised/edited problem:</p>
<pre><code>Minimize[{f[x, y], c[x, y, s] >= 0, 1 >= s >= 0}, {x, y, s}]
</code></pre>
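<p>For a rough cross-check of what this call should return (a sketch in Python, not an equivalent of <code>NMinimize</code>): since <code>c</code> is increasing in <code>s</code>, requiring the constraint for every $s\in[0,1]$ reduces to the worst case $s=0$, and a coarse grid search then locates the minimum of $x^2+y^2$ at the origin.</p>

```python
def feasible(x, y):
    # c(x, y, s) = 2x + 3y + s >= 0 for all s in [0, 1] iff it holds at s = 0
    return 2 * x + 3 * y >= 0

best = min(
    (x * x + y * y, x, y)
    for x in (i / 50 for i in range(-100, 101))
    for y in (j / 50 for j in range(-100, 101))
    if feasible(x, y)
)
```
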
|
4,447,641 |
<p>I can't figure out where to even start. I looked up the answer on Desmos, but it uses L'Hôpital's rule, which I haven't learned yet.</p>
<p><span class="math-container">$$\lim_{h \to 0} \frac{(2+h)^{3+h} - 8}{h}$$</span>
I see that I can use the log rules to rewrite it as
<span class="math-container">$$\lim_{h \to 0} \frac{e^{(3+h)\ln(2+h)} - 8}{h}$$</span>
but that only confuses me more.</p>
|
Dan
| 1,374 |
<p>Per Levent's comment, suppose that you're given <span class="math-container">$f(x) = x^{x+1}$</span>. By definition of the derivative:</p>
<p><span class="math-container">$$f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x)}{h} = \lim_{h \rightarrow 0} \frac{(x+h)^{x+1+h} - x^{x+1}}{h}$$</span></p>
<p>Now, plug in <span class="math-container">$x=2$</span>.</p>
<p><span class="math-container">$$f'(2) = \lim_{h \rightarrow 0} \frac{(2+h)^{3+h} - 8}{h}$$</span></p>
<p>Hey, that's conveniently familiar! Now your question can be rephrased as “Find <span class="math-container">$f'(2)$</span>.”</p>
<p>Now, what's the derivative of <span class="math-container">$y = f(x) = x^{x+1}$</span>? The equation may be easier to work with if you take logarithms.</p>
<p><span class="math-container">$$ \ln y = (x + 1) \ln x$$</span></p>
<p>You can then use implicit differentiation to get:</p>
<p><span class="math-container">$$\frac{1}{y} \frac{dy}{dx} = \ln x + \frac{x+1}{x} = \ln x + 1 + \frac{1}{x}$$</span></p>
<p><span class="math-container">$$\frac{dy}{dx} = y(\ln x + 1 + \frac{1}{x})$$</span></p>
<p>And plugging in <span class="math-container">$x=2$</span> and <span class="math-container">$y=8$</span> gives:</p>
<p><span class="math-container">$$f'(2) = 8(\ln 2 + 1 + \frac{1}{2}) = 8 \ln 2 + 12$$</span></p>
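<p>(A numerical confirmation I'm adding: the difference quotient from the original limit approaches <span class="math-container">$8\ln 2 + 12 \approx 17.545$</span> as <span class="math-container">$h \to 0$</span> from either side.)</p>

```python
import math

def quotient(h):
    # the difference quotient from the original limit
    return ((2 + h) ** (3 + h) - 8) / h

exact = 8 * math.log(2) + 12
```
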
|
2,078,943 |
<p>My book says this but doesn't explain it:</p>
<blockquote>
<p>Row operations do not change the dependency relationships among columns.</p>
</blockquote>
<p>Can someone explain this to me? Also what is a dependency relationship? Are they referring to linear dependence? </p>
|
Nowibananatzki
| 385,751 |
<p>Suppose A is a matrix and one of its columns <span class="math-container">$v_k$</span> is a linear combination of the others; then we can form an equation as such:</p>
<p><span class="math-container">$$a_1v_1 + \ldots + a_{k-1}v_{k-1} + a_{k+1}v_{k+1} + \ldots + a_nv_n = a_kv_k$$</span></p>
<p>If the column vectors of A are linearly independent, <span class="math-container">$a_1, a_2, \ldots, a_n$</span> are all zero. This doesn't affect our proof.</p>
<p>Now we can rewrite the equation to:</p>
<p><span class="math-container">$$A'x = b$$</span></p>
<p>where the columns of A' are <span class="math-container">$v_1, \ldots, v_{k-1}, v_{k+1}, \ldots, v_n$</span>, and x = <span class="math-container">$[a_1, \ldots, a_{k-1}, a_{k+1}, \ldots, a_n]^T$</span>, and <span class="math-container">$b = a_kv_k$</span>.</p>
<p>Since the equation still holds after row operations on the augmented matrix of A', row operations do not change the dependency relationships among the columns of A.</p>
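<p>(A small numerical illustration of the statement, added for concreteness; the matrix below is my own example, chosen so that the third column equals the sum of the first two.)</p>

```python
A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [1.0, 1.0, 2.0]]   # third column = first column + second column

# apply the three elementary row operations
B = [row[:] for row in A]
B[0], B[2] = B[2], B[0]                              # swap R1 and R3
B[1] = [0.5 * v for v in B[1]]                       # scale R2
B[2] = [u + 3.0 * v for u, v in zip(B[2], B[0])]     # R3 += 3*R1

def third_is_sum(M):
    # does col3 = col1 + col2 still hold?
    return all(abs(row[2] - (row[0] + row[1])) < 1e-12 for row in M)
```

<p>The same dependency <code>col3 = col1 + col2</code> holds before and after the row operations.</p>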
|
758,466 |
<blockquote>
<p>A fair 6-sided die is tossed 8 times. The sequence of 8 results is recorded to form an 8-digit number. For example if the tosses give {3, 5, 4, 2, 1, 1, 6, 5}, the resultant number is $35421165$. What is the probability that the number formed is a multiple of 8.</p>
</blockquote>
<p>I solved this by listing all possibilities for the last 3 digits that give multiples of 8, and found this to be $\frac{1}{8}$. </p>
<p>The solution key agrees with my answer, but also says that "There are quicker ways to solve the problem using a more advanced knowledge of number theory"</p>
<hr>
<p>What would be a faster way to solve this using number theory?</p>
|
paw88789
| 147,810 |
<p>Here's another approach. As noted in the problem statement, it all comes down to the last <span class="math-container">$3$</span> rolls concatenating to a multiple of <span class="math-container">$8$</span>.</p>
<p>The last two rolls have to concatenate to a multiple of <span class="math-container">$4$</span>.</p>
<p>We break these into two groups: The last two are a multiple of <span class="math-container">$8$</span>, or not:</p>
<p>Group 1: <span class="math-container">$16, 24, 32, 56, 64$</span>.</p>
<p>Group 2: <span class="math-container">$12, 36, 44, 52$</span>.</p>
<p>To complete an entry in group 1 to a multiple of <span class="math-container">$8$</span>, the leftmost digit must be even (<span class="math-container">$3$</span> choices). To complete an entry in group 2 to a multiple of <span class="math-container">$8$</span>, the leftmost digit must be odd (<span class="math-container">$3$</span> choices).</p>
<p>So altogether there will be <span class="math-container">$3\cdot 5 + 3\cdot 4 = 27$</span> multiples of <span class="math-container">$8$</span>.</p>
<p>This gives a probability of <span class="math-container">$\frac{27}{216}=\frac18$</span>.</p>
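<p>The count of favorable endings can be double-checked by brute force over all <span class="math-container">$6^3 = 216$</span> possibilities for the last three rolls (an illustrative Python sketch, not part of the original argument):</p>

```python
# Count three-digit endings (each digit a die face 1..6) divisible by 8.
count = 0
for a in range(1, 7):
    for b in range(1, 7):
        for c in range(1, 7):
            if (100 * a + 10 * b + c) % 8 == 0:
                count += 1

print(count)         # 27 favorable endings
print(count / 6**3)  # 0.125, i.e. the probability 1/8
```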
|
20,683 |
<p>The most elementary construction I know of quantum groups associated to a finite dimensional simple Hopf algebra is to construct an algebra with generators $E_i$ and $F_i$ corresponding to the simple positive roots, and invertible $K_j$'s generating a copy of the weight lattice. Then one has a flurry of relations between them, and a coproduct defined on the generators by explicit formulas. These are not mortally complicated, but are still rather involved. Then come explicit checks of coassociativity, and compatibility between multiplication and comultiplication. Finally, one has the $R$-matrix which is an infinite sum with rather non-obvious normalizations. Enter more computations to verify $R$-matrix axioms.</p>
<p>I recall learning about a nice way to construct the quantum group, which in addition to requiring less formulas has the advantage of making it clear conceptually why it's braided.</p>
<blockquote>
<p>I'm hoping someone can either point me to a reference for the complete picture, or perhaps fill in some of the details, since I only remember the rough outline. That, precisely, is my question.</p>
</blockquote>
<p>I include the remarks below in hopes it will jog someone's memory.</p>
<p>You start with the tensor category $Vect_\Lambda$ of $\Lambda$-graded vector spaces, where $\Lambda$ is the weight lattice. We have a pairing $\langle,\rangle:\Lambda\times\Lambda\to \mathbb{Z}$, and we define a braiding $\sigma_{\mu,\nu}:\mu \otimes \nu \to \nu\otimes\mu$ to be $q^{\langle \mu,\nu \rangle}$. Here $q$ is either a complex number or a formal variable. We may need to pick some roots of $q$ if we regard it as a number; I don't remember (and am not too worried about that detail). Also, here we denoted by $\mu$ and $\nu$ the one dimensional vector space supported at $\mu$ and $\nu$ respectively, and we used the fact that both $\mu\otimes\nu$ and $\nu\otimes\mu$ are as objects just $\mu+\nu$.</p>
<p>Okay, so now we're supposed to build an algebra in this category, generated by the $E_i$'s, which generators we regard as living in their respective gradings, corresponding to the simple roots. Here's where things start to get fuzzy. Do we take only the simples as I said, or do we take all the $E_\alpha$'s, for all roots $\alpha$? Also, what algebra do we build with the $E_i$'s? Of course it should be the positive nilpotent part of the quantum group, but since we build it as an algebra in this category, there may be a nicer interpretation of the relations? Anyways, let's call the algebra we are supposed to build here $U_q(\mathfrak{n}^+)$. I definitely remember that it's now a bi-algebra in $Vect_\Lambda$, and the coproduct is just $\Delta(E_i)=E_i\otimes 1 + 1\otimes E_i$ (the pesky $K$ that appears there usually has been tucked into the braiding data). Now we take $U_q(\mathfrak{n}^-)$ to be generated by $F_i$'s in negative degree, and we construct a pairing between $U_q(\mathfrak{n}^+)$ and $U_q(\mathfrak{n}^-)$. The pairing is degenerate, and along the lines of Lusztig's textbook, one finds that the kernel of the pairing is the q-Serre relations in each set of variables $E_i$ and $F_i$.</p>
<p>Finally, once we quotient out the kernel, we take a relative version of Drinfeld's double construction (the details here I also can't remember, but would very much like to), and we get a quasi-triangular Hopf algebra in $Vect_\Lambda$. As an object in $Vect_\Lambda$ it's just an algebra generated by the $E_i$'s and $F_i$'s, so no torus. But since we're working in this relative version, we can forget down to vector spaces, and along the way, we get back the torus action, because that was tucked into the data of $Vect_\Lambda$ all along.</p>
<p>So, the construction (a) gives neater formulas for the products, coproducts, and relations (including the $q$-Serre relations), and (b) makes it clear why there's a braiding on $U_q(\mathfrak{g})$ by building it as the double.</p>
<p>The only problem is that I learned it at a seminar where to my knowledge complete notes were never produced, and while I remember the gist, I don't remember complete details. Any help?</p>
|
S. Carnahan
| 121 |
<p>If you have a copy of Gaitsgory's <i>Notes on factorizable sheaves</i> there is a sketchy exposition of the relative double in section 3.2, and a description of the construction of the quantum group in section 5 (in the numbering of the March 2008 edition). Here is what I understand, with many possible misconceptions. There is a sequence of Hopf algebra homomorphisms:</p>
<p>${}^{free}U_q^\pm \twoheadrightarrow {}^{DK}U_q^\pm \twoheadrightarrow u_q^\pm \hookrightarrow {}^LU_q^\pm \hookrightarrow {}^{cofree}U_q^\pm$</p>
<p>${}^{free}U_q^+$ is the free associative algebra generated by the simple root operators $E_i$, which have the appropriate grading. It has the coproduct you described. Taking a quotient by Serre relations yields the first arrow. The operation of taking dual Hopf algebra with opposite comultiplication flips the direction of the diagram and changes the sign. Taking the double yields hybrid algebras, e.g., half Lusztig and half DeConcini-Kac. I don't understand why both $Vect^\Lambda_q$ and $Vect^\Lambda_{q^{-1}}$ play a role here.</p>
<p>The relative version of Drinfeld's double construction can be written on the category level as the E[2]-centralizer of the monoidal functor $Vect^\Lambda \to Rep ({}^{\ast}U_q^\pm)$, where $\ast$ indicates one of the choices above. I guess you can combine definition 2.5.1 with example 2.5.15 in Lurie's DAG VI: E[k]-algebras to get a description of this construction. Generically this seems to yield the category of representations of Lusztig's U-dot algebra instead of the original $U_q$.</p>
|
619,580 |
<p>Please help me to solve the following problem that is in the Lebesgue integral discussion </p>
<blockquote>
<p>Give an example of a sequence $\,\,f_n : [0, 1] \to \Bbb R$ of continuous functions such that $\,\,\|f_n\|_\infty \to \infty$ but $\int_0^1\lvert\, f_n\rvert\,d\lambda \to 0$ as $n\to\infty$.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/IXYJq.jpg" rel="nofollow noreferrer">Original screenshot</a></p>
|
zhw.
| 228,045 |
<p>Here's another: $f_n(x) = nx^{n^2}$. Then $\|f_n\|_\infty = f_n(1) = n \to \infty$, while $\int_0^1 \lvert f_n\rvert\,d\lambda = \frac{n}{n^2+1} \to 0$.</p>
|
619,580 |
<p>Please help me to solve the following problem that is in the Lebesgue integral discussion </p>
<blockquote>
<p>Give an example of a sequence $\,\,f_n : [0, 1] \to \Bbb R$ of continuous functions such that $\,\,\|f_n\|_\infty \to \infty$ but $\int_0^1\lvert\, f_n\rvert\,d\lambda \to 0$ as $n\to\infty$.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/IXYJq.jpg" rel="nofollow noreferrer">Original screenshot</a></p>
|
triple_sec
| 87,778 |
<p>\begin{align*}
f_n(x)=\begin{cases}n&\text{if $x\in\left[0,\frac1 {2n^2}\right]$;}\\
\text{[linear descent; see below]}&\text{if $x\in\left(\frac{1}{2n^2},\frac{1}{n^2}\right]$;}\\0&\text{if $x\in\left(\frac1{n^2},1\right]$.}\end{cases}
\end{align*}
$\|f_n\|_{\infty}=n,$</p>
<p>$\|f_n\|_1<n\times(1/n^2)=1/n$.</p>
<p>The linear descent is needed to make the function continuous. It can be computed to be $x\mapsto-2n^3x+2n$.</p>
<p>[Figure below is not to scale.]</p>
<p><img src="https://i.stack.imgur.com/pYHza.jpg" alt="Note: not to scale"></p>
|
2,662,792 |
<p>I'm looking for a Galois extension $F$ of $\mathbb{Q}$ whose associated Galois group $\mbox{Gal}(F, \mathbb{Q})$ is isomorphic to $\mathbb{Z}_3 \oplus \mathbb{Z}_3$. I'm wondering if I should just consider some $u \notin Q$ whose minimum polynomial in $Q[x]$ has degree 9. In that case we'd have $[\mathbb{Q}(u) : \mathbb{Q}] = 9$ and thus $|\mbox{Gal}(\mathbb{Q}(u), \mathbb{Q})| = 9$ (assuming the extension is Galois). But since there are two distinct groups of order 9 (up to isomorphism), I'm not sure that this will yield the desired result. </p>
|
P Vanchinathan
| 28,915 |
<p>Take $\alpha = \exp(2\pi i/7),\ \beta = \exp(2\pi i/13)$, primitive roots of unity of orders 7 and 13 (these are prime numbers that leave a remainder of 1 when divided by 3). They generate fields with cyclic Galois groups of orders 6 and 12 respectively. Each of these fields has its own (unique) cubic subfield whose Galois group is cyclic of order 3.</p>
<p>So the following algebraic number should generate the field you want:
$\alpha +\bar \alpha +\beta +\beta^{5}+\beta^{8}+\beta^{12}$,
where the second summand is the trace of $\beta$ over the order-$4$ subgroup $\{1,5,8,12\}$ of $(\mathbb{Z}/13)^{*}$, whose fixed field is the cubic subfield of $\mathbb{Q}(\beta)$.</p>
|
1,897,212 |
<p>Does there exist any surjective group homomorphism from <span class="math-container">$(\mathbb R^* , .)$</span> (the multiplicative group of non-zero real numbers) onto <span class="math-container">$(\mathbb Q^* , .)$</span> (the multiplicative group of non-zero rational numbers)? </p>
|
Tsemo Aristide
| 280,301 |
<p>Since $f(-1)^2=f(1)=1$, either $f(-1)=1$ or $f(-1)=-1$. Suppose $f(-1)=1$. For $x>0$ we have $f(x)=f(\sqrt x)^2>0$, and also $f(-x)=f(-1)f(x)>0$, so every value of $f$ would be positive. This is impossible, since $f$ is surjective.</p>
<p>Suppose instead $f(-1)=-1$. Then for $x>0$ we have $f(-x)=-f(\sqrt x)^2$ and $f(x)=f(\sqrt x)^2$, so every positive value taken by $f$ is the square of a rational. Hence $f$ is not surjective: $1/2$ is positive but not the square of a rational, so it is not in the image.</p>
|
120,808 |
<pre><code>Limit[Sum[k/(n^2 - k + 1), {k, 1, n}], n -> Infinity]
</code></pre>
<p>This should converge to <code>1/2</code>, but <code>Mathematica</code> simply returns <code>Indeterminate</code> without calculating (or so it would appear). Any specific reason why it can't handle this? Did I make a mistake somewhere?</p>
|
Feyre
| 7,312 |
<p>The problem is that without the limit, the sum doesn't converge, and without the sum the limit is <code>0</code>. <em>Mathematica</em> can only do one at a time.</p>
<p>At a finite value <code>n</code>, the sum gives a sequence with a $\frac{1}{0}$ or other <code>ComplexInfinity</code> expression for all values of <code>n</code>, which is a byproduct of the sum not converging.</p>
<p>You can approximate the limit:</p>
<pre><code>i=4;
Sum[k/(n^2 - k + 1), {k, 1, 10*^i}] /. n -> 10*^i // N
</code></pre>
<p>which will give about <code>i</code> significant digits.</p>
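<p>The same numerical experiment can be sketched outside <em>Mathematica</em>; here is an illustrative Python analogue of the approximation above (the choice <code>N = 10**i</code> mirrors the code's <code>10*^i</code>):</p>

```python
# Evaluate the partial sum  sum_{k=1}^{N} k / (N^2 - k + 1)  at N = 10^i,
# mirroring the Mathematica approximation above; the values approach 1/2.
for i in (1, 2, 3, 4):
    N = 10 ** i
    s = sum(k / (N * N - k + 1) for k in range(1, N + 1))
    print(i, s)
```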
|
3,137,681 |
<p>I'm trying to see if there is any way to solve for x? I'm told Option 1 is correct and I think I understand it but why is Option 2 incorrect?</p>
<p>Option 1: subtract both sides by y
xy + y -y = y - y
xy = 0
No solution as either x or y could be zero</p>
<p>Option 2: factor out y
y * (x + 1) = y
(1/y) * y * (x + 1) = y * (1/y)
(x + 1) -1 = 1 -1
x = 0
Granted we still don't know what y is, but can't we be sure that x = 0?</p>
|
Dr. Sonnhard Graubner
| 175,066 |
<p>Your equation <span class="math-container">$$xy+y-y=0$$</span> simplifies to
<span class="math-container">$$xy=0$$</span> so <span class="math-container">$$x=0$$</span> or <span class="math-container">$$y=0$$</span></p>
|
2,076,910 |
<p>proving $\displaystyle \binom{n}{0}-\binom{n}{1}+\binom{n}{2}+\cdots \cdots +(-1)^{\color{red}{m}-1}\binom{n}{m-1}=(-1)^{m-1}\binom{n-1}{m-1}.$</p>
<p>$\displaystyle \Rightarrow 1-n+\frac{n(n-1)}{2}+\cdots \cdots +(-1)^{m-1}\frac{n\cdot(n-1)\cdot (n-2)\cdots(n-m+2)}{(m-1)!}$</p>
<p><strong>Added</strong></p>
<p>writing the LHS as $\displaystyle \binom{n}{0}-\left(\binom{n-1}{0}+\binom{n-1}{1}\right)+\left(\binom{n-1}{1}+\binom{n-1}{2}\right)+\cdots \cdots +(-1)^{n-1}\left(\binom{n-1}{m-2}+\binom{n-1}{m-1}\right)=(-1)^{m-1}\binom{n-1}{m-1}.$</p>
<p>$\displaystyle \binom{n}{0}-\binom{n-1}{0}+\binom{n-1}{1}-\cdots +(-1)^{n-1}\binom{n-1}{m-2}-\left(\binom{n-1}{1}-\binom{n-1}{2}+\cdots +(-1)^n\binom{m-1}{m-1}\right)$ </p>
<p>I wasn't able to make progress after that; please help me solve it.</p>
|
Stefan4024
| 67,746 |
<p><strong>HINT:</strong> Fix a random $n$ and do induction on $m$.</p>
|
81,348 |
<p>There is a well-known proof of the Compactness Theorem in propositional logic which uses the compactness of the space $\{0,1\}^P$, where $P$ is the set of propositional variables in consideration. In general, this compactness relies on the Tychonoff theorem which in turn requires the Axiom of Choice. Let me sketch it (in danger of boring the experts, but for reference it will be useful):</p>
<p>For a set $A$ of formulas in $P$, let $T(A)$ be the set of interpretations $P \to \{0,1\}$ which make the formulas in $A$ true. Then $T(\cup_i A_i)=\cap_i T(A_i)$ and each $T(A)$ is closed in $\{0,1\}^P$. Thus if $A$ is finitely consistent, then $T(A)$ is the directed intersection of nonempty closed sets, thus also nonempty (compactness).</p>
<p>My question concerns the case that $P = \{p_1,p_2,\dotsc\}$ is countable. Then it seems to me that it is provable in ZF that $\{0,1\}^P$ is compact: Use the homeomorphism to the closed cantor set $C \subseteq [0,1]$ and the compactness of $[0,1]$. From this we can conclude that the Compactness Theorem in propositional logic for countably many propositional variables is provable in ZF - if everything is correct so far.</p>
<p>Isn't this somehow counterintuitive? Or does this proof yield a constructive algorithm how to find an interpretation which makes all formulas in $A$ true provided one can find them for finitely many?</p>
|
Andreas Blass
| 6,794 |
<p>The proof is correct, but it does not provide an algorithm unless you have some additional information about $A$. If you untangle the proofs, you find the following pseudo-algorithm: Go through the propositional variables $p_i$ in order, adding each $p_i$ or its negation to $A$ "greedily", i.e., add $p_i$ if doing so leaves $A$ (which now contains the original $A$, all your previous additions, and $p_i$) finitely satisfiable, and otherwise add $\neg p_i$. An easy induction shows that this preserves finite satisfiability. At the end, your decisions tell you what truth value to give each $p_i$, and it's easy to check that $A$ is satisfied by this truth assignment. The reason this is only a pseudo-algorithm rather than an algorithm is that there is, in general, no way to decide what happens at any step. You'd need to able to decide whether a certain set of formulas (the original $A$ plus decisions) is consistent, and that might not be algorithmically decidable, even if the original $A$ was a computable set of formulas.</p>
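<p>When $A$ is a finite set of formulas the consistency checks are decidable, and the pseudo-algorithm becomes an actual algorithm. Here is a minimal illustrative Python sketch; the representation of formulas as Boolean predicates on a tuple of truth values, and the brute-force satisfiability check, are my own choices, not part of the answer:</p>

```python
from itertools import product

def greedy_assignment(formulas, n_vars):
    """Greedy completion from the pseudo-algorithm above: decide each
    p_i in turn, keeping the decisions jointly satisfiable with the
    formulas.  The check is brute force, which is exactly why this
    only works for finite, decidable instances.  Assumes the set of
    formulas is satisfiable to begin with."""
    decisions = []
    for _ in range(n_vars):
        for value in (True, False):          # greedily try True first
            trial = decisions + [value]
            # is some full assignment extending `trial` a model?
            if any(all(f(bits) for f in formulas)
                   for bits in product([False, True], repeat=n_vars)
                   if list(bits[:len(trial)]) == trial):
                decisions.append(value)
                break
    return decisions

# Example: A = { p1 or p2,  not p1 }
A = [lambda b: b[0] or b[1], lambda b: not b[0]]
print(greedy_assignment(A, 2))   # [False, True]
```

<p>The brute-force check is exactly where undecidability enters in general: for an infinite or undecidable $A$, no such check is available.</p>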
|
1,715,232 |
<p>Suppose $f_n$ converges uniformly to $f$ and $f_n$ are differentiable. Is it true that f will be differentiable?</p>
<p>My initial guess is no because $f_n= \frac{\sin(nx)}{\sqrt n}.$ Is this right? And more examples would be greatly appreciated.</p>
|
Dominik
| 259,493 |
<p>The limit function doesn't even have to be differentiable anywhere. Take any function $f: [0, 1] \to \mathbb{R}$ that is continuous but nowhere differentiable [such functions exist]. By the <a href="https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem" rel="nofollow">Stone-Weierstrass theorem</a> you can find a sequence of smooth functions that converge uniformly to $f$ on $[0, 1]$.</p>
<p>For a more specific example, consider a <a href="https://en.wikipedia.org/wiki/Weierstrass_function" rel="nofollow">Weierstrass function</a>. The corresponding partial sums form a sequence of smooth functions that converge uniformly on $\mathbb{R}$ to a nowhere differentiable function.</p>
|
323,781 |
<p>In the <a href="https://arxiv.org/abs/1902.07321" rel="noreferrer">paper</a> by Griffin, Ono, Rolen and Zagier which appeared on the arXiv today, (Update: published now in <a href="https://www.pnas.org/content/early/2019/05/20/1902572116" rel="noreferrer">PNAS</a>) the abstract includes</p>
<blockquote>
<p>In the case of the Riemann zeta function, this proves the GUE random
matrix model prediction in derivative aspect. </p>
</blockquote>
<p>In more detail, towards the bottom of the second page they say</p>
<blockquote>
<p>Theorem 3 in the case of the Riemann zeta function is the <em>derivative
aspect Gaussian Unitary Ensemble</em> (GUE) random matrix model
prediction for the zeros of Jensen polynomials. To make this precise,
recall that Dyson, Montgomery, and Odlyzko ... conjecture that the
non-trivial zeros of the Riemann zeta function are distributed like
the eigenvalues of random Hermitian matrices. These eigenvalues
satisfy Wigner's Semicircular Law, as do the roots of the Hermite
polynomials <span class="math-container">$H_d(X)$</span>, when suitably normalized, as
<span class="math-container">$d\rightarrow+\infty$</span> ... The roots of <span class="math-container">$J_{\gamma}^{d,0}(X)$</span>, as
<span class="math-container">$d\rightarrow+\infty$</span>, approximate the zeros of
<span class="math-container">$\Lambda\left(\frac{1}{2}+z\right)$</span>, ... and so GUE predicts that
these roots also obey the Semicircular Law. Since the derivatives of
<span class="math-container">$\Lambda\left(\frac{1}{2}+z\right)$</span> are also predicted to satisfy
GUE, it is natural to consider the limiting behavior of
<span class="math-container">$J_{\gamma}^{d,n}(X)$</span> as <span class="math-container">$n\rightarrow+\infty$</span>. The work here proves
that these derivative aspect limits are the Hermite polynomials
<span class="math-container">$H_d(X)$</span>, which, as mentioned above, satisfy GUE in degree aspect.</p>
</blockquote>
<p>I am hoping someone can further explain this. In particular, does this result shed any light on the horizontal distribution of the zeros of the derivative of the Riemann zeta function?</p>
<hr>
<p>Edit: Speiser showed that the Riemann hypothesis is equivalent to <span class="math-container">$\zeta^\prime(s)\ne 0$</span> for <span class="math-container">$0<\sigma<1/2$</span>. Since then quite a lot of work has gone into studying the horizontal distribution of the zeros of <span class="math-container">$\zeta^\prime(s)$</span>. For example, <a href="https://arxiv.org/abs/1002.0372" rel="noreferrer">Duenez et al.</a> <a href="https://iopscience.iop.org/article/10.1088/0951-7715/23/10/014" rel="noreferrer">compared</a> this distribution with the radial distribution of zeros
of the derivative of the characteristic polynomial of a random unitary matrix. (Caveat: I'm not up to date on all the relevant literature.)</p>
<p>This is a very significant question. If the GUE distribution holds for the Riemann zeros, then rarely but infinitely often there will be pair with less than than half the average (rescaled) gap. From this, by the work of Conrey and Iwaniec, one gets good lower bounds for the class number problem.</p>
<p>In <a href="https://arxiv.org/abs/1002.1616" rel="noreferrer">this</a> <a href="https://www.sciencedirect.com/science/article/pii/S0001870812001600" rel="noreferrer">paper</a> Farmer and Ki showed that if the derivative of the Riemann zeta function has sufficiently many zeros close to the
critical line, then the zeta function has many closely spaced zeros, which by the above, also solves the class number problem. </p>
<p>The question of modeling the horizontal distribution of the zeros of <span class="math-container">$\zeta^\prime(s)$</span> with the radial distribution of zeros
of the derivative of the characteristic polynomial of a random unitary matrix, is intimately connected to the class number problem. Based on the answer of Griffin below, I don't think that's what the Griffin-Ono-Rolen-Zagier paper does, but it's worth asking about.</p>
|
Astron
| 136,161 |
<p>Unfortunately, I cannot comment, so I'll leave this as an answer: see
<a href="https://www.youtube.com/watch?v=HAx_pKUUqug" rel="noreferrer">https://www.youtube.com/watch?v=HAx_pKUUqug</a> at 56:28.</p>
|
1,167,438 |
<p>I am currently solving a differential equation but I am having a little trouble figuring out my integrating factor. </p>
<p>I have </p>
<p>$$\exp\bigg({∫\frac{3}{t}dt}\bigg)$$</p>
<p>so to integrate I made it $\exp(\ln(t^3))$ what is the final solution? $t^3$?</p>
|
achille hui
| 59,379 |
<p>Notice that if $d, e$ are random variables uniform over $[0,1]$, then so are $\tilde{d} \stackrel{def}{=} 1 - d\;$ and $\;\tilde{e}\stackrel{def}{=} 1 - e$.<br>
We have
$$a + b + c > d + e \quad\iff\quad a + b + c + \tilde{d} + \tilde{e} > 2$$
For any $p = (x_1,x_2,x_3,x_4,x_5) \in \mathbb{R}^5$, let $\;\ell(p) = x_1 + x_2 + x_3 + x_4 + x_5$.<br>
The probability $\mathcal{P}$ we seek is equal to:</p>
<p>$$\mathcal{P} = \verb/Volume/\big\{\; p \in [0,1]^5 : \ell(p) > 2 \;\big\}
= 1 - \verb/Volume/\big\{\; p \in [0,1]^5 : \ell(p) \le 2 \;\big\}\tag{*1}$$</p>
<p>Let $e_1, e_2, \ldots, e_5$ be the standard bases of $\mathbb{R}^5$ and $\Delta_p(\lambda)$ be the simplex
$$\big\{\; q \in \mathbb{R}^5 : q_i \ge p_i, i = 1\ldots5\;\text{ and }\; \ell(q-p) \le \lambda \;\big\}$$</p>
<p>The polytope on the right of $(*1)$ can be constructed by
cutting 5 small simplices based at $e_i$ from the simplex $\Delta_0(2)$. More precisely,
up to a set of measure zero, we have:
$$\big\{\; p \in [0,1]^5 : \ell(p) \le 2 \;\big\}
= [0,1]^5 \cap \Delta_0(2)
\approx \Delta_0(2) \setminus \left( \bigcup_{i=1}^5 \Delta_{e_i}(1) \right)$$
This leads to$\color{blue}{^{[1]}}$</p>
<p>$$\mathcal{P}
= 1 - \frac{2^5}{5!} + 5 \left(\frac{1^5}{5!}\right)
= 1 - \frac{32 - 5}{120} = \frac{31}{40}$$</p>
<p><strong>Notes</strong></p>
<ul>
<li>$\color{blue}{[1]}$ In general, in any dimension, the volume of the simplex $\Delta_0(\lambda)$ inside the unit hyper-cube is given by following formula:
$$\verb/Volume/\big( [0,1]^n \cap \Delta_0(\lambda) \big) = \sum_{k=0}^{\lfloor\lambda\rfloor} (-1)^k \binom{n}{k} \frac{(\lambda-k)^n}{n!}$$
One can prove this formula using <a href="http://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">inclusion exclusion principle</a>. Look at my <a href="https://math.stackexchange.com/a/455711/59379">answer</a> for a similar question in $\mathbb{R}^3$,
it has more explanation of the underlying ideas.</li>
</ul>
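<p>The formula in the note, and the resulting value $31/40$, can be checked with exact rational arithmetic (an illustrative sketch, not part of the original answer):</p>

```python
from fractions import Fraction
from math import comb, factorial

def cube_simplex_volume(n, lam):
    """Volume of [0,1]^n intersected with {x : sum(x) <= lam},
    via the inclusion-exclusion formula in the note (lam an integer)."""
    return sum(Fraction((-1) ** k * comb(n, k) * (lam - k) ** n,
                        factorial(n))
               for k in range(lam + 1))

# P(a+b+c > d+e) = 1 - Volume([0,1]^5 ∩ {sum <= 2})
p = 1 - cube_simplex_volume(5, 2)
print(p)   # 31/40
```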
|
3,217,130 |
<p>How can I prove this by induction? I am stuck when there is a <span class="math-container">$\Sigma$</span> and two variables, how would I do it? I understand the first step but have problems when i get to the inductive step.</p>
<p><span class="math-container">$$\sum_{j=1}^n(4j-1)=n(2n+1)$$</span></p>
|
Dietrich Burde
| 83,966 |
<p>The conjugation operation of <span class="math-container">$G$</span> on <span class="math-container">$G$</span>, given by <span class="math-container">$b\mapsto aba^{-1}$</span> then is the identity, i.e., <span class="math-container">$aba^{-1}=b$</span> for all <span class="math-container">$a,b\in G$</span>. Hence <span class="math-container">$G$</span> is abelian.</p>
|
3,217,130 |
<p>How can I prove this by induction? I am stuck when there is a <span class="math-container">$\Sigma$</span> and two variables, how would I do it? I understand the first step but have problems when i get to the inductive step.</p>
<p><span class="math-container">$$\sum_{j=1}^n(4j-1)=n(2n+1)$$</span></p>
|
Santana Afton
| 274,352 |
<p>The property implies that the conjugacy class of any <span class="math-container">$g\in G$</span> is exactly <span class="math-container">$\{g\}$</span>, which is equivalent to being abelian.</p>
|
2,565,763 |
<p>I was playing a standard solitaire game on my mobile app and I came across a round where I couldn't perform a single move, resulting in a loss. I then wondered what the probability of this event is in a single game. Would anybody know this probability and show the calculations, given a standard deck of 52 cards? </p>
<p>Here is a link to the rules: <a href="https://www.wikihow.com/Play-Solitaire" rel="nofollow noreferrer">https://www.wikihow.com/Play-Solitaire</a></p>
<p>Note: There are three cards turned at a time but you can only play the second card after you played the top card. Also, the game is klondike.</p>
|
Michael Hardy
| 11,667 |
<p>A composite hypothesis is a hypothesis that is consistent with more than one probability distribution. When the alternative hypothesis is composite, one uses
$$
\frac{L(\theta_\text{null})}{L(\theta_{\text{null} \,\cup\, \text{alternative}})}
$$
as the test statistic. Then the denominator may be equal to the numerator, in which case the value of this ratio is $1.$ The critical value $K$ will be less than $1.$</p>
<p>They prove the test using the $Z$ statistic is the same as the likelihood ratio test so that they can cite theorems about likelihood ratio tests being in some way optimal under some circumstances.</p>
|
4,036,975 |
<p><span class="math-container">$\require{begingroup} \begingroup$</span>
<span class="math-container">$\def\e{\mathrm{e}}\def\W{\operatorname{W}}\def\Wp{\operatorname{W_0}}\def\Wm{\operatorname{W_{-1}}}\def\Catalan{\mathsf{Catalan}}$</span></p>
<p><a href="https://math.stackexchange.com/q/3590905/122782">Related question</a></p>
<p>Is there a known closed form solution to</p>
<p><span class="math-container">\begin{align}
I_n&=
\int_0^1\frac{\ln(1+x^{2n})}{1+x^2} \,dx
=?
\tag{0}\label{0}
\end{align}</span></p>
<p>It checks out numerically, for <span class="math-container">$n=1,\dots,7$</span> that</p>
<p><span class="math-container">\begin{align}
I_1=
\int_0^1\frac{\ln(1+x^2)}{1+x^2} \,dx
&=
\tfrac\pi2\,\ln2-\Catalan
\tag{1}\label{1}
,\\
I_2=
\int_0^1\frac{\ln(1+x^{2\cdot2})}{1+x^2} \,dx
&=
\tfrac\pi2\,\ln(2+\sqrt2)-2\Catalan
\tag{2}\label{2}
,\\
I_3=
\int_0^1\frac{\ln(1+x^{2\cdot3})}{1+x^2} \,dx
&=
\tfrac\pi2\,\ln6-3\Catalan
\tag{3}\label{3}
,\\
I_4=
\int_0^1\frac{\ln(1+x^{2\cdot4})}{1+x^2} \,dx
&=
\tfrac\pi2\,\ln(4+\sqrt2+2\sqrt{4+2\sqrt2})-4\Catalan
\tag{4}\label{4}
,\\
I_5=
\int_0^1\frac{\ln(1+x^{2\cdot5})}{1+x^2} \,dx
&=
\tfrac\pi2\,\ln(10+4\sqrt5)-5\Catalan
=
\tfrac\pi2\,\ln(10\cot^2\tfrac\pi5)-5\Catalan
\tag{5}\label{5}
,\\
I_6=
\int_0^1\frac{\ln(1+x^{2\cdot6})}{1+x^2} \,dx
&=
\tfrac\pi2\,\ln((5\sqrt2+2\sqrt{12})(1+\sqrt2))-6\Catalan
\tag{6}\label{6}
,\\
I_7=
\int_0^1\frac{\ln(1+x^{2\cdot7})}{1+x^2} \,dx
&=
\tfrac\pi2\,\ln(14\cot^2\tfrac\pi7)-7\Catalan
\tag{7}\label{7}
,
\end{align}</span></p>
<p>so \eqref{0} seems to follow the pattern</p>
<p><span class="math-container">\begin{align}
I_n&=
\tfrac\pi2\,\ln(f(n))-n\Catalan
\tag{8}\label{8}
\end{align}</span></p>
<p>for some function <span class="math-container">$f$</span>.</p>
<p>Items \eqref{5} and \eqref{7} look promising
as they agree to <span class="math-container">$f(n)=2n\cot^2(\tfrac\pi{n})$</span>,
but the other fail on that.</p>
<hr />
<p><strong>Edit:</strong></p>
<p>Also, it looks like
<span class="math-container">\begin{align}
\int_1^\infty\frac{\ln(1+x^{2n})}{1+x^2} \,dx
&=\tfrac\pi2\,\ln(f(n))+n\Catalan
\tag{9}\label{9}
\end{align}</span></p>
<p>and</p>
<p><span class="math-container">\begin{align}
\int_0^\infty\frac{\ln(1+x^{2n})}{1+x^2} \,dx
&=\pi\,\ln(f(n))
\tag{10}\label{10}
\end{align}</span></p>
<p>with the same <span class="math-container">$f$</span>.</p>
<hr />
<p><strong>Edit</strong></p>
<p>Thanks to <a href="https://math.stackexchange.com/a/4037168/122782">the great answer by @Quanto</a>,
the function <span class="math-container">$f$</span> can be defined as</p>
<p><span class="math-container">\begin{align}
f(n)&=
2^n\!\!\!\!\!\!\!\!\!\!
\prod_{k = 1}^{\tfrac{2n-1+(-1)^n}4}
\!\!\!\!\!\!\!\!\!
\cos^2\frac{(n+1-2k)\pi}{4n}
\tag{11}\label{11}
.
\end{align}</span></p>
<p><span class="math-container">$\endgroup$</span></p>
|
Quanto
| 686,284 |
<p>The close-form result can be expressed as</p>
<p><span class="math-container">$$\color{blue}{ \int_0^1\frac{\ln(1+x^{2n})}{1+x^2} \,dx = -nG+\frac\pi2 n \ln 2
+ \pi \sum_{k=1}^{[\frac n2]}\ln \cos\frac{(n+1-2k)\pi}{4n} }
$$</span></p>
<p>as shown below. Note that</p>
<p><span class="math-container">\begin{align}
I_n =
\int_0^1\frac{\ln(1+x^{2n})}{1+x^2} \,dx
\overset{x\to\frac1x}{=} \frac12 J_n - nG
\end{align}</span><br />
where <span class="math-container">$ \int_1^\infty\frac{\ln x}{1+x^2} \,dx=G$</span> and</p>
<p><span class="math-container">$$J_n =\int_0^\infty\frac{\ln(1+x^{2n})}{1+x^2} \,dx $$</span>
Substitute
<span class="math-container">$$1+x^{2n} = \prod_{k=1}^{n}(1+e^{i\pi\frac{n+1-2k}n }x^2)
$$</span>
and use the known result
<span class="math-container">$\int_0^\infty \frac{\ln(1+ax^2)}{1+x^2}dx= \pi\ln(1+a^{\frac12})
$</span> to integrate</p>
<p><span class="math-container">\begin{align}
J_{n}& =\int_0^\infty\frac{dx}{1+x^2} \sum_{k=1}^{n}
\ln (1+e^{i\pi\frac{n+1-2k}n }x^2)
= \pi\sum_{k=1}^{n} \ln (1+e^{i\pi\frac{n+1-2k}{2n} })\\
&=n \pi \ln 2 + 2\pi\sum_{k=1}^{[\frac n2]} \ln \cos\frac{(n+1-2k)\pi}{4n}
\end{align}</span>
where the symmetry of the sequence is recognized in the last step.</p>
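<p>As a numerical sanity check of the boxed closed form (an illustrative Python sketch; the value of Catalan's constant is hard-coded, and Simpson's rule stands in for a quadrature library):</p>

```python
import math

CATALAN = 0.9159655941772190  # Catalan's constant G

def I_closed(n):
    """The closed form above:
    -nG + (pi/2) n ln 2 + pi * sum_{k=1}^{floor(n/2)} ln cos((n+1-2k)pi/(4n))."""
    s = sum(math.log(math.cos((n + 1 - 2 * k) * math.pi / (4 * n)))
            for k in range(1, n // 2 + 1))
    return -n * CATALAN + math.pi / 2 * n * math.log(2) + math.pi * s

def I_numeric(n, steps=20000):
    """Simpson's rule for the integral of ln(1+x^(2n))/(1+x^2) on [0,1]."""
    f = lambda x: math.log(1 + x ** (2 * n)) / (1 + x * x)
    h = 1.0 / steps
    total = f(0) + f(1)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3

for n in (1, 2, 3, 7):
    assert abs(I_closed(n) - I_numeric(n)) < 1e-9
print("closed form matches numerics")
```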
|
603,291 |
<p>Suppose $f:(a,b) \to \mathbb{R}$ satisfies $|f(x) - f(y) | \le M |x-y|^\alpha$ for some $\alpha >1$
and all $x,y \in (a,b) $. Prove that $f$ is constant on $(a,b)$. </p>
<p>I'm not sure which theorem should I look to prove this question. Can you guys give me a bit of hint? First of all how to prove some function $f(x)$ is constant on $(a,b)$? Just show $f'(x) = 0$?</p>
|
Ant
| 66,711 |
<blockquote>
<p><em>Caveat (by @Did, 2018-09-12): This accepted answer is wrong. For unfathomable reasons, for years now, neither its author nor the OP seem interested in correcting the situation -- hence the present warning.</em></p>
</blockquote>
<p>divide by $|x-y| $ both members. you get</p>
<p>$$\frac{|f(x) - f(y)|}{|x-y|} \le M |x-y|^{\alpha - 1} \ \ (1)$$</p>
<p>now, since $(1)$ has to hold $\forall \ x, y$ then set $y = x + h$, with $h \to 0$</p>
<p>it becomes </p>
<p>$$|f'(x)| \le M \ |h|^{\alpha-1} = 0$$ ($\alpha - 1 > 0$ so there's no problem there)</p>
<p>So $|f'(x)| \le 0$, but of course also $|f'(x)| \ge 0$, it implies $|f'(x)| = 0$.
Hence $f'(x) = 0$</p>
<p>EDIT:</p>
<p>We can formalize it more. Let's do it.</p>
<p>$$\frac{|f(x) - f(y)|}{|x-y|} \le M |x-y|^{\alpha - 1} \ \ (1)$$</p>
<p>We can always set $x = y + h, h > 0$ since $(1)$ has to hold $\forall x, y$</p>
<p>Then</p>
<p>$$\frac{|f(y+h) - f(y)|}{h} \le M h^{\alpha - 1} \ \ (1)$$</p>
<p>(note $|h| = h$)</p>
<p>Now, let's suppose $f(y+h) - f(y) \ge 0 \ \ \ \ \ (2a) $</p>
<p>It implies that $f'(y) \ge 0$ (it suffice to divide $(2a)$ by $h$ and then taking the limit for $h \to 0$ to show it)</p>
<p>But it also implies, recalling (1) that</p>
<p>$$\frac{f(y+h) - f(y)}{h} \le 0 \Rightarrow f'(y) \le 0$$</p>
<p>(Again by taking the limit of both parts.)</p>
<p>But these last two results imply that $f'(y) = 0 $</p>
<p>We can do the exact same reasoning in the case that $f(y+h) - f(y) \le 0$</p>
<p>So again, in this case we find $f'(y) = 0$</p>
<p>Thus we have demonstrated that in both cases ($f(y+h) > f(y)$ and $f(y+h) < f(y)$) we have $f'(y) = 0$, so this has to hold $\forall y$</p>
<p>This implies $f = const$</p>
|
3,127,795 |
<p>So I’m struggling to understand how to find the angle in this circle, we’ve recently learnt about trigonometry and like finding the area of a circle and all that but I can’t seem to remember which formula I have to use to find this angle. Can anyone lend a helping hand?
<img src="https://i.stack.imgur.com/V2Sm7.jpg" alt=""></p>
|
Haydn Gwyn
| 645,813 |
<p>You can bisect the angle AOB, which divides isosceles triangle AOB into two right triangles. Each one of these right triangles has a hypotenuse of 7 and a leg of 5, so the sine of their angle at O is 5/7. As such, the measure of angle AOB is 2*arcsin(5/7).</p>
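<p>Assuming, as in the answer, a hypotenuse (radius) of $7$ and a leg (half-chord) of $5$ read from the figure, the angle can be evaluated numerically (illustrative sketch):</p>

```python
import math

radius = 7.0       # OA = OB, the hypotenuse of each right triangle
half_chord = 5.0   # the leg opposite the angle at O
angle = 2 * math.asin(half_chord / radius)  # angle AOB in radians
print(round(math.degrees(angle), 2))        # about 91.17 degrees
```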
|
2,463,052 |
<p>I admit $\frac{1}{1+b}<1$ is trivial for $b>0.$ However, the above claim boils down to suggesting that, any $y\in\{x\in\mathbb{R}:x>1\}$ can be uniquely represented by $1+b$ for some $b>0.$ Can you give me some hints how to prove it?</p>
|
nonuser
| 463,553 |
<p>It should be $0<a<1$, otherwise it doesn't hold. Say $a=-1/2$; then $b=-3$.</p>
<p>Let $c= 1/a$ then $c>1$ since $1>a>0$. Let $b=c-1>0$. So $a= {1\over 1+b}$.</p>
|
4,486,956 |
<p>If <span class="math-container">$f$</span> is a convex increasing function, then I need to show that
<span class="math-container">$$ \lim_{x\rightarrow -\infty} \frac{1}{x}\int_x^{x_0}f(y)dy=-\lim_{x\rightarrow -\infty}f(x).$$</span></p>
<p>I can prove this equality using L'Hopital's rule and the Fundamental Theorem of Caluclus. However, is there a way of proving this without use L'Hopital's rule?</p>
|
Martin R
| 42,969 |
<p>L'Hopital's rule shows that the identity holds whenever the limit on the right exists. In your special case (<span class="math-container">$f$</span> increasing and convex) there is a simpler short proof:</p>
<p>First note that <span class="math-container">$A = \lim_{x\to -\infty}f(x)$</span> exists (as a real number or <span class="math-container">$-\infty$</span>) because <span class="math-container">$f$</span> is increasing.</p>
<p>For <span class="math-container">$x < c < x_0$</span> we have
<span class="math-container">$$
(x_0-x)\, f(x) \le \int_x^{x_0} f(y) \, dy \le (c-x)\, f(c) + (x_0-c)\, f(x_0) \, ,
$$</span>
where both inequalities hold because <span class="math-container">$f$</span> is increasing.</p>
<p>Dividing by <span class="math-container">$x$</span> (which is negative, so the inequalities reverse) and letting <span class="math-container">$x \to -\infty$</span> gives
<span class="math-container">$$
-f(c) \le \liminf_{x \to -\infty} \frac 1x\int_x^{x_0} f(y) \, dy \le \limsup_{x \to -\infty} \frac 1x\int_x^{x_0} f(y) \, dy \le -A \, .
$$</span>
Since <span class="math-container">$f(c) \to A$</span> as <span class="math-container">$c \to -\infty$</span>, squeezing shows that
<span class="math-container">$$
\lim_{x \to -\infty} \frac 1x\int_x^{x_0} f(y) \, dy = -A \, .
$$</span></p>
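<p>As a numeric sanity check of the identity (a Python sketch using the closed-form antiderivative; <span class="math-container">$f(y)=e^y$</span> is increasing and convex with <span class="math-container">$A=0$</span>, and <span class="math-container">$x_0=0$</span> is chosen for convenience):</p>

```python
import math

# With f(y) = exp(y) and x0 = 0 the integral is explicit:
# (1/x) * integral_x^0 exp(y) dy = (1 - exp(x)) / x, which should tend to -A = 0.
def scaled_integral(x):
    return (1.0 - math.exp(x)) / x

near = scaled_integral(-1e6)   # already very small
far = scaled_integral(-1e8)    # smaller still
```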
|
1,664,188 |
<p>I am trying to show for my homework that the theory of random graphs, RG, is not $\kappa$-stable for any $\kappa$; here $\kappa$-stable means that if $M\vDash RG$ and $A\subseteq |M|$ with $|A|\le\kappa$, then $|S_1(A)|\le\kappa$.</p>
<p>I tried to build a $\kappa$-tree with $2^{\kappa}$ branches of consistent formulas, but I get stuck at the limit ordinal stage, and so I manage to do that only for $\kappa =\omega$.</p>
<p>It must have something to do with the fact that there are $\kappa$ many finite subsets of $A\subseteq |M|$ with $|A|=\kappa$, and for each pair of such sets we can find a member of $|M|$ that is in relation with all members of one of them, and is not in relation with any member of the other set. But I need help to formalise it.</p>
<p>Thanks in advance.</p>
|
Empy2
| 81,790 |
<p>Most vectors are not eigenvectors, unless the matrix is a multiple of the identity.<br>
Pick a random $v$, calculate $w=Av$, and check whether the inner product of $v$ and $w$ satisfies $$(w\cdot v)^2=|w|^2|v|^2$$
If it does (so $v$ happens to be an eigenvector), pick another random $v$ and check again, until you have a good one.</p>
|
1,664,188 |
<p>I am trying to show for my homework that the theory of random graphs, RG, is not $\kappa$-stable for any $\kappa$; here $\kappa$-stable means that if $M\vDash RG$ and $A\subseteq |M|$ with $|A|\le\kappa$, then $|S_1(A)|\le\kappa$.</p>
<p>I tried to build a $\kappa$-tree with $2^{\kappa}$ branches of consistent formulas, but I get stuck at the limit ordinal stage, and so I manage to do that only for $\kappa =\omega$.</p>
<p>It must have something to do with the fact that there are $\kappa$ many finite subsets of $A\subseteq |M|$ with $|A|=\kappa$, and for each pair of such sets we can find a member of $|M|$ that is in relation with all members of one of them, and is not in relation with any member of the other set. But I need help to formalise it.</p>
<p>Thanks in advance.</p>
|
Community
| -1 |
<p>If a column of the matrix has a nonzero off-diagonal element, the corresponding standard vector can do, as certainly $$Ae_k=c_k\ne \lambda e_k.$$</p>
<p>If you cannot find such an element, the matrix is diagonal. Take the sum of the standard vectors corresponding to two diagonal elements of distinct values.</p>
<p>$$A(e_{k'}+e_{k''})=\lambda' e_{k'}+\lambda''e_{k''}\ne \lambda(e_{k'}+e_{k''})$$</p>
<p>If you cannot find two such elements, drop it.</p>
<hr>
<p>It is possible that some choices are better than others, like the one that maximizes the angle between the vector and its image, i.e. the $k$ attaining</p>
<p>$$\min_k\frac{|e_k\cdot Ae_k|}{\|Ae_k\|},$$ the ratio of the diagonal element over the norm of its column, but I have no guarantee.</p>
|
169,919 |
<blockquote>
<p>If $p$ is a prime, show that the product of the $\phi(p-1)$ primitive roots of $p$ is congruent modulo $p$ to $(-1)^{\phi(p-1)}$.</p>
</blockquote>
<p>I know that if $a$ is a primitive root of $p$, then $a^k$ is a primitive root of $p$ iff gcd$(k,p-1)=1$. And the sum of all those $k$'s is $\frac{1}{2}(p-1)\phi(p-1)$, but then I don't know how to use these $2$ facts to show the desired result.<br>
Please help.</p>
|
awllower
| 6,792 |
<p>If $a^k$ is a primitive root modulo $p$, then so is $a^{-k}$. Thus, if $p-1$ is even, then the sum of the integers between 1 and $p-1$ that are coprime with $p-1$ must be $\phi(p-1)(p-1)/2$, and hence the product of the $\phi(p-1)$ primitive roots modulo $p$ must be $a^{\phi(p-1)(p-1)/2}\equiv(-1)^{\phi(p-1)} \pmod p$. If $p-1$ is odd, then the result is trivial, as $p=2$.</p>
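<p>A brute-force verification of the claim for small primes (a Python sketch; <code>phi</code> and <code>mult_order</code> are naive helpers written just for this check):</p>

```python
from math import gcd

def phi(n):
    # naive Euler totient, fine for tiny n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def mult_order(g, p):
    # multiplicative order of g modulo p
    k, x = 1, g % p
    while x != 1:
        x = x * g % p
        k += 1
    return k

def claim_holds(p):
    roots = [g for g in range(1, p) if mult_order(g, p) == p - 1]
    prod = 1
    for g in roots:
        prod = prod * g % p
    return prod == (-1) ** phi(p - 1) % p

results = [claim_holds(p) for p in (3, 5, 7, 11, 13, 17, 19, 23)]
```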
|
2,319,341 |
<blockquote>
<p>If a rubber ball is dropped from a height of $1\,\mathrm{m}$ and continues to rebound to a height that is nine tenths of its previous fall, find the total distance in metres that it travels on falls only.</p>
</blockquote>
<h3>My Attempt:</h3>
<p>I tried if it could be solved using arithmetic progression for which the first term is $(a) = 1\,\mathrm{m}$ and the common difference is $(d) = \frac{9}{10}$. But I could not get any more information.</p>
|
Ananth Kamath
| 444,440 |
<p>The ball follows a geometric progression because, after each fall, it goes back up to $\frac 9{10}$ of the height of the previous fall. </p>
<p>That is, if the ball fell down from $x$ metres, the next fall would be from $\frac 9{10} x$, the next from $\frac 9{10} \cdot \frac 9{10} x$. </p>
<p>Since the initial height is $1$ metre and we only have to evaluate the distance the ball falls, the exercise reduces to the sum of a simple geometric progression, given by
$$ 1+ \frac 9{10} + \frac {9^2}{10^2} + \frac {9^3}{10^3} + \cdots$$</p>
<p>$$= \frac 1{1-\frac 9{10}}$$
$$= 10$$
The total distance covered by the ball is $10$ metres.</p>
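<p>The partial sums can be checked numerically (a small Python sketch):</p>

```python
# Partial sum of the geometric series 1 + 9/10 + (9/10)^2 + ...,
# compared with the closed form 1/(1 - 9/10) = 10.
total, term = 0.0, 1.0
for _ in range(500):
    total += term
    term *= 0.9
closed_form = 1.0 / (1.0 - 0.9)
```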
|
4,097,262 |
<p><a href="https://i.stack.imgur.com/E1UoR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E1UoR.png" alt="Cubic equation with a unit radius circle" /></a>
A cubic curve and a circle (unit radius) intersect at A, B, C, D. ABCD is a square. <em>Find the angle <span class="math-container">$\theta$</span>.</em></p>
<p>I tried:</p>
<ol>
<li><p><span class="math-container">$(0,0)$</span> is a solution so constant term is <span class="math-container">$0$</span></p>
</li>
<li><p>Substituting A(x,y) and C(-x,-y) and adding them gives coefficient of <span class="math-container">$x^2$</span> is 0.</p>
</li>
</ol>
<p>Then the cubic becomes f(x) = <span class="math-container">$ax^3+bx$</span>.</p>
<p>3. Substituting A and B and adding the two equations.</p>
<p>I found this interesting: <a href="https://math.stackexchange.com/questions/3680693/prove-that-from-n1-given-points-a-unique-polynomial-of-n-degree-will-pass">given $n+1$ points we can find a unique polynomial of degree $n$ through them</a></p>
<p>Also: can complex numbers be used here?</p>
<p><strong>Please note</strong>: I am not sure whether we can find the angle(integer) without knowing the coefficients of the cubic.</p>
<p><strong>EDIT</strong>: From the answers<br />
1. Putting A <span class="math-container">$(\cos\theta,\sin\theta)$</span> in f(x):
<span class="math-container">$a\cos^3\theta + b \cos\theta = \sin\theta$</span></p>
<p>2. Putting B <span class="math-container">$(-\sin\theta,\cos\theta)$</span> in f'(x):
<span class="math-container">$3a\sin^2\theta + b = \tan\theta$</span> [ as the circle has slope <span class="math-container">$\tan\theta$</span> at B]</p>
<p>Dividing eqn. <span class="math-container">$1$</span> by <span class="math-container">$\cos\theta$</span> and subtracting eqn. <span class="math-container">$2$</span> gives <span class="math-container">$3a\sin^2\theta = a\cos^2\theta$</span></p>
<p>So, <span class="math-container">$\sin^2\theta = \frac{1}{4}$</span></p>
<p><em>But I am getting a single value of <span class="math-container">$\theta$</span>, while an answer shows a plot of many cubics</em> -> because in my case <span class="math-container">$ABCD$</span> is a square.</p>
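<p>A numeric check of this computation (a Python sketch): at <span class="math-container">$\sin^2\theta=\frac14$</span>, the point condition for A and the slope condition at B become proportional, which is consistent with many cubics sharing the same <span class="math-container">$\theta$</span>.</p>

```python
import math

s = 0.5                  # sin(theta) = 1/2, i.e. theta = 30 degrees
c = math.sqrt(3) / 2     # cos(theta)

def g1(a, b):
    # A = (cos t, sin t) lies on f(x) = a x^3 + b x
    return a * c**3 + b * c - s

def g2(a, b):
    # slope condition f'(-sin t) = tan t at B
    return 3 * a * s**2 + b - s / c

# g1 equals cos(theta) * g2 identically in (a, b) when sin^2(theta) = 1/4,
# so theta is pinned down but the cubic is not unique
diffs = [abs(g1(a, b) - c * g2(a, b))
         for a in (-2.0, 0.3, 5.0) for b in (-1.0, 0.0, 4.0)]
```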
|
G Cab
| 317,234 |
<p><em>HINT</em></p>
<p>The cubic must clearly be of type
<span class="math-container">$$
y = bx\left( {x^{\,2} - a^{\,2} } \right)
$$</span></p>
<p>In polar coordinates
<span class="math-container">$$
r\sin \theta = br\cos \theta \left( {r^{\,2} \cos ^{\,2} \theta - a^{\,2} } \right)
$$</span>
i.e.
<span class="math-container">$$
0 = r\left( {br^{\,2} \cos ^{\,3} \theta - \left( {a^{\,2} b\cos \theta + \sin \theta } \right)} \right)
$$</span>
and excluding the origin
<span class="math-container">$$
0 = br^{\,2} \cos ^{\,3} \theta - c\cos \left( {\theta + \beta } \right)
$$</span>
where both <span class="math-container">$b$</span> and <span class="math-container">$c$</span> can be taken to be positive.</p>
<p>So
<span class="math-container">$$
r = \sqrt {{{c\cos \left( {\theta + \beta } \right)} \over {b\cos ^{\,3} \theta }}}
$$</span></p>
<p>Then <span class="math-container">$D$</span> is a local max for <span class="math-container">$r$</span>, and you shall impose that the same <span class="math-container">$r_{max}$</span> occurs again <span class="math-container">$90^{\circ}$</span> later.</p>
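<p>The polar identity can be checked numerically (a Python sketch with the particular choice <span class="math-container">$a=b=1$</span>, for which matching <span class="math-container">$c\cos(\theta+\beta)=a^2b\cos\theta+\sin\theta$</span> gives <span class="math-container">$c=\sqrt2$</span>, <span class="math-container">$\beta=-\pi/4$</span>):</p>

```python
import math

b_coef, a_coef = 1.0, 1.0                    # cubic y = b x (x^2 - a^2)
c = math.hypot(a_coef**2 * b_coef, 1.0)      # c cos(beta) = a^2 b, c sin(beta) = -1
beta = math.atan2(-1.0, a_coef**2 * b_coef)

def residual(x):
    # any point (x, y) on the cubic should satisfy
    # b r^2 cos^3(theta) = c cos(theta + beta)
    y = b_coef * x * (x**2 - a_coef**2)
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    return abs(b_coef * r**2 * math.cos(theta)**3 - c * math.cos(theta + beta))

residuals = [residual(x) for x in (0.5, 2.0, -1.7, 3.3)]
```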
|
545,728 |
<p>So I have $X \sim \text{Geom}(p)$ and the probability mass function is:</p>
<p>$$p(1-p)^{x-1}$$</p>
<p>From the definition that:</p>
<p>$$\sum_{n=1}^\infty ns^{n-1} = \frac {1}{(1-s)^2}$$</p>
<p>How would I show that the $E(X)=\frac 1p$</p>
|
mert
| 98,278 |
<p>\begin{eqnarray}
E(X)&=&\sum_{x=1}^\infty x p(1-p)^{x-1}\\
&=&p\sum_{x=1}^\infty x(1-p)^{x-1}\\
&=&p\sum_{x=1}^\infty -\frac{d}{dp}(1-p)^{x}\\
&=&-p\left[\frac{d}{dp} \sum_{x=1}^\infty (1-p)^x\right]\\
&=&-p\cdot \frac{d}{dp}\frac{1-p}{1-(1-p)} \\
&=&-p\cdot \frac{d}{dp}\frac{1-p}{p} \\
&=&-p\cdot \frac{-1}{p^2} \\
&=& \frac{1}{p}\\
\end{eqnarray}</p>
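<p>A numerical check of the result (a Python sketch; the geometric tail beyond 2000 terms is negligible for <span class="math-container">$p=0.3$</span>):</p>

```python
# Truncated series for E(X) = sum x * p * (1-p)^(x-1), compared with 1/p.
p = 0.3
expectation = sum(x * p * (1.0 - p) ** (x - 1) for x in range(1, 2000))
```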
|
2,780,832 |
<p>Let's define: </p>
<p>$$\sin(z) = \frac{\exp(iz) - \exp(-iz)}{2i}$$
$$\cos(z) = \frac{\exp(iz) + \exp(-iz)}{2}$$</p>
<blockquote>
<p>We are to prove that
$$\sin(z+w)=\sin(w) \cos(z) + \sin(z)\cos(w), \forall_{z,w \in \mathbb{C}}$$
using only the following statement: $\exp(z+w) = \exp(w)\exp(z)$.</p>
</blockquote>
<p>I managed only to show that:
$$\sin(z + w) = \frac{\exp(iz)\exp(iw)}{2i} - \frac{\exp(-iz)\exp(-iw)}{2i}.$$<br>
Where can I go from here?</p>
|
Ennar
| 122,131 |
<p>It is easier to start from $\sin z\cos w + \cos z\sin w$, but you can also directly continue from what you have.</p>
<p>It is straightforward to check $e^{iz} = \cos z + i\sin z$ directly from the definition of $\cos$ and $\sin$. Also, quite easily seen, $\cos$ is even, while $\sin$ is odd.</p>
<p>So, from what you have:</p>
<p>\begin{align}\sin(z + w) &= \frac{\exp(iz)\exp(iw)}{2i} - \frac{\exp(-iz)\exp(-iw)}{2i} \\
&= \frac 1{2i}( (\cos z + i\sin z)(\cos w + i\sin w) - (\cos z - i\sin z)(\cos w - i\sin w) )\\
&= \sin z\cos w + \cos z\sin w.\end{align}</p>
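<p>A numerical check of the identity straight from the exponential definitions (a Python sketch using <code>cmath</code>):</p>

```python
import cmath

def sin_c(z):
    # sin(z) from the definition in the question
    return (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j

def cos_c(z):
    # cos(z) from the definition in the question
    return (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2

pairs = [(1 + 2j, -0.5 + 0.3j), (2.5j, 3.0), (-1 - 1j, 1 + 1j)]
errors = [abs(sin_c(z + w) - (sin_c(z) * cos_c(w) + cos_c(z) * sin_c(w)))
          for z, w in pairs]
```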
|
2,174,508 |
<p>We have a box in 3D space defined by its position, size and rotation. How can I find the smallest box that contains that box, but has a new rotation?
I know this is a really bad description of the problem; I just can't describe it any better, as I'm really bad at vector maths. I tried to draw a <a href="https://i.stack.imgur.com/ND7Cv.png" rel="nofollow noreferrer">little scene</a> in blender to clarify what I mean. Here the white box is the original, and the red one is the one I want. The lines are the forward vectors of these objects. Sorry for the terrible formulation and thanks for reading :)</p>
|
Community
| -1 |
<p>A 3D rotation is usually expressed by a $3\times3$ matrix, such as that obtained via the Euler angles. See <a href="https://en.wikipedia.org/wiki/Rotation_matrix#In_three_dimensions" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Rotation_matrix#In_three_dimensions</a>.</p>
<p>When you have such a matrix, you can rotate any point around the origin, by a matrix/vector product.</p>
<p>The $8$ vertices of your box will have coordinates $(\pm a,\pm b,\pm c)$. Transform them using the matrix. Then the new bounding box will be delimited by the minimum and maximum $x, y$ and $z$ values among those you just computed.</p>
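<p>A minimal sketch of this recipe in Python (the rotation about the $z$-axis and the half-sizes are arbitrary example inputs, not from the question):</p>

```python
import math
from itertools import product

def rot_z(t):
    # one concrete 3x3 rotation matrix: rotation by t about the z-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def bounding_box(half_sizes, m):
    # rotate the 8 corners (+/-a, +/-b, +/-c) and take per-axis min/max
    corners = [[sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]
               for v in product(*[(-h, h) for h in half_sizes])]
    return [(min(p[i] for p in corners), max(p[i] for p in corners))
            for i in range(3)]

# a 2x4x6 box rotated 90 degrees about z swaps its x and y extents
box = bounding_box((1.0, 2.0, 3.0), rot_z(math.pi / 2))
```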
|
2,620,700 |
<p>I want to prove or disprove that the quotient of ($\mathbb{R},\mathcal{E}_1$) by the equivalence relation $x\mathcal{R}y \Leftrightarrow \exists a \neq 0 : y = ax$ is homeomorphic to $(\{0,1\}, \tau)$, where $\tau$ is the discrete topology. </p>
<p>To me it seems that there are two equivalence classes: every nonzero point is equivalent to every other nonzero point, and 0 is equivalent only to 0. It then seems plausible that they are homeomorphic. The function I tried is this one: f(x) = 0 if x is 0, and 1 in every other case. Does that work?</p>
<p>Thank you!</p>
<p>EDIT: $\mathcal{E}_1$ is the euclidean topology. The question is: Is the quotient space homeomorphic to the space with two elements with discrete topology?</p>
|
amrsa
| 303,170 |
<p>The equivalence relation has only two equivalence classes, as you pointed out:
$\{0\}$ and $\mathbb{R} \setminus \{0\}$. </p>
<p>For the quotient space to be homeomorphic to a two-element discrete space, it is necessary that $\{0\}$ is an open set in that quotient.<br>
The open sets of the quotient are those whose inverse image by the map that sends the set to its quotient are open in the original space (see <a href="https://en.wikipedia.org/wiki/Quotient_space_(topology)" rel="nofollow noreferrer">Wikipedia article</a>).<br>
Since $\{0\}$ is an equivalence class, it is its own inverse image.<br>
But $\{0\}$ is not open in $\mathbb{R}$, with euclidean topology.<br>
Hence they are not homeomorphic.</p>
|
1,181,356 |
<p>Can a complex-valued function exist such that</p>
<p>$$\lim_{x\to x_0}f(x)=0$$</p>
<p>but</p>
<p>$$\lim_{x\to x_0}\frac 1{f(x)}=1$$?</p>
<p>What about</p>
<p>$$\lim_{x\to x_0}\frac 1{Re(f(x))}=1$$?</p>
|
Oria Gruber
| 76,802 |
<p>Such a function does not exist because of an elementary theorem that states that if $\lim_{x \to x_0} f(x)$ and $\lim_{x \to x_0} g(x)$ both exist, then $\lim_{x \to x_0} f(x)g(x)=\lim_{x \to x_0} f(x)\lim_{x \to x_0} g(x)$. Here $f(x)\cdot\frac{1}{f(x)}=1$ for all $x$, so the product of the two limits would have to be $0\cdot 1=0\neq 1$, a contradiction.</p>
|
147,361 |
<p>I'm new to Mathematica. I want to create operators $D^{(f)}=\partial_x+f'-\partial^2_x$ and $D^{(g)}=\partial_x+g'-\partial^2_x$, put them into matrix elements, and then multiply by a vector whose components are functions of $x$, say $u(x), v(x)$. For example $$\begin{pmatrix}D^{(f)} & D^{(g)}\\ D^{(g)} & D^{(f)} \end{pmatrix} \begin{pmatrix} u(x) \\ v(x) \end{pmatrix}$$</p>
<p>I saw the posts: <a href="https://mathematica.stackexchange.com/questions/69149/computing-with-matrix-differential-operators">Computing with matrix differential operators</a>, <a href="https://mathematica.stackexchange.com/questions/120324/how-to-do-matrix-operation-if-the-first-matrix-is-an-operator">How to do matrix operation if the first matrix is an operator?</a>; but I'm very new to this.</p>
<p>My problem is that I have trouble with the matrix products because the entries are operators.</p>
<p>Thanks!</p>
<p>EDIT: $f=f(x,y,z)$ and $g=g(x,y,z)$, both of them are function of a vector $(x,y,z)$, but we can neglect the components $y,z$ and consider only the $x$ part</p>
|
LouisB
| 22,158 |
<p>We can start by defining our two differential operators and putting them into a matrix, like this:</p>
<pre><code>ClearAll["Global`*"]
df[w_] := D[w, x] + f'[x] w - D[w, x, x]
dg[w_] := D[w, x] + g'[x] w - D[w, x, x]
m = {{df, dg}, {dg, df}};
</code></pre>
<p>Now we need a way for the matrix operator to act on the vector. This is not plain multiplication. One way to get the matrix operator to act on a vector is to define a function to do it, like this:</p>
<pre><code>operate[matrix_, column_] :=
Table[Inner[#1[#2] &, μ, column, Plus], {μ, matrix}]
</code></pre>
<p>The above function is basically what those other posts were saying.
Now we can use the function to apply the matrix operator to our vector.</p>
<pre><code>result = operate[m, {u[x], v[x]}];
result // MatrixForm
(*
{u[x] f'[x] + v[x] g'[x] + u'[x] + v'[x] - u''[x] - v''[x],
v[x] f'[x] + u[x] g'[x] + u'[x] + v'[x] - u''[x] - v''[x]} *)
</code></pre>
<p>We could have defined our functions <code>f[x]</code> and <code>g[x]</code> at any point. For example, we can do it now, as</p>
<pre><code>f[x] := Sin[x]
result
(*
{Cos[x] u[x] + v[x] g'[x] + u'[x] + v'[x] - u''[x] - v''[x],
Cos[x] v[x] + u[x] g'[x] + u'[x] + v'[x] - u''[x] - v''[x]}
*)
</code></pre>
|
1,197,875 |
<p>I'm just starting partials and don't understand this at all. I'm told to hold $y$ "constant", so I treat $y$ like just some number and take the derivative of $\frac{1}{x}$, which I hope I'm correct in saying is $-\frac{1}{x^2}$, then multiply by $y$, getting $-\frac{y}{x^2}$.</p>
<p>But apparently the correct answer is $\frac{1}{x}$. What am I missing?</p>
|
Jonathan Hebert
| 120,932 |
<p>"With respect to $y$" means that you will be holding $x$ constant, not $y$. $y$ is the variable we are differentiating with respect to, so it is <em>not</em> to be treated as a constant!</p>
|
84,982 |
<p>I am a new professor in Mathematics and I am running an independent study on Diophantine equations with a student of mine. Online I have found a wealth of very helpful expository notes written by other professors, and I would like to use them for guided reading. <strong>I am wondering whether it is customary to ask permission of the author before using his or her online notes for my own reading course.</strong> </p>
<p>Also, if anyone has suggestions for good sources Diophantine Equations please feel free to enlighten me.</p>
|
paul garrett
| 15,629 |
<p>Echoing Gerhard Paseman's comment more-or-less: it certainly never hurts to ask permission, or at least to explicitly express interest in use of on-line resources. Some on-line resources have indications of restrictions or lack thereof (e.g., Creative Commons License). In all cases, I think one should <em>acknowledge</em> on-line resources just as one would acknowledge traditional paper-media resources. Credit where credit is due. </p>
<p>Another point, more practical, is that the on-line versions may get updated, corrected, and revised, while downloaded (or printed) versions will certainly not. Thus, there is the "just in time" ("JIT") mechanism in play for on-line resources, namely, not to download (or print) until the last possible moment.</p>
<p>... and echoing anon, as well.</p>
<p>Edit: The concern about depending upon on-line work that later proves flawed should also be directed at "refereed journal" publications, because many journals nowadays explicitly tell referees that they are not responsible for determining correctness! It is supposedly the author's responsibility! Thus, if there's no scandalous outcome, perhaps no one has seriously attempted to corroborate. </p>
<p>And, yes, one can argue that things are put on-line to be used. But, still, I think credit should be given, just as with physical books and papers. E.g., just because I've purchased a book doesn't mean I don't owe an intellectual debt to the author, and to the people cited in the book. I think it is completely comparable to say that one should not, by failing to say anything to the contrary, give the impression that ideas one gleaned from someone else's writing are one's own. Being on-line or not does not suspend the rules of responsible, ethical scholarly conduct.</p>
<p>Other discussions in this and other forums have attempted to distinguish "logical dependence" from "acknowledgement of prior art". I have known some otherwise-reputable mathematicians who have explicitly said that since they deliberately avoid looking at other peoples' officially published work, they never have to mention it, perhaps despite having heard informal discussions of it. Or, if there's no logical dependence on someone else's stuff, it doesn't have to be mentioned, supposedly. I realize others will disagree, but I think "prior art" deserves as much mention as anything else. All the other things can be dodged, if one tries hard enough! I think we should respect prior art, as well as noting technical-logical dependence, and I do not think that on-line or not makes much difference in this.</p>
|
234,063 |
<p>There are some answers on how to get a smooth square wave function. But I would like to have a smooth boxcar function or rectangle function with 2 different widths: <code>wup</code> and <code>wdown</code>.</p>
<p>One solution is the Fourier Transform, but I prefer having an approximation with a smoothness factor.</p>
<p>Thank you very much.</p>
<p>Here is the non-smooth version. I would like to smooth it.
<strong>Note: this is not a square wave function (for the square wave the smooth version is known; see <a href="https://mathematica.stackexchange.com/questions/38293/make-a-differentiable-smooth-sawtooth-waveform">this solution</a>).</strong> This is the boxcar function with 2 widths: the up width is 2, and the down width is 3.</p>
<p><a href="https://i.stack.imgur.com/BHJNK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BHJNK.jpg" alt="enter image description here" /></a></p>
<pre><code>pulse[h_, wup_, wdown_, t_] :=
With[{period = wup + wdown, duty = wup/(wup + wdown)},
h*UnitBox[Mod[t/period, 1.]/(2. duty)]]
Plot[{pulse[1, 2, 3, t]}, {t, 0, 10}, Exclusions -> None]
</code></pre>
<hr />
<p><strong>Square Wave Pulse with Uniform Width:</strong></p>
<p>I can do a smooth version only for a square wave function, with a uniform width, but not the rectangle function.</p>
<pre><code>smoothPulse[ePulse_, wup_, wdown_, smoothness_, t_] :=
ePulse/2 + ePulse/\[Pi]*ArcTan[Sin[\[Pi] t/wup]*smoothness]
</code></pre>
<p><a href="https://i.stack.imgur.com/F6pX4.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F6pX4.jpg" alt="enter image description here" /></a></p>
<hr />
<p><strong>Rectangle Pulse with 2 differents widths:</strong></p>
<p>I don't know the smooth version. I tried to build a smooth UnitBox version.</p>
<pre><code>smoothunitbox[t_, sharpness_] :=
Piecewise[{{Tanh[sharpness*(2 t + 1)] + 1,
t < 0}, {1 - Tanh[sharpness*(2 t - 1)], t >= 0}}]/2
smoothRectangle[h_, wup_, wdown_, smoothness_, t_] :=
With[{period = wup + wdown, duty = wup/(wup + wdown)},
h*smoothunitbox[Mod[t/period, 1.]/(2. duty), smoothness]]
Plot[smoothRectangle[1, 2, 4, 20, t], {t, 0, 10}, Exclusions -> None]
</code></pre>
<p>But the smoothing is not good. How can I improve it?</p>
<p><a href="https://i.stack.imgur.com/Oa0qH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oa0qH.jpg" alt="enter image description here" /></a></p>
|
cvgmt
| 72,111 |
<p>Use a <code>mollifier</code>, as in mathematical analysis. It also works for <code>Piecewise</code> functions.</p>
<p><strong>Reply to the comment</strong></p>
<pre><code>a = 2;
b = 3;
S[x_ /; 0 <= x <= a] := 1;
S[x_ /; a <= x <= a + b] := 0;
S[x_ /; x >= a + b] := S[x - (a + b)];
S[x_ /; x <= a + b] := S[x + a + b];
ρ = 1/NIntegrate[Exp[-1/(1 - x^2)], {x, -1, 1}];
φ[x_, ϵ_] = (ρ/ϵ)* Piecewise[{{Exp[-ϵ^2/(ϵ^2 - x^2)], -ϵ < x < ϵ}}];
Plot[S[x], {x, -2 (a + b), 2 (a + b)}]
Plot[NIntegrate[φ[t - x, .3]*S[x], {x, -2 (a + b),
2 (a + b)}], {t, -2 (a + b), 2 (a + b)}]
</code></pre>
<p><a href="https://i.stack.imgur.com/RVZvc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RVZvc.png" alt="enter image description here" /></a></p>
<p><strong>Updated</strong></p>
<pre><code>f[x_] = Piecewise[{{1, 0 <= x <= 2}}];
s[x_] = f[Mod[x, 4, -2]];
ρ = 1/NIntegrate[Exp[-1/(1 - x^2)], {x, -1, 1}];
φ[x_, ϵ_] = (ρ/ϵ)*Piecewise[{{Exp[-ϵ^2/(ϵ^2 - x^2)], -ϵ < x < ϵ}}];
Plot[s[x], {x, -4, 4}]
Plot[NIntegrate[φ[t - x, .3]*s[x], {x, -4, 4}], {t, -4, 4}]
</code></pre>
<p><a href="https://i.stack.imgur.com/7Niq1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Niq1.png" alt="enter image description here" /></a></p>
<p><strong>Original</strong></p>
<p>At this point I did not know how to smooth the rectangle, so I only tried to smooth the <code>Abs</code> function.</p>
<pre><code>g[x_] = Piecewise[{{x, 0 <= x <= 1}, {-x, -1 <= x <= 0}}];
h[x_] = g[Mod[x, 2, -1]];
Plot[h[x], {x, -4, 4}]
ρ = 1/NIntegrate[Exp[-1/(1 - x^2)], {x, -1, 1}];
φ[x_, ϵ_] = (ρ/ϵ)*Piecewise[{{Exp[-ϵ^2/(ϵ^2 - x^2)], -ϵ < x < ϵ}}];
Plot[NIntegrate[φ[t - x, 1/8]*h[x], {x, -4, 4}], {t, -4,4}]
</code></pre>
<p><a href="https://i.stack.imgur.com/6bKNg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6bKNg.png" alt="enter image description here" /></a></p>
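<p>A plain-Python analogue of the mollifier construction (a sketch, not part of the Mathematica answer, using a midpoint-rule convolution; the widths 2 and 3 and the smoothing radius <code>eps = 0.3</code> match the example above):</p>

```python
import math

def boxcar(x, wup=2.0, wdown=3.0):
    # the target rectangle wave: 1 for the first wup of each period, else 0
    return 1.0 if (x % (wup + wdown)) < wup else 0.0

def bump(x, eps):
    # standard compactly supported mollifier kernel (unnormalized)
    if abs(x) >= eps:
        return 0.0
    return math.exp(-eps**2 / (eps**2 - x**2))

def mollify(f, t, eps=0.3, n=2000):
    # midpoint-rule approximation of the convolution (phi_eps * f)(t),
    # normalizing the kernel on the same grid
    h = 2 * eps / n
    xs = [t - eps + (i + 0.5) * h for i in range(n)]
    weight = sum(bump(t - x, eps) for x in xs) * h
    return sum(bump(t - x, eps) * f(x) for x in xs) * h / weight

mid_up = mollify(boxcar, 1.0)     # deep inside the "up" plateau -> 1
mid_down = mollify(boxcar, 3.5)   # deep inside the "down" plateau -> 0
edge = mollify(boxcar, 2.0)       # at the jump -> roughly halfway
```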
|
261,156 |
<p>Well, I am trying to write a code that makes the number:</p>
<p><span class="math-container">$$123456\dots n\tag1$$</span></p>
<p>So, when <span class="math-container">$n=10$</span> we get:</p>
<p><span class="math-container">$$12345678910$$</span></p>
<p>And when <span class="math-container">$n=15$</span> we get:</p>
<p><span class="math-container">$$123456789101112131415$$</span></p>
<p>And when <span class="math-container">$n=4$</span> we get:</p>
<p><span class="math-container">$$1234$$</span></p>
|
David Reiss
| 84,332 |
<pre><code>ToExpression[
StringJoin[
ToString /@ Range[15]
]
]
</code></pre>
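<p>A Python analogue of the same string-join idea (a sketch, not from the answer):</p>

```python
def concat_number(n):
    # join the decimal strings of 1..n and parse the result back to an integer
    return int("".join(str(k) for k in range(1, n + 1)))
```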
|
1,487,654 |
<p>Q. Find the number of ways of inserting r dollars using 1 dollar, 2 dollar and 5 dollar tokens, when order doesn't matter and when order does matter.
Ans.
When order doesn't matter:
$(1+x+x^2+x^3+..)(1+x^2+x^4+..)(1+x^5+x^{10}+..)$
The answer is the coefficient of $x^r$ in the above generating function.
I understand this clearly, but I don't understand the process when order matters. Could somebody please explain how (and why) to approach the case when order matters?</p>
|
Marconius
| 232,988 |
<p>When order does matter, the contents of the $k$-th bracket will be the possibilities for the $k$-th token to be inserted, which are $1,2$ or $5$ dollars regardless of $k$.</p>
<p>So a first guess for the generating function that respects order is</p>
<p>$$g(x)=(x+x^2+x^5)\times(x+x^2+x^5)\times(x+x^2+x^5)\times\cdots$$</p>
<p>but this has the problem that the degree of any term in the product is infinite.</p>
<p>But we know that at least $4$ and at most $19$ tokens can be used to make up the amount.</p>
<p>So</p>
<p>$$\begin{align}
g(x)&=\sum_{k=4}^{19}{(x+x^2+x^5)^k} \\
&=(x+x^2+x^5)^4\cdot\sum_{k=0}^{15}{(x+x^2+x^5)^k} \\[2ex]
&=(x+x^2+x^5)^4\cdot\frac{(x+x^2+x^5)^{16}-1}{x+x^2+x^5-1}
\end{align}$$</p>
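<p>A sanity check of the order-matters generating function (a Python sketch, my own cross-check: the coefficient of $x^r$ obtained by expanding the powers of $x+x^2+x^5$, truncated at degree $r$, is compared against a direct count of ordered compositions):</p>

```python
def ordered_ways(r):
    # number of ordered ways to pay r with parts 1, 2, 5
    a = [1] + [0] * r
    for n in range(1, r + 1):
        a[n] = sum(a[n - c] for c in (1, 2, 5) if n >= c)
    return a[r]

def coeff_of_x_r(r):
    # expand sum_{k=1}^{r} (x + x^2 + x^5)^k, truncated at degree r
    total, poly = 0, [1] + [0] * r          # poly = (x+x^2+x^5)^0
    for _ in range(r):
        new = [0] * (r + 1)
        for d, c in enumerate(poly):
            for part in (1, 2, 5):
                if c and d + part <= r:
                    new[d + part] += c
        poly = new
        total += poly[r]
    return total

checks = [(r, ordered_ways(r), coeff_of_x_r(r)) for r in range(1, 13)]
```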
|
1,487,654 |
<p>Q. Find the number of ways of inserting r dollars using 1 dollar, 2 dollar and 5 dollar tokens, when order doesn't matter and when order does matter.
Ans.
When order doesn't matter:
$(1+x+x^2+x^3+..)(1+x^2+x^4+..)(1+x^5+x^{10}+..)$
The answer is the coefficient of $x^r$ in the above generating function.
I understand this clearly, but I don't understand the process when order matters. Could somebody please explain how (and why) to approach the case when order matters?</p>
|
robjohn
| 13,854 |
<p><strong>Order Does Not Matter</strong></p>
<p>If the order doesn't matter, then the coefficient of $x^n$ in
$$
\begin{align}
&\overbrace{\left(1+x+x^2+x^3+\dots\right)}^{\text{exponent = number of $\$1$}}\overbrace{\left(1+x^2+x^4+x^6+\dots\right)}^{\text{exponent = 2 $\times$ number of $\$2$}}\overbrace{\left(1+x^5+x^{10}+x^{15}+\dots\right)}^{\text{exponent = 5 $\times$ number of $\$5$}}\\
&=\bbox[5px,border:2px solid #C0A000]{\frac1{1-x}\frac1{1-x^2}\frac1{1-x^5}}\\
&\small=1+x+2x^2+2x^3+3x^4+4x^5+5x^6+6x^7+7x^8+8x^9+10x^{10}+11x^{11}+13x^{12}\\
&\small\,+14x^{13}+16x^{14}+18x^{15}+20x^{16}+22x^{17}+24x^{18}+26x^{19}+29x^{20}+\dots
\end{align}
$$
is the number of ways to choose ones, twos, and fives that sum to $n$.</p>
<p>For example, suppose $n=12$. The case of $3$ $\$1$ tokens and $2$ $\$2$ tokens and $1$ $\$5$ token is counted by the $x^3$ term from the first sum times the $x^4$ term from the second sum times the $x^5$ term from the third sum.</p>
<p>Since the denominator of the generating function is $1-x-x^2+x^3-x^5+x^6+x^7-x^8$, the recursion for the coefficients is
$$
a_n=a_{n-1}+a_{n-2}-a_{n-3}+a_{n-5}-a_{n-6}-a_{n-7}+a_{n-8}
$$</p>
<hr>
<p><strong>Order Matters</strong></p>
<p>If the order matters, break down the generating function into the sum of the generating functions for a given number of tokens:
$$
\begin{align}
&1+\overbrace{\left(x+x^2+x^5\right)\!\vphantom{\left(x^2\right)^2}}^{\text{one token}}+\overbrace{\left(x+x^2+x^5\right)^2}^{\text{two tokens}}+\overbrace{\left(x+x^2+x^5\right)^3}^{\text{three tokens}}+\dots\\
&=\bbox[5px,border:2px solid #C0A000]{\frac1{1-\left(x+x^2+x^5\right)}}\\
&=1+x+2x^2+3x^3+5x^4+9x^5+15x^6+26x^7+44x^8+75x^9\\
&\,+128x^{10}+218x^{11}+372x^{12}+634x^{13}+1081x^{14}+1843x^{15}+\dots
\end{align}
$$
Under the six tokens term, we would have a product like
$$
\small\left(x+x^2+x^5\right)\left(x+x^2+x^5\right)\left(x+x^2+x^5\right)\left(x+x^2+x^5\right)\left(x+x^2+x^5\right)\left(x+x^2+x^5\right)
$$
There is now a term to specify each possible ordering of the tokens. Using the example above, one such term for $3$ $\$1$ tokens and $2$ $\$2$ tokens and $1$ $\$5$ token would be
$$
\small\overbrace{\left(x\color{#C0C0C0}{+x^2+x^5}\right)}^{\$1}\overbrace{\left(\color{#C0C0C0}{x+}x^2\color{#C0C0C0}{+x^5}\right)}^{\$2}\overbrace{\left(\color{#C0C0C0}{x+x^2+}x^5\right)}^{\$5}\overbrace{\left(x\color{#C0C0C0}{+x^2+x^5}\right)}^{\$1}\overbrace{\left(x\color{#C0C0C0}{+x^2+x^5}\right)}^{\$1}\overbrace{\left(\color{#C0C0C0}{x+}x^2\color{#C0C0C0}{+x^5}\right)}^{\$2}
$$
Since the denominator of the generating function is $1-x-x^2-x^5$, the recursion for the coefficients is
$$
a_n=a_{n-1}+a_{n-2}+a_{n-5}
$$</p>
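<p>Both expansions and both recursions can be cross-checked with short dynamic programs (a Python sketch, my own verification):</p>

```python
def unordered(nmax):
    # coin-partition counts: coefficients of 1/((1-x)(1-x^2)(1-x^5))
    a = [1] + [0] * nmax
    for coin in (1, 2, 5):
        for m in range(coin, nmax + 1):
            a[m] += a[m - coin]
    return a

def ordered(nmax):
    # composition counts: coefficients of 1/(1 - x - x^2 - x^5)
    a = [1] + [0] * nmax
    for m in range(1, nmax + 1):
        a[m] = sum(a[m - c] for c in (1, 2, 5) if m >= c)
    return a

u, o = unordered(20), ordered(15)
```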
|
108,277 |
<p>If Mochizuki's proof of abc is correct, why would this provide a new proof of FLT?</p>
<p>Edit: In proof of asymptotic FLT, does Mochizuki claim a specific value of n and if so what is this value?</p>
|
Igor Rivin
| 11,142 |
<p>See <a href="http://mathworld.wolfram.com/abcConjecture.html" rel="nofollow">http://mathworld.wolfram.com/abcConjecture.html</a></p>
|
3,657,751 |
<p>Consider the series <span class="math-container">$$\sum_{n=1}^{\infty}\frac{(-1)^{\frac{n(n+1)}{2}+1}}{n}=1+\dfrac12-\dfrac13-\dfrac14+\dfrac15+\dfrac16-\cdots.$$</span> This is clearly not absolutely convergent. On the other hand, the obvious choice, the alternating series test, does not work here. It seems that the partial sum sequence is bounded, but it is not monotone.</p>
<p>How can we prove that this series converges? And where does it converge to?</p>
|
Sil
| 290,240 |
<p>The sequence of partial sums is <span class="math-container">$s_n=s_{n-1}+\frac{(-1)^{T_n+1}}{n}$</span>, and if we can partition the sequence into two subsequences which converge to the same value, it implies the original sequence converges as well. This is usually done by looking at subsequences made by odd and even indices, see for example <a href="https://math.stackexchange.com/questions/589167">If $\{x_{2m}\}$ and $\{x_{2m-1}\}$ converge to the same limit, does $\{x_m\}$ converge?</a>. Notice that we don't need to care about term re-arrangements since we will be working only with finite partial sums.</p>
<p>So we consider two subsequences by splitting <span class="math-container">$s_n$</span> into the even <span class="math-container">$s_{2n}$</span> and odd <span class="math-container">$s_{2n-1}$</span> subsequences. Since the parity of the triangular numbers is periodic with period <span class="math-container">$4$</span>, we can write these explicitly in terms of <span class="math-container">$s_{4n},s_{4n-2}$</span> and <span class="math-container">$s_{4n-1},s_{4n-3}$</span>. For example, for the even-index subsequence, say <span class="math-container">$b_{n}$</span> with <span class="math-container">$b_{2n}=s_{4n}$</span> and <span class="math-container">$b_{2n-1}=s_{4n-2}$</span>, applying the recurrence above we can see that
<span class="math-container">$$
b_{2n}=b_{2n-1}-\frac{1}{4n-1}-\frac{1}{4n},\qquad
b_{2n-1}=b_{2n-2}+\frac{1}{4n-3}+\frac{1}{4n-2}
$$</span>
which together with <span class="math-container">$b_1=s_2=\frac{3}{2}$</span> describes the sequence completely. But then <span class="math-container">$(b_n)$</span> is precisely the sequence of partial sums of
<span class="math-container">$$\sum_{k=1}^{\infty}(-1)^{k+1} \left(\frac{1}{2k-1}+\frac{1}{2k}\right).$$</span>
Similarly for the sequence <span class="math-container">$a_n$</span> of odd terms <span class="math-container">$a_{2n}=s_{4n-1}$</span>, <span class="math-container">$a_{2n-1}=s_{4n-3}$</span>, we arrive at</p>
<p><span class="math-container">$$
a_{2n}=a_{2n-1}+\frac{1}{4n-2}-\frac{1}{4n-1},\qquad a_{2n-1}=a_{2n-2}-\frac{1}{4n-4}+\frac{1}{4n-3}
$$</span>
with <span class="math-container">$a_1=s_1=1$</span>, so that <span class="math-container">$(a_n)$</span> is the sequence of partial sums of
<span class="math-container">$$1+\sum_{k=2}^{\infty}(-1)^{k} \left(\frac{1}{2k-2}-\frac{1}{2k-1}\right).$$</span>
Now both sequences clearly converge by the alternating series test, so we are left with showing they converge to the same value. But since both of the sums are themselves just added <span class="math-container">$\frac{\pi}{4}=\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}$</span> (Leibniz formula) and taylor series for natural logarithm <span class="math-container">$\ln(1+x)=\sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k}x^k$</span> for <span class="math-container">$x=1$</span>. So, as already shown in the other answer, indeed
<span class="math-container">$$
\lim_{n \to \infty} s_n = \lim_{n \to \infty} a_n = \lim_{n \to \infty} b_n = \frac{\pi}{4}+\frac{\ln 2}{2}.
$$</span></p>
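<p>As a quick numerical sanity check (outside the proof proper), here is a small Python sketch — the helper names <code>sign</code> and <code>partial_sum</code> are mine — comparing the partial sums of the series against the claimed limit <span class="math-container">$\frac{\pi}{4}+\frac{\ln 2}{2}$</span>:</p>

```python
import math

def sign(n):
    # Sign of the n-th term, (-1)^(n(n+1)/2 + 1): the pattern + + - - + + - - ...
    return 1 if (n * (n + 1) // 2) % 2 == 1 else -1

def partial_sum(N):
    # s_N = sum of the first N terms of the series
    return sum(sign(n) / n for n in range(1, N + 1))

target = math.pi / 4 + math.log(2) / 2
print(partial_sum(10**6), target)
```

<p>With a million terms the two printed values agree to several decimal places, consistent with the grouped alternating-series error bound of order <span class="math-container">$1/N$</span>.</p>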
|
3,657,751 |
<p>Consider the series <span class="math-container">$$\sum_{n=1}^{\infty}\frac{(-1)^{\frac{n(n+1)}{2}+1}}{n}=1+\dfrac12-\dfrac13-\dfrac14+\dfrac15+\dfrac16-\cdots.$$</span> This is clearly not absolutely convergent. On the other hand, the obvious choice, the alternating series test, does not apply here. It seems the partial-sum sequence is bounded but not monotone.</p>
<p>How can we prove that this series converges? and, where does it converge to? </p>
|
Ali Shadhar
| 432,085 |
<p><span class="math-container">$$1\color{red}{+\frac12}\color{blue}{-\frac13}-\frac14+\frac15\color{cyan}{+\frac16}\color{magenta}{-\frac17}+\cdots$$</span></p>
<p><span class="math-container">$$=1\color{red}{-\frac12+2\cdot\frac12}\color{blue}{+\frac13-2\cdot\frac13}-\frac14+\frac15\color{cyan}{-\frac16+2\cdot\frac16}\color{magenta}{+\frac17-2\cdot\frac17}+\cdots$$</span></p>
<p><span class="math-container">$$=1-\frac12+\frac13-\frac14+\frac15-\cdots+2\left(\frac12-\frac13+\frac16-\frac17+\cdots\right)$$</span></p>
<p><span class="math-container">$$=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}+2\sum_{n=1}^\infty\frac{1}{4n-2}-\frac{1}{4n-1}$$</span></p>
<p><span class="math-container">$$=\ln(2)+\frac12\sum_{n=1}^\infty\frac{1}{n-1/2}-\frac{1}{n-1/4}$$</span></p>
<p><span class="math-container">$$=\ln(2)+\frac12\left(H_{-1/4}-H_{-1/2}\right)$$</span></p>
<p><span class="math-container">$$=\ln(2)+\frac12\left(\frac{\pi}{2}-\ln(2)\right)$$</span></p>
<p><span class="math-container">$$=\frac{\pi}{4}+\frac12\ln(2)$$</span></p>
<p>note that we used the series representation of the harmonic number </p>
<p><span class="math-container">$$H_a=\sum_{n=1}^\infty\frac{1}{n}-\frac{1}{n+a}\Longrightarrow H_a-H_b=\sum_{n=1}^\infty\frac{1}{n+b}-\frac{1}{n+a}$$</span>
and we also used the results <span class="math-container">$H_{-1/4}=\frac{\pi}{2}-3\ln(2)$</span> and <span class="math-container">$H_{-1/2}=-2\ln(2)$</span> which can be obtained from the integral representation of the harmonic number <span class="math-container">$H_a=\int_0^1\frac{1-x^a}{1-x}dx$</span>.</p>
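<p>For a numerical cross-check of the two harmonic-number values used above, the following sketch truncates the stated series representation <span class="math-container">$H_a=\sum_{n\ge1}\left(\frac1n-\frac1{n+a}\right)$</span> (the helper name <code>H</code> is mine):</p>

```python
import math

def H(a, terms=200_000):
    # Truncation of the series representation H_a = sum_{n>=1} (1/n - 1/(n + a));
    # the tail is O(|a| / terms), so 2*10^5 terms give ~1e-6 accuracy here.
    return sum(1.0 / n - 1.0 / (n + a) for n in range(1, terms + 1))

print(H(-0.25), math.pi / 2 - 3 * math.log(2))   # should match H_{-1/4}
print(H(-0.5), -2 * math.log(2))                 # should match H_{-1/2}
```

<p>Each printed pair agrees to about five decimal places, in line with the closed forms <span class="math-container">$H_{-1/4}=\frac{\pi}{2}-3\ln(2)$</span> and <span class="math-container">$H_{-1/2}=-2\ln(2)$</span>.</p>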
|
3,111,906 |
<p>I have to show that for any <span class="math-container">$b >1$</span>, we have
<span class="math-container">$$ b^n > n$$</span>
for all <span class="math-container">$n$</span> sufficiently large, using only very basic analysis (no calculus). My attempt is as follows.</p>
<hr>
<p>We know that <span class="math-container">$b^{n+1} - b^n = b^n(b-1)$</span>. For <span class="math-container">$n$</span> sufficiently large, say <span class="math-container">$$n \geq N = \left\lceil \frac{\ln(2/(b-1))}{\ln b} \right\rceil + 1,$$</span> we have
<span class="math-container">$$ b^{n+1} - b^n > 2.$$</span></p>
<p>Now let <span class="math-container">$\Delta = N - b^N$</span>. Then for any <span class="math-container">$j\geq 1$</span>, we have
<span class="math-container">$$ b^{N+j} = (b^{N+j} - b^{N+j-1}) + \ldots + (b^{N+1} - b^N) + b^N > 2j + b^N = 2j + N - \Delta = N+j + (j - \Delta).$$</span>
Thus we have <span class="math-container">$b^n > n$</span> for any <span class="math-container">$n \geq N + |\Delta|$</span>.</p>
<hr>
<p>This works, but it seems messy. <strong>Is there a better way?</strong> I know induction is usual for this type of problem, but establishing the base case for generic <span class="math-container">$b$</span> seems difficult. </p>
|
Umberto P.
| 67,536 |
<p>If <span class="math-container">$b > 1$</span> you can write <span class="math-container">$b = 1 + x$</span> with <span class="math-container">$x > 0$</span>, and the binomial theorem (all terms are nonnegative since <span class="math-container">$x>0$</span>, so keep only the quadratic one) gives <span class="math-container">$$b^n = (1+x)^n \ge \frac{n(n-1)}{2}x^2.$$</span> Thus <span class="math-container">$$\frac{b^n}{n} \ge \frac{(n-1)x^2}{2} \ge 1$$</span> for all <span class="math-container">$n > \dfrac{2}{x^2} + 1.$</span></p>
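<p>A quick numerical spot-check of this threshold for a few sample bases (a sketch; the helpers <code>threshold</code> and <code>holds_from_threshold</code> are my hypothetical names):</p>

```python
def threshold(b):
    # Smallest integer n with n > 2/x^2 + 1, where x = b - 1 (strict inequality)
    x = b - 1.0
    return int(2.0 / (x * x) + 1.0) + 1

def holds_from_threshold(b, window=100):
    # Spot-check b^n > n on a window of values of n starting at the threshold
    n0 = threshold(b)
    return all(b ** n > n for n in range(n0, n0 + window))

print([(b, threshold(b), holds_from_threshold(b)) for b in (1.1, 1.5, 2.0, 10.0)])
```

<p>For bases close to <span class="math-container">$1$</span> the threshold grows like <span class="math-container">$2/x^2$</span> (e.g. around <span class="math-container">$200$</span> for <span class="math-container">$b=1.1$</span>), but the inequality indeed holds from that point on in every sampled case.</p>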
|