title (stringlengths 20–150) | upvoted_answer (stringlengths 0–54.3k)
---|---|
Check my answer for uniform convergence | 1) and 2) are correct. For 3) note that
$$\sup_{x\in (1,+\infty)}\left|1-\frac{x^n}{1 + x^n}\right|=\sup_{x\in (1,+\infty)}\frac{1}{1+x^n}=\frac{1}{2}.$$
and therefore $f_n$ is NOT uniformly convergent in $(1,+\infty)$.
On the other hand, if $a>1$ then
$$\sup_{x\in [a,+\infty)}\left|1-\frac{x^n}{1 + x^n}\right|=\sup_{x\in [a,+\infty)}\frac{1}{1+x^n}=\frac{1}{1+a^n}\to 0.$$ |
How to evaluate $\lim _{n\to \infty }\left(\frac{\sqrt{4n^3+3n}-2n\sqrt{n-2}}{\sqrt{2n+4}}\right)$? | $$\lim _{n\to \infty }\left(\frac{\sqrt{4n^3+3n}-2n\sqrt{n-2}}{\sqrt{2n+4}}\right)=$$
$$=\lim _{n\to \infty }\left(\frac{(\sqrt{4n^3+3n}-2n\sqrt{n-2})\cdot(\sqrt{4n^3+3n}+2n\sqrt{n-2})}{\sqrt{2n+4}(\sqrt{4n^3+3n}+2n\sqrt{n-2})}\right)=$$
$$=\lim _{n\to \infty }\left(\frac{4n^3+3n-4n^3+8n^2}{\sqrt{2n+4}(\sqrt{4n^3+3n}+2n\sqrt{n-2})}\right)=$$
$$=\lim _{n\to \infty }\left(\frac{3n+8n^2}{\sqrt{2n+4}(\sqrt{4n^3+3n}+2n\sqrt{n-2})}\right)=\sqrt2$$ |
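A quick check of this limit (a sketch assuming sympy is available; not part of the original computation):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
expr = (sp.sqrt(4*n**3 + 3*n) - 2*n*sp.sqrt(n - 2)) / sp.sqrt(2*n + 4)
print(sp.limit(expr, n, sp.oo))   # should print sqrt(2)
```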
Why is $H^0(C,\mathcal O(D))$ a vector space? | Let $D_+$, $D_-$ denote respectively the positive and negative parts of a divisor.
Just notice that $f + g$ can only have poles at points where $f$ or $g$ has a pole, with order at most the maximum of the orders of the two. Hence $(f + g)_-$ plus $D_+$ is still $\ge 0$. On the other hand, $D_-$ dictates where $f$ and $g$ must have zeros (and to what order). Since each of $f$ and $g$ has zeros at those points to at least the required order, so does $f+g$, and therefore $(f+g)_+ + D_- \ge 0$. |
Why is this equation $\sum_{k=0}^{\infty}k^2\frac{\lambda^k}{k!e^\lambda}=\lambda +\lambda^2$ true? | HINT: Write $k^2 = k(k-1) + k$, and split the sum into two. [Update: The OP has added the complete solution to the post.]
If you are interested in a general $d^{th}$ moment of this distribution, then we are faced with an expression of the form
$$
\mathrm e^{-\lambda} \sum_{k=0}^{\infty} k^d \frac{\lambda^k}{k!}.
$$
To proceed, we first write $x^d$ as a linear combination of the $d$ falling factorial polynomials (also called Pochhammer symbols):
$$
\begin{align*}
(x)_1 &= x
\\
(x)_2 &= x(x-1)
\\
(x)_3 &= x(x-1)(x-2)
\\
&\vdots
\\
(x)_d &= x(x-1)(x-2) \cdots (x-d+1)
\end{align*}
$$
[I am following the notation used in the wikipedia article.] After this step, the rest of the manipulations are very similar to the $d=2$ case that the OP showed in the question.
The falling factorial polynomials appear in multiple contexts. First, the Pochhammer symbol $(x)_i$ is obviously related to the binomial coefficient $\binom{x}{i}$. Further, as seen above, these polynomials are very useful in manipulating summations. See also this answer of robjohn. |
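A small numeric sketch of this falling-factorial approach (assuming scipy is available; it uses the fact that $\mathbb E[(X)_j]=\lambda^j$ for a Poisson variable, and Stirling numbers of the second kind to convert $x^d$ into falling factorials):

```python
from math import comb, factorial
from scipy.stats import poisson

# Stirling numbers of the second kind S(d, j): x^d = sum_j S(d, j) * (x)_j
def stirling2(d, j):
    return sum((-1)**(j - i) * comb(j, i) * i**d for i in range(j + 1)) // factorial(j)

# d-th moment of Poisson(lam), using E[(X)_j] = lam**j
def poisson_moment(d, lam):
    return sum(stirling2(d, j) * lam**j for j in range(d + 1))

lam = 1.7
for d in range(1, 5):
    print(d, poisson_moment(d, lam), poisson(lam).moment(d))   # the two columns agree
```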
Cumulative distribution function of the median | Let $X_1,\ldots,X_n$ be i.i.d. with $\mathrm{Pois}(\lambda)$ distribution. Then the probability mass function of the $k^{\mathrm{th}}$ order statistic is given by
\begin{align}
\mathbb P(X_{(k)}=m) &= \mathbb P(X_{(k)}\leqslant m) - \mathbb P(X_{(k)}<m)\\
&= \sum_{j=0}^{n-k}\binom nj((1-F(m))^j(F(m))^{n-j} - (1-F(m)+f(m))^j(F(m)-f(m))^{n-j}),
\end{align}
where $F(m) = \sum_{k=0}^m \frac{\lambda^k}{k!}e^{-\lambda}$ is the distribution function of $X_1$ and $f(m) = \frac{\lambda^m}{m!}e^{-\lambda}$ is the probability mass function of $X_1$. Letting $n=3$ and $k=2$ we have
\begin{align}
\mathbb P(X_{(k)}=m) &= \sum_{j=0}^{1}\binom 3j\left(\left(\sum_{k=m+1}^\infty \frac{\lambda ^k}{k!}e^{-\lambda}\right)^j\left(\sum_{k=0}^m \frac{\lambda^k}{k!}e^{-\lambda}\right)^{3-j} - \left(\sum_{k=,}^\infty \frac{\lambda^k}{k!}e^{-\lambda}\right)^j\left(\sum_{k=0}^{m-1} \frac{\lambda^k}{k!}e^{-\lambda}\right)^{3-j}\right)\\
&=\frac{\Gamma (m+1,\lambda )^2 \left(3\, m!-2\,\Gamma (m+1,\lambda )\right)}{(m!)^3}+\frac{2 \Gamma (m,\lambda )^3}{\Gamma (m)^3}-\frac{3 \Gamma (m,\lambda )^2}{\Gamma (m)^2}
\end{align}
I don't know how to simplify this further, unfortunately. |
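A quick numeric sanity check of the closed form above, written via the regularized incomplete gamma function $Q(a,\lambda)=\Gamma(a,\lambda)/\Gamma(a)$ (a sketch; scipy and numpy are assumed, and the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import gammaincc

lam, m = 2.5, 3
F = gammaincc(m + 1, lam)   # = P(X_1 <= m) for X_1 ~ Poisson(lam)
G = gammaincc(m, lam)       # = P(X_1 <= m - 1)
closed_form = (3*F**2 - 2*F**3) - (3*G**2 - 2*G**3)   # the formula above, simplified

# Monte Carlo estimate of P(median of 3 iid Poisson(lam) = m)
rng = np.random.default_rng(1)
sim = np.median(rng.poisson(lam, size=(200_000, 3)), axis=1)
print(closed_form, np.mean(sim == m))   # agree to about three decimals
```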
The diagonalizable matrices are not dense in the square real matrices | Consider the matrix :
$$R=\pmatrix{0 & -1\cr 1 & 0}\in M_2(\mathbb{R})$$
which is not diagonalizable since its characteristic polynomial $X^2+1$ does not split in $\mathbb{R}[X]$.
Suppose there exists a sequence $(D_n)$ of diagonalizable matrices in $M_2(\mathbb{R})$, which converges to $R$.
For every $n$, the characteristic polynomial of $D_n$ has nonnegative discriminant. Since the discriminant $\operatorname{tr}(A)^2-4\det(A)$ depends continuously on $A$, the same would have to hold for $R$; but the discriminant of $R$ is $-4<0$.
This proves that the set $\mathcal{D}_2$ of all diagonalizable matrices in $M_2(\mathbb{R})$ is not dense in $M_2(\mathbb{R})$. |
Good examples of violations to the "law of sufficiently large intervals" | My favourite example is the family of Borwein integrals.
Consider the following expressions
$$
\begin{aligned}
\frac{2}{\pi} \int_0^\infty \frac{\sin(x)}{x} dx &= 1\\
\frac{2}{\pi} \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3} dx &= 1\\
\frac{2}{\pi} \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3}\frac{\sin(x/5)}{x/5} dx &= 1\\
\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots&\dots\dots\\
\frac{2}{\pi} \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3}\cdots\frac{\sin(x/13)}{x/13} dx &= 1
\end{aligned}
$$
i.e. you have the sequence $1,1,1,1,1,1,1$.
Would you expect the next number to be $1$?
Well
$$
\frac{2}{\pi} \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3}\cdots\frac{\sin(x/15)}{x/15} dx = \frac{467807924713440738696537864469}{467807924720320453655260875000}.
$$
Seven ones is too little to make conclusions? Then what about a slight modification:
$$
\frac{2}{\pi} \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/101)}{x/101}\cdots\frac{\sin(x/(100n+1))}{x/(100n+1)} dx = 1
$$
for all $n < e^{99} \approx 10^{43}$ and then the pattern breaks. |
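The size of the deviation in the $\sin(x/15)$ integral can be read off directly from the exact fraction (a small Python sketch, not part of the original answer):

```python
from fractions import Fraction

# The exact value of the eighth Borwein integral, divided by pi/2:
ratio = Fraction(467807924713440738696537864469, 467807924720320453655260875000)
print(float(1 - ratio))   # ~ 1.47e-11: the pattern does break, but only barely
```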
Differential equation $x' = 2tx$ | $$x'=2tx\implies \frac{x'}x=2t\implies \int\frac{dx}x=\int 2t \,dt\implies \log(x)=t^2+C$$
Just continue. |
Prove that there is no rational lowest upper bound for $\sqrt{3}$. | If you have positive integers $x,y$ with
$$ \frac{x}{y} > \sqrt 3 $$
we also have that
$$ x^2 - 3 y^2 > 0 $$
is an integer, so there is some positive integer $T$ with
$$ x^2 - 3 y^2 = T. $$
Well we calculate that
$$ u = 2x + 3y \; \; , \hspace{9mm} v = x+2y $$
satisfy
$$ u^2 - 3 v^2 = T > 0 $$
also.
$$ u^2 > 3 v^2 $$
$$\frac{u^2}{v^2} > 3 $$
$$ \left( \frac{u}{v} \right)^2 > 3 $$
Well
$$ x^2 - 3 y^2 > 0 $$
$$ x^2 > 3 y^2 $$
$$ x^2 + 2 x y > 2xy + 3 y^2 $$
$$ x (x+2y) > y(2x+3y) $$
$$ \frac{x}{y} > \frac{2x+3y}{x+2y} $$ so
$$ \frac{x}{y} > \frac{2x+3y}{x+2y} > \sqrt 3. $$ Thus from any rational upper bound $x/y$ of $\sqrt 3$ we obtain a strictly smaller rational that is still an upper bound, so no rational can be a least upper bound. |
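A small numeric illustration of this descent $(x,y)\mapsto(2x+3y,\,x+2y)$, starting from the arbitrary upper bound $2/1$ (a sketch using exact fractions; not part of the original argument):

```python
from fractions import Fraction

x, y = Fraction(2), Fraction(1)       # 2/1 > sqrt(3); any rational upper bound works
for _ in range(6):
    print(x / y, float(x / y))        # decreasing, always > sqrt(3) ~ 1.7320508
    x, y = 2*x + 3*y, x + 2*y         # the strictly smaller upper bound from the answer
```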
Double struck f in combinatorics graph theory | Here is a community wiki answer so the question can be closed:
$\mathbb{F}_2$ is the field of two elements. Therefore $\mathbb{F}_2^n = \underbrace{\mathbb{F}_2 \times \cdots \times \mathbb{F}_2}_{n\ \text{times}}$ is the $n$-dimensional $\mathbb{F}_2$-vector space consisting of all binary $n$-tuples with addition modulo 2. |
Reversing Dot Product Problem | It is not possible to determine $y$. From $z=\frac y{w^t\cdot y}$ we can see that we can multiply $y$ by any scalar without changing $z$. In fact we can multiply both sides by $w^t$ and find $w^t\cdot z=1$. If that is true, $y$ can be any vector proportional to $z$. If it is not true, there is no solution. |
Find the limit of the function:$f(x)=x-\frac{x^3}{6}-\sin x$ | Intuitively, we know that
$$\sin x\le 1\tag1$$
then adding $\dfrac{x^3}6-x$ to both sides of $(1)$ yields
$$
\dfrac{x^3}6-x+\sin x\le\dfrac{x^3}6-x+1.\tag2
$$
Now, multiplying both sides of $(2)$ by $-1$ yields
$$
x-\dfrac{x^3}6-\sin x\ge x-\dfrac{x^3}6-1.
$$ |
Circle revolutions rolling around another circle | Thank you everyone for your answers. They were all informative, but I wasn't able to intuitively understand them until I saw this visual:
https://www.geogebra.org/m/v3a437ux
The key for me was to see that there are two kinds of revolution happening:
Revolutions of Circle A with respect to Circle B, and
Revolutions of Circle A with respect to the (overhead) observer.
To the observer, the revolutions of the type (2) complete before revolutions of type (1). The revolutions of type (1) happen at the $r_B/r_A$ roots of unity, and the revolutions of type (2) happen at the $(r_B/r_A + 1)$ roots of unity. It was very helpful to see precisely where these revolutions occur, because it was mentally impossible to unify the two types of revolution without the visual. It's also useful to notice that at any given moment, with respect to the observer, the points of Circle A on the far side from Circle B are moving faster than the points of Circle A on the side that is touching Circle B.
David K's answer is great for understanding that the parametric mapping of time to point pairs is the same whether Circle A is rolling along a straight line or a circle, and that we are dealing with frames of reference. I simply didn't believe the mapping was the same until I saw the visual.
zoli and Hans Lundmark's answers are great for understanding that one extra revolution must occur at some point along the rolling path. The complete answer, of course, is that this last mysterious revolution does not happen all at once, but gradually along the entire roll.
The visual was discovered in the comments on this page, which is another discussion of the same problem:
https://plus.maths.org/content/circles-rolling-circles |
Prove that $\exists_{x_0}f''(x_0)=0$ | You need to apply Rolle's theorem twice to $g(x)=f(x)- x$, on $[0,1]$ and on $[1,2]$; this gives points $x_0,x_1 \in (0,2)$ with $g'(x_0)=g'(x_1)=0$. Then apply Rolle's theorem once more, this time to $g'$ on $[x_0,x_1]$. |
Language of Recursively enumerable languages | Depending on your definitions of the notation of a TM, your set could even be regular. If transitions that use states or symbols that are not in the TM's state set/tape alphabet are allowed and simply ignored, and maybe some other details (fixed start state for all TM), the syntax for a correct TM should be regular. It is definitely recursively enumerable and even context-sensitive, even if you are restrictive on what a valid specification is (no inexistent states/smybols etc.). Not much needs to be checked.
However, the resulting language should not be called the "Language of Recursively enumerable languages" as you do in your title. It is merely the language of all valid Turing Machine specifications. |
Z compact with another topology? | Let $(X, \tau)$ be a topological space. Suppose that $A \subseteq X$ with $|A|<\infty$, and let $\mathcal{U}$ be an open cover of $A$. For each $a \in A$ choose some $U_a \in \mathcal U$ with $a \in U_a$. Since $A$ is finite, the collection $A^{\prime}=\{U_a \mid a \in A\}$ is a finite subcollection of $\mathcal{U}$ covering $A$, hence $A$ is compact. This is more general: any finite set is compact.
I'm not sure what you meant by "singleton sub-cover," but the set of $\{x\} \subseteq X$ need not be open. See here for further discussion:
Are Singleton sets in $\mathbb{R}$ both closed and open? |
Probability Question: Rolling two dice with re-roll | No, you can just assume the first roll is $6$ because you reroll anything else. It becomes a single roll with $+100$ on a $6$ and $-20$ on anything else, which is fair: the expected gain is $\frac16(100)+\frac56(-20)=0$. |
linear combinations of eigenvalues | For diagonal matrices as well as matrices that share the same eigenvectors what you claim is true. I am not sure of the necessary condition.
The eigenvalue is the scalar by which the eigenvector is expanded or contracted by the original matrix $A$. If you have two matrices $A$ and $B$, let $\lambda_{1A}$ and $\lambda_{1B}$ be (one of) their eigenvalues and the corresponding eigenvectors be $v_{A}$ and $v_{B}$.
The eigenvector of $A+B$ may not be the linear combination of $v_{A}$ and $v_{B}$ as $v_{B}$ will be twisted (not just multiplied) by A perhaps having a component along all other eigenvectors; and $v_{A}$ will be similarly twisted by B.
For perhaps a deeper answer beyond my understanding, I found this reference.
http://arxiv.org/abs/math/9908012 |
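A small numeric illustration of both points above (a sketch with numpy; the matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = A + A.T   # symmetric, so real eigenvalues
B = rng.standard_normal((3, 3)); B = B + B.T

sum_of_eigs = np.linalg.eigvalsh(A) + np.linalg.eigvalsh(B)
eigs_of_sum = np.linalg.eigvalsh(A + B)
print(eigs_of_sum - sum_of_eigs)      # generally nonzero: eigenvalues are not additive

D1, D2 = np.diag([1., 2, 3]), np.diag([4., 5, 6])   # same eigenvectors (standard basis)
print(np.linalg.eigvalsh(D1 + D2))                   # exactly [5, 7, 9]: here they do add
```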
Evaluating $\int_0^\pi f^{-1} (x) dx$ with $f(x)=\cos x+ x$. | Note that $f(x):=\cos(x)+x$ is a strictly increasing map from $\mathbb{R}$ to $\mathbb{R}$. Moreover
$$f(\pi-x)=-\cos(x)+\pi-x=\pi-f(x),$$
which implies that
$$f^{-1}(f(\pi-f^{-1}(x)))=f^{-1}(\pi-f(f^{-1}(x)))\implies f^{-1}(\pi-x)+f^{-1}(x)=\pi.$$
Hence $I:=\int_0^{\pi} f^{-1}(x)dx=\int_0^{\pi} f^{-1}(\pi-x)dx$
and
$$I=\frac{1}{2}\int_0^{\pi}(f^{-1}(x)+f^{-1}(\pi-x))dx=\frac{1}{2}\int_0^{\pi}\pi dx=\frac{\pi^2}{2}.$$ |
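A numeric sanity check that $I=\pi^2/2$ (a sketch; scipy is assumed, and the inverse is obtained by root finding since $f$ is increasing):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

f = lambda x: np.cos(x) + x
f_inv = lambda y: brentq(lambda x: f(x) - y, -10, 10)   # f is increasing on R

I, _ = quad(f_inv, 0, np.pi)
print(I, np.pi**2 / 2)   # both ~ 4.9348
```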
Calculate an Average in RPN (Reverse Polish Notation) | This is correct. There are more ways to compute this but I would call your solution fairly canonical. For example, you could have started like this:
a,b,c,+,+,...
or like this:
10,1/x,a,b,+,...,*
However, these are fairly obvious variants and you are probably looking for a completely different way. |
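For concreteness, here is a tiny RPN evaluator run on the canonical variant (a sketch in Python; the sample data are arbitrary and the helper name is mine):

```python
def rpn(tokens):
    """Evaluate a list of RPN tokens; numbers may be ints/floats, operators are binary."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for t in tokens:
        if t in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[t](a, b))
        else:
            stack.append(float(t))
    return stack.pop()

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]                        # ten sample values
tokens = [data[0]] + [x for v in data[1:] for x in (v, '+')] + [10, '/']
print(rpn(tokens), sum(data) / 10)                           # both 3.9
```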
Are inner product-preserving maps always linear? | Here is a short proof.
$$\begin{split}
\|T(\lambda u + w) - \lambda T(u) - T(w) \|^2
&= \langle T(\lambda u + w) - \lambda T(u) - T(w) , T(\lambda u + w) - \lambda T(u) - T(w) \rangle\\
&=\langle T(\lambda u + w) , T(\lambda u + w) \rangle + \text{more such terms}\\
&= \langle \lambda u + w , \lambda u + w\rangle + \text{more such terms}\\
&= \langle (\lambda u + w) -\lambda u-w, (\lambda u + w) -\lambda u-w\rangle= 0.
\end{split}$$
In the second step, we apply linearity of the inner product to arrive at a sum of terms of the type $\langle T(a),T(b)\rangle=\langle a,b\rangle$. In the last step, we put everything back into one inner product. |
Integration by u-substitution. | $$\int_0^1x^2\cos\left(\frac{x^3}{3}+1\right)\cos\left(\sin\left(\frac{x^3}{3}+1\right)\right)\mathrm{d}x$$
Let's let $u = \sin\left(\dfrac{x^3}{3}+1\right)$ so $\mathrm{d}u = x^2\cos\left(\dfrac{x^3}{3}+1\right)\,\mathrm{d}x$. We also need to change the bounds of integration. So from $u = \sin\left(\dfrac{x^3}{3}+1\right) $, plugging in $\left[0,1\right]$ we get $\left[\sin(1),\ \sin\left(\dfrac43\right)\right]$ as the bounds. So now our integral reduces down to
\begin{align*}
\int_{\sin(1)}^{\sin\left(\frac43\right)} \cos(u) \ \mathrm{d}u &= \bigg[\sin(u)\bigg]_{\sin(1)}^{\sin\left(\frac43\right)} \\
&= \sin\left(\sin\left(\frac43\right)\right) - \sin(\sin(1))
\end{align*} |
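A quick numeric check of this result (a sketch with scipy; not part of the original answer):

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda x: x**2 * np.cos(x**3/3 + 1) * np.cos(np.sin(x**3/3 + 1))
lhs, _ = quad(integrand, 0, 1)
rhs = np.sin(np.sin(4/3)) - np.sin(np.sin(1))
print(lhs, rhs)   # the two values agree (~ 0.08)
```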
Is there any general function $x(t)$ that gives the solution to $x''(t) = k/x(t)^2$, where k is a constant? | Note that
$ (x')^2 + \dfrac{2k}{x(t)}$ is constant (in physical terms, that is conservation of energy). Call that constant $ E$. Then
$$ x' = \pm \sqrt{E - \dfrac{2k}{x}}$$
Note also that $E - 2k/x \ge 0$ for the square root to be real. The differential equation is separable, and so we get
$$ \int \dfrac{dx}{\sqrt{E - 2k/x}} = \pm t + C $$
The integral can actually be done in closed form, but is messy: if $E > 0$ it is
$$
\frac {1}{E^{3/2}} \left( k\ln \left( \frac {Ex+\sqrt {x \left( Ex-2k \right) E}-k}{\sqrt {E}} \right) +\sqrt {x \left( Ex-2k \right) E} \right)
$$ |
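A numeric illustration that $(x')^2+2k/x$ is indeed conserved along solutions (a sketch with scipy; the initial data are arbitrary):

```python
from scipy.integrate import solve_ivp

k = 1.0
def rhs(t, y):                       # y = (x, x')
    return [y[1], k / y[0]**2]

sol = solve_ivp(rhs, (0, 5), [1.0, 0.0], rtol=1e-10, atol=1e-12)
x, v = sol.y
E = v**2 + 2*k / x                   # the conserved quantity from the answer
print(E.max() - E.min())             # ~ 0: energy is conserved along the trajectory
```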
$df_x$ maps $N(S.X)$ isomorphically | To say that $g|_{S^\perp}: S^\perp \to g(S^\perp)$ is an isomorphism is just to say that it's injective, since it's onto its image by definition.
But if $v \in \ker(g|_{S^\perp})$, then $v \in S \cap S^\perp = \{0\}$. |
A scheme with not-numerable affine covering. | Yes, $X=\bigsqcup \text{Spec } R_i$ is a scheme. |
Multiple-valued analytic functions | Why is the negative real axis described by $z\le 0$ (shouldn't it be $x\le 0$?)
Whenever a complex number appears in an inequality, the inequality implicitly says that the number is real. Hence, $z\le 0$ is indeed the negative real axis. The inequality $x\le 0$, on the other hand, would be understood as describing the left half-plane (since it does not say anything about $y$).
Isn't it always the case that one value of $\sqrt{z}$ has a positive real part
True (if "positive" means $\ge 0$). But after removing the negative semiaxes we can say the same, but with "positive" being $>0$. The difference is substantial. Strict inequalities are stable under small perturbations; thus, choosing the root with positive real part gives us a continuous function.
And why does $w=\sqrt{z}$ become a single-valued function, when we restrict the domain but not the range?
The range is determined by the domain. Whenever we talk about the restriction of some function, it's the domain that is being restricted.
There are still two values of square root to choose from, but it's now possible to make the choice continuously, (and, consequently, in a holomorphic way). |
Irreducible cyclotomic polynomial | Yes there is.
Assuming $\gcd(q,n)=1,$ the $n^{th}$ cyclotomic polynomial $\Phi_n(x)\in\mathbb{Z}[x]$ will remain irreducible (after reduction mod $p$) in $\mathbb{F}_q[x]$ if and only if the residue class of $q$ generates the multiplicative group $\mathbb{Z}_n^*$ of residue classes coprime to $n$.
This is because if $z$ is a root of $\Phi_n(x)$ in an extension of $\mathbb{F}_q$, then its conjugates are $z^q, z^{q^2},$ et cetera. If you get the same number of conjugates as you would get over $\mathbb{Q}$, then you are done. But over $\mathbb{Q}$ the conjugates are exactly $z^a, \gcd(a,n)=1, 1\le a<n$.
More details. Let $z$ be a primitive $n^{th}$ root of unity in an extension of $\mathbb{F}_q$. Let $\mathbb{F}_q[z]=\mathbb{F}_{q^k}$. Because the multiplicative group
of $\mathbb{F}_{q^k}$ is cyclic of order $q^k-1$, we know that $k$ is the smallest positive integer with the property that $n\mid q^k-1$. By the Galois theory of finite fields the minimal polynomial of $z$ is
$$
m(x)=(x-z)(x-z^q)(x-z^{q^2})\cdots(x-z^{q^{k-1}}).
$$
This will always be a factor of the cyclotomic polynomial $\Phi_n(x)$. The roots of the latter are $z^a, 1\le a<n, \gcd(a,n)=1$. The polynomial $\Phi_n(x)$ is thus irreducible precisely when the two sets of roots are the same.
Here $z^{q^i}=z^a$ if and only if $q^i\equiv a\pmod{n}$. Therefore all the primitive roots $z^a$ are zeros of $m(x)$ only if every exponent $a$ coprime to $n$ is congruent to a power of $q$ modulo $n$.
All of the above assumed that $\gcd(n,q)=1$. Let us next consider the case where that is not true. Here $q$ is the order of a finite field, so it is a power of a prime number $p$.
Therefore $\gcd(n,p)>1$ if and only if $p\mid n$, so we can write $n=mp^\ell$ for some integer $\ell\ge1$, $m$ coprime to $p$. Then we have in the ring $\mathbb{F}_p[x]$ the factorization
$$
x^n-1=(x^m-1)^{p^\ell}
$$
as a consequence of Freshman's dream:
$$
(a+b)^p=a^p+b^p.
$$
Therefore all the roots of $\Phi_n(x)$ in $\overline{\mathbb{F}_q}$ are actually roots of $x^m-1$ as well. Hence any one of them has at most $\phi(m)<\phi(n)$ conjugates. Therefore $\Phi_n(x)$ cannot be irreducible in $\mathbb{F}_q[x]$. (the part in italics is incorrect, see below)
Edit: As pointed out by Yecabel, the last claim is a touch too sweeping.
We do see that any zero of $\Phi_n(x)$ has at most $\phi(m)$ conjugates.
But, it is possible that $\phi(m)=\phi(n)$. As $n=mp^\ell$, $p\nmid m$,
we have $\phi(n)=\phi(m) p^{\ell-1}(p-1)$. So for $\phi(n)$ to be equal to $\phi(m)$ we need that $p=2$ and $\ell=1$. Leaving the special case of $q$ even, $n=2m$, $m$ odd, to deal with. We always have $\Phi_{2m}(x)=\Phi_m(-x)$. And in characteristic two $\Phi_m(-x)=\Phi_m(x)$, so $\Phi_n(x)$ is irreducible if and only if $\Phi_m(x)$ is. The conclusion is thus
If $\gcd(q,n)>1$ then $\Phi_n(x)\in\Bbb{Z}[x]$ stays irreducible in $\Bbb{F}_q[x]$ only, if $q$ is a power of two, $n=2m$, $2\nmid m$, and
$\Phi_m(x)$ stays irreducible (see the result in the main case). |
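A small check of the criterion in the coprime case (a sketch with sympy; the tested values of $n$ and $q$ are arbitrary):

```python
from math import gcd
from sympy import GF, Poly, cyclotomic_poly, symbols, totient
from sympy.ntheory import n_order

x = symbols('x')
for n in (5, 7, 8, 12):
    for q in (2, 3, 5, 7):
        if gcd(n, q) != 1:
            continue
        factors = Poly(cyclotomic_poly(n, x), x, domain=GF(q)).factor_list()[1]
        irreducible = len(factors) == 1 and factors[0][1] == 1
        # irreducible mod q  <=>  q generates the group of units mod n
        assert irreducible == (n_order(q, n) == totient(n))
```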
$\vDash \varphi$ iff $\| \varphi \|_A =1$ for every boolean valued structure $A$ | Hint
The proof assumes that $b \ne 1$, and we can consider the case that $b=b'_n=\sup B_n$.
By the R-S Th there is a homomorphism $h: \textbf{B}\rightarrow \textbf{2}$ such that $h(b'_n)=\sup h(B_n)$.
But - see the proof of the R-S Th, page 29 - a homomorphism between boolean algebras maps $0$ onto $0$ and $1$ onto $1$. This means that, if $b'_n=b \ne 1$, then also $h(b)\ne 1$.
But $h(b) \in \textbf{2}$ and thus, if it is not $1$, it must be $0$. |
The action of the group of deck transformation on the higher homotopy groups | The issue is with base points:
You write:
From this we can induce an automorphism $d_\gamma*:\pi_n(\tilde X)\rightarrow\pi_n(\tilde X)$ given by $d_\gamma *[\tilde f]=[d_\gamma\circ\tilde f]$ for some $\tilde f:S^n\rightarrow \tilde X$.
But notice that $[\tilde f]$ has a different base point than $[d_\gamma\circ\tilde f]$ so it's not quite right to say $d_\gamma *$ is an "automorphism", since technically, $[\tilde f]$ and $[d_\gamma\circ\tilde f]$ belong to different groups.
They are isomorphic groups and so it seems like the better thing to do would be what you did, then move back to the group $[\tilde f]$ lives in by the isomorphism $\tilde\gamma_ *^{-1}$ which is just a change of base point transformation in $\tilde X$ via the path $\tilde\gamma$ which is the lift of $\gamma$ that corresponds to the deck transformation $d_\gamma$.
Being a lift of $\gamma$, this should now work out the way you want it to. |
Poisson random variable with mean going to infinity | You could use characteristic functions and the Lévy continuity theorem. |
Interpret the following sequence $X_n^{(k)} = \underset{1 \leq i_1 < \dots <i_k \leq n }{\sum} \; \xi_{i_1} \dots \xi_{i_k}$ | It looks like they are summing, over all choices of $k$ indices $1 \leq i_1 < \dots < i_k \leq n$, the product of the corresponding $k$ elements (which requires $k\leq n$).
So for example, with k = 1, we get the regular sum.
With $k=2$, we have a sum of terms, with each term a product of two elements from the finite sequence $\xi_1, \dots, \xi_n$. |
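A short Python sketch computing these elementary symmetric sums for a small sample sequence (the values are illustrative):

```python
from itertools import combinations
from math import prod

xi = [2, 3, 5, 7]                                    # a finite sequence xi_1..xi_n
def X(n, k):
    return sum(prod(xi[i] for i in idx) for idx in combinations(range(n), k))

print(X(4, 1))   # 2 + 3 + 5 + 7 = 17
print(X(4, 2))   # 2*3 + 2*5 + 2*7 + 3*5 + 3*7 + 5*7 = 101
```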
Balanced $2019\times 2019$ grids (BMO $2020$ round $2$) | If two adjacent squares in a row have the same colour.
For $2\times2$ balancing, the two columns, $C_1$ and $C_2$, containing these squares must consist of rows of same coloured squares, the colours alternating from row to row.
If there were also two adjacent squares in a column with the same colour then we could make the equivalent statement about the rows, $R_1$ and $R_2$ containing these two squares. But then we would obtain a contradiction where $C_1$ and $C_2$ cross $R_1$ and $R_2$.
Therefore no column contains adjacent same coloured squares and so the colours strictly alternate as one goes down any column and are completely determined by the colours in the top row.
This alternation means that 'even' blocks are automatically balanced whereas an 'odd' block is balanced if and only if its top row is balanced. The top row of the whole grid must therefore consist of alternating runs of white and black squares, each run of length $1$ or $2$, where the colours of the runs of length $2$ must alternate across the row.
From any given run of length $2$, successively pair up the squares on each side and note that all runs of length $2$ will be in one of these pairs. Note also that there will be a single square left on one side or the other.
We can colour the single square with either colour and then as we go across the row we must change colour when we come to a new pair of squares but we can arbitrarily choose the colour of the second square of the pair.
We have $2$ choices for the end of the single square and $2$ choices for its colour. There are then $2^{1009}-1$ choices of colour for the pairs. (Since we are assuming at least one run of $2$ squares we must discard the one choice of colours without such a run.) The total number of boards is
$$2^{1011}-4.$$
Otherwise.
We have the same number of boards when two adjacent squares in a column have the same colour. Plus two boards where no adjacent squares have the same colour.
Total number of balanced boards.
$$2\times (2^{1011}-4)+2=2^{1012}-6.$$ |
About a projection operator in cyclic von Neumann Algebra | There is no general answer. Take $M=B(H)$, and every projection is in $M$. Take $M$ to be any von Neumann algebra without minimal projections (for instance, a II$_1$-factor) and it cannot contain a rank-one projection. |
Difference real and complex fourier series | With $f(x) = x^2$, the complex Fourier series should be indexed by the integers. That is,
$$ f(x) \sim \sum_{n=-\infty}^{\infty} c_n \mathrm{e}^{inx}, $$
where the Fourier coefficients are given by
$$ \frac{1}{2\pi} \int_{-\pi}^{\pi} x^2 \mathrm{e}^{-inx}\, \mathrm{d} x = \begin{cases}
\frac{2}{n^2}(-1)^n & \text{if $n\ne 0$, and} \\
\frac{\pi^2}{3}
& \text{if $n=0$.}
\end{cases}$$
(I get this after two integration by parts steps; I'm leaving off the details here, as you seem to have worked them out correctly in your work.)
With a little bit of manipulation, this becomes
\begin{align*}
f(x)
&\sim \sum_{n=-\infty}^{\infty} c_n \mathrm{e}^{inx} \\
&= c_0 + \sum_{n=-\infty}^{-1} c_{n} \mathrm{e}^{inx} + \sum_{n=1}^{\infty} c_{n} \mathrm{e}^{inx} \\
&= c_0 + \sum_{n=1}^{\infty} c_{-n} \mathrm{e}^{-inx} + \sum_{n=1}^{\infty} c_{n} \mathrm{e}^{inx} \\
&= \frac{\pi^2}{3} + \sum_{n=1}^{\infty} \frac{2}{(-n)^2} (-1)^{-n}\mathrm{e}^{-inx} + \sum_{n=1}^{\infty} \frac{2}{n^2} (-1)^{n}\mathrm{e}^{inx} \\
&= \frac{\pi^2}{3} + \sum_{n=1}^{\infty} \frac{2}{n^2} (-1)^n\left( \mathrm{e}^{inx} + \mathrm{e}^{-inx} \right) \\
&= \frac{\pi^2}{3} + \sum_{n=1}^{\infty} \frac{4}{n^2} (-1)^n \left( \frac{\mathrm{e}^{inx} + \mathrm{e}^{-inx}}{2} \right) \\
&= \frac{\pi^2}{3} + \sum_{n=1}^{\infty} \frac{4}{n^2} (-1)^n \cos(nx),
\end{align*}
which is the result you were hoping to get. |
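A quick numeric check of the coefficients used above (a sketch with scipy; not part of the original derivation):

```python
import numpy as np
from scipy.integrate import quad

def c(n):   # complex Fourier coefficient of f(x) = x^2 on (-pi, pi)
    re, _ = quad(lambda x: x**2 * np.cos(n*x), -np.pi, np.pi)
    im, _ = quad(lambda x: -x**2 * np.sin(n*x), -np.pi, np.pi)
    return (re + 1j*im) / (2*np.pi)

print(c(0).real, np.pi**2 / 3)            # both ~ 3.2899
for n in (1, 2, 3):
    print(c(n).real, 2*(-1)**n / n**2)    # agree; the imaginary parts are ~ 0
```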
What is the sum of the coefficients in the expansion of $(x+y+w+z)^{20}$ | That will be $(1+1+1+1)^{20} = 4^{20}$.
You just set $x=y=z=w=1$ in the formula, and simplify. |
unit conversion Diffusion coefficient | Yes. One km$^2$ is a square with side 1 km length, so it contains 1000 rows of 1000 ( that is 1 million) squares of 1 m length. |
is there a bundle with any given Euler class? | The answer is no.
In fact this question has a nice relationship to the classic Hopf-invariant-one problem, since the Hopf maps define sphere bundles of Euler class equal to the volume form of $S^n$ for $n = 2, 4, 8$ (there is also an $n = 1$ map but it does not define an orientable $S^0$ bundle). For $n$ which are not these, there is no Euler class equal to the volume form.
Further there is a relationship to the Steenrod problem since Euler classes may be represented in Poincaré duality by embedded manifolds (the zero locus of a suitably regular smooth section of the associated vector bundle). Thus there are lots of classes we expect cannot be represented this way.
On the other hand it was proven by L. Guijarro, T. Schick, and G. Walschap in "Bundles with Spherical Euler Class" that if $X$ is an $n$-dimensional CW complex and $\alpha \in H^k(X,\mathbb{Z})$, $k$ even, there is a multiple of $\alpha$ which is an Euler class. This vibes with the previous comments since the obstructions to representing a class by an immersed manifold (not sure about embedded) are all torsion. Interestingly, the multiplication factor is a universal function of $n$ and $k$. There are also some more general results about representation by Pontryagin classes here.
That we need $k$ even can be seen by studying $S^1$. No multiple of the volume form is the Euler class of an $S^0$ bundle, since there are only two of these, the trivial and the Mobius bundle. |
Proving a function is operation preserving | At issue here is essentially whether $f$ is well-defined in the first place, that is, is independent of the choice $x$ that gives a particular value $e^{2 \pi i x / n}$; more explicitly, this means that for any $x, y$ such that $e^{2 \pi i x / n} = e^{2 \pi i y / n}$ we need to ensure that $[x]_n = [y]_n$. Once we know that $f$ is well-defined, then checking whether it preserves the operation is just a matter of using the definition of $f$, namely, checking that
$$f(e^{2 \pi i x / n} \cdot e^{2 \pi i y / n}) = f(e^{2 \pi i x / n}) +_n f(e^{2 \pi i y / n}) .$$
Remark Another way to think about this is as follows: We're checking that the map $q_n : (\Bbb Z, +) \to (\Bbb Z_n, +_n)$, $a \mapsto [a]_n$ is constant on the fibers of the map $\epsilon: (\Bbb Z, +) \to (G, \,\cdot\,)$, $a \mapsto e^{2 \pi i a / n}$. It is, so there is a map $F : G \to \Bbb Z_n$ satisfying $F \circ \epsilon = q_n$, and unwinding definitions we see that the latter statement is just that the induced map $F$ is just the given map $f$. In fact, if we already know that $q_n$ and $\epsilon$ preserve operations, then we can conclude that $f$ does, too. |
Sub group of $\Bbb{Z}\oplus \Bbb{Z}$ generated by | Assume first that $\text{gcd}(d_1,d_2)=1$. From $d_1x+d_2y=0$ we get
$$
d_1x=-d_2y.
$$
So, since we are assuming that $d_1$ and $d_2$ are coprime, we know that $x$ is a multiple of $d_2$ and $y$ is a multiple of $d_1$. If $x=kd_2$, then
$$
y=-\frac{d_1x}{d_2}=-d_1k.
$$
So every pair $(x,y)$ is of the form $(d_2k,-d_1k)=k\cdot (d_2,-d_1)$.
In general, if $c=\text{gcd}(d_1,d_2)$, we can write $d_1=e_1c$, $d_2=e_2c$ with $e_1$,$e_2$ coprime. From $d_1x+d_2y=0$ we get $e_1x+e_2y=0$ and the above applies to show that all elements in the subgroup are of the form $$k\cdot(e_2,-e_1).$$
Finally, as you mention in the question, $c\,e_1e_2=\text{lcm}(d_1,d_2)$, so
$$
e_2=\frac{\text{lcm}(d_1,d_2)}{ce_1}=\frac{\text{lcm}(d_1,d_2)}{d_1}
$$
and similarly
$$
e_1=\frac{\text{lcm}(d_1,d_2)}{ce_2}=\frac{\text{lcm}(d_1,d_2)}{d_2}.
$$ |
Maximize the distance of projectile motion | The projectile equations are:
$$x=v\,\cos(\varphi)\,t\tag 1$$
$$y=v\,\sin(\varphi)\,t-\frac{1}{2}\,g\,t^2\tag 2$$
the arc length is:
$$L=\int_0^{t_f} \sqrt{\dot{x}^2+\dot{y}^2}\,dt$$
setting $y=0$ gives $t_f=\frac{2\,v\sin(\varphi)}{g}$,
thus:
$$L=L(v~,\varphi)\tag 3$$
and the max arc length condition :
$$\frac{\partial L}{\partial \varphi}=\frac{v^2\,\cos(\varphi)}{g}\,\underbrace{\Bigl(2+\sin \left( \varphi \right)\ln \left( 1-\sin \left( \varphi \right) \right) -\sin \left( \varphi \right)\ln \left( 1+\sin \left( \varphi \right) \right) \Bigr)}_{f(\varphi)}=0$$
$f(\varphi)=0~\Rightarrow~ \varphi=56.465^\circ$
Remarks :
I used Maple to do the calculations. |
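A numeric cross-check of the optimal angle (a sketch with scipy; the values of $v$ and $g$ are arbitrary, since the optimal angle does not depend on them):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

g, v = 9.81, 10.0

def arc_length(phi):
    tf = 2 * v * np.sin(phi) / g
    speed = lambda t: np.hypot(v*np.cos(phi), v*np.sin(phi) - g*t)
    L, _ = quad(speed, 0, tf)
    return L

res = minimize_scalar(lambda p: -arc_length(p),
                      bounds=(0.01, np.pi/2 - 0.01), method='bounded')
print(np.degrees(res.x))   # ~ 56.47 degrees
```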
A metric on a power set of a metric space. | HINT: While not violating the triangle inequality, here's a reason why this is not a metric. Let $A\cap B\neq\varnothing$, but $A\neq B$. Calculate $D(A,B)$. |
Help with limit of radical function | When you divided by $x^2$ in the numerator and denominator and moved it inside the radical, you should have been dividing by $x^4$, not $x^{1/2}$. Consequently, you should have $$\lim_{x\to\infty} \frac{\sqrt{4+3x^{-4}}}{5+3x^{-2}}$$ Which does, indeed, evaluate to $2/5$ |
Continuous and discrete random variables defined on the same probability space? | Consider $\Omega = [-1, 1]$, $X(t) = t$, and set the codomain of $Y$ to $\{0, 1\}$, with
$$
Y(t) = \begin{cases}
0 & t \le 0 \\
1 & \text{else}
\end{cases}.
$$
Let $x = 0$ and $y = 0$ in your last paragraph. The first set is
$\{0\}$ and the second is $[-1, 0]$, and their intersection is $\{0\}$.
So the answer is "no" to that main question.
It is true that the measure of that intersection is always zero (assuming the intersection is measurable, which I think it always is, but I'm not good at this stuff...), but that's because it's a subset of the right set, which has measure
zero to start with. |
Probability that Amy the cyclist will complete her route within 30 days (or less) | The mean number of km she does is $3.5 \cdot 20 \cdot 30=2100=\mu$ The variance on one day is $\frac{35}{12}$, so the total variance is $\frac {35 \cdot 30}{12}=\frac {175}2$. The standard deviation is $\sigma=\sqrt{\frac {175}2}$ She needs to be $\frac {2300-2100}{\sigma}\approx 21.38$ standard deviations high. Ain't gonna happen |
Prove in GL that no statement can be proven consistent with PA unless PA is inconsistent | For $\Box\bot\to\Box\neg\Box\neg p$, use the tautology $\bot\to\neg\Box\neg p$.
For $\Box\neg\Box\neg p\to\Box\bot$, use the tautology $\bot\to\neg p$ to infer $\Box\bot\to\Box\neg p$, thus $\neg\Box\neg p\to\neg\Box\bot$, from which you can derive $\Box\neg\Box\neg p\to\Box\neg\Box\bot$. Then, as the hint tells you, $\Box\neg\Box\bot\to\Box\bot$ is an instance of Löb’s axiom. |
$\int_{0}^{\frac{\pi}{4}}{\frac{\cos(2x)}{\cos x+\sin x}dx}$ | Hint: Notice, that:
$$\int_0^{\frac{\pi}{4}}\frac{\cos^2x - \sin^2x}{\cos{x}+\sin{x}}dx=
\int_0^{\frac{\pi}{4}}\frac{(\cos{x} - \sin{x})(\cos{x}+\sin{x})}{\cos{x}+\sin{x}}dx=
\int_0^{\frac{\pi}{4}}\left(\cos{x}-\sin{x}\right)dx$$ |
Solving congruence system with no multiplicative inverse | A congruence
$$ax\equiv b\pmod{n}\tag{1}$$
is soluble iff $\gcd(a,n)\mid b$. In this case (1) is equivalent to
$$\frac agx\equiv \frac bg\pmod{\frac ng}\tag{2}$$
where $g=\gcd(a,n)$. As $a/g$ is coprime to $n/g$ you may solve (2)
by multiplicative inverses if you like. |
Convergence of series with complex exponentials | By Parseval, the coefficients $a_n$ are square-summable. Consequently, they are bounded. Hence, the series defining $F$ converges by comparison to the geometric series $\sum r^n$.
Furthermore, the convergence is uniform on compact subsets of the disk.
Furthermore, if the series for $F$ is differentiated term-wise, the convergence persists. You can use this to justify the fact that the sum is a smooth function with zero Laplacian. (Although it's easier to write each term as the real part of a holomorphic function and appeal to the fact that locally uniform convergence preserves holomorphicity; the latter follows from Morera's theorem.) |
Representing integer as product of $2^n$ and an odd number | This is kind of an explicit formula for $x\ne 0$:
$$N(x) = \mathrm{max}\{n \in \mathbb{N} : 2^n \mid x\} \quad \text{and} \quad Q(x)=x / 2^{N(x)}.$$
Algorithmically, you repeatedly divide $x$ by $2$, counting the number $N(x)$ of divisions, until the remaining quotient $Q(x)$ is odd. |
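A small Python sketch of that algorithm (the function name is mine):

```python
def split_power_of_two(x):
    """Return (N, Q) with x == 2**N * Q and Q odd, for a nonzero integer x."""
    n = 0
    while x % 2 == 0:
        x //= 2
        n += 1
    return n, x

print(split_power_of_two(48))    # (4, 3): 48 = 2**4 * 3
print(split_power_of_two(-20))   # (2, -5)
```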
If $n$ is the sum of two squares, then $n$ is not congruent to $3\pmod 4$ | The squares in $\Bbb Z_4$ are $0$ and $1$ (calculate all 4 possible squares to see this). Thus the sum of two squares is either $0,1,2$. |
Average value of $g'$ on $[a, b]$ | hint
$ g'$ is continuous on $ [a,b] $, and is therefore integrable on $ [a,b]$.
By FTC, we have
$$\int_a^bg'(x)dx=\Bigl[g(x)\Bigr]_a^b=g(b)-g(a)$$ |
Proving $ \int_0^{+\infty} \frac{\sin(t)^{^{2n+1}}}{t} \, dt= \frac{\pi (2n)!}{2^{2n+1}(n!)^2} $ with elementary calculus? | By the binomial theorem:
$$ \sin(t)^{2n+1} = \frac{(-1)^n}{4^n}\sum_{k=0}^{n}(-1)^k\binom{2n+1}{k} \sin((2n+1-2k)t)$$
hence
$$ \int_{0}^{+\infty}\left(\sin t\right)^{2n+1}\frac{dt}{t} = \frac{\pi(-1)^n}{2\cdot 4^n}\sum_{k=0}^{n}(-1)^k \binom{2n+1}{k} $$
and by recalling that $\sum_{k=0}^{2n+1}(-1)^k\binom{2n+1}{k}=\sum_{k=0}^{2n}(-1)^k\binom{2n}{k}=0$ and invoking $\binom{2n+1}{k}=\binom{2n}{k}+\binom{2n}{k-1}$ and the symmetry of binomial coefficients, we have that
$$ \sum_{k=0}^{n}(-1)^k \binom{2n+1}{k}=(-1)^n\binom{2n}{n}, $$
so
$$ \int_{0}^{+\infty}\left(\sin t\right)^{2n+1}\frac{dt}{t} = \frac{\pi}{2^{2n+1}}\binom{2n}{n} $$
as claimed. |
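A quick check of the key binomial identity, plus a numeric check of the $n=1$ integral (a sketch; mpmath's oscillatory quadrature is used for the improper integral):

```python
from math import comb, pi
import mpmath as mp

# The key step: sum_{k=0}^{n} (-1)^k C(2n+1, k) = (-1)^n C(2n, n)
for n in range(8):
    assert sum((-1)**k * comb(2*n + 1, k) for k in range(n + 1)) == (-1)**n * comb(2*n, n)

# Numeric check for n = 1: the integral should equal pi*2!/(2^3 * (1!)^2) = pi/4
val = mp.quadosc(lambda t: mp.sin(t)**3 / t, [0, mp.inf], period=2*mp.pi)
print(val, pi / 4)
```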
Prove that the maximum volume of a triangular-base prism is $\sqrt{\frac{K^3}{54}}$ where $K$ is the area of three triangles containing a vertex $A$ | WOLOG, consider the case $A$ is located at top of prism.
Let $A,B,C$ be the vertices of the triangle at top and $a,b,c$ be the lengths
of corresponding sides.
Let $\Delta$ be the area of triangle $\triangle_{ABC}$. Let $h$ and $V$ be the height and volume of the prism.
The given data is
$$K = \Delta + (b+c)h\quad\text{ and }\quad V = \Delta h$$
Notice $$\Delta \le \frac12 bc \le \frac18 (b+c)^2\quad\implies\quad (b+c) \ge \sqrt{8\Delta}$$
We have
$$K \ge \Delta + \sqrt{8\Delta} h
= \frac{V}{h} + \sqrt{8Vh}
= \frac{V}{h} + \sqrt{2Vh} + \sqrt{2Vh}$$
Apply AM $\ge$ GM to rightmost expression, we get
$$K \ge 3\left(\frac{V}{h}\times \sqrt{2Vh} \times \sqrt{2Vh}\right)^{1/3}
=3(2V^2)^{1/3}$$
This is equivalent to the desired inequality
$\displaystyle\;\sqrt{\frac{K^3}{54}} \ge V$.
Please note that this bound is tight. When $a : b : c : h = 2\sqrt{2} : 2 : 2 : 1$, triangle $\triangle_{ABC}$ is isosceles with a right angle at $A$.
Furthermore, the equality $\displaystyle\;\sqrt{\frac{K^3}{54}} = V$ is achieved. |
Show $\sim:=\{(X,Y)\in\mathcal P(Z)\times\mathcal P(Z): X\cap Y\ne\emptyset\}$ isn't transitive. | Hint:
For a similar problem, we might be talking about the relation between circles in a plane where one circle is related to another circle if they overlap.
Despite $A$ overlapping $B$ and $B$ overlapping $C$, you can see that $A$ and $C$ do not overlap in the picture below.
Relate this to the specific problem you were asked and formalize a counterexample using the same ideas as above. |
Book Recommendations and Proofs for a First Course in Real Analysis | I have been teaching about 13 years in collage so I have seen many books or texts written by for example, Rudin, Bartle, Apostol and Aliprantis in Analysis. But the ones have been useful for me or for students that have topological approaches or graphical approaches. Rudin's is a great one but there is not examples as you find variously in Apostol's. Bartle's some chapters are including figures but two last ones have a few. Aliprantis's is full of problems and because of that I prefer it. Just an advice : If you are new in any field and that's why you want to be more familiar to those new concepts; try to select the books whose have many solved problems. I prefer to teach through practice. Sorry if my written in English is not good as others. |
If $E$ and $F$ are disjoint closed subsets of a metric space $(X,d)$, then is $dist~(E,F) >0$ always? | Consider $\mathbb R^2$ with standard metric and let $E=\{\,(x,y)\mid xy=1\,\}$, $F=\{\,(x,y)\mid xy=0\,\}$.
Your argument about $e,f$ being accumulation points is false. Instead of
$$ \exists e\in E\exists f\in F\forall r>0\colon d(e,f)<r$$
we only have
$$\forall r>0\exists e\in E\exists f\in F\colon d(e,f)<r $$ |
Show that $x^n + ax + p$ is irreducible, $p$ prime, $a \in \mathbb Z, \lvert a \rvert < p - 1$ | If $x^n+ax+p = fg$ were reducible, with nonconstant monic $f,g\in \mathbb{Z}[x]$. Since $p$ is prime, we can assume the constant term of $f$ be $\pm 1$. So products of roots of $f$ has absolute value $1$, one of the root $r$ must satisfy $|r|\leq1$.
However, $|r^n + ar| \leq 1+|a| < p$, contradicting to the fact that $r^n+ar+p=0$. |
Prove that there exists infinitely many positive numbers $x$ and $y$ such that $x\neq y$ but $ x^x=y^y$ | It is easier than that, you need only examine the derivative: the derivative is $x^x \left ( \frac{d}{dx} x \log(x) \right ) = x^x \left ( \log(x) + 1 \right )$ which changes sign precisely at $x=e^{-1}$. This point is a minimum, and of course $\lim_{x \to \infty} x^x=\infty$. Knowing that, do you see now how for every $x \in (0,e^{-1})$ there exists $y>e^{-1}$ with $x^x=y^y$?
(More generally the original statement holds whenever any continuous function has a local extremum.) |
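A short numeric illustration of this pairing (a sketch with scipy; the chosen $x$ is arbitrary):

```python
import numpy as np
from scipy.optimize import brentq

g = lambda t: t * np.log(t)                       # x^x = exp(g(x)); g is easier to work with
x = 0.1                                           # any x in (0, 1/e)
y = brentq(lambda t: g(t) - g(x), 1/np.e, 1.0)    # the matching y > 1/e
print(x**x, y**y, x != y)                         # equal values at two different points
```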
Intuition/explanation of totally disconnected sets (or subsets of R). | Totally disconnected is sort of opposite to connected.
Being connected is sometimes a counterintuitive concept. So let's have a look at a different, yet similar and more intuitive idea: path connectedness.
A space is path connected if any two points can be connected via a continuous path. Examples are: $\mathbb{R}$, the sphere $S^n$ and a singleton $\{*\}$. It is easy to write the opposite of that: a space is totally path disconnected if no two distinct points can be connected via a path. Examples are: $\mathbb{Q}$, the Cantor set, any discrete space (note the funny fact: a single point is the only nonempty space that is both connected and totally disconnected at the same time). An example of something in-between is $[0,1]\cup[2,3]$.
I often imagine it like this: if I land on a totally disconnected space then regardless of the landing point I cannot move anywhere. I have to jump.
We can use this idea for normal connectedness as well. Except that first we have to somehow translate "connectedness" into a relationship between points. And we do that by realizing that a space is connected if and only if any two points belong to a connected subset. This gives us a way to introduce the opposite: a space is totally disconnected if no two distinct points lie in a connected subset.
Examples I gave you earlier are examples for connected and totally disconnected spaces as well (although these concepts do not pairwise coincide).
Read more here: https://en.wikipedia.org/wiki/Totally_disconnected_space |
Limit points of the set | You are right. A point $x \in \Bbb R$ is a limit point of your set iff there is some sequence with infinitely many distinct terms converging to it. Since every sequence with infinitely many elements of your set converges to $0, 0$ is the only limit point. |
Bipartite graph with $4e > (v-1)^2$ is connected | In the following for $x \in \mathbb{R}$ we denote by $\lceil x\rceil$ the smallest integer $m$ such that $m \geq x$, similarly by $\lfloor x \rfloor$ we denote the largest integer $m$ such that $m \leq x$.
I leave the following two statements as exercises for you to prove:
Exercise 1: A bipartite graph on $v$ vertices has at most $\lceil v/2 \rceil \lfloor v/2 \rfloor$ edges.
Exercise 2: The function $(v_1,v_2) \mapsto \lceil v_1/2 \rceil \lfloor v_1/2 \rfloor + \lceil v_2/2 \rceil \lfloor v_2/2 \rfloor$ subject to the constraints that $v_1,v_2 \geq 1$ are integers and $v_1 + v_2 = v$ has maximum $\lceil (v-1)/2 \rceil \lfloor (v-1)/2 \rfloor$.
Given these two results the statement can be proved as follows:
For a graph $\Gamma = (V,E)$ and a subset $W \subseteq V$ we denote by $\Gamma[W]$ the induced subgraph of $\Gamma$ obtained from $\Gamma$ by deleting all vertices which are not in $W$ and all edges which have an endpoint that is not in $W$.
Assume for the sake of contradiction that $\Gamma = (V,E)$ is not connected, then there is a partition of the vertex set $V = V_1 \dot\cup V_2$, where $|V_1| = v_1$ and $|V_2| = v_2$ such that neither $V_1$ nor $V_2$ is empty and there is no edge in $\Gamma$ going from $V_1$ to $V_2$.
Thus the number of edges in $\Gamma$ is the number of edges in $\Gamma[V_1]$ plus the number of edges in $\Gamma[V_2]$. Further $\Gamma[V_1]$ and $\Gamma[V_2]$ are bipartite (as subgraphs of a bipartite graph). Hence by Exercise 1 and Exercise 2 we have
$$
e \leq \lceil v_1/2 \rceil \lfloor v_1/2 \rfloor + \lceil v_2/2 \rceil \lfloor v_2/2 \rfloor \leq \lceil (v-1)/2 \rceil \lfloor (v-1)/2 \rfloor \leq ((v-1)/2)^2.
$$
Further by assumption we have $e > ((v-1)/2)^2$, thus
$$
((v-1)/2)^2 < e \leq ((v-1)/2)^2,
$$
which is a contradiction. Hence $\Gamma$ must be connected. |
Integral $\int{ \frac{1}{\sqrt {1 - e^{2x}} } dx}$ | HINT:
Let $\sqrt{1-e^{2x}}=y\implies e^{2x}=1-y^2$
and $$\dfrac{2e^{2x}\ dx}{2\sqrt{1-e^{2x}}}=-dy\iff\dfrac{dx}{\sqrt{1-e^{2x}}}=\dfrac{dy}{y^2-1}$$ |
Is $\int_0^{\infty} \frac{t^4}{1+t^6} dt$ convergent? | Hint. A potential problem is as $ t \to \infty$, but one may observe that
$$
0\le\frac{t^4}{1+t^6}\le \frac{1}{t^2}, \qquad t\ge1,
$$ giving the convergence of the given integral by comparison to a Riemann integral
$$
\int_1^\infty \frac{dt}{t^\alpha}<\infty,\qquad \alpha>1.
$$ |
Paul Erdos's Two-Line Functional Analysis Proof | I sent Professor Purdy an email. I asked him what he recalled about the incident. With his permission I've copied his correspondence below.
Dear Jacob,
Yes, I was there, and I'm the one who told the story to Paul Hoffman,
who then included it in his book "The man who loved only numbers."
The 30 page proof was written by Jack Bryan just before Erdos came to
visit, at Texas A & M University. The problem was written on the
blackboard in the mathematics lounge and Erdos saw it and asked "Is
that a problem?" I told him yes, and he went over and wrote a two line
proof on the blackboard. It's the most incredible thing I ever
witnessed, and that's why I told Paul Hoffman the story. Ron Graham
told Hoffman to talk to me because I knew Erdos well.
Much later it became obvious that Erdos loved this story, and I asked
him how he did it without knowing the subject. He said, smiling, "Oh,
I was a good student at school!"
I have come to realize also that Erdos was one of those mathematicians
who could tackle an unknown area as if he knew it. He also prized this
ability in others. He once demonstrated to me that Fred Galvin had
this ability. Fred was at the board and Paul asked him a question, and
he answered it, and he asked Fred "Had you seen these things before",
and Fred answered that he hadn't, and Erdos turned to me and said
"See!"
George |
Are projection maps homomorphisms? | The operation $?$ is simply that of the direct product. See the comment by @SMM:
$$(g_1,h_1)?(g_2,h_2):= (g_1+g_2,h_1*h_2).
$$
I suggest that you use a different symbol than $?$, like $\cdot$, for instance. |
Why is it continuous? | Maybe it will help to work out the concrete case $n = 2$. We consider the open set $U = \{R > r_1 > r_2 > 0\}$; you can visualize this as the part of the square $(0, R)^2$ in the plane which is below the diagonal line $r_1 = r_2$. Let $\pi_1$ and $\pi_2$ denote the projection functions restricted to this set, so that $\pi_1(r_1, r_2) = r_1$ and $\pi_2(r_1, r_2) = r_2$.
If I understand your hypotheses correctly, $f$ is assumed to be a continuous function on $U$ with the property that for every $(r_1, r_2)$ we have $f(r_1, r_2)$ is either equal to $r_1$ or $r_2$ and you want to prove that globally either $f = \pi_1$ or $f = \pi_2$.
So let's first consider the function $g$ that you introduced - you ask why it is continuous. The cleanest answer uses the pasting lemma, which says that if we write $U$ as the union of two closed subsets and define a function on $U$ whose restriction to each subset is continuous then the function is continuous on $U$.
To apply this lemma, let $A_1$ denote the set of points on which $f$ agrees with $\pi_1$ and let $A_2$ denote the set of points on which $f$ agrees with $\pi_2$. Note that $U = A_1 \cup A_2$ since $f$ agrees with either $\pi_1$ or $\pi_2$ everywhere on $U$. Moreover $A_1$ and $A_2$ are closed subsets of $U$ since $A_i = (f - \pi_i)^{-1}(\{0\})$ (and the inverse image of a closed set under a continuous function is closed). You have defined $g \colon U \to \mathbb{R}$ to be the function which restricts to the constant function $1$ on $A_1$ and the constant function $2$ on $A_2$. Constant functions are continuous, so the hypotheses of the pasting lemma are satisfied and we conclude that $g$ is continuous.
Finally, you ask why the continuity of $g$ implies the conclusion of the theorem. The answer uses the concept of connectedness, and specifically the fact that the continuous image of a connected set is connected. Indeed $U$ is connected and thus the set $g(U)$ is a connected subset of $\mathbb{R}$. But by definition $g(U)$ is a nonempty subset of the two point set $\{1, 2\}$, and the only connected subsets of a two point set are one point sets. So $g$ is either the constant function $1$ or the constant function $2$ on $U$, which means that either $A_2$ or $A_1$ is the empty set. But if $A_2$ is empty then $U = A_1$ and hence $f = \pi_1$ on $U$, and similarly if $A_1$ is empty then $f = \pi_2$.
(Note: since connectedness is involved anyway, we could have bypassed $g$ entirely. The sets $A_1$ and $A_2$ are disjoint since $\pi_1$ is never equal to $\pi_2$ on $U$, so $A_1$ and $A_2$ form a separation of $U$ and hence one of them must be empty since $U$ is connected. I'm not sure why the authors chose to write the argument the way they did.)
This completes the proof in the case $n=2$; you can imitate this argument for larger $n$, or just use induction. |
Generalized Josephus problem | You may have confused yourself by using different variable names $n$ and $m$ on different sides of the equations. The variable $n$ here doesn't have a specific value; it stands for the general argument of the function $n$. Thus, if the ansatz $f(n)=1$ is made, you can replace all values of $f$ by $1$, not just ones that literally use 'n' as an argument. Likewise, if $f(n)=n$ for general $n$, then $f(2n)=2n$ and $f(2n+1)=2n+1$. |
$X$ and $Y$ are Compact Metric Spaces Such that for a continuous surjective $f$, $f^{-1}[\{y\}]$ is connected | HINT: Suppose that $H$ and $K$ are disjoint closed subsets of $X$ such that $X=H\cup K$.
Show that $f[H]$ and $f[K]$ are closed sets in $Y$.
Show that for each $y\in Y$, $f^{-1}[\{y\}]\subseteq H$ or $f^{-1}[\{y\}]\subseteq K$.
Conclude that $f[H]\cap f[K]=\varnothing$.
Clearly $f[H]\cup f[K]=Y$, and $Y$ is connected, so ... ? |
Hermite Polynomials Triple Product | Yes, such formulas exist (they are related to the so-called "linearization formulas"), and can be found in the literature. For instance, for the standard Hermite polynomials $H_k(x)$, the corresponding formula is:
$$
\int_{-\infty}^\infty H_\ell(x) \, H_m(x) \, H_n(x) \, e^{-x^2} dx = \frac{2^{(\ell+m+n)/2}\ell! m! n! \sqrt{\pi}}{(\frac{\ell+m-n}{2})!(\frac{m+n-\ell}{2})!(\frac{n+\ell-m}{2})!}
$$
if $\ell+m+n$ is even and the sum of any two of $\ell,m,n$ is not less than the third, or zero otherwise. This is equation (6.8.3) in the book "Special Functions", by Andrews-Askey-Roy.
Notice that your weight function is slightly different (it's a scaled version, corresponding to the "probabilists' Hermite polynomials"), but an easy change of variables will give the result that you want. |
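A numeric spot check of the formula for a few small indices (a sketch using numpy's physicists' Hermite polynomials and scipy quadrature):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.integrate import quad
from math import factorial, pi, sqrt

def H(k, x):                      # physicists' Hermite polynomial H_k(x)
    return hermval(x, [0]*k + [1])

def lhs(l, m, n):
    f = lambda x: H(l, x) * H(m, x) * H(n, x) * np.exp(-x**2)
    val, _ = quad(f, -np.inf, np.inf)
    return val

def rhs(l, m, n):
    s = (l + m + n) // 2
    if (l + m + n) % 2 or min(s - l, s - m, s - n) < 0:
        return 0.0
    return 2**s * factorial(l) * factorial(m) * factorial(n) * sqrt(pi) / (
        factorial(s - l) * factorial(s - m) * factorial(s - n))

for (l, m, n) in [(1, 2, 3), (2, 2, 2), (1, 1, 3), (0, 4, 4)]:
    print(lhs(l, m, n), rhs(l, m, n))   # the two columns agree
```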
Subfields of $\mathbb{Q}(3^{1/16})$ | As mentioned in the comments the subfields aren't necessarily of the form $\mathbb{Q}(\alpha^{\frac 1n})$. Nevertheless the easiest way would be to consider the Galois closure of the field and use the Galois correspondence.
It's not hard to compute that $\mathbb{Q}(\alpha,\zeta_{16})$, where $\zeta_{16}$ is the principal 16-th root of unity, is the Galois closure of $\mathbb{Q}(\alpha)$. Also we have that $x^{16}-3$ is the minimal polynomial of $\alpha$ over $\mathbb{Q}(\zeta_{16})$. This follows because $3$ is unramified in the ring of integers $\mathbb{Z}[\zeta_{16}]$, so any prime ideal above $3$ divides $(3)$ exactly once, and thus the generalized Eisenstein criterion applies at such a prime. Now it's not hard to see that the Galois group is generated by:
$$\tau:
\begin{array}{lr}
\alpha \to \zeta_{16}\alpha\\
\zeta_{16} \to \zeta_{16}
\end{array}
\quad \quad \sigma:
\begin{array}{lr}
\alpha\to \alpha\\
\zeta_{16} \to \zeta_{16}^3
\end{array}
\quad \quad \pi:
\begin{array}{lr}
\alpha\to \alpha\\
\zeta_{16} \to \zeta_{16}^7
\end{array}$$
In fact we have that the Galois group is isomorphic to the semidirect product $\mathbb{Z}_{16}\rtimes \left(\mathbb{Z}_2 \times \mathbb{Z}_4\right)$. As $\{e\} \rtimes \left(\mathbb{Z}_2 \times \mathbb{Z}_4\right)$ is the subgroup fixing $\mathbb{Q}(\alpha)$ you need to find all subgroups of $\mathbb{Z}_{16}\rtimes \left(\mathbb{Z}_2 \times \mathbb{Z}_4\right)$ containing it. They are exactly:
$$\langle e \rangle \rtimes \left(\mathbb{Z}_2 \times \mathbb{Z}_4\right), \langle 8 \rangle \rtimes \left(\mathbb{Z}_2 \times \mathbb{Z}_4\right),\langle 4 \rangle \rtimes \left(\mathbb{Z}_2 \times \mathbb{Z}_4\right),\langle 2 \rangle \rtimes \left(\mathbb{Z}_2 \times \mathbb{Z}_4\right),\langle 1 \rangle \rtimes \left(\mathbb{Z}_2 \times \mathbb{Z}_4\right)$$
Thus by the Galois correspondence $\mathbb{Q} \subset \mathbb{Q}(\alpha)$ has exactly $5$ intermediate subfields, and they must be the ones already mentioned.
This completes the proof. |
Finding mean, variance of $X^2$ and $3X$ given the distribution of $X$ | General formula for mathematical expectation for discrete variables: $$\mathbb {E} [X]=\sum \limits _{i=1}^{\infty }x_{i}\,p_{i},$$ where $X$ is your random variable,
$x_i$ is i-value of random variable, $p_i$ is probability of i-value of random variable.
$X^2$ is a new random variable with a new distribution, but it's generated by the random variable $X$; squaring its values gives the distribution: $$\mathbb{P}(X^2 = 0) = \frac{1}{5}; \quad \mathbb{P}(X^2 = 1) = \frac{1}{10} + \frac{1}{10} = \frac{1}{5};
\quad \mathbb{P}(X^2 = 4) = \frac{1}{2} + \frac{1}{10} = \frac{3}{5}$$
We have 3 values, because $(-1)^2 = 1^2 = 1$, $0^2 = 0$ and $(-2)^2 = 2^2 = 4.$
Similarly, $3X$ has distribution: $$\mathbb{P}(3X = -6) = \frac{1}{2}; \mathbb{P}(3X = -3) = \frac{1}{10}; \mathbb{P}(3X = 0) = \frac{1}{5}; \mathbb{P}(3X = 3) = \frac{1}{10}; \mathbb{P}(3X = 6) = \frac{1}{10}$$
Thus, $$\mathbb {E} [X^2]=\sum \limits _{i=1}^{3}x_{i}\,p_{i} = 0\cdot\frac{1}{5} + 1 \cdot \frac{1}{5}+4 \cdot\frac{3}{5} = \frac{13}{5}$$
Calculate the same way: $$\mathbb {E} [3X] = ?$$
General formula for variance :$$\mathbb D[X]=\mathbb {E}\left[{\big (}X-\mathbb {E}[X]{\big )}^{2}\right] =\mathbb {E}[X^{2}]-\left(\mathbb {E}[X]\right)^{2}$$
General furmula for SD: $$\sigma ={\sqrt {\mathbb D[X]}}$$.
Can you finish it yourself? |
Dependence of algebraic functions | The answer is yes. To show this we will construct a map from the surface $S$ to a curve $E$ such that all forms and functions in the question are pull-backs of forms and functions on the curve $E$. Since the transcendence degree of $K(E)$ is one, the functions will be dependent.
How to construct $E$? The main reference is [Phillip Griffiths, Joseph Harris: Principles of Algebraic Geometry, Ch. IV, 5.2] Integrating $\omega_1$ and $\omega_2$ (locally) on $\widetilde U\subset S$ we get a map
$$
\pi: \widetilde U\to U\subset\mathbb C^2.
$$
Actually, the image $\pi(\widetilde U)=Z$ is a curve and for every point $x\in Z$ the restrictions
$$
\omega_1|_{\pi^{-1}(x)}\ \mbox{and}\ \omega_2|_{\pi^{-1}(x)}
$$
are trivial. After a suitable choice of coordinate on $Z$ one can find that the map $\pi:\widetilde U\to Z$ is given by $g=\frac{\omega_1}{\omega_2}$ (to check this, one needs to consider the Jacobian of the map $\pi:\widetilde U\to U$).
Function $g=\frac{\omega_1}{\omega_2}$ is holomorphic and defines extended map $\pi:S\to\mathbb P^1$. For a general point $\lambda\in \mathbb P^1$ the pre-image $\pi^{-1}(\lambda)$ is a union of irreducible divisors $C_{\lambda,i}$ (which do not intersect because of the smoothness of $\pi^{-1}(\lambda)$ for general $\lambda$). Also, restrictions of $\omega_1$ and $\omega_2$ on each fiber are trivial. The set of divisors
$$
E=\bigcup_{i,\lambda} \{C_{\lambda,i}\}
$$
is actually a (ramified) finite cover of $\mathbb P^1$ and is an algebraic curve (one can give more explicit construction, see [Phillip Griffiths, Joseph Harris: Principles of Algebraic Geometry]). We have a decomposition of $\pi$:
$$
S\to E\to \mathbb P^1,
$$
$$
p\mapsto (\lambda, i) \mapsto \lambda,
$$
where $(\lambda, i)$ is defined by $p\in C_{\lambda, i}$ (curves do not intersect).
Since $\omega_1$ and $\omega_2$ are constant along divisors $C_{\lambda,i}$, which are irreducible, they are pull-backs of forms on $E$, the same is true for $f$ and $g$ as well. |
Prove that in ring $\mathbb{Z}, \forall a,b \in \mathbb{Z} $, $(a,b) = (d)$, where $ d = \mathrm{gcd}(a,b)$ | Hint:
$x\in(a,b)\iff$ integers $n,m$ exist with $na+mb=x$
$x\in(d)\iff$ integer $k$ exists with $kd=x$ |
Find real solution $x$ such that $x + \sqrt{2020}$ and $\dfrac{5}{x} -\sqrt{2020}$ are integers | As the hint in Edward H's comment on the question suggests, let $a$ be the integer where
$$a = x + \sqrt{2020} \implies x = a - \sqrt{2020} \tag{1}\label{eq1A}$$
Then let $b$ be the other integer, and use \eqref{eq1A}, to get
$$\begin{equation}\begin{aligned}
b & = \frac{5}{a - \sqrt{2020}} - \sqrt{2020} \\
b + \sqrt{2020} & = \frac{5}{a - \sqrt{2020}} \\
(b + \sqrt{2020})(a - \sqrt{2020}) & = 5 \\
ab - b\sqrt{2020} + a\sqrt{2020} - 2020 & = 5 \\
ab + \sqrt{2020}(-b + a) & = 2025 \\
\end{aligned}\end{equation}\tag{2}\label{eq2A}$$
Since $ab$ and $2025$ are integers, this means $\sqrt{2020}(-b + a)$ must also be an integer. However, due to $\sqrt{2020}$ being irrational and $-b + a$ being an integer, this is only possible if it's $0$, i.e.,
$$b = a \tag{3}\label{eq3A}$$
Using this in \eqref{eq2A} gives
$$a^2 = 2025 \implies a = \pm 45 \tag{4}\label{eq4A}$$
Finally, \eqref{eq1A} gives that
$$x = a - \sqrt{2020} \implies x = \pm 45 - \sqrt{2020} \tag{5}\label{eq5A}$$ |
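A quick symbolic verification of the two solutions (a sketch with sympy):

```python
import sympy as sp

for a in (45, -45):
    x = a - sp.sqrt(2020)
    first = sp.simplify(x + sp.sqrt(2020))                    # = a, an integer
    second = sp.simplify(sp.radsimp(5/x) - sp.sqrt(2020))     # = a as well
    print(first, second)
```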
De Rham Cohomology groups of $\mathbb{R}^3\times S^3$ | It is correct to say that that 1) your manifold has zero second cohomology (it's homotopy equivalent to $S^3$) and 2) your manifold has a symplectic structure (it's diffeomorphic to the cotangent bundle of $S^3$, because $S^3$ is parallelizable).
But only closed symplectic manifolds are required to have nonzero second cohomology! The general fact is that, if $\omega$ is a symplectic form on a $2n$-dimensional manifold, then $\omega^n$ is a volume form. If the manifold is closed, then any volume form must represent a nonzero cohomology class, and so $\omega$ must also represent a nonzero cohomology class. But if the manifold is not closed, then $H^{2n}$ is trivial anyway, so nothing more can be concluded. |
On the uniform continuity of some functions | $$f(x)=\sin(x^2)$$ satisfies the hypotheses, but it is not uniformly continuous. |
Combinatorial Necklaces & Strips of $n$ Beads and $k$ Colours | For the case of necklaces we have the cycle index
$$Z(C_n) = \frac{1}{n} \sum_{d|n} \varphi(d) a_d^{n/d}.$$
We get with $k$ colors $Q_j$ (this is PET)
$$[Q_1^{n/k} \cdots Q_k^{n/k}]
Z\left(C_n; \sum_{j=1}^k Q_j\right)
= \frac{1}{n} \sum_{d|n} \varphi(d)
[Q_1^{n/k} \cdots Q_k^{n/k}]
\left(\sum_{j=1}^k Q_j^d\right)^{n/d}.$$
We see that $d$ must divide $n/k:$
$$\frac{1}{n} \sum_{d|n/k} \varphi(d)
[Q_1^{n/k} \cdots Q_k^{n/k}]
\sum_{m_1+\cdots+m_k = n/d}
{n/d\choose m_1,\ldots, m_k}
\prod_{j=1}^k Q_j^{d m_j}.$$
Here the $m_j$ are non-negative. Now we must have $d m_j = n/k$
so that $m_j = n/k/d$ and we get
$$\frac{1}{n} \sum_{d|n/k} \varphi(d)
\frac{(n/d)!}{(n/k/d)!^k}$$
or alternatively
$$\bbox[5px,border:2px solid #00A000]{
\frac{1}{n} \sum_{d|n/k} \varphi(n/k/d)
\frac{(kd)!}{d!^k}.}$$
As a sanity check, with just one color $k=1$ we get $\frac{1}{n}
\sum_{d|n} \varphi(n/d) = 1$, which is correct. With $k=n$ colors the only divisor of $n/k=1$ is $d=1$, and we get
$\frac{1}{n} \varphi(1) \frac{n!}{(1!)^n} = \frac{1}{n}
n! = (n-1)!$, which is correct as well. This is OEIS
A208183. |
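As a further sanity check, here is a minimal brute-force sketch (the small test cases are my own choice): counting colorings with exactly $n/k$ beads of each color, up to rotation, agrees with the boxed formula.

    from itertools import permutations
    from math import factorial, gcd

    def phi(m):
        # Euler's totient, by direct count (fine for the tiny arguments used here)
        return sum(1 for j in range(1, m + 1) if gcd(j, m) == 1)

    def necklaces_formula(n, k):
        q = n // k                                     # each color appears q = n/k times
        return sum(phi(q // d) * factorial(k * d) // factorial(d) ** k
                   for d in range(1, q + 1) if q % d == 0) // n

    def necklaces_brute(n, k):
        q = n // k
        beads = [c for c in range(k) for _ in range(q)]
        classes = set()
        for arr in set(permutations(beads)):
            classes.add(min(arr[i:] + arr[:i] for i in range(n)))   # canonical rotation
        return len(classes)

    for n, k in [(4, 2), (6, 2), (6, 3), (8, 2)]:
        assert necklaces_formula(n, k) == necklaces_brute(n, k)
    print("formula agrees with brute force on the test cases")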
Proof of the proposition which says that the column rank of $A$ is equal to the row rank of $A$. (Gilbert Strang's new lecture) | We have $A=C R$ as in the lecture. The row space of $A$ is the column space of $A^T$, and $A^T=R^T C^T$, so the column space of $A^T$ is generated by the columns of $C^T$ (using coefficients from $R^T$), and those columns are just the rows of $C$ again. |
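Here is a minimal sketch of the factorization with an arbitrary example matrix (not from the lecture), using sympy: the pivot columns of the rref give $C$, the nonzero rows of the rref give $R$, and $A=CR$ exhibits column rank $=$ row rank.

    import sympy as sp

    A = sp.Matrix([[1, 2, 3],
                   [2, 4, 6],
                   [1, 1, 1]])

    rref, pivots = A.rref()                        # reduced row echelon form and pivot column indices
    C = A.extract(range(A.rows), list(pivots))     # independent columns of A (a basis of the column space)
    R = rref[:len(pivots), :]                      # nonzero rows of the rref (a basis of the row space)

    assert A == C * R                              # the factorization A = C R
    assert C.shape[1] == R.shape[0] == A.rank()    # column rank = row rank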
A conceptual problem in group theory | As suggested in the comments, the answer is that cycles $\sigma$ and $\tau$ commute if and only if either:
(i) $\sigma$ and $\tau$ are disjoint; or
(ii) $\sigma$ is a power of $\tau$ and $\tau$ is a power of $\sigma$.
In case (ii), this implies that they have the same order and hence the same length, and they must both be cycles on the same set of points. |
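For the skeptical, here is a minimal brute-force sketch checking the criterion over all pairs of cycles in $S_4$ (using sympy's permutations; the choice of $S_4$ is just to keep the search small):

    from itertools import permutations as tuples
    from sympy.combinatorics import Permutation

    def all_cycles(n):
        # one representative per cycle of length >= 2 on {0, ..., n-1}
        for length in range(2, n + 1):
            for pts in tuples(range(n), length):
                if pts[0] == min(pts):
                    yield Permutation([list(pts)], size=n)

    cycles = list(all_cycles(4))
    for a in cycles:
        for b in cycles:
            commute = a * b == b * a
            disjoint = not (set(a.support()) & set(b.support()))
            mutual = (any(a == b**j for j in range(1, b.order() + 1)) and
                      any(b == a**j for j in range(1, a.order() + 1)))
            assert commute == (disjoint or mutual)
    print("criterion verified for all pairs of cycles in S_4")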
Why exactly $1/p+\mathbb{Z} \in \text{ker} (\alpha)$? | $\alpha(1/p + \mathbb Z) = p(1/p + \mathbb Z)$. Recall that the sum of cosets $(a + \mathbb Z) + (b + \mathbb Z)$ is defined as $(a+b) + \mathbb Z$. Thus, $p(1/p + \mathbb Z) = 1 + \mathbb Z = 0 + \mathbb Z$ which is the identity element of the Prüfer $p$-group. Hence, $1/p + \mathbb Z \in ker(\alpha)$. |
What is a skew symmetric matrix? | It is a square matrix $A$ satisfying $A^T=-A$---that is, its transpose is the negative of the original. If $B=A^T$, then $B_{ij}=-A_{ji}$.
A similar property for complex matrices is skew-Hermitian: $A^H=-A$. If $B=A^H$, then $B_{ij}=-\overline{A_{ji}}=-\Re(A_{ji})+\jmath\Im(A_{ji})$.
Note that skew-symmetry forces the diagonal elements to be zero, while skew-Hermitian matrices need only have purely imaginary diagonal entries. |
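A minimal numeric illustration (the matrices are arbitrary examples):

    import numpy as np

    A = np.array([[ 0.,  2., -1.],
                  [-2.,  0.,  4.],
                  [ 1., -4.,  0.]])
    assert np.array_equal(A.T, -A)          # skew-symmetric: A^T = -A
    assert np.all(np.diag(A) == 0)          # hence zero diagonal

    B = np.array([[1j,      2 + 3j],
                  [-2 + 3j, -4j   ]])
    assert np.allclose(B.conj().T, -B)      # skew-Hermitian: A^H = -A
    assert np.allclose(np.diag(B).real, 0)  # diagonal is purely imaginary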
Given a group of 300 people, each student is a friend with exactly 200 other people. | The answer consisted of a diagram given as a hint (the image is not reproduced here). |
Books or articles on derivative-free optimization | You could start here. The site helps you choose an optimization algorithm according to problem type. One of the problem types listed is "derivative free" optimization. There are many references.
Also, bear in mind that some optimization packages that require derivatives also include utilities for estimating these. The donlp2 package does this, for example.
For theoretical aspects, you could start with this book. |
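For a quick hands-on taste (a minimal sketch, not taken from any of the references above), scipy's Nelder-Mead simplex method minimizes using function values only, with no derivatives; the Rosenbrock function and starting point are the usual test example.

    import numpy as np
    from scipy.optimize import minimize

    def rosenbrock(x):
        return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

    result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
    print(result.x)   # close to the true minimizer (1, 1)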
Calculate the following conditional expectation. | Notice that
$$
\mathsf{E}\left(X \mid Y = y\right) = \mathsf{E}\left(R \cos \Theta \mid R \sin \Theta = y\right) = \mathsf{E}\left( R \cdot \mathsf{E}\left(\cos \Theta \mid \sin \Theta = \frac{y}{R}, R\right) \right)
$$
The equation $\sin\Theta = y/R$ has two solutions, $\Theta = \arcsin\left(y/R\right)$ and $\Theta = \pi - \arcsin\left(y/R\right)$ for which $\cos(\Theta) = \pm \sqrt{1-y^2/R^2}$. These two solutions are equally likely, hence
$$
\mathsf{E}\left(\cos \Theta \mid \sin \Theta = \frac{y}{R}, R\right) = \frac{1}{2} \sqrt{1-y^2/R^2} - \frac{1}{2} \sqrt{1-y^2/R^2} = 0
$$
The remaining expectation over $R$ is now trivial.
Incidentally, you might find the answer to the following question of mine helpful in establishing that $X$ and $Y$ are independent standard normal random variables, giving
$$
\mathsf{E}(X \mid Y = y) = \mathsf{E}(X) = 0
$$ |
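A Monte Carlo sanity check (a minimal sketch; it assumes $\Theta$ uniform on $[0,2\pi)$ and $R$ an independent positive radius, taken Rayleigh here so that $X$ and $Y$ come out standard normal, both assumptions about the setup rather than facts stated above):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2_000_000
    theta = rng.uniform(0, 2 * np.pi, n)
    r = rng.rayleigh(scale=1.0, size=n)        # assumed radial distribution
    x, y = r * np.cos(theta), r * np.sin(theta)

    y0 = 0.7                                   # arbitrary conditioning value
    mask = np.abs(y - y0) < 0.01               # Y in a thin slab around y0
    print(x[mask].mean())                      # approximately 0, up to Monte Carlo noise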
How to show the following about the infinite group? | As mentioned by Lord Shark the Unknown (see comments) every infinite group acts on a set of size $1$.
If you require moreover that $X$ has cardinality greater than $1$ then you can still define a transitive action on $X$ by an infinite group.
Example: If $X=\{a,b\}$ we can define the action of $\mathbb{Z}$ on $X$ by
$$n.x = x$$
if $n$ is even, and
$$n.x = \begin{cases} a & \text{if } x=b \\ b & \text{if } x=a\end{cases}$$
if $n$ is odd.
I claim that this is a group action:
The trivial element $0$ is even, so it acts trivially. It remains to show that $n(mx)=(n+m)x$ for all $n,m\in\mathbb{Z}$ and $x\in X$.
If $n,m$ are even then $n+m$ is even and so $n(mx) = (n+m)x = x$.
If $n,m$ are odd then $n+m$ is even and we still have $n(m(x))=(n+m)x=x$.
If $n$ is odd and $m$ is even then $n+m$ is odd. In this case
$$n(m(a))=n(a)=b = (n+m)(a)$$
and
$$n(m(b))=n(b) = a = (n+m)(b)$$
Finally if $n$ is even and $m$ is odd then $n+m$ is odd. In this case
$$n(m(a))=n(b)=b = (n+m)(a)$$
$$n(m(b))=n(a)=a=(n+m)(b)$$
Thus in all cases we have that $n(m(x))=(n+m)x$ and so this really defines a group action.
The action is transitive of course because $\{0.x,1.x\} = \{a,b\}$ for all $x\in\{a,b\}$. |
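The case checking above can also be verified mechanically (a minimal sketch):

    def act(n, x):
        # the action described above: even integers fix X, odd integers swap a and b
        if n % 2 == 0:
            return x
        return "b" if x == "a" else "a"

    X = ["a", "b"]
    assert all(act(0, x) == x for x in X)                      # identity axiom
    assert all(act(n, act(m, x)) == act(n + m, x)              # compatibility axiom
               for n in range(-5, 6) for m in range(-5, 6) for x in X)
    assert {act(n, "a") for n in (0, 1)} == set(X)             # transitivity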
Learn Algebra with computational applications [Book Recommendation] | I noticed while reading Wikipedia a while back that one of our better contributors, Derek F. Holt, authored a book with Eick and O'Brien called Handbook of Computational Group Theory.
He appears, based on his posts, to know a lot about the subject, and group theory in general.
Also, while I haven't used it myself, I have gathered that GAP is a very useful tool for the subject. I suggest looking into it. |
What is the cardinality of the countable ordinals? | It is true that every ordinal can be written in a unique Cantor normal form, but in many cases this is a vacuous form. For example $\varepsilon_0$ is a countable ordinal such that $\omega^{\varepsilon_0}=\varepsilon_0$. Therefore the Cantor normal form of $\varepsilon_0$ is... $\omega^{\varepsilon_0}$.
Besides that the set of countable ordinals is a transitive set, and it is well-ordered by $\in$. It follows that it is an ordinal itself, denote it $\alpha$. However $\alpha$ cannot be countable, because then $\alpha\in\alpha$ which implies it is not well-ordered by $\in$ (and contradicts the axiom of regularity). Therefore $\alpha$ is uncountable.
But we note that every uncountable ordinal is strictly larger than all countable ordinals, and therefore $\alpha$ is the least uncountable ordinal. That is, $\alpha=\omega_1$ and therefore its cardinality is $\aleph_1$. |
Unsure about infinite continued fraction | In view of the fact that the standard continued fraction expansion of $e$ is
$$
e=2+\frac1{1+}\frac1{2+}\frac1{1+}\frac1{1+}\frac1{4+}\frac1{1+}\frac1{1+}\frac1{6+}\cdots\,,
$$
I think you are mistaken in thinking that your number might even be algebraic, much less quadratic. |
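One can see this expansion at work numerically (a minimal sketch): generating the partial quotients $[2;1,2,1,1,4,1,1,6,\dots]$ and applying the usual convergent recurrence reproduces $e$ to high accuracy.

    from math import e

    def e_cf_terms(count):
        # partial quotients 2, 1, 2, 1, 1, 4, 1, 1, 6, ...
        terms, m = [2], 2
        while len(terms) < count:
            terms += [1, m, 1]
            m += 2
        return terms[:count]

    def convergent(terms):
        h_prev, h = 1, terms[0]      # numerators:   h_n = a_n h_{n-1} + h_{n-2}
        k_prev, k = 0, 1             # denominators: k_n = a_n k_{n-1} + k_{n-2}
        for a in terms[1:]:
            h_prev, h = h, a * h + h_prev
            k_prev, k = k, a * k + k_prev
        return h, k

    h, k = convergent(e_cf_terms(15))
    print(h, k, h / k - e)           # 517656 190435, difference of order 1e-11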
A non-positively curved cube complex that admits a local isometric embedding into a Salvetti complex is special. | The key point to notice is that a combinatorial map $f: X \to Y$ sends two edges dual to the same hyperplane either to a single edge or to two edges dual to the same hyperplane. Thus, if $f$ is moreover locally injective and if $e_1,e_2$ are two edges with a common endpoint and dual to the same hyperplane, then $f(e_1)$ and $f(e_2)$ are two distinct edges of $Y$ dual to the same hyperplane. As a consequence, $f$ sends self-intersecting hyperplanes to self-intersecting hyperplanes. The same argument essentially holds for the other pathological configurations of hyperplanes. |
What means function f on set X? | Example: "$f$ is continuous on $B$" means: in each $x_0 \in B$ the function $f$ is continuous. |
Multivariate Hypergeometric Distribution With Wildcard | Hypergeometric Probability Distribution.
1.) What is the probability of making a flush in Poker given you are holding four cards of the same suit ?
(hypergeometric 1 47 1 9) = 9/47 ~ 19.148...%
2.) what is the probability of making a flush in Poker given you are holding three cards of the same suit ?
(hypergeometric 2 47 2 10) = 45/1081 ~ 4.1628... %
Similar to your problem, but not exactly the same.
A Scheme program that can be modified to extend the hypergeometric probability distribution:
    ;; Iterative factorial: accumulates the product 1 * 2 * ... * max-count.
    (define (factorial-iter-aux product counter max-count)
      (if (> counter max-count)
          product
          (factorial-iter-aux (* counter product)
                              (+ counter 1)
                              max-count)))

    (define (factorial-iter n)
      (factorial-iter-aux 1 1 n))

    ;; Binomial coefficient C(j, k).
    (define (choose j k)
      (/ (factorial-iter j) (* (factorial-iter k) (factorial-iter (- j k)))))

    ;; P(X = x) when drawing n items without replacement from a population of
    ;; size N that contains k successes.
    (define (hypergeometric x N n k)
      (/ (* (choose k x) (choose (- N k) (- n x))) (choose N n))) |
Why does $\tan(z)$ have a non-isolated singularity when z tends to infinity? | Another way to look at this is to note that the behavior of a function $f: \bar{\mathbb C} \to \bar{\mathbb C}$ as $z \to \infty$ is equivalent to considering $g(z) = f(1/z)$ as $z \to 0$. For $f(z) = \tan z$, even just along the line $\Im(z) = 0$ we have singularities of $g$ whenever $z \in S = \left\{( \frac{\pi}{2} + k \pi)^{-1} : k \in \mathbb Z\right\}$. But clearly, for any sufficiently small $\epsilon > 0$, there is always some $w$ satisfying $-\epsilon < w < \epsilon$ such that $w \in S$; i.e., no matter how small a radius we draw around the point $z = 0$, we can find a singularity of $g$ in that neighborhood. This means the singularity at $0$ is not isolated, and the corresponding singularity of $f$ at $\infty$ is not isolated. |
If $\frac{1}{5-2(z+z^{-1})}=f(z)+f(z^{-1})$ with $f$ analytic at $z=0$, what is $f$? | Okay. I have just figured out an elementary answer.
Since $5-2(z+z^{-1})=(z-2)(z^{-1}-2)$, we could expect
$$\frac{1}{5-2(z+z^{-1})}=\frac{az+b}{z-2}+\frac{az^{-1}+b}{z^{-1}-2}$$
for some $a$ and $b$. This leads to
$$1=(az+b)(z^{-1}-2)+(az^{-1}+b)(z-2)=2(a-2b)+(b-2a)(z+z^{-1}).$$
So:
$$\begin{cases}a-2b=1/2\\ b-2a=0\end{cases}$$
which gives $a=-1/6$ and $b=-1/3$, and hence
$$f(z)=-\frac{1}{6}\cdot\frac{z+2}{z-2}$$ |
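A quick symbolic confirmation that this $f$ works (a minimal sketch using sympy):

    import sympy as sp

    z = sp.symbols('z')
    f = -sp.Rational(1, 6) * (z + 2) / (z - 2)
    target = 1 / (5 - 2 * (z + 1 / z))
    assert sp.cancel(f + f.subs(z, 1 / z) - target) == 0
    print("f(z) + f(1/z) equals 1/(5 - 2(z + 1/z))")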
"Famille sommable" in English | It's a “summable family”. That's the expression that is used in the English version of Bourbaki's General Topology. |
a small technical detail | It's not meaningful for a single morphism to be stable under pushouts. A class $\mathcal M$ of morphisms, such as the regular monomorphisms, is said to be stable under pushouts if given a span $z\leftarrow x\to y$ with pushout $z\to p\leftarrow y$, if the map $z\leftarrow x$ is in $\mathcal M$ then so is the map $p\leftarrow y$. |
When is it possible to pass to the limit in the base and the exponent separately? | We have
$$
\lim_{n \to \infty } \frac{1}{{{\left( {{{\left( {1 + {1 \over { - 4{n^2}}}} \right)}^{ - 4{n^2}}}} \right)}^{{{1 - {n^2}} \over { - 4{n^2}}}}}} = \lim_{n\to\infty} a_n ^{b_n},
$$
where $\lim_{n\to\infty} a_n = e$ and $b_n = -\frac{1-n^2}{-4n^2}$.
Because $\lim_{n\to\infty} b_n$ exists and the function $f(x,y)=x^y$ is continuous at the point $\left(e,\ \lim_{n\to\infty} b_n\right)$, it is acceptable to say
$$
\lim_{n\to\infty} a_n ^{b_n} = \left(\lim_{n\to\infty} a_n\right)^{\lim_{n\to\infty} b_n}.
$$ |
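Numerically (a minimal check of my own): since $a_n\to e$ and $b_n\to-\tfrac14$, the original sequence tends to $e^{-1/4}\approx 0.7788$, and the terms indeed approach that value.

    import math

    def term(n):
        a = (1 + 1 / (-4 * n**2)) ** (-4 * n**2)
        return 1 / a ** ((1 - n**2) / (-4 * n**2))

    for n in (10, 100, 1000):
        print(n, term(n))
    print("limit", math.e ** (-0.25))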
How to show $I_k(t)=\int_\Omega (u(x,t)-k)_+^2 dx$ is absolutely continuous for any $u\in \mathring W^{1,1}_2(Q_T)$? | It is enough to show that $I_k$ is in $W^{1,1}(0,T)$. So multiply $I_k$ by $f'(t)$, where $f\in C^1_c(0,T)$, integrate in $t$ over $(0,T)$, and then use the definition of the weak derivative to integrate by parts. Since the function $g(z)=(z-k)_+$ is Lipschitz continuous, you can apply the chain rule, which tells you that $v(x,t)=(u(x,t)-k)_+$ is in $W^{1,2}$ with
$$
\frac{\partial v}{\partial t}(x,t)=\left\{
\begin{array}
[c]{ll}%
\frac{\partial u}{\partial t}(x,t) & \text{if }u(x,t)>k,\\
0 & \text{otherwise.}%
\end{array}
\right.
$$
Hence,
\begin{align*}
\int_{0}^{T}f^{\prime}(t)I_{k}(t)\,dt & =\int_{0}^{T}\int_{\Omega}f^{\prime
}(t)v^{2}(x,t)\,dxdt\\&=-\int_{0}^{T}\int_{\Omega}f(t)2v(x,t)\frac{\partial
v}{\partial t}(x,t)\,dxdt\\
& =-\int_{0}^{T}\int_{\Omega}f(t)2(u(x,t)-k)_{+}\frac{\partial u}{\partial
t}(x,t)\,dxdt\\
& =-\int_{0}^{T}f(t)\int_{\Omega}2(u(x,t)-k)_{+}\frac{\partial u}{\partial
t}(x,t)\,dxdt.
\end{align*}
This shows that the weak derivative of $I_{k}(t)$ is the function
$$
\omega(t)=\int_{\Omega}2(u(x,t)-k)_{+}\frac{\partial u}{\partial t}(x,t)\,dx,
$$
which is integrable since, by Hölder's inequality,
\begin{align*}
\int_{0}^{T}|\omega(t)|\,dt & \leq2\int_{0}^{T}\int_{\Omega}(u(x,t)-k)_{+}%
\left\vert \frac{\partial u}{\partial t}(x,t)\right\vert \,dxdt\\
& \leq2\left( \int_{0}^{T}\int_{\Omega}(u(x,t)-k)_{+}^{2}\,dxdt\right)
^{1/2}\left( \int_{0}^{T}\int_{\Omega}\left\vert \frac{\partial u}{\partial
t}(x,t)\right\vert ^{2}dxdt\right) ^{1/2}<\infty.
\end{align*} |
I am trying to understand part of a projection theorem proof in Hilbert spaces from some lecture notes. | You already proved in the existence part of the proof that there is a vector $y\in H'$ with minimal distance to $x$, i.e. $d(x,y) \leq d(x,z)$ for all $z \in H'$. In particular,
$$d(x,y) \leq d(x,y+tw)$$
for all $t \in \mathbb{R}$. Hence, $f(0) \le f(t)$ for all $t \in \mathbb{R}$ by the definition of $f$ and the claim follows. |
Obtain the value of $\int_0^1f(x) \ dx$, where $f(x) = \begin{cases} \frac {1}{n}, & \frac{1}{n+1}<x\le\frac{1}{n}\\ 0, & x=0 \end{cases}$ | Yes, it is Riemann integrable and
$$\int_0^1 f(x) dx=\sum_{n=1}^{\infty}\frac{1}{n}\left(\frac{1}{n}-\frac{1}{n+1}\right)=\sum_{n=1}^{\infty}\frac{1}{n^2}-\sum_{n=1}^{\infty}\frac{1}{n(n+1)}=\frac{\pi^2}{6}-1.$$
where we used the Basel problem and Mengoli's telescopic series. |
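A quick numerical check of this value (a minimal sketch via partial sums):

    from math import pi

    partial = sum((1 / n) * (1 / n - 1 / (n + 1)) for n in range(1, 200_000))
    print(partial, pi**2 / 6 - 1)    # both approximately 0.6449340...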
Notation for an arbitrary set of N elements | If we decode $0=\varnothing, 1=\{0\}, 2=\{0,1\}$ and generally $n+1=n\cup\{n\}$, then we have that $n$ itself is a set of $n$ elements.
If you don't want to confuse with the natural numbers you can use $[n]$ for example.
Whatever notation you choose, though, just write it down. For example:
We will use ${\frak S}(n)$ to denote a fixed set of $n$ elements.
Or if you prefer
Let $A_n$ be a set of size $n$.
Then you can continue with $A_n$ instead.
As for a concrete notation for the set of size $n$, well.. there is none which I can recall. |