Given $\phi ,\psi\in$ End($V$), show that $\phi \psi$ and $\psi \phi$ have the same eigenvalues. | The dimension of $V$ is finite here.
Suppose that $\phi(\psi(x))=0$ with $x\neq 0$. Then $\psi$ or $\phi$ is not bijective and has $0$ as an eigenvalue, so $\psi\circ \phi$ is not bijective and (since $\dim V<\infty$) also has $0$ as an eigenvalue. The same argument shows that if $0$ is an eigenvalue of $\psi\circ \phi$ then it is an eigenvalue of $\phi\circ \psi$.
Suppose that $\phi(\psi(x)) =cx$ with $c\neq 0$. Then $\psi(\phi(\psi(x)))=c\psi(x)=(\psi\circ \phi)(\psi(x))$. This implies that $\psi(x)$ is an eigenvector of $\psi\circ \phi$ associated with $c$. Remark that $\psi(x)\neq 0$, since $\phi(\psi(x))=cx\neq 0$. |
Sum of some columns of Character Table in GAP | To make a matrix containing character table entries, one can use ValuesOfClassFunction which returns the list of values of the class function $\psi$, the $i$-th entry being the value on the $i$-th conjugacy class of the underlying character table (see ?UnderlyingCharacterTable).
For example, let's take the group from the question and compute its character table "on the fly" (one could also use CharacterTable("D20") to fetch the precomputed table from The GAP Character Table Library, which is a GAP package redistributed together with the system). So, we have
gap> G:=DihedralGroup(IsPermGroup,20);
Group([ (1,2,3,4,5,6,7,8,9,10), (2,10)(3,9)(4,8)(5,7) ])
gap> t:=CharacterTable(G);;
gap> Display(t);
CT2
2 2 2 2 1 1 1 1 2
5 1 . . 1 1 1 1 1
1a 2a 2b 10a 5a 10b 5b 2c
2P 1a 1a 1a 5a 5b 5b 5a 1a
3P 1a 2a 2b 10b 5b 10a 5a 2c
5P 1a 2a 2b 2c 1a 2c 1a 2c
7P 1a 2a 2b 10b 5b 10a 5a 2c
X.1 1 1 1 1 1 1 1 1
X.2 1 -1 -1 1 1 1 1 1
X.3 1 -1 1 -1 1 -1 1 -1
X.4 1 1 -1 -1 1 -1 1 -1
X.5 2 . . A -*A *A -A -2
X.6 2 . . *A -A A -*A -2
X.7 2 . . -*A -A -A -*A 2
X.8 2 . . -A -*A -*A -A 2
A = -E(5)-E(5)^4
= (1-Sqrt(5))/2 = -b5
Now the matrix may be "extracted" in the following way:
gap> m:=List(Irr(t),ValuesOfClassFunction);;
gap> Display(m);
[ [ 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, -1, -1, 1, 1, 1, 1, 1 ],
[ 1, -1, 1, -1, 1, -1, 1, -1 ],
[ 1, 1, -1, -1, 1, -1, 1, -1 ],
[ 2, 0, 0, -E(5)-E(5)^4, E(5)^2+E(5)^3, -E(5)^2-E(5)^3, E(5)+E(5)^4, -2 ],
[ 2, 0, 0, -E(5)^2-E(5)^3, E(5)+E(5)^4, -E(5)-E(5)^4, E(5)^2+E(5)^3, -2 ],
[ 2, 0, 0, E(5)^2+E(5)^3, E(5)+E(5)^4, E(5)+E(5)^4, E(5)^2+E(5)^3, 2 ],
[ 2, 0, 0, E(5)+E(5)^4, E(5)^2+E(5)^3, E(5)^2+E(5)^3, E(5)+E(5)^4, 2 ] ]
For a character table with known underlying group $G$ (like in this case), the ConjugacyClasses attribute stores a list of conjugacy classes of $G$:
gap> ConjugacyClasses(t);
[ ()^G, (2,10)(3,9)(4,8)(5,7)^G, (1,2)(3,10)(4,9)(5,8)(6,7)^G,
(1,2,3,4,5,6,7,8,9,10)^G, (1,3,5,7,9)(2,4,6,8,10)^G,
(1,4,7,10,3,6,9,2,5,8)^G, (1,5,9,3,7)(2,6,10,4,8)^G,
(1,6)(2,7)(3,8)(4,9)(5,10)^G ]
Therefore, it's easy to figure out that e.g. the column corresponding to the conjugacy class of $(1,9,7,5,3)(2,10,8,6,4)$ is the 5th column:
gap> PositionProperty( ConjugacyClasses(t), x -> (1,9,7,5,3)(2,10,8,6,4) in x );
5
The rest is straightforward. Please note that rows correspond to irreducible characters and not to conjugacy classes. |
Number theory problem in induction | Use strong induction.
The base cases are clear. Next suppose $n>1$ is an integer and that the result is true for all positive integers less than $n$.
Case $1$: $n$ is even. Then $\frac{n}{2}$ is a positive integer less than $n$, so by the induction hypothesis it is expressible as $\frac{n}{2} = 2^k m$ for a unique odd $m$. Then $n = 2^{k+1} m$, and this is the unique way to express $n$ in such a form: any expression $n=2^j m'$ with $m'$ odd forces $j\geq 1$ (as $n$ is even), and halving it gives an expression $\frac{n}{2}=2^{j-1}m'$, which is unique by hypothesis.
Case $2$: $n$ is odd. Then $n = 2^0 n$ is one way to express $n$ in the required form. In any expression $n = 2^k m$ with $m$ odd we must have $k=0$ (otherwise $n$ would be even), and then clearly $m=n$, so the expression above is unique. |
Computing H1 norm numerically | The finite difference method computes a point-wise approximation of $u_{\mathrm{true}}$. You have to do a sort of post-processing of the FDM approximation $u_h$ to obtain something whose derivative you can compute or approximate. E.g., in 1D it is reasonable to reconstruct a $\tilde{u}_h$ which is linear on each interval and satisfies $\tilde{u}_h(x_i)=u_h(x_i)$ at the points $x_i$ of the discretisation. Such a function you can differentiate (piecewise).
For computing the error, I would suggest that you compute the integrals $\int_{x_{i-1}}^{x_i}(\tilde{u}_h'(x)-u_{\mathrm{true}}'(x))^2\,\mathrm{d}x$ on a reasonable discretisation of each interval $[x_{i-1},x_i]$, in order to diminish the quadrature error due to the nonlinearity of $u_{\mathrm{true}}$. Of course, the same holds for the $L_2$-norm. |
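As a sketch of this recipe in Python (with made-up data: nodal values of $u(x)=\sin(\pi x)$ standing in for an FDM solution, so $u_{\mathrm{true}}'$ is known exactly), reconstruct the piecewise-linear $\tilde u_h$ and accumulate the per-interval quadrature:

```python
import math

def h1_seminorm_error(x, u, du_true, n_quad=20):
    """H1-seminorm error between the piecewise-linear interpolant of the
    nodal values u and a function with exact derivative du_true.  On each
    interval the interpolant's derivative is the constant slope; the
    integral of (slope - du_true)^2 is approximated by the midpoint rule
    on n_quad subintervals."""
    total = 0.0
    for i in range(1, len(x)):
        h = x[i] - x[i - 1]
        slope = (u[i] - u[i - 1]) / h          # derivative of the reconstruction
        dq = h / n_quad
        for j in range(n_quad):
            xm = x[i - 1] + (j + 0.5) * dq     # midpoint quadrature node
            total += (slope - du_true(xm)) ** 2 * dq
    return math.sqrt(total)

# toy data: nodal values of u(x) = sin(pi x) on a uniform grid
N = 64
xs = [i / N for i in range(N + 1)]
us = [math.sin(math.pi * xi) for xi in xs]
err = h1_seminorm_error(xs, us, lambda t: math.pi * math.cos(math.pi * t))
```

The error decays like $O(h)$, as expected for a piecewise-linear reconstruction in the $H^1$ seminorm.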
Number theoretic puzzle involving 123456789 | The answer is: $a=3607, b=14, c=3$.
Obviously $123456789$ is divisible by $9$: $123456789/9=13717421=:n$.
Let us assume that $n$ is square-free. In this case $c=3$ and $n=a(a+b^2)$. Let $u$ and $v$ be two complementary divisors of $n$: $u\cdot v =n$. Then:
$$
(u+v)^2-(u-v)^2=4n \Rightarrow (u+v)^2=4n+(u-v)^2.
$$
It follows from our assumption that $u-v=b^2$ is an even perfect square. Adding the fourth powers of successive even numbers to $4n$, one is finally lucky with $u-v=196$ (i.e. $b=14$), which makes $4n+(u-v)^2$ a perfect square. The resulting values $u=3803$ and $v=3607$ are primes, so the above assumption was correct.
From the prime factor decomposition $123456789=3^2\cdot3607\cdot 3803$ the answer follows. |
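The search described above is easy to replay in Python, using only the setup from the answer ($n=13717421$ and $u-v=b^2$ an even square):

```python
import math

n = 123456789 // 9            # = 13717421
assert 123456789 == 9 * n

found = None
# u - v must be an even perfect square b^2, so try even b and test whether
# (u + v)^2 = 4n + b^4 is a perfect square
for b in range(2, 200, 2):
    d2 = 4 * n + b ** 4
    s = math.isqrt(d2)
    if s * s == d2:
        u, v = (s + b * b) // 2, (s - b * b) // 2
        if u * v == n:
            found = (b, v, u)  # (b, smaller factor, larger factor)
            break
```

The first hit is `found == (14, 3607, 3803)`, matching $a=3607$, $b=14$ and the factorization $123456789=3^2\cdot3607\cdot 3803$.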
number of ways you can partition a string into substrings of certain length | Let $C_{n,r}$ be the counting we are looking for: partitions of a string of length $n$ into parts with length up to $r$. This set can be divided according to the length of the last segment.
But the number of such partitions ending with a segment of length $k$ equals the number of partitions of the prefix string, of length $n-k$ (always with the same restriction). And then we have the recursive expression:
$$C_{n,r} = C_{n-1,r}+C_{n-2,r}+\cdots + C_{n-r,r}=\sum_{k=1}^r C_{n-k,r}$$with the initial conditions $C_{0,r}=1$ and $C_{j,r}=0$ for $j<0$.
This is the expression of generalized Fibonacci numbers (plain Fibonacci numbers if $r=2$), sometimes called r-step Fibonacci numbers.
For example, for $r=4$ we get (starting from $n=0$) the tetranacci numbers: $1,1,2,4,8,15,29,56 \cdots$
Notice that, for $1\le n\le r$, we get $C_{n,r}=2^{n-1}$, which should be obvious (all $2^{n-1}$ partitions of the string are valid).
The general value $C_{n,r}$ has no simple closed form. One formula (with the indexing $C_{0,r}=1$ used above):
$$ C_{n,r} = {\rm round}\left(\frac{z^{n}(z-1)}{(r+1)z -2 r}\right)$$
where $z$ is the root of $x+x^{-r}=2$ near $x=2$ (can be obtained numerically by iterating $z_{t+1}=2 - z_{t}^{-r}$, starting from $z_0=2$) and ${\rm round}(\cdot)$ returns the nearest integer.
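A short Python check of the recursion against the rounded closed form (with the indexing $C_{0,r}=1$, the exponent that matches the data is $z^{n}$):

```python
def count_partitions(n, r):
    """C(n, r): number of ways to cut a length-n string into consecutive
    pieces of length at most r, by the recursion above."""
    C = [1] + [0] * n                        # C(0, r) = 1
    for m in range(1, n + 1):
        C[m] = sum(C[m - k] for k in range(1, min(r, m) + 1))
    return C

seq = count_partitions(7, 4)                 # tetranacci: 1,1,2,4,8,15,29,56

def closed_form(n, r):
    z = 2.0
    for _ in range(200):
        z = 2 - z ** (-r)                    # fixed-point iteration for the root near 2
    return round(z ** n * (z - 1) / ((r + 1) * z - 2 * r))
```

For $r=4$ the two agree on $1,1,2,4,8,15,29,56$, and for $r=2$ the formula reproduces the Fibonacci numbers $1,1,2,3,5,8,13,\dots$
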
A rough approximation for large $n$ is
$$ C_{n,r} \approx 2^{n-1}(1-2^{-r})^{n/2}$$ |
Maximize $x^2+y^2+z^2$ subject to $x+2y+3z=4$ and $x,y,z\geq 0$ | After replacing the constraint $x$, $y$, $z>0$ by $x$, $y$ , $z\geq0$ the feasible domain becomes a closed triangle $T$ with its vertices on the positive coordinate axes. Imagine a sphere centered at $O$, so large that it contains the triangle $T$ in its interior. Now continually shrink this sphere until it hits one of the vertices of $T$. This will be the vertex $X:=(4,0,0)$, since the other vertices are lying nearer to $O$. It follows that the point of $T$ farthest away from $O$ is $X$, and that the maximum of the given function on $T$ is $4^2=16$. |
Trying to show $\Vert A (\sum_{j=1}^n v_j)\Vert_2^2 = \sum_{j=1}^n\lambda_j^2$? | The author is correct. Note that $v_{i}^{T}v_{j}=0$ for $i \neq j$, and that $v_{i}^{T}v_{i}=1$ for all $i$. When you expand the sum you'll find that most of the terms vanish and the remaining terms simplify to give you the desired result. |
What is the remainder of $3^{13} \div 25$ by hand using Euler's theorem? | You've already used Euler's theorem to show that $3^{13}\equiv2\pmod{25}$ or $3^{13}\equiv23\pmod{25}$, as
$$(3^{13})^2\equiv3^{26}\equiv3^{26\mod{\phi(25)}}\equiv3^{26\mod{20}}\equiv3^6\equiv4\pmod{25}.$$
You can use Euler's theorem mod $5$ to finish the proof; you know that
$$3^{13}\equiv3^{13\mod{\phi(5)}}\equiv3^{13\mod{4}}\equiv3^1\pmod{5},$$
and hence $3^{13}\equiv23\pmod{25}$. |
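The congruences used above are easy to confirm directly in Python:

```python
# direct verification of each step of the argument
assert pow(3, 13, 25) == 23          # 3^13 ≡ 23 (mod 25), not 2
assert pow(3, 26, 25) == 4 and pow(3, 6, 25) == 4
assert pow(3, 13, 5) == 3            # the mod-5 information that rules out 2
```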
probability lottery ticket | Hint:
Note, there are three cases to consider.
All three digits are different
Two of the three digits are the same
All of the digits are the same
How many winning tickets are there if the ticket drawn is something like 123 with all numbers different? What does this say about the total profit?
How many winning tickets are there if the ticket drawn is something like 112 with two numbers the same? What does this say about the total profit?
How many winning tickets are there if it is something like 111 with all three numbers the same? What does this say about the total profit?
What are the chances of being in each of the different states?
If you define a random variable, $X$, to be the total profit, how do you use the previously determined information to find $E[X]$? |
Conjugacy classes and Centers | Your proof is correct but you have some unnecessary steps at the end. You don't have to prove $b=a$ the way you did: you already know that $xax^{-1}=a$ for all $x$, and so in particular you can take $x=y$ to conclude immediately $b=yay^{-1}=a$. |
Definite integral of $|t-s|^{2a-2}$ | The formula is correct if $t \geq 0, s \geq 0$.
You can consider the cases $t<s$ and $t>s$ separately and split the integration range into subranges where $u<v$ and $u>v$ for each case.
Alternatively, you can write
$$a (2a-1) \int_0^t \int_0^s ((u-v)^2)^{a-1} du dv = \\
a \int_0^t (u-v) ((u-v)^2)^{a-1} \Big\rvert_{u=0}^s dv = \\
a \int_0^t (s-v) ((s-v)^2)^{a-1} dv +
a \int_0^t v (v^2)^{a-1} dv = \\
-\frac 1 2 ((s-v)^2)^a \bigg\rvert_{v = 0}^t +
\frac 1 2 (v^2)^a \bigg\rvert_{v = 0}^t = \\
\frac 1 2 (|s|^{2a} + |t|^{2a} - |s-t|^{2a}).$$
This is valid for any real $t,s$. |
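One can spot-check the final identity numerically, e.g. for $a=3/2$ (so the integrand is just $|u-v|$), $t=1$, $s=2$, where the right-hand side is $4$ (Python, crude midpoint rule):

```python
def lhs(t, s, a, n=400):
    """a(2a-1) * integral over [0,t]x[0,s] of |u-v|^(2a-2), midpoint rule."""
    du, dv = s / n, t / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        for j in range(n):
            v = (j + 0.5) * dv
            total += abs(u - v) ** (2 * a - 2)
    return a * (2 * a - 1) * total * du * dv

def rhs(t, s, a):
    return 0.5 * (abs(s) ** (2 * a) + abs(t) ** (2 * a) - abs(s - t) ** (2 * a))

approx, exact = lhs(1.0, 2.0, 1.5), rhs(1.0, 2.0, 1.5)
# exact = 0.5 * (8 + 1 - 1) = 4
```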
Calculating the force on one side of a circular plate submerged vertically in water. | HINT
In that case it is convenient to use polar coordinates with origin at the center of the circular plate and the $y$ axis pointing upward; indeed we have
$x=r\cos \theta$
$y=r\sin\theta$
$r\in[0,R],\quad\theta\in[0,2\pi]$
pressure: $f(y)=\rho g(R-y)$ (taking the water surface at the top of the plate, $y=R$)
that is
$$F=\int_0^{2\pi} \int_0^R\rho g(R-r\sin\theta)\,r\,dr\,d\theta$$ |
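Interpreting $y$ as $r\sin\theta$ inside the disk, the $\sin\theta$ term integrates to zero over a full period, leaving $F=\rho g R\cdot\pi R^2=\rho g\pi R^3$. A quick midpoint-rule check in Python (with assumed values $\rho=1000$, $g=9.81$, $R=1$):

```python
import math

rho, g, R = 1000.0, 9.81, 1.0     # assumed sample values
n = 400
dr, dth = R / n, 2 * math.pi / n
F = 0.0
for i in range(n):
    r = (i + 0.5) * dr            # midpoint in the radial direction
    for j in range(n):
        th = (j + 0.5) * dth      # midpoint in the angular direction
        F += rho * g * (R - r * math.sin(th)) * r * dr * dth
exact = rho * g * math.pi * R ** 3
```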
Simplifying a limit with two variables | Note that you made a mistake here:
$$\lim_{h\to0}\dfrac{4(x+h)-13-4x-\color{red}{13}}{h}$$
it should be
$$\lim_{h\to0}\dfrac{4(x+h)-13-4x\color{red}{+13}}{h}$$
Given $f(x)=4x-13$
$$\lim_{h\to0}\dfrac{f(x+h)-f(x)}{h}$$
$$\lim_{h\to0}\frac{4(x+h)-13-4x+13}{h}$$
$$\lim_{h\to0}\frac{4x+4h-4x}{h}$$
$$\lim_{h\to0}\frac{4h}{h}$$
$$=4$$ |
how find minimum value of this function ? $f=\prod_{i=1}^{20}b_i(x)$ | I use the fact that the function f is continuous.
For $x<a_{10}$, $f(x)$ has the form of $K/x^n$, so the function is decreasing. The minimum cannot be there
For $x>a_{11}$, $f(x)$ has the form of $K*x^n$, so the function is increasing. The minimum cannot be there
For $a_{11} >x>a_{10} $, $f(x) = \frac{a_{20}*...*a_{11}}{a_{10}*...*a_1}$ is constant. Here is the minimum |
How to write $e^{2iz}$ in the form $u(x,y)+iv(x,y)$? | Hint: $e^{2iz}=e^{2i(x+iy)} = e^{-2y} \cdot e^{i \cdot 2x}$ |
If all $a_n \geq 0$, what does $\sum_{n=1}^\infty \frac{a_n}{n}<\infty$ imply on the $a_n$'s? | No. Let $$a_n = \begin{cases} 1, & n=k^2, \\ 1/n, & \text{otherwise}.\end{cases}$$ |
showing compactness for a subset of a function space | Assume that $A$ is finite, and let $N$ be its cardinality. Then $X_A$ is homeomorphic to $\mathbb R^N$ endowed with the usual topology and $\overline{B_1(\beta)}$ is $[-1,1]^N$, a compact set.
Assume that $A$ is infinite. It contains $\{x_n,n\in\mathbb N\}$, a countable collection of distinct points. Define $g_n$ by $g_n(x_n)=1$ and $g_n(x)=0$ if $x\neq x_n$. Then $g_n\in \overline{B_1(\beta)}$ and $\alpha(g_m,g_n)=1$ if $m\neq n$, so we cannot extract a convergent subsequence. |
Bounded linear functional separating a hyperplane and a point | First of all, as stated the problem fails trivially, since we have the zero functional (which is always bounded and it gives equality, not strict inequality). So suppose that we exclude the zero functional from the question. If the convex set is a subspace, the question has no interest:
If $Y$ is a proper, closed subspace of $X$ and $x_0\in X\setminus Y$, then there exists a functional $f\in X^*$ so that $f=0$ on $Y$ and $f(x_0)>0$, so $\text{Re}(f(x_0))=f(x_0)>0=\sup\{f(x): x\in Y\}$.
If $Y$ is a subspace of $X$ that is not closed, nor dense, then $\bar{Y}$ is a proper closed subspace of $X$ and the previous case yields a non-zero functional that fails to satisfy $\text{Re}(f(x_0))<\sup\text{Re}(f(x))$.
If $Y$ is a dense subspace of $X$, then any bounded functional $f$ that is zero on $Y$ is identically zero on $X$, because of continuity. Now suppose that $f$ is a non-zero functional on $Y$, say $f(y_0)\neq0$. Consider the element $y_1=\overline{f(y_0)}\cdot y_0\in Y$. Then $f(y_1)=|f(y_0)|^2>0$. But the elements $y_n=n\cdot y_1$ belong to $Y$ and $f(y_n)=n|f(y_0)|^2$, so $\sup\{\text{Re}(f(y)): y\in Y\}=\infty$, and hence for any $x_0\in X$ we have that $\text{Re}(f(x_0))<\sup\{\text{Re}(f(y)): y\in Y\}$. |
All triangles that have the same orthocenter and circumcircle have the same nine-point circle | True. Let $H$ be the orthocenter, $O$ the circumcenter, $R$ the circumradius, and $F$ the midpoint of $HO$. Then the nine-point circle has center $F$ and radius $\frac 12 R$; since $H$, $O$ and $R$ are shared by all the triangles, so is the nine-point circle. |
Functions allowed in Hilbert and Banach spaces. | Banach spaces and Hilbert spaces are not simply sets of functions, but sets equipped with a vector space structure and a further function (a norm for Banach spaces and an inner product for Hilbert spaces) that has important algebraic and topological properties. In particular, a Banach space naturally becomes a metric space and is complete in the topology induced by this metric. A Hilbert space is a space where we can define orthogonality and where every bounded linear functional can be expressed as an inner product with a well-defined and unique vector (Riesz Representation Theorem).
We know that a Hilbert space is always a Banach space, but the converse is not true. Every inner product induces a norm, but a norm is derived from an inner product only if it satisfies the parallelogram law:
$$
||x+y||^2+||x-y||^2=2||x||^2+2||y||^2
$$
So, given the same vector space $V$ of functions, we can have a norm on it such that $V$ is a Banach space but not a Hilbert space, and a different norm under which $V$ is a Hilbert space.
As an example, let $C[0,1]$ be the vector space of continuous real functions on $[0,1]$ with the usual pointwise sum and scalar product.
With the sup-norm
$$ ||f||_\infty=\sup _{x \in [0,1]}|f(x)| $$
it is a Banach space but not a Hilbert space (to show this, test the parallelogram equality with the two functions $f(x)=x$ and $g(x)=1$). But with the norm
$$
||f||_2=\left( \int_0^1|f(x)|^2dx \right)^{1/2}
$$
the norm comes from the inner product $\langle f,g\rangle=\int_0^1 f(x)g(x)\,dx$, so the parallelogram law holds; strictly speaking, $C[0,1]$ is not complete in this norm, and its completion is the Hilbert space $L^2[0,1]$. |
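The parallelogram-law failure for the sup-norm with $f(x)=x$, $g(x)=1$ can be checked numerically (Python, approximating the sup-norm on a fine grid of $[0,1]$):

```python
# sup-norms on a fine grid of [0, 1]; the grid contains 0 and 1, where the
# relevant maxima are attained, so the values below are exact
N = 10001
grid = [i / (N - 1) for i in range(N)]

def sup_norm(h):
    return max(abs(h(x)) for x in grid)

f = lambda x: x
g = lambda x: 1.0
lhs = sup_norm(lambda x: f(x) + g(x)) ** 2 + sup_norm(lambda x: f(x) - g(x)) ** 2
rhs = 2 * sup_norm(f) ** 2 + 2 * sup_norm(g) ** 2
# lhs = 2^2 + 1^2 = 5 but rhs = 2 + 2 = 4: the parallelogram law fails,
# so the sup-norm is not induced by any inner product
```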
A very simple look on limits | Sure, I think this is a fine way to calculate limits. For example, consider
$$f(x)=\frac{x^2-1}{x-1}$$
And we want to take the limit $x\to1$:
$$\frac{x^2-1}{x-1}=\frac{(x-1)(x+1)}{x-1}=\frac{\require{cancel}\cancel{(x-1)}(x+1)}{\cancel{x-1}}=x+1$$
So that we have
$$\lim_{x\to1}f(x)=\lim_{x\to1}x+1=1+1=2$$
This is often seen as the 'algebraic' way of evaluating a limit, and the key is to remove whatever is keeping us from calculating the limit. |
Estimating Gamma PDF parameters from data with negative increments | This can be done using a particle filter: https://en.wikipedia.org/wiki/Particle_filter.
A useful reference is the paper "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking": https://www.irisa.fr/aspi/legland/ensta/ref/arulampalam02a.pdf |
Tangential equation by a point on the graph | The equation of the tangent to $y=e^x$ at the point $x=a$ is $y-e^a = e^a(x-a)$ since the tangent has slope $f'(a)=e^a$ and passes through the point $(a,e^a)$.
I surmise your question is "When do the tangents to $y=e^x$ pass through the point $(1,1)$?"
This occurs exactly when $(x,y)=(1,1)$ satisfies the equation $y-e^a=e^a(x-a)$. Thus we
must have $1-e^a=e^a(1-a)$. Thus $1-e^a=e^a-ae^a$ and so $2e^a-ae^a-1=0$. This is a non-linear equation whose solutions, found numerically (e.g. by Wolfram Alpha), are $a\approx-1.14619$ and $a\approx1.84141$. |
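For the record, the two roots of $2e^a-ae^a-1=0$ can be recovered with a few lines of bisection (Python; the bracketing intervals $[-2,0]$ and $[1,2]$ are found by inspecting signs):

```python
import math

def h(a):
    return 2 * math.exp(a) - a * math.exp(a) - 1    # h(a) = (2 - a) e^a - 1

def root_between(lo, hi, tol=1e-10):
    """Plain bisection; assumes h changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

roots = root_between(-2.0, 0.0), root_between(1.0, 2.0)
# roots ≈ (-1.14619, 1.84141)
```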
Proof Verification: Identity is the only idempotent element | Nowhere did you prove that $f$ satisfies $fx=x$ for all $x\in G$, nor did you for $g$. In fact, everything you said would hold under the weaker assumption that $G$ is just a monoid, and in that case the thesis would not hold. You really need to use the existence of inverses. |
If $(\Omega, F, m)$ is a $\sigma$-finite measure space, show the following | Hint:
$m(E)=m(\cup_{n=1}^{\infty}(E_n\cap E))\leq \sum_{n=1}^{\infty}m(E_n\cap E)$
So you must prove that you can cover each $E_n\cap E$ (which has finite measure by monotonicity) with the desired sets. |
Can we list down all order 4 integer valued 3 x 3 matrices | To get some examples, you can just remember the $2 \times 2$ representation of $i$ as $\begin {pmatrix} 0&-1\\1&0 \end {pmatrix}$ which has fourth power $I$. Now just add one more row/column that has a $1$ on the main diagonal somewhere. Also all matrices of the form $\begin {pmatrix} \pm 1&0&0\\0& \pm 1&0\\0&0&\pm 1 \end {pmatrix}$ will qualify. So will $I$ itself.
To try to find them all, you can write an arbitrary matrix with nine variables for entries, take it to the fourth power, equate it to $I$, and see where that leads. In theory it is nine equations in nine unknowns, but there may well be dependencies there. |
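A direct check of the first construction (plain Python, no libraries): embed the $2\times2$ matrix for $i$ in the top-left block and pad with a $1$ on the diagonal:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M = [[0, -1, 0],    # the 2x2 representation of i ...
     [1,  0, 0],
     [0,  0, 1]]    # ... padded with a 1 on the main diagonal

M2 = matmul(M, M)
M4 = matmul(M2, M2)
# M4 == I3 while M and M2 are not the identity, so M has order exactly 4
```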
How many ways are there to select 3 numbers from the set {1,2,3...99} so that the sum is even? | You could also focus on the remainder of the numbers $\mod 2$. Then the list of the numbers $\{1,2,3, ... ,99\}$ would turn into $\{1,0,1,0, ... ,1\}$, with $50$ ones and $49$ zeros. For the sum of $3$ numbers to be $\equiv 0 \mod 2$ the only options are:
$2$ ones and $1$ zero. Here there are ${50\choose2}\cdot{49\choose1}$ ways.
$3$ zeros. Here there are ${49\choose 3}$ ways.
So the total number of ways is
$$
{50\choose2}\cdot{49\choose1} + {49\choose 3} = 49^2 \cdot 25 + 49 \cdot 47 \cdot 8 = 49 \cdot 1601.
$$
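Both the count and the factored form are easy to confirm by brute force in Python:

```python
from itertools import combinations
from math import comb

# enumerate all 3-element subsets of {1, ..., 99} and count even sums
brute = sum(1 for c in combinations(range(1, 100), 3) if sum(c) % 2 == 0)
formula = comb(50, 2) * comb(49, 1) + comb(49, 3)
# brute == formula == 78449 == 49 * 1601
```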
It's quite similar reasoning, but this method is easier to use when you're dealing with larger moduli, e.g. "the sum of $3$ numbers is $\equiv 0 \mod 37$". |
Find all differentiable function for which $f'(x)=177f(x)$ | $$\frac{f'(x)}{f(x)}=177$$
Integrating both sides with respect to $x$ gives
$$\ln{|f(x)|}=177x+C_1$$
$$f(x)=\pm e^{177x+C_1}=C_2e^{177x}$$
where $C_2=\pm e^{C_1}$; allowing $C_2=0$ also recovers the trivial solution $f\equiv 0$, which the division by $f(x)$ excluded. |
Can I find a good approximation of this function? | For $z \rightarrow 0$ a taylor approximation is fine.
I leave this part to you.
For $z\rightarrow 1$ we may observe that the relevant contributions arise from the terms containing a $1-z^2$. The other contributions will only give finite corrections in this limit and can be approximated by their values at $z=1$. Furthermore we observe that $\sinh(x)\sim \frac{e^x}{\bf2}$ as $x\rightarrow \infty$. Taking all this into account we may write
$$
\lim_{z\rightarrow 1_-}f(z)\sim \log\left(\frac{1-z^2}{2}\frac{e^{\frac{2}{1-z^2}}}{{\bf2}}\right)=\frac{2}{1-z^2}+\log(1-z^2)-\log(4)
$$
Edit:
A forgotten $\frac{1}{\bf2}$ was inserted. Now the graph looks even better! |
Finite State Markov Chain Stationary Distribution | There are a number of ways to prove this. In a recent article "What is a stationary measure?", Alex Furman outlines a straightforward proof using only linear algebra. He notes that the existence of an invariant probability measure also follows from the Brouwer fixed point theorem. |
Question on abstract algebra about invertiblity? | This isn't possible in a group. If $ab = e$, then $b=a^{-1}$ and $a=b^{-1}$, which implies $ba=e$. |
How to compute $\int_C \frac{z^4}{z^5-z-1}$ where $C$ is the radius $2$ circle centred at $(0,0)$? | Step 1: all the roots of $z^5-z-1$ are simple and lie inside the region $\left|z\right|\leq\frac{3}{2}$.
The roots are simple because $\gcd(z^5-z-1,5z^4-1)=1$, and they lie in the region $\left|z\right|\leq\frac{3}{2}$ because if $\left|z\right|>\frac{3}{2}$ then $\left|z\right|^5 > |z|+1 \geq |z+1|$, contradicting $z^5=z+1$.
Step 2: by Cauchy's theorem, the given integral equals $2\pi i$ times the sum of the residues of the function $f(z)=\frac{z^4}{z^5-z-1}$, which also equals $\oint_{|z|=R}f(z)\,dz$ for any $R>\frac{3}{2}$. However:
$$ \oint_{|z|=R} f(z)\,dz - \oint_{|z|=R}\frac{dz}{z} = o(1) $$
for $R\to +\infty$, hence the given integral simply equals $\color{red}{2\pi i}$. This happens because for any $z$ far from the origin $f(z)-\frac{1}{z}=O\left(\frac{1}{|z|^5}\right)$, hence by integrating $f(z)-\frac{1}{z}$ over the circle $|z|=R$ we get something that is $o(1)$ for $R\to +\infty$. |
Missing and parasite roots in the trigonometric equation. | Just minor slippage. For example, from $\cos 2x=1$ you concluded that $2x=\pi k$. That is false, we need $k$ to be even: if for example $2x=\pi$, then $\cos 2x=-1$.
I don't understand your concern with $\frac{\pi}{3}$. It works. |
Dividing a polynomial with integer coefficients by a quadratic | Some quick observations : if $x$ is a root of $P(x)$ then so is $-x$. Thus if $(x^2-x+a)$ is a factor of $P(x)$, so is $(-x)^2-(-x)+a=x^2+x+a.$
Then $(x^2-x+a)(x^2+x+a)=x^4+(2a-1)x^2+a^2$ is a factor of $P(x)$. Since $a^2\mid36$, $a \in \{1,2,3,6\}$ (taking $a>0$). (Here @Anwesha1729 observes in their answer that, as $P(x)$ also takes odd values for integer $x$, $a$ must be odd.)
Let $P(x)=(x^4+(2a-1)x^2+a^2)(x^4+bx^2+c)$.
Comparing coefficients :
$$b+(2a-1)=5$$
$$a^2+c+(2a-1)b=13$$
$$a^2b+(2a-1)c=20$$
$$a^2c=36$$
It can be readily checked that $a=3$ works, with $b=0$ and $c=4$. |
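The coefficient comparison can be checked by multiplying out the two quartics in Python (assuming, as the system above implies, that $P(x)=x^8+5x^6+13x^4+20x^2+36$):

```python
def polymul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

f1 = [9, 0, 5, 0, 1]   # x^4 + 5x^2 + 9   (a = 3, so 2a - 1 = 5, a^2 = 9)
f2 = [4, 0, 0, 0, 1]   # x^4 + 4          (b = 0, c = 4)
P = polymul(f1, f2)
# P == [36, 0, 20, 0, 13, 0, 5, 0, 1], i.e. x^8 + 5x^6 + 13x^4 + 20x^2 + 36
```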
clarify my understanding of (mod) in a function and explain the alternative solution | Yes. $a\equiv b\mod c$ means that $a-b$ is divisible by $c$. So $16\equiv 1\mod 5$ is true, as $5$ divides $16-1=15$.
For the alternate solution, the first step is simply expanding the bracket to get $a^2+8a+16$.
Now, because we're working modulo $5$, we can drop any multiple of $5$ from the expression, since doing so does not change its residue modulo $5$. Hence we drop $5a$ and $15$, leaving us with $a^2+3a+1$. |
Is similar triangles have equal areas a proposition? | In logic, the converse of a categorical or implicational statement is the result of reversing its two parts. For the implication $P → Q$, the converse is $Q → P$. For the categorical proposition "All $S$ are $P$", the converse is "All $P$ are $S$". In neither case does the converse necessarily follow from the original statement.
Thus if the original proposition is "similar triangles have equal areas", we can translate it as:
$∀x∀y(\text {Tr}(x) ∧ \text {Tr}(y) \to (\text {Sim}(x,y) \to \text {SameArea}(x,y)))$
in which case we can "convert" it to : $∀x∀y(\text {Tr}(x) ∧ \text {Tr}(y) \to (\text {SameArea}(x,y) \to \text {Sim}(x,y))).$
Both are propositions in the "logical" sense, i.e. they have a definite truth value.
"If $x > 3$, then $x > 5$" is not a proposition because it has no definite truth value: its truth value depends on the value assigned to $x$.
It is an open formula.
But if we quantify it, what we get is a sentence : $\forall x ((x > 3) \to (x > 5))$, which is FALSE in $\mathbb N$. |
How does the 3D to 2D projection for CALTag work | Yes that is how it was was meant.
The initial guess is always the centroid of the known point-correspondences and improved iteratively by calculating:
$$\begin{bmatrix}
\frac{\partial f_x(u, v)}{\partial u} & \frac{\partial f_x(u, v)}{\partial v} \\
\frac{\partial f_y(u, v)}{\partial u} & \frac{\partial f_y(u, v)}{\partial v}
\end{bmatrix}
\begin{bmatrix}
\Delta u \\
\Delta v
\end{bmatrix} =
\begin{bmatrix}
-f_x(u, v) \\
-f_y(u, v)
\end{bmatrix}$$
where $f_x(u,v)$ and $f_y(u, v)$ are the distance values in the $x$ and $y$ directions from the guess $(u, v)$ to the known point in 3D. This also means that we need to calculate the pierce points and the consequent ray through the volume in each iteration. |
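The update above is a standard Newton step for a 2x2 system. A minimal sketch in Python (not the CALTag code; the residuals $f_x,f_y$ here are toy functions with a known zero at $(2,1)$, and the Jacobian is approximated by finite differences):

```python
def newton2(fx, fy, u, v, iters=25, h=1e-6):
    """Newton iteration for (fx, fy) = (0, 0), mirroring the 2x2 system
    above, with a forward-difference Jacobian and a Cramer's-rule solve."""
    for _ in range(iters):
        a = (fx(u + h, v) - fx(u, v)) / h   # dfx/du
        b = (fx(u, v + h) - fx(u, v)) / h   # dfx/dv
        c = (fy(u + h, v) - fy(u, v)) / h   # dfy/du
        d = (fy(u, v + h) - fy(u, v)) / h   # dfy/dv
        det = a * d - b * c
        rx, ry = -fx(u, v), -fy(u, v)       # right-hand side of the system
        du = (rx * d - b * ry) / det        # Cramer's rule for (du, dv)
        dv = (a * ry - c * rx) / det
        u, v = u + du, v + dv
    return u, v

# toy residuals with known zero (2, 1):
u, v = newton2(lambda u, v: u ** 2 + v ** 2 - 5,
               lambda u, v: u - v - 1,
               1.0, 0.0)
```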
Free subgroup of an amalgamated free product | Think of this problem in the Bass-Serre tree $T$ for the amalgam. The group $G$ acts on $T$, and the tree $T$ has a vertex $V_A$ with stabilizer subgroup $A$, a vertex $V_B$ with stabilizer $B$, and an oriented edge $E = V_A \rightarrow V_B$ with stabilizer $C$, so that every vertex is in the $G$-orbit of either $V_A$ or $V_B$ and every edge is in the orbit of $E$.
I am going to make some assumptions about your reduced word $w$ just to focus on a special case: $w=w_1 \ldots w_{2l+1}$ has odd length $2l+1$; and the first and last letters $w_1, w_{2l+1}$ are in $B$. (A few other very similar special cases are needed for the complete proof.)
With these assumptions, the path in the tree $T$ from vertex $V_A$ to vertex $w \cdot V_A$ has length $2l+2$ and passes through the following sequence of vertices:
$$V_A \rightarrow V_B \leftarrow w_1 \cdot V_A \rightarrow w_1 w_2 \cdot V_B \leftarrow w_1 w_2 w_3 \cdot V_A \rightarrow w_1 w_2 w_3 w_4 \cdot V_B \leftarrow \ldots
$$
$$ \ldots \rightarrow w_1 w_2 \cdots w_{2l-1} w_{2l} \cdot V_B \leftarrow w_1 w_2 \cdots w_{2l-1} w_{2l} w_{2l+1} \cdot V_A = w \cdot V_A
$$
Furthermore, the directed edges represented by the arrows in this sequence are
$$E, w_1 \cdot E, w_1 w_2 \cdot E, w_1 w_2 w_3 \cdot E, \ldots, w_1 w_2 \cdots w_{2l-1} w_{2l} \cdot E, w_1 w_2 \cdots w_{2l-1} w_{2l} w_{2l+1} \cdot E
$$
Suppose now that $A \cap w A w^{-1}$ contains a nontrivial element $g \in G$. The stabilizer of $w \cdot V_A$ equals $w A w^{-1}$, so $g$ fixes the two vertices $V_A$, $w \cdot V_A$, which are the endpoints of the path. Since $T$ is a tree, $g$ must fix every point along that path, in particular every vertex and every oriented edge along that path. So $g$ fixes both of the edges $E$ and $w_1 w_2 \cdots w_k \cdot E$. Set $v=w_1w_2\cdots w_k$. The stabilizer of $vE$ is $vCv^{-1}$, so $g \in C \cap vCv^{-1}$, contradiction.
By the way, it looks to me like this proof does not use the full power of your hypothesized inequality $2l+1 \ge 2k+1$, it only uses $2l+1 \ge k$. I presume you need the full power of the inequality elsewhere in the proof. |
Formula for calculating parking time | Say the parking takes place from time $t_1$ to time $t_2$, where $t_1$ and $t_2$ are integers between $0$ and $24$. The number of free hours in the morning is equal to $\max\{6-t_1, 0\}$, while the number of free hours in the evening is $\max \{t_2 - 20, 0\}$. The total number of hours spent parking is $t_2 - t_1$, meaning the driver must pay for
$$t_2 - t_1 - \max\{6-t_1, 0\} -\max\{t_2 - 20, 0\}$$hours, provided the stay overlaps the paid window $[6,20]$. In general the paid time is $\max\{0,\,\min\{t_2,20\}-\max\{t_1,6\}\}$ hours, which also handles stays lying entirely in the free periods. |
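As a sketch in Python (hours as integers or reals; the free periods before 6 and after 20 are from the question), the overlap form is the robust way to code this:

```python
def paid_hours(t1, t2):
    """Hours billed for parking from t1 to t2 (0 <= t1 <= t2 <= 24):
    the overlap of [t1, t2] with the paid window [6, 20]."""
    return max(0, min(t2, 20) - max(t1, 6))

examples = [paid_hours(5, 22), paid_hours(7, 10), paid_hours(21, 23)]
# -> [14, 3, 0]
```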
Question on Riemann's Derivation of Zeta Functional Equation | For $\Re(s) < 0$ $$\int_H \frac{(-x)^{s-1}}{e^x-1}dx= \int_B \frac{(-x)^{s-1}}{e^x-1}dx$$ where $B$ is a closed contour (pictured in the original answer) to which you can apply the residue theorem. |
Bounded entire fuctions. | I think it's more direct to use Liouville's theorem. (You could use the Cauchy integral formula, but it would then probably be emulating the proof of Liouville's theorem)
Note that $g(z) = \frac{e^{z^3}}{f(z)}$ is entire (in particular because $f(z) \ne 0$ for any $z$ by the hypothesis), and it satisfies $|g(z)| \le 3$. Thus, $g(z)$ is constant, and therefore the conclusion follows immediately. |
Confidence interval for slope of regression | In a simple linear regression model $y_i=\alpha+\beta x_i+\varepsilon_i\,,\,i=1,2\ldots,n$ where the errors $\varepsilon_i$ are i.i.d $N(0,\sigma^2)$ with $\sigma$ unknown, a usual test statistic for testing $H_0:\beta=\beta_0$ against $H_a:\beta>\beta_0$ for some given $\beta_0$ is
$$T=\frac{\hat\beta-\beta_0}{\operatorname{s.e.}(\hat\beta)}\stackrel{H_0}\sim t_{n-2}\,,$$
where $\hat\beta$ is the least square estimator of the slope $\beta$ and $\operatorname{s.e.}(\hat\beta)$ is the estimated standard error of $\hat\beta$.
We reject $H_0$ at $5\%$ level of significance if $\text{observed } T>t_{0.05,n-2}$ where $t_{0.05,n-2}$ is the upper $5\%$ point of a $t$ distribution with $n-2$ degrees of freedom. When $n$ is large as in your case, the $t$ distribution is closely approximated by a standard normal distribution.
You have $\beta_0=0$, so the observed value of $T$ is about $16.136$ which exceeds the tabulated value $t_{0.05,142}\approx 1.645$. This suggests you should reject $H_0$ at $5\%$ level of significance.
To find a confidence interval from the above test, we consider the acceptance region associated with $$\frac{\hat\beta-\beta}{\operatorname{s.e.}(\hat\beta)}<t_{0.05,142}$$
That is, $$\beta>\hat\beta-\operatorname{s.e.}(\hat\beta)t_{0.05,142}$$
So a $95\%$ confidence interval for $\beta$ based on the data is approximately $(3.62275,\infty)$. Clearly you will always reject $H_0$ based on this interval. |
Verification of integral over $\exp(\cos x + \sin x)$ | The definition of the modified Bessel function $I_0$ is
$$
I_0(z)=\frac{1}{\pi}\int_0^\pi e^{z\cos x}dx=\frac{1}{2\,\pi}\int_{-\pi}^\pi e^{z\cos x}dx
$$
Now
$$
a\cos x+b\sin x=\sqrt{a^2+b^2}\cos(x+\phi)
$$
for some angle $\phi$. Since the integral is over a full period, you get the result. |
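The resulting identity $\int_0^{2\pi}e^{a\cos x+b\sin x}\,dx=2\pi I_0(\sqrt{a^2+b^2})$ is easy to verify numerically (Python, standard library only; $I_0$ computed from its power series):

```python
import math

def bessel_i0(z, terms=60):
    """Modified Bessel function I_0 via its series sum_k (z^2/4)^k / (k!)^2."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (z * z / 4) / ((k + 1) ** 2)
    return total

a, b = 1.0, 2.0
n = 2000
# midpoint rule over one full period (spectrally accurate for periodic integrands)
integral = sum(math.exp(a * math.cos(x) + b * math.sin(x))
               for x in ((j + 0.5) * 2 * math.pi / n for j in range(n))) * 2 * math.pi / n
expected = 2 * math.pi * bessel_i0(math.hypot(a, b))
```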
"Intrinsic" treatment of projective spaces | First, you have to define what a "projective structure" means. Let $P^n$ be a projective space over some field $F$. In the context of classical projective geometry, a self-map $f: P^n\to P^n$ is projective (or preserves the projective structure) if it preserves projective incidences: Three points $x, y, z\in P^n$ belong to the same line if and only if their images do. (In principle, one should also make a similar requirements on configurations of $k$ points belonging to a common $k-2$-dimensional projective subspace, but this turns out to be unnecessary). If $n=1$, i.e., you are dealing with 1-dimensional projective space, then this condition, of course is meaningless and every self-map $f$ is projective. However, if $n\ge 2$ then von Staudt proved what he called the
"2nd fundamental theorem of projective geometry",
which states that a bijective transformation $P^n\to P^n$ is projective if and only if it is a composition of an element of $PGL(n, F)$ and an automorphism of the field $F$. If you are interested in $F={\mathbb R}$, then such an automorphism has to be the identity. If $F={\mathbb C}$, then you have an uncountable group of automorphisms (assuming Axiom of Choice). However, if you assume that your projective transformations are continuous, then the only automorphism left is the complex conjugation. |
Is taking modulus on both sides of an equation valid? | The problem is, "taking-mod-of-both-sides" is valid, but only gives a one way implication. If $z_{1} = z_{2}$ then $|z_{1}| = |z_{2}|$. But there are lots of possibilities for $z_{3}$,$z_{4}$ with $|z_{3}| = |z_{4}|$ and $z_{3} \neq z_{4}$, since modulus is not a one-to-one map.
When you look at ${\left(\frac{1+\mathrm{i}}{1-\mathrm{i}}\right)}^{n}$, this always has modulus $1$ for all values of $n$. But this just means it lies on the unit circle in the complex plane, it does not mean that it is equal to 1. |
Derivative of $x(\ln a^x)$ | By the product rule the derivative is
$$\ln a^x+x\ln a=x\ln a+x\ln a=2x \ln a,$$
since you can bring down the exponent: $\ln a^x = x\ln a$. |
A small question concerning the correctness of induction | Why should there in fact exist such a chain starting from $1$ and reaching all the way up to $m$? This uses an assumption about $\mathbb{N}$, which is ... exactly the well-ordering principle (or something equivalent to it).
This can feel very slippery at first. I think the best way to clarify the situation is to look at a structure which is like the natural numbers except for induction. Since induction fails in such a structure, there must be something special about the natural numbers which lets us prove that induction holds of them.
Specifically, I want to look at a discrete ordered semiring with a simple failure of induction. Here's an example I quite like:
Consider the set $\mathfrak{X}$ of polynomials in a single variable $t$ with integer coefficients, which are either zero or have positive leading coefficient.
There are obvious notions of addition and multiplication of elements of $\mathfrak{X}$ (just the usual operations on polynomials). Moreover, $\mathfrak{X}$ carries a natural ordering: we set $p<q$ iff $\lim_{a\rightarrow \infty}p(a)<\lim_{a\rightarrow\infty}q(a)$. (It's a good exercise to give an equivalent definition of this ordering in terms of coefficients alone.) This turns $\mathfrak{X}$ into a discrete ordered semiring, just like $\mathbb{N}$.
However, induction fails in $\mathfrak{X}$: consider for example the property
$P(a)\equiv$ "For every $b<a$ there is some $c$ such that either $c+c=b$ or $c+c+1=b$."
That is, "Every number $<a$ is either even or odd." In any discrete ordered semiring whatsoever this is trivially satisfied by $0$ and satisfies $\forall x(P(x)\rightarrow P(x+1))$. However, $\forall x(P(x))$ is not true in $\mathfrak{X}$: consider the monomial $t$.
So there is something special about $\mathbb{N}$ which doesn't come from its discrete ordered semiring structure alone but needs to be considered separately, and this is exactly captured by the well-ordering principle. Put another way:
Since induction holds in $\mathbb{N}$ but fails in $\mathfrak{X}$, any argument that induction holds in $\mathbb{N}$ must use some property of $\mathbb{N}$ which doesn't hold of $\mathfrak{X}$. This is what the well-ordering principle provides us. |
Calculating the double integral | Your primitive is wrong. The correct one is
$$
\int \frac{x}{x^2+y^2}\,dy=\arctan(y/x)+C.
$$
This means that you end up calculating
$$
\int_0^1\arctan(1/x)\,dx,
$$
which is doable.
I first did not see your second question. Yes, you can switch order here. I suggest that you try to do the problem integrating in that order once you are done with the first way. It is a good practice. |
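If you want to sanity-check the remaining integral numerically (integration by parts gives $\int_0^1\arctan(1/x)\,dx=\pi/4+\tfrac12\ln 2$), a simple midpoint rule in Python suffices — note the integrand is bounded, since $\arctan(1/x)\to\pi/2$ as $x\to0^+$:

```python
import math

def arctan_integral(n=200_000):
    # midpoint rule for the integral of arctan(1/x) over [0, 1]
    h = 1.0 / n
    return h * sum(math.atan(1.0 / ((k + 0.5) * h)) for k in range(n))

val = arctan_integral()
exact = math.pi / 4 + math.log(2) / 2   # from integration by parts
assert abs(val - exact) < 1e-6
```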
If $\overline{\operatorname{Sp}}(C)=X$ and $C$ is countable, then $X$ is separable. | Consider the elements with rational coordinates. Then set of all such elements is a countable set because it is the union of countable sets $P_n:=\{\sum_{i=1}^n\alpha_ic_i:\alpha_i\in\mathbb{Q},c_i\in C,1\leq i\leq n\}$.
Let $B(x,\epsilon)$ be an arbitrary ball. We can choose $y\in \text{span}(C)$ such that $B(y,r)\subset B(x,\epsilon)$. If $y=\sum_{i=1}^k\alpha_ic_i$, then $\|\sum_{i=1}^k\alpha_ic_i-\sum_{i=1}^k\beta_ic_i\|\leq\sum_{i=1}^k|(\alpha_i-\beta_i)|\|c_i\|$. Since rationals are dense, we can choose $\beta_i\in\mathbb{Q}$ such that $\sum_{i=1}^k|(\alpha_i-\beta_i)|\|c_i\|<r$. |
How do we prove that $\int_{0}^{\pi/2}\sin(t)^{2n+3}dt=\frac{4^n(2n+2)}{(2n+3)(2n+1){2n\choose n}}$? | A nice method to integrate $\sin t$ to an odd power is to keep one factor of sine, and convert all the rest to cosine (using $\sin^2 t = 1-\cos^2 t$), then substitute $x = \cos t$, $dx = -\sin t\;dt$.
$$
\int_0^{\pi/2} (\sin t)^{2n+3} dt = \int_0^{\pi/2} (1-\cos^2 t)^{n+1} \sin t\;dt
= \int_0^1 (1-x^2)^{n+1} \;dx
$$
the integral of a polynomial, which you already know how to do.
This way, there is no need to know about gamma or beta functions.
Integrating the polynomial, we get
$$
\sum_{j=0}^{n+1}\frac{(-1)^j\binom{n+1}{j}}{2j+1}
$$
So we have to do some combinatorics to reach your answer
$$
\frac{(2n+2)!!}{(2n+3)!!} = \frac{4^n (n!)^2(2n+2)}{(2n+1)(2n+3)(2n)!}
$$ |
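The reduction above can also be verified exactly with a few lines of Python, comparing the stated closed form against the term-by-term integral of the polynomial $(1-x^2)^{n+1}$ (exact rational arithmetic, so no rounding issues):

```python
from fractions import Fraction
from math import comb

def closed_form(n):
    # the identity from the question: 4^n (2n+2) / ((2n+3)(2n+1) C(2n, n))
    return Fraction(4**n * (2 * n + 2), (2 * n + 3) * (2 * n + 1) * comb(2 * n, n))

def by_expansion(n):
    # integral over [0, 1] of (1 - x^2)^(n+1), integrated term by term
    return sum(Fraction((-1) ** j * comb(n + 1, j), 2 * j + 1) for j in range(n + 2))

for n in range(8):
    assert closed_form(n) == by_expansion(n)
```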
singleton null vector set linearly dependent, but other singletons are linearly independent set | The intuition is the following: the null vector only spans a zero-dimensional space, whereas any other vector spans a one-dimensional space. This is captured by the following thought:
A set of vectors $\{ \bar v_1, \bar v_2, ..., \bar v_n\}$ is linearly independent iff $span(\{ \bar v_1, ..., \bar v_n\})$ is not spanned by a proper subset of $\{ \bar v_1, \bar v_2, ..., \bar v_n\}$. Now, the space spanned by $\{ \bar o\}$ is already spanned by a proper subset of $\{ \bar o\}$ namely $\emptyset$. For a non-zero $\bar v, span(\bar v)$ is not spanned by any proper subset of $\{ \bar v \}$.
Edit: Definition: Let $S=\{ \bar v_i:i\in I\}$ be a subset of a vector space $V$, then $$span(S):= \{ \sum_{i \in I}c_i\bar v_i: \bar v_i \in S, c_i \in \Bbb F, c_i=0 \mbox { for almost all } i \}.$$ Taking $I=\emptyset$, i.e. $S=\emptyset$, we get $$span(\emptyset )= \{ \sum_{i \in \varnothing }c_i\bar v_i \} = \{ \bar o \},$$ because by definition the empty sum equals $\bar o$ (just like in arithmetic, where the empty sum equals $0$ and the empty product equals $1$, or in set theory, where the empty union equals $\emptyset$). |
Maximize arccos-function | Just plot, and it and you'll see there is only one maximum in $[0,5]$
I guess if you take the derivative you will find that there is only a possible stationary point, and then checking the second derivative you would certify it is a maximum. But I guess in that interval the function is even strictly concave. |
Help comprehending the work required to drain a hemispherical tank with radius 5 | In your first the limits of integration say $y$ is increasing upward. The distance you lift the water is $-y$, which will change the sign of the integral to positive. For the second the radius is zero at $y=0$ so your term $25-y^2$ should be $25-(5-y)^2$ |
Three closed orbits with only one fixed point in a phase portrait? | You can fill the space between the closed orbits with spiraling trajectories connecting the orbits. No conservation of energy is assumed, i.e., the system is not Hamiltionian, so this is possible.
For instance, make the fixed point and the second closed orbit stable, the first and third closed orbit unstable. |
Proof of the spectral theorem for compact operators in Banach spaces | $x-\lambda e=x-\lambda_n e +(\lambda_n-\lambda) e$. Hence $(x-\lambda_n e)^{-1} (x-\lambda e)=e+(x-\lambda_n e)^{-1}(\lambda_n-\lambda) e$. If the conclusion fails then the last term tends to $0$ along a subsequence. But this makes $\|(x-\lambda_n e)^{-1} (x-\lambda e)-e\|<1$ for some $n$. Using the fact that $y$ is invertible if $\|y-e\|<1$ we see that $(x-\lambda_n e)^{-1} (x-\lambda e)$ is invertible. But then $x-\lambda e$ is invertible. This contradicts the fact that $\lambda \in \sigma (x)$.
Here is the final argument: suppose $\lambda \neq 0$ is in $\sigma(x)$ and, if possible, suppose $\lambda$ is not an eigenvalue. By 1) we get $\|Tx-\lambda x\|\geq \delta \|x\|$. It is easy to see from this that if $\lambda_n \to \lambda$, $\lambda_n \notin \sigma(x)$, then $\|(Tx-\lambda_n x)^{-1}\|\leq \frac 2 {\delta}$ contradicting (c). We have proved that no sequence in the complement of $\sigma (x)$ can converge to $\lambda$. This means $\lambda$ is an interior point of the spectrum. Obviously interior points are limit points of the spectrum. But by (2) $\lambda$ is not a limit point. This finishes the proof. |
convex function, inequality | If your function is continuous. We have for $t \in (0,1)$
\begin{equation}
f(tx+(1-t)y) \le tf(x) + (1-t)f(y) \le tf(x) +f(y)
\end{equation}
Letting $t$ tend to one, we obtain a).
For b): $f(0) = f(0x) =0f(x) =0$.
For c): $0 = f(x-x) \le f(x) +f(-x)$.
For d): do a simple induction using a). |
How is the following a CW complex | I'm in favour of some pictures of cell complexes, so here are two, taken from the book Topology and Groupoids:
Of course the first is of dimension $1$ and the second is of dimension $2$. |
Solve equation taking Laplace transforms | The general solution from the differential equation
\begin{align}
\left[ \partial_{t}^{2} + (t+1) \partial_{t} + t \right] y(t) = 0
\end{align}
is
\begin{align}
y(t) &= c_{1} e^{-t^{2}/2} H_{-1}\left( \frac{t-1}{\sqrt{2}} \right) + c_{2} e^{1/2 -t} \\
&= e^{-t} \left[ A \,\, erf\left(\frac{t-1}{\sqrt{2}}\right) + B \right].
\end{align}
Now the Laplace transform of the equation is as follows.
\begin{align}
L\{ \left[ \partial_{t}^{2} + (t+1) \partial_{t} + t \right] y(t) \} = 0
\end{align}
which becomes
\begin{align}
(s+1) \partial_{s} y(s) - (s^{2} + s -1) y(s) = -y(0)(s+1) - y^{'}(0)
\end{align}
or
\begin{align}
\partial_{s} y(s) - \left( s - \frac{1}{s+1} \right) y(s) &= -y(0) - \frac{y^{'}(0)}{s+1} \\
\frac{1}{s+1} \, e^{s^{2}/2} \partial_{s} \left[ (s+1) e^{-s^{2}/2} y(s) \right] &= -y(0) - \frac{y^{'}(0)}{s+1} \\
\partial_{s} \left[ (s+1) e^{-s^{2}/2} y(s) \right] &= y(0)(s+1) \, e^{-s^{2}/2}
- y^{'}(0) e^{-s^{2}/2} \\
(s+1) e^{-s^{2}/2} y(s) &= y(0) e^{-s^{2}/2} - (y(0) + y^{'}(0)) \int^{s} e^{-u^{2}/2} du
\end{align}
which gives
\begin{align}
y(s) &= \frac{y(0)}{s+1} - \sqrt{ \frac{\pi}{2} }(y(0) + y^{'}(0)) \, \frac{1}{s+1} \, e^{-s^{2}/2} \, erf\left(\frac{s}{\sqrt{2}}\right).
\end{align}
Inverting the transform yields
\begin{align}
y(t) &= y(0) e^{-t} - \sqrt{ \frac{\pi}{2} }(y(0) + y^{'}(0)) L^{-1}\{\frac{1}{s+1} \, e^{-s^{2}/2} \, erf\left(\frac{s}{\sqrt{2}}\right) \} \\
&= y(0) e^{-t} - \sqrt{ \frac{\pi}{2} }(y(0) + y^{'}(0)) \, e^{-t^{2}/2} \,\,\cdot \frac{1}{2\pi i} \int_{\gamma-i \infty}^{\gamma + i \infty} e^{(s+t)^{2}/2} \, erf(s+t/\sqrt{2}) \frac{ds}{s+1} \\
&= y(0) e^{-t} - \sqrt{ \frac{\pi}{2} }(y(0) + y^{'}(0)) \, e^{-t^{2}/2} \,\, e^{(t-1)^{2}/2} \, erf(t-1/\sqrt{2}) \\
y(t) &= e^{-t} \left[ y(0) - \sqrt{ \frac{\pi}{2} }(y(0) + y^{'}(0)) \,\, erf\left(\frac{t-1}{\sqrt{2}}\right) \right].
\end{align}
Since the initial conditions are $y(0) =1$ and $y^{'}(0) = -1$ then the solution is
\begin{align}
y(t) = e^{-t}.
\end{align} |
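As a sanity check of the final answer (not of the transform manipulations themselves), one can verify numerically that $y(t)=e^{-t}$ satisfies $y''+(t+1)y'+ty=0$, using central finite differences:

```python
import math

def residual(t, h=1e-5):
    # y'' + (t+1) y' + t y evaluated at y = exp(-t) via central differences
    y = math.exp(-t)
    y1 = (math.exp(-(t + h)) - math.exp(-(t - h))) / (2 * h)
    y2 = (math.exp(-(t + h)) - 2 * y + math.exp(-(t - h))) / h**2
    return y2 + (t + 1) * y1 + t * y

# the exact residual is e^{-t} (1 - (t+1) + t) = 0 at every t
for t in [0.0, 0.5, 1.0, 2.0]:
    assert abs(residual(t)) < 1e-4
```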
True or false: continuous image of convex is convex | The function $f:\Bbb R^2\to\Bbb R^2:\langle x,y\rangle\mapsto \langle x,y+x^2\rangle$ is continuous and sends the $x$-axis, which is convex, to the graph of $y=x^2$, which is not. |
Optimal approximation of square area with identical circles | If there were no walls and you were just trying to squeze circles together as tightly as possible, then the best way to do it is the hexagonal packing. Even in this packing the circles only cover 90.69% of the area, the other 9.31% lies in the gaps between the circles. So the approximation is always going to be less than 90.69% of the total area.
Now consider putting really small circles into your square. You can use a hexagonal packing in the middle, and continue it out toward the edges. Since the circles are very small, you'll be able to get very close to the edges without them messing up your pattern. So almost all the square will be covered with the hexagonal packing. So the limit does in fact converge to the ratio 0.9069... |
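For reference, the 90.69% figure is $\pi/(2\sqrt3)$, the density of the hexagonal circle packing — easy to confirm:

```python
import math

# each circle of radius r "owns" a hexagonal cell of area 2*sqrt(3)*r^2,
# of which it covers pi*r^2
density = math.pi / (2 * math.sqrt(3))
assert abs(density - 0.9069) < 1e-4
```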
Rational polynomial complex integral: How do work around the singularities? | How can we bring this polynomial into the form
$$\frac{f(z)}{(z−a)^n}$$
, where $f(z)$ is analytic on a disk with radius $3$ around the origin?
You don't. You note that you can integrate around each of the poles separately (due to Cauchy's integral theorem). In other words, do three loops, one around each pole, and use the theorem to show that the sum of the integrals over each of those three is equal to the the integral over a curve that goes around all of the poles at once.
So when integrating around the pole at $z=-2$, you get
$$
\frac{z/((z+1)(z-1))}{z+2}
$$
which is of the required form. Then you integrate around the pole at $z=-1$, where the function can be given the form
$$
\frac{z/((z+2)(z-1))}{z+1}
$$
And for the third pole I leave to you to do the rewriting.
Edit: Added details about Cauchy's integral theorem: The theorem says that integrating over the two curves in the below two pictures give the same result (the poles of the function are drawn in):
However, in the last one, we can split it into three separate integrals, one over each loop. And since the function only has one pole within each loop, on each loop we can do the required rewriting and use the integral formula to find the value of the integral. Then we just add the three values together afterwards to get the total value of the original integral. |
Matrix functions of a non-diagonalizable matrix | Like you said,
$$
A^n =
\begin{bmatrix}
1 & n & 0 \\
0 & 1 & 0 \\
0 & 0 & (-1)^n \\
\end{bmatrix} \qquad \text{for} \qquad n \in \Bbb N.$$
Then
$$
\exp(tA)=\sum_{n=0}^\infty\frac{t^nA^n}{n!}
=\begin{bmatrix}
\sum_{n=0}^\infty \frac{t^n}{n!}
&\sum_{n=0}^\infty n\frac{t^n}{n!}&0\\0&\sum_{n=0}^\infty \frac{t^n}{n!}&0\\
0&0&\sum_{n=0}^\infty \frac{(-1)^nt^n}{n!}
\end{bmatrix}
=\begin{bmatrix}
e^{t}&te^t&0\\
0&e^t&0\\
0&0&e^{-t}
\end{bmatrix}.
$$
You can play the same game for the sine.
About the powers, you could define
$$
A^t=\begin{bmatrix}1&t&0\\0&1&0\\0&0&e^{\pi i t}\end{bmatrix}.
$$
This agrees with the integer powers of $A$ and satisfies the exponential property $A^{t+s}=A^{t}A^{s}$. It is important to notice that for non-integer $t$ this choice is rather arbitrary and not the result of a calculation. |
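The series computation can be cross-checked numerically; the sketch below (plain Python, partial sums of the exponential series) reproduces the closed form for $\exp(tA)$:

```python
import math

A = [[1, 1, 0], [0, 1, 0], [0, 0, -1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def expm(t, terms=40):
    # partial sum of sum_n t^n A^n / n!
    S = [[float(i == j) for j in range(3)] for i in range(3)]  # identity term
    P = [row[:] for row in S]
    for n in range(1, terms):
        P = matmul(P, A)                # P = A^n
        c = t**n / math.factorial(n)
        for i in range(3):
            for j in range(3):
                S[i][j] += c * P[i][j]
    return S

t = 0.7
E = expm(t)
closed = [[math.exp(t), t * math.exp(t), 0.0],
          [0.0, math.exp(t), 0.0],
          [0.0, 0.0, math.exp(-t)]]
assert all(abs(E[i][j] - closed[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```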
Let $X\ge 0$ be an integrable random variable. Show that if $E(X)=0$ then $X=0$ almost everywhere. | Here's one way to proceed:
$\{\omega:X(\omega)>0\}=\bigcup_{n=1}^{\infty}\{\omega:X(\omega)\geq \frac{1}{n}\}$, so it's enough to show that $\mathbb{P}(\{\omega:X(\omega)\geq \frac{1}{n}\})=0$ for all $n\geq 1$.
$X(\omega)\geq \frac{1}{n}1_{\{X\geq \frac{1}{n}\}}(\omega)$, so take expectations of both sides to conclude. |
Proof of compactness for sets of norm equal to one in finite-dimensional normed vector spaces | If $A$ has the norm $N$ and $N'$ is an equivalent norm, $N'$ is continuous. Since $N'(x)\neq 0$ for all $x\in A, f$ is continuous as well. A similar argument can be used to show the inverse is continuous.
There is an easier way however. Since $E$ is finite-dimensional, there is an isomorphism $L:E\rightarrow \mathbb{R}^n$ for some $n$. If $N$ is the norm on $E$, we can define a norm $N'$ on $\mathbb{R}^n$ by $N'(x):=N(L^{-1}(x))$ (you have to check this is a norm). Then $B=\{x\in \mathbb{R}^n:N'(x)=1\}$ is closed and bounded, since all norms on $\mathbb{R}^n$ are equivalent, hence B is compact. But then so is $A$, since $A=L^{-1}(B)$ and $L$ is an isomorphism. |
Typo in the book or am I going crazy? | Alright, just to get this out of the unanswered question list, I am posting an answer.
Yes, there is a typo in the book. It should be $M'=MC$. Indeed, $M=(m_{ij})$ where $m_{ij}=\sigma_{i}(\omega_j)$. Then we can compute $(k, i)$-th coefficient of $MC$:
$$
(MC)_{ki} = \sum_{j=1}^{n} m_{kj} c_{ji} = \sum_{j=1}^{n} \sigma_{k}(\omega_j)c_{ji} = c_{1i}\sigma_{k}(\omega_1)+c_{2i}\sigma_{k}(\omega_2)+\cdots +c_{ni}\sigma_{k}(\omega_n) = \sigma_{k}(\omega'_{i})=M'_{ki}
$$
Thus, $MC=M'$. |
Projective resolution of $\mathbb{Z}/(d)$ as a $\mathbb{Z}/(n)$-module | There's an obvious exact sequence
$$0\to K\to\Bbb Z/(n)\to\Bbb Z/(d)\to0.$$
The kernel $K$ is cyclic of order $n/d$. Repeat
$$0\to K_1\to\Bbb Z/(n)\to\Bbb Z/(n/d)\to0.$$
Then $K_1\cong \Bbb Z/(d)$. We get into a cycle.
We get a projective resolution with all terms $\Bbb Z/(n)$
and with differential maps alternately ${}\times d$ and ${}\times n/d$. |
Uncountable solutions in Picard's Theorem | No, not even 1.
Let $w\in\mathbb C$. Consider these sets: $A_n=\{z\in\mathbb{C}\,|\,n\leqslant|z|\leqslant n+1\}$ ($n\in\mathbb N$), as well as the sets $B_n=\left\{z\in\mathbb{C}\,|\,\frac1{n+1}\leqslant|z|\leqslant\frac1n\right\}$ (also with $n\in\mathbb N$). Each of them is compact and therefore, since $f$ is not constant, it follows from the identity theorem that the equation $f(z)=w$ has only finitely many solutions in each of them. Since the number of such sets is countable, the equation $f(z)=w$ has, at most, countably many solutions in $\mathbb{C}\setminus\{0\}$.
Is this true $f(n)+\mathcal O(f(n)) = \Theta(f(n))$? | No, it is entirely possible that $g\in O(f)$ and $f+g\in o(f)$. Namely, with $g=-f$.
It is however true that if $g\in o(f)$, then $f+g\in \Theta(f)$. In fact, eventually $\lvert g\rvert\le \frac12\lvert f\rvert$ and therefore, eventually, $$\frac12\lvert f\rvert\le\lvert f\rvert-\lvert g\rvert\le\lvert f+g\rvert\le\lvert f\rvert+\lvert g\rvert\le\frac32\lvert f\rvert$$ |
Approximating the logarithm of the binomial coefficient | A better approximation for the logarithm of a factorial can be found by using $\log n! \approx n \log n - n$. Interestingly, the additional terms in the approximation of the binomial coefficient cancel out, and the result is the same as if you used the simpler approximation $\log n! \approx n\log n$:
$$\begin{align}
\log {n\choose m} & = \log \frac{n!}{m!(n-m)!} \\
& = \log n! - \log m! - \log (n-m)! \\
& \approx n \log n - n - m \log m + m - (n-m) \log (n-m) + n-m \\
& = n \log n - m \log m - (n - m) \log (n-m)
\end{align}$$
An even better approximation uses more terms of Stirling's approximation, giving $\log n! \approx (n+\tfrac{1}{2})\log n - n + \tfrac{1}{2}\log 2\pi$ and hence
$$\begin{align}
\log {n\choose m} &\approx (n+\tfrac{1}{2})\log n - (m+\tfrac{1}{2})\log m - (n-m+\tfrac{1}{2})\log (n-m) - \tfrac{1}{2}\log 2\pi \\
& = n\log n - m \log m - (n-m)\log(n-m) + \cdots \\
& \qquad +\tfrac{1}{2} (\log n - \log m - \log (n-m) - \log 2\pi)
\end{align}$$
where the last term is the correction from using more terms of Stirling's approximation. |
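To see how much the half-integer correction buys you, one can compare both approximations against the exact value via `math.lgamma` (so nothing overflows):

```python
from math import lgamma, log, pi

def log_binom(n, m):
    # exact log C(n, m) via log-gamma
    return lgamma(n + 1) - lgamma(m + 1) - lgamma(n - m + 1)

def approx_simple(n, m):
    return n * log(n) - m * log(m) - (n - m) * log(n - m)

def approx_refined(n, m):
    return approx_simple(n, m) + 0.5 * (log(n) - log(m) - log(n - m) - log(2 * pi))

n, m = 1000, 300
err_simple = abs(approx_simple(n, m) - log_binom(n, m))
err_refined = abs(approx_refined(n, m) - log_binom(n, m))
assert err_refined < 0.01 < err_simple   # the correction helps a lot here
```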
Could Cauchy sequence have no accumulation points? | Yes, and your example is a good exemple. Your sequence is strictly increasing and do not converge, therefore it can't have accumulation point. But if the space is complete (for example $\mathbb R$) it's impossible since in a complete space every Cauchy sequence converge (and thus has a unique accumulation point).
Notice that $$\text{BW true } \iff\text{ the space is complete},$$
since $\mathbb Q$ is not complete, BW is not useable. |
Existence of the $P$ closure of binary relation $R$ | (1) Say that a relation $R$ on a set $S$ has a cycle if there are an integer $n \ge 1$ and distinct elements $s_0,s_1,\dots,s_n \in S$ such that $\{\langle s_n,s_0 \rangle\} \cup \{\langle s_k,s_{k+1} \rangle:k=0,1,\dots,n-1\} \subseteq R$, and let $P$ be the property of having a cycle. Let $R = \{\langle n,n+1 \rangle:n \in \mathbb Z\}$, a relation on $\mathbb Z$. For each $n \in \mathbb Z$ the relation $R_n = R \cup \{\langle n,n-1 \rangle\}$ has a cycle, namely, $\{\langle n,n-1 \rangle, \langle n-1, n \rangle\}$, but obviously $R = \bigcap\limits_{n \in \mathbb Z} R_n$ does not. Thus, this $P$ is not closed under intersections, and $R$ has no $P$-closure. (In this case each $R_n$ is a minimal relation containing $R$ with property $P$, but there is no minimum relation of this kind.)
(2) Yes, with one small caveat noted in the edit. Suppose that $R$ has a $P$-closure $\overline{R}$. Then by definition $\overline{R} \subseteq T$ whenever $T$ is a relation on $S$ containing $R$ and having property $P$, and $\overline{R}$ is itself such a relation, so $$\overline{R} = \bigcap\{T\subseteq S \times S:R \subseteq T\text{ and }T\text{ has }P\}.$$
Conversely, if $P$ is closed under intersections, then $\bigcap\{T\subseteq S \times S:R \subseteq T\text{ and }T\text{ has }P\}$ is clearly the smallest relation on $S$ containing $R$ and having $P$, so it is the $P$-closure of $R$.
Edit: As ccc points out, in the ‘Conversely’ part of the argument it is necessary to assume that $\{T\subseteq S \times S:R \subseteq T\text{ and }T\text{ has }P\}$ is non-empty. |
Cyclic Quintic Polynomial Family | Such families are easily construct able now.
In this paper, specializing the parameters $v = 2n+1, w = 1, u = 0$ leads to
$p = n^4 + 2n^3 + 14n^2 + 13n + 11$
Certain linear transformations of the Gaussian Quintic of conductor $p$ lead to the family
$x^5 + (n^2 + n + 4)x^4 + (-2n^2 - 2n + 2)x^3 + (-n^4 - 2n^3 - 10n^2 - 9n - 5)x^2 + (-3n^2 - 3n - 2)x + n^2 + n + 1$
The discriminant of this polynomial is
$(2n + 1)^4(n^2 + n + 1)^2p^4$ |
Number of possibilities to cross a hexagonal lattice. | Thanks for merico's crucial hint "4 coulmns of arrows going to the right", following
is my answer.
By collapsing some of the vertices and edges, the honeycomb is equivalent to the following ladder graph:
Forget about the matrix expression in the above figure for now. Some of the edges in
this ladder graph have been labeled with a number; this means that edge
in the ladder graph corresponds to multiple edges in the original honeycomb.
It is clear the ladder graph is composed of 3 repeating units. Figure 2 below shows the correspondence between the leftmost portion of the honeycomb and the first building unit.
Each unit is composed of 2 inputs on the left, 2 outputs on the right, $2m$ forward arrows and $1$ backward arrow in the middle.
One can connect the input of a unit, say $A$, to the outputs $B$ or $C$
in 3 possible manners.
As depicted in figure 3, there are
(figure 3a)
$m + m = 2m$ ways to connect to $B$ without using the backward arrow.
(figure 3b)
$m + m = 2m$ ways to connect to $C$ without using the backward arrow.
(figure 3c)
$m \times m = m^2$ ways to connect to $C$ using the backward arrow.
For each building unit, we can summarize the number of possibilities of joining the input with the output using a $2\times 2$ matrix:
$$A(m) = \begin{pmatrix}2m & 2m+m^2\\2m+m^2 & 2m\end{pmatrix}$$
In particular, for the building units $A(1)$ and $A(2)$ that appear in the honeycomb, we have:
$$A(1) = \begin{pmatrix}2 & 3\\3 & 2\end{pmatrix}\;\;\text{ and }\;\;
A(2) = \begin{pmatrix}4 & 8\\8 & 4\end{pmatrix}
$$
The purpose of these matrices, and the usefulness of merico's hint, boils down to
a simple fact: once one breaks a directed graph into a chain of subgraphs,
with the output of each subgraph feeding into the input of the next, the counting
of all possible paths from start to finish is simplified. It turns into a
sum of numbers of paths, conditioned on which edges are taken between the subgraphs.
The computation of the total number of paths $\mathcal{N}_{path}$ becomes the evaluation
of a product of the corresponding matrices, and the matrix expression can be written
down by inspection of the graph! This is what the mysterious matrix expression in
Figure 1 is for, and the final result is:
$$\mathcal{N}_{path} = \begin{pmatrix}1\\1\end{pmatrix}^T
\begin{pmatrix}2 & 3\\3 & 2\end{pmatrix}
\begin{pmatrix}2 & 0\\0 &2\end{pmatrix}
\begin{pmatrix}4 & 8\\8 & 4\end{pmatrix}
\begin{pmatrix}2 & 0\\0 &2\end{pmatrix}
\begin{pmatrix}2 & 3\\3 & 2\end{pmatrix}
\begin{pmatrix}1\\1\end{pmatrix} = 2400
$$
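As a sanity check, the matrix expression can be evaluated directly, e.g. in Python (a transcription of the product above, applied factor by factor to the vector $(1,1)^T$ — the matrices here are symmetric, so the order is immaterial):

```python
def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A1 = [[2, 3], [3, 2]]   # the A(1) units
D  = [[2, 0], [0, 2]]   # the edges labeled 2
A2 = [[4, 8], [8, 4]]   # the A(2) unit

v = [1, 1]
for M in (A1, D, A2, D, A1):
    v = matvec(M, v)

n_paths = v[0] + v[1]
assert n_paths == 2400
```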
UPDATE
This question has been asked before on MSE but not answered. It has also been posted to artofproblemsolving and identified as 2012 AMC 10B Problems/Problem 25. According to the answer on artofproblemsolving, $\mathcal{N}_{path}\;$ is 2400. |
Computing angle of ellipse or ellipsoid | Find the eigenvectors of $A$: they define the principal axes of the ellipsoid.
From those you can compute any angles of those axes you care to.
You can find a detailed description on p.10ff of the document "Algorithms for Ellipsoids" by Stephen Pope
(PDF download).
|
Find a point on the convex hull of a given set of points which is closest to a given point. | Why not just this ...
(1) Using this Qhull libary function
$\qquad$http://www.qhull.org/html/qh-faq.htm#closest
find the facet of the convex hull which is closest to the given point.
(2) For that facet ($k$-dimensional with $0 \le k \le n-1$), find the equations of the $k$-plane containing it.
(3) Using standard techniques from Vector Calculus, find the point on that $k$-plane which is closest to the given point.
Done.
If you post an actual numerical example, we can see if this works. |
Rotation of a vector and number of parameters needed | You are correct that a vector can be taken to another using two angles. Vectors don't have any structure along their axis. The third angle is rotation around the vector, which is needed if there is a yaw orientation that matters. |
Prove $O(2)$ is non-abelian? | Take for example
$$\begin{pmatrix}1&0\\0&\!-1\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}=\begin{pmatrix}0&1\\\!-1&0\end{pmatrix}\neq\begin{pmatrix}0&\!-1\\1&0\end{pmatrix}=\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1&0\\0&\!-1\end{pmatrix}$$
...over any field with characteristic$\,\neq2\;$ |
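For a concrete check that the two matrices above really fail to commute, a direct computation (here in Python, over the reals):

```python
def mul(X, Y):
    # 2x2 matrix product
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0], X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0], X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

A = [[1, 0], [0, -1]]   # reflection across the x-axis
B = [[0, 1], [1, 0]]    # reflection across the line y = x

assert mul(A, B) == [[0, 1], [-1, 0]]
assert mul(B, A) == [[0, -1], [1, 0]]
assert mul(A, B) != mul(B, A)
```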
Lorenz equations and find a minimal trapping region. | We first see that the time derivative of the function $$V = rx^2+\sigma y^2+\sigma(z−2r)^2$$
is
$$\dot V = -2\sigma(rx^2+y^2+bz^2-2brz)$$
We know that a trapping region exists. Because $\dot V$ goes to $-\infty$ in all directions, the region where $\dot V \ge 0$ is bounded. In fact, if $C$ is the maximum value of $V$ in the region where $\dot V \ge 0$, then it's easy to see that any point outside of $rx^2+\sigma y^2+\sigma(z−2r)^2 \le C + \epsilon$ enters this region in finite time. So we try to find the maximum value obtained by $V$ in this region. $\dot V = 0$ gives
$$rx^2+y^2+bz^2 \le 2brz$$
Here the computation got difficult, so I gave up and consulted Mathematica. This gives the solution for $C$ as
$$ C = \begin{cases}br^2 & r\le \frac{\sigma^{-2}}{3^{\frac13}}\\
\frac{27}{16}br^4\sigma^2 &r > \frac{\sigma^{-2}}{3^{\frac13}}\end{cases}$$ |
$\mathbb{Z}$-module $\prod\limits_{\text{p prime} } \mathbb{Z}/p\mathbb{Z}$ | Your example is almost right; the nonzero second coordinate $[1]$ is an element of $\Bbb{Z}/3\Bbb{Z}$, not of $\Bbb{Z}/2\Bbb{Z}$ so
$$2([0],[1],[0],\ldots)=([0],[2],[0],\ldots).$$
You are on the right track though. Note that elements of a product are usually denoted with round braces, not curly ones. Also, since the product is infinite it is customary not to write a last coordinate, but instead to end with some dots...
The torsion part is the submodule $\bigoplus_{p\text{ prime}}\Bbb{Z}/p\Bbb{Z}$ of tuples with finite support. I'll leave it for you to prove. |
Is it true $\underset{x\to\infty}{\text{lim}}\left(\vartheta_3\left(0,e^{-\frac{\pi}{x^2}}\right)-x\right)=0$? | It is a very good exercise to prove the Fourier series $$f(t)=\sum_n e^{-\pi (t+n)^2 x} = \sum_k x^{-1/2} e^{-\pi k^2/x} e^{2i \pi kt}, \qquad x^{-1/2} e^{-\pi k^2/x} = \int_0^1 f(t) e^{-2i \pi kt}dt$$ from which it follows that $\lim_{x \to \infty}x-\sum_n e^{-\pi n^2 /x^2} = \lim_{x \to \infty} x-x \sum_n e^{-\pi n^2 x^2} = 0$
and that $\lambda(s) = \pi^{-s/2}\Gamma(s/2)\zeta(s)= \lambda(1-s)$ |
How many orders are there of the letters in the word M E D I T E R R A N E A N? In how many of these are the vowels consecutive? | Hint for the second one: since all vowels have to be consecutive, you can think of them as forming a single block. So you need to find the number ways to rearrange the letters "M D T R R N N V", where "V" stands for the block of the vowels as a single entity. But that won't be the final answer. Inside that block of vowels, those vowels can be arranged in all different possible ways. So you need to multiply by the number of ways to arrange "E I E A E A". |
Find all real numbers x such that $ |5x-1| ≤ |3x-2|$ | Squaring both side leads to: $(5x-1)^2-(3x-2)^2\leq 0 \iff (8x-3)(2x+1)\leq 0 \iff x\in [-\frac{1}{2};\frac{3}{8}]$ |
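A quick spot-check of the interval $[-\frac{1}{2};\frac{3}{8}]$ in Python:

```python
# points inside the solution interval satisfy the inequality ...
for x in (-0.5, -0.3, 0.0, 0.2, 3 / 8):
    assert abs(5 * x - 1) <= abs(3 * x - 2) + 1e-12

# ... and points outside it do not
for x in (-0.6, 0.4, 1.0):
    assert abs(5 * x - 1) > abs(3 * x - 2)
```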
Clarification on step in proof for Ratio test | Let $x_k = \frac{|a_{k+1}|}{|a_k|}$.
Assume that $\limsup x_k \leq Q$ and let $t>Q$.
By the definition of limsup, $\limsup x_k = \inf_{n \geq 1} \sup_{m \geq n}x_m \leq Q$. By the definition of the infimum this implies that there exists an $N$ such that $\sup_{m \geq N}x_m \leq t$ (note that $y_n = \sup_{m \geq n}x_m$ is a decreasing sequence). But this implies that $x_m \leq t$ $\forall m \geq N$, as needed. |
Prove an identity with integer partitions | Your proven identity looks like the q-binomial theorem, and the terms can be rewritten with q-shifted factorials. I don't have the answer to your question, but there might exist some useful identities in q-series that can help to prove the identity. If you have already tried this, then ignore my answer. |
$\text{Log}(1-z) \in H^p(\mathbb{D})$ for all $0<p<\infty$ | Solution 1 (general and applies to all $f=u+iv$ where either $u$ or $v$ is bounded on the unit disc)
It is enough to show the result for all even integers $2m \ge 2$ as the Hardy spaces are decreasing in $p$
$f(z)=\log(1-z)=\log|1-z|+i\arg (1-z)=u(z)+iv(z), |v(z)| \le \frac{\pi}{2}=A$ as $\Re (1-z) >0$
$\Re f(z)^{2m}=u^{2m}+\sum_{k=1}^{m}B_{k,m}u^{2m-2k}v^{2k}$ where $B_{k,m}$ are plus/minus binomial coefficients.
But $f(0)=0, f(0)^{2m}=0, (\Re f^{2m})(0)=0$ so by the harmonic mean property:
$\int_0^{2\pi} u^{2m}(re^{it})dt+\sum_{k=1}^{m}B_{k,m}\int_0^{2\pi} (u^{2m-2k}v^{2k})(re^{it})dt=0$
Holder Inequality implies:
$\int_0^{2\pi} u^{2m-2k}(re^{it})v^{2k}(re^{it})dt \le (\int_0^{2\pi} u^{2m}(re^{it})dt)^{\frac{2m-2k}{2m}}(\int_0^{2\pi} v^{2m}(re^{it})dt)^{\frac{2k}{2m}}$
($\int|ab| \le (\int|a|^p)^{1/p}(\int|b|^q)^{1/q}, 1/p+1/q=1, a=u^{2m-2k}, b=v^{2k}, p=\frac{2m}{2m-2k}, q=\frac{2m}{2k}$)
But now since $|v(z)| \le A$ we get that all integrals $(\int_0^{2\pi} v^{2m}(re^{it})dt)^{\frac{2k}{2m}} \le (2\pi A)^{2k}$ so denoting by $X_r(m)=(\int_0^{2\pi} u^{2m}(re^{it})dt)^{1/2m}$ and using the triangle inequality we get:
$X_r(m)^{2m} \le \sum_{k=1}^{m}|B_{k,m}|X_r^{2m-2k}(2\pi A)^{2k}$
In particular, this means that $X_r(m)$ is bounded for every $r<1$ by the positive root of the equation $R^{2m}-\sum_{k=1}^{m-1}|B_{k,m}|(2\pi A)^{2k}R^{2m-2k}=0, |B_{k,m}|>0$
(or one can use induction on $m$),
so anyway $\int_0^{2\pi} u^{2m}(re^{it})dt \le C_m, r <1$ which immediately implies $\int_0^{2\pi} |u(re^{it})|^pdt \le C_p, 0<p<\infty, r<1$ and hence
$\int_0^{2\pi} |f|^{2m}(re^{it})dt \le C_{m,1}, r <1$ by expanding $|f|^{2m}=(u^2+v^2)^m$ and we are done!
Note that we only used that $|v| \le A$ and nothing else; same proof works of course if $|u| \le A$
Solution 2 (sketch): Because $f \in H^2$ since $\log(1-z)=-\sum_{n \ge 1} {z^n/n}$ and Parseval, it is enough to show that $\int_0^{2\pi} |f|^{p}(e^{it})dt < \infty$ where $f(e^{it})$ is the non-tangential limit of $f$ (finite except at $t=0$ or $z=1$) since then (ie we already know apriori that $f$ is in some Hardy space, but maybe not in the one we want) $f \in H^p$ iff $f(e^{it}) \in L^p(dt)$
Note that in general even if an analytic function in the disc has non-tangential limits ae and $f(e^{it}) \in L^p(dt)$, it is far from true that $f$ is in a Hardy space; simple examples like $\exp \frac{1+z}{1-z}$ which is bounded on the unit circle ae so it is in $L^{\infty}(dt) \subset L^p(dt), p< \infty$ but it is highly unbounded and in no $H^p$ in the unit disc, show this, but it is possible for the function not to be even in the Nevanlinna space $N$ like the counterexample above and even worse, the zeroes may not satisfy the Blaschke condition that functions in $N$ do regardless if they are in a Hardy space or not.
A simple computation shows that $1-e^{it}=-2i\sin({t/2})e^{it/2}$ and hence the only problem is with $|\log \sin({t/2})|^p$ but that is roughly $ \log^p (1/|t|)$ for $t$ near zero which is clearly integrable as $\log (1/|t|) << t^{-\epsilon}$ so choosing $\epsilon$ small enough so $p\epsilon <1$ shows it |
Independent and identically distributed random variable questions. | $P(Y>1) = 1- P(Y\le1)$ so it suffices to compute $P(Y \le 1)$
$$P(Y \le 1) = P(X_1 \le 1, X_2 \le 1, X_3 \le 1) = P(X_1 \le 1) P(X_2 \le 1) P(X_3 \le 1)$$ (make sure you know how to justify each equality above)
$P(X_i \le 1) = F(1)$ for each $i$ |
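To make this concrete, here is a small Monte Carlo illustration. The distribution is my own choice for the example ($X_i$ uniform on $[0,2]$, so $F(1)=\tfrac12$ and $P(Y>1)=1-(\tfrac12)^3=\tfrac78$); it is not taken from the original question:

```python
import random

random.seed(0)
N = 200_000
hits = sum(
    1 for _ in range(N)
    if max(random.uniform(0, 2) for _ in range(3)) > 1   # Y = max(X1, X2, X3)
)
estimate = hits / N
assert abs(estimate - 7 / 8) < 0.01   # theory: 1 - F(1)^3 = 7/8
```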
How to complete the proof: If $x\geq 0$ and $x^2<2$, then there is $y>x$ with $y^2<2$ | Just choose
$$y=\sqrt{\frac{x^2+2}2}$$
then clearly we have $x^2\lt y^2\lt2$ and hence $y\gt x$ as the positive square root preserves order. |
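One can watch this choice of $y$ in action numerically — for any admissible $x$ it lands strictly between $x$ and $\sqrt2$:

```python
import math

for x in (0.0, 0.5, 1.0, 1.2, 1.41):
    assert x >= 0 and x * x < 2          # the hypothesis
    y = math.sqrt((x * x + 2) / 2)       # the suggested choice
    assert x * x < y * y < 2             # y^2 is the midpoint of x^2 and 2
    assert y > x
```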
How to calculate the probability distribution function (PDF)? | Glad to help you get out of notation heck.
First, you may be puzzled about what $f_X(x)$ means. It is just the writer's way of saying "the probability distribution function for the random variable $X$, evaluated when $X$ is $x$." Let's give an easy example:
Say that $X$ has a range of $0$ to $1$ and on that range, the pdf is $2x$. That is, it is very unlikely that $X$ will be near to zero, where the pdf is very small, and the most likely values will be near to $1$. In that case, we would say that $$f_X(x) = 2x$$
Now for the "d" in the formula -- actually, that is part of a math notation $dx$ which stands for a very tiny change in $x$. If you have had some calculus experience, you will be familiar with notations like
$$
\int_a^b x^2 \, dx = \frac13 (b^3 - a^3)$$
If you have not had any differential calculus experience, you won't understand the notations used when talking about pdf's or cumulative distribution functions.
Finally, let's try to understand the whole equation. It says that the probability that the variate $X$ is between $b$ and $a$ is given (in our example) by
$$ \int_a^b 2x \, dx $$
where that $2x$ is the pdf of $X$ at the point $x$.
This integral turns out to have value
$$\int_a^b 2x \, dx = (b^2-a^2)$$
And you can see that the probability that $0 \leq X \leq 1$ is $1^2 - 0^2 = 1$ which is comforting -- we knew $X$ had to be in that range. What we can learn from this is that not every possible function is usable as a pdf; if we had chosen $f_X(x) = 3x$ instead, the "normalization" would be wrong, and that comforting property would not hold true.
As a meatier exercise, what would the probability that $X \leq \frac12$ be?
Well, $$\left( \frac12 \right)^2 - 0^2 = \frac{1}{4}$$ |
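The two computations above can be checked numerically; the following is a sketch using a midpoint Riemann sum for the example pdf $f_X(x)=2x$.

```python
# Midpoint Riemann sum for ∫_a^b 2x dx; exact for a linear integrand
# up to floating-point rounding.
def prob(a, b, n=100_000):
    h = (b - a) / n
    return sum(2 * (a + (i + 0.5) * h) * h for i in range(n))

total = prob(0, 1)    # total probability, should be 1
half = prob(0, 0.5)   # P(X <= 1/2), should be 1/4
print(total, half)
```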
Are Quadratic Residues the way to the solutions of Quadratic Diophantine Equations? | $y^2-Px=A$ has a solution if and only if $A$ is a quadratic residue modulo $P$, so quadratic residues do provide a way to tell whether or not this kind of quadratic diophantine equation has a solution. But if the equation does have solutions, it doesn't get you very far on the way to finding one. |
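As an illustration (a sketch with the hypothetical small prime $P=7$), Euler's criterion $A^{(P-1)/2}\equiv 1 \pmod P$ decides solvability of $y^2-Px=A$, and when a solution exists a small search finds one:

```python
# For each residue A mod 7, compare "y^2 ≡ A (mod 7) is solvable"
# with Euler's criterion, then exhibit a solution (x, y) when one exists.
def is_qr(A, P):
    return pow(A, (P - 1) // 2, P) == 1

P = 7
for A in range(1, P):
    has_solution = any((y * y - A) % P == 0 for y in range(P))
    assert has_solution == is_qr(A, P)
    if has_solution:
        y = next(y for y in range(P) if (y * y - A) % P == 0)
        x = (y * y - A) // P
        assert y * y - P * x == A
        print(f"A={A}: y={y}, x={x}")
```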
Bounded sum of reciprocals of primes. | The approach above does not provide the explicit value of $C_0$ in the estimate $$\sum_{p\leq x}\frac{1}{p+1}=\log \log x+C_0+O(\frac{1}{\log x}).$$
Here is how to remedy this:
We have
$$
\sum_{p\leq x}\frac{1}{p+1}=
\sum_{p\leq x}\frac{1}{p}-
\sum_{p\leq x}\frac{1}{p(p+1)}
.
$$
The estimate
$$
\sum_{p\leq x}\frac{1}{p(p+1)}
=
\sum_{p}\frac{1}{p(p+1)}
+O\left(\frac{1}{x\log x}\right)
$$
shows that
$$
C_0=C-\sum_{p}\frac{1}{p(p+1)}
$$
where $C$ is defined by
$$\sum_{p\leq x}\frac{1}{p}=\log \log x+C+O(\frac{1}{\log x}).$$ |
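A numerical sketch of the correction term: the series $\sum_p \frac{1}{p(p+1)}$ converges quickly, so $C_0$ is well defined and computable to good accuracy from a modest prime bound.

```python
# Sieve of Eratosthenes, then partial sums of ∑ 1/(p(p+1)) over primes.
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def correction(limit):
    return sum(1.0 / (p * (p + 1)) for p in primes_up_to(limit))

s1, s2 = correction(10_000), correction(100_000)
print(s1, s2, s2 - s1)  # the increment past 10^4 is below 1e-4
```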
Are there infinitely many primes with digit sums of the form $n^2+1$? | Even the set of positive integers with digit sum $101$, only having the digits $0$ and $1$ and ending with a $1$, contains $$\binom{n-2}{99}$$ $n$-digit numbers.
This means that we have, for example, about $\large \color{red}{10^{438}}$ such numbers with a million digits. Plenty of them should be primes, since they share no common factor.
If $n$ increases, the binomial coefficient grows much faster than $n$ itself. So there is overwhelming statistical evidence that infinitely many nice primes exist.
Of course, such evidence is no proof.
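The order of magnitude quoted above is easy to confirm directly (a sketch; `math.comb` computes the exact binomial coefficient):

```python
import math

# Count of n-digit candidates with n = 10^6: C(n - 2, 99).
n = 10 ** 6
count = math.comb(n - 2, 99)
print(len(str(count)))  # the count has about 439 digits, i.e. it is roughly 10^438
```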
How do I solve this trig derivative in respect to $x$? | Using Chain Rule, $$\frac{d(\tan^{-1}\frac yx)}{dx}=\frac{d(\tan^{-1}\frac yx)}{d\frac yx}\cdot\frac{d\frac yx}{dx}$$
As you have found,
$$\frac{d(\tan^{-1}\frac yx)}{d\frac yx}=\frac1{1+\left(\frac yx\right)^2}$$
Using Product Rule,
$$\frac{d\frac yx}{dx}=y\frac{d\frac1x}{dx}+\frac1x\frac{dy}{dx}=-\frac y{x^2}+\frac1x\frac{dy}{dx}$$
Again, $$\ln\sqrt{x^2+y^2}=\ln(x^2+y^2)^{\frac12}=\frac12\ln(x^2+y^2)$$
Again apply Chain Rule |
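Putting the pieces together gives $\frac{d}{dx}\tan^{-1}\frac yx = \frac1{1+(y/x)^2}\left(\frac{y'}{x}-\frac{y}{x^2}\right)$. Here is a finite-difference sketch of that formula with the hypothetical curve $y(x)=x^2$ (so $\tan^{-1}\frac yx=\tan^{-1}x$, whose derivative is $\frac1{1+x^2}$):

```python
import math

def f(x):
    y = x * x                      # hypothetical y(x) = x^2
    return math.atan(y / x)

def formula(x):
    y, dy = x * x, 2 * x
    return 1 / (1 + (y / x) ** 2) * (dy / x - y / x ** 2)

x, h = 1.3, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference
print(numeric, formula(x))                 # both ≈ 1/(1 + 1.3^2)
```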
Compactness - Norm on $\Bbb R^n$ | Take a sequence $(a_n)_{n\in\Bbb N}$ of points of $A$ which converges to some $a\in\Bbb R$; you want to prove that $a\in A$. Each $a_n$ can be writen as $|x_n-y_n|$ with $x_n\in K_1$ and $y_n\in K_2$. The sequence $(x_n)_{n\in\Bbb N}$ has a subsequence $(x_{n_k})_{k\in\Bbb N}$ which converges to some $x\in K_1$, since $K_1$ is compact. And the sequence $(y_{n_k})_{k\in\Bbb N}$ has some subsequence $(y_{m_l})_{l\in\Bbb N}$ which converges to some $y\in K_2$. But then\begin{align}a&=\lim_{l\to\infty}a_{n_l}\\&=\lim_{l\to\infty}\left|x_{n_l}-y_{n_l}\right|\\&=\left|\lim_{l\to\infty}(x_{n_l}-y_{n_l})\right|\\&=|x-y|\\&\in A.\end{align} |
Subsubsequence converges $\implies$ sequence converges | HINT: Suppose that $x_n\not\to x$, and show that $\langle x_n:n\in\Bbb N\rangle$ has a subsequence that is bounded away from $x$. |
function not integrable if the asymptotics of the volume of upper level sets is $\Omega(r^{-1})$ | There's a relationship between the integral of a function and its distribution function i.e. superset measures. Namely for a nonnegative function $f(x)$ we have
$$\int_K f(x)\,dx = \int_0^{\infty}\mathrm{vol}(\{ x\in K \mid f(x) \geq r\})\,dr$$
I'll assume your function is nonnegative since otherwise you have counterexamples like $f(x) = {1 \over x}$ on $[-1,1]$.
Anyhow, suppose there's a constant $c > 0$ and $r_i \rightarrow \infty$ for which $\mathrm{vol}(\{ x\in K \mid f(x) \geq r_i\}) \geq c{1 \over r_i}$. By passing to a subsequence you can assume $r_{i+1} > 2r_i$ for each $i$.
In this situation the portion of the integral between $r_i$ and an $r_{i+1}$ is at least
$(r_{i+1} - r_i) \times \mathrm{vol}(\{ x\in K \mid f(x) \geq r_{i+1}\})$, which is at least ${c \over 2}$. Adding over all $i$ gives that the integral is $\infty$. |
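A numeric illustration of the layer-cake identity (a sketch with the hypothetical example $f(x)=x^{-1/2}$ on $K=(0,1]$, for which $\mathrm{vol}(\{f\ge r\})=\min(1,1/r^2)$ and both sides equal $2$):

```python
# Left side: midpoint sum for ∫_0^1 x^(-1/2) dx = 2.
n = 200_000
h = 1.0 / n
lhs = sum(((i + 0.5) * h) ** -0.5 * h for i in range(n))

# Right side: ∫_0^∞ min(1, 1/r^2) dr = 1 + ∫_1^∞ r^(-2) dr, truncated at r = 1000.
m, R = 200_000, 1_000.0
k = (R - 1.0) / m
rhs = 1.0 + sum((1.0 + (j + 0.5) * k) ** -2 * k for j in range(m))
print(lhs, rhs)  # both close to 2
```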
find shortest distance from origin to parabolic form | One asks that the vector $(x,y)$ is orthogonal to the tangent at $(x,y)$ of the curve. Thus, assume that $(x,y)$ and $(x+h,y+k)$ are on the curve and that $h,k\to0$. Then
$$
\sqrt{x}+\sqrt{y}=\sqrt{a}=\sqrt{x+h}+\sqrt{y+k}=\sqrt{x}\sqrt{1+h/x}+\sqrt{y}\sqrt{1+k/y}
$$
hence
$$
\sqrt{x+h}+\sqrt{y+k}=\sqrt{x}+h/(2\sqrt{x})+o(h)+\sqrt{y}+k/(2\sqrt{y})+o(k).
$$
In particular, $h/\sqrt{x}\sim-k/\sqrt{y}$ hence the tangent vector is $(\sqrt{x},-\sqrt{y})$.
The tangent is orthogonal to $(x,y)$ if and only if $x\sqrt{x}-y\sqrt{y}=0$, that is, $x=y$, that is $x=y=a/4$. The shortest distance is $\sqrt{x^2+y^2}=\sqrt2a/4$. |
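A grid-search sketch confirming the answer for $a=1$: writing $\sqrt x=s$, $\sqrt y=1-s$, we minimize $\sqrt{s^4+(1-s)^4}$ over $s\in[0,1]$ and expect the minimum $\sqrt2/4\approx0.35355$ at $s=1/2$, i.e. $x=y=1/4$.

```python
import math

# Minimize the distance to the origin along the curve sqrt(x) + sqrt(y) = 1.
dist, s = min(
    (math.hypot(s * s, (1 - s) ** 2), s)
    for s in (i / 100_000 for i in range(100_001))
)
print(dist, s * s)  # ≈ 0.353553, attained near x = 0.25
```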
Solve $a\cdot 1111+b\cdot 111+c\cdot 11=9002$ | Rewrite the equation as
$$1000a+100(a+b)+11(a+b+c)=9002$$
and now look for solutions of $1000A+100B+11C=9002$ with $0\le A\le B\le C$ (or $0\lt A\lt B\lt C$ if you want positive integer solutions). From $11C\equiv2$ mod $100$, we see $-C\equiv99C\equiv18$ mod $100$, so $C\equiv82$ mod $100$. Thus, since $882\cdot11=9702\gt9002$, we have
$$C\in\{82,182,282,382,482,582,682,782\}$$
From these we get
$$10A+B=
\begin{cases}
81\\70\\59\\48\\37\\26\\15\\4
\end{cases}$$
and appropriate values for $0\le A\le B$ can be easily worked out, and then values of $(a,b,c)$ obtained. For example $(A,B,C)=(7,11,82)$ corresponds to $(a,b,c)=(7,4,71)$. |
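A brute-force check of the analysis (a sketch enumerating all positive-integer solutions directly):

```python
# Enumerate all (a, b, c) with a, b, c >= 1 and 1111a + 111b + 11c = 9002.
solutions = [
    (a, b, (9002 - 1111 * a - 111 * b) // 11)
    for a in range(1, 9002 // 1111 + 1)
    for b in range(1, (9002 - 1111 * a) // 111 + 1)
    if (9002 - 1111 * a - 111 * b) > 0
    and (9002 - 1111 * a - 111 * b) % 11 == 0
]
print(len(solutions), solutions[:3])
```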
Number of maximal submodules of a finitely generated module | You can view $V=M/mM$ as a finite dimension space over $k=R/m$ (the residual field).
The set of $R$-submodules of $M$ containing $mM$ is by Nakayama in bijection with the set of $k$-subspaces of $V$.
I am assuming $V$ has dimension $\geq 2$. (If the dimension is $1$, then $M=Ra$ for some $a \in M$ has exactly one maximal submodule, $ma$.)
So if $V$ has infinitely many maximal strict subspaces (ie $k$ is infinite), $M$ has infinitely many maximal submodules.
To prove the converse, we need to show that maximal submodules of $M$ always contain $mM$. Now, if $N$ is a maximal submodule of $M$ not containing $mM$, $M=N+mM$ and by Nakayama we get a contradiction again.
As a conclusion: $M$ has finitely many maximal submodules iff the residual field is finite. |
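A concrete instance of the conclusion (a sketch over the hypothetical finite residue field $k=\mathbb F_5$): the maximal subspaces of $V=k^2$ are its lines through the origin, and there are exactly $p+1=6$ of them, hence finitely many maximal submodules in this case.

```python
# Count the 1-dimensional subspaces (lines) of F_5^2 as spans of nonzero vectors.
p = 5
lines = set()
for a in range(p):
    for b in range(p):
        if (a, b) != (0, 0):
            lines.add(frozenset(((a * t) % p, (b * t) % p) for t in range(p)))
print(len(lines))  # p + 1 lines
```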
Find a $ 2 \times 2$ symmetric matrix if we know the eigenvalues and one eigenvector. | In general, two eigenvectors with different eigenvalues must be linearly independent, true. But for a symmetric matrix we know even more: We know that they are orthogonal!
To show this, note first that the symmetry of $A$ means that $v_1^TAv_2=v_2^TAv_1$. Then use the fact that they are both eigenvectors to get $9v_1^Tv_2=1v_2^Tv_1$. This can only happen if $v_1^Tv_2=v_2^Tv_1=0$.
Now that we know they are orthogonal, this means we know we can set $v_2=(3,-1)^T$ (or any nonzero multiple of this; which one we choose ultimately doesn't matter).
Finally, to construct the matrix, I will use diagonalization. If we use $v_1,v_2$ as basis, the linear transformation $A$ is represented by the matrix $$\begin{bmatrix}1&0\\0&9\end{bmatrix}$$Translating this into the standard basis is just a matter of inserting change-of-basis matrices: the matrix whose columns are $v_1$ and $v_2$, and its inverse. We get
$$
\begin{bmatrix}v_1&v_2\end{bmatrix}\begin{bmatrix}1&0\\0&9\end{bmatrix}\begin{bmatrix}v_1&v_2\end{bmatrix}^{-1}
$$
Insert here and calculate, and that will give you the matrix representation of the linear transformation $A$ in the standard basis. |
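Carrying out the calculation numerically (a sketch; since $v_2=(3,-1)^T$ must be orthogonal to the given eigenvector, I assume $v_1=(1,3)^T$ with eigenvalue $1$, and $v_2$ gets eigenvalue $9$):

```python
# Build A = P · diag(1, 9) · P^{-1} with P = [v1 v2], for 2x2 matrices.
P = [[1.0, 3.0], [3.0, -1.0]]
D = [[1.0, 0.0], [0.0, 9.0]]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]   # = -10
Pinv = [[ P[1][1] / det, -P[0][1] / det],
        [-P[1][0] / det,  P[0][0] / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = matmul(matmul(P, D), Pinv)
print(A)  # ≈ [[8.2, -2.4], [-2.4, 1.8]], symmetric
```

One can check directly that $A(1,3)^T=(1,3)^T$ and $A(3,-1)^T=9\,(3,-1)^T$.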