qid | question | author | author_id | answer
---|---|---|---|---|
167,221 |
<p>I'm wondering if this is possible for the general case. In other words, I'd like to take $$\int_a^b{g(x)dx} + \int_c^d{h(x)dx} = \int_e^f{j(x)dx}$$ and determine $e$, $f$, and $j(x)$ from the other (known) formulas and integrals. I'm wondering what restrictions, limitations, and problems arise.</p>
<p>If this is not possible in the general case, I'm wondering what specific cases this would be valid for, and also how it could be done. It's a curiosity of mine for now, but I can think of some possible problems and applications to apply it to.</p>
|
Matt Groff
| 2,626 |
<p>Here's a method that should allow one a large degree of freedom while still permitting Riemann integration (instead of Lebesgue integration or some other method):</p>
<p>Let $\tilde{g}$ be such that: $$\int_a^b{g(x)dx} = \int_e^f{\tilde{g}(x)dx}$$
...and $\tilde{h}$ follows similarly. Then both can be added inside a single integral.</p>
<p>The first method that comes to mind is to let $\tilde{g}(x) = \dot{g}\cdot g(x)$, where $\dot{g}$, a constant, is the ratio between the old and new intervals of integration. A similar method that comes to mind is to let $\dot{g}$ be a function instead.</p>
<p>Another method that I'm exploring, and that is somewhat questionable, is to attempt to use $e$ and $f$ as functions, possibly even of $x$, although this may be undefined or just plain wrong.</p>
<p>I'll add ideas to this as I hopefully come up with better methods.</p>
|
167,221 |
<p>I'm wondering if this is possible for the general case. In other words, I'd like to take $$\int_a^b{g(x)dx} + \int_c^d{h(x)dx} = \int_e^f{j(x)dx}$$ and determine $e$, $f$, and $j(x)$ from the other (known) formulas and integrals. I'm wondering what restrictions, limitations, and problems arise.</p>
<p>If this is not possible in the general case, I'm wondering what specific cases this would be valid for, and also how it could be done. It's a curiosity of mine for now, but I can think of some possible problems and applications to apply it to.</p>
|
JL344
| 34,774 |
<p>Certainly. In fact $e$ and $f$ can be anything you want, as long as they are not equal. An affine transformation is one way to do it. Namely if
$$j(x)=\frac{b-a}{f-e}g\left(\frac{b-a}{f-e}(x-e)+a\right)
+\frac{d-c}{f-e}h\left(\frac{d-c}{f-e}(x-e)+c\right),$$ then
$$\int_a^bg(u)du+\int_c^dh(v)dv=\int_e^fj(x)dx.$$</p>
<p>This transformation follows from the change of variables
$$u=\frac{b-a}{f-e}(x-e)+a,\qquad v=\frac{d-c}{f-e}(x-e)+c.$$</p>
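<p>As a sanity check of the identity above, here is a small numerical sketch. The concrete choices of $g$, $h$, and the interval endpoints are arbitrary illustrations, not part of the answer:</p>

```python
import math

def integrate(func, lo, hi, n=100_000):
    """Midpoint-rule approximation of the integral of func over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(func(lo + (k + 0.5) * h) for k in range(n))

# Arbitrary test data: g on [0,1], h on [0, pi/2], target interval [2,5]
g = lambda x: x ** 2
h = lambda x: math.cos(x)
a, b, c, d = 0.0, 1.0, 0.0, math.pi / 2
e, f = 2.0, 5.0  # anything with e != f works

def j(x):
    # j built from the affine change of variables in the answer
    return ((b - a) / (f - e) * g((b - a) / (f - e) * (x - e) + a)
            + (d - c) / (f - e) * h((d - c) / (f - e) * (x - e) + c))

lhs = integrate(g, a, b) + integrate(h, c, d)  # 1/3 + 1
rhs = integrate(j, e, f)
```

The two sides agree to numerical precision, confirming that $e$ and $f$ really can be anything with $e \neq f$.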
|
748,442 |
<p>I have $\lceil x \rceil = -\lfloor -x \rfloor$, but I can't figure out how to rely on this in order to get $\lfloor x \rfloor$ from $\lceil x−1 \rceil$.</p>
<p>For the record, I am only interested in non-negative real values.</p>
<p>I wish to avoid the use of $modulo$, $abs$, $round$ and $if$.</p>
<p>The motivation behind this question is as follows:</p>
<p>$$x-\frac{1}{2}-\frac{\arctan(\tan(\pi(x-\frac{1}{2})))}{\pi}=\lceil x-1 \rceil$$</p>
<p>How do I "manipulate" the value of $x$ in order to get $\lfloor x \rfloor$ instead of $\lceil x-1 \rceil$?</p>
<p><strong>UPDATE:</strong></p>
<p>I also have this if it helps:</p>
<p>$$x+\frac{1}{2}+\frac{\arctan(\tan(\pi(-x-\frac{1}{2})))}{\pi}=\lceil x \rceil$$</p>
|
user2345215
| 131,872 |
<p>You know that $\lfloor x\rfloor=-\lceil-x\rceil$, so if you replace $x$ by $-x$ and multiply the equation by $-1$, you get
$$x-\frac{1}{2}-\frac{\arctan(\tan(\pi(x-\frac{1}{2})))}{\pi}=-\lceil -x \rceil=\lfloor x\rfloor$$</p>
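<p>A quick numerical sketch of this identity for non-integer $x \ge 0$ (the formula is undefined at integers, where $\tan$ hits a pole):</p>

```python
import math

def floor_formula(x):
    # x - 1/2 - arctan(tan(pi*(x - 1/2)))/pi, from the identity above
    return x - 0.5 - math.atan(math.tan(math.pi * (x - 0.5))) / math.pi

# agrees with floor(x) at non-integer points
for x in [0.25, 0.75, 2.7, 5.999, 13.5]:
    assert abs(floor_formula(x) - math.floor(x)) < 1e-9
```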
|
2,403,851 |
<p>Here is Proposition 3 (page 12) from Section 2.1 in <em>A Modern Approach to Probability</em> by Bert Fristedt and Lawrence Gray. </p>
<blockquote>
<p>Let $X$ be a function from a measurable space $(\Omega,\mathcal{F})$ to another measurable space $(\Psi,\mathcal{G})$. Suppose $\mathcal{E}$ is a family of subsets of $\Psi$ that generates $\mathcal{G}$ and that $X^{-1}(B) \in \mathcal{F}$ for every $B \in \mathcal{E}$. Then $X$ is a measurable function.</p>
</blockquote>
<p>I'm trying to solve this problem in probability, but I don't know how.</p>
<p>I understand that this proposition (once proved) says that I can show a random variable is measurable by checking only a generator of the $\sigma$-algebra, rather than all of its elements.</p>
<p>Can someone help me prove this, please? I'm really stuck with this proposition of the book. Thanks for the help and time.</p>
|
CiaPan
| 152,299 |
<p>For any applicable base $b$ you have $$\log_b x = \frac 1{\ln b}\cdot\ln x$$
so $O(\log_b f(n))$ is exactly the same as $O(\ln f(n))$.</p>
<p>Sometimes $\log$ is used as a 'general' logarithm with the base unspecified, though constant and greater than $1$ (which is useful when the multiplicative constant does not matter), while $\ln$ is the logarithm with base $e$ and $\lg$ is a logarithm with base $10$.<br>
Sometimes $\log$ has a default base of $2$, for example in computer science.</p>
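<p>The change-of-base identity is easy to confirm numerically (a trivial sketch):</p>

```python
import math

x, b = 1000.0, 7.0
# log_b(x) = ln(x) / ln(b): the two differ only by the constant factor 1/ln(b),
# which is why O(log_b f(n)) and O(ln f(n)) are the same class
assert math.isclose(math.log(x, b), math.log(x) / math.log(b))
```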
|
3,533,023 |
<p>We have <span class="math-container">$x_1+x_2+...+x_k=n$</span> for some integers <span class="math-container">$k,n$</span>. We have that <span class="math-container">$0 \leq x_1,...,x_k$</span> and individually <span class="math-container">$x_1 \leq a_1$</span>, <span class="math-container">$x_2 \leq a_2$</span>, ..., <span class="math-container">$x_k \leq a_k$</span>.</p>
<p>How many integer solutions are there to this problem?</p>
<p>The solution when there are no constraints to the maximum number of objects in any individual bin is just the "stars and bars" method, but what about for this more constrained version?</p>
|
Narasimham
| 95,860 |
<p>The area above a horizontal line of radius length r is only the outer area left near the periphery, marked yellow in the figure, on one of the six minor segments.</p>
<p><span class="math-container">$$ A_{segment}=\pi r^2/6- r^2 \sqrt{3}/4= r^2(\pi/6-\sqrt {3}/4) $$</span>
The area of an equilateral triangle is known: <span class="math-container">$= r^2 \sqrt{3}/4$</span></p>
<p>The smaller radius segments are a third in length and 1/9 in area. There are three such small areas/patches, so remaining area is <span class="math-container">$A_s-3\cdot \dfrac19 A_s = \dfrac23 A_s$</span></p>
<p>There are six such areas in a regular hexagon totalling to</p>
<p><span class="math-container">$$6 \cdot \frac23 A_s = 4 A_s = \frac23 r^2 (\pi-3 \sqrt {3}/2)$$</span>
where we plugged in from above value for <span class="math-container">$A_{segment}.$</span> </p>
<p><a href="https://i.stack.imgur.com/pfQnr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pfQnr.png" alt="enter image description here"></a></p>
<p>Fraction of total area = <span class="math-container">$\dfrac{4 A_s}{\pi r^2} = \dfrac23-\dfrac{\sqrt3}{\pi}\approx 0.115338$</span>, i.e. about <span class="math-container">$11.5338\,\%$</span>.</p>
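<p>A numeric sketch of the arithmetic above (radius $r=1$; this checks the computation, not the geometric setup in the figure):</p>

```python
import math

r = 1.0
# one minor circular segment: sixth of the disk minus the equilateral triangle
A_segment = math.pi * r**2 / 6 - r**2 * math.sqrt(3) / 4
remaining = 4 * A_segment            # 6 * (2/3) * A_segment
fraction = remaining / (math.pi * r**2)
# fraction = 2/3 - sqrt(3)/pi, about 0.115338, i.e. about 11.53 %
```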
|
3,427,794 |
<p>Let's see I have the following equation</p>
<p><span class="math-container">$$
x=1
$$</span></p>
<p>I take the derivative of both sides with respect to <span class="math-container">$x$</span>:</p>
<p><span class="math-container">$$
\frac{\partial }{\partial x} x = \frac{\partial }{\partial x}1
$$</span></p>
<p>Therefore, <span class="math-container">$1=0$</span>. Clearly, that is not the right approach. </p>
<p>So what is the right way to think of <span class="math-container">$x=1$</span>. What kind of object is it?</p>
|
cemsicles
| 531,285 |
<p>Let me throw my hat in as well; according to the fundamental theorem of calculus, which I assume we all know, integrating the derivative recovers the function:</p>
<p><span class="math-container">$$\int_0^1\frac{d}{dx}x\,dx=\int_0^1dx=[x]_0^1=1-0=1$$</span></p>
<p>I prefer to think of integration and derivatives as what they are: area approximations under the curve and slope. I think your approach does not consider that the derivative of a constant is zero, so of course it does not make much sense.</p>
|
1,900,640 |
<p>I have a function $$f(x)=(x+1)^2+(x+2)^2 + \dots + (x+n)^2 = \sum_{k=1}^{n}(x+k)^2$$
for some positive integer $n$. I started wondering if there is an equivalent expression for $f(x)$ that can be calculated more directly (efficiently). </p>
<p>I began by expanding some terms to look for a pattern.
$$
(x^2+2x+1) + (x^2+4x+4) + (x^2+6x+9) + (x^2+8x+16) + \dots
$$
By regrouping the $x^2$, $x$, and constant terms, I can see that
$$
f(x) = \sum_{k=1}^{n}x^2 + \sum_{k=1}^{n}2kx + \sum_{k=1}^{n} k^2
$$
for which I've found some identities to get
$$
f(x) = n x^2 + n(n+1)x + \frac{1}{6}n(n+1)(2n+1)
$$
and simplifying some (attempting to make it computationally efficient)
$$
f(x) = n \left[ x^2 + (n+1)x + \frac{1}{6}(n+1)(2n+1) \right]
$$<br>
$$
f(x) = n \left[ x^2 + (n+1) \left( x + \frac{2n+1}{6} \right) \right]
$$ </p>
<blockquote>
<p>Is this a particular type of summation (maybe just exponential?), and if so is there a standard way to write it?<br>
Along those lines, is there a more direct derivation than what I've attempted here, perhaps using an identity I don't know?</p>
</blockquote>
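<p>As a quick check, the closed form derived above can be compared against the direct sum (a sketch with arbitrary test values):</p>

```python
def f_direct(x, n):
    # (x+1)^2 + (x+2)^2 + ... + (x+n)^2 summed term by term
    return sum((x + k) ** 2 for k in range(1, n + 1))

def f_closed(x, n):
    # n * [x^2 + (n+1)x + (n+1)(2n+1)/6] from the derivation above
    return n * (x ** 2 + (n + 1) * x + (n + 1) * (2 * n + 1) / 6)

for x in (0.0, 1.5, -3.0):
    for n in (1, 5, 50):
        assert abs(f_direct(x, n) - f_closed(x, n)) < 1e-9
```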
|
TonyK
| 1,508 |
<p>It means this:</p>
<p>Let $n$ be a positive integer, and define $s_n$ to be the number of square-free integers between $1$ and $n$ inclusive. Then
$$\lim_{n\to\infty}\frac{s_n}{n} = \frac{6}{\pi^2}$$</p>
<p>(Equivalently, you can allow negative integers, and define $t_n$ to be the number of square-free integers between $-n$ and $n$ inclusive. Then $\lim_{n\to\infty}\frac{t_n}{2n}= \frac{6}{\pi^2}$.)</p>
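<p>A small empirical sketch of the limit, sieving the first $10^5$ integers (the bound and tolerance are illustrative):</p>

```python
import math

n = 100_000
is_squarefree = [True] * (n + 1)
d = 2
while d * d <= n:
    # every multiple of d^2 is not squarefree
    for m in range(d * d, n + 1, d * d):
        is_squarefree[m] = False
    d += 1

s_n = sum(is_squarefree[1:])
# s_n / n is already close to 6/pi^2, about 0.6079
```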
|
4,104,364 |
<p>I got the following exercise:<br />
Let <span class="math-container">$W$</span> be a finite-dimensional <span class="math-container">$\Bbb{R}$</span>-vector space. Let <span class="math-container">$\Bbb{R}_W=\Bbb{R}\times W$</span>. Define addition and multiplication by <span class="math-container">$(r,w)+(s,v)=(r+s,w+v)$</span>, <span class="math-container">$(r,w)*(s,v)=(rs,sw+rv)$</span>, for <span class="math-container">$r,s\in \Bbb{R}$</span> and <span class="math-container">$w,v\in W$</span>.<br />
It is easy to show that <span class="math-container">$\Bbb{R}_W$</span> is a commutative unitary ring. Now how do I show it is Noetherian? I think I need to find all of its ideals, but I do not know how to do it. Thanks for your help!</p>
|
rschwieb
| 29,335 |
<p>You can also observe simply that it is an <span class="math-container">$n+1$</span> dimensional <span class="math-container">$\mathbb R$</span> algebra, where <span class="math-container">$n$</span> is the dimension of <span class="math-container">$W$</span>. That is a hard bound on the length of any chain of ideals, ascending or descending. So it is both Noetherian and Artinian.</p>
|
4,624,421 |
<p>I need to solve <span class="math-container">$x'=x^2$</span>, <span class="math-container">$x(0)=1$</span> using the power series <span class="math-container">$\sum_{n=0}^{\infty}a_n t^{n}$</span> and show that it has a solution on <span class="math-container">$(-1,1)$</span> that can be extended to <span class="math-container">$(-\infty,1)$</span>.</p>
<p>I have rewritten the ODE as <span class="math-container">$\sum_{n=1}^{\infty}{na_n t^{n-1}}-\sum_{n=0}^{\infty}a_n t^{2n}=0$</span> and then as <span class="math-container">$\sum_{n=0}^{\infty}{(n+1)a_n t^{n}}-\sum_{n=0}^{\infty}a_n t^{2n}=0$</span>.</p>
<p>Here I don't know how to continue, I would be very grateful for a step by step solution, because I still need to solve more problems like this one.</p>
|
user170231
| 170,231 |
<p>You made a mistake in replacing <span class="math-container">$x^2$</span> with its series form. Looks like all you did was swap out <span class="math-container">$t^n$</span> for <span class="math-container">$t^{2n}$</span>. There's <a href="https://en.wikipedia.org/wiki/Multinomial_theorem" rel="nofollow noreferrer">much more to it</a> that makes solving the ODE with this method more complicated.</p>
<p>Assuming a series solution</p>
<p><span class="math-container">$$x(t) = \sum_{n\ge0} a_n t^n$$</span></p>
<p>its square, using the multinomial theorem, would be</p>
<p><span class="math-container">$$x(t)^2 = \left(\sum_{n\ge0} a_n t^n\right)^2 = \sum_{n\ge0} {a_n}^2 t^{2n} + 2 \sum_{0\le m<n<\infty} a_m a_n t^{m+n}$$</span></p>
<p>Or, as a <a href="https://en.wikipedia.org/wiki/Cauchy_product" rel="nofollow noreferrer">Cauchy product</a>, we have</p>
<p><span class="math-container">$$x(t)^2 = \sum_{m\ge0} a_m t^m \cdot \sum_{n\ge0} a_n t^n = \sum_{m\ge0} \sum_{0\le n\le m} a_n a_{m-n} t^m$$</span></p>
<p>Neither of these look like they play nice.</p>
<p><strong>Hint</strong>: Instead of trying to attack right away with series, make a <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Euler_equation" rel="nofollow noreferrer">substitution</a> to get a linear ODE, which is much more amenable to the series method.</p>
|
4,624,421 |
<p>I need to solve <span class="math-container">$x'=x^2$</span>, <span class="math-container">$x(0)=1$</span> using the power series <span class="math-container">$\sum_{n=0}^{\infty}a_n t^{n}$</span> and show that it has a solution on <span class="math-container">$(-1,1)$</span> that can be extended to <span class="math-container">$(-\infty,1)$</span>.</p>
<p>I have rewritten the ODE as <span class="math-container">$\sum_{n=1}^{\infty}{na_n t^{n-1}}-\sum_{n=0}^{\infty}a_n t^{2n}=0$</span> and then as <span class="math-container">$\sum_{n=0}^{\infty}{(n+1)a_n t^{n}}-\sum_{n=0}^{\infty}a_n t^{2n}=0$</span>.</p>
<p>Here I don't know how to continue, I would be very grateful for a step by step solution, because I still need to solve more problems like this one.</p>
|
A. P.
| 1,027,216 |
<ul>
<li><p>If you suppose that there exists a solution in power series of the form <span class="math-container">$y=\sum_{n=0}^{+\infty}a_n t^n$</span>, then <span class="math-container">$y'=\sum_{n=1}^{+\infty}na_{n}t^{n-1}=\sum_{n=0}^{+\infty}(n+1)a_{n+1}t^n$</span></p>
</li>
<li><p>By Cauchy product <span class="math-container">$y^2=(\sum_{n=0}^{+\infty}a_nt^n)\cdot (\sum_{n=0}^{+\infty}a_nt^{n})=\sum_{n=0}^{+\infty}c_nt^{n}$</span> with <span class="math-container">$c_n=\sum_{k=0}^{n}a_{k}a_{n-k}$</span>.</p>
</li>
<li><p>Thus, <span class="math-container">$x'=x^2$</span> can be written as <span class="math-container">$\sum_{n=0}^{+\infty}(n+1)a_{n+1}t^{n}=\sum_{n=0}^{+\infty}c_n t^n$</span> and so <span class="math-container">$c_n=(n+1)a_{n+1}$</span>.</p>
</li>
<li><p>Equating coefficients: if <span class="math-container">$n=0$</span>, then <span class="math-container">$c_0=a_0^2$</span>, so <span class="math-container">$a_1=a_0^2$</span>. If <span class="math-container">$n=1$</span>, then <span class="math-container">$c_1=a_0a_1+a_1a_0=2a_0a_1$</span>, so <span class="math-container">$2a_2=2a_0a_1$</span>. If <span class="math-container">$n=2$</span>, then <span class="math-container">$c_2=a_0a_2+a_1a_1+a_2a_0=2a_0a_2+a_1^2$</span>, so <span class="math-container">$3a_3=2a_0a_2+a_1^2$</span>.</p>
</li>
</ul>
<p>Try to guess a formula for <span class="math-container">$a_n$</span> inductively and then use the initial condition to find the particular solution.</p>
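<p>Running the recurrence $a_{n+1}=c_n/(n+1)$ with $a_0=x(0)=1$ shows that every coefficient equals $1$, i.e. $x(t)=\sum_{n\ge0} t^n = \frac{1}{1-t}$ on $(-1,1)$ (a sketch):</p>

```python
N = 10
a = [1.0]  # a_0 = x(0) = 1 from the initial condition
for n in range(N):
    c_n = sum(a[k] * a[n - k] for k in range(n + 1))  # Cauchy-product coefficient
    a.append(c_n / (n + 1))

# every coefficient comes out equal to 1
assert all(abs(v - 1.0) < 1e-12 for v in a)
```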
|
2,111,734 |
<p>a) Show that $f^2 - (f')^2 = 0$</p>
<p>I tried to solve by doing $f=\pm f'$ not sure what to do from here.</p>
|
Nitin Uniyal
| 246,221 |
<p>$(f-f')(f+f')=0$</p>
<p>$\implies f'=f $ or $f'=-f$.</p>
<p>Try solving the two IVP's:</p>
<p>$1$. $f'(x)-f(x)=0$ with $f(0)=f'(0)=0$.</p>
<p>$2$. $f'(x)+f(x)=0$ with $f(0)=f'(0)=0$.</p>
<p>Both have the unique solution $f(x)=0$.</p>
|
250,651 |
<p>(<a href="https://math.stackexchange.com/questions/1940157/coend-of-mathscrdf-bullet-g-bullet?noredirect=1#comment3982852_1940157">Crosspost</a> from stack)</p>
<p>Given categories $\mathscr{C}$ and $\mathscr{D}$ and functors $F,G: \mathscr{C} \to \mathscr{D}$, we can form a bifunctor
$$\mathscr{D}(F(\bullet), G(\bullet)): \mathscr{C}^\text{op} \times \mathscr{C} \to \mathsf{Set}$$
and the end of this functor is the set of natural transformations from $F \Rightarrow G$. (I guess we need $\mathscr{C}$ to be small in order to guarantee there is a <em>set</em> of such natural transformations, in general.)</p>
<p>Can I say anything about the coend? Is it some familiar thing?</p>
|
Todd Trimble
| 2,926 |
<p>I'm not sure I have much to add beyond my comment, but I might add a point of view (which could in turn provide some search terms). </p>
<p>I'd situate this construction within the bicategory of (small) categories, profunctors/bimodules, and transformations between them. Recall that a profunctor $R: C \nrightarrow D$ is a functor of the form $C^{op} \times D \to Set$ (conventions may differ), composed much in the way relations are, via the formula </p>
<p>$$(C \stackrel{R}{\nrightarrow} D \stackrel{S}{\nrightarrow} E)(c, e) = \int^{d: D} R(c, d) \times S(d, e).$$ </p>
<p>(If you like, you can consider the bicategory of profunctors as biequivalent to a <em>strict</em> 2-category whose objects are small categories and whose morphisms $C \to D$ are given by cocontinuous functors $Set^C \to Set^D$.) </p>
<p>This bicategory is compact closed in an evident bicategorical sense: we have a symmetric monoidal bicategory whose tensor at the object level is given by cartesian product of small categories, and each object $C$ has a monoidal dual given by the opposite category $C^{op}$. For each $C$, the unit $\eta_C: 1 \nrightarrow C^{op} \times C$ is given by $\hom_C: C^{op} \times C \to Set$. (In the cocontinuous functor picture, it's the unique (up to isomorphism) cocontinuous functor $Set \to Set^{C^{op} \times C}$ that takes the terminal object $1$ to $\hom_C$.) The counit $\epsilon_C: C \times C^{op} \nrightarrow 1$ may also be described by a hom-functor, but it is probably more illuminating to think of it in terms of the cocontinuous functor picture, given by taking the coend $\int^C: Set^{C^{op} \times C} \to Set$. </p>
<p>Since we are working in a compact closed (bi)category, we can expect certain resonances with constructions in other compact closed categories, such as the category of finite-dimensional vector spaces. The construction in question is a profunctor composite </p>
<p>$$1 \stackrel{\eta_C}{\nrightarrow} C^{op} \times C \stackrel{F^{op} \times G}{\nrightarrow} D^{op} \times D \stackrel{\epsilon_{C^{op}}}{\nrightarrow} 1$$ </p>
<p>which is certainly akin to trace operations in linear algebra. Thus, in linear algebra over a field $k$, we have the notion of trace of an endomorphism $f: V \to V$, which we can form categorically as the composite: </p>
<p>$$\text{Tr}(f) = \left(k \stackrel{\eta_V}{\to} V^\ast \otimes V \stackrel{1 \otimes f}{\to} V^\ast \otimes V \stackrel{eval_V}{\to} k \right)$$ </p>
<p>where the first map $\eta_V$ takes $1 \in k$ to $\sum_{i = 1}^n f^i \otimes e_i$ (here $\{e_1, \ldots, e_n\}$ is a basis of $V$ and $f^i$ is the dual basis; the expression $\sum_{i=1}^n f^i \otimes e_i$ is independent of basis). Similarly, we speak of the trace of an endoprofunctor $B: C \nrightarrow C$; after a brief Yoneda-lemma type calculation, one finds that the composite </p>
<p>$$1 \stackrel{\eta_C}{\nrightarrow} C^{op} \times C \stackrel{1 \otimes B}{\nrightarrow} C^{op} \times C \stackrel{\epsilon_{C^{op}}}{\nrightarrow} 1$$ </p>
<p>is the profunctor $1 = 1^{op} \times 1 \to Set$ taking the unique object of $1$ to $\text{Tr}(B) = \int^{c: C} B(c, c)$. </p>
<p>Thus we could also describe your construction as the trace of the endoprofunctor or endobimodule $B: C \nrightarrow C$ defined by $B(c, d) = \hom_D(Fc, Gd)$. Possibly this gives a useful search term. </p>
<p>Usually such traces are challenging to calculate explicitly (for example, determining the trace of an identity functor can be nontrivial). Among the properties of trace formally deducible from compact closed structure is $\text{Tr}(B \circ B') \cong \text{Tr}(B' \circ B)$. </p>
|
29,016 |
<p>Suppose I want to compute $f(1)\vee f(2) \vee \ldots \vee f(10^{10})$, but I know <em>a priori</em> that $f(n)$ is <code>True</code> for some $n \ll 10^{10}$ with high probability. For example, <code>f = PrimeQ</code>.</p>
<p>One way to do this is to write: <code>Or @@ (f /@ Range[1, 10^10])</code>, but that would involve allocating memory for $10^{10}$ elements, as well as computing <code>f</code> unnecessarily. (I overcame the latter problem by using <code>Hold</code>, but the memory problem still stands).</p>
<p><strong>Question:</strong> Is there a way to compute <code>Or[f[1], ..., f[10^10]]</code> without allocating memory for $10^{10}$ booleans?</p>
<p>I've done some research: it seems like <a href="https://mathematica.stackexchange.com/questions/838/functional-style-using-lazy-lists/885#885">Functional style using lazy lists?</a> might work, but I'm wondering if there is a shorter solution -- one that does not involve defining streams?</p>
|
Leonid Shifrin
| 81 |
<h3>Streams and iterators</h3>
<p>Using streams would certainly be one of the most elegant ways to do this. In any case, you will at least need an iterator for your sequence of numbers. An iterator abstraction is useful because it separates the iteration over your sequence from the stuff you want to do with individual elements, so that you can implement them independently. It also inverts control: rather than actively iterating over your sequence, you get the next number on demand. This leads to more modular code and better abstractions.</p>
<h3>Iterator for a sequence of numbers</h3>
<p>Here is one possibility. First, define an iterator for equidistant numbers:</p>
<pre><code>ClearAll[makeIterator];
makeIterator[min_,max_,step_]:=
Module[{current=min},
With[{curr=current},
current=If[current+step<=max,current+step,Null];curr
]&
];
</code></pre>
<p>it can be used as</p>
<pre><code>iter = makeIterator[1, 10, 1];
iter[]
iter[]
iter[]
(*
1
2
3
*)
</code></pre>
<h3>Implementation of <code>Or</code></h3>
<p>Now, here is a possible implementation of <code>Or</code>:</p>
<pre><code>ClearAll[lazyOr];
lazyOr[iter_,f_]:=
While[
True,
(If[#1=!=Null,If[f[#1],Return[True]],Return[False]]&)[
iter[]
]
]
</code></pre>
<p>and here is how we can use it:</p>
<pre><code>lazyOr[makeIterator[1, 10, 1], # > 10 &]
(* False *)
lazyOr[makeIterator[1, 10, 1], # > 9 &]
(* True *)
lazyOr[makeIterator[1, 10^10, 1], PrimeQ] // AbsoluteTiming
(* {0.000977, True} *)
</code></pre>
<p>Again, I want to stress that this is just a light-weight version of the lazy streams, where we had to make <code>Or</code> less trivial because it should do the iteration. The full lazy stream construction postpones iteration until we actually request an element, and is more elegant in that sense.</p>
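<p>For comparison (a Python sketch, not part of the original answer): the built-in <code>any</code> over a generator expression is exactly a lazy <code>Or</code> — it consumes elements on demand and stops at the first <code>True</code>:</p>

```python
def is_prime(m):
    # naive trial division, enough for a demo
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Stops at m = 2; never materializes the 10^10-element range
result = any(is_prime(m) for m in range(1, 10 ** 10))

# A False case has to scan the whole (small) range
none_big = any(m > 10 for m in range(1, 11))
```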
|
106,560 |
<p>Mochizuki has recently announced a proof of the ABC conjecture. It is far too early to judge its correctness, but it builds on many years of work by him. Can someone briefly explain the philosophy behind his work and comment on why it might be expected to shed light on questions like the ABC conjecture?</p>
|
Olaf Teschke
| 100,979 |
<p>For the sake of completeness, let me add the references of the published version in Publ. RIMS that appeared earlier in March this year (should be rather a comment, but the references are too long for that):</p>
<p><em>Mochizuki, Shinichi</em>, <a href="http://dx.doi.org/10.4171/PRIMS/57-1-1" rel="nofollow noreferrer"><strong>Inter-universal Teichmüller theory. I: Construction of Hodge theaters</strong></a>, Publ. Res. Inst. Math. Sci. 57, No. 1-2, 3-207 (2021). <a href="https://zbmath.org/?q=an:1465.14002" rel="nofollow noreferrer">ZBL1465.14002</a>.</p>
<p><em>Mochizuki, Shinichi</em>, <a href="http://dx.doi.org/10.4171/PRIMS/57-1-2" rel="nofollow noreferrer"><strong>Inter-universal Teichmüller theory. II: Hodge-Arakelov-theoretic evaluation</strong></a>, Publ. Res. Inst. Math. Sci. 57, No. 1-2, 209-401 (2021). <a href="https://zbmath.org/?q=an:1465.14003" rel="nofollow noreferrer">ZBL1465.14003</a>.</p>
<p><em>Mochizuki, Shinichi</em>, <a href="http://dx.doi.org/10.4171/PRIMS/57-1-3" rel="nofollow noreferrer"><strong>Inter-universal Teichmüller theory. III: Canonical splittings of the log-theta-lattice</strong></a>, Publ. Res. Inst. Math. Sci. 57, No. 1-2, 403-626 (2021). <a href="https://zbmath.org/?q=an:1465.14004" rel="nofollow noreferrer">ZBL1465.14004</a>.</p>
<p><em>Mochizuki, Shinichi</em>, <a href="http://dx.doi.org/10.4171/PRIMS/57-1-4" rel="nofollow noreferrer"><strong>Inter-universal Teichmüller theory. IV: Log-volume computations and set-theoretic foundations</strong></a>, Publ. Res. Inst. Math. Sci. 57, No. 1-2, 627-723 (2021). <a href="https://zbmath.org/?q=an:1465.14005" rel="nofollow noreferrer">ZBL1465.14005</a>.</p>
<p>(Peter Scholze indicates in his <a href="https://zbmath.org/1465.14002" rel="nofollow noreferrer">review</a> that the versions do not differ with respect to the issues Stix and he raised in 2018).</p>
|
155,237 |
<p>The axiom of constructibility $V=L$ leads to some very interesting consequences, one of which is that it becomes possible to give explicit constructions of some of the "weird" results of AC. For instance, in $L$, there is a definable well-ordering of the real numbers (since there is a definable well-ordering of the universe).</p>
<p>Since AC holds true in $L$, the ultrafilter lemma must be true. Does this mean that a definable non-principal ultrafilter on $\mathbb{N}$ exists in $L$, given by an explicit formula?</p>
<p>If so, what is the formula?</p>
|
Noah Schweber
| 8,133 |
<p>Yes, but it's not particularly nice: since $L$ has a definable well-ordering of the sets of reals (in fact, of all of $L$) coming from the $L$-hierarchy itself, there is a formula defining the "least" (in that well-ordering) ultrafilter, $U$.</p>
<p>(This $U$ isn't the only naturally definable ultrafilter in $L$; we could also consider the ultrafilter gotten by going through the infinite sets of natural numbers, in the $L$-order, and throwing in each set that we can - a sort of "greedy algorithm" for building an ultrafilter. This ultrafilter $U'$ would also be definable, and there's no reason for $U'$ to equal $U$. Other tricks exist.)</p>
<p>Note that, while of course there are only countably many definable ultrafilters, we can use the $L$-ordering to produce a definable ultrafilter with any given definable property ("the least ultrafilter with property X") - so there's basically nothing that can be said about the definable ultrafilters.</p>
|
320,019 |
<p>Assume $Y$ is non negative random variable. Prove that $X+Y$ is stochastically greater than $X$ for any random variable $X$.</p>
<p>We have to prove that $\Pr(X+Y > x) \geq \Pr(X>x)$ for all $x$.</p>
|
jdods
| 212,426 |
<p>Stochastic ordering is equivalent to the existence of a coupling which preserves the order, i.e. there is a random element $u$ such that $X(u) +Y(u) \geq X(u)$. The statements about probabilities fall out of this readily.</p>
|
2,486,334 |
<p>What is the plane graph of $|z-1|+|z-5| < 4$ ?</p>
<p>What I know is that there is nothing for $y\geq4$ or $y \leq -4$ or $x \geq 5$ or $x \leq 1$.</p>
<p>Trying to let $z=x+y i$ such that $x,y \in \mathbb{R}$ did not help either.</p>
|
Raffaele
| 83,382 |
<p>$\left| x+i y-5\right| +\left| x+i y-1\right| <4$</p>
<p>$\sqrt{(x-5)^2+y^2}+\sqrt{(x-1)^2+y^2}<4$</p>
<p>$2 \sqrt{(x-5)^2+y^2} \sqrt{(x-1)^2+y^2}+2x^2-12 x+2 y^2+26<16$</p>
<p>$\sqrt{(x-5)^2+y^2} \sqrt{(x-1)^2+y^2}<-x^2+6 x-y^2-5$</p>
<p>$\left((x-5)^2+y^2\right) \left((x-1)^2+y^2\right)<\left(-x^2+6 x-y^2-5\right)^2$</p>
<p>$x^4-12 x^3+2 x^2 y^2+46 x^2-12 x y^2-60 x+y^4+26 y^2+25<x^4-12 x^3+2 x^2 y^2+46 x^2-12 x y^2-60 x+y^4+10 y^2+25$</p>
<p>$16 y^2<0$</p>
<p>There is no $z$ which satisfies the relation</p>
<p>Hope this helps</p>
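<p>A grid-search sketch confirming the conclusion (the triangle inequality forces $|z-1|+|z-5|\ge|5-1|=4$ for every $z$, with equality only on the segment between the foci):</p>

```python
import math

def s(x, y):
    # |z - 1| + |z - 5| for z = x + iy
    return math.hypot(x - 1, y) + math.hypot(x - 5, y)

# Sample a grid around the segment [1, 5]; the minimum is exactly 4, never below
best = min(s(1 + 4 * i / 200, -2 + 4 * j / 200)
           for i in range(201) for j in range(201))
assert best >= 4 - 1e-9
```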
|
3,841,266 |
<p>Let <span class="math-container">$E$</span> be a Hilbert space and <span class="math-container">$f(x)=\| x\|$</span> for all <span class="math-container">$x\in E$</span>. Study the differentiability of <span class="math-container">$f$</span> at <span class="math-container">$0$</span> and find <span class="math-container">$df(x)h$</span> for all <span class="math-container">$h\in E$</span> and <span class="math-container">$x\neq0$</span>.</p>
<p><strong>My attempt :</strong>
<span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x=0$</span>, in fact consider <span class="math-container">$E=\mathbb R^n$</span> and <span class="math-container">$\|x\|=\sqrt{x_1^2+\cdots+x_n^2}=f(x_1,\cdots,x_n).$</span>
<span class="math-container">$\displaystyle\lim_{x_i \to 0^+} \dfrac {f(0,..,0,x_i,0,..,0)-f(0,\cdots,0)}{x_i}=1$</span> and <span class="math-container">$\displaystyle\lim_{x_i \to 0^-} \dfrac {f(0,..,0,x_i,0,..,0)-f(0,\cdots,0)}{x_i}=-1$</span>, so <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x=0$</span>.</p>
<p>Let <span class="math-container">$x\neq 0$</span>. I proved in a previous question that <span class="math-container">$ \psi : x \mapsto\|x\|^2$</span> is differentiable and that <span class="math-container">$d\psi(x)h=2\langle x,h\rangle$</span>, so <span class="math-container">$f(x)=\sqrt{\psi(x)}=\varphi\circ\psi(x)$</span> with <span class="math-container">$\varphi(t)=\sqrt t$</span> for all <span class="math-container">$t\geqslant 0$</span>, so <span class="math-container">$$df(x)h=d\varphi(\psi(x))\circ d\psi(x)h=\frac{1}{2\|x\|}2\langle x,h\rangle=\langle\frac{x}{\|x\|},h \rangle.$$</span>
Is my attempt correct?
Thanks in advance !</p>
|
Claude Leibovici
| 82,404 |
<p>In terms of elementary functions, you must certainly stay with the implicit form. However, sooner or later, you will learn that the solution is given by
<span class="math-container">$$y=\sqrt{\frac 12 W\left(2 e^{2(1+ \sin (x))}\right)}$$</span> where <span class="math-container">$W(.)$</span> is Lambert function.</p>
<p><span class="math-container">$y$</span> is maximum when <span class="math-container">$x=\frac \pi 2+2k\pi$</span> (at these points <span class="math-container">$y_{max}\sim 1.3141$</span>); <span class="math-container">$y$</span> is minimum when <span class="math-container">$x=\frac {3\pi} 2+2k\pi$</span> (at these points <span class="math-container">$y_{min}\sim 0.6529$</span>). So the range of possible values of <span class="math-container">$y$</span> is quite small.</p>
<p>Besides strict numerical methods, you can make good approximations by building the Taylor expansions around the extrema.</p>
<p>For <span class="math-container">$0 \leq x \leq \pi$</span> use
<span class="math-container">$$y=p-\frac{p }{2 \left(2 p^2+1\right)}\left(x-\frac{\pi }{2}\right)^2+\frac{p \left(2
p^4-p^2+2\right) }{12 \left(2
p^2+1\right)^3}\left(x-\frac{\pi }{2}\right)^4+O\left(\left(x-\frac{\pi }{2}\right)^6\right)$$</span> where
<span class="math-container">$$p=\sqrt{\frac 12 W\left(2 e^4\right)}\sim \frac{4372}{3327}$$</span> This leads to a maximum absolute error of <span class="math-container">$0.0025$</span> at the bounds.</p>
<p>For <span class="math-container">$\pi \leq x \leq 2\pi$</span> use
<span class="math-container">$$y=q+\frac{q }{2 \left(2 q^2+1\right)}\left(x-\frac{3 \pi }{2}\right)^2-\frac{q
\left(2 q^4+5 q^2-1\right) }{12 \left(2
q^2+1\right)^3}\left(x-\frac{3 \pi }{2}\right)^4+O\left(\left(x-\frac{3 \pi }{2}\right)^6\right)$$</span>
where
<span class="math-container">$$q=\sqrt{\frac 12 W\left(2 \right)}\sim \frac{1868}{2861}$$</span> This leads to a maximum absolute error of <span class="math-container">$0.0100$</span> at the bounds.</p>
<p>All of the above would give
<span class="math-container">$$\left(
\begin{array}{ccc}
x & \text{estimation} & \text{exact} \\
0.0 & 0.99716 & 1.00000 \\
0.2 & 1.06418 & 1.06548 \\
0.4 & 1.12640 & 1.12691 \\
0.6 & 1.18193 & 1.18210 \\
0.8 & 1.22918 & 1.22922 \\
1.0 & 1.26685 & 1.26686 \\
1.2 & 1.29396 & 1.29396 \\
1.4 & 1.30980 & 1.30980 \\
1.6 & 1.31397 & 1.31397 \\
1.8 & 1.30637 & 1.30637 \\
2.0 & 1.28718 & 1.28718 \\
2.2 & 1.25690 & 1.25692 \\
2.4 & 1.21631 & 1.21638 \\
2.6 & 1.16650 & 1.16674 \\
2.8 & 1.10884 & 1.10953 \\
3.0 & 1.04501 & 1.04667 \\
3.2 & 0.98905 & 0.98048 \\
3.4 & 0.91848 & 0.91365 \\
3.6 & 0.85138 & 0.84913 \\
3.8 & 0.79075 & 0.78992 \\
4.0 & 0.73905 & 0.73886 \\
4.2 & 0.69830 & 0.69827 \\
4.4 & 0.66999 & 0.66999 \\
4.6 & 0.65514 & 0.65514 \\
4.8 & 0.65427 & 0.65427 \\
5.0 & 0.66741 & 0.66741 \\
5.2 & 0.69409 & 0.69407 \\
5.4 & 0.73338 & 0.73320 \\
5.6 & 0.78381 & 0.78309 \\
5.8 & 0.84346 & 0.84145 \\
6.0 & 0.90991 & 0.90547 \\
6.2 & 0.98023 & 0.97218
\end{array}
\right)$$</span></p>
<p>For sure, adding one more term in each expansion would give almost the exact results.</p>
|
2,441,660 |
<p>Suppose I have $n$ observations, which are all normally distributed with the same mean (which is unknown) but each has a different variance (the different variances could be called $v_1,...,v_n$ for example, which are all assumed to be known).</p>
<p>What is the best estimate of the mean? And further, how does one compute the variance of the mean?</p>
<p>Clearly, the sample average would be the best estimate if the observations were iid, but intuitively, an observation with a relatively low variance provides more information about the mean than the others, suggesting that the plain sample mean is not the best estimate. My problem is that I don't know how to 'quantify' this intuition to produce an estimate of the mean and its variance. Any ideas on how to do this? Thanks.</p>
|
paulinho
| 474,578 |
<p>In these types of problems, it is not too hard to just consider case by case. Let's list out all the possibilities and how many ways to reorganize them:
$$1,2,6 \rightarrow 3!=6 \text{ ways}$$
$$1,3,5 \rightarrow 3!=6 \text{ ways}$$
$$1,4,4 \rightarrow 3!/2!=3 \text{ ways}$$
$$2,2,5 \rightarrow 3!/2!=3 \text{ ways}$$
$$2,3,4 \rightarrow 3!=6 \text{ ways}$$
$$3,3,3 \rightarrow 3!/3!=1 \text{ way}$$
So in total there are $25$ ways to get a sum of $9$. If you want the probability, just take this over the total number of possibilities and you get $25/6^3 = 25/216$.</p>
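<p>As a sanity check (my addition, not part of the original answer), the count can be verified by brute force:</p>

```python
from itertools import product
from fractions import Fraction

# Enumerate all 6^3 = 216 ordered outcomes of three fair dice.
rolls = list(product(range(1, 7), repeat=3))
favorable = sum(1 for r in rolls if sum(r) == 9)

print(favorable, Fraction(favorable, len(rolls)))  # 25 25/216
```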
|
2,938,372 |
<p>Seems to me like it is. There are only finitely many distinct powers of <span class="math-container">$x$</span> modulo <span class="math-container">$p$</span>, by Fermat's Little Theorem (they are <span class="math-container">$\{1, x, x^2, ..., x^{p-2}\}$</span>), and the coefficient that I choose for each of these powers can only be taken from <span class="math-container">$\{0,1,2,..., p-1\}$</span>. So essentially I'm choosing amongst <span class="math-container">$p$</span> things <span class="math-container">$p-1$</span> many times, resulting in at most <span class="math-container">$p^{p-1}$</span> distinct polynomials. </p>
<p>Yet an assignment claims that <span class="math-container">$\mathbb{Z}_p[x]$</span> is infinite. </p>
|
Arkady
| 23,522 |
<p>If <span class="math-container">$p=3$</span>, it's not true that <span class="math-container">$x=x^4$</span> in this ring <span class="math-container">$\mathbb Z_3[x]$</span>. Only the coefficients are mod <span class="math-container">$p$</span>. While <span class="math-container">$x^{p-1}$</span> might be the same as <span class="math-container">$1$</span> as a function, at least on <span class="math-container">$\mathbb Z_p^*$</span>, it's not the same as a symbol which represents multiplying an indeterminate <span class="math-container">$x$</span> by itself <span class="math-container">$p-1$</span> times.</p>
|
2,938,372 |
<p>Seems to me like it is. There are only finitely many distinct powers of <span class="math-container">$x$</span> modulo <span class="math-container">$p$</span>, by Fermat's Little Theorem (they are <span class="math-container">$\{1, x, x^2, ..., x^{p-2}\}$</span>), and the coefficient that I choose for each of these powers can only be taken from <span class="math-container">$\{0,1,2,..., p-1\}$</span>. So essentially I'm choosing amongst <span class="math-container">$p$</span> things <span class="math-container">$p-1$</span> many times, resulting in at most <span class="math-container">$p^{p-1}$</span> distinct polynomials. </p>
<p>Yet an assignment claims that <span class="math-container">$\mathbb{Z}_p[x]$</span> is infinite. </p>
|
nguyen quang do
| 300,700 |
<p>I think that you'll get a clearer view if you come back to the definition of a ring structure (commutative ring for simplification). The definitions being granted, let <span class="math-container">$A$</span> be a subring of a ring <span class="math-container">$B$</span>. For <span class="math-container">$x\in B, A[x]$</span> is defined as being the smallest subring of <span class="math-container">$B$</span> containing <span class="math-container">$A$</span> and <span class="math-container">$x$</span>. You can immediately check that <span class="math-container">$A[x]$</span> consists of all the <em>polynomial expressions</em> <span class="math-container">$y=a_0 + a_1 x+\cdots+a_nx^n$</span>, with <span class="math-container">$a_i \in A$</span> and variable degree <span class="math-container">$n$</span>. Your ring <span class="math-container">$\mathbf F_p [x]$</span> is an example, but a very restrictive one because you take <span class="math-container">$x \in \mathbf F_p$</span>, so that <span class="math-container">$\mathbf F_p [x]=\mathbf F_p$</span>.</p>
<p>The ring of <em>polynomials</em> <span class="math-container">$A[X]$</span> (note the change of notation, <span class="math-container">$X$</span> instead of <span class="math-container">$x$</span>) is a different matter. In most textbooks (such as S. Lang's "Algebra", chap. V, §2), it is presented abstractly as a "universal object", which roughly means, without too much formalism, that any ring <span class="math-container">$A[x]$</span> as above is a quotient-ring of <span class="math-container">$A[X]$</span> via a (unique) surjective ring-homomorphism which sends <span class="math-container">$X$</span> to <span class="math-container">$x$</span>. Concretely, a "prototype" of <span class="math-container">$A[X]$</span> will be the set of all sequences <span class="math-container">$\alpha:=(a_0 ,..., a_n,...)$</span> s.t. the <span class="math-container">$a_i\in A$</span> are almost all zero (i.e. all of them, except perhaps a finite number, are zero), endowed with a ring structure by putting <span class="math-container">$\alpha + \beta = (..., a_n +b_n,...)$</span> and <span class="math-container">$\alpha . \beta = (...,c_n ,...)$</span> s.t. <span class="math-container">$c_n=a_0b_n + a_1b_{n-1} +...+ a_nb_0$</span>. The "indeterminate" <span class="math-container">$X$</span> will just be the sequence <span class="math-container">$(0, 1, 0,\ldots,0,\ldots)$</span>.</p>
<p>Coming back to your example, you have a surjective ring-homomorphism <span class="math-container">$\phi:\mathbf F_p[X] \to \mathbf F_p[x]$</span>. If <span class="math-container">$\phi$</span> is injective, <span class="math-container">$x$</span> is <em>transcendental</em> over <span class="math-container">$\mathbf F_p$</span> and <span class="math-container">$\mathbf F_p[x] \cong \mathbf F_p[X]$</span> is infinite. If <span class="math-container">$\phi$</span> is not injective, <span class="math-container">$x$</span> is <em>algebraic</em> over <span class="math-container">$\mathbf F_p$</span> and <span class="math-container">$\mathbf F_p[x] \cong \mathbf F_p[X]/ker \phi$</span> is a finite field (why ?) ./.</p>
|
2,335,275 |
<p><img src="https://i.stack.imgur.com/hZN4z.jpg" alt="image 1">
<img src="https://i.stack.imgur.com/TbcGP.jpg" alt="image 2"></p>
<p>The first image explains convergence in probability as: the probability that Y-bar lies in the range (uY - c) to (uY + c) becomes arbitrarily close to 1 for any constant c > 0 as n increases.
However, the second image, from another source, says that the probability that Xn (which is equal to Y-bar?) is far from X goes to 0 for any constant c > 0 as n increases. Which explanation is correct? Or is there something I misunderstand between them? Thanks a lot.</p>
|
spaceisdarkgreen
| 397,125 |
<p>The standard definition (in my experience) of $X_n\to_P X$ is that for any $\epsilon > 0,$ $P(|X_n-X|>\epsilon) \to 0,$ i.e. your second definition. (Whether it's $>\epsilon$ or $\ge \epsilon$ is immaterial.) </p>
<p>However, that definition is equivalent to this one: For any $\epsilon>0,$ $P(|X_n-X| \le \epsilon) \to 1.$ The equivalence follows from the fact that $P(|X_n-X|\le \epsilon) = 1-P(|X_n-X| >\epsilon).$ This other form is equivalent to your first definition when you substitute $c$ for $\epsilon$ and do a little unpacking.</p>
|
2,335,275 |
<p><img src="https://i.stack.imgur.com/hZN4z.jpg" alt="image 1">
<img src="https://i.stack.imgur.com/TbcGP.jpg" alt="image 2"></p>
<p>The first image explains convergence in probability as: the probability that Y-bar lies in the range (uY - c) to (uY + c) becomes arbitrarily close to 1 for any constant c > 0 as n increases.
However, the second image, from another source, says that the probability that Xn (which is equal to Y-bar?) is far from X goes to 0 for any constant c > 0 as n increases. Which explanation is correct? Or is there something I misunderstand between them? Thanks a lot.</p>
|
Dhruv Kohli
| 97,188 |
<p>The second image is the standard definition of convergence in probability which is: A sequence of random variables $X_1, X_2, \ldots, X_n$ is said to converge to the random variable $X$ in probability if for every $\epsilon > 0$, </p>
<p>$$\lim\limits_{n \rightarrow \infty} P(|X_n-X|\geq \epsilon)=0 \equiv \lim\limits_{n\rightarrow \infty} P(|X_n-X|<\epsilon) = 1$$</p>
<p>And the first image relates to the definitions of weak law of large numbers and consistency as follows,</p>
<p>WLLN: Given iid random variables $Y_1, Y_2, \ldots, Y_n$ such that $E(Y_i) = \mu$ and $Var(Y_i) = \sigma^2 < \infty$. Define sequence of random variables $\overline{Y}_n = \frac{1}{n}\sum_{i=1}^{n}Y_i$, then for every $\epsilon > 0$,</p>
<p>$$\lim\limits_{n\rightarrow \infty}P(|\overline{Y}_n-\mu| < \epsilon) = 1$$</p>
<p>that is $\overline{Y}_n$ converges in probability to $\mu$.</p>
<p>The proof is by using the Chebyshev's inequality. For fixed $\epsilon > 0$</p>
<p>$$P(|\overline{Y}_n-\mu| < \epsilon) = P((\overline{Y}_n-\mu)^2 < \epsilon^2) \geq 1 - \frac{E((\overline{Y}_n-\mu)^2)}{\epsilon^2} = 1 - \frac{Var\overline{Y}_n}{\epsilon^2} = 1 - \frac{\sigma^2}{n}$$</p>
<p>$$\implies \lim\limits_{n\rightarrow\infty} P(|\overline{Y}_n-\mu| < \epsilon) = 1$$</p>
<p>The property summarized by the WLLN, that a sequence of the "same" sample quantity approaches a constant as $n \rightarrow \infty$, is known as consistency.</p>
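<p>As a quick numerical illustration of the WLLN (a sketch I am adding, not part of the answer), one can estimate $P(|\overline{Y}_n-\mu| < \epsilon)$ by simulation with uniform draws and watch it approach $1$ as $n$ grows:</p>

```python
import random

random.seed(0)

def coverage(n, eps=0.05, trials=1000):
    """Monte Carlo estimate of P(|Ybar_n - mu| < eps) for iid Uniform(0,1), mu = 0.5."""
    hits = 0
    for _ in range(trials):
        ybar = sum(random.random() for _ in range(n)) / n
        hits += abs(ybar - 0.5) < eps
    return hits / trials

small, large = coverage(10), coverage(1000)
print(small, large)  # the probability rises toward 1 as n grows
```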
|
256,138 |
<p>I need to generate four positive random values in the range [.1, .6] with (at most) two significant digits to the right of the decimal, and which sum to exactly 1. Here are three attempts that do not work.</p>
<pre><code>x = {.15, .35, .1, .4}; While[Total[x] != 1,
x = Table[Round[RandomReal[{.1, .6}], .010], 4]];
x = {.25, .25, .25, .25}; While[Total[x] == 1,
x = Table[Round[RandomReal[{.1, .6}], .010], 4]];
NestWhileList[Total[x],
x = Table[Round[RandomReal[{.1, .6}], .010], 4],
Plus @@ x == 1][[1]]
</code></pre>
|
Domen
| 75,628 |
<p>Because you have very tight constraints, the number of allowed points is not very large, so you can generate all of them and then sample.</p>
<pre><code>list = Flatten[
Table[If[10 <= (100 - i - j - k) <= 60, {i, j, k, 100 - i - j - k}/
100., Nothing], {i, 10, 60}, {j, 10, 60}, {k, 10, 60}], 2]
Length@list
(* 38831 *)
RandomChoice[list]
(* {0.17, 0.14, 0.4, 0.29} *)
Total@%
(* 1. *)
</code></pre>
<p>Not very clever or very efficient method, but it does the job ...</p>
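<p>For comparison, here is a rough Python analogue of the same enumerate-then-sample idea (my sketch, not from the answer); working in integer hundredths avoids floating-point totals that never hit exactly 1:</p>

```python
import random

random.seed(1)

# All 4-tuples of integer hundredths in [10, 60] summing to 100,
# i.e. values in [.1, .6] with two decimals summing to exactly 1.
valid = [(i, j, k, 100 - i - j - k)
         for i in range(10, 61)
         for j in range(10, 61)
         for k in range(10, 61)
         if 10 <= 100 - i - j - k <= 60]

print(len(valid))  # 38831, matching the Mathematica count
x = [v / 100 for v in random.choice(valid)]
print(x)
```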
|
4,014,756 |
<p>I was reading the book "Quantum Computing Since Democritus".</p>
<blockquote>
<p>"The set of ordinal numbers has the important property of being well
ordered,which means that every subset has a minimum element. This is
unlike the integers or the positive real numbers, where any element
has another that comes before it."</p>
</blockquote>
<p>Unlike integers? Let's consider the set <span class="math-container">$\{1,2,3\}$</span>. This has a minimum element.</p>
<p>Do you get what does the author wants to say here?</p>
|
Kavi Rama Murthy
| 142,385 |
<p><span class="math-container">$C_1\times C_2\times C_3...=\bigcap_{n=1}^{\infty} (C_1\times C_2\times C_3...\times C_n\times \mathbb R\times \mathbb R\times ...)$</span> and intersection of closed sets is always closed.</p>
|
4,057,959 |
<p>What is the number of N step random walks starting from a point (x0,y0) to a point (x1,y1) assuming each direction (right,left,up,down) has equal probability. I know the expression for the one dimension case is:</p>
<p><span class="math-container">$$ \binom{N}{\frac{N+(y1-y0)}{2}} $$</span></p>
<p>Is there a similar expression for two dimensions?
I saw this post (<a href="https://math.stackexchange.com/questions/2157448/probability-distribution-of-a-2d-lattice-random-walk">Probability Distribution of a 2D lattice random walk</a>) but the answer given is clearly wrong.</p>
|
Mike Earnest
| 177,399 |
<p>The linked answer has a formula, but I wanted to provide a different perspective. I am generalizing the method used <a href="https://math.stackexchange.com/a/516043/177399">by Brian M. Scott in this answer</a>.</p>
<p>Suppose the vertices your walk visits are <span class="math-container">$(x_0,y_0),(x_1,y_1),\dots,(x_N,y_N)$</span>. (I am using different notation for the final vertex). Note that</p>
<ul>
<li><p>The sums <span class="math-container">$x_0+y_0, x_1+y_1,\dots,x_N+y_N$</span> are a one-dimensional random walk from <span class="math-container">$x_0+y_0$</span> to <span class="math-container">$x_N+y_N$</span>. This is because the values change by <span class="math-container">$\pm1$</span> each time.</p>
</li>
<li><p>The differences <span class="math-container">$x_0-y_0,x_1-y_1,\dots,x_N-y_N$</span> are also a <span class="math-container">$1$</span>D random walk from <span class="math-container">$x_0-y_0$</span> to <span class="math-container">$x_N-y_N$</span>.</p>
</li>
<li><p>Any pair of <span class="math-container">$1$</span>D random walks can be generated in this fashion. You just need to check that the four possible choices of north, south, east and west for the <span class="math-container">$2$</span>D walk generate all possible pairs of <span class="math-container">$\pm1$</span> for the two <span class="math-container">$1$</span>D walks described above.</p>
</li>
</ul>
<p>It follows that the number of <span class="math-container">$2$</span>D random walks is the product of the number of ways to complete these two <span class="math-container">$1$</span>D random walks, which is
<span class="math-container">$$
\binom{N}{\frac12({N+x_0+y_0-x_N-y_N})}\times \binom{N}{\frac12({N+x_0-y_0-x_N+y_N})}.
$$</span></p>
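<p>The product formula can be sanity-checked against direct enumeration of all $4^N$ walks for small $N$ (a verification sketch I am adding, with the walk starting at the origin and ending at displacement $(dx, dy)$):</p>

```python
from itertools import product
from math import comb

def walks_formula(N, dx, dy):
    """Count N-step NSEW walks with net displacement (dx, dy), per the binomial formula."""
    s, d = N + dx + dy, N + dx - dy   # s and d always have the same parity
    if s % 2 or not (0 <= s <= 2 * N and 0 <= d <= 2 * N):
        return 0
    return comb(N, s // 2) * comb(N, d // 2)

def walks_brute(N, dx, dy):
    """Directly enumerate all 4^N walks and count those ending at (dx, dy)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return sum(1 for w in product(steps, repeat=N)
               if sum(p for p, q in w) == dx and sum(q for p, q in w) == dy)

ok = all(walks_formula(N, dx, dy) == walks_brute(N, dx, dy)
         for N in range(1, 6)
         for dx in range(-N, N + 1)
         for dy in range(-N, N + 1))
print(ok)  # True
```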
|
2,602,410 |
<p>$$\int_{0}^{\pi /2} \frac{\sin^{m}(x)}{\sin^{m}(x)+\cos^{m}(x)}\, dx$$</p>
<p>I've tried dividing by $\cos^{m}(x) $, and subbing out the $\ 1+\cot^{m}(x) $ with $\csc^{n}(x) $ for some $n$, but to no avail. I've also tried adding and subtracting $\cos^{m}(x)$ to the numerator, and substituting $x$ by $\pi-y$, but these techniques haven't helped either.</p>
|
Anurag A
| 68,092 |
<p>Let $$I=\int_{0}^{\pi /2} \frac{\sin^{m}(x)}{\sin^{m}(x)+\cos^{m}(x)} \, dx.$$ Then using the substitution $u=\frac{\pi}{2}-x$, we get
$$I=\int_{0}^{\pi /2} \frac{\cos^{m}(u)}{\sin^{m}(u)+\cos^{m}(u)} \, du=\int_{0}^{\pi /2} \frac{\cos^{m}(x)}{\sin^{m}(x)+\cos^{m}(x)} \, dx.$$
Thus
$$2I=\int_{0}^{\pi /2} \frac{\sin^{m}(x)+\cos^{m}(x)}{\sin^{m}(x)+\cos^{m}(x)} \, dx=\frac{\pi}{2}.$$
Hence $I=\frac{\pi}{4}$.</p>
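<p>A numerical check (my addition, not part of the answer) confirms the value $\pi/4$ independently of $m$, here with a simple composite Simpson rule:</p>

```python
from math import sin, cos, pi

def integrand(x, m):
    return sin(x) ** m / (sin(x) ** m + cos(x) ** m)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

vals = {m: simpson(lambda x: integrand(x, m), 0.0, pi / 2) for m in (1, 2, 3, 7)}
print(vals)  # each value is ~ pi/4 = 0.7853981...
```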
|
175,535 |
<p>I have <code>data1</code> and a target point <code>targetPts</code>, and want to find the closest point from the data. As you can see from below, the 29th point from the data is the closest point and thus the value of <code>data1[[29]]</code> is same as the desired value, which is <code>{0.67033, 0.84245}</code>.</p>
<pre><code>data1 = BlockRandom[SeedRandom[7]; RandomReal[1, {30, 2}]];
targetPts = {0.68, 0.75};
data1[[29]]
Flatten@Nearest[data1, targetPts]
</code></pre>
<p>Now my problem has changed a bit. I was given <code>data1data2</code> and I need to use it as the input. My desired output for this case is <code>{0.67033, 0.84245, 0.177696, 0.49873}</code>. The first 2 elements are used in the <code>Nearest</code> search, while all 4 elements are the desired output. It's just like a lookup function.</p>
<pre><code>data2 = BlockRandom[SeedRandom[8]; RandomReal[1, {30, 2}]];
data1data2 = ArrayFlatten[{{data1, data2}}];
data1data2[[29]]
</code></pre>
<p>How can I get it done? Many thanks!</p>
|
Coolwater
| 9,754 |
<p>You could pass a <code>Rule</code> to <code>Nearest</code>:</p>
<pre><code>First[Nearest[data1 -> data1data2, targetPts, 1]]
</code></pre>
<blockquote>
<p>{0.67032993, 0.84245042, 0.17769644, 0.49872995}</p>
</blockquote>
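<p>For readers outside Mathematica, the same lookup can be sketched in Python (my analogue, with made-up stand-in data): pick the row whose first two coordinates are nearest to the target, and return the whole row.</p>

```python
def nearest_row(rows, target):
    """Return the full row whose first two entries are closest (Euclidean) to target."""
    return min(rows, key=lambda r: (r[0] - target[0]) ** 2 + (r[1] - target[1]) ** 2)

# Hypothetical stand-in for data1data2: each row is [x, y, extra1, extra2].
rows = [
    [0.10, 0.20, 0.500, 0.600],
    [0.67, 0.84, 0.177, 0.498],
    [0.90, 0.10, 0.300, 0.400],
]
best = nearest_row(rows, (0.68, 0.75))
print(best)  # the second row: its (x, y) part is nearest to the target
```

<p>This mirrors <code>Nearest[data1 -> data1data2, targetPts, 1]</code>: the distance is computed on the first two columns only, while the full row is returned.</p>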
|
1,873,648 |
<p>Let $A=\{1,2,3,...,2^n\}$. Consider the greatest odd factor (not necessarily prime) of each element of A and add them. What does this sum equal? </p>
|
KaliMa
| 50,707 |
<p>For positive integer $k$, let $L(k)$ be the largest odd factor of $k$. Then we have:</p>
<p>$L(k) = L(k/2)$ if $k$ is even, and $L(k) = k$ if $k$ is odd.</p>
<p>We compute:</p>
<p>$$S(n) = \sum_{k=1}^{2^n} L(k)$$</p>
<p>$$S(n) = \sum_{k=1}^{2^{n-1}} L(2k) + \sum_{k=1}^{2^{n-1}} L(2k-1)$$</p>
<p>$$S(n) = \sum_{k=1}^{2^{n-1}} L(2k) + \sum_{k=1}^{2^{n-1}} (2k-1)$$</p>
<p>$$S(n) = (\sum_{k=1}^{2^{n-1}} L(2k)) + 4^{n-1}$$</p>
<p>$$S(n) = (\sum_{k=1}^{2^{n-1}} L(k)) + 4^{n-1}$$</p>
<p>$$S(n) = S(n-1) + 4^{n-1}$$</p>
<p>$$S(n) = 1 + \sum_{k=1}^{n} 4^{k-1} = \frac{4^{n}+2}{3}$$</p>
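<p>The closed form is easy to confirm by brute force (a check I am adding, not part of the answer):</p>

```python
def largest_odd_factor(k):
    while k % 2 == 0:
        k //= 2
    return k

def S_brute(n):
    """Sum of the largest odd factors of 1, 2, ..., 2^n."""
    return sum(largest_odd_factor(k) for k in range(1, 2 ** n + 1))

ok = all(S_brute(n) == (4 ** n + 2) // 3 for n in range(0, 12))
print(ok)  # True
```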
|
393,280 |
<p>I am reading about the construction of the Affine Grassmannian in Dennis Gaitsgory's seminar <a href="http://www.math.harvard.edu/%7Egaitsgde/grad_2009/SeminarNotes/Oct13(AffGr).pdf" rel="noreferrer">notes</a>,
and there are some commutative algebra facts that I apparently cannot figure out by myself, like the following:</p>
<blockquote>
<p>Let <span class="math-container">$k$</span> be an algebraically closed field, <span class="math-container">$A$</span> a finite type <span class="math-container">$k$</span>-algebra and <span class="math-container">$A\subseteq B$</span> some (non finite type) extension. Let <span class="math-container">$M$</span> be a finitely generated <span class="math-container">$A[[t]]$</span>-module which is flat over <span class="math-container">$A$</span> (actually finite free over <span class="math-container">$A$</span>) and <span class="math-container">$t$</span> acts nilpotently on <span class="math-container">$M$</span>. Then, <span class="math-container">$\operatorname{Tor}^{A[[t]]}_n(M,B[[t]])=0$</span> for all <span class="math-container">$n>0$</span>.</p>
</blockquote>
<p>Intuitively, since we get <span class="math-container">$B[[t]]$</span> from <span class="math-container">$A[[t]]$</span> by extending only the coefficient ring <span class="math-container">$A$</span> in a "free" way, flatness over <span class="math-container">$A$</span> of <span class="math-container">$M$</span> should suffice, but I can't make it into a proof. Notice that <span class="math-container">$A[[t]]\otimes _A B\ne B[[t]]$</span> in general. Am I missing something obvious?</p>
<h3>Edit:</h3>
<p>After reading it again, it seems that the only additional hypothesis I missed is that <span class="math-container">$t$</span> acts nilpotently on <span class="math-container">$M$</span> which I think does not follow from what I have written. The relevant place in the notes is the first line on page 7 and a bit before that.</p>
|
KotelKanim
| 28,129 |
<p><strong>New attempt:</strong></p>
<p>Lemma 24.6.6 in Vakil's AG notes states:</p>
<blockquote>
<p>Suppose $N$ is an $R$-module and $t\in R$ is not a zero divisor on $N$. Then for any $R/(t)$-module $M$, we have
$$
\operatorname{Tor}_i^R(M,N)=\operatorname{Tor}_i^{R/(t)}(M,N/(t)).
$$
(Actually it is stated with the roles of $M$ and $N$ reversed, but Tor is symmetric.)</p>
</blockquote>
<p>Now, in our case, $R=A[[t]]$, $t^n$ is not a zero divisor on $N=B[[t]]$ and $M$ is actually an $A[[t]]/(t^n)$-module so we get</p>
<p>$$
\operatorname{Tor}_i^{A[[t]]}(M,B[[t]])=\operatorname{Tor}_i^{A[[t]]/(t^n)}(M,B[[t]]/(t^n)).
$$</p>
<p>Take a free resolution $F_{\bullet}\to B$ over $A$. Tensoring with $A[[t]]/(t^n)$ over $A$, we get
$$
F_{\bullet}\otimes _A A[[t]]/(t^n)\to B[[t]]/(t^n)
$$</p>
<p>Which is a free resolution of $B[[t]]/(t^n)$ over $A[[t]]/(t^n)$. Now, tensor with $M$ over $A[[t]]/(t^n)$, we get the complex</p>
<p>$$
(F_{\bullet}\otimes _A A[[t]]/(t^n)) \otimes _{A[[t]]/(t^n)} M \cong F_{\bullet}\otimes _A M
$$</p>
<p>On the one hand, its homology groups are precisely the Tor-s we want to calculate; on the other, it is exact by $A$-flatness of $M$, so its homology is zero.</p>
<hr>
<p><strong>Edit (final, I hope...)</strong>: The second step is exactly the flat base change of Tor, but on the <em>other</em> module. The map $A\to A[[t]]/(t^n)$ is flat so we get
$$
\operatorname{Tor}_i^{A}(M,B)=\operatorname{Tor}_i^{A[[t]]/(t^n)}(M,B[[t]]/(t^n))
$$</p>
<p>Where we used $B\otimes _A A[[t]]/(t^n)=B[[t]]/(t^n)$ (for this, the reduction mod $t^n$ was necessary!) and now the LHS is zero by $A$-flatness of $M$. </p>
|
3,460,749 |
<p>I'm an undergraduate student currently studying mathematical analysis. </p>
<p>Our professor uses Zorich's Mathematical Analysis, but I found the text too difficult to understand. </p>
<p>After exploring some textbooks, I found that Abbott was easier to follow, so I studied Abbott until I realized that there's a significant amount of content in Zorich that Abbott doesn't cover.</p>
<p>So I was wondering if there's a book out there that covers as much content as Zorich but is more readable?</p>
<p>Thank you for any help.</p>
|
Grothendix
| 884,032 |
<p>True, Zorich is difficult to understand.
BUT I still think that if you go through with it, you will learn a lot in the end.
However, there is also another book you could consider, which is <strong>Mathematical Analysis</strong> by Apostol. Or consider Courant’s book.
They are both good books.
Good luck!</p>
|
1,901,302 |
<p>Let $R$ be a principal ideal ring, meaning that every ideal is generated by one element. Given a subset $A\subseteq R$, it is generally not possible to choose one element $x\in A$ so that $I(\{x\})=I(A)$, i.e. the ideal generated by $A$ need not be generated by a single element of $A$.</p>
<p>As an example one can consider $R=\mathbb Z$ and $A=\{2,3\}$, then $I(A)=\mathbb Z$.</p>
<p>My question is whether one can choose a finite subset $X$ of $A$ so that $I(A)=I(X)$.</p>
<p>In $\mathbb Z$ for example this is always possible:</p>
<p>The ideal generated by a set is always the ideal generated by the GCD of that set. If $X\subseteq Y$ then $\mathrm{GCD}(Y)≤\mathrm{GCD}(X)$. If we have a finite subset $X$ of $A$ and the GCDs are not equal, there must exist an element of $A$ so that appending it to $X$ will decrease the GCD. This can only happen a finite number of times, since we are decreasing a positive number by at least $1$ each time.</p>
<p>I don't expect this argument to work in general principal ideal rings, since it is using a total ordering structure on the ideal space for which every ideal has only finitely many "lesser" ideals.</p>
|
Keith Kearnes
| 310,334 |
<p>"My question is whether one can choose a finite subset $X$ of $A$ so that $I(A)=I(X)$."</p>
<p>Yes. If $I(A) = (p)$, then $p$ is a linear combination of elements of $A$, say
$p=r_1a_1+r_2a_2+\cdots +r_ka_k$ with $r_i\in R$ and $a_i\in A$. Now take $X = \{a_1,\ldots,a_k\}$.</p>
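<p>In $\mathbb Z$ this is the familiar Bézout picture; here is a small illustrative Python sketch (my addition, not part of the answer) that produces such a finite combination for $A=\{6,10,15\}$, which generates $(1)$ although no single element does:</p>

```python
from math import gcd
from functools import reduce

def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

A = [6, 10, 15]                    # together these generate (1) in Z
g12, s, t = ext_gcd(A[0], A[1])    # fold in the first two elements...
g, u, v = ext_gcd(g12, A[2])       # ...then the third
coeffs = [s * u, t * u, v]         # finite combination realizing the generator

assert g == reduce(gcd, A) == 1
print(coeffs, sum(c * a for c, a in zip(coeffs, A)))  # the combination sums to 1
```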
|
4,283,063 |
<p>For positive real numbers <span class="math-container">$a,b ,c$</span> prove that: <span class="math-container">$a^{b+c}b^{c+a}c^{a+b} ≤ (a^ab^bc^c)^2$</span></p>
<p>My working</p>
<p><span class="math-container">$c^c≥b^c≥b^a$</span></p>
<p><span class="math-container">$(c^c)^2 ≥ b^{a+c}$</span></p>
<p>Similarly
<span class="math-container">$(c^c)^2≥ a^{c+b} and (c^c)^2≥c^{a+b}$</span></p>
<p>Therefore
<span class="math-container">$(c^c)^6 ≥b^{a+c}c^{a+b}a^{c+b}$</span></p>
<p>But I do not know how to use it.</p>
|
user10354138
| 592,552 |
<p><strong>Hint</strong>: Consider <span class="math-container">$g(x):=x^\alpha\sin(x^\beta)$</span> for <span class="math-container">$x\neq 0$</span>, suitable <span class="math-container">$\alpha,\beta$</span>.</p>
|
4,310,529 |
<p>I need a formula that will give me all points of intersection of a random ellipse and circle (ok, not fully random: the center of the circle lies on the ellipse).</p>
<p>I need step by step solution (algorithm how to find it) if this is possible.</p>
|
Blabbo the Verbose
| 994,827 |
<p>Choose a coordinate system where the <span class="math-container">$x$</span> axis is parallel to the ellipse major semiaxis, <span class="math-container">$y$</span> axis parallel to the ellipse minor semiaxis, and origin where they intersect ("center of ellipse"). If <span class="math-container">$a$</span> is the semi-major axis and <span class="math-container">$b$</span> the semi-minor axis, the ellipse satisfies
<span class="math-container">$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 \tag{1a}\label{BtV1a}$$</span>
Assuming we have a nondegenerate ellipse (an ellipse with nonzero area), then <span class="math-container">$a, b \gt 0$</span>, and we can write above as
<span class="math-container">$$x^2 + \frac{a^2}{b^2} y^2 - a^2 = 0 \tag{1b}\label{BtV1b}$$</span></p>
<p> <br>A circle of radius <span class="math-container">$r$</span> (<span class="math-container">$r \ge 0$</span>) centered at <span class="math-container">$x = x_0$</span>, <span class="math-container">$y = y_0$</span> fulfills
<span class="math-container">$$(x - x_0)^2 + (y - y_0)^2 = r^2 \tag{2a}\label{BtV2a}$$</span>
which we can also expand into
<span class="math-container">$$x^2 - 2 x_0 x + y^2 - 2 y_0 y + x_0^2 + y_0^2 - r^2 = 0 \tag{2b}\label{BtV2b}$$</span></p>
<p> <br>To find the points <span class="math-container">$(x, y)$</span> where the circle and the ellipse intersect, you need to solve the pair of equations, for example <span class="math-container">$\eqref{BtV1b}$</span> and <span class="math-container">$\eqref{BtV2b}$</span>.</p>
<p>The system of equations has essentially form
<span class="math-container">$$\left\lbrace ~ \begin{aligned}
x^2 + C_1 y^2 + C_2 & = 0 \\
x^2 + C_3 x + y^2 + C_4 y + C_5 & = 0 \\
\end{aligned} \right . \tag{3}\label{BtV3}$$</span>
where
<span class="math-container">$$C_1 = \frac{a^2}{b^2}, ~
C_2 = -a^2, ~
C_3 = -2 x_0, ~
C_4 = -2 y_0, ~
C_5 = x_0^2 + y_0^2 - r^2$$</span>
One efficient way of solving <span class="math-container">$\eqref{BtV3}$</span> is to subtract the first equation from the second and solve for <span class="math-container">$x$</span>. You'll find exactly one algebraic solution for <span class="math-container">$x$</span> (that depends on <span class="math-container">$y$</span>). Substitute it back into the first equation, and you have a quartic equation in <span class="math-container">$y$</span>, which has zero, one, two, three, or four real solutions. Solve that, then substitute the numeric value or values of <span class="math-container">$y$</span> back into the algebraic solution for <span class="math-container">$x$</span>, and you have the solution: anywhere from zero to four points of intersection.</p>
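<p>Here is a minimal numerical sketch of the procedure in Python (my addition, not part of the answer). Instead of solving the quartic symbolically, it parametrizes the ellipse as $(a\cos t, b\sin t)$ and locates sign changes of the circle equation along the parameter, refining each by bisection; tangential (double-root) intersections would need extra care:</p>

```python
from math import cos, sin, pi

def ellipse_circle_intersections(a, b, x0, y0, r, n=3600, tol=1e-12):
    """Intersections of x^2/a^2 + y^2/b^2 = 1 with the circle of radius r
    centered at (x0, y0): scan the ellipse parameter for sign changes of
    the circle equation, then refine each bracket by bisection."""
    def g(t):
        x, y = a * cos(t), b * sin(t)
        return (x - x0) ** 2 + (y - y0) ** 2 - r ** 2

    pts = []
    for i in range(n):
        lo, hi = 2 * pi * i / n, 2 * pi * (i + 1) / n
        if g(lo) * g(hi) < 0:
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            t = (lo + hi) / 2
            pts.append((a * cos(t), b * sin(t)))
    return pts

# Example: circle centered at the ellipse point (a, 0), as in the question.
pts = ellipse_circle_intersections(a=2.0, b=1.0, x0=2.0, y0=0.0, r=1.0)
print(pts)  # two symmetric intersection points
```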
|
598,838 |
<p><span class="math-container">$11$</span> out of <span class="math-container">$36$</span>? I got this by writing down the number of possible outcomes (<span class="math-container">$36$</span>) and then counting how many of the pairs had a <span class="math-container">$6$</span> in them: <span class="math-container">$(1,6)$</span>, <span class="math-container">$(2,6)$</span>, <span class="math-container">$(3,6)$</span>, <span class="math-container">$(4,6)$</span>, <span class="math-container">$(5,6)$</span>, <span class="math-container">$(6,6)$</span>, <span class="math-container">$(6,5)$</span>, <span class="math-container">$(6,4)$</span>, <span class="math-container">$(6,3)$</span>, <span class="math-container">$(6,2)$</span>, <span class="math-container">$(6,1)$</span>. Is this correct?</p>
|
waj cheema
| 92,867 |
<p>The probability of rolling a certain number can be calculated either by listing the favourable outcomes, or by finding the probability of <em>not</em> rolling that number and subtracting the answer from 1. The probability of any event occurring is always a number between 0 and 1.</p>
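<p>Applied to the original question (a verification sketch I am adding), both the direct count and the complement $1-(5/6)^2$ give $11/36$:</p>

```python
from itertools import product
from fractions import Fraction

# All 36 ordered outcomes of two fair dice.
pairs = list(product(range(1, 7), repeat=2))
with_six = sum(1 for p in pairs if 6 in p)

p_direct = Fraction(with_six, len(pairs))
p_complement = 1 - Fraction(5, 6) ** 2   # 1 - P(no six in either roll)
print(with_six, p_direct, p_complement)  # 11 11/36 11/36
```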
|
815,963 |
<p>I am trying to understand how to do proof by induction for inequalities. The step that I don't fully understand is making an assumption that n=k+1. For equations it is simple. For example:</p>
<blockquote>
<p><strong>Prove that 1+2+3+...+n = $ \frac {n(n+1)}{2} $ is valid for $ i \ge 1 $</strong> </p>
</blockquote>
<p>1) Base case for n=1. (...)</p>
<p>2) Assume that equation is true for n=k. (...)</p>
<p>3) Proof for n=k+1.<br>
1+2+3+...+k+(k+1) = $ \frac {(k+1)[(k+1)+1]}{2} $</p>
<p>So we add +1 to k on both sides.</p>
<p>For inequalities:<br>
<strong>Prove that $ 4n \lt 2^n $ is valid for $ n \ge 5 $</strong></p>
<p>1) Base case. (...)</p>
<p>2) n=k assumption. (...)</p>
<p>3) n=k+1 proof. </p>
<p>How I would do this (add 1 to k on both sides):<br>
$ 4(k+1) < 2^{k+1} $</p>
<p>How it's done in tutorials:<br>
$ 4(k+1) < 2^k + 4 $<br>
$ 4k+4 < 2^k + 4 $</p>
<p>Why do they add an extra 4 to the RHS instead of increasing k by 1?</p>
|
doppz
| 48,746 |
<p>For the first part, you haven't done anything, you just rewrote everything. Most importantly, you didn't prove the statement. Induction works by first showing that $P(1)$ holds then assuming $P(n)$ and proving $P(n+1)$. From there, it follows that the $n$ such that $P(n)$ is true must be all of $\mathbf{N}$. </p>
<p>In our case $P(n)$ is the statement that $1+2+\dots+n=\frac{n(n+1)}{2}$. If we look at $n=1$, we see that $\frac{n(n+1)}{2}=\frac{1\cdot 2}{2}=1$. So that $P(1)$ holds. </p>
<p>Now, we let $n$ be such that $P(n)$ holds and prove that $P(n+1)$ must also be true. Let's consider $1+2+\dots+n+(n+1).$ Recall that we know $P(n)$ is true, so, we can substitute $\frac{n(n+1)}{2}$ in for $1+\dots+n$. Therefore, $$1+2+\dots+n+(n+1)=\frac{n(n+1)}{2}+n+1=\frac{n^2+n+2n+2}{2}=\frac{(n+1)((n+1)+1)}{2}.$$ Notice that the far right hand side is exactly what we wanted to show, that is, that $P(n+1)$ holds.</p>
<p>So, we've shown that the set of $n$ such that $P(n)$ holds is all of $\mathbf{N}$, that is, all $n\geq 1$. </p>
<hr>
<p>Now, for the second question. We let $Q(n)$ be the statement $4n<2^n$ for $n\geq 5$ (I'm assuming yours is a typo, clearly $8=4\cdot 2< 2^2=4$ is not true.) </p>
<p>So, let's do the base case: $2^5=32$ and $4\cdot 5=20$ so clearly we have $4\cdot 5< 2^5$. Thus, $Q(5)$ is true. </p>
<p>Now, let $n$ be such that $Q(n)$ holds. We show that $Q(n+1)$ holds too. Observe that $4(n+1)=4n+4<2^n+4$ by the assumption. However, let's recall that $4=2^2<2^n$ for $n> 2$. Therefore, $$4(n+1)<2^n+2^n=2(2^n)=2^{n+1}.$$ </p>
<p>Notice that we've only shown that $Q(n)$ implies $Q(n+1)$ when $n>2$. But, since $5>2$, it follows that $Q(n)$ is true for all $n\geq 5$. Which is exactly what we wanted to show. </p>
<p>Hopefully this helps.</p>
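<p>Both statements are easy to spot-check numerically (my addition, just a sanity check alongside the induction proofs):</p>

```python
# Check 1 + 2 + ... + n == n(n+1)/2 and 4n < 2^n (for n >= 5) over a range.
ok_sum = all(sum(range(1, n + 1)) == n * (n + 1) // 2 for n in range(1, 500))
ok_ineq = all(4 * n < 2 ** n for n in range(5, 500))
print(ok_sum, ok_ineq)  # True True
```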
|
345,260 |
<p>I'm trying to prove that there exists a multiplicative linear functional in $\ell_\infty^*$ that extends the limit functional that is defined in $c$ (i.e., I'm looking for a linear functional $f \colon \ell_\infty \to \mathbb K$ such that $f( (x_n * y_n) ) = f (x_n) f(y_n)$, for every $(x_n), (y_n) \in \ell_\infty$, and such that $f((x_n)) = \lim x_n$, if $(x_n)$ converges). </p>
<p>I found a lot of references saying that it exists but I can't find a detailed proof. The usual Hahn-Banach approach doesn't work because I get a shift invariant functional, and that's inconsistent with multiplicativity. The references I found suggest that using ultrafilters to define the limit should work. I found this interesting proof of the converse: every multiplicative linear functional in $\ell_\infty^*$ is a limit along an ultrafilter:
<a href="https://math.stackexchange.com/questions/175553/every-multiplicative-linear-functional-on-ell-infty-is-the-limit-along-an">Every multiplicative linear functional on $\ell^{\infty}$ is the limit along an ultrafilter.</a> it assumes $\mathbb K = \mathbb R$, but I could adapt so I think (assuming I made no mistakes) it works for $\mathbb C$ as well. </p>
<p>In sum, I'm looking for a proof of the result here
<a href="http://planetmath.org/BasicPropertiesOfALimitAlongAFilter" rel="nofollow noreferrer">http://planetmath.org/BasicPropertiesOfALimitAlongAFilter</a>
but it should work for $\mathbb C$ and the $(x_n) \mapsto \mathcal F -\lim (x_n)$ functional should be continuous. Is there a good reference for this? or is it just trivial?</p>
<p>thanks</p>
|
Berci
| 41,488 |
<p>$\lim_{\mathcal F}$ is of course continuous, and has norm $1$, as
$$|\lim_{\mathcal F}(x_n)|\le \|(x_n)\|_\infty\,,$$
and on constant sequences it holds with equality.</p>
|
2,389,324 |
<p>For a continuous Function $f$, prove that:</p>
<p>$$\lim_{x\to0^+}\int_{x}^{2x} \frac{1}{t} f(t) dt = \ln(2)f(0)$$ </p>
<p>I have already concluded that since $f$ is continuous, it is therefore integrable. Moreover, I assumed there is a function $F$,
$$F(x)=\int_{x}^{2x} f(s)ds,$$
in order to simplify the limit expression by partial integration. Unfortunately that gave me no solution and I am stuck.</p>
|
mouse_wheel
| 471,024 |
<p>You can use epsilons: since $f$ is continuous, for a given $\epsilon$ you can find $\eta$ such that for all $x \leq \eta$ we have $f(0)-\epsilon \leq f(x) \leq f(0) + \epsilon$.
Then you integrate this (divided by $t$) between $x$ and $2x$ for $x$ sufficiently small, and you get a bound between $\ln(2)(f(0) -\epsilon)$ and $\ln(2)(f(0) +\epsilon)$.</p>
|
2,389,324 |
<p>For a continuous Function $f$, prove that:</p>
<p>$$\lim_{x\to0^+}\int_{x}^{2x} \frac{1}{t} f(t) dt = \ln(2)f(0)$$ </p>
<p>I have already concluded that since $f$ is continuous, it is therefore integrable. Moreover, I assumed there is a function $F$,
$$F(x)=\int_{x}^{2x} f(s)ds,$$
in order to simplify the limit expression by partial integration. Unfortunately that gave me no solution and I am stuck.</p>
|
tattwamasi amrutam
| 90,328 |
<p>$$\int_x ^{2x} \frac{1}{t} f(t) dt-\int_x^{2x}\frac{1}{t}f(0)dt=\int_x^{2x}\frac{1}{t}\left(f(t)-f(0)\right)dt$$
Since $f$ is continuous at $0$, given $\epsilon \gt 0$, there is $\delta \gt 0$ such that $|x| \lt \delta \implies |f(x)-f(0)| \lt \frac{\epsilon}{\ln 2}$. For $|x| \lt \delta$, since $|t| \lt |x| \lt \delta$, we have $$\left|\int_x^{2x}\frac{1}{t}\left(f(t)-f(0)\right)dt\right| \le \int_x^{2x}\left|\frac{1}{t}\left(f(t)-f(0)\right)\right|dt\lt \frac{\epsilon}{\ln 2}\int_x^{2x}\frac{1}{t}dt=\epsilon$$</p>
|
3,717,172 |
<blockquote>
<p>The random variable <span class="math-container">$X$</span> has an exponential distribution, given by the density:
<span class="math-container">$$
f_X(x) =
\begin{cases}
e^{-x}, & x\ge 0,\\
0, & x<0.
\end{cases}
$$</span>
Find the distribution and density function of the random variable <span class="math-container">$Y=\max\left\{X^2,2-X\right\}$</span>.</p>
</blockquote>
<p>I honestly just don't know how to start here.</p>
<ol>
<li>Why should we be able to calculate this if we don't know what exactly X is?</li>
<li>What are X^2 and 2-X? I don't understand that.</li>
<li>And even knowing all of this, how should we be able to find the distribution and density function of Y?</li>
</ol>
<p>Thanks</p>
|
tommik
| 791,458 |
<p>A graphic approach</p>
<p>Note: as already noted in a comment, <span class="math-container">$f(x)=e^{-x}\mathbb{1}_{[0;+\infty)}(x)$</span></p>
<p>First of all note that <span class="math-container">$X\sim Exp(1)$</span> and so</p>
<ul>
<li><p><span class="math-container">$\mathbb{P}[X \leq x]=F_X(x)=1-e^{-x}$</span></p>
</li>
<li><p><span class="math-container">$\mathbb{P}[X > x]=S_X(x)=e^{-x}$</span></p>
</li>
</ul>
<p>Let's have a look at the graph</p>
<p><a href="https://i.stack.imgur.com/oMk4q.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oMk4q.jpg" alt="enter image description here" /></a></p>
<p>Now it is clear what the transformation function is and what the domain of Y is:</p>
<p><span class="math-container">$y \in [1;+\infty)$</span></p>
<p>To derive <span class="math-container">$ CDF_Y$</span> let's use the definition</p>
<p><span class="math-container">$F_Y(y)=\mathbb{P}[Y \leq y]=\mathbb{P}[2-y \leq X \leq \sqrt{y}]=S_X(2-y)-S_X(\sqrt{y})=e^{y-2}-e^{-\sqrt{y}}$</span></p>
<p>This, obviously when <span class="math-container">$y \in[1;2)$</span></p>
<p>For the other interval, the story is the same but simpler:</p>
<p><span class="math-container">$F_Y(y)=\mathbb{P}[Y \leq y]=\mathbb{P}[X^2\leq y]=\mathbb{P}[X\leq \sqrt{y}]=F_X(\sqrt{y})=1-e^{-\sqrt{y}}$</span></p>
<p>Summarizing...</p>
<p><span class="math-container">$F_Y(y)=[e^{y-2}-e^{-\sqrt{y}}]\mathbb{1}_{[1;2)}(y)+[1-e^{-\sqrt{y}}]\mathbb{1}_{[2;+\infty)}(y)$</span></p>
<p>derive and you get your density</p>
<p><em><strong>Further basic explanation, answering the latest comment of the OP</strong></em></p>
<p>Your density is a known law: a negative exponential with <span class="math-container">$\theta=1$</span>. This is easy to verify, because the negative exponential density is the following:</p>
<p><span class="math-container">$f(x,\theta)=\theta e^{-\theta x}$</span></p>
<p><span class="math-container">$x \geq 0$</span> and zero elsewhere. So we immediately know which are the CDF and the survival function (anyway they can be easily calculated with the integral)</p>
<ul>
<li>YOUR CDF</li>
</ul>
<p><span class="math-container">$F_X(x)=\mathbb{P}[X \leq x]=1-e^{-x}$</span></p>
<ul>
<li>YOUR SURVIVAL FUNCTION</li>
</ul>
<p><span class="math-container">$S_X(x)=\mathbb{P}[X > x]=1-F_X(x)=e^{-x}$</span></p>
<p>Now for the sake of simplicity let's suppose we are looking at the following transformation:</p>
<p><span class="math-container">$Y=X^2$</span></p>
<p>So, by definition, we have</p>
<p><span class="math-container">$F_Y(y)=\mathbb{P}[Y \leq y]=\mathbb{P}[X^2\leq y]=\mathbb{P}[X \leq \sqrt{y}]=$</span></p>
<p>(by definition)</p>
<p><span class="math-container">$=F_X(\sqrt{y})=1-e^{-\sqrt{y}}$</span></p>
<p>If you have understood this procedure, you can apply it to your exercise that is a little bit more complex by the fact that the transformation function is</p>
<p><span class="math-container">$Y=Max[2-X;X^2]$</span></p>
<p>but with the help of the grafh I showed you should be able to understand the solution. If not, I suggest you to challenge with easier exercise to become familiar with the procedure.
Also the Fundamental Tranformation Theorem could help but the procedure I showed you is very useful to understand the issue you are facing.</p>
<p>2nd Edit:</p>
<p><span class="math-container">$\mathbb{P}[2-y < X < \sqrt{y}]=F_X(1)-F_X(2-y)+F_X(\sqrt{y})-F_X(1)=$</span></p>
<p><span class="math-container">$=F_X(\sqrt{y})-F_X(2-y)=1-S_X(\sqrt{y})-[1-S_X(2-y)]=$</span></p>
<p><span class="math-container">$=S_X(2-y)-S_X(\sqrt{y})$</span></p>
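<p>Since this kind of transformation is easy to get wrong, a quick Monte Carlo check of the CDF derived above may be reassuring (a simulation with <code>random.expovariate</code>; the tolerance is rough, purely illustrative):</p>

```python
import math
import random

random.seed(0)
N = 200_000
samples = [random.expovariate(1.0) for _ in range(N)]  # X ~ Exp(1)

def F_Y(y):
    # CDF derived above: one piece on [1, 2), another on [2, +inf).
    if 1 <= y < 2:
        return math.exp(y - 2) - math.exp(-math.sqrt(y))
    return 1 - math.exp(-math.sqrt(y))

for y in (1.2, 1.5, 1.9, 3.0, 6.0):
    empirical = sum(max(x * x, 2 - x) <= y for x in samples) / N
    assert abs(empirical - F_Y(y)) < 0.01
```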
|
4,563,725 |
<p>Is it possible for two non real complex numbers a and b that are squares of each other? (<span class="math-container">$a^2=b$</span> and <span class="math-container">$b^2=a$</span>)?</p>
<p>My answer is that it is not possible, because for $a^2$ to equal $b$ means that the argument of $b$ is twice arg(a), and for $b^2$ to equal $a$ means that arg(a) = 2 arg(b); but the given answer is that it is possible.</p>
<p>How is it possible when arg(b) = 2.arg(a) and arg(a) = 2.arg(b) contradict each other?</p>
|
whoisit
| 1,094,230 |
<p>Let <span class="math-container">$a = re^{i \theta}$</span> <br />
Then, <span class="math-container">$a = b^2 = (a^2)^2 = a^4 = r^4e^{i (4 \theta)}$</span></p>
<p>What we need is, first <span class="math-container">$r=r^4$</span> <br />
That is, <span class="math-container">$r =0$</span> or <span class="math-container">$r = 1$</span> (since <span class="math-container">$r$</span> must be real)<br />
<span class="math-container">$r=0$</span> is trivial, discard that.</p>
<p>Next, we need <span class="math-container">$e^{i \theta} = e^{i(4 \theta)}$</span><br />
That is, <span class="math-container">$4\theta = \theta$</span> or <span class="math-container">$4 \theta = \theta + 2n\pi, n\in\mathbb Z\setminus \{0\}.$</span></p>
<p><em><strong>You were missing this <span class="math-container">$\boldsymbol{2 n \pi}$</span> part in your attempt.</strong></em></p>
<p>From here, we get <span class="math-container">$\theta =0 $</span> or <span class="math-container">$3\theta = 2n\pi$</span> <br />
<span class="math-container">$\theta=0$</span> means real numbers, discard that.</p>
<p>That leaves us with <span class="math-container">$3\theta = 2n\pi$</span> <br />
For <span class="math-container">$n=1, \theta = 2\pi/3$</span><br />
For <span class="math-container">$n=2, \theta = 4\pi/3$</span><br />
Other <span class="math-container">$n$</span> values can be discarded as for those <span class="math-container">$\theta \not \in [0, 2 \pi)$</span></p>
<p>Thus we're left with <span class="math-container">$e^{i (2\pi/3)}$</span> and <span class="math-container">$e^{i (4\pi/3)}$</span></p>
<p>They are both solutions, and these are the only non-real solutions for <span class="math-container">$a = b^2$</span> and <span class="math-container">$b = a^2$</span>. They are the two complex cube roots of unity.</p>
<p>In real numbers, <span class="math-container">$a=b=0$</span> and <span class="math-container">$a=b=1$</span> are further solutions, as we discovered above. The entire solution set in complex numbers is <br />
<span class="math-container">$(a, b) \in \{(0,0), (1,1),
(\omega, \omega^2),
(\omega^2, \omega)\}$</span>
where <span class="math-container">$\omega = e^{i (2\pi/3)}$</span></p>
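<p>A short numeric confirmation that the two non-real cube roots of unity indeed square into each other (using <code>cmath</code>; purely illustrative):</p>

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # omega = e^{i 2pi/3}
a, b = w, w * w                    # the two non-real solutions
assert abs(a * a - b) < 1e-12      # a^2 = b
assert abs(b * b - a) < 1e-12      # b^2 = a
assert abs(a.imag) > 0.5           # genuinely non-real
```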
|
3,326,310 |
<p>From Serge Lang's Linear Algebra:</p>
<blockquote>
<p>Let <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span>, <span class="math-container">$x_3$</span> be numbers. Show that:</p>
<p><span class="math-container">$$\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}=(x_2-x_1)(x_3-x_1)(x_3-x_2)$$</span></p>
</blockquote>
<p>The matrix presented above seems to be the specific case of Vandermonde determinant:</p>
<p><span class="math-container">$$
\begin{vmatrix}
1 & x_1 & ... & x_1^{n-1}\\
1 &x_2 & ... & x_2^{n-1}\\
... & ... & ... & ...\\
1 & x_n & ... & x_n^{n-1}
\end{vmatrix}=\prod_{1 \leq i < j \leq n}(x_j - x_i)
$$</span></p>
<hr />
<p>I'm trying to prove the specific case to then generalize it for arbitrary Vandermonde matrices.</p>
<h1>My incomplete "proof"</h1>
<p>Since determinant is a multilinear alternating function, it can be seen that adding a scalar multiple of one column (resp. row) to other column (resp. row) does not change the value (I omitted the proof to avoid too much text).</p>
<p>Thus considering that <span class="math-container">$x_1$</span> is a scalar, we can multiply each column but the last one of our specific Vandermonde matrix by <span class="math-container">$x_1$</span> and then starting from <strong>right to left</strong> subtract <span class="math-container">$n-1$</span>th column from <span class="math-container">$n$</span>:</p>
<p><span class="math-container">$$\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}=\begin{vmatrix}
x_1 & 0 & 0 \\
x_1 & x_2 - x_1 & x^{2}_2 - x^{2}_1\\
x_1 & x_3 - x_1 & x^{2}_3 - x^{2}_1
\end{vmatrix}$$</span></p>
<p>Then using the expansion rule along the first row (since all the elements in it but <span class="math-container">$x_1$</span> are zero):</p>
<p><span class="math-container">$$... =x_1\begin{vmatrix}
x_2 - x_1 & x^{2}_2 - x^{2}_1\\
x_3 - x_1 & x^{2}_3 - x^{2}_1
\end{vmatrix}=(x_1x_2-x^2_1)(x^2_{3}-x^2_1)-(x^{2}_2x_1 - x^{3}_1)(x_3x_1 - x^2_1)$$</span></p>
<p>The first expansion seems interesting because it contains <span class="math-container">$x_2 - x_1$</span> and <span class="math-container">$x_3 - x_1$</span> (which are first two factors of specific Vandermonde matrix), but further expansion does not give satisfying results.</p>
<p><strong>Question</strong>:</p>
<p>Is this a good simple start of inductively "proving" relation between Vandermonde matrix and its factors? If so what does it lack to show the complete result? Did I make mistake during evaluation?</p>
<p>Thank you!</p>
|
Monadologie
| 669,687 |
<p>"Since determinant is a multilinear alternating function, it can be seen that <strong>adding</strong> a scalar multiple of one column (resp. row) to other column (resp. row) does not change the value (I omitted the proof to avoid too much text)
" is right. But
<span class="math-container">$$
\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}
\neq
\begin{vmatrix}
x_1 & 0 & 0 \\
x_1 & x_2 - x_1 & x^{2}_2 - x^{2}_1\\
x_1 & x_3 - x_1 & x^{2}_3 - x^{2}_1
\end{vmatrix}
\neq
(x_1x_2-x^2_1)(x^2_{3}-x^2_1)-(x^{2}_2x_1 - x^{3}_1)(x_3x_1 - x^2_1)
$$</span>
Remember that when you multiply a row or a column by <span class="math-container">$\lambda$</span>, the determinant is multiplied by <span class="math-container">$\lambda$</span>. And be careful when distributing <span class="math-container">$x_1$</span>. We have
<span class="math-container">\begin{align}
\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}
&=
x_1
\begin{vmatrix}
x_1 & 0 & 0 \\
x_1 & x_2 - x_1 & x^{2}_2 - x^{2}_1\\
x_1 & x_3 - x_1 & x^{2}_3 - x^{2}_1
\end{vmatrix}\\
&= x_1^2
\begin{vmatrix}
x_2 - x_1 & x^{2}_2 - x^{2}_1\\
x_3 - x_1 & x^{2}_3 - x^{2}_1
\end{vmatrix}\\
&=
x_1^2((x_2 - x_1)(x^{2}_3 - x^{2}_1) - (x^{2}_2 - x^{2}_1)(x_3 - x_1))\\
&\neq (x_1x_2-x^2_1)(x^2_{3}-x^2_1)-(x^{2}_2x_1 - x^{3}_1)(x_3x_1 - x^2_1)
\end{align}</span>
Keep in mind that we are trying to have the simplest possible factors.
Here, you can do
<span class="math-container">\begin{align}
\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}&=_{L_3 \leftarrow L_3 - L_2 \text{ and } L_2 \leftarrow L_2 - L_1}
\begin{vmatrix} 1 & x_1 & x_1^2\\ 0 &x_2 -x_1& (x_2 - x_1)(x_2+x_1)\\ 0 & x_3 - x_2 &
(x_3 - x_2)(x_3+x_2) \end{vmatrix}\\
&=_{L_3 \leftarrow L_3 - L_2} (x_2 - x_1)(x_3-x_2)
\begin{vmatrix} 1 & x_1 & x_1^2\\ 0 &1& x_2 + x_1\\ 0 & 0 &
x_3 -x_1 \end{vmatrix}\\
&=(x_2 - x_1)(x_3-x_2)(x_3-x_1)
\end{align}</span></p>
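<p>A small check of the $3\times 3$ identity with exact arithmetic (a cofactor expansion written out by hand; a spot check for sample values, not a proof):</p>

```python
from fractions import Fraction

def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

x1, x2, x3 = map(Fraction, (2, 5, 11))
V = [[1, x1, x1 ** 2], [1, x2, x2 ** 2], [1, x3, x3 ** 2]]
assert det3(V) == (x2 - x1) * (x3 - x1) * (x3 - x2)
```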
|
3,326,310 |
<p>From Serge Lang's Linear Algebra:</p>
<blockquote>
<p>Let <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span>, <span class="math-container">$x_3$</span> be numbers. Show that:</p>
<p><span class="math-container">$$\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}=(x_2-x_1)(x_3-x_1)(x_3-x_2)$$</span></p>
</blockquote>
<p>The matrix presented above seems to be the specific case of Vandermonde determinant:</p>
<p><span class="math-container">$$
\begin{vmatrix}
1 & x_1 & ... & x_1^{n-1}\\
1 &x_2 & ... & x_2^{n-1}\\
... & ... & ... & ...\\
1 & x_n & ... & x_n^{n-1}
\end{vmatrix}=\prod_{1 \leq i < j \leq n}(x_j - x_i)
$$</span></p>
<hr />
<p>I'm trying to prove the specific case to then generalize it for arbitrary Vandermonde matrices.</p>
<h1>My incomplete "proof"</h1>
<p>Since determinant is a multilinear alternating function, it can be seen that adding a scalar multiple of one column (resp. row) to other column (resp. row) does not change the value (I omitted the proof to avoid too much text).</p>
<p>Thus considering that <span class="math-container">$x_1$</span> is a scalar, we can multiply each column but the last one of our specific Vandermonde matrix by <span class="math-container">$x_1$</span> and then starting from <strong>right to left</strong> subtract <span class="math-container">$n-1$</span>th column from <span class="math-container">$n$</span>:</p>
<p><span class="math-container">$$\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}=\begin{vmatrix}
x_1 & 0 & 0 \\
x_1 & x_2 - x_1 & x^{2}_2 - x^{2}_1\\
x_1 & x_3 - x_1 & x^{2}_3 - x^{2}_1
\end{vmatrix}$$</span></p>
<p>Then using the expansion rule along the first row (since all the elements in it but <span class="math-container">$x_1$</span> are zero):</p>
<p><span class="math-container">$$... =x_1\begin{vmatrix}
x_2 - x_1 & x^{2}_2 - x^{2}_1\\
x_3 - x_1 & x^{2}_3 - x^{2}_1
\end{vmatrix}=(x_1x_2-x^2_1)(x^2_{3}-x^2_1)-(x^{2}_2x_1 - x^{3}_1)(x_3x_1 - x^2_1)$$</span></p>
<p>The first expansion seems interesting because it contains <span class="math-container">$x_2 - x_1$</span> and <span class="math-container">$x_3 - x_1$</span> (which are first two factors of specific Vandermonde matrix), but further expansion does not give satisfying results.</p>
<p><strong>Question</strong>:</p>
<p>Is this a good simple start of inductively "proving" relation between Vandermonde matrix and its factors? If so what does it lack to show the complete result? Did I make mistake during evaluation?</p>
<p>Thank you!</p>
|
DIEGO R.
| 297,483 |
<p>Using induction on <span class="math-container">$n$</span> is maybe the most convenient way. Let's proof that if we assume that the result is true for <span class="math-container">$n - 1$</span> then the result is true for <span class="math-container">$n$</span>. So, consider the Vandermonde matrix
<span class="math-container">$$
V =\begin{pmatrix}
1 & x_{1} & x_{1}^{2} \cdots & x_{1}^{n-1} \\
1 & x_{2} & x_{2}^{2} \cdots & x_{2}^{n-1} \\
& & \vdots \\
1 & x_{n -1} & x_{n - 1}^{2} \cdots & x_{n-1}^{n-1} \\
1 & x_{n} & x_{n}^{2} \cdots & x_{n}^{n-1}
\end{pmatrix}
$$</span>
Now consider <span class="math-container">$\det(V)$</span> as a polynomial of degree <span class="math-container">$n-1$</span> in the variable <span class="math-container">$x_{n}$</span>. We can compute <span class="math-container">$\det(V)$</span> using the <span class="math-container">$n$</span>-th column by means of the expansion of <span class="math-container">$\det(V)$</span> in terms of cofactors, so
<span class="math-container">$$
\det(V) = a_{n-1} x_{n}^{n-1} + a_{n-2} x_{n}^{n-2} + \cdots + a_{0} \implies
a_{n-1} =
\det
\begin{pmatrix}
1 & x_{1} & x_{1}^{2} \cdots & x_{1}^{n-2} \\
1 & x_{2} & x_{2}^{2} \cdots & x_{2}^{n-2} \\
& & \vdots \\
1 & x_{n -1} & x_{n - 1}^{2} \cdots & x_{n-1}^{n-2}
\end{pmatrix}\\
\text{(Note that $a_{n-1}$ is the leading coeficient of the polynomial $\det(V)$)}
$$</span>
In addition, <span class="math-container">$x_{1}, x_{2}, \dots, x_{n-1}$</span> are the roots of <span class="math-container">$\det(V)$</span>, so we can write
<span class="math-container">$$
\det(V) = a_{n-1}(x_{n}- x_{1}) (x_{n} - x_{2}) \cdots (x_{n} - x_{n-1}).
$$</span>
By induction hypothesis we have the
<span class="math-container">$$
a_{n-1} = \prod_{1 \leq i < j \leq n-1} (x_{i} - x_{j}).
$$</span>
Hence,
<span class="math-container">$$
\det(V) = \left[\prod_{1 \leq i < j \leq n-1}(x_{i} - x_{j})\right] (x_{n} - x_{1}) \dots (x_{n} - x_{n-1})
= \prod_{1 \leq i < j \leq n} (x_{i} - x_{j}).
$$</span></p>
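<p>The general formula (with the convention $\prod_{1 \leq i < j \leq n}(x_j - x_i)$) can be spot-checked for small $n$ against a brute-force Leibniz determinant in exact rational arithmetic (a check for sample values, not a proof):</p>

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    # Leibniz formula; fine for the small n used here.
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        sign = -1 if inversions % 2 else 1
        prod = Fraction(1)
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

xs = [Fraction(v) for v in (1, 3, 4, 7)]
V = [[x ** j for j in range(len(xs))] for x in xs]   # 4x4 Vandermonde matrix
expected = Fraction(1)
for j in range(len(xs)):
    for i in range(j):
        expected *= xs[j] - xs[i]
assert det(V) == expected
```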
|
3,267,216 |
<p>An urn contains equal number of green and red balls. Suppose you are playing the following game. You draw one ball at random from the urn and note its colour. The ball is then placed back in the urn, and the selection process is repeated. Each time a green ball is picked you get 1 Rupee. The first time you pick a red ball, you pay 1 Rupee and the game ends. Your expected income from this game is..</p>
<p>The answer given is 0 but shouldn't it be positive as the as the income rises if you consecutively draw the green ball? Please help me clarify this doubt</p>
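<p>A short simulation supports the stated answer of $0$ (each draw is green with probability $1/2$; the income is the number of greens earned minus the final rupee paid):</p>

```python
import random

random.seed(0)
N = 400_000
total = 0
for _ in range(N):
    income = 0
    while random.random() < 0.5:   # drew a green ball: earn 1 and continue
        income += 1
    income -= 1                    # first red ball: pay 1 and stop
    total += income

assert abs(total / N) < 0.01       # sample mean is close to 0
```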
|
Michael Rozenberg
| 190,319 |
<p>Let <span class="math-container">$a$</span> and <span class="math-container">$b$</span> be positives and <span class="math-container">$a=bx$</span>.</p>
<p>Thus, <span class="math-container">$$\frac{\frac{a+b}{2}}{\sqrt{ab}}=\frac{m}{n}$$</span> or
<span class="math-container">$$x+1=\frac{2m\sqrt{x}}{n}.$$</span>
Now, solve this quadratic equation.</p>
<p>Can you end it now?</p>
<p>I got that the needed ratio it's
<span class="math-container">$$\left(\frac{m}{n}+\sqrt{\frac{m^2}{n^2}-1}\right)^2$$</span> or
<span class="math-container">$$\left(\frac{m}{n}-\sqrt{\frac{m^2}{n^2}-1}\right)^2$$</span> </p>
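<p>The claimed ratio can be verified numerically: taking $a/b$ equal to either root above does reproduce the prescribed ratio $m/n$ of the arithmetic and geometric means (sample values, purely illustrative):</p>

```python
import math

m, n = 5.0, 3.0                                   # any m/n >= 1 works
t = m / n
for ratio in ((t + math.sqrt(t * t - 1)) ** 2,    # the two roots of the quadratic
              (t - math.sqrt(t * t - 1)) ** 2):
    b = 1.0
    a = ratio * b
    am_over_gm = ((a + b) / 2) / math.sqrt(a * b)
    assert abs(am_over_gm - t) < 1e-9
```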
|
1,266,674 |
<p>There are $2^{10} =1024$ possible $10$-letter strings in which each letter is either an $A$ or a $B$. Find the number of such strings that do not have more than $3$ adjacent letters that are identical.</p>
|
Jack D'Aurizio
| 44,121 |
<p>Any allowed string can be seen as a sequence of blocks made of $A$ or $B$ only, whose length is between $1$ and $3$. For instance:</p>
<p>$$ ABBABABBBA \longrightarrow (A)(BB)(A)(B)(A)(BBB)(A)$$
can be associated with the identity: $10=1+2+1+1+1+3+1$. Hence we just have to count in how many ways we can write $10$ as a sum of integers between $1$ and $3$, then multiply such number by two (since we may start with an $A$ or a $B$). So we have that the number of allowed strings is given by twice a <a href="http://oeis.org/wiki/Tribonacci_numbers" rel="nofollow">tribonacci number</a>:</p>
<p>$$ 2\cdot[x^{10}]\left(\frac{1}{1-(x+x^2+x^3)}\right)=2\cdot 274=\color{red}{548}. $$</p>
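<p>A brute-force enumeration of all $2^{10}$ strings confirms the count (a string is allowed iff no window of four consecutive letters is constant):</p>

```python
from itertools import product

def ok(s):
    # Allowed iff there is no run of 4 (or more) identical adjacent letters.
    return all(s[i:i + 4] != s[i] * 4 for i in range(len(s) - 3))

count = sum(ok(''.join(p)) for p in product('AB', repeat=10))
assert count == 548
```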
|
1,965,040 |
<p>Find the cyclic subgroups $\langle\rho_1\rangle$, $\langle\rho_2\rangle$, and $\langle\mu_1\rangle$ of $S_3$.<a href="https://i.stack.imgur.com/AOnTC.png" rel="nofollow noreferrer"> Elements of $S_3$.</a></p>
<p>I know the answer is supposed to be $\langle\rho_1\rangle = \langle\rho_2\rangle = \{\rho_0, \rho_1, \rho_2 \}$ and $\langle\mu_1\rangle = \{ \rho_0, \mu_1 \}$.
I'm not sure if my work shows this. For $\langle\rho_1\rangle, \langle\rho_2\rangle$, is it sufficient to say that</p>
<p>$(231)(123)= (231)$</p>
<p>$(231)(231)= (312)$ & $(312)(231)=(123)$ & $(123)(231)=(231)$</p>
<p>We can say $\langle\rho_1\rangle = \langle\rho_2\rangle = \{\rho_0, \rho_1, \rho_2 \}$ </p>
|
CyclotomicField
| 464,974 |
<p>The importance of ideals comes from the fact that they're the kernels of a ring homomorphism as consequence of the <a href="https://en.wikipedia.org/wiki/Isomorphism_theorems#Theorem_A_(rings)" rel="nofollow noreferrer">first isomorphism theorem</a>. From perspective if I have a ring homomorphism <span class="math-container">$\varphi : R \rightarrow Q$</span> then we can interpret the cosets as the preimages of singletons, ie <span class="math-container">$\varphi^{-1}(q)$</span> for <span class="math-container">$q \in Q$</span>, which is <span class="math-container">$q + I$</span> in the usual quotient construction.</p>
<p>This gives us the motivation to define the multiplication of ideals in a way that respects this construction. So by <span class="math-container">$(q + I)(p + I)$</span> we mean there exist <span class="math-container">$r,s \in R$</span>
with <span class="math-container">$\varphi(r)=q$</span> and <span class="math-container">$\varphi(s)=q$</span> and we are interested in the preimage <span class="math-container">$\varphi^{-1}(\varphi(r)\varphi(s))=\varphi^{-1}(\varphi(rs))$</span> which is <span class="math-container">$(qp+I)$</span>.</p>
<p>This is why multiplication of cosets is <em>not</em> defined setwise. Even simple ideas like <span class="math-container">$(2) \subset \mathbb{Z}$</span> cause trouble immediately since <span class="math-container">$(2)(2)$</span> which should simply be <span class="math-container">$(2)$</span> since it's <span class="math-container">$0\cdot 0$</span> but setwise it would be <span class="math-container">$(4)$</span>, which as you noted isn't even a coset.</p>
|
1,461,311 |
<p>In axiomatic approach to real numbers, that is by defining them to be the complete ordered field, one is expected to prove every theorem and solve every problem by using ultimately only the axioms. I was trying to solve a Spivak's calculus problem that asked to show that the sum of any number of real numbers is still meaningful without using any parentheses. I think we have to use induction principle to solve this problem. But I don't know how to prove the induction principle using axioms of the reals.</p>
|
Domates
| 8,065 |
<blockquote>
<p>The fundamental principle behind induction is that if $S$ is some
subset of $\mathbb{N}$ with the property that <br>
1. $S$ contains $1$<br>
2. whenever $S$ contains a natural number $n$, it also contains $n+1$, then it must be that $S = \mathbb{N}$.(Abbot, Understanding Analysis)</p>
</blockquote>
<p>The axioms of the reals in Spivak's Calculus and the induction principle are unrelated, so the latter can't be derived from those axioms.</p>
|
2,227,135 |
<p>I have this information from my notes:<span class="math-container">$\def\rk{\operatorname{rank}}$</span></p>
<p>Let <span class="math-container">$A ∈ \mathbb{R}^{m\times n}$</span>. Then the following are equivalent: </p>
<ul>
<li><span class="math-container">$\rk(A) = n$</span></li>
<li><span class="math-container">$\rk(A^TA) = n$</span></li>
<li><span class="math-container">$A^TA$</span> is invertible.</li>
</ul>
<p>In my case, <span class="math-container">$n = 1$</span>, so I would need to show <span class="math-container">$\rk(vv^T) = \rk(v^Tv) = \rk(v) = 1$</span>. Suppose <span class="math-container">$A^TAx = 0$</span>. Because <span class="math-container">$A^TA$</span> is invertible, I can multiply both sides by its inverse to get <span class="math-container">$x = 0$</span>, meaning the nullity of <span class="math-container">$A^TA$</span> is <span class="math-container">$0$</span>. Can I apply the same logic to <span class="math-container">$AA^T$</span>? i.e. I have some matrix <span class="math-container">$B = A^T$</span>, so <span class="math-container">$B^TB$</span> = <span class="math-container">$AA^T $</span> has a nullity of <span class="math-container">$0$</span> (and therefore they have the same rank by the rank-nullity theorem)?</p>
|
copper.hat
| 27,978 |
<p>Note that $v v^T v = \|v\|^2 v \neq 0$, hence $\operatorname{rk} (v v^T) \ge 1$.</p>
<p>Note that $v v^T x = (v^T x) v \in \operatorname{sp} \{ v \}$ for all $x$. Hence ${R (v v^T)} = \operatorname{sp} \{ v \}$ and hence
$\operatorname{rk} (v v^T) = 1$.</p>
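<p>A concrete check, with exact arithmetic, that $vv^T$ has rank exactly $1$ for a sample nonzero $v$: all $2\times 2$ minors vanish (so the rank is at most $1$), while $vv^Tv=\|v\|^2v\neq 0$ (so the rank is at least $1$). Illustrative only:</p>

```python
from fractions import Fraction

v = [Fraction(x) for x in (2, -1, 3)]
M = [[vi * vj for vj in v] for vi in v]          # M = v v^T

# Every 2x2 minor of M vanishes, so rank(M) <= 1.
idx = range(3)
for i in idx:
    for k in idx:
        for j in idx:
            for l in idx:
                if i < k and j < l:
                    assert M[i][j] * M[k][l] - M[i][l] * M[k][j] == 0

# M v = ||v||^2 v != 0, so rank(M) >= 1.
norm2 = sum(vi * vi for vi in v)
Mv = [sum(M[i][j] * v[j] for j in idx) for i in idx]
assert norm2 != 0 and Mv == [norm2 * vi for vi in v]
```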
|
2,965,993 |
<p>Suppose you have the surface <span class="math-container">$\xi$</span> defined in <span class="math-container">$\mathbb{R}^3$</span> by the equation:
<span class="math-container">$$ \xi :\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1 $$</span>
For <span class="math-container">$ x \geq 0$</span> , <span class="math-container">$ y \geq 0$</span> and <span class="math-container">$ z \geq 0$</span>. Now take any point <span class="math-container">$P \in \xi$</span> and consider the tangent plane (<span class="math-container">$\pi_t)$</span> to <span class="math-container">$\xi$</span> at <span class="math-container">$P$</span>. Calculate the minimum volume of the region determined by the <span class="math-container">$xy$</span>, <span class="math-container">$yz$</span>, <span class="math-container">$xz$</span> planes and <span class="math-container">$\pi_t$</span>.</p>
<p><img src="https://i.stack.imgur.com/6zwo0.png" alt="Ellipsoid and tetrahedron."></p>
|
Cesareo
| 397,348 |
<p>Calling </p>
<p><span class="math-container">$$
\begin{cases}
p_0 = (x_0,y_0,z_0)\\
p_1 = (x_1,y_1,z_1)\\
\Lambda = \mbox{diag}[\frac{1}{a^2},\frac{1}{b^2},\frac{1}{c^2}]\\
\vec n_0 = 2\Lambda\cdot p_0\\
f(p_1^*) = \frac 16 x_1^* y_1^* z_1^*\\
x_1^* = \frac{a^2}{x_0}\\
y_1^* = \frac{b^2}{y_0}\\
z_1^* = \frac{c^2}{z_0}\\
C(p_0) = p_0\cdot\Lambda\cdot p_0-1
\end{cases}
$$</span></p>
<p>Here <span class="math-container">$(x_1^*, y_1^*,z_1^*)$</span> are the intersections of the plane <span class="math-container">$(p_1-p_0)\cdot \vec n_0 = 0$</span> with the axes <span class="math-container">$(x_1,y_1,z_1)$</span>, so the problem can be stated as</p>
<p><span class="math-container">$$
\min f(p_1^*)\ \ \mbox{s.t.}\ \ (p_1-p_0)\cdot \vec n_0 = 0, \ C(p_0) = 0
$$</span></p>
<p>The Lagrangian is</p>
<p><span class="math-container">$$
L(p_0,p_1,\lambda,\mu) = f(p_1^*)+\lambda(p_1-p_0)\cdot \vec n_0+\mu\, C(p_0)
$$</span></p>
<p>and the stationary points are determined by solving </p>
<p><span class="math-container">$$
\nabla L = 0
$$</span></p>
<p>or</p>
<p><span class="math-container">$$
\left\{
\begin{array}{rcl}
\frac{2 \mu x_0+\lambda (x_1-2 x_0)}{a^2}-\frac{a^2 b^2 c^2}{6 x_0^2 y_0 z_0} &=&0\\
\frac{2 \mu y_0+\lambda (y_1-2 y_0)}{b^2}-\frac{a^2 b^2 c^2}{6 x_0 y_0^2 z_0} &=&0\\
\frac{2 \mu z_0+\lambda (z_1-2 z_0)}{c^2}-\frac{a^2 b^2 c^2}{6 x_0 y_0 z_0^2} &=&0\\
\frac{\lambda x_0}{a}&=&0 \\
\frac{\lambda y_0}{b}&=&0 \\
\frac{\lambda z_0}{c}&=&0 \\
\frac{x_0 (x_1-x_0)}{a^2}+\frac{y_0 (y_1-y_0)}{b^2}+\frac{z_0 (z_1-z_0)}{c^2}&=&0 \\
\frac{x_0^2}{a^2}+\frac{y_0^2}{b^2}+\frac{z_0^2}{c^2}&=&1 \\
\end{array}
\right.
$$</span></p>
<p>giving</p>
<p><span class="math-container">$$
\begin{array}{cccc}
x_0&y_0&z_0& f(p_1^*)\\
\frac{a}{\sqrt{3}}&\frac{b}{\sqrt{3}}&\frac{c}{\sqrt{3}}&\frac{1}{2} \sqrt{3} a b c
\end{array}
$$</span></p>
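<p>The closed form $\tfrac{\sqrt 3}{2}abc$ for the minimal tetrahedron volume can be probed numerically: parametrize the first-octant ellipsoid by angles and compare random tangent-plane volumes against the claimed minimum (a sanity check of the stationary point, not a proof):</p>

```python
import math
import random

a, b, c = 2.0, 3.0, 5.0
best = math.sqrt(3) / 2 * a * b * c

def vol(u, v):
    # Tetrahedron volume x1* y1* z1* / 6 at the surface point given by angles u, v.
    x0 = a * math.cos(u) * math.sin(v)
    y0 = b * math.sin(u) * math.sin(v)
    z0 = c * math.cos(v)
    return (a ** 2 / x0) * (b ** 2 / y0) * (c ** 2 / z0) / 6

random.seed(1)
for _ in range(20_000):
    u = random.uniform(0.01, math.pi / 2 - 0.01)
    v = random.uniform(0.01, math.pi / 2 - 0.01)
    assert vol(u, v) >= best - 1e-9

# The claimed minimizer (a/sqrt3, b/sqrt3, c/sqrt3) attains the bound:
# each intercept is a^2/(a/sqrt3) = a*sqrt3, etc.
attained = (a * math.sqrt(3)) * (b * math.sqrt(3)) * (c * math.sqrt(3)) / 6
assert abs(attained - best) < 1e-9
```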
|
1,215,926 |
<p>Can someone highlight what is the connection between the transition probability of a continuous time stochastic process $X_t$, i.e. $p(x,t\vert x_0,0)$ and the stochastic differential equation of the evolution of a trajectory of the process $X_t$, i.e. $dX_t=bdt+\sigma dB_t$. How to get from $p$ to the SDE and what assumptions does one need to make?</p>
|
tcby_wang
| 226,098 |
<p>It will be a long story. Simply speaking, the connection is given by Kolmogorov equations. For more comprehensive understanding, I refer you to the book by Daniel Stroock, "Partial differential equations for probabilists"</p>
|
1,215,926 |
<p>Can someone highlight what is the connection between the transition probability of a continuous time stochastic process $X_t$, i.e. $p(x,t\vert x_0,0)$ and the stochastic differential equation of the evolution of a trajectory of the process $X_t$, i.e. $dX_t=bdt+\sigma dB_t$. How to get from $p$ to the SDE and what assumptions does one need to make?</p>
|
stochastic
| 491,395 |
<p>From Wikipedia: </p>
<p><a href="https://en.wikipedia.org/wiki/Fokker%E2%80%93Planck_equation#One_dimension" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Fokker%E2%80%93Planck_equation#One_dimension</a></p>
<p>The stochastic differential equation
$$
dX_t = \mu(X_t, t)dt+ \sigma(X_t,t) dW_t
$$</p>
<p>has the probability density function $p(x,t)$ satisfying </p>
<p>$$
\frac{\partial}{\partial t}p(x,t)= -\frac{\partial}{\partial x}[\mu(x,t)p(x,t)]+\frac12\frac{\partial^2}{\partial x^2}[\sigma^2(x,t)p(x,t)]
$$</p>
<p>For the transition probability, you want to impose the following initial condition on the above PDE:</p>
<p>$$
p(x, 0) = \delta(x-x_0).
$$</p>
<p>Then $p(x,t)$ with this initial condition is $p(x,t|x_0,0)$.</p>
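<p>For constant coefficients $\mu,\sigma$ the transition density is the Gaussian $\mathcal N(x_0+\mu t,\ \sigma^2 t)$, and one can verify numerically (via finite differences; an illustration, not a derivation) that it satisfies the Fokker–Planck equation above:</p>

```python
import math

mu, sig, x0 = 0.7, 1.3, 0.0

def p(x, t):
    # Transition density of dX = mu dt + sig dW started at x0: N(x0 + mu t, sig^2 t).
    var = sig * sig * t
    return math.exp(-(x - x0 - mu * t) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

x, t, h = 0.4, 0.8, 1e-4
dt  = (p(x, t + h) - p(x, t - h)) / (2 * h)          # time derivative
dx  = (p(x + h, t) - p(x - h, t)) / (2 * h)          # first space derivative
dxx = (p(x + h, t) - 2 * p(x, t) + p(x - h, t)) / h ** 2   # second space derivative
assert abs(dt - (-mu * dx + 0.5 * sig ** 2 * dxx)) < 1e-4
```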
|
3,454,395 |
<p>So I’m working on the equation <span class="math-container">$z^{10} + 2z^5 + 2 = 0$</span> to find all complex solutions, and I think I managed to solve it, but I can’t find a solution manual for it, since it is a really old exam task. The thing that makes me uncomfortable with my solution is that shouldn’t I get just 10 solutions? But when I put in all <span class="math-container">$k$</span> values (<span class="math-container">$k = 0,1,2,3,4$</span>), I get 12 different angle solutions. Isn’t that wrong?</p>
<p>My answer:</p>
<h2><span class="math-container">$$ \sqrt{\mathstrut 2}^{1/5}e^{\frac{\left(\pm\frac\pi4i+2\pi k \right)}5} $$</span></h2>
<p>Sorry, I couldn't figure out how to put the 5 in the denominator of the polar formula...</p>
|
Ahmad Bazzi
| 310,385 |
<p>solve for <span class="math-container">$$w^2 + 2w + 2=0$$</span>
which gives <span class="math-container">$$w_{1,2} = -1 \pm i = \sqrt{2}\, e^{i(\pi \mp \frac{\pi}{4})}$$</span>
Now you got two equations to solve
<span class="math-container">\begin{align}
z_1^5 &= \sqrt{2} e^{i(\pi + \frac{\pi}{4})} \\
z_2^5 &= \sqrt{2} e^{i(\pi - \frac{\pi}{4})}
\end{align}</span>
which gives
<span class="math-container">\begin{align}
z_1 &= 2^{1/10}\, e^{i\frac{\pi + \frac{\pi}{4} + 2k\pi}{5}} \\
z_2 &= 2^{1/10}\, e^{i\frac{\pi - \frac{\pi}{4} + 2k\pi}{5}}
\end{align}</span>
for <span class="math-container">$k \in \lbrace 0,1,2,3,4 \rbrace$</span></p>
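<p>A quick numerical cross-check: building the ten roots from <span class="math-container">$w=-1\pm i$</span> — the modulus of each root is <span class="math-container">$|w|^{1/5}=2^{1/10}$</span> — and substituting them back into the original equation:</p>

```python
import cmath

# The two solutions of w^2 + 2w + 2 = 0 are w = -1 +/- i; each fifth root
# of w has modulus |w|^(1/5) = 2^(1/10) and one fifth of the argument.
roots = []
for w in (complex(-1, 1), complex(-1, -1)):
    r, phi = abs(w)**0.2, cmath.phase(w) / 5
    for k in range(5):
        roots.append(cmath.rect(r, phi + 2*cmath.pi*k/5))

# ten distinct roots, all satisfying z^10 + 2 z^5 + 2 = 0
assert len({round(z.real, 9) + round(z.imag, 9)*1j for z in roots}) == 10
assert all(abs(z**10 + 2*z**5 + 2) < 1e-9 for z in roots)
```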
|
1,691,825 |
<p>My textbook goes from</p>
<p>$$\frac{\left( \frac{6\ln^22x}{2x} \right)}{\left(\frac{3}{2\sqrt{x}}\right)}$$</p>
<p>to:</p>
<p>$$\frac{6\ln^22x}{3\sqrt{x}}$$</p>
<p>I don't see how this is right. Could anyone explain?</p>
|
Newb
| 98,587 |
<p>$$\frac{\frac{6\ln^2 2x}{2x}}{\frac{3}{2\sqrt{x}}} = \frac{6\ln^2 2x}{2x} \cdot \frac{2\sqrt{x}}{3} = \frac{3\ln^2 2x}{x} \cdot \frac{2\sqrt{x}}{3} = \frac{6\sqrt{x}\ln^2 2x}{3x} = \frac{6\ln^2 2x}{3} \cdot \frac{\sqrt{x}}{x}$$</p>
<p>$$\frac{6\ln^2 2x}{3} \cdot \frac{\sqrt{x}}{x} = \frac{6\ln^2 2x}{3} \cdot \frac{x^{1/2}}{x^1} = \frac{6\ln^2 2x}{3} \cdot x^{-1/2} = \frac{6\ln^2 2x}{3} \cdot \frac{1}{\sqrt{x}} = \frac{6\ln^2 2x}{3\sqrt{x}}$$</p>
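<p>A numerical spot-check that the first and last expressions agree (sample points chosen arbitrarily):</p>

```python
import math, random

def before(x):   # the original compound fraction
    return (6 * math.log(2*x)**2 / (2*x)) / (3 / (2*math.sqrt(x)))

def after(x):    # the textbook's simplified form
    return 6 * math.log(2*x)**2 / (3 * math.sqrt(x))

random.seed(1)
for _ in range(100):
    x = random.uniform(0.1, 50.0)
    assert math.isclose(before(x), after(x), rel_tol=1e-12)
```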
|
3,714,995 |
<blockquote>
<p>Using the shell method, find the volume of the solid generated by revolving the region bounded by <span class="math-container">$$y=\sqrt{x},\quad y=\frac{x-3}{2},\quad y=0$$</span> about the <span class="math-container">$x$</span>-axis.</p>
</blockquote>
<p>What I try:</p>
<p><a href="https://i.stack.imgur.com/4D4Jw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4D4Jw.jpg" alt="enter image description here"></a></p>
<p>Solving two given curves <span class="math-container">$$\sqrt{x}=\frac{x-3}{2}\Longrightarrow x^2-10x+9=0$$</span></p>
<p>We have <span class="math-container">$x=1$</span> (Invalid) and <span class="math-container">$x=9$</span> (Valid).</p>
<p>Put <span class="math-container">$x=9$</span> in <span class="math-container">$y=\sqrt{x}$</span> we have <span class="math-container">$y=3$</span></p>
<p>Now Volume of solid form by rotation about <span class="math-container">$x$</span> axis is </p>
<p><span class="math-container">$$=\int^{9}_{0}2\pi y\bigg(y^2-2y-3\bigg)dy$$</span></p>
<p>Is my volume integral right? If not, then how do I solve it? Help me, please.</p>
|
Chrystomath
| 84,081 |
<p>For any inner product space, complete or not, let <span class="math-container">$v$</span> be any vector and consider the finite-dimensional subspace generated from <span class="math-container">$v,Av,\ldots,A^{k-1}v$</span>. Then this space is <span class="math-container">$A$</span>-invariant, so <span class="math-container">$A$</span> restricted to it is a symmetric matrix. As mentioned in the question, it is straightforward to show that <span class="math-container">$A^2=I$</span> in this space, that is, <span class="math-container">$A^2v=v$</span>. Since this is true of any vector, <span class="math-container">$A^2=I$</span>.</p>
|
47,561 |
<p>The Hilbert matrix is the square matrix given by</p>
<p>$$H_{ij}=\frac{1}{i+j-1}$$</p>
<p>Wikipedia states that its inverse is given by</p>
<p>$$(H^{-1})_{ij} = (-1)^{i+j}(i+j-1) {{n+i-1}\choose{n-j}}{{n+j-1}\choose{n-i}}{{i+j-2}\choose{i-1}}^2$$</p>
<p>It follows that the entries in the inverse matrix are all integers.</p>
<p>I was wondering if there is a way to prove that its inverse is an integer matrix without using the formula above.</p>
<p>Also, how would one go about proving the explicit formula for the inverse? Wikipedia refers me to a paper by Choi, but it only includes a brief sketch of the proof.</p>
|
J. M. isn't a mathematician
| 7,934 |
<p>It's a bit circuitous, but I'd like to point out this <a href="http://dx.doi.org/10.1007/BF03167904" rel="noreferrer">paper</a> by Hitotumatu where he derives explicit expressions for the Cholesky triangle of a Hilbert matrix. From the expressions for the Cholesky triangle, you should be able to derive explicit expressions for the inverse (if $\mathbf A=\mathbf G\mathbf G^\top$, then $\mathbf A^{-1}=\mathbf G^{-\top}\mathbf G^{-1}$).</p>
<hr>
<p>Since the paper isn't that easily accessible, I'll include the main result here. If $\mathbf A$ is the Hilbert matrix, with the decomposition $\mathbf A=\mathbf L\mathbf D\mathbf L^\top$ with $\mathbf L$ unit lower triangular and $\mathbf D$ diagonal, then</p>
<p>$$\begin{align*}\ell_{j,k}&=\frac{(2k-1)\binom{2k-2}{k-1}\binom{2j-1}{j-k}}{(2j-1)\binom{2j-2}{j-1}}\\d_{k,k}&=\frac1{(2k-1)\binom{2k-2}{k-1}^2}\end{align*}$$</p>
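<p>These closed forms can be checked in exact rational arithmetic: reassembling <span class="math-container">$\mathbf L\mathbf D\mathbf L^\top$</span> from them should reproduce the Hilbert matrix entry by entry (a small verification sketch):</p>

```python
from fractions import Fraction
from math import comb

n = 6
H = [[Fraction(1, i + j - 1) for j in range(1, n + 1)] for i in range(1, n + 1)]

def ell(j, k):   # l_{j,k} for j >= k (1-based)
    return Fraction((2*k - 1) * comb(2*k - 2, k - 1) * comb(2*j - 1, j - k),
                    (2*j - 1) * comb(2*j - 2, j - 1))

def d(k):        # d_{k,k}
    return Fraction(1, (2*k - 1) * comb(2*k - 2, k - 1)**2)

# (L D L^T)_{ij} = sum over k <= min(i,j) of l_{i,k} d_k l_{j,k}
for i in range(1, n + 1):
    for j in range(1, n + 1):
        s = sum(ell(i, k) * d(k) * ell(j, k) for k in range(1, min(i, j) + 1))
        assert s == H[i - 1][j - 1]
```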
|
2,983,199 |
<p>I am looking for an example demonstrating that <span class="math-container">$\lim\inf x_n+\lim \inf y_n<\lim \inf(x_n+y_n)$</span>, but for the life of me I can't find one. Any suggestions?</p>
|
user2820579
| 141,841 |
<p>Choose <span class="math-container">$(x_n) = (1,-1,1,-1,\dots)$</span> and <span class="math-container">$(y_n) = (-1,1,-1/2,1,-1/3,\dots)$</span>. Then <span class="math-container">$(x_n+y_n) = (0,0,1/2,0,2/3,\dots)$</span>. This means <span class="math-container">$\liminf x_n = -1$</span>, <span class="math-container">$\liminf y_n = 0$</span> and <span class="math-container">$\liminf (x_n + y_n)=0$</span>. With this sequence one can also see that </p>
<p><span class="math-container">\begin{equation}
\limsup (x_n+y_n) < \limsup x_n + \limsup y_n.
\end{equation}</span></p>
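<p>These values can be illustrated with a finite tail of each sequence (infima over a long tail approximate the limit inferior):</p>

```python
N = 2000
x = [1 if n % 2 == 0 else -1 for n in range(N)]        # 1, -1, 1, -1, ...
y = [1 if n % 2 == 1 else -1 / (n // 2 + 1)            # -1, 1, -1/2, 1, ...
     for n in range(N)]
s = [a + b for a, b in zip(x, y)]                       # 0, 0, 1/2, 0, 2/3, ...

def tail_inf(seq, m=1000):
    return min(seq[m:])

assert tail_inf(x) == -1          # liminf x_n = -1
assert abs(tail_inf(y)) < 1e-2    # liminf y_n = 0
assert tail_inf(s) == 0           # liminf (x_n + y_n) = 0
```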
|
3,252,076 |
<p>I'm doing some Galois cohomology stuff (specifically, trying to calculate <span class="math-container">$H^1(\mathbb{Q}_3,E[\varphi])$</span>, where <span class="math-container">$\varphi:E\to E'$</span> is an isogeny of elliptic curves), and it involves calculating <span class="math-container">$\mathbb{Q}_3(\sqrt{-6})^{\times}/(\mathbb{Q}_3(\sqrt{-6})^{\times})^3$</span>. Here's what I've done so far.</p>
<p>Let <span class="math-container">$K=\mathbb{Q}(\sqrt{-6})$</span>. As <span class="math-container">$-6\not\equiv 1$</span> (mod 4), we have that <span class="math-container">$\mathcal{O}_K=\mathbb{Z}[\sqrt{-6}]$</span>. Let <span class="math-container">$v$</span> be the finite place of <span class="math-container">$K$</span> corresponding to the (non-principal) prime ideal <span class="math-container">$(3,\sqrt{-6})$</span>. It's fairly easy to see that <span class="math-container">$K_v=\mathbb{Q}_3(\sqrt{-6})$</span>, and that the residue field is <span class="math-container">$k_v\cong\mathcal{O}_{K}/(3,\sqrt{-6})\cong\mathbb{F}_3$</span>. Now, by Hensel's lemma, <span class="math-container">$\sqrt{-2}\in\mathbb{Q}_3$</span>, so it follows that <span class="math-container">$\sqrt{3}\in\mathbb{Q}_2(\sqrt{-6})$</span>. In <span class="math-container">$\mathcal{O}_K$</span>, <span class="math-container">$(3)$</span> decomposes as <span class="math-container">$(3,\sqrt{-6})^2$</span>, so <span class="math-container">$v(3)=2$</span>, and hence <span class="math-container">$v(\sqrt{3})=1$</span>. So we can legitimately choose <span class="math-container">$\sqrt{3}$</span> as a uniformizer of <span class="math-container">$\mathcal{O}_{K_v}$</span>. This means that every element of <span class="math-container">$\mathcal{O}_{K_v}$</span> has a unique representation
<span class="math-container">$$\sum_{n=0}^{\infty}a_n\sqrt{3}^n, \text{ where } a_n\in\{-1,0,1\}.$$</span>
The elements of <span class="math-container">$\mathcal{O}_{K_v}^{\times}$</span> are the ones where <span class="math-container">$a_0=\pm1$</span>. Now, using Hensel's lemma I went ahead and showed that
<span class="math-container">$$(\mathcal{O}_{K_v}^{\times})^3=\{\pm1+\sum_{n=3}^{\infty}a_n\sqrt{3}^n~|~a_n\in\{-1,0,1\}\}.$$</span>
But how do I find distinct representatives for <span class="math-container">$\mathcal{O}_{K_v}^{\times}/(\mathcal{O}_{K_v}^{\times})^3$</span>? Does modding out by the group above mean that I can just look up to sign and ignore everything past <span class="math-container">$\sqrt{3}^3$</span>, so that a set of representatives would be <span class="math-container">$\{1,1\pm\sqrt{3},1\pm3,1\pm\sqrt{3}\pm3\}$</span>, which has size 9 (the 2 <span class="math-container">$\pm$</span>'s are independent in the last expression)? Perhaps my working is not useful, because I've written things additively but the groups are multiplicative. Also, I could just as well have chosen <span class="math-container">$\sqrt{-6}$</span> as my uniformizer. Can I replace <span class="math-container">$\sqrt{3}$</span> with <span class="math-container">$\sqrt{-6}$</span> everywhere and still get a set of representatives for <span class="math-container">$\mathcal{O}_{K_v}^{\times}/(\mathcal{O}_{K_v}^{\times})^3$</span>? I'm very confused!</p>
<p>Of course, once <span class="math-container">$\mathcal{O}_{K_v}^{\times}/(\mathcal{O}_{K_v}^{\times})^3$</span> is determined, finding <span class="math-container">$K_v^{\times}/(K_v^{\times})^3$</span> is easy.</p>
|
Lubin
| 17,760 |
<p>First, you needn’t have worried about what parameter you used: <span class="math-container">$\sqrt{-6}$</span> is just as good as <span class="math-container">$\sqrt3$</span>. Indeed, if <span class="math-container">$\mathfrak o$</span> is a complete discrete valuation ring with fraction field <span class="math-container">$K$</span> and (additive) valuation <span class="math-container">$v:K^\times\to\Bbb Z$</span>, and if <span class="math-container">$f(X)\in\mathfrak o[X]$</span> is an Eisenstein polynomial with a root <span class="math-container">$\alpha$</span>, then <span class="math-container">$\alpha$</span> is a local parameter for the d.v.r. <span class="math-container">$\mathfrak o[\alpha]$</span>. Since both <span class="math-container">$X^2+6$</span> and <span class="math-container">$X^2-3$</span> are Eisenstein for <span class="math-container">$\Bbb Z_3$</span>, a root of either is good as a local parameter in <span class="math-container">$\Bbb Q_3(\sqrt{-6}\,)$</span>.</p>
<p>Next, it may help for you to think of <span class="math-container">$K^\times/(K^\times)^3$</span> as <span class="math-container">$K^\times\otimes(\Bbb Z/3\Bbb Z)$</span>. Whether or not, you were quite correct to see that all the contribution to <span class="math-container">$K^\times/(K^\times)^3$</span> comes from <span class="math-container">$1+\mathfrak m$</span>. Here, of course, I’m using <span class="math-container">$K=\Bbb Q_3(\sqrt{-6}\,)$</span> and <span class="math-container">$\mathfrak m=\text{max}(\Bbb Z_3[\sqrt{-6}\,])=\sqrt{-6}\cdot\Bbb Z_3[\sqrt{-6}\,]$</span>.</p>
<p>Now here’s something <em>most useful</em>: the multiplicative group <span class="math-container">$1+\mathfrak m$</span> is a <span class="math-container">$\Bbb Z_3$</span>-module, via exponentiation. That is, for <span class="math-container">$z\in\Bbb Z_3$</span> and <span class="math-container">$\alpha\in\mathfrak m$</span>, the expression <span class="math-container">$(1+\alpha)^z$</span> is well-defined, and all the rules that you know for <span class="math-container">$\Bbb Z$</span>-exponents are valid. How’s it defined? Take any <span class="math-container">$3$</span>-adically convergent sequence of positive integers with limit <span class="math-container">$z$</span>, say <span class="math-container">$n_i\to z$</span>. Then <span class="math-container">$\bigl\lbrace(1+\alpha)^{n_i}\bigr\rbrace$</span> is also <span class="math-container">$3$</span>-adically convergent. I’ll leave it to you to prove that. Of course you see that the statement is true no matter what the <span class="math-container">$3$</span>-adically complete local ring <span class="math-container">$\mathfrak o$</span> you’re dealing with. Note that the exponents are from <span class="math-container">$\Bbb Z_3$</span>, nothing bigger.</p>
<p>Well: now that you know that <span class="math-container">$1+\mathfrak m$</span> is a <span class="math-container">$\Bbb Z_3$</span>-module, what can you say about its structure? You know that it has no torsion, so it’s a free <span class="math-container">$\Bbb Z_3$</span>-module. Of what rank? I think you can convince yourself pretty easily that the rank is equal to <span class="math-container">$[K:\Bbb Q_3]=2$</span>; I’ll leave that to you, too.</p>
<p>Now it’s perfectly clear that <span class="math-container">$\bigl|(1+\mathfrak m)/(1+\mathfrak m)^3\bigr|=9$</span>, the cardinality of a two-dimensional vector space over the field <span class="math-container">$\Bbb F_3$</span>. Your enumeration of the elements is quite right, too.</p>
<p>Please don’t hesitate to ask for clarification or expansion of the above.</p>
|
164,328 |
<p>What are good introductory textbooks available on Cohomology of Groups?</p>
|
Community
| -1 |
<p>Also, if you are interested in <em>number theory</em>, <a href="http://www.mathi.uni-heidelberg.de/~schmidt/NSW2e/index-de.html" rel="nofollow">http://www.mathi.uni-heidelberg.de/~schmidt/NSW2e/index-de.html</a> (Jürgen Neukirch, Alexander Schmidt, Kay Wingberg:
Cohomology of Number Fields, second edition) and <a href="http://www.mathi.uni-heidelberg.de/~schmidt/Neukirch-en/index-de.html" rel="nofollow">http://www.mathi.uni-heidelberg.de/~schmidt/Neukirch-en/index-de.html</a> (Jürgen Neukirch: Class Field Theory-The Bonn Lectures). [These are textbooks covering (profinite) group cohomology with number theoretic applications in mind.]</p>
|
1,787,460 |
<blockquote>
<p>Suppose the n<em>th</em> pass through a manufacturing process is modelled by the linear equations <span class="math-container">$x_n=A^nx_0$</span>, where <span class="math-container">$x_0$</span> is the initial state of the system and</p>
<p><span class="math-container">$$A=\frac{1}{5} \begin{bmatrix} 3 & 2 \\ 2 & 3 \end{bmatrix}$$</span></p>
<p>Show that</p>
<p><span class="math-container">$$A^n= \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{bmatrix}+\left( \frac{1}{5} \right)^n \begin{bmatrix} \frac{1}{2} & -\frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}$$</span></p>
<p>Then, with the initial state <span class="math-container">$x_0=\begin{bmatrix} p \\ 1-p \end{bmatrix}$</span>
, calculate <span class="math-container">$\lim_{n \to \infty} x_n$</span>.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/1G6NO.png" rel="nofollow noreferrer">(the original is here)</a></p>
<p>I am not sure how to do the proof part</p>
<p>The <a href="https://i.stack.imgur.com/FI5XE.png" rel="nofollow noreferrer">hint</a> is:</p>
<blockquote>
<p>First diagonalize the matrix; eigenvalues are <span class="math-container">$1, \frac{1}{5}$</span>.</p>
</blockquote>
<p>I understand the hint and have diagonalised it, but I don't know how to change it into the given form. After diagonalisation, I just get three matrices multiplied together.</p>
|
Emilio Novati
| 187,568 |
<p>Diagonalize the matrix $A$:
$$
A=\begin{bmatrix}
\frac{3}{5}&\frac{2}{5}\\
\frac{2}{5}&\frac{3}{5}
\end{bmatrix}=
\begin{bmatrix}
-1&1\\
1&1
\end{bmatrix}
\begin{bmatrix}
\frac{1}{5}&0\\
0&1
\end{bmatrix}
\begin{bmatrix}
-\frac{1}{2}&\frac{1}{2}\\
\frac{1}{2}&\frac{1}{2}
\end{bmatrix}=PDP^{-1}
$$</p>
<p>So we have:
$$
A^n=(PDP^{-1})^n=PD^nP^{-1}=
\begin{bmatrix}
-1&1\\
1&1
\end{bmatrix}
\begin{bmatrix}
\left(\frac{1}{5}\right)^n&0\\
0&1
\end{bmatrix}
\begin{bmatrix}
-\frac{1}{2}&\frac{1}{2}\\
\frac{1}{2}&\frac{1}{2}
\end{bmatrix}=
\begin{bmatrix}
\frac{1}{2}\left(\frac{1}{5}\right)^n+\frac{1}{2}&-\frac{1}{2}\left(\frac{1}{5}\right)^n+ \frac{1}{2}\\
-\frac{1}{2}\left(\frac{1}{5}\right)^n+\frac{1}{2}&\frac{1}{2}\left(\frac{1}{5}\right)^n+\frac{1}{2}
\end{bmatrix}=
\begin{bmatrix}
\frac{1}{2}&-\frac{1}{2}\\
-\frac{1}{2}&\frac{1}{2}
\end{bmatrix}
\left(\frac{1}{5}\right)^n+
\begin{bmatrix}
\frac{1}{2}&\frac{1}{2}\\
\frac{1}{2}&\frac{1}{2}
\end{bmatrix}
$$</p>
<p>Now you can calculate the limit.</p>
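<p>A quick numerical check that the closed form matches repeated multiplication, and that <span class="math-container">$x_n\to(1/2,\,1/2)^\top$</span> for any <span class="math-container">$p$</span> (here <span class="math-container">$p=0.3$</span>, picked arbitrarily):</p>

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3/5, 2/5], [2/5, 3/5]]

def formula(n):                      # the closed form for A^n
    f = (1/5)**n
    return [[0.5 + 0.5*f, 0.5 - 0.5*f],
            [0.5 - 0.5*f, 0.5 + 0.5*f]]

P = [[1, 0], [0, 1]]
for n in range(1, 13):
    P = matmul(P, A)                 # P = A^n by repeated multiplication
    F = formula(n)
    assert all(abs(P[i][j] - F[i][j]) < 1e-12 for i in range(2) for j in range(2))

p = 0.3
F = formula(200)                     # (1/5)^200 is numerically zero
xn = [F[0][0]*p + F[0][1]*(1 - p), F[1][0]*p + F[1][1]*(1 - p)]
assert abs(xn[0] - 0.5) < 1e-12 and abs(xn[1] - 0.5) < 1e-12
```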
|
2,965,756 |
<p>I want to intuitively say that the answer is yes, but if it so happens that <span class="math-container">$|\mathbf{a}|=|\mathbf{b}|\cos(\theta)$</span>, where <span class="math-container">$\theta$</span> is the angle between the two vectors, then the equation will be satisfied without the two vectors being the same.</p>
<p>However, my friend keeps telling me that I'm wrong and that this would contradict the given result in our homework question anyway, which tells us that <span class="math-container">$\mathbf{a}\times\mathbf{b} = \mathbf{a}-\mathbf{b}$</span> and then asks us to prove <span class="math-container">$\mathbf{a}=\mathbf{b}$</span> (the equation in the question was obtained by dotting both sides with <span class="math-container">$\mathbf{a}$</span>).</p>
<p>Which one of us is wrong, and why?</p>
|
Hans Lundmark
| 1,242 |
<p>The equation <span class="math-container">$\mathbf{a} \cdot \mathbf{a} = \mathbf{a} \cdot \mathbf{b}$</span>
is equivalent to
<span class="math-container">$\mathbf{a} \cdot (\mathbf{a}-\mathbf{b})=0$</span>,
which is the same as saying that <span class="math-container">$\mathbf{a}$</span> is orthogonal to <span class="math-container">$\mathbf{a} - \mathbf{b}$</span>.
And this can of course happen without either <span class="math-container">$\mathbf{a}$</span> or <span class="math-container">$\mathbf{a} - \mathbf{b}$</span> being the zero vector.</p>
<p>So you cannot conclude that <span class="math-container">$\mathbf{a} = \mathbf{b}$</span> (even if you assume that <span class="math-container">$\mathbf{a} \neq \mathbf{0}$</span>).</p>
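<p>A concrete instance (vectors chosen for illustration): here <span class="math-container">$\mathbf{a} \cdot \mathbf{a} = \mathbf{a} \cdot \mathbf{b}$</span> even though <span class="math-container">$\mathbf{a} \neq \mathbf{b}$</span>, because <span class="math-container">$\mathbf{a}-\mathbf{b}$</span> is orthogonal to <span class="math-container">$\mathbf{a}$</span>:</p>

```python
a = (1.0, 0.0, 0.0)
b = (1.0, 1.0, 0.0)          # a - b = (0, -1, 0) is orthogonal to a

dot = lambda u, v: sum(p*q for p, q in zip(u, v))

assert dot(a, a) == dot(a, b) == 1.0
assert dot(a, tuple(p - q for p, q in zip(a, b))) == 0.0
assert a != b
```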
|
135,308 |
<p>Let $G$ be a compact Lie group and $M$ be a smooth manifold on which $G$ acts smoothly and effectively. Then the orbit space $M/G$ admits an Whitney stratification as follows
$$M/G=\bigsqcup_{H<G}(M_{(H)}/G).$$
where $H$ is a closed subgroup of $G$, and $M_{(H)}$ is the set of points in $M$ whose isotropy groups are conjugate to $H$. A known result is that if $M$ admits an orientation preserved by the $G$-action, then
$$\operatorname{codim} M_{(H)}\geq 2,$$ whenever $H$ is a nontrivial subgroup of $G$. Why? </p>
|
Peter Michor
| 26,935 |
<p>A codimension 1 stratum needs a reflection in the isotropy group; better, in the isotropy representation $G_x:T_xM\to T_xM$ for x over a codimension 1 stratum. </p>
|
2,119,971 |
<p>If $a\mid c$ and $b\mid c$, must $ab$ divide $c$? Justify your answer.</p>
<p>$a\mid c$, $c=ak$ for some integer $k$</p>
<p>$b\mid c$, $c=bu$ for some integer $u$</p>
<p>From here I wanted to try to check if there were counter examples I could use,</p>
<p>$c\ne(ab)w$ for some integer $w$</p>
<p>From here I got stuck because there is nothing I can plug into that equation so I know that I am probably missing something.</p>
|
E. Joseph
| 288,138 |
<p>This is <strong>not</strong> true.</p>
<p>Take for instance $a=b=2$ and $c=2$.</p>
<p>Then $a\mid c $ and $b\mid c$, but</p>
<p>$$ab=4\nmid 2=c.$$</p>
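<p>A brute-force search over small values shows how common such counterexamples are:</p>

```python
# triples with a | c and b | c but ab does not divide c
counterexamples = [(a, b, c)
                   for a in range(1, 10)
                   for b in range(1, 10)
                   for c in range(1, 30)
                   if c % a == 0 and c % b == 0 and c % (a*b) != 0]

assert (2, 2, 2) in counterexamples          # the example above
assert (2, 3, 6) not in counterexamples      # here ab = 6 does divide c = 6
```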
|
2,119,971 |
<p>If $a\mid c$ and $b\mid c$, must $ab$ divide $c$? Justify your answer.</p>
<p>$a\mid c$, $c=ak$ for some integer $k$</p>
<p>$b\mid c$, $c=bu$ for some integer $u$</p>
<p>From here I wanted to try to check if there were counter examples I could use,</p>
<p>$c\ne(ab)w$ for some integer $w$</p>
<p>From here I got stuck because there is nothing I can plug into that equation so I know that I am probably missing something.</p>
|
Carlos Seda
| 411,249 |
<p>No it must not, you can give a simple counter-example like <span class="math-container">$3\mid 9$</span> and <span class="math-container">$9\mid 9$</span> but obviously <span class="math-container">$$ab=9\cdot3=27 \nmid9$$</span> Hope it helps you out! </p>
|
2,319,766 |
<p>Let's say I have 100 numbers (1 to 100). I have to create various combinations of 10 numbers out of these 100 numbers such that no two combinations have more than 5 numbers in common, given that a particular number can be used at most three times.
E.g.</p>
<ol>
<li>Combination 1: 1,2,3,4,5,6,7,8,9,10</li>
<li>Combination 2: 1,2,3,4,5,11,12,13,14,15</li>
<li>Combination 3: 1,2,3,4,5,16,17,18,19,20</li>
<li>Combination 4: 6,7,8,9,10,11,12,13,14,15</li>
</ol>
<p>Here combinations 1, 2, and 3 have numbers 1 to 5 in common, whereas combinations 1 and 4 have numbers 6 to 10 in common.
I am finding it difficult to understand how to approach this problem.
What would be the starting point if I had to apply this logic to N numbers?</p>
|
layabout
| 352,341 |
<p>This is a <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)" rel="nofollow noreferrer">stars and bars problem</a>. </p>
<p><a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)#Theorem_one" rel="nofollow noreferrer">Theorem 1</a>: in this case, we require that at least one of each animal be among the 14 chosen; the count is ${n -1 \choose k - 1}$, where $n$ is the number of animals needed (14) and $k$ is the number of types of animals under consideration. There are $\binom{14 - 1}{3 - 1}$ ways of choosing 14 pets such that there is at least one cat, at least one dog, and at least one Guinea pig. </p>
<hr>
<p><a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)#Theorem_two" rel="nofollow noreferrer">Theorem 2</a> This variation applies to situation in which we need to include the possibility that one or two of the types of animals might be zero: For example we count the possibility that we end with 6 cats, 0 dogs, and 8 Guinea Pigs; and we also count the possibility that we end up with 14 cats, no dogs, no Guinea pigs. </p>
<p>In this case, we use the following formula (again see the link to Theorem 2 given above: ${n+k - 1 \choose n }$, where $n$ is the number of animals sought, and $k$ is the number of types to choose, so in this case, we have ${ 14 + 3 - 1 \choose 14} = \binom{16}{14}=\binom{16}{2} = \frac{16!}{14!\cdot 2!}$</p>
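<p>Both counts can be confirmed by brute-force enumeration of all ways to split <span class="math-container">$n=14$</span> animals among <span class="math-container">$k=3$</span> types:</p>

```python
from math import comb
from itertools import product

n, k = 14, 3
splits = [t for t in product(range(n + 1), repeat=k) if sum(t) == n]

# Theorem 2: every split, zeros allowed
assert len(splits) == comb(n + k - 1, n)                 # C(16,14) = 120

# Theorem 1: at least one of each type
at_least_one = sum(1 for t in splits if all(c >= 1 for c in t))
assert at_least_one == comb(n - 1, k - 1)                # C(13,2) = 78
```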
|
2,893,388 |
<p>My textbook is confusing me a little. Here is a worked example from my textbook:</p>
<blockquote>
<p>Line <span class="math-container">$l$</span> has the equation <span class="math-container">$\begin{pmatrix}3\\ -1\\ 0\end{pmatrix}+\lambda \begin{pmatrix}1\\ -1\\ 1\end{pmatrix}$</span> and point <span class="math-container">$A$</span> has co-ordinates <span class="math-container">$(3, 9, -2)$</span>.</p>
<p>Find the coordinates of point <span class="math-container">$B$</span> on <span class="math-container">$l$</span> so that <span class="math-container">$AB$</span> is perpendicular to <span class="math-container">$l$</span>.</p>
<p><span class="math-container">$\vec{AB\cdot }\begin{pmatrix}1\\ -1\\ 1\end{pmatrix}=0$</span></p>
<p><span class="math-container">$\vec{OB}=r=\begin{pmatrix}3+\lambda \\ -1-\lambda \\ \lambda \end{pmatrix}$</span></p>
<p><span class="math-container">$\vec{AB}=\begin{pmatrix}\lambda \\ -10-\lambda \\ \lambda +2\end{pmatrix}$</span></p>
<p><span class="math-container">$\begin{pmatrix}\lambda \\ -10-\lambda \\ \lambda +2\end{pmatrix}\cdot \begin{pmatrix}1\\ -1\\ 1\end{pmatrix}=0$</span></p>
<p><span class="math-container">$3\lambda= -12, \lambda = -4$</span></p>
<p>Coordinates of <span class="math-container">$B$</span>: <span class="math-container">$(-1, 3, -4)$</span></p>
</blockquote>
<p>The thing I don't understand is why they took the dot product of the vector <span class="math-container">$\vec{AB}$</span> and the direction vector of line <span class="math-container">$l$</span>. My textbook does mention that to check whether two vectors are perpendicular, <span class="math-container">$a\cdot b = 0$</span>, and that for lines, the dot product of their direction vectors is 0. So why did they mix both here? Didn't they use the entire line <span class="math-container">$AB$</span> and then just the direction vector of line <span class="math-container">$l$</span>? Or am I missing something as usual?</p>
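<p>For what it's worth, the textbook's numbers check out (a small verification of the worked example):</p>

```python
A = (3, 9, -2)
d = (1, -1, 1)                     # direction vector of line l
lam = -4
B = (3 + lam, -1 - lam, lam)       # point on l at lambda = -4
AB = tuple(q - p for p, q in zip(A, B))

assert B == (-1, 3, -4)
assert sum(u*v for u, v in zip(AB, d)) == 0   # AB is perpendicular to l
```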
|
Umberto P.
| 67,536 |
<p>It doesn't mean that a basis $\cal B$ of $\cal T$ is countable. It means that <em>there exists</em> a basis $\cal B$ of $\cal T$ that is countable.</p>
|
52,677 |
<p>Why did people have to find the quadratic formula? Isn't it the case that the formula cannot solve a polynomial of degree 2 and 1/2?
And, just out of curiosity, how many roots does a polynomial of degree 2 and 1/2 have, and how would one solve for them all (by a formula)?</p>
|
Jay
| 9,814 |
<p>As Ross points out in his comment polynomials are defined to have exponents that are positive integers. If you find the roots of a polynomial $p(x)$ using this definition than, in certain cases, one can find the roots by algebraic means. A polynomial of degree $n$ has $n + 1$ different coefficients, the constant term together with the coefficient of each power of $x$. For each degree of the polynomial there is a limited amount of variability. If one allows fractional exponents this is no longer the case. Consider the following list of "fractional degree polynomials."</p>
<ul>
<li>$x^{\frac{5}{2}} + 1 = 0$</li>
<li>$x^{\frac{5}{2}} + 2x^{\frac{1}{2}} + 1 = 0$</li>
<li>$x^{\frac{5}{2}} + 2x^{\frac{1}{2}} + 3x^{\frac{1}{3}} + 1 = 0$</li>
<li>$x^{\frac{5}{2}} + 2x^{\frac{1}{2}} + 3x^{\frac{1}{3}} + 4x^{\frac{1}{4}} + 1 = 0$</li>
<li>$x^{\frac{5}{2}} + 2x^{\frac{1}{2}} + 3x^{\frac{1}{3}} + 4x^{\frac{1}{4}} + 5x^{\frac{1}{5}} + 1 = 0$</li>
<li>and so forth.</li>
</ul>
<p>All of these might be considered a "polynomial" of degree $\frac{5}{2}$. There is no finite limit to the number of terms such a "polynomial" could have. This means there is no finite limit to the number of coefficients such a "polynomial" has. It seems to me that it is unlikely there is a single algebraic formula that could find the roots of all of these "polynomials." The Rolling Stones have a song about this: <em>You can't always get what you want</em>.</p>
|
4,316,771 |
<p>I'm asked to compute <span class="math-container">$$\sum_{k=-3}^{10} 2k^4$$</span>
I looked up the <a href="https://en.m.wikipedia.org/wiki/Bernoulli_number" rel="nofollow noreferrer">Bernoulli numbers</a> on Wikipedia and found a general formula for that. But my teacher has asked me to evaluate this by breaking the summation into summations whose values we already know, i.e., say <span class="math-container">$\sum k^2$</span>, arithmetic series, geometric series, etc.</p>
<p>Can it be solved in this way? Can anyone please tell me or give me some hint? It's really urgent.</p>
<p><strong>Edit :</strong> since there is a confusion, I want to clear it by saying that I'm not allowed to use the formula for <span class="math-container">$$\sum_{k=1}^n k^4$$</span>. So I need an answer that doesn't include it.</p>
|
p_square
| 920,884 |
<p><span class="math-container">$$\sum_{k=-3}^{10} 2k^4 = \sum_{n=1}^{14} 2(n-4)^4$$</span>
Now expand <span class="math-container">$2(n-4)^4$</span>, which comes out to be <span class="math-container">$$2n^4 - 32n^3 + 192n^2 - 512n + 512.$$</span> Now our sum becomes
<span class="math-container">$$\sum_{n=1}^{14} 2n^4 - 32n^3 + 192n^2 - 512n + 512$$</span>
Now apply the sum rule, which says that
<span class="math-container">$$\sum x_n + y_n = \sum x_n + \sum y_n$$</span>
so our sum would become :
<span class="math-container">$$\sum_{n=1}^{14} 2n^4 - \sum_{n=1}^{14} 32n^3 + \sum_{n=1}^{14} 192n^2 - \sum_{n=1}^{14} 512n + 512$$</span>
Now I think you can take it from here</p>
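<p>Taking it from there, the re-indexing, the expansion, and the direct sum all agree (a quick check):</p>

```python
direct   = sum(2*k**4 for k in range(-3, 11))
shifted  = sum(2*(n - 4)**4 for n in range(1, 15))
expanded = sum(2*n**4 - 32*n**3 + 192*n**2 - 512*n + 512 for n in range(1, 15))

assert direct == shifted == expanded == 50862
```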
|
4,402,613 |
<p>I know that the limits of integration in spherical coordinates are these, but I can't find the reason why that 2 appears, and I can't find a way to go further and evaluate <span class="math-container">$r$</span> at <span class="math-container">$a \sin \theta$</span>:</p>
<p><span class="math-container">$$V = \int_0^{2\pi}\int_0^{a\sin{\theta}}\int_{-\sqrt{a^{2} - r^{2}}}^{\sqrt{a^2-r^{2}}}r\mathrm{d}z\mathrm{d}r\mathrm{d}\theta = 2\int_0^{2\pi}\int_0^{a\sin{\theta}}r\sqrt{a^2-r^2}\mathrm{d}r\mathrm{d}\theta$$</span></p>
|
Átila Correia
| 953,679 |
<p>The number two appears because you have to consider the integral corresponding to the upper and lower parts of the sphere, which are equal due to the symmetry of the problem.</p>
<p>As far as I have understood, you are not applying a spherical change of coordinates. Instead, you are applying a cylindrical change of coordinates.</p>
|
398,409 |
<p>I was asked to help someone with this problem, and I don't really know the answer myself. But I thought I'd still try.</p>
<p>$$\lim_{t \to 10} \frac{t^2 - 100}{t+1} \cos\left( \frac{1}{10-t} \right)+ 100$$</p>
<p>The problem lies with the cos term. What can I do with the cos term to handle the division by zero?</p>
<p>I found the answer to be $100$ (via Google), but I do not know what they did to the $\cos$ term. Is that even the answer?</p>
<p>Thanks!</p>
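<p>For what it's worth, the squeeze theorem settles it: since <span class="math-container">$|\cos u|\le 1$</span>, the whole first term is bounded in absolute value by <span class="math-container">$|t^2-100|/(t+1)\to 0$</span> as <span class="math-container">$t\to 10$</span>, so the limit is <span class="math-container">$100$</span>. A numerical check:</p>

```python
import math

def f(t):
    return (t*t - 100) / (t + 1) * math.cos(1 / (10 - t)) + 100

# |f(t) - 100| <= |t^2 - 100| / (t + 1), which tends to 0 as t -> 10
for eps in (1e-2, 1e-4, 1e-6):
    for t in (10 - eps, 10 + eps):
        bound = abs(t*t - 100) / (t + 1)
        assert abs(f(t) - 100) <= bound + 1e-12
        assert bound < 3 * eps
```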
|
Cantlog
| 66,760 |
<p>A necessary condition is <span class="math-container">$m\mid n$</span> and <span class="math-container">$u$</span> is an <span class="math-container">$m$</span>-th power in <span class="math-container">$\mathbb F_p$</span>. So the real question is:</p>
<blockquote>
<p>Let <span class="math-container">$a\in \mathbb Z_p$</span>. When is <span class="math-container">$1+pa$</span> an <span class="math-container">$m$</span>-th power in <span class="math-container">$\mathbb Z_p$</span> ?</p>
</blockquote>
<p><b>Edit</b> The suggestion of Jyrki in the comments makes me think to the general solution. Let <span class="math-container">$d=v_p(m)$</span>. Then:</p>
<p><b> Claim : </b></p>
<blockquote>
<p>If <span class="math-container">$p>2$</span>, then <span class="math-container">$1+pa$</span> is an <span class="math-container">$m$</span>-th power if and only if <span class="math-container">$v_p(a)\ge d$</span>.</p>
<p>If <span class="math-container">$p=2$</span>, then <span class="math-container">$1+pa$</span> is an <span class="math-container">$m$</span>-th power if and only if <span class="math-container">$v_p(a)\ge d+1$</span>.</p>
</blockquote>
<p>Let <span class="math-container">$f_m : 1+p\mathbb Z_p\to 1+p\mathbb Z_p$</span> be the <span class="math-container">$m$</span>-th power map. This is a multiplicative group homomorphism.</p>
<p><b>Lemma</b> Let <span class="math-container">$r\ge 1$</span>. We have
<span class="math-container">$$f_m(1+p^r\mathbb Z_p)=1+p^{r+\epsilon}\mathbb Z_p$$</span>
with</p>
<ol>
<li><p><span class="math-container">$\epsilon=0$</span> if <span class="math-container">$v_p(m)=0$</span>;</p>
</li>
<li><p><span class="math-container">$\epsilon=1$</span> if <span class="math-container">$m=p>2$</span>, or if <span class="math-container">$m=p=2$</span> and <span class="math-container">$r\ge 2$</span>;</p>
</li>
<li><p><span class="math-container">$\epsilon=2$</span> if <span class="math-container">$m=p=2$</span> and <span class="math-container">$r=1$</span>.</p>
</li>
</ol>
<p>Proof. That <span class="math-container">$f_m(1+p^r\mathbb Z_p)\subseteq 1+p^{r+\epsilon}\mathbb Z_p$</span> is clear. For the converse, fix an <span class="math-container">$a\in \mathbb Z_p$</span>. We want to solve an equation
<span class="math-container">$$(1+p^rx)^m=1+p^{r+\epsilon}a$$</span>
with <span class="math-container">$x\in \mathbb Z_p$</span>. In the cases 1 and 2, it is equivalent to
<span class="math-container">$$ p^{rm-r-\epsilon}x^m+mp^{rm-2r-\epsilon}x^{m-1}+\cdots+mp^{-\epsilon}x-a=0. $$</span>
The above equation modulo <span class="math-container">$p$</span> has degree one, hence (as polynomial in <span class="math-container">$\mathbb F_p[X]$</span>) has a simple root in <span class="math-container">$\mathbb F_p$</span>, so by Hensel's lemma it has a (unique) solution in <span class="math-container">$\mathbb Z_p$</span>.</p>
<p>The case 3 is treated in <a href="https://math.stackexchange.com/questions/473595/">this question</a>.</p>
<p><i>Proof of the claim:</i> Decompose <span class="math-container">$m=p^dq$</span> with <span class="math-container">$q$</span> prime to <span class="math-container">$p$</span>. Then <span class="math-container">$f_m=f_{p^r}\circ f_q$</span>. By the case 1 above, we are reduced to the case <span class="math-container">$m=p^d$</span>. We conclude by easy induction on <span class="math-container">$d$</span> using cases 2 and 3.</p>
<p><b>Remark</b> This question can certainly be solved with more involved material such as formal groups and isogenies.</p>
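<p>The claim can be spot-checked by brute force in the smallest interesting case <span class="math-container">$p=m=3$</span> (so <span class="math-container">$d=1$</span>): then <span class="math-container">$1+3a$</span> should be a cube in <span class="math-container">$\Bbb Z_3$</span> exactly when <span class="math-container">$3\mid a$</span>. Being a cube in <span class="math-container">$\Bbb Z_3$</span> can be tested modulo <span class="math-container">$3^4=81$</span>, since a cube root modulo <span class="math-container">$81$</span> lifts by Hensel's lemma (<span class="math-container">$v_3(3x^2)=1$</span> for a unit <span class="math-container">$x$</span>):</p>

```python
# cubes of 3-adic units, reduced modulo 81
C = {pow(v, 3, 81) for v in range(81) if v % 3 != 0}

# Claim for p = m = 3, d = 1:  1 + 3a is a cube in Z_3  <=>  3 | a
for a in range(27):               # (1 + 3a) mod 81 only depends on a mod 27
    u = (1 + 3*a) % 81
    assert (u in C) == (a % 3 == 0)
```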
|
1,540,412 |
<p>Let x be a (right) eigenvector of A corresponding to an eigenvalue λ and let y be a left eigenvector of A corresponding to a different eigenvalue µ, where λ $\neq$ µ. Show that x∗y = 0. Hint: Ax = λx and y'A = µy'.</p>
|
YVLM
| 292,316 |
<ul>
<li>Step 1) <span class="math-container">$Ax=λx$</span></li>
<li>Step 2) <span class="math-container">$y'Ax=λy'x$</span></li>
<li>Step 3) <span class="math-container">$y'Ax-λy'x=0$</span></li>
<li>Step 4) <span class="math-container">$(y'A-λy')x=0$</span></li>
<li>Step 5) <span class="math-container">$(\mu y'-λy')x=0$</span></li>
<li>Step 6) <span class="math-container">$(\mu-λ)y'x=0$</span></li>
<li>Since
<span class="math-container">$\mu \neq λ \implies \mu-λ \neq 0$</span>,</li>
<li>it follows that
<span class="math-container">$y'x=0$</span>.</li>
</ul>
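<p>As a numerical sanity check of the steps above, here is a plain-Python sketch. The matrix and eigenvectors are my own illustrative example, not from the question: $A$ has right eigenvector $x$ for $\lambda=2$ and left eigenvector $y$ for $\mu=3$, and indeed $y'x=0$.</p>

```python
# Right eigenvector x of A for lambda = 2, left eigenvector y for mu = 3.
A = [[2, 0],
     [1, 3]]
x = [1, -1]          # A x = 2 x
y = [1, 1]           # y' A = 3 y'

Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
yA = [sum(y[i] * A[i][j] for i in range(2)) for j in range(2)]

assert Ax == [2 * xi for xi in x]      # x is a right eigenvector
assert yA == [3 * yi for yi in y]      # y is a left eigenvector

dot = sum(yi * xi for yi, xi in zip(y, x))
print(dot)  # eigenvectors for distinct eigenvalues are orthogonal
```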
|
2,605,208 |
<p>Decide whether the given set of vectors is linearly independent in the indicated vector space:</p>
<p>$\{ x_1, x_1 +x_2, x_1 +x_2 +x_3, ..., x_1+\cdots+x_n\} $</p>
<p>if $\{x_1, x_2, x_3, ..., x_n\}$ is linearly
independent, in some vector space $V$.</p>
<hr>
<p>If $n=4:$</p>
<p>$x_1 - (x_1+x_2) + (x_1+x_2+x_3) - (x_1+x_2+x_3+x_4) = -x_4.$</p>
<p>So, if $n$ is even then it's linearly independent right?</p>
<hr>
<p>If $n=3:$ </p>
<p>$x_1 - (x_1+x_2) + (x_1+x_2+x_3) = x_1 + x_3.$</p>
<p>What about this situation when $n$ is odd? What can we state from $x_1+x_3$?</p>
|
Qiaochu Yuan
| 232 |
<p>The category of principal bundles is a groupoid, so the only way it could have a terminal object is if it were contractible, meaning both that</p>
<ol>
<li>There is only one isomorphism class of principal bundle, and</li>
<li>Its automorphism group is trivial.</li>
</ol>
<p>However, you can verify that the automorphism group of the trivial $G$-bundle is the group of continuous maps $X \to G$, which is nontrivial as long as $G$ is nontrivial and $X$ is not empty. So the only way this automorphism group could be trivial is if either $X$ is empty or $G$ is trivial. </p>
|
186,182 |
<p>Suppose, for the sake of keeping things as simple as possible, that I have the following equation that I wish to simplify in Mathematica:</p>
<p><span class="math-container">$y = x x$</span></p>
<p>But suppose further that I also have a restriction, not directly on <span class="math-container">$x$</span>, but on what values <span class="math-container">$y$</span> can have. More specifically, let's suppose <span class="math-container">$y \leq 9$</span>.</p>
<p>What Mathematica expression, if any, will allow me to simplify the above expression for <span class="math-container">$y$</span> and get an output that not only simplifies, but also gives me the range of values that <span class="math-container">$x$</span> can take to satisfy my restriction on <span class="math-container">$y$</span>?</p>
<p>So, just to be clear, the function or script I am looking for will output something like the following when dealing with the above:</p>
<p>y = <span class="math-container">$x^2$</span> and <span class="math-container">$-3\leq x\leq3$</span></p>
|
Alan
| 19,530 |
<p>Here are two approaches. The first requires perhaps more tolerance for "noisy" notation. Note that I did not use a vector argument. If you must, the notation will be correspondingly "noisier".</p>
<pre><code>Clear[f, x, y]
Dt[f[x, y]]
Grad[f[x, y], {x, y}].{dx, dy}
</code></pre>
|
266,834 |
<p>I know the values for a function v[x,y] on an irregular grid of (x,y) points. Call the table storing all these points xyvtriples. Because of the irregular grid, the Mathematica function Interpolation only works as</p>
<p><code>interpolatedvfunc = Interpolation[xyvtriples, InterpolationOrder -> 1];</code></p>
<p>But what I really need are the partial derivatives of interpolatedvfunc with respect to each argument, and for those partials to be continuous, which won't happen due to the edges produced by InterpolationOrder -> 1.</p>
<p>Is there any way around this? I can make a very fine grid of (x,y) points to (I hope) counter any problems with forcing a spline like interpolation if I can somehow force this to happen.</p>
<p>Thanks.</p>
|
bmf
| 85,558 |
<pre><code>Part[CoefficientArrays[
1/4 (-1 - Subscript[φ, 49, 50] - Subscript[φ, 50, 49]) +
Subscript[φ, 50, 50]], 1]
</code></pre>
<blockquote>
<p>-(1/4)</p>
</blockquote>
|
3,774,400 |
<p>Just like the title says, I don't know how to write an example matrix here to look like a matrix. If this makes sense, <span class="math-container">$A$</span> can be <span class="math-container">$[1 0 0;0 1 0;0 1 0]$</span> (like in MATLAB syntax); then if we find the determinant of <span class="math-container">$A-\lambda I$</span> we get <span class="math-container">$0$</span> for every <span class="math-container">$\lambda$</span>. In MATLAB it says the eigenvalues are <span class="math-container">$0,1,1$</span>.</p>
|
Ennar
| 122,131 |
<p>Let <span class="math-container">$A = (a_{ij}),\ i,j=1,\ldots,n$</span>. From the definition of determinant, it's clear that <span class="math-container">$\det(A-\lambda I)$</span> is a polynomial in <span class="math-container">$\lambda$</span>. Moreover, it is of degree <span class="math-container">$n$</span>:</p>
<p><span class="math-container">\begin{align}\det(A-\lambda I) &= (a_{11} - \lambda)\ldots(a_{nn}-\lambda) + \text{lower degree terms} \\ &= (-1)^n\lambda^n + \text{lower degree terms}
\end{align}</span></p>
<p>In particular, it is not a zero polynomial and it has at most <span class="math-container">$n$</span> roots in the field you are interested in.</p>
<p>As a side note, in the case of a finite field of characteristic <span class="math-container">$p$</span>, it is possible that the determinant vanishes for all values of the field, without it being a zero polynomial, but it's not possible for the fields of characteristic <span class="math-container">$0$</span> (fields containing <span class="math-container">$\mathbb Q$</span>).</p>
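<p>For the concrete $3\times3$ matrix from the question one can confirm numerically that $\det(A-\lambda I)$ is not identically zero; it equals $-\lambda(1-\lambda)^2$, vanishing exactly at the eigenvalues $0,1,1$. A quick plain-Python sketch:</p>

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 0, 0],
     [0, 1, 0],
     [0, 1, 0]]

def charpoly(t):
    """det(A - t I) evaluated at t; for this A it equals -t (1 - t)^2."""
    M = [[A[i][j] - (t if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(M)

# Vanishes exactly at the eigenvalues 0, 1, 1 ...
assert charpoly(0) == 0 and charpoly(1) == 0
# ... but not everywhere: it agrees with -t(1-t)^2 at other points.
for t in (2, 3, -1):
    assert charpoly(t) == -t * (1 - t) ** 2
```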
|
324,219 |
<p>Given an urn with $M$ unique balls, how many times do I need to draw with replacement before the probability that I have seen each ball at least once is greater than $\epsilon$?</p>
|
Christian Blatter
| 1,303 |
<p>Any sequence of $n$ drawings results in a word $w$ of length $n$ over the alphabet $[M]$. There are $M^n$ such words.</p>
<p>How many of these words $w$ are <em>admissible</em>, meaning that $w$ contains each letter $\ell\in [M]$ at least once? Any admissible word can be fabricated in the following way: Choose a partition of $[n]$ (the set of positions for the $n$ letters to be written) into $M$ nonempty blocks and assign to each block one of the letters $1$, $2$, $\ldots$, $M$. There are
$$\left\{\matrix{n\cr M\cr}\right\}\tag{1}$$
ways to choose this partition, where $(1)$ denotes a so-called <a href="http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow">Stirling number of the second kind</a>, and there are $M!$ ways to assign a letter $\ell$ to each of the $M$ blocks. It follows that there are $M!\left\{\matrix{n\cr M\cr}\right\}$ admissible words. Therefore the probability $p_n$ that a random word of length $n$ is admissible is given by
$$p_n=\left\{\matrix{n\cr M\cr}\right\}{M!\over M^n}\ .$$
In order to determine the minimal $n$ for which $p_n>\epsilon$ one would need estimates for the Stirling numbers $\left\{\matrix{n\cr M\cr}\right\}$.</p>
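<p>In practice the minimal $n$ can also be computed directly, since by inclusion–exclusion $M!\left\{\matrix{n\cr M\cr}\right\}/M^n=\sum_{k=0}^{M}(-1)^k\binom{M}{k}\left(1-k/M\right)^n$. A short Python sketch (the example values $M=6$, $\epsilon=0.95$ in the final line are my own):</p>

```python
from math import comb

def p_admissible(n, M):
    """Probability that n draws with replacement from M balls see every ball."""
    return sum((-1) ** k * comb(M, k) * (1 - k / M) ** n for k in range(M + 1))

def min_draws(M, eps):
    """Smallest n with P(all M balls seen in n draws) > eps."""
    n = M  # fewer than M draws can never succeed
    while p_admissible(n, M) <= eps:
        n += 1
    return n

assert p_admissible(2, 2) == 0.5   # length-2 words over {1,2}: "12","21" out of 4
print(min_draws(6, 0.95))
```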
|
4,634,797 |
<p>I am currently trying to show that the sequence of functions defined by <span class="math-container">$f_n(x) = \frac{x}{1 + x^n}$</span> converges pointwise on <span class="math-container">$U = [0, \infty)$</span>. I have found the limits for the three specific cases and they are:
<span class="math-container">\begin{equation*}
\lim_{n \to \infty} f_n(x) = \begin{cases}
x & \text{if $x \in [0, 1)$} \\
\frac{1}{2} & \text{if $x = 1$} \\
0 & \text{if $x \in (1, \infty)$}
\end{cases}
\end{equation*}</span>
I have started the proof for when <span class="math-container">$x \in [0, 1)$</span> but I am not sure how to proceed. Here is what I have:</p>
<p>Let <span class="math-container">$\epsilon > 0$</span>. We want to show that <span class="math-container">$(f_n(x))_n$</span> converges to <span class="math-container">$x$</span> for <span class="math-container">$x \in [0, 1)$</span>, i.e., that there exists an <span class="math-container">$N \in \mathbb{N}$</span> such that
<span class="math-container">\begin{equation*}
\left|\frac{x}{1 + x^n} - x\right| < \epsilon
\end{equation*}</span>
for all <span class="math-container">$n \geq N$</span>. Then
<span class="math-container">\begin{equation*}
\left|\frac{x}{1 + x^n} - x\right| = \frac{x^{n + 1}}{1 + x^n} < \epsilon
\end{equation*}</span>
This is the part where I am lost, but I know that I am suppose to find an <span class="math-container">$N = N(x, \epsilon)$</span>, however, where it confuses me is where we have the <span class="math-container">$n$</span> as the exponent and it is a bit unclear to me how to proceed from here. Would like some assistance for this part.</p>
|
geetha290krm
| 1,064,504 |
<p>I will assume that <span class="math-container">$\epsilon <1$</span> and let you see what happens if <span class="math-container">$\epsilon \geq 1$</span>.</p>
<p><span class="math-container">$\frac{x^{n + 1}}{1 + x^n} < \epsilon$</span> if <span class="math-container">$\frac{x^{n }}{1 + x^n} < \epsilon$</span> which is true if <span class="math-container">$x^{n} <\frac {\epsilon} {1-\epsilon}$</span>. This is true if <span class="math-container">$n >\ln (\frac {\epsilon} {1-\epsilon})/\ln x$</span>. [Note that <span class="math-container">$\ln x <0$</span>. When you divide an inequality by a negative number the inequality sign gets reversed].</p>
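<p>A numeric sanity check of this choice of $N(x,\epsilon)$ (the sample values of $x$ and $\epsilon$ below are my own):</p>

```python
import math

def f(n, x):
    return x / (1 + x ** n)

def N_of(x, eps):
    """N(x, eps) from above: any n > ln(eps/(1-eps)) / ln(x) gives |f_n(x) - x| < eps."""
    return math.log(eps / (1 - eps)) / math.log(x)

for x in (0.3, 0.9, 0.99):
    for eps in (0.1, 0.01):          # eps < 1/2, so eps/(1-eps) < 1 and N_of > 0
        n = math.floor(N_of(x, eps)) + 1   # any integer n > N works
        assert abs(f(n, x) - x) < eps
```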
|
1,752,506 |
<p>Question: $ \sqrt{x^2 + 1} + \frac{8}{\sqrt{x^2 + 1}} = \sqrt{x^2 + 9}$</p>
<p>My solution: $(x^2 + 1) + 8 = \sqrt{x^2 + 9} \sqrt{x^2 + 1}$</p>
<p>$=> (x^2 + 9) = \sqrt{x^2 + 9} \sqrt{x^2 + 1}$</p>
<p>$=> (x^2 + 9) - \sqrt{x^2 + 9} \sqrt{x^2 + 1} = 0$</p>
<p>$=> \sqrt{x^2 + 9} (\sqrt{x^2 + 9} - \sqrt{x^2 + 1}) = 0$</p>
<p>So, either $\sqrt{x^2 + 9} = 0$ or $(\sqrt{x^2 + 9} - \sqrt{x^2 + 1}) = 0$</p>
<p>From the first expression, I get $x = \pm 3 i$ and from the second expression, I get nothing.</p>
<p>Now, notice how in the 2nd step, I could've divided both the sides by $\sqrt{x^2 + 9}$, but I didn't because I learned here that we must never do that and that we should always factor: <a href="https://math.stackexchange.com/questions/1751410/how-to-square-both-the-sides-of-an-equation#comment3574205_1751435">Why one should never divide by an expression that contains a variable.</a></p>
<p>So, my question is: is the solution above correct? Would it have been any harm had I divided both the sides by $\sqrt{x^2 + 9}$?</p>
|
Murtuza Vadharia
| 132,945 |
<p>Try a different method.</p>
<p>Take $\sqrt{x^2 +1}$ to the RHS, then rationalise $\sqrt{x^2+9}$ - $\sqrt{x^2 +1}$ by its conjugate.</p>
<p>So your next step would be:</p>
<p>$\dfrac{8}{\sqrt{x^2+1}}$[$\sqrt{x^2+9}$ + $\sqrt{x^2 +1}$] = 8</p>
<p>So the 8 cancels, and the next step is as follows:
$\dfrac{\sqrt{x^2+9} + \sqrt{x^2 +1}}{\sqrt{x^2+1}}$ = 1,
and so we get $\dfrac{\sqrt{x^2+9}}{\sqrt{x^2+1}}$ + 1 = 1.</p>
<p>And finally
$\dfrac{\sqrt{x^2+9}}{\sqrt{x^2+1}}$ = 0.
Only the numerator can become zero, so $\sqrt{x^2+9}=0$, hence $x=\pm3i$.</p>
|
419,802 |
<p>Let <span class="math-container">$A$</span> be a C<span class="math-container">$^*$</span>-algebra with closed two-sided ideal <span class="math-container">$I$</span>. Set <span class="math-container">$B=A/I$</span> and let <span class="math-container">$\pi:A\to B$</span> be the quotient map. Suppose that <span class="math-container">$b\in B$</span> is quasi-nilpotent. Does there exist quasi-nilpotent <span class="math-container">$a\in A$</span> such that <span class="math-container">$\pi(a)=b$</span>?</p>
|
Konstantinos Kanakoglou
| 85,967 |
<p>Far from being a specialist on the topic, section 4 of the article <a href="https://kconrad.math.uconn.edu/blurbs/grouptheory/SL(2,Z).pdf" rel="nofollow noreferrer"><span class="math-container">$\text{SL}_2(\mathbb{Z})$</span></a>, by K. Conrad discusses non-congruence subgroups.</p>
|
419,802 |
<p>Let <span class="math-container">$A$</span> be a C<span class="math-container">$^*$</span>-algebra with closed two-sided ideal <span class="math-container">$I$</span>. Set <span class="math-container">$B=A/I$</span> and let <span class="math-container">$\pi:A\to B$</span> be the quotient map. Suppose that <span class="math-container">$b\in B$</span> is quasi-nilpotent. Does there exist quasi-nilpotent <span class="math-container">$a\in A$</span> such that <span class="math-container">$\pi(a)=b$</span>?</p>
|
Will Sawin
| 18,060 |
<p><span class="math-container">$\DeclareMathOperator\SL{SL}\DeclareMathOperator\PSL{PSL}\DeclareMathOperator\GL{GL}$</span>The reason that there are uncountably many irreducible representations is not so bad, and gets at an important point: You shouldn't think of irreducible representations of a group like <span class="math-container">$\SL_2(\mathbb Z)$</span> individually, but rather as points in a space of representations, i.e. objects parameterized by a geometric space.</p>
<p>This is simplest to see for representations sending <span class="math-container">$-I \in \SL_2(\mathbb Z)$</span> to the identity, i.e. representations of <span class="math-container">$\PSL_2(\mathbb Z)$</span>. By the <a href="https://mathoverflow.net/questions/419795/representation-theory-of-textsl2-mathbbz#comment1078074_419795">presentation YCor gave</a> <span class="math-container">$\PSL_2(\mathbb Z) = \langle x,y \mid x^2 =y^3 =1 \rangle$</span>, such a representation is given by a matrix <span class="math-container">$X$</span> satisfying <span class="math-container">$X^2=1$</span> and a matrix <span class="math-container">$Y$</span> satisfying <span class="math-container">$Y^3=1$</span>, and representations up to isomorphism are given by pairs of matrices up to conjugation.</p>
<p>For <span class="math-container">$n$</span>-dimensional representations, say <span class="math-container">$n$</span> a multiple of <span class="math-container">$6$</span>, it's not so hard to check that the space of matrices <span class="math-container">$X \in \GL_n(\mathbb C)$</span> with <span class="math-container">$X^2=1$</span> has dimension <span class="math-container">$n^2/2$</span> (over <span class="math-container">$\mathbb C$</span>), and the space of matrices <span class="math-container">$Y \in \GL_n(\mathbb C)$</span> with <span class="math-container">$Y^3= 1$</span> has dimension <span class="math-container">$2n^2/3$</span>, so the space of pairs has dimension <span class="math-container">$n^2/2 + 2n^2/3$</span>, isomorphism classes of <span class="math-container">$n$</span>-dimensional representations have dimension <span class="math-container">$n^2/2 +2n^2/3 - (n^2-1) = n^2/6+1$</span>.</p>
<p>One can check that irreducible representations are an open subset, and thus that the space of irreducible representations has the same dimension.</p>
<p>So certainly there are uncountably many, because they're parameterized by a positive-dimensional manifold!</p>
<p>However, it's clear from this analysis that <em>this space</em> should be one of the primary objects of study in the representation theory here, as it is for representations of surface groups and in some other cases of interest.</p>
|
419,802 |
<p>Let <span class="math-container">$A$</span> be a C<span class="math-container">$^*$</span>-algebra with closed two-sided ideal <span class="math-container">$I$</span>. Set <span class="math-container">$B=A/I$</span> and let <span class="math-container">$\pi:A\to B$</span> be the quotient map. Suppose that <span class="math-container">$b\in B$</span> is quasi-nilpotent. Does there exist quasi-nilpotent <span class="math-container">$a\in A$</span> such that <span class="math-container">$\pi(a)=b$</span>?</p>
|
Andy Putman
| 317 |
<p><span class="math-container">$\DeclareMathOperator\SL{SL}$</span>The issue is that <span class="math-container">$\SL(2,\mathbb{Z})$</span> is very close to a free group, so it is not hard to map it to other groups, and in particular to produce lots of varied representations of it.</p>
<p>You might be interested to know that things are quite different for <span class="math-container">$\SL(n,\mathbb{Z})$</span> when <span class="math-container">$n$</span> is at least <span class="math-container">$3$</span>. I wrote a note <a href="https://www3.nd.edu/%7Eandyp/notes/RepTheorySLnZ.pdf" rel="noreferrer">The representation theory of <span class="math-container">$\operatorname{SL}_n(\mathbb Z)$</span></a> describing results of Margulis and Lubotzky giving a complete and fairly simple description of its representations.</p>
|
2,482,564 |
<p>Let $(\Omega,\Sigma)$ be a given measurable space and let $f$ be a $\Sigma$-measurable function. If $h:[-\infty,\infty]\rightarrow[-\infty,\infty]$ is a continuous function, then the composite function $hf$ is measurable.</p>
<p>This can be proven easily if every continuous function is measurable as I just need to show that the inverse image of $hf$ is measurable for a given measurable set say $E$.</p>
<p>i.e. $(hf)^{-1}(E)=f^{-1}(h^{-1}(E))\in\Sigma$</p>
<p>But is a continuous function measurable? Or this can be proven in another way?</p>
|
H.Vogel
| 475,363 |
<p>As the composition $hf$ needs to be well-defined, I assume the following domain and target sets of $f$ and $h$: $f:\Omega \rightarrow \mathbb{R}$ and $h: \mathbb{R}\rightarrow \mathbb{R}$ (as $(-\infty,\infty)=\mathbb{R}$).<br>
Thus we have a measurable function $f$ from $(\Omega, \Sigma) \rightarrow (\mathbb{R},\mathcal{B}(\mathbb{R}))$ and a continuous function $h$ from $(\mathbb{R},\mathcal{B}(\mathbb{R})) \rightarrow (\mathbb{R},\mathcal{B}(\mathbb{R}))$, where I assumed suitable $\sigma$-algebras, which I hope are what you are looking for.<br>
<strong>Statement</strong>: The composition $hf:(\Omega,\Sigma)\rightarrow (\mathbb{R},\mathcal{B}(\mathbb{R}))$ is measurable.<br>
The proof comes from the fact that since $h$ is a continuous function from $\mathbb{R}\rightarrow \mathbb{R}$, it is a measurable function from $(\mathbb{R},\mathcal{B}(\mathbb{R}))\rightarrow (\mathbb{R},\mathcal{B}(\mathbb{R}))$.<br>
<strong>Detailed proof</strong>:<br>
As $\mathcal{B}(\mathbb{R})$ is generated by open subsets of $\mathbb{R}$, it is sufficient to check measurability of $hf$ on an open subset $A\subset \mathbb{R}$, i.e. we have to check that $(hf)^{-1}(A)\in \Sigma$ for every open set $A\subset \mathbb{R}$. As $h$ is continuous and $A$ is open, $h^{-1}(A)$ is open, i.e. $h^{-1}(A)\in \mathcal{B}(\mathbb{R})$. As $f$ is measurable, $f^{-1}(h^{-1}(A))\in \Sigma$, thus $(hf)^{-1}(A)= f^{-1}(h^{-1}(A))\in \Sigma$ and we conclude. </p>
|
2,420,435 |
<p>I'm learning logic for computer science and came across the question:</p>
<blockquote>
<p>If $\ n$ is a real number, $\frac{1}{n}$ is the reciprocal of $\ n$. Prove that all
numbers have a unique reciprocal.</p>
</blockquote>
<p>I came up with the following method, but it seems so simple that I doubt it'll work:</p>
<p>$\frac{1}{n}=p$</p>
<p>We know that $n \in \mathbb R$, and we assume that $p$ is the unique reciprocal, but will this work when proving <em>all</em> real numbers have unique reciprocals? </p>
|
Raffaele
| 83,382 |
<p>Your conclusion is false: not every $x\in\mathbb{R}$ has a reciprocal; $0$ has no reciprocal. </p>
<p>To show uniqueness: </p>
<p>If $a\ne 0$ then $a^{-1}a=aa^{-1}=1$. If $a$ had another reciprocal $b$ such that $ab=1$, then left-multiplying by $a^{-1}$ gives $a^{-1}ab=a^{-1}$, hence $b=a^{-1}$. The same procedure works for $ba=1$: right-multiply by $a^{-1}$, etc.</p>
<p>I took associativity for granted.</p>
|
20,463 |
<p>I'm new here and I don't know how things work here and don't have any knowledge about commenting or answering questions. How do I get to know if my question has been commented or answered?</p>
|
Ivo Terek
| 118,056 |
<p>Check your <em>inbox</em>:</p>
<p><img src="https://i.stack.imgur.com/L8ibr.png" alt="enter image description here"></p>
<p>Like this:</p>
<p><img src="https://i.stack.imgur.com/EGY2E.png" alt="enter image description here"></p>
<p>The number of unread messages will show up in red:</p>
<p><img src="https://i.stack.imgur.com/Q6KaI.png" alt="enter image description here"> </p>
<p>(thanks to Daniel Fischer for providing the last image!)</p>
|
2,361,275 |
<p>$$\neg(A\land B)\land(B\lor\neg C)\Leftrightarrow(\neg A\land B)\lor(\neg B\land\neg C)$$
Checked their truth tables are the same, but can you show the steps how to transform one to the other?</p>
|
Bram28
| 256,001 |
<p>As an addendum to @Koepi's Answer:</p>
<p>To go from $(\neg A \land B) \lor (\neg A \land \neg C) \lor (\neg B \land \neg C)$ to $(\neg A \land B) \lor (\neg B \land \neg C)$ is an immediate application of the Consensus theorem:</p>
<p><strong>Consensus</strong></p>
<p>$$(P \land Q) \lor (\neg Q \land R) \lor (P \land R) = (P \land Q) \lor (\neg Q \land R)$$</p>
<p>Proof: </p>
<p>$$(P \land Q) \lor (\neg Q \land R) \lor (P \land R) = \text{ (Adjacency)} $$</p>
<p>$$(P \land Q) \lor (\neg Q \land R) \lor (P \land Q \land R) \lor (P \land \neg Q \land R) = \text{ (Absorption)} $$</p>
<p>$$(P \land Q) \lor (\neg Q \land R)$$</p>
<p>And if you are unfamiliar with Adjacency and Absorption:</p>
<p><strong>Adjacency</strong></p>
<p>$$P = (P \land Q) \lor (P \land \neg Q)$$</p>
<p>Proof:</p>
<p>$$P = \text{ (Identity)}$$</p>
<p>$$P \land \top = \text{ (Complement)}$$</p>
<p>$$P \land (Q \lor \neg Q) = \text{ (Distribuition)}$$</p>
<p>$$(P \land Q) \lor (P \land \neg Q)$$</p>
<p><strong>Absorption</strong></p>
<p>$$P \lor (P \land Q) = P$$</p>
<p>Proof:</p>
<p>$$P \lor (P \land Q) = \text{ (Identity)}$$</p>
<p>$$(P \land \top) \lor (P \land Q) = \text{ (Distribution)}$$</p>
<p>$$P \land (\top \lor Q) = \text{ (Annihilation)}$$</p>
<p>$$P \land \top = \text{ (Identity)}$$</p>
<p>$$P$$</p>
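<p>Both the original equivalence and the Consensus theorem can be verified mechanically with a truth table; a short Python sketch:</p>

```python
from itertools import product

def equivalent(f, g, nvars):
    """True iff f and g agree on every assignment of nvars Boolean variables."""
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=nvars))

# The equivalence from the question.
lhs = lambda a, b, c: (not (a and b)) and (b or not c)
rhs = lambda a, b, c: ((not a) and b) or ((not b) and (not c))
assert equivalent(lhs, rhs, 3)

# The Consensus theorem used above.
cons_l = lambda p, q, r: (p and q) or ((not q) and r) or (p and r)
cons_r = lambda p, q, r: (p and q) or ((not q) and r)
assert equivalent(cons_l, cons_r, 3)
print("both equivalences hold")
```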
|
627,575 |
<p>There is a whiskey made up of 64% corn, 32% rye, and 4% barley that was made by blending other whiskies together. I am trying to figure out if there is a chance the ratio of this whiskey could be the result of blending two, maybe three whiskies of different ratios.</p>
<p>The possible whiskies:</p>
<p>Whiskey A is 60% corn, 36% rye and 4% barley.</p>
<p>Whiskey B is 81% corn, 15% rye, and 4% barley. </p>
<p>Whiskey C is 75% corn, 21% rye, and 4% barley</p>
<p>I have a feeling there is a possibility because these whiskies all have 4% barley, but I can't figure out if the other percentages match up in any 1:2:3 ratio in the blend. Any help will be greatly appreciated. Thank you.</p>
|
Claude Leibovici
| 82,404 |
<p>For simplification, without any loss of information, I shall consider that the mixture you will make will be in the ratio 1:x:y. </p>
<p>Now, let us forget about percentage and consider what were percentages as quantities and let us mix 1 liter of A plus "x" liters of B plus "y" liters of C. So, the total volume of the blend is (1+x+y) liters of D. This volume then contains </p>
<p>(60 + 81 x + 75 y) of corn<br>
(36 + 15 x + 21 y) of rye<br>
( 4 + 4 x + 4 y) of barley </p>
<p>Since this is the content of (1+x+y) liters of D, one liter of D contains </p>
<p>(60 + 81 x + 75 y) / (1 + x + y) percents of corn<br>
(36 + 15 x + 21 y) / (1 + x + y) percents of rye<br>
and 4 percents of barley. </p>
<p>So, for your example, if you take x=2 and y=3 (your 1:2:3 ratio), the final mixture contains<br>
(60 + 2 * 81 + 3 * 75) / (1 + 2 + 3) = 447 / 6 = 74.5 % of corn<br>
(36 + 2 * 15 + 3 * 21) / (1 + 2 + 3) = 129 / 6 = 21.5 % of rye<br>
to which 4.0 % of barley have to be added. </p>
<p>If you want a mixture containing 64 percents of corn and 32 percents of rye, you just need to solve two equations for two unknowns, namely</p>
<p>(60 + 81 x + 75 y) / (1 + x + y) = 64<br>
(36 + 15 x + 21 y) / (1 + x + y) = 32 </p>
<p>which turn out to be one and the same equation, so a unique solution does not exist. From these two equations, the only thing you can extract is </p>
<p>y = (4 - 17 x) / 11</p>
<p>Then, any blend made mixing 1 liter of A, x liters of B and (4 - 17 x) /11 liters of C will give you the desired composition. For sure,the value of x cannot be larger than 4 / 17. So you can generate "many" mixtures satisfying your requirements. </p>
<p>You could have made the problem more general not considering the percentage of barley to be the same in each A, B and C. The approach would stay the same. </p>
<p>By the way, since the drink is ready, <strong>Cheers</strong>!</p>
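<p>The one-parameter family of blends can be confirmed with exact rational arithmetic; a quick Python check (the sample values of x are mine; any 0 <= x <= 4/17 works):</p>

```python
from fractions import Fraction as F

A = (F(60), F(36), F(4))   # corn, rye, barley percentages of whiskey A
B = (F(81), F(15), F(4))
C = (F(75), F(21), F(4))

def blend(x, y):
    """Percentages of a mix of 1 part A, x parts B, y parts C."""
    total = 1 + x + y
    return tuple((a + x * b + y * c) / total for a, b, c in zip(A, B, C))

for x in (F(0), F(1, 17), F(4, 17)):
    y = (4 - 17 * x) / 11
    assert blend(x, y) == (F(64), F(32), F(4))
print("every blend with y = (4 - 17x)/11 gives 64/32/4")
```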
|
3,757,972 |
<p>If <span class="math-container">$\frac{ab} {a+b} = y$</span>, where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are greater than zero, why is <span class="math-container">$y$</span> always smaller than the smallest number substituted?</p>
<p>Say <span class="math-container">$a=2$</span> , <span class="math-container">$b=4$</span> (smallest number here is <span class="math-container">$2$</span>. Thus, the answer would be smaller than <span class="math-container">$2$</span>)</p>
<p><span class="math-container">$\frac{2\cdot4}{ 2+4} = 1.\bar 3$</span></p>
<p>I got this equation from physics. It's for getting total resistance and the miss told us to not waste time in mcq on it because the answer will always be smaller than the smallest number. But I can't explain to myself in words or by intuition why this happens. Any help??</p>
|
Michael Rozenberg
| 190,319 |
<p>For positive <span class="math-container">$a$</span> and <span class="math-container">$b$</span>, let <span class="math-container">$a=kb$</span>, where <span class="math-container">$k\geq1$</span>.</p>
<p>Thus, <span class="math-container">$a\geq b$</span> and <span class="math-container">$$\frac{ab}{a+b}=\frac{kb}{k+1}<b.$$</span></p>
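<p>The bound is easy to spot-check numerically (the sample resistor pairs are my own):</p>

```python
def parallel(a, b):
    """Equivalent resistance of a and b in parallel: ab / (a + b)."""
    return a * b / (a + b)

for a, b in [(2, 4), (1, 100), (5, 5), (0.3, 7.2)]:
    assert parallel(a, b) < min(a, b)   # always below the smaller resistance
print(parallel(2, 4))  # 1.333...
```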
|
2,647,868 |
<p>I'm very confused at the following question:</p>
<blockquote>
<p>Find the basis for the image and a basis of the kernel for the following matrix:
$\begin{bmatrix} 7 & 0 & 7 \\ 2 & 3 & 8 \\ 9 & 0 & 9 \\ 5 & 6 & 17 \end{bmatrix}$</p>
</blockquote>
<p>I just don't know how to do any of this. We find the image by doing the following:
$\begin{bmatrix} 7 & 2 & 9 & 5 \\\ 0 & 3 & 0 & 6\\ 7 & 8 & 9 &17 \end{bmatrix}$
Then, after doing RREF, we get:
$\begin{bmatrix} 1 & 0 & \frac{9}{7} & \frac{1}{7} \\\ 0 & 1 & 0 & 2\\ 0&0&0&0 \end{bmatrix}$. This gives us an image of {$\begin{bmatrix} 1 \\ 0 \\ \frac{9}{7} \\ \frac{1}{7} \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 2 \end{bmatrix}$}. However, I don't know how to proceed from here. Please help me?</p>
|
Connington
| 530,877 |
<p>You're absolutely correct with your basis for the image of A, i.e. its column space (which you computed as the row space of $A^T$). The basis for the kernel of A is found similarly: you must solve the homogeneous system $ A \mathbf x = \mathbf 0 $, where $\mathbf x$ in your case is the 3x1 column vector $\mathbf x = (x_1, x_2, x_3)^T $. The solution to this is found by row-reducing $A$ and solving: $$\begin{bmatrix} 7 & 0 & 7 \\ 2 & 3 & 8 \\ 9 & 0 & 9 \\ 5 & 6 & 17 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$ So we have the system $$ x_1 + x_3 = 0 \\ x_2 + 2x_3 = 0.$$ Letting $x_3$ be a free variable, we obtain the vector $\mathbf x= (-x_3, -2x_3, x_3)^T$; taking $x_3 = 1$ gives $(-1, -2, 1)^T$ as the basis for the kernel of A. This answer makes sense, since the dimension of $\text{ker} A$ is equal to the number of free variables in the solution of the homogeneous system (i.e., $\text{dim ker} A = 1$).</p>
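<p>A direct check that $(-1,-2,1)^T$ lies in the kernel, in plain Python:</p>

```python
A = [[7, 0, 7],
     [2, 3, 8],
     [9, 0, 9],
     [5, 6, 17]]
x = [-1, -2, 1]   # the kernel basis vector found above

# A x should be the zero vector in R^4.
Ax = [sum(row[j] * x[j] for j in range(3)) for row in A]
assert Ax == [0, 0, 0, 0]
print(Ax)
```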
|
757,702 |
<p>I am in my pre-academic year. We recently studied the Remainder sentence (at least that's how I think it translates), which states that any polynomial can be written as <span class="math-container">$P = Q\cdot L + R$</span>.</p>
<p>I am unable to solve the following:</p>
<blockquote>
<p>Show that <span class="math-container">$(x + 1)^{(2n + 1)} + x^{(n + 2)}$</span> can be divided by <span class="math-container">$x^2 + x + 1$</span> without remainder.</p>
</blockquote>
|
lab bhattacharjee
| 33,337 |
<p>$$(x+1)^{2n+1}+x^{n+2}=(x+1)\{(x+1)^2\}^n+x^{n+2}=(x+1)(x^2+2x+1)^n+x^{n+2}$$</p>
<p>Now as $x^2+2x+1\equiv x\pmod{x^2+x+1}$</p>
<p>$$(x+1)(x^2+2x+1)^n+x^{n+2}\equiv (x+1)(x)^n+x^{n+2}\pmod{x^2+x+1}$$</p>
<p>$$\equiv x^n(x+1+x^2)$$</p>
|
4,351,725 |
<p>This is what the solution says:</p>
<p>Since each string of 4 digits are independent, having 2018 in a string has probability of <span class="math-container">$(1/10)^4$</span></p>
<p>By geometric distribution, expected value of digits to obtain 2018 in a string would be <span class="math-container">$10^4$</span></p>
<p>So we need to see how many powers do we need to write before we reach <span class="math-container">$10^4$</span> units.</p>
<p>Since <span class="math-container">$2^{10} \approx 1000 $</span>, we can say <span class="math-container">$2^n$</span> has <span class="math-container">$0.4$</span> digits.</p>
<p>So the number of digits is:</p>
<p><span class="math-container">$\sum_{r=1}^{n}0.3r = 0.2n^2$</span></p>
<p>Thus <span class="math-container">$0.2n^2 = 10^4$</span> where <span class="math-container">$n \approx 231$</span></p>
<p>For the last part I don't get why the solution put summation of <span class="math-container">$0.3r$</span>. According to the explanation above, isn't it correct just to solve the equation</p>
<p><span class="math-container">$0.4n = 10^4$</span> because we need to see whether <span class="math-container">$2^n$</span> has <span class="math-container">$10^4$</span> digits?</p>
<p>summation implies the summation of the digits, but that means writing all numbers <span class="math-container">$2^n$</span> side by side, which the problem did not intended.</p>
<p>Hope anyone would shed some light with this one.</p>
|
Salcio
| 821,280 |
<p>Since <span class="math-container">$\log_{10}2$</span> is irrational, the sequence <span class="math-container">$\{n\log_{10}2\}$</span>, <span class="math-container">$n \in \mathbb N$</span>, is uniformly distributed on <span class="math-container">$[0,1]$</span>. In particular it is dense in <span class="math-container">$[0,1]$</span>. Thus, there is an <span class="math-container">$n \in \mathbb N$</span> such that <span class="math-container">$\{n\log_{10}2\} \in [\log_{10}2.018,\log_{10}2.019)$</span>. It means that for some <span class="math-container">$k\in \mathbb N$</span> we have <span class="math-container">$$\log_{10}2.018 + k \le n\log_{10}2 < \log_{10}2.019 + k$$</span> or <span class="math-container">$$\log_{10}2018 + (k-3) \le n\log_{10}2 < \log_{10}2019 + (k-3),$$</span> that is <span class="math-container">$$\log_{10}\left(2018\cdot 10^{k-3}\right) \le \log_{10}2^n < \log_{10}\left(2019\cdot 10^{k-3}\right),$$</span> which is <span class="math-container">$$2018\cdot 10^{k-3} \le 2^n < 2019\cdot 10^{k-3}.$$</span> In other words, this power of 2 starts with 2018. Since the fractional parts visit the interval infinitely often, there are infinitely many powers of 2 which start with 2018. If you want to find the first one ... good luck with that.</p>
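<p>The argument is non-constructive, but a brute-force search finds the first such power quickly. A Python sketch: the float leading-digit estimate is confirmed by exact big-integer arithmetic before accepting a candidate (in principle a borderline float value could skip one candidate, so this is a sketch rather than a proof).</p>

```python
import math

def first_power_of_2_with_prefix(prefix, limit=100000):
    """Smallest n < limit such that the decimal expansion of 2**n starts with prefix."""
    s = str(prefix)
    lg = math.log10(2)
    for n in range(1, limit):
        frac = (n * lg) % 1.0
        lead = str(int(10 ** (frac + len(s) - 1)))   # approximate leading digits
        if lead == s and str(2 ** n).startswith(s):  # confirm exactly
            return n
    return None

assert first_power_of_2_with_prefix(2) == 1
n = first_power_of_2_with_prefix(2018)
print(n, "->", str(2 ** n)[:8] + "...")
```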
|
2,298,873 |
<p><strong>Problem statement:</strong></p>
<p>There are three spheres. The one which will roll is $\textbf{X}=(x_1,x_2,x_3)$ with radius $R_X$. The other two spheres are $\textbf{A}=(a_1,a_2,a_3)$ with $R_A$ and $\textbf{B}=(b_1,b_2,b_3)$ with $R_B$. They are both below $X$, meaning $x_3>a_3$ and $x_3>b_3$. They both contact $X$, meaning $|\textbf{X}-\textbf{A}| = R_X+R_A$ and similarly for $B$. I need to find the axis of rotation $\textbf{V}$ and point $\textbf{v}$ about which $\textbf{X}$ can rotate to change its position while preserving its contacts with both $\textbf{A}$ and $\textbf{B}$. With this information actually performing the rotation for my task in a granular physics program is simple. </p>
<p><strong>Attempted solution:</strong></p>
<p>I thought the axis of rotation would be the vector connecting $A$ and $B$.
$$\textbf{V}= \textbf{B}-\textbf{A},$$
suitably normalized. I thought the point about which the rotation should occur would come from analyzing the triangle formed by the locations of the three spheres, and breaking it into two right triangles with height $h$ and bases $d$ and $D$, then solving the system</p>
<p>$$(R_A+R_X)^2 = d^2 + h^2,$$
$$(R_B + R_X)^2 = D^2 + h^2,$$
$$d+D = |\textbf{B}-\textbf{A}|,$$</p>
<p>which is three equations in three unknowns $h$,$d$, and $D$. I thought $h$ would be the length of the lever arm about which to rotate, and I thought $d$ or $D$ would map $\textbf{A}$ to the pivot point as $\textbf{v} = \textbf{V}d$. When I use this for the rotation I get nonsense results. What am I missing? Any help is appreciated! </p>
<p>Here is a figure to indicate a possible configuration before rolling. It is not necessary however that the two lower spheres $\textbf{A}$ and $\textbf{B}$ are touching. </p>
<p><a href="https://i.stack.imgur.com/exeOf.png" rel="nofollow noreferrer">It works. Thanks!</a></p>
<p><a href="https://i.stack.imgur.com/XXhJ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XXhJ6.png" alt="enter image description here"></a></p>
|
amd
| 265,466 |
<p>The axis of rotation is, as you’ve already determined, the line through the centers of the two fixed spheres, which can be represented parametrically as $(1-\lambda)\mathbf A+\lambda\mathbf B$. This line has direction vector $\mathbf B-\mathbf A$, which normalizes to $\mathbf w=(\mathbf B-\mathbf A)/\|\mathbf B-\mathbf A\|$. When a point that’s not on this axis is rotated about the axis, it moves along a circular arc that lies in a plane perpendicular to the rotation axis and is centered on that axis. </p>
<p>To find this rotation center $\mathbf C$ for the center of the moving sphere $\mathbf X$, observe from the triangle of the sphere centers that $\mathbf C$ is the orthogonal projection of $\mathbf X$ onto $\overline{\mathbf A\mathbf B}$. Therefore, $$\mathbf C=\mathbf A+\left[\mathbf w\cdot(\mathbf X-\mathbf A)\right]\mathbf w.$$ Here, $\mathbf w\cdot(\mathbf X-\mathbf A)$ is the distance $d$ that you were solving for. This point could also be found by directly computing the intersection of the rotation axis and its perpendicular plane through $\mathbf X$. One way to do this is to solve the equation $$[(1-\lambda)\mathbf A+\lambda\mathbf B]\cdot\mathbf w=\mathbf X\cdot\mathbf w$$ for $\lambda$, which leads directly to the above result. (This equation comes from the point-normal form of equation for a plane.) </p>
<p>It seems useful to actually find the circle of motion. Let $r=\|\mathbf X-\mathbf C\|$, $\mathbf u=(\mathbf X-\mathbf C)/r$ and $\mathbf v=\mathbf w\times\mathbf u$. The unit vectors $\mathbf u$ and $\mathbf v$ are both parallel to the plane of motion and are orthogonal to each other. A parametric description of $\mathbf X$’s motion is then $\mathbf C+r\mathbf u\cos\theta+r\mathbf v\sin\theta$, a clockwise rotation through $\theta$ when viewed from $\mathbf A$. (I didn’t really need to normalize $\mathbf u$ for this, but I’m anticipating the next paragraph.) </p>
<p>With the center point $\mathbf C$ and the three unit vectors in hand, it’s fairly easy to construct a homogeneous rotation matrix that implements this rotation. In the $uvw$ coordinate system with origin at $\mathbf C$, it’s just a rotation about the $w$-axis, which can be represented in block form as $\small\begin{bmatrix}R_\theta&\mathbf0_2\\\mathbf0_2&I_2\end{bmatrix}$, where $R_\theta$ is the familiar two-dimensional rotation matrix and $\mathbf0_2$ is a $2\times2$ matrix of zeros. The corresponding matrix in the $xyz$ coordinate system is obtained via a change of basis: $$R=\begin{bmatrix}\mathbf u&\mathbf v&\mathbf w&\mathbf C \\ 0&0&0&1\end{bmatrix}\begin{bmatrix}R_\theta&\mathbf0_2\\\mathbf0_2&I_2\end{bmatrix}\begin{bmatrix}\mathbf u&\mathbf v&\mathbf w&\mathbf C \\ 0&0&0&1\end{bmatrix}^{-1}.$$ The change-of-basis matrix is a composition of a rotation and translation, so its inverse can be computed without going through a generic matrix inversion: $$R=\begin{bmatrix}\mathbf u&\mathbf v&\mathbf w&\mathbf C \\ 0&0&0&1\end{bmatrix}\begin{bmatrix}R_\theta&\mathbf0_2\\\mathbf0_2&I_2\end{bmatrix}\begin{bmatrix}\mathbf u^T&0\\\mathbf v^T&0\\\mathbf w^T&0\\\mathbf0^T&1\end{bmatrix}\begin{bmatrix}I_3&-\mathbf C\\\mathbf0&1\end{bmatrix}.$$ N.B.: in the first matrix, the unit vectors are <em>columns</em>, while in the third they are <em>rows</em>; $\mathbf C$ and $-\mathbf C$ are both entered as columns. Note, too, that any point on the rotation axis can be used instead of $\mathbf C$. You will end up with the same matrix. (If you like, expand the first matrix into its rotation and translation components to restore the symmetry of the product.)</p>
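To make the recipe concrete, here is a minimal numeric sketch in plain Python (no libraries; the helper names and the sample configuration are my own, with all three radii equal to $1$). It builds $\mathbf w$, $\mathbf C$, $\mathbf u$, $\mathbf v$ exactly as above and checks that rotating $\mathbf X$ about the axis preserves both contact distances.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(s * x for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def roll_position(A, B, X, theta):
    """Rotate X by theta about the axis through A and B."""
    w = scale(sub(B, A), 1.0 / norm(sub(B, A)))   # unit axis direction
    C = add(A, scale(w, dot(w, sub(X, A))))       # projection of X onto axis
    r = norm(sub(X, C))                           # radius of the circular path
    u = scale(sub(X, C), 1.0 / r)
    v = cross(w, u)
    return add(C, add(scale(u, r * math.cos(theta)),
                      scale(v, r * math.sin(theta))))

# Example: all three radii 1, so both contact distances are 2.
A, B = (0.0, 0.0, 0.0), (3.0, 0.0, 0.0)
X = (1.5, 0.0, math.sqrt(1.75))                   # |X-A| = |X-B| = 2
X2 = roll_position(A, B, X, 0.3)
print(norm(sub(X2, A)), norm(sub(X2, B)))         # both remain 2
```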
|
338,099 |
<p>Are there general ways for given rational coefficients <span class="math-container">$a,b,c$</span> (I am particularly interested in <span class="math-container">$a=3,b=1,c=8076$</span>, but in general case too) to answer whether this equation has a rational solution or not?</p>
|
Sam
| 144,949 |
<p>The equation in question is:</p>
<p><span class="math-container">$ax^4+by^2=c$</span> --------<span class="math-container">$(1)$</span></p>
<p>For equation <span class="math-container">$(1)$</span>, Seiji Tomita has given a parametric solution. </p>
<p>For given <span class="math-container">$(a,b,c) = (2,\ 1,\ 3\cdot 33^4)$</span>, a numerical solution of equation (1) is shown below:</p>
<p><span class="math-container">$(x,y)=(13,1871)$</span></p>
<p>For further details see his web site & the link is given below:</p>
<pre><code> http://www.maroon.dti.ne.jp/fermat
</code></pre>
<p>Click on computational number theory & then select article #327</p>
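The quoted numerical solution can be checked with one line of arithmetic:

```python
# Verify 2*13^4 + 1*1871^2 = 3*33^4 for the equation a*x^4 + b*y^2 = c.
a, b, c = 2, 1, 3 * 33**4
x, y = 13, 1871
print(a * x**4 + b * y**2 == c)   # True: both sides equal 3557763
```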
|
1,898,810 |
<p>How do I integrate $\frac{1}{1-x^2}$ without using trigonometric identities or partial fractions? Thanks!</p>
|
Jan Eerland
| 226,665 |
<p>Use the series representation, valid for $|x|<1$:</p>
<p>$$\frac{1}{1-x^2}=\sum_{n=0}^{\infty}\frac{x^n(1+(-1)^n)}{2}$$</p>
<p>So, we get:</p>
<p>$$\text{I}=\int\frac{1}{1-x^2}\space\text{d}x=\int\sum_{n=0}^{\infty}\frac{x^n(1+(-1)^n)}{2}\space\text{d}x=\sum_{n=0}^{\infty}\frac{1+(-1)^n}{2}\int x^n\space\text{d}x$$</p>
<p>Now, use:</p>
<ul>
<li>$$\int x^n\space\text{d}x=\frac{x^{1+n}}{1+n}+\text{C}$$</li>
</ul>
<p>So:</p>
<p>$$\text{I}=\sum_{n=0}^{\infty}\frac{1+(-1)^n}{2}\cdot\frac{x^{1+n}}{1+n}+\text{C}=\sum_{n=0}^{\infty}\frac{x^{1+n}(1+(-1)^n)}{2(1+n)}+\text{C}$$</p>
<hr>
<p>Using partial fractions, as @haqnatural did, gives us:</p>
<p>$$\int\frac{1}{1-x^2}\space\text{d}x=\frac{\ln|1+x|-\ln|1-x|}{2}+\text{C}$$</p>
<p>Because:</p>
<ul>
<li>$$\int\frac{1}{x}\space\text{d}x=\ln|x|+\text{C}$$</li>
</ul>
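A quick numerical sanity check (my own sketch; valid only for $|x|<1$, where the geometric series converges) that the term-by-term antiderivative agrees with the closed form from partial fractions:

```python
import math

def series_antiderivative(x, terms=200):
    # Only even n survive the (1 + (-1)^n)/2 factor, leaving sum x^(2k+1)/(2k+1).
    return sum(x**(2*k + 1) / (2*k + 1) for k in range(terms))

def log_antiderivative(x):
    return (math.log(abs(1 + x)) - math.log(abs(1 - x))) / 2

x = 0.5
print(series_antiderivative(x), log_antiderivative(x))  # both ~0.5493
```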
|
1,898,810 |
<p>How do I integrate $\frac{1}{1-x^2}$ without using trigonometric identities or partial fractions? Thanks!</p>
|
Community
| -1 |
<p>$$\frac1{1-x^2}=\frac{1-x+x}{1-x^2}=\frac1{1+x}+\frac x{1-x^2}$$ hence</p>
<p>$$\log|1+x|-\frac12\log|1-x^2|+C.$$</p>
<p>(Not much different from a decomposition in simple fractions.)</p>
|
1,621,302 |
<blockquote>
<p>Assume $a,n\in\mathbb{N}$ such that $\gcd{(a,n)}=1$. We say $n$ is prime if $a^{n-1}\equiv 1\mod{n}$ and $a^x\not\equiv1\mod{n}$ for any divisor $x$ of $n-1$.</p>
</blockquote>
<p>I am presented with the following proof but there are jumps within it that I don't understand.</p>
<hr>
<p><em>Proof</em>: Suppose $\gcd{(a,n)} = 1$. Let $d$ be the order of $a$ modulo $n$, i.e. the smallest $x$ such that $a^x\equiv1\mod{n}$. Then $d\mid n-1$. We write $n-1=sd+r$ where $0\le r<d$. Hence $$1\equiv a^{n-1} \equiv a^{sd+r} \equiv a^r\mod{n}$$</p>
<p>Since $d\mid n-1$, we know $d\mid\varphi(n)$. Since $a^{\varphi(n)}\equiv 1\mod{n}$ we note that $d=n-1$. Also $d\mid\varphi(n) \le n-1\implies \varphi(n)=n-1\implies n$ is prime.</p>
<hr>
<p><strong>Questions:</strong></p>
<ol>
<li>How does $d\mid n-1$?</li>
<li>If $d\mid n-1$ why do we write $n-1=sd+r$, surely $r=0$?</li>
<li>What relevance does showing $1\equiv a^r\mod{n}$ have?</li>
<li>How does $d\mid n-1\implies d\mid \varphi(n)$?</li>
<li>How does $a^\varphi(n)\equiv 1\mod{n}\implies d=n-1$?</li>
<li>Can we always say that $\varphi(n)\le n-1$?</li>
<li>Is it true to say if $\varphi(n)=n-1$ then $n$ is prime?</li>
</ol>
|
Bernard
| 202,857 |
<p>Assertion 1 is false in general, except if $n$ is a prime number (in the ordinary sense). The definition given here is that of a <em>(Fermat) pseudoprime relative to base $a$</em> (first condition) plus an extra condition on the order.</p>
<p>A counter-example: $\gcd(3,8)=1$, but the order of $3$ mod $8$ is $2$ – not a divisor of $8-1=7$.</p>
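The counterexample is easy to verify by computing the multiplicative order directly (a small helper of my own):

```python
def multiplicative_order(a, n):
    """Smallest d >= 1 with a^d = 1 (mod n); assumes gcd(a, n) = 1."""
    x, d = a % n, 1
    while x != 1:
        x = x * a % n
        d += 1
    return d

print(multiplicative_order(3, 8))   # 2, and 2 does not divide 8 - 1 = 7
```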
|
1,414,690 |
<p>This is a followup to my question <a href="https://math.stackexchange.com/questions/1414636/eigenvalues-of-matrix-with-all-1s">here</a>.</p>
<p>Let $A$ be the $n \times n$ matrix over a field of characteristic 0, all of whose entries are 1. Is $A$ diagonalizable?</p>
|
Will Jagy
| 10,400 |
<p>Here is a matrix <span class="math-container">$P$</span> that I made up some time ago. Note that <span class="math-container">$P$</span> is not orthogonal, although the columns are pairwise orthogonal.</p>
<p>10:
<span class="math-container">$$
\left(
\begin{array}{rrrrrrrrrr}
1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\
1 & 0 & 2 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\
1 & 0 & 0 & 3 & -1 & -1 & -1 & -1 & -1 & -1 \\
1 & 0 & 0 & 0 & 4 & -1 & -1 & -1 & -1 & -1 \\
1 & 0 & 0 & 0 & 0 & 5 & -1 & -1 & -1 & -1 \\
1 & 0 & 0 & 0 & 0 & 0 & 6 & -1 & -1 & -1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 7 & -1 & -1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & -1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 9
\end{array}
\right).
$$</span></p>
<p>3:
<span class="math-container">$$
\left(
\begin{array}{rrr}
1 & -1 & -1 \\
1 & 1 & -1 \\
1 & 0 & 2 \\
\end{array}
\right).
$$</span></p>
<p>4:
<span class="math-container">$$
\left(
\begin{array}{rrrr}
1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 \\
1 & 0 & 2 & -1 \\
1 & 0 & 0 & 3 \\
\end{array}
\right).
$$</span></p>
<p>5:
<span class="math-container">$$
\left(
\begin{array}{rrrrr}
1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & -1 \\
1 & 0 & 2 & -1 & -1 \\
1 & 0 & 0 & 3 & -1 \\
1 & 0 & 0 & 0 & 4 \\
\end{array}
\right).
$$</span></p>
<p>The columns of <span class="math-container">$P$</span> are of varying lengths; for the 10 by 10 case depicted, lengths <span class="math-container">$ \sqrt{10}, \sqrt{2}, \sqrt{6}, \sqrt{12},..$</span> All that is necessary to make an orthogonal matrix <span class="math-container">$Q$</span> out of this is to divide each column by its length. In turn, this tells us that <span class="math-container">$\det P = \pm n!,$</span> as <span class="math-container">$\det Q = \pm 1.$</span> The result is <span class="math-container">$Q^T Q = Q Q^T = I$</span> and <span class="math-container">$Q^T = Q^{-1}.$</span> Meanwhile, <span class="math-container">$Q^T A Q= Q^{-1}AQ$</span> is diagonal, in this case the diagonal elements are the eigenvalues. </p>
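A quick exact check in plain Python (the column description is read off the displayed matrices) that the columns of this $P$ are pairwise orthogonal and are eigenvectors of the all-ones matrix, with eigenvalues $n, 0, \dots, 0$:

```python
def p_columns(n):
    # Column 0 is all ones; column j (j >= 1) has -1 in rows 0..j-1, j in row j.
    cols = [[1] * n]
    for j in range(1, n):
        cols.append([-1] * j + [j] + [0] * (n - j - 1))
    return cols

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n = 10
cols = p_columns(n)
orthogonal = all(dot(cols[i], cols[j]) == 0
                 for i in range(n) for j in range(i + 1, n))
# A is the all-ones matrix, so A times x equals (sum of x) times the ones
# vector; hence each column is an eigenvector: eigenvalue n for column 0,
# eigenvalue 0 for the rest.
eigen_ok = sum(cols[0]) == n and all(sum(c) == 0 for c in cols[1:])
print(orthogonal, eigen_ok)   # True True
```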
|
2,733,142 |
<p>This question has been asked a <a href="https://math.stackexchange.com/questions/207029/a-b2-for-which-matrix-a">few</a> <a href="https://math.stackexchange.com/questions/583442/square-root-of-nilpotent-matrix">times</a>. In the former case I noticed that there was some argument trending towards using the Jordan form. For the sake of completeness, I was hoping to see if I'm on the right track.</p>
<p>Let $N$ be a nilpotent matrix such that $N^n=0$ and $N^{n-1}\neq0$. Suppose there exists $A^2=N$. Then I can write $A^2=\lambda I+N'$ where $N'$ is a nilpotent matrix. Using the Taylor expansion, I can see that $A^2=(\lambda I+N')^{1/2}=\sqrt{\lambda}\left(I+\frac{N'}{2\lambda}-\frac{N'^2}{8\lambda^2}+\frac{N'^3}{16\lambda^3}+....\right)$. This series is convergent since it only has finitely many terms. But, $A^2=N$ and $N$ is nilpotent. So $\lambda^i=0$ for all $i\in \{0,...,n\}.$ Hence such an $A^2$ doesn't exist.</p>
|
Will Jagy
| 10,400 |
<p>Just calculations over some field. If
$$
\left(
\begin{array}{cc}
a & b \\
c & d
\end{array}
\right)^2 =
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0
\end{array}
\right) \; ,
$$
then
$$
\left(
\begin{array}{cc}
a^2 + bc & b(a+d) \\
c(a+d) & d^2 + bc
\end{array}
\right) =
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0
\end{array}
\right) \; ,
$$
Since $b(a+d) = 1,$ we know $a+d \neq 0.$ Since $c(a+d) = 0$ but $a+d \neq 0,$ we know $c=0.$ But then $a^2 = 0$ and $a=0,$ also $d^2 = 0$ so $d=0.$ We have now seen that $a+d = 0,$ which gives a contradiction, so no such matrix square root exists. We used only the field axioms; characteristic did not matter.</p>
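Since the argument uses only the field axioms, it can be spot-checked by exhausting all $2\times2$ matrices over small prime fields $\mathbb Z/p$ (brute-force sketch of my own):

```python
from itertools import product

def has_square_root_mod(p):
    """Is there a 2x2 matrix over Z/p whose square is [[0,1],[0,0]]?"""
    for a, b, c, d in product(range(p), repeat=4):
        # Entries of [[a,b],[c,d]]^2, as computed in the answer above.
        square = ((a*a + b*c) % p, b*(a + d) % p,
                  c*(a + d) % p, (d*d + b*c) % p)
        if square == (0, 1, 0, 0):
            return True
    return False

print([p for p in (2, 3, 5, 7) if has_square_root_mod(p)])   # []: none exists
```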
|
202,111 |
<p>Consider the following PDE
$$ \frac{\partial \Phi}{\partial t} - \frac{1}{2}y \frac{\partial \Phi}{\partial x} + \alpha \beta y^{3/2} \frac{\partial^2 \Phi}{\partial x \partial y} + \frac{1}{2} y \frac{\partial^2 \Phi}{\partial x^2} + \frac{1}{2} \alpha^2 y^2 \frac{\partial^2 \Phi}{\partial y^2} = 0 $$
What is a good substitution to solve this PDE ? I once used affine change of variable of type
$$ \Phi(t,x,y) = \exp\left\{ A(t)x+B(t)y\right\} $$
and then deal with Ricatti PDE but it does not seem to help here.</p>
|
Mhenni Benghorbal
| 35,472 |
<p>Here is a solution to the pde</p>
<p>$$ \Phi \left( x,y,t \right) = A\left( 2\,{\frac {\ln \left( y \right) }{
{\alpha}^{2}}} + t \right) + B\left( {\frac {y\ln \left(
y \right) }{{\alpha}^{2}}} - {\frac {y}{{\alpha}^{2}}}+x \right)+{\it C_1}+C_2{{\rm e}^{x}}+{\it C_3}\,y\,,$$</p>
<p>where $A,B,C_i\,,i=1,2,3 \, $ are constants. </p>
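A finite-difference spot check of this solution (the constants, step size and evaluation point below are arbitrary choices of mine; $A(\cdot)$ and $B(\cdot)$ are read as constant multipliers, as stated):

```python
import math

alpha, beta = 0.7, 1.3
A, B, C1, C2, C3 = 2.0, -1.5, 0.4, 0.9, 1.1

def phi(x, y, t):
    return (A * (2 * math.log(y) / alpha**2 + t)
            + B * (y * math.log(y) / alpha**2 - y / alpha**2 + x)
            + C1 + C2 * math.exp(x) + C3 * y)

h = 1e-4
def d1(f, i, p):                       # central first difference in coordinate i
    q = list(p); q[i] += h; hi = f(*q)
    q[i] -= 2 * h; lo = f(*q)
    return (hi - lo) / (2 * h)

def d2(f, i, j, p):                    # nested second difference
    return d1(lambda *q: d1(f, i, q), j, p)

p = (0.3, 1.7, 0.5)                    # (x, y, t)
y = p[1]
residual = (d1(phi, 2, p) - y/2 * d1(phi, 0, p)
            + alpha*beta * y**1.5 * d2(phi, 0, 1, p)
            + y/2 * d2(phi, 0, 0, p)
            + alpha**2 * y**2 / 2 * d2(phi, 1, 1, p))
print(abs(residual))                   # ~0 up to discretization error
```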
|
202,111 |
<p>Consider the following PDE
$$ \frac{\partial \Phi}{\partial t} - \frac{1}{2}y \frac{\partial \Phi}{\partial x} + \alpha \beta y^{3/2} \frac{\partial^2 \Phi}{\partial x \partial y} + \frac{1}{2} y \frac{\partial^2 \Phi}{\partial x^2} + \frac{1}{2} \alpha^2 y^2 \frac{\partial^2 \Phi}{\partial y^2} = 0 $$
What is a good substitution to solve this PDE ? I once used affine change of variable of type
$$ \Phi(t,x,y) = \exp\left\{ A(t)x+B(t)y\right\} $$
and then deal with Ricatti PDE but it does not seem to help here.</p>
|
doraemonpaul
| 30,938 |
<p>Case $1$ : $\alpha=0$</p>
<p>Then $\dfrac{\partial\Phi}{\partial t}-\dfrac{y}{2}\dfrac{\partial\Phi}{\partial x}+\dfrac{y}{2}\dfrac{\partial^2\Phi}{\partial x^2}=0$</p>
<p>Case $1$a : $\text{Re}(yt)\geq0$</p>
<p>Let $\Phi(x,y,t)=X(x)T(y,t)$ ,</p>
<p>Then $X(x)\dfrac{\partial T(y,t)}{\partial t}-\dfrac{yT(y,t)}{2}\dfrac{dX(x)}{dx}+\dfrac{yT(y,t)}{2}\dfrac{d^2X(x)}{dx^2}=0$</p>
<p>$\biggl(\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}\biggr)\dfrac{yT(y,t)}{2}=-X(x)\dfrac{\partial T(y,t)}{\partial t}$</p>
<p>$\dfrac{\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}}{X(x)}=-\dfrac{2\dfrac{\partial T(y,t)}{\partial t}}{yT(y,t)}=\dfrac{4(f(u))^2-1}{4}$</p>
<p>$\begin{cases}\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}-\dfrac{4(f(u))^2-1}{4}X(x)=0\\\dfrac{\partial T(y,t)}{\partial t}=-\dfrac{(4(f(u))^2-1)yT(y,t)}{8}\end{cases}$</p>
<p>$\begin{cases}X(x)=\begin{cases}c_1(u)e^{\frac{x}{2}}\sinh(xf(u))+c_2(u)e^{\frac{x}{2}}\cosh(xf(u))&\text{when}~f(u)\neq0\\c_1xe^{\frac{x}{2}}+c_2e^{\frac{x}{2}}&\text{when}~f(u)=0\end{cases}\\T(y,t)=c_3(u)e^{-\frac{yt(4(f(u))^2-1)}{8}}\end{cases}$</p>
<p>$\therefore\Phi(x,y,t)=c_1xe^{\frac{4x+yt}{8}}+c_2e^{\frac{4x+yt}{8}}+\int_uC_3(u)e^{\frac{4x-yt(4(f(u))^2-1)}{8}}\sinh(xf(u))~du+\int_uC_4(u)e^{\frac{4x-yt(4(f(u))^2-1)}{8}}\cosh(xf(u))~du$</p>
<p>or $c_1xe^{\frac{4x+yt}{8}}+c_2e^{\frac{4x+yt}{8}}+\sum\limits_uC_3(u)e^{\frac{4x-yt(4(f(u))^2-1)}{8}}\sinh(xf(u))+\sum\limits_uC_4(u)e^{\frac{4x-yt(4(f(u))^2-1)}{8}}\cosh(xf(u))$</p>
<p>Case $1$b : $\text{Re}(yt)\leq0$</p>
<p>Let $\Phi(x,y,t)=X(x)T(y,t)$ ,</p>
<p>Then $X(x)\dfrac{\partial T(y,t)}{\partial t}-\dfrac{yT(y,t)}{2}\dfrac{dX(x)}{dx}+\dfrac{yT(y,t)}{2}\dfrac{d^2X(x)}{dx^2}=0$</p>
<p>$\biggl(\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}\biggr)\dfrac{yT(y,t)}{2}=-X(x)\dfrac{\partial T(y,t)}{\partial t}$</p>
<p>$\dfrac{\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}}{X(x)}=-\dfrac{2\dfrac{\partial T(y,t)}{\partial t}}{yT(y,t)}=-\dfrac{4(f(u))^2+1}{4}$</p>
<p>$\begin{cases}\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}+\dfrac{4(f(u))^2+1}{4}X(x)=0\\\dfrac{\partial T(y,t)}{\partial t}=\dfrac{(4(f(u))^2+1)yT(y,t)}{8}\end{cases}$</p>
<p>$\begin{cases}X(x)=\begin{cases}c_1(u)e^{\frac{x}{2}}\sin(xf(u))+c_2(u)e^{\frac{x}{2}}\cos(xf(u))&\text{when}~f(u)\neq0\\c_1xe^{\frac{x}{2}}+c_2e^{\frac{x}{2}}&\text{when}~f(u)=0\end{cases}\\T(y,t)=c_3(u)e^{\frac{yt(4(f(u))^2+1)}{8}}\end{cases}$</p>
<p>$\therefore\Phi(x,y,t)=c_1xe^{\frac{4x+yt}{8}}+c_2e^{\frac{4x+yt}{8}}+\int_uC_3(u)e^{\frac{4x+yt(4(f(u))^2+1)}{8}}\sin(xf(u))~du+\int_uC_4(u)e^{\frac{4x+yt(4(f(u))^2+1)}{8}}\cos(xf(u))~du$</p>
<p>or $c_1xe^{\frac{4x+yt}{8}}+c_2e^{\frac{4x+yt}{8}}+\sum\limits_uC_3(u)e^{\frac{4x+yt(4(f(u))^2+1)}{8}}\sin(xf(u))+\sum\limits_uC_4(u)e^{\frac{4x+yt(4(f(u))^2+1)}{8}}\cos(xf(u))$</p>
<p>Case $2$ : $\alpha\neq0$ and $\beta=0$</p>
<p>Then $\dfrac{\partial\Phi}{\partial t}-\dfrac{y}{2}\dfrac{\partial\Phi}{\partial x}+\dfrac{y}{2}\dfrac{\partial^2\Phi}{\partial x^2}+\dfrac{\alpha^2y^2}{2}\dfrac{\partial^2 \Phi}{\partial y^2}=0$</p>
<p>Let $\Phi(x,y,t)=X(x)Y(y)T(t)$ ,</p>
<p>Then $X(x)Y(y)\dfrac{dT(t)}{dt}-\dfrac{yY(y)T(t)}{2}\dfrac{dX(x)}{dx}+\dfrac{yY(y)T(t)}{2}\dfrac{d^2X(x)}{dx^2}+\dfrac{\alpha^2y^2X(x)T(t)}{2}\dfrac{d^2Y(y)}{dy^2}=0$</p>
<p>$\dfrac{\dfrac{dT(t)}{dt}}{T(t)}+\dfrac{y}{2}\dfrac{\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}}{X(x)}+\dfrac{\alpha^2y^2}{2}\dfrac{\dfrac{d^2Y(y)}{dy^2}}{Y(y)}=0$</p>
<p>$\dfrac{\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}}{X(x)}=-\dfrac{\alpha^2y\dfrac{d^2Y(y)}{dy^2}}{Y(y)}-\dfrac{2\dfrac{dT(t)}{dt}}{yT(t)}=-\dfrac{4(f(u))^2+1}{4}$</p>
<p>$\begin{cases}\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}+\dfrac{4(f(u))^2+1}{4}X(x)=0\\\dfrac{\alpha^2y^2\dfrac{d^2Y(y)}{dy^2}}{2Y(y)}+\dfrac{\dfrac{dT(t)}{dt}}{T(t)}=\dfrac{(4(f(u))^2+1)y}{8}\end{cases}$</p>
<p>$\begin{cases}\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}+\dfrac{4(f(u))^2+1}{4}X(x)=0\\\dfrac{\alpha^2y^2\dfrac{d^2Y(y)}{dy^2}}{2Y(y)}-\dfrac{(4(f(u))^2+1)y}{8}=-\dfrac{\dfrac{dT(t)}{dt}}{T(t)}=\dfrac{\alpha^2((g(v))^2-1)}{8}\end{cases}$</p>
<p>$\begin{cases}\dfrac{d^2X(x)}{dx^2}-\dfrac{dX(x)}{dx}+\dfrac{4(f(u))^2+1}{4}X(x)=0\\\begin{cases}y^2\dfrac{d^2Y(y)}{dy^2}-\biggl(\dfrac{(4(f(u))^2+1)y}{4\alpha^2}+\dfrac{(g(v))^2-1}{4}\biggr)Y(y)=0\\\dfrac{\dfrac{dT(t)}{dt}}{T(t)}=-\dfrac{\alpha^2((g(v))^2-1)}{8}\end{cases}\end{cases}$</p>
<p>According to <a href="http://eqworld.ipmnet.ru/en/solutions/ode/ode0215.pdf" rel="nofollow">http://eqworld.ipmnet.ru/en/solutions/ode/ode0215.pdf</a>,</p>
<p>$\begin{cases}X(x)=\begin{cases}c_1(u)e^{\frac{x}{2}}\sin(xf(u))+c_2(u)e^{\frac{x}{2}}\cos(xf(u))&\text{when}~f(u)\neq0\\c_1xe^{\frac{x}{2}}+c_2e^{\frac{x}{2}}&\text{when}~f(u)=0\end{cases}\\\begin{cases}Y(y)=\begin{cases}c_3(u,v)\sqrt{y}I_{g(v)}\biggl(\dfrac{\sqrt{(4(f(u))^2+1)y}}{\alpha}\biggr)+c_4(u,v)\sqrt{y}I_{-g(v)}\biggl(\dfrac{\sqrt{(4(f(u))^2+1)y}}{\alpha}\biggr)&\text{when}~g(v)~\text{is not an integer}\\c_3(u,v)\sqrt{y}I_{g(v)}\biggl(\dfrac{\sqrt{(4(f(u))^2+1)y}}{\alpha}\biggr)+c_4(u,v)\sqrt{y}K_{g(v)}\biggl(\dfrac{\sqrt{(4(f(u))^2+1)y}}{\alpha}\biggr)&\text{when}~g(v)~\text{is an integer}\end{cases}\\T(t)=c_5(v)e^{-\frac{\alpha^2t((g(v))^2-1)}{8}}\end{cases}\end{cases}$</p>
<p>Case $3$ : $\alpha\neq0$ and $\beta\neq0$</p>
<p>Then $\dfrac{\partial\Phi}{\partial t}-\dfrac{y}{2}\dfrac{\partial\Phi}{\partial x}+\alpha\beta y^{\frac{3}{2}}\dfrac{\partial^2\Phi}{\partial x\partial y}+\dfrac{y}{2}\dfrac{\partial^2\Phi}{\partial x^2}+\dfrac{\alpha^2y^2}{2}\dfrac{\partial^2\Phi}{\partial y^2}=0$</p>
<p>Let $\Phi(x,y,t)=F(x,y)T(t)$ ,</p>
<p>Then $F(x,y)\dfrac{\partial T(t)}{\partial t}-\dfrac{yT(t)}{2}\dfrac{\partial F(x,y)}{\partial x}+\alpha\beta y^{\frac{3}{2}}T(t)\dfrac{\partial^2F(x,y)}{\partial x\partial y}+\dfrac{yT(t)}{2}\dfrac{\partial^2F(x,y)}{\partial x^2}+\dfrac{\alpha^2y^2T(t)}{2}\dfrac{\partial^2F(x,y)}{\partial y^2}=0$</p>
<p>$\biggl(\dfrac{y}{2}\dfrac{\partial^2F(x,y)}{\partial x^2}+\alpha\beta y^{\frac{3}{2}}\dfrac{\partial^2F(x,y)}{\partial x\partial y}+\dfrac{\alpha^2y^2}{2}\dfrac{\partial^2F(x,y)}{\partial y^2}-\dfrac{y}{2}\dfrac{\partial F(x,y)}{\partial x}\biggr)T(t)=-F(x,y)\dfrac{\partial T(t)}{\partial t}$</p>
<p>$\dfrac{\dfrac{y}{2}\dfrac{\partial^2F(x,y)}{\partial x^2}+\alpha\beta y^{\frac{3}{2}}\dfrac{\partial^2F(x,y)}{\partial x\partial y}+\dfrac{\alpha^2y^2}{2}\dfrac{\partial^2F(x,y)}{\partial y^2}-\dfrac{y}{2}\dfrac{\partial F(x,y)}{\partial x}}{F(x,y)}=-\dfrac{\dfrac{\partial T(t)}{\partial t}}{T(t)}=-f(u)$</p>
<p>$\begin{cases}\dfrac{y}{2}\dfrac{\partial^2F(x,y)}{\partial x^2}+\alpha\beta y^{\frac{3}{2}}\dfrac{\partial^2F(x,y)}{\partial x\partial y}+\dfrac{\alpha^2y^2}{2}\dfrac{\partial^2F(x,y)}{\partial y^2}-\dfrac{y}{2}\dfrac{\partial F(x,y)}{\partial x}+f(u)F(x,y)=0\\\dfrac{\dfrac{\partial T(t)}{\partial t}}{T(t)}=f(u)\end{cases}$</p>
<p>Let $F(x,y)=\int_ve^{xg(v)}K(v,y)~dv$ or $\sum\limits_ve^{xg(v)}K(v,y)$ ,</p>
<p>Then $\begin{cases}\dfrac{y}{2}\int_v(g(v))^2e^{xg(v)}K(v,y)~dv+\alpha\beta y^{\frac{3}{2}}\int_vg(v)e^{xg(v)}\dfrac{\partial K(v,y)}{\partial y}dv+\dfrac{\alpha^2y^2}{2}\int_ve^{xg(v)}\dfrac{\partial^2K(v,y)}{\partial y^2}dv-\dfrac{y}{2}\int_vg(v)e^{xg(v)}K(v,y)~dv+f(u)\int_ve^{xg(v)}K(v,y)~dv=0~\text{or}\dfrac{y}{2}\sum\limits_v(g(v))^2e^{xg(v)}K(v,y)+\alpha\beta y^{\frac{3}{2}}\sum\limits_vg(v)e^{xg(v)}\dfrac{\partial K(v,y)}{\partial y}+\dfrac{\alpha^2y^2}{2}\sum\limits_ve^{xg(v)}\dfrac{\partial^2K(v,y)}{\partial y^2}-\dfrac{y}{2}\sum\limits_vg(v)e^{xg(v)}K(v,y)+f(u)\sum\limits_ve^{xg(v)}K(v,y)=0\\T(t)=c(u)e^{tf(u)}\end{cases}$</p>
<p>$\begin{cases}\int_ve^{xg(v)}\biggl(\dfrac{\alpha^2y^2}{2}\dfrac{\partial^2K(v,y)}{\partial y^2}+\alpha\beta g(v)y^{\frac{3}{2}}\dfrac{\partial K(v,y)}{\partial y}+\dfrac{((g(v))^2-g(v))y+2f(u)}{2}K(v,y)\biggr)dv=0~\text{or}\sum\limits_ve^{xg(v)}\biggl(\dfrac{\alpha^2y^2}{2}\dfrac{\partial^2K(v,y)}{\partial y^2}+\alpha\beta g(v)y^{\frac{3}{2}}\dfrac{\partial K(v,y)}{\partial y}+\dfrac{((g(v))^2-g(v))y+2f(u)}{2}K(v,y)\biggr)=0\\T(t)=c(u)e^{tf(u)}\end{cases}$</p>
<p>$\therefore\begin{cases}F(x,y)=\int_ve^{xg(v)}K(v,y)~dv~\text{or}~\sum\limits_ve^{xg(v)}K(v,y)\\\dfrac{\alpha^2y^2}{2}\dfrac{\partial^2K(v,y)}{\partial y^2}+\alpha\beta g(v)y^{\frac{3}{2}}\dfrac{\partial K(v,y)}{\partial y}+\dfrac{((g(v))^2-g(v))y+2f(u)}{2}K(v,y)=0\\T(t)=c(u)e^{tf(u)}\end{cases}$</p>
<p>But $\dfrac{\alpha^2y^2}{2}\dfrac{\partial^2K(v,y)}{\partial y^2}+\alpha\beta g(v)y^{\frac{3}{2}}\dfrac{\partial K(v,y)}{\partial y}+\dfrac{((g(v))^2-g(v))y+2f(u)}{2}K(v,y)=0$ is very difficult to solve.</p>
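As a small sanity check, the simplest Case 1 building blocks $e^{(4x+yt)/8}$ and $xe^{(4x+yt)/8}$ can be tested numerically against the $\alpha=0$ equation $\Phi_t-\frac{y}{2}\Phi_x+\frac{y}{2}\Phi_{xx}=0$ (constants and evaluation point are arbitrary choices of mine):

```python
import math

def make_phi(c1, c2):
    # c1*x*exp((4x+yt)/8) + c2*exp((4x+yt)/8), the f(u) = 0 piece of Case 1.
    def phi(x, y, t):
        e = math.exp((4*x + y*t) / 8)
        return c1 * x * e + c2 * e
    return phi

h = 1e-4
def d1(f, i, p):                       # central first difference in coordinate i
    q = list(p); q[i] += h; a = f(*q)
    q[i] -= 2 * h; b = f(*q)
    return (a - b) / (2 * h)

phi = make_phi(0.6, 1.4)
p = (0.2, 1.3, 0.7)                    # (x, y, t)
y = p[1]
res = (d1(phi, 2, p) - y/2 * d1(phi, 0, p)
       + y/2 * d1(lambda *q: d1(phi, 0, q), 0, p))
print(abs(res))                        # ~0 up to discretization error
```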
|
2,716,036 |
<p>In reviewing some old homework assignments, I found two problems that I really do not understand, despite the fact that I have the answers.</p>
<p>The first is: R(x, y) if y = 2^d * x for some nonnegative integer d. What I do not understand about this relation is how it can possibly be transitive (according to my notes it is). My understanding is that if the relation were transitive, the following would apply: if y = 2^d * x and x = 2^d * z, then y = 2^d * z. That seems impossible unless x = z. Am I missing something?</p>
<p>The second is: R(x, y) if x and y are both divisible by 17. What I do not understand about this relation is why it is not reflexive. My understanding is that if the relation is reflexive, if x is divisible by 17 then both x and x are divisible by 17. I think that I am possibly applying the quality of reflexiveness incorrectly to this relation, but I am not quite sure.</p>
<p>Thank you for any help in correcting these misunderstandings!</p>
|
Thomas Bakx
| 545,960 |
<p>Recall that the first relation has to hold for $\textit{some}$ integer $d$. Can you find it, given the ones that relate x to y and y to z? </p>
<p>For the second: x is related to x if and only if x and x are divisible by 17, which is the same as x being divisible by 17. This need not be true, right? </p>
<p>Hope this helps</p>
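Both points can be sanity-checked by brute force over a small range (the cutoff `dmax` is an artificial bound for illustration):

```python
def r1(x, y, dmax=20):
    # R(x, y): y = 2**d * x for some nonnegative integer d
    return any(y == 2**d * x for d in range(dmax))

vals = range(1, 33)
# Transitivity: if y = 2**d1 * x and z = 2**d2 * y, then z = 2**(d1+d2) * x.
transitive = all(r1(x, z)
                 for x in vals for y in vals for z in vals
                 if r1(x, y) and r1(y, z))
print(transitive)                     # True

def r2(x, y):
    # R(x, y): x and y are both divisible by 17
    return x % 17 == 0 and y % 17 == 0

print(all(r2(x, x) for x in vals))    # False: e.g. r2(1, 1) fails
```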
|
1,187,706 |
<p>I'd imagine this is a duplicate question, but I can't find it:</p>
<p>How many quadratic residues are there $\pmod{2^n}$. </p>
<p>I tried small $n$: $n=1$: 2; $n=2$: 2; $n=3$: 3; $n=4$: 4; $n=5$: not 5 (the residues include 0, 1, 4, 9, 16, 25)</p>
<p>No pattern :/</p>
|
Jef
| 188,361 |
<p>There is a basic solution that only uses modular arithmetic.
Let $p(n)$ be the number of quadratic residues modulo $2^n$. We know that $p(1) = p(2) = 2$. Now suppose $n \geq 3$. We will prove 3 small lemmas.</p>
<p><strong>Lemma 1.</strong> It suffices to consider the residues of the numbers $0,1,...,2^{n-2}$</p>
<p><strong>Proof.</strong> Indeed, the following relations hold:
$$(2^n-x)^2 \equiv x^2 \mod 2^n, \quad (2^{n-1}-x)^2 \equiv x^2 \mod 2^n $$
So every square can be reduced to the square of one of the numbers $0,1,...2^{n-2}$</p>
<p><strong>Lemma 2.</strong> If $x,y$ are distinct odd integers and $1 \leq x,y \leq 2^{n-2}-1$, then $x^2\not\equiv y^2 \mod 2^n$.</p>
<p><strong>Proof.</strong> Suppose that $2^n \mid x^2-y^2 = (x-y)(x+y)$. Now $\text{gcd}(x+y,x-y)$ is divisible by $2$ but not by $4$ (because it divides $2x = (x+y)+(x-y)$ and $x$ is odd). This implies that one of the two factors is divisible by $2$ but not $4$, and the other one is divisible by $2^{n-1}$. But $x+y \leq 2^{n-1}-2$, so $x-y$ should be divisible by $2^{n-1}$. We conclude that $x = y$.</p>
<p>These two lemmas imply that there are exactly $2^{n-3}$ odd quadratic residues modulo $2^n$. Now we look at the even residues:</p>
<p><strong>Lemma 3.</strong> There are $p(n-2)$ even quadratic residues mod $2^n$.</p>
<p><strong>Proof.</strong> Indeed, if $x = 2s$ and $y = 2t$, then $x^2 \equiv y^2 \mod 2^n$ if and only if $s^2\equiv t^2 \mod 2^{n-2}$. This gives the result immediately.</p>
<p>We conclude that $p(n) = p(n-2)+2^{n-3}$. From this recurrence relation it is quite easy to prove by induction that </p>
<ul>
<li>$p(n) = \frac{2^{n-1}+5}{3}$ if $n$ is odd</li>
<li>$p(n) = \frac{2^{n-1}+4}{3}$ if $n$ is even</li>
</ul>
<p>If you like to put everything together, we get the following result:</p>
<blockquote>
<p>There are $$\frac{2^{n}+9+(-1)^{n-1}}{6}$$ quadratic residues modulo $2^n$</p>
</blockquote>
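The counts are cheap to brute-force; note that the recurrence $p(n)=p(n-2)+2^{n-3}$ with $p(1)=p(2)=2$ combines both parities into the single closed form $(2^n+9+(-1)^{n-1})/6$:

```python
def num_quadratic_residues(n):
    # Count distinct values of x^2 mod 2^n (0 is counted as a residue).
    m = 2**n
    return len({x * x % m for x in range(m)})

counts = [num_quadratic_residues(n) for n in range(1, 13)]
formula = [(2**n + 9 + (-1)**(n - 1)) // 6 for n in range(1, 13)]
print(counts)               # [2, 2, 3, 4, 7, 12, 23, 44, ...]
print(counts == formula)    # True
```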
|