qid | question | author | author_id | answer
---|---|---|---|---|
2,252,256 |
<p>The Chernoff bound, being a sharp quantitative version of the law of large numbers, is incredibly useful in many contexts. Some general applications that come to mind (which I guess are really the same idea) are:</p>
<ul>
<li>bounding the sample complexity of PAC algorithms;</li>
<li>estimating confidence intervals for polling (somewhat surprisingly, the Chernoff bound tells you that if you want to poll a population of $N$ people, the number $N$ doesn't really matter for the tradeoff between the number of randomly sampled people and the accuracy of the empirical average);</li>
<li>more generally, very often in the analysis of randomized algorithms you need to argue that you have 'enough' samples, and Chernoff bounds are the way to go.</li>
</ul>
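<p>The polling point is easy to see empirically. The following is an illustrative sketch (the function name and parameters are mine): the average polling error depends on the sample size $m$, not the population size $N$.</p>

```python
import random

def poll_error(N, m, p=0.3, trials=200):
    """Average absolute error of an m-person poll of an N-person population."""
    population = [1] * int(p * N) + [0] * (N - int(p * N))
    errs = []
    for _ in range(trials):
        sample = random.sample(population, m)  # poll m people without replacement
        errs.append(abs(sum(sample) / m - p))
    return sum(errs) / trials

random.seed(0)
# same sample size m = 500, populations differing by 10x:
small_pop = poll_error(10_000, 500)
large_pop = poll_error(100_000, 500)
```

<p>Both errors come out essentially equal, in line with the Chernoff/Hoeffding prediction that the error scales like $\sqrt{p(1-p)/m}$ independently of $N$.</p>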
<p>By now I feel like I have a good intuitive grasp of the power and limitations of Chernoff bounds. Basically, my question is about getting a similar understanding of matrix Chernoff bounds:</p>
<ul>
<li>How do I obtain a similar palette of 'classical' applications of matrix Chernoff bounds?</li>
<li>What are some of the nice proofs they give us?</li>
<li>Have they substantially simplified previous work that didn't explicitly use them?</li>
</ul>
|
amakelov
| 286,977 |
<p>To answer my question a bit, I've recently learned that Kyng and Sachdeva have used some matrix concentration inequalities to give a very simple Laplacian solver algorithm; paper <a href="https://arxiv.org/pdf/1605.02353.pdf" rel="nofollow noreferrer">here</a>.</p>
|
87,091 |
<p>Find $C$ such that</p>
<p>$$Ce^{(-4x^2-xy-4y^2)/2}$$</p>
<p>is a joint probability density of a $2$-variable Gaussian.</p>
<p>If someone could give me a jumping off point, or a process as to how to go about this, I'd really appreciate it.</p>
|
DonAntonio
| 31,254 |
<p>For some reason the OP won't post, or can't post, an answer, so summarizing the comments:</p>
<p>$(1)\,\,\forall\,g,x,y\in G\,$ , and putting $\,a^b:=b^{-1}ab\,\,,\,a,b\in G\,$:
$$[x,y]^g:=g^{-1}[x,y]g:=g^{-1}x^{-1}y^{-1}xy g=\left(x^{-1}\right)^g\left(y^{-1}\right)^gx^gy^g=[x^g,y^g]\in G'\Longrightarrow G'\triangleleft G$$and thus the quotient $\,G/G'\,$ is a group.</p>
<p>$(2)\,\,$ Let now $\,N\,$ be any normal subgroup of $\,G\,$ s.t. $\,G/N\,$ is abelian, then:
$$\forall\,x,y\in G,\,\,xNyN=yNxN\Longleftrightarrow xyN=yxN \Longleftrightarrow (yx)^{-1}xy\in N \Longleftrightarrow [x,y] \in N$$
and since $\,G':=\langle\,[x,y]\;:\;x,y\in G\,\rangle\,$ , then $\,G'\leq N\,\Longrightarrow \,G'$ is the minimal (normal) subgroup of $\,G\,$ s.t. its quotient is abelian ("minimal" w.r.t. set inclusion).</p>
<p><strong>Exercise:</strong> Explain the parentheses around "normal" above, i.e. show that any subgroup of $\,G\,$ containing the commutator subgroup is normal.</p>
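<p>As a small computational sanity check of $(1)$, here is an illustrative sketch (assuming sympy's combinatorics module is available) verifying on $S_3$ that every commutator lies in the derived subgroup:</p>

```python
from sympy.combinatorics import SymmetricGroup

G = SymmetricGroup(3)
Gp = G.derived_subgroup()   # the commutator subgroup G' (here: A_3, order 3)
elems = list(G.generate())

# every commutator [x, y] = x^-1 y^-1 x y lies in G'
assert all(Gp.contains(x**-1 * y**-1 * x * y) for x in elems for y in elems)
```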
|
2,407,183 |
<blockquote>
<p>Calculate $\int_{\lambda} dz/(z^2-1)^2$, where $\lambda$ is the path
in $\mathbb{R^2}-\{1,-1\}$ plotted below:</p>
<p><a href="https://i.stack.imgur.com/T1uFH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T1uFH.png" alt="enter image description here"></a></p>
</blockquote>
<p>This may seem like an ordinary calculus integral, but I'm studying $1$-forms and homotopic paths, and I can't connect all the theory I've been seeing in order to solve this integral in this context. </p>
<p>For example, I know that this is an integral over a closed path. There are theorems relating closed paths with the form being exact, and therefore the integral should be $0$, I guess? However, I'm not integrating a $1$-form... So I wonder what this exercise is asking me to use. Could somebody help me?</p>
|
robjohn
| 13,854 |
<p>Using the contour</p>
<p><a href="https://i.stack.imgur.com/sEDKB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sEDKB.png" alt="enter image description here"></a></p>
<p>contour integration gives
$$
\begin{align}
\overbrace{\color{#C00}{\text{PV}\int_0^\infty\frac{e^{i\pi x}}{\log(x)}\,\mathrm{d}x}}^{\substack{\text{integral along the line}\\\text{minus an infinitesimal}\\\text{interval centered at $1$}}}+\overbrace{\vphantom{\int_0^\infty}\ \ \ \ \ \color{#00F}{\pi i}\ \ \ \ \ }^{\substack{\text{integral along}\\\text{an infinitesimal}\\\text{semicircular arc}\\\text{centered at $1$}}}
&=\overbrace{\color{#090}{\int_0^\infty\frac{e^{-\pi x}}{\frac\pi2-i\log(x)}\,\mathrm{d}x}}^{\substack{\text{integral along the}\\\text{positive imaginary axis}}}\\
&=\int_0^\infty\frac{e^{-\pi x}\left(\frac\pi2+i\log(x)\right)}{\frac{\pi^2}4+\log(x)^2}\,\mathrm{d}x\tag{1}
\end{align}
$$
since there are no singularities inside the contour and the integral along the black arc vanishes as the radius tends to $\infty$.</p>
<p>Taking the imaginary part of $(1)$, we get an integral that is far easier to evaluate numerically:
$$
\begin{align}
\int_0^\infty\frac{\sin(\pi x)}{\log(x)}\,\mathrm{d}x
&=-\pi+\int_0^\infty\frac{e^{-\pi x}\log(x)}{\frac{\pi^2}4+\log(x)^2}\,\mathrm{d}x\\[6pt]
&\doteq-3.2191900386476588051\tag{2}
\end{align}
$$</p>
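<p>As a numerical sanity check, the right-hand side of $(2)$ can be evaluated with standard quadrature. An illustrative sketch, assuming scipy is available (the integrand is split at $x=1$, where $\log(x)$ changes sign):</p>

```python
import numpy as np
from scipy.integrate import quad

# integrand of the transformed integral in (2)
f = lambda x: np.exp(-np.pi * x) * np.log(x) / (np.pi**2 / 4 + np.log(x)**2)

val = -np.pi + quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
# val should match the value quoted in (2)
```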
|
4,315,844 |
<p>Find the greatest number <span class="math-container">$k$</span> such that there exists a perfect square that is not a multiple of 10 with its last <span class="math-container">$k$</span> digits the same.</p>
<p>I could find <span class="math-container">$12^2 = 144$</span>, <span class="math-container">$38^2 = 1444$</span>, <span class="math-container">$62^2 = 3844$</span> and <span class="math-container">$88^2 = 7744$</span>.</p>
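<p>A quick brute-force search (an illustrative sketch; the helper name is mine) extends the enumeration above and shows that, at least in this range, no run longer than the <span class="math-container">$\ldots444$</span> pattern appears:</p>

```python
def trailing_run(n):
    """Length of the run of equal digits at the end of n."""
    s = str(n)
    k = 1
    while k < len(s) and s[-1 - k] == s[-1]:
        k += 1
    return k

# longest run of equal final digits over squares m^2 not divisible by 10
best = max(trailing_run(m * m) for m in range(2, 20000) if (m * m) % 10 != 0)
# best == 3, attained e.g. by 38^2 = 1444
```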
|
Callum
| 416,266 |
<p>This is definitely not true. An important example of this is the Cartan decomposition of a semisimple Lie Algebra. These are decompositions of the form <span class="math-container">$\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$</span> where <span class="math-container">$\mathfrak{k}$</span> is a maximal compact subalgebra (in other words a maximal subalgebra with negative definite Killing form).
Then <span class="math-container">$\mathfrak{p}$</span> is very far from being a subalgebra. In fact, <span class="math-container">$[\mathfrak{p},\mathfrak{p}] \subset \mathfrak{k}$</span>.</p>
<p>Note this works for any symmetric decomposition too.</p>
<p>For a simple Lie algebra the Killing form is the only ad-invariant, nondegenerate symmetric bilinear form up to scale. So unless the Lie algebra is compact there is no definite (positive or negative) form with those properties. Thus, "orthogonal complement" is a bit of a dodgy concept. The orthogonal subspace may overlap with the original so it can't be called complementary in general (the Cartan decomposition is an important exception).</p>
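<p>The <span class="math-container">$[\mathfrak{p},\mathfrak{p}] \subset \mathfrak{k}$</span> claim is easy to check numerically for <span class="math-container">$\mathfrak{sl}(2,\mathbb{R})$</span>, where <span class="math-container">$\mathfrak{k} = \mathfrak{so}(2)$</span> (antisymmetric matrices) and <span class="math-container">$\mathfrak{p}$</span> is the symmetric traceless matrices. An illustrative numpy sketch:</p>

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

# basis of p: symmetric traceless 2x2 matrices
p1 = np.array([[1., 0.], [0., -1.]])
p2 = np.array([[0., 1.], [1., 0.]])

c = comm(p1, p2)
# c is antisymmetric, i.e. it lies in k = so(2), not back in p
```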
|
117,836 |
<p>I'm not the best at math (but eager to learn), so please excuse me if I'm not explaining this problem correctly; I will try to add as much info as I can to make it clear. I basically receive 2 pieces of data: one is a list of integers and the other is a target_sum, and I want to figure out all the ways I can use the list to equal the target sum. So for a list of <code>[1,2,4]</code> with a target_sum of <code>10</code>, I would get:</p>
<pre><code>2 * 4 + 1 * 2 + 0 * 1
2 * 4 + 0 * 2 + 2 * 1
1 * 4 + 3 * 2 + 0 * 1
1 * 4 + 2 * 2 + 2 * 1
1 * 4 + 1 * 2 + 4 * 1
1 * 4 + 0 * 2 + 6 * 1
0 * 4 + 5 * 2 + 0 * 1
0 * 4 + 4 * 2 + 2 * 1
0 * 4 + 3 * 2 + 4 * 1
0 * 4 + 2 * 2 + 6 * 1
0 * 4 + 1 * 2 + 8 * 1
0 * 4 + 0 * 2 + 10 * 1
</code></pre>
<p>The current algorithm I'm using has two parts: one builds a lookup table of which sums are possible, and the other builds the actual combinations.
Table building:</p>
<pre><code>for i = 1 to k:
    for z = 0 to sum:
        for c = 1 to z / x_i:
            if T[z - c * x_i][i - 1] is true:
                set T[z][i] to true
</code></pre>
<p>Possibility construction:</p>
<pre><code>function RecursivelyListAllThatWork(k, sum) // Using last k variables, make sum
    /* Base case: If we've assigned all the variables correctly, list this
     * solution.
     */
    if k == 0:
        print what we have so far
        return
    /* Recursive step: Try all coefficients, but only if they work. */
    for c = 0 to sum / x_k:
        if T[sum - c * x_k][k - 1] is true:
            mark the coefficient of x_k to be c
            call RecursivelyListAllThatWork(k - 1, sum - c * x_k)
            unmark the coefficient of x_k
</code></pre>
<p>This is the basic idea; my actual code is slightly different because I am using bounds to remove the possibility of infinite values (I say a single value cannot exceed the value of the sum). </p>
<p>The problem is, the table-building part does not scale. It is flawed in at least two ways: one is that it depends on the previous number being completed (thus I cannot break it up and run it individually for each number), and the second is that it requires reading the table before writing to it (I am learning how to get around this technically, but currently it makes the program very slow).</p>
<p>Is there a more efficient way to do this that scales?</p>
<p>Here's an approach I tried to take but failed(so far):</p>
<pre><code>create a large table full of all possible values z up to target_sum
create another large table of T[z - c * x_i][i - 1] and compare if the values exist
if they do exist, add T[z][i] to a third table that contains the correct master
</code></pre>
<p>I don't need code just the logic(if this is possible). If it helps you(as it often helps me understand) here is some python code with my approach/examples:</p>
<pre><code>#data = [-2,10,5,50,20,25,40]
#target_sum = 100
data = [1,2,3,4,5,6,7,8,9,10]
target_sum = 10

# T[x, i] is True if 'x' can be solved
# by a linear combination of data[:i+1]
T = []                # all values are False by default
T.append([0, 0])      # base case
R = 200               # maximum size of any partial sum
max_percent = 0.3     # maximum weight of any term

for i, x in enumerate(data):      # i is index, x is data[i]
    for s in range(-R, R + 1):    # range is one higher than sum to include sum itself
        max_value = int(abs((target_sum * max_percent) / x))
        for c in range(max_value + 1):
            if [s - c * x, i] in T:
                T.append([s, i + 1])

coeff = [0] * len(data)

def RecursivelyListAllThatWork(k, sum):  # Using last k variables, make sum
    # Base case: If we've assigned all the variables correctly, list this
    # solution.
    if k == 0:
        # print what we have so far
        print(' + '.join("%2s*%s" % t for t in zip(coeff, data)))
        return
    x_k = data[k - 1]
    # Recursive step: Try all coefficients, but only if they work.
    max_value = int(abs((target_sum * max_percent) / x_k))
    for c in range(max_value + 1):
        if [sum - c * x_k, k - 1] in T:
            # mark the coefficient of x_k to be c
            coeff[k - 1] = c
            RecursivelyListAllThatWork(k - 1, sum - c * x_k)
            # unmark the coefficient of x_k
            coeff[k - 1] = 0

RecursivelyListAllThatWork(len(data), target_sum)
</code></pre>
<p>Any help or suggestions would be appreciated. I have worked on this for a long time and all my experiments have failed. I'm hoping to get the correct answer but even ideas of different approaches would be great so I can experiment with them.</p>
<p>Thank you.</p>
<p>p.s. I asked a question on Stack Overflow 2 days ago about improving my existing algorithm, but I got answers from posters who admitted to not fully understanding what I was asking for, and because they answered the question, I am unable to delete it to post here. I have flagged it for deletion. </p>
<p>Update: Regarding some of the comments: I'm not looking for a fast way of doing this (although that would be nice), I'm looking for a scalable way. My method works, but each loop is dependent on the last loop, which causes it to be bound to a single process. The math is in such a way that it builds upon previous results. If I can somehow break the process up into independent parts, then I can use more CPUs/computers to handle the work. I know it'll take a long time, but if it takes 600 hours on one CPU, then two should cut it down a bit, and so on. Right now I can't use other computers, so I'm forced to wait 600 hours (while everything else on the system is idle). Please help!</p>
<p>Also the results are large but not infinite as I have bounds set so the number cannot exceed a certain percent of target_sum. </p>
|
Henry
| 6,460 |
<p>I assume that you are trying to generate partitions rather than the simpler task of counting how many there are. </p>
<p>What you need to do is generate smaller partitions with part of the data and then add the next piece of data up to the target. There is a cost here as you are also building all partitions from the data that add up to less than the target. </p>
<p>The following is in R code, and not very efficient at that, but should be reasonably clear for translation into other languages. It will not work for negative or zero data; it will work for the data in any order.</p>
<pre><code>data <- c(4,2,1)
target_sum <- 10
partitions <- as.data.frame( matrix( rep(0,length(data)+1), nrow=1 ) )
names(partitions) <- c(data,"total")
for (i in 1:length(data)) {
  for (j in data[i]:target_sum) {
    newpartitions <- partitions[partitions$total == j - data[i], ]
    if (nrow(newpartitions) > 0) {
      newpartitions[ ,i] <- newpartitions[ ,i] + 1
      newpartitions$total <- j
      partitions <- rbind(partitions, newpartitions)
    }
  }
}
result <- partitions[partitions$total == target_sum, ]
rownames(result) <- NULL
result
</code></pre>
<p>This particular version produces </p>
<pre><code> 4 2 1 total
1 2 1 0 10
2 1 3 0 10
3 0 5 0 10
4 2 0 2 10
5 1 2 2 10
6 0 4 2 10
7 1 1 4 10
8 0 3 4 10
9 1 0 6 10
10 0 2 6 10
11 0 1 8 10
12 0 0 10 10
</code></pre>
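<p>For readers more comfortable with Python, here is an illustrative line-by-line transcription of the same idea (function name mine; like the R code above, it assumes positive data):</p>

```python
def all_partitions(data, target_sum):
    """Return coefficient tuples c with sum(c[i] * data[i]) == target_sum."""
    # each entry is (coefficients, running total), starting from the empty sum
    parts = [((0,) * len(data), 0)]
    for i, x in enumerate(data):
        for j in range(x, target_sum + 1):
            # extend every partial sum sitting at j - x by one more copy of x
            new = []
            for coeffs, total in parts:
                if total == j - x:
                    c = list(coeffs)
                    c[i] += 1
                    new.append((tuple(c), j))
            parts.extend(new)
    return [c for c, t in parts if t == target_sum]

result = all_partitions([4, 2, 1], 10)
# twelve coefficient vectors, matching the table above
```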
|
887,282 |
<p>I was given a question to find a greatest possible length to measure $495$, $900$,$1665$ (in centimetres). </p>
<p>The solution is finding GCF or GCD or HCF (highest common factor) of these numbers which is $45$.</p>
<p>How does this work in real life? Please explain in layman's terms.</p>
<p>Thank you all.</p>
|
DSinghvi
| 148,018 |
<p>Obviously the question has imprecise wording. It assumes that whatever measurement you get must be a whole number; if you take the HCF, each length is a whole-number multiple of it, so you can use it to measure every length.</p>
|
887,282 |
<p>I was given a question to find a greatest possible length to measure $495$, $900$,$1665$ (in centimetres). </p>
<p>The solution is finding GCF or GCD or HCF (highest common factor) of these numbers which is $45$.</p>
<p>How does this work in real life? Please explain in layman's terms.</p>
<p>Thank you all.</p>
|
Michael Albanese
| 39,599 |
<p>The way that we measure lengths is to have a standard unit of measurement and then count how many of these units a given length is. Common standard units include millimetre, centimetre, metre, kilometre, inch, foot, mile, lightyear, astronomical unit, etc. </p>
<p><strong>Example:</strong> An Olympic size swimming pool is $50$ metres long; this means that if I had $50$ one-metre long pieces of wood, I could lay them end to end and they would stretch from one end of the pool to the other. Note, if I had chosen a unit other than metre, the length may not be as nice. As $50$ metres is approximately $164.042$ feet, we would need $164$ one-foot long pieces of wood, and another small piece to reach from one end of the pool to the other.</p>
<p>In the above example, choosing the unit to be a metre was much better than choosing the unit to be a foot, as the former led to a whole number of units (or pieces of wood), while the latter did not. What the question is asking for is the longest possible unit so that all three lengths can be expressed as a whole number of units.</p>
<p>Suppose the unit is $U$ centimetres long. If we want to express a length as a whole number of units, then its length in centimetres must be a multiple of $U$; to see this, note that being $k$ units long is precisely saying that it is $kU$ centimetres long. As we want all three of $495$, $900$, and $1665$ to be expressed as a whole number of units, we require all three of them to be multiples of $U$; i.e. $U$ is a common divisor of the three numbers. As the question asks for the greatest possible length (of unit) we could use to measure the three given lengths, we see that $U$ must in fact be the greatest common divisor of the three numbers. Therefore</p>
<p>$$U = \operatorname{gcd}(495, 900, 1665) = 45.$$</p>
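<p>In code, this computation is a one-liner (an illustrative Python sketch):</p>

```python
from functools import reduce
from math import gcd

lengths = [495, 900, 1665]
U = reduce(gcd, lengths)            # greatest common divisor of all three lengths
counts = [L // U for L in lengths]  # each length is a whole number of units
# U == 45, counts == [11, 20, 37]
```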
|
3,752,948 |
<p>Find <span class="math-container">$\displaystyle \lim_{x \to\frac{\pi} {2}} \{1^{\sec^2 x} + 2^{\sec^2 x} + \cdots + n^{\sec^2 x}\}^{\cos^2 x} $</span><br />
I tried to do it like this:
Let <span class="math-container">$A=\displaystyle \lim_{x \to\frac{\pi}{2}} \{1^{\sec^2 x} + 2^{\sec^2 x} + \cdots + n^{\sec^2 x}\}^{\cos^2 x} $</span></p>
<p>Then <span class="math-container">$\ln A=\lim_{x \to\frac{\pi} {2}} \cos^2 x \ln\{1^{\sec^2 x} + 2^{\sec^2 x} + \cdots + n^{\sec^2 x}\}$</span><br />
which implies
<span class="math-container">$$\ln A = \lim_{x \to\frac {\pi}{2}} \frac{\ln\{1^{\sec^2 x} + 2^{\sec^2 x}+ \cdots + n^{\sec^2 x}\}}{\sec^2 x}$$</span>
I'm stuck here. If I am on the right track, please guide me to the conclusion; otherwise, please describe the correct approach. Thanks in advance.</p>
|
Felix Marin
| 85,343 |
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<span class="math-container">\begin{align}
&\bbox[15px,#ffd]{\lim_{x \to \pi/2}\bracks{\large1^{\sec^{2}\pars{x}} + 2^{\sec^{2}\pars{x}} +
\cdots + n^{\sec^{2}\pars{x}}}^{\large\cos^{2}\pars{x}}}
\\[5mm] = &\
\lim_{x \to \infty}\pars{\sum_{k = 1}^{n}k^{x}}^{1/x} =
\exp\pars{\lim_{x \to \infty}{\ln\pars{\sum_{k = 1}^{n}k^{x}} \over x}}
\\[5mm] = &\
\exp\pars{\lim_{x \to \infty}{\sum_{k = 1}^{n}k^{x}\ln\pars{k} \over
\sum_{k = 1}^{n}k^{x}}}
\\[5mm] = &\
\exp\pars{\lim_{x \to \infty}
{\sum_{k = 1}^{n - 1}\pars{k/n}^{x}\ln\pars{k} + \ln\pars{n} \over
\sum_{k = 1}^{n - 1}\pars{k/n}^{x} + 1}} =
\expo{\ln\pars{n}} =
\bbox[15px,#ffd,border:1px solid navy]{n}
\end{align}</span></p>
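<p>A quick numerical check of the final answer <span class="math-container">$n$</span>, using the substituted form <span class="math-container">$\left(\sum_{k=1}^{n} k^{x}\right)^{1/x}$</span> as <span class="math-container">$x \to \infty$</span> (an illustrative Python sketch; logs are used to avoid overflow):</p>

```python
from math import exp, log

n = 5
# evaluate (1^x + 2^x + ... + n^x)^(1/x) for growing x
approx = [exp(log(sum(k**x for k in range(1, n + 1))) / x) for x in (10, 100, 1000)]
# the values approach n = 5
```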
|
1,761,775 |
<p>Suppose we have angular momentum operators $L_1,L_2,L_3$ which satisfy $[L_1,L_2]=iL_3$, $[L_2,L_3]=iL_1$ and $[L_3,L_1]=iL_2$. We can show that the operator $L^2:=L_1^2+L_2^2+L_3^2$ commutes with $L_1,L_2$ and $L_3$. Now define $L_{\pm}=L_1\pm iL_2$ and then we can also show that $$L_+L_-=L^2-L_3(L_3-I)$$ and $$L_-L_+=L^2-L_3(L_3+I)$$
If now we have a vector $v$ in our Hilbert space $\mathcal{H}$ such that $L^2v=\lambda v$ and $L_3 v=\lambda_3v$. Then it is easy to show using the commutation relations that $L_+v$ is an eigenvector of $L^2$ with eigenvalue $\lambda$ and it is also an eigenvector of $L_3$ with eigenvalue $1+\lambda_3$. Now I have to show that $$||L_+v||^2=|\lambda-\lambda_3(\lambda_3+1)|||v||^2$$</p>
<p>I don't see how this can be deduced from the above calculations. We do know that $$||L_-L_+v||^2=|\lambda-\lambda_3(\lambda_3+1)|^2||v||^2$$
This is true because of the second formula above for $L_-L_+$.</p>
<p>I need a hint for this calculation. Thanks.</p>
|
Dac0
| 291,786 |
<p>You have to work out the commutators, i.e.
$$[L_+,L_-]=2L_3$$
Then consider that
$$L_-L_+=L_+L_--[L_+,L_-]=L_+L_--2L_3$$
so that
$$(L_-L_+)v=(L_+L_-)v-2L_3v$$
I don't know where you should plug this one in, but I'm pretty sure that if you didn't use it already you should use it now.</p>
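<p>For completeness, the standard route (not spelled out in the answer above) is to use that <span class="math-container">$L_+^\dagger = L_-$</span>, since <span class="math-container">$L_1$</span> and <span class="math-container">$L_2$</span> are self-adjoint. Combined with the formula for <span class="math-container">$L_-L_+$</span> from the question, this gives directly</p>
<p><span class="math-container">$$\|L_+v\|^2 = \langle L_+v, L_+v\rangle = \langle v, L_-L_+v\rangle = \langle v, \left(L^2 - L_3(L_3+I)\right)v\rangle = \left(\lambda - \lambda_3(\lambda_3+1)\right)\|v\|^2.$$</span></p>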
|
84,982 |
<p>I am a new professor in Mathematics and I am running an independent study on Diophantine equations with a student of mine. Online I have found a wealth of very helpful expository notes written by other professors, and I would like to use them for guided reading. <strong>I am wondering whether it is customary to ask permission of the author before using his or her online notes for my own reading course.</strong> </p>
<p>Also, if anyone has suggestions for good sources on Diophantine equations, please feel free to enlighten me.</p>
|
Gerhard Paseman
| 3,568 |
<p>Yes, asking permission is the right thing to do. If you don't get it, the right thing to do is to mention the resource to your students, along with the proviso that you do not have permission to copy it, and suggest they do their best not to make or distribute paper or electronic copies beyond viewing it in a browser until after they get permission.</p>
<p>Gerhard "Ask Me About System Design" Paseman, 2012.01.05</p>
|
2,390,215 |
<p>There exists a bijection from $\mathbb N$ to $\mathbb Q$, so $\mathbb Q$ is countable. By the well-ordering principle, $\mathbb N$ has a least member, say $n_1$, which is mapped to something in $\mathbb Q$. Can $\mathbb Q$ be well ordered in this way? Is my argument good? </p>
|
TRUSKI
| 91,784 |
<p>$\mathbb{Q}$ is not well ordered by its usual order. That can be seen very easily: for example, $(\sqrt{2},3)\cap\mathbb{Q}$ has no least element in $\mathbb{Q}$. Comparing with $\mathbb{N}$ is not a good idea because there does not exist an order-preserving bijective map from $\mathbb{N} \to \mathbb{Q}$.</p>
|
234,063 |
<p>There are some answers on how to get a smooth square-wave function, but I would like a smooth boxcar (rectangle) function with 2 different widths: <code>wup</code> and <code>wdown</code>.</p>
<p>One solution is the Fourier Transform, but I prefer having an approximation with a smoothness factor.</p>
<p>Thank you very much.</p>
<p>Here is the non-smooth version I would like to smooth.
<strong>Note: this is not a square-wave function (for which the smooth version is known; see <a href="https://mathematica.stackexchange.com/questions/38293/make-a-differentiable-smooth-sawtooth-waveform">the solution</a>).</strong> This is the boxcar function with 2 widths: the up width is 2 and the down width is 3.</p>
<p><a href="https://i.stack.imgur.com/BHJNK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BHJNK.jpg" alt="enter image description here" /></a></p>
<pre><code>pulse[h_, wup_, wdown_, t_] :=
With[{period = wup + wdown, duty = wup/(wup + wdown)},
h*UnitBox[Mod[t/period, 1.]/(2. duty)]]
Plot[{pulse[1, 2, 3, t]}, {t, 0, 10}, Exclusions -> None]
</code></pre>
<hr />
<p><strong>Square Wave Pulse with Uniform Width:</strong></p>
<p>I can do a smooth version only for a square wave function, with a uniform width, but not the rectangle function.</p>
<pre><code>smoothPulse[ePulse_, wup_, wdown_, smoothness_, t_] :=
ePulse/2 + ePulse/\[Pi]*ArcTan[Sin[\[Pi] t/wup]*smoothness]
</code></pre>
<p><a href="https://i.stack.imgur.com/F6pX4.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F6pX4.jpg" alt="enter image description here" /></a></p>
<hr />
<p><strong>Rectangle Pulse with 2 differents widths:</strong></p>
<p>I don't know the smooth version. I tried to build a smooth UnitBox version.</p>
<pre><code>smoothunitbox[t_, sharpness_] :=
Piecewise[{{Tanh[sharpness*(2 t + 1)] + 1,
t < 0}, {1 - Tanh[sharpness*(2 t - 1)], t >= 0}}]/2
smoothRectangle[h_, wup_, wdown_, smoothness_, t_] :=
With[{period = wup + wdown, duty = wup/(wup + wdown)},
h*smoothunitbox[Mod[t/period, 1.]/(2. duty), smoothness]]
Plot[smoothRectangle[1, 2, 4, 20, t], {t, 0, 10}, Exclusions -> None]
</code></pre>
<p>But the smoothing is not good. How can it be improved?</p>
<p><a href="https://i.stack.imgur.com/Oa0qH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oa0qH.jpg" alt="enter image description here" /></a></p>
|
Ted Ersek
| 460 |
<p>Suppose you want the square wave high 20% of the time. The following helps.</p>
<pre><code>DutyCycle = 0.2;
Plot[Piecewise[{
   {x/(2 DutyCycle), x < DutyCycle},
   {(1 - 2 DutyCycle + x)/(2 - 2 DutyCycle), DutyCycle < 1}
  }], {x, 0, 1}]
</code></pre>
<p><a href="https://i.stack.imgur.com/7Ht2V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Ht2V.png" alt="piecewise linear plot" /></a>
Based on that we can make one period of our smooth square-wave by doing this.</p>
<pre><code>singlePeriod[t_, Smoothness_, DutyCycle_] := Piecewise[{
{ArcTan[Sin[2 \[Pi] t/(2 DutyCycle)]*Smoothness]/
ArcTan[Smoothness], t < DutyCycle},
{ArcTan[Sin[\[Pi] (1 - 2 DutyCycle + t)/(1 - DutyCycle)]*Smoothness]/
ArcTan[Smoothness], DutyCycle < 1}
}]; Plot[singlePeriod[t, 12, 0.2], {t, 0, 1}, PlotRange -> All,Exclusions -> None]
</code></pre>
<p><a href="https://i.stack.imgur.com/AyBL6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AyBL6.png" alt="one period with 20% duty cycle." /></a>
Make the above periodic using this:</p>
<pre><code>smoothPulse[t_,Smoothness_,DutyCycle_]:=singlePeriod[Mod[t,1],Smoothness,DutyCycle]; Plot[smoothPulse[t,12,0.2],{t,0,4},PlotRange->All,Exclusions->None]
</code></pre>
<p><a href="https://i.stack.imgur.com/X0Ic6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X0Ic6.png" alt="periodic smooth square wave" /></a></p>
<p>With appropriate use of smoothPulse above, we can change the high value, low value, period, and phase of a smooth pulse. Here is an example:</p>
<pre><code>Plot[2.5+2.5*smoothPulse[12t+0.2,12,0.2],{t,0,0.33333},PlotRange->All,Exclusions->None]
</code></pre>
<p><a href="https://i.stack.imgur.com/FonCa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FonCa.png" alt="enter image description here" /></a></p>
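<p>For reference, an illustrative Python transcription of the <code>singlePeriod</code>/<code>smoothPulse</code> construction above (function names adapted; the same 20% duty cycle is used in the checks):</p>

```python
import math

def single_period(t, smoothness, duty):
    """One period of a smooth square wave on [0, 1), high for ~`duty` of it."""
    if t < duty:
        s = math.sin(2 * math.pi * t / (2 * duty))
    else:
        s = math.sin(math.pi * (1 - 2 * duty + t) / (1 - duty))
    # atan squashing, normalized so the peaks reach exactly +/-1
    return math.atan(s * smoothness) / math.atan(smoothness)

def smooth_pulse(t, smoothness, duty):
    return single_period(t % 1.0, smoothness, duty)

# peaks at +1 midway through the duty cycle, -1 midway through the off part
```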
|
815,418 |
<p>Ok, so I've been playing around with radical graphs and such lately, and I discovered that if</p>
<p>$$x_n = \underbrace{\sqrt{x\sqrt{x\cdots\sqrt{x}}}}_{n\text{ radicals}}$$</p>
<p>then</p>
<p>$$\lim_{n\to\infty} x_n = x$$</p>
<p>Example: </p>
<p>$$\sqrt{4\sqrt{4\sqrt{4\sqrt{4\ldots}}}}=4$$<br>
Try it yourself: type "calc" into Google search, hit √, then a number such as $4$, and repeat, ending with $4$ (or press the buttons instead).</p>
<p>I'm a math-head, not big enough though, I think this sequence is divergent or convergent or whatever, too lazy to search up the difference.</p>
<p>However, can this be explained to me? Like how the Pythagorean Theorem can be explained visually.</p>
|
DanielV
| 97,045 |
<p>$$\begin{align}
X & = \sqrt{n \cdot \sqrt {n \cdot \sqrt{n \dots} } } \\
& = \sqrt{n} \cdot \sqrt{\sqrt{n}} \cdot \sqrt{\sqrt{\sqrt{n}}}
\cdot \dots \\
&= n^{1/2} \cdot n^{1/4} \cdot n^{1/8} \dots \\
&= n^{1/2 + 1/4 + 1/8 \dots} \\
&= n^1 \\
&= n \\
\end{align}$$</p>
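<p>The same limit can be checked by iterating <span class="math-container">$x_{k+1} = \sqrt{n\,x_k}$</span>, which is exactly the repeated button-pressing described in the question (an illustrative sketch):</p>

```python
from math import sqrt

n = 4.0
x = 1.0                   # start anywhere positive
for _ in range(50):
    x = sqrt(n * x)       # x_{k+1} = sqrt(n * x_k)
# x converges to n = 4
```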
|
1,110,652 |
<p>Would anyone happen to know any introductory video lectures / courses on partial differential equations? I have tried to find some without success (I did, however, find some on ODEs). </p>
<p>It does not have to be free material, but something not too expensive would be nice.</p>
|
Tomás
| 42,394 |
<p>You can find a lecture on Hyperbolic Conservation Laws, given by Constantine Dafermos <a href="http://video.impa.br/index.php?page=Curso-Hyperbolic-Conservation-Laws" rel="nofollow">here</a>.</p>
<p>And <a href="http://video.impa.br/index.php?page=escola-de-altos-estudos-unique-continuation-and-nonlinear-dispersive-equations" rel="nofollow">here</a> a course on unique continuation and nonlinear dispersive equations, given by Gustavo Ponce.</p>
|
1,110,652 |
<p>Would anyone happen to know any introductory video lectures / courses on partial differential equations? I have tried to find some without success (I did, however, find some on ODEs). </p>
<p>It does not have to be free material, but something not too expensive would be nice.</p>
|
krishnab
| 38,239 |
<p>The best PDE introduction that I have found is by Dr. Pavel Grinfeld on youtube. The link is:</p>
<p><a href="https://www.youtube.com/playlist?list=PLlXfTHzgMRUK56vbQgzCVM9vxjKxc8DCr" rel="nofollow noreferrer">https://www.youtube.com/playlist?list=PLlXfTHzgMRUK56vbQgzCVM9vxjKxc8DCr</a></p>
<p>These are really really excellent. He starts with ODEs and then moves to PDEs. He covers the introduction to elliptic, hyperbolic,and parabolic PDEs, heat equation, separation of variables, and Fourier series. Unfortunately he does not cover other common topics like Sturm-Liouville problems and Green's functions, etc. But this is probably the best place to start. After that you can find individual videos on Green's functions, etc.</p>
|
3,954,066 |
<p>How do you show that for some <span class="math-container">$A\subset \mathbb R$</span>, <span class="math-container">$A\not \in \mathbb R$</span>?</p>
<p>Intuitively it makes sense, but how do you actually prove it?</p>
|
Stinking Bishop
| 700,480 |
<p>To prove that <em>none</em> of the subsets <span class="math-container">$A\subset\mathbb R$</span> are contained in <span class="math-container">$\mathbb R$</span> as elements, one needs to craft the proof tailored to the particular definition of <span class="math-container">$\mathbb R$</span>. For some definitions of <span class="math-container">$\mathbb R$</span> it may be impossible. For example, if you accept that <span class="math-container">$0\in\mathbb N\subset\mathbb R$</span> and if you accept the set-theoretic definition of zero (<span class="math-container">$0:=\emptyset$</span>), then <span class="math-container">$\emptyset$</span> is both an element and a subset of <span class="math-container">$\mathbb R$</span>.</p>
<p>However, it is possible to prove that <em>at least one</em> subset <span class="math-container">$A\subset\mathbb R$</span> is not an element of <span class="math-container">$\mathbb R$</span>. A well-known consequence of the <em>axiom of regularity</em> is that <a href="https://en.wikipedia.org/wiki/Axiom_of_regularity#No_set_is_an_element_of_itself" rel="nofollow noreferrer"><em>no set is an element of itself</em></a>. Let <span class="math-container">$A:=\mathbb R$</span>. As <span class="math-container">$A\subset\mathbb R$</span>, if every subset of <span class="math-container">$\mathbb R$</span> also belonged to <span class="math-container">$\mathbb R$</span>, we would have <span class="math-container">$A=\mathbb R\in\mathbb R$</span> - contradicting the axiom of regularity.</p>
<p>Without use of the axiom of regularity, you can still mount a proof which would use something like Cantor's diagonal argument. Let <span class="math-container">$X=\{A\subset\mathbb R\mid A\not\in A\}$</span>. If every subset of <span class="math-container">$\mathbb R$</span> is an element of <span class="math-container">$\mathbb R$</span>, then <span class="math-container">$X\subset\mathbb R$</span>. Now, try to answer the question: <em>does <span class="math-container">$X\in X$</span></em>? (You will conclude that <span class="math-container">$X\in X\iff X\not\in X$</span> - a contradiction.)</p>
|
3,355,215 |
<blockquote>
<p>For <span class="math-container">$x$</span> and <span class="math-container">$k$</span> real numbers, for what values of <span class="math-container">$k$</span> will the graphs of <span class="math-container">$f(x)=-2\sqrt{x+1}$</span> and <span class="math-container">$g(x)=\sqrt{x-2}+k$</span> intersect?</p>
</blockquote>
<p>I tried to set the two functions equal to each other, but I got stuck with the two variables and couldn't solve the equation. Much appreciated.</p>
<p>We didn’t do calculus yet..</p>
|
Toby Mak
| 285,313 |
<p>Rearranging, we have:</p>
<p><span class="math-container">$$k = -2 \sqrt{x+1} - \sqrt{x-2}$$</span></p>
<p>The domain of the RHS is <span class="math-container">$[2, \infty)$</span>. Therefore, the graphs will intersect when</p>
<p><span class="math-container">$$k ≤ -2 \sqrt{2+1} - \sqrt{2-2} \Rightarrow k ≤ -2 \sqrt3.$$</span></p>
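<p>A quick check of the boundary case (an illustrative Python sketch): at <span class="math-container">$k = -2\sqrt{3}$</span> the two graphs touch at <span class="math-container">$x = 2$</span>, the left end of the domain of <span class="math-container">$g$</span>.</p>

```python
from math import sqrt

k = -2 * sqrt(3)
f = lambda x: -2 * sqrt(x + 1)    # f(x) = -2*sqrt(x+1)
g = lambda x: sqrt(x - 2) + k     # g(x) = sqrt(x-2) + k
# the graphs meet exactly at x = 2; for x > 2, g stays above f
```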
|
3,355,215 |
<blockquote>
<p>For <span class="math-container">$x$</span> and <span class="math-container">$k$</span> real numbers, for what values of <span class="math-container">$k$</span> will the graphs of <span class="math-container">$f(x)=-2\sqrt{x+1}$</span> and <span class="math-container">$g(x)=\sqrt{x-2}+k$</span> intersect?</p>
</blockquote>
<p>I tried to make an equation of them, but I’m stuck with the two variables and I couldn’t solve it. Much appreciation.</p>
<p>We didn’t do calculus yet..</p>
|
Olivier Roche
| 649,615 |
<p>This problem is not about solving an equation in the variables <span class="math-container">$x$</span> and <span class="math-container">$k$</span>. Keep in mind that you're looking for values of <span class="math-container">$k$</span> such that there exist <span class="math-container">$x$</span> such that <span class="math-container">$f(x)=g(x)$</span>.</p>
<p><span class="math-container">$f$</span> is decreasing and <span class="math-container">$g$</span> is increasing for any value of <span class="math-container">$k$</span>. <strong>edit</strong> Furthermore, <span class="math-container">$\lim_{x \to \infty} f(x) = -\infty$</span> and <span class="math-container">$\lim_{x \to \infty} g(x) = +\infty$</span> . <strong>/edit</strong><br>
Since <span class="math-container">$g$</span> is defined only for <span class="math-container">$x\geqslant 2$</span>, you hence get :</p>
<p><span class="math-container">$$\Gamma_f \textrm{ and } \Gamma_g \textrm{ intersect } \Longleftrightarrow g(2) \leqslant f(2) $$</span></p>
<p>But <span class="math-container">$g(2) = k$</span>. Hence, <span class="math-container">$\Gamma_f$</span> and <span class="math-container">$\Gamma_g$</span> intersect iff <span class="math-container">$k\leqslant -2 \sqrt{3}$</span>.</p>
|
4,557,576 |
<p>Working on a 3U CubeSat as part of a project for a Space Engineering club. To calculate the maximum solar disturbance force, we are trying to calculate the largest shadow a 0.1 * 0.1 * 0.3 rectangular prism can cast.</p>
<p>If the satellite was oriented with the largest side facing the sun directly, the shadow cast would be 0.03 m^2. It is our thought that there is a certain orientation in which the shadow is larger than this, so the solar disturbance force will be maximized. Is there a function we could use to maximize this value?</p>
<p>Any help would be greatly appreciated!</p>
|
cbob
| 1,109,675 |
<p>Thank you for everyone's help! I was able to use a similar method by defining a matrix of the vertices of the prism and multiplying by the X rotation matrix and Y rotation matrix.</p>
<p>We then have a function of thetaX and thetaY, which can be maximized to describe the optimal angles. I also calculated an answer of 0.0436 m^2.</p>
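<p>For a convex box, the shadow area along a unit direction n is the sum of the three face areas weighted by the components |n_i|, so by Cauchy-Schwarz the maximum over all directions is the norm of the face-area vector. The sketch below is an independent cross-check of the 0.0436 m^2 figure, not the club's actual code:</p>

```python
import math
from itertools import product

a, b, c = 0.1, 0.1, 0.3          # 3U CubeSat dimensions in metres
faces = (b * c, a * c, a * b)    # face areas paired with the x, y, z normals

def shadow(n):
    """Projected (shadow) area of the box along direction n."""
    norm = math.sqrt(sum(v * v for v in n))
    return sum(abs(v) * f for v, f in zip(n, faces)) / norm

# Cauchy-Schwarz: the maximum over unit n is the norm of the face-area vector.
closed_form = math.sqrt(sum(f * f for f in faces))

# Brute force over a grid of directions as a sanity check.
best = max(shadow((x, y, z))
           for x, y, z in product([i / 50 for i in range(51)], repeat=3)
           if (x, y, z) != (0, 0, 0))

print(closed_form)   # ~0.0436 m^2, matching the figure computed above
print(best)
```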
|
231,317 |
<p>The task I'm faced with is to implement a poly-time algorithm that finds a nontrivial factor of a Carmichael number. Many resources on the web state that this is easy, however without further explanation why.</p>
<p>Furthermore, since Miller-Rabin exits when a nontrivial square root of 1 is found, this can be used to find a factor of the Carmichael number: if $x^2 \equiv 1 \pmod N$, then $x^2 - 1 = (x+1)(x-1)\equiv0 \pmod N$, where $N$ is the Carmichael number we want to factor and $x$ the nontrivial square root of 1. Hence factors must be found using $\gcd(x+1,N)$ and $\gcd(x-1, N)$, correct? </p>
<p>Due to problems with strong liars, in some cases we will miss out on factors. Is this a major problem? Since a Miller-Rabin test passes a composite with probability at most 1/4, is it correct to say that the chance of finding a factor is > 0.5?</p>
<p>Kind regards!</p>
|
Ravindra HV
| 262,480 |
<p>Try this (a description of the comment above; I have also blogged the same):</p>
<p>For each prime base $(2,3,5,7,11,\dots)$, check the remainder for the exponents $\frac{n-1}{2},\frac{n-1}{4},\frac{n-1}{8},\dots$ and so on. Once a remainder $x$ other than $\pm 1$ is found (while the previous remainder was $1$), try: </p>
<p>$\gcd (x-1,n)$ </p>
<p>or</p>
<p>$\gcd (x+1,n)$. </p>
<p>It should result in one of the factors!</p>
<p>For example : </p>
<p>Modulo: $n=561$, base: $a=2$</p>
<p>$a^{(n-1/1)}$ $\mod n$ : $(2^{560}) \mod (561) = 1$ </p>
<p>$a^{(n-1/2)}$ $\mod n$ : $(2^{280}) \mod (561) = 1$ </p>
<p>$a^{(n-1/4)}$ $\mod n$ : $(2^{140}) \mod (561) = 67$ </p>
<p>$\gcd(561,68)=17$</p>
<p>$\gcd(561,66)=33$</p>
<p>$561/33=17$</p>
<p>$561=3\cdot 11\cdot 17$ !!</p>
<p>Related links :</p>
<p><a href="http://mathforum.org/kb/message.jspa?messageID=5488111" rel="nofollow noreferrer">http://mathforum.org/kb/message.jspa?messageID=5488111</a>
<a href="https://en.wikipedia.org/wiki/Carmichael_number" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Carmichael_number</a></p>
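<p>A sketch in Python of the procedure just described (the exponent walk plus the gcd step; the function name and base list are my own choices, not from the original post):</p>

```python
import math

def find_factor(n, bases=(2, 3, 5, 7, 11)):
    """Look for a nontrivial square root x of 1 mod n (x^2 = 1, x != +-1)
    by walking a^((n-1)/2), a^((n-1)/4), ... as described above,
    then split n with gcd(x - 1, n)."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        g = math.gcd(a, n)
        if 1 < g < n:            # the base itself already shares a factor
            return g
        prev = pow(a, n - 1, n)
        for r in range(1, s + 1):
            x = pow(a, (n - 1) >> r, n)
            if prev == 1 and x not in (1, n - 1):
                return math.gcd(x - 1, n)   # nontrivial, since x^2 = 1, x != +-1
            prev = x
    return None

print(find_factor(561))    # 33, via base 2: 2^280 = 1 and 2^140 = 67 mod 561
```

The returned gcd is guaranteed to be a proper factor because $x^2\equiv 1$ with $x\not\equiv\pm1$ forces both $\gcd(x-1,n)$ and $\gcd(x+1,n)$ to be strictly between $1$ and $n$.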
|
101,974 |
<p>We are familiar with <strong><em>Hurwitz’s theorem</em></strong> which implies there is only the Fibonacci 2-Square, Euler 4-Square, Degen 8-Square, and no more. However, if we relax conditions and allow for <em>rational expressions</em>, then <strong><em>Pfister's theorem</em></strong> states that similar identities are possible for <strong>ALL $2^n$ squares</strong>. His 4-square version is: </p>
<p>$\begin{align}&(a_1^2+a_2^2+a_3^2+a_4^2)(b_1^2+b_2^2+b_3^2+b_4^2)=\\
&(a_1 b_4 + a_2 b_3 + a_3 b_2 + a_4 b_1)^2 +\\
&(a_1 b_3 - a_2 b_4 + a_3 b_1 - a_4 b_2)^2 +\\
&\left(a_1 b_2 + a_2 b_1 + \frac{a_3 (b_1^2b_4-2b_1b_2b_3-b_2^2b_4)}{b_1^2+b_2^2} - \frac{a_4 (b_1^2b_3+2b_1b_2b_4-b_2^2b_3)}{b_1^2+b_2^2}\right)^2+\\
&\left(a_1 b_1 - a_2 b_2 - \frac{a_4 (b_1^2b_4-2b_1b_2b_3-b_2^2b_4)}{b_1^2+b_2^2} - \frac{a_3 (b_1^2b_3+2b_1b_2b_4-b_2^2b_3)}{b_1^2+b_2^2}\right)^2
\end{align}$</p>
<p><em>Question</em>: What does the <em>Pfister 8-square</em> version look like? (<strong>And, if you have the strength, can you also give the 16-square version?</strong>) Here is K. Conrad's <a href="http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/pfister.pdf" rel="nofollow">pdf file</a> which describes the general method, but I can’t make heads or tails out of it. </p>
<p>$\color{red}{\text{Attention}}$ (Feb. 16): Someone is trying to delete Wikipedia's article on <a href="https://en.wikipedia.org/wiki/Degen%27s_eight-square_identity" rel="nofollow">Degen's Eight-Square Identity</a> <em>simply because he finds it uninteresting</em>. Please <a href="https://en.wikipedia.org/wiki/Wikipedia%3aArticles_for_deletion/Degen%27s_eight-square_identity" rel="nofollow">vote to keep</a>.</p>
|
Tito Piezas III
| 4,781 |
<p>Persistence pays off. I really wanted to see <strong><em>Pfister’s 8-Square Identity</em></strong> (distinct from Degen's version) and, since no one was answering my question, I took another look at K. Conrad’s paper and managed, with some heuristics, to find the identity myself. Without further ado,</p>
<p>$\begin{align} &(x_1^2\,+\,x_2^2\,+\,x_3^2\,+\,x_4^2\,+\,x_5^2\,+\,x_6^2\,+\,x_7^2\,+\,x_8^2)(y_1^2\,+\,y_2^2\,+\,y_3^2\,+\,y_4^2\,+\,y_5^2\,+\,y_6^2\,+\,y_7^2\,+\,y_8^2)\\ &=z_1^2\,+\,z_2^2\,+\,z_3^2\,+\,z_4^2\,+\,z_5^2\,+\,z_6^2\,+\,z_7^2\,+\,z_8^2
\end{align}$</p>
<p>where,</p>
<p>$\begin{align}
z_1 &= x_1 y_1 - x_2 y_2 - x_3 y_3 - x_4 y_4 + u_1 y_5 - u_2 y_6 - u_3 y_7 - u_4 y_8\\
z_2 &= x_2 y_1 + x_1 y_2 + x_4 y_3 - x_3 y_4 + u_2 y_5 + u_1 y_6 + u_4 y_7 - u_3 y_8\\
z_3 &= x_3 y_1 - x_4 y_2 + x_1 y_3 + x_2 y_4 + u_3 y_5 - u_4 y_6 + u_1 y_7 + u_2 y_8\\
z_4 &= x_4 y_1 + x_3 y_2 - x_2 y_3 + x_1 y_4 + u_4 y_5 + u_3 y_6 - u_2 y_7 + u_1 y_8\\
z_5 &= x_5 y_1 - x_6 y_2 - x_7 y_3 - x_8 y_4 + x_1 y_5 - x_2 y_6 - x_3 y_7 - x_4 y_8\\
z_6 &= x_6 y_1 + x_5 y_2 + x_8 y_3 - x_7 y_4 + x_2 y_5 + x_1 y_6 + x_4 y_7 - x_3 y_8\\
z_7 &= x_7 y_1 - x_8 y_2 + x_5 y_3 + x_6 y_4 + x_3 y_5 - x_4 y_6 + x_1 y_7 + x_2 y_8\\
z_8 &= x_8 y_1 + x_7 y_2 - x_6 y_3 + x_5 y_4 + x_4 y_5 + x_3 y_6 - x_2 y_7 + x_1 y_8
\end{align}$</p>
<p>and,</p>
<p>$\begin{align}
u_1 &= \frac{(-x_1^2+x_2^2+x_3^2+x_4^2)x_5 - 2x_1(0x_1 x_5+x_2 x_6+x_3 x_7+x_4 x_8)}{d}\\
u_2 &= \frac{(x_1^2-x_2^2+x_3^2+x_4^2)x_6 - 2x_2(x_1 x_5+0x_2 x_6+x_3 x_7+x_4 x_8)}{d}\\
u_3 &= \frac{(x_1^2+x_2^2-x_3^2+x_4^2)x_7 - 2x_3(x_1 x_5+x_2 x_6+0x_3 x_7+x_4 x_8)}{d}\\
u_4 &= \frac{(x_1^2+x_2^2+x_3^2-x_4^2)x_8 - 2x_4(x_1 x_5+x_2 x_6+x_3 x_7+0x_4 x_8)}{d}\\
d &=x_1^2+x_2^2+x_3^2+x_4^2
\end{align}$</p>
<p>I’ve verified it with Mathematica and it holds true. Note also the nice sub-identity,</p>
<p>$u_1^2+u_2^2+ u_3^2+ u_4^2 = x_5^2+x_6^2+ x_7^2+x_8^2$</p>
<p><strong>P.S.</strong> By <em>Pfister’s Theorem</em>, the <a href="http://en.wikipedia.org/wiki/Pfister%27s_sixteen-square_identity" rel="nofollow">16-square version</a> is analogous.</p>
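<p>A machine check of the identity (my own transcription of the formulas above into Python, using exact rational arithmetic so there is no rounding):</p>

```python
from fractions import Fraction
import random

random.seed(1)

def pfister8(x, y):
    """Return z with z1^2+...+z8^2 = (sum x_i^2)(sum y_i^2), per the identity above."""
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    y1, y2, y3, y4, y5, y6, y7, y8 = y
    d = x1*x1 + x2*x2 + x3*x3 + x4*x4
    u1 = ((-x1*x1 + x2*x2 + x3*x3 + x4*x4)*x5 - 2*x1*(x2*x6 + x3*x7 + x4*x8)) / d
    u2 = ((x1*x1 - x2*x2 + x3*x3 + x4*x4)*x6 - 2*x2*(x1*x5 + x3*x7 + x4*x8)) / d
    u3 = ((x1*x1 + x2*x2 - x3*x3 + x4*x4)*x7 - 2*x3*(x1*x5 + x2*x6 + x4*x8)) / d
    u4 = ((x1*x1 + x2*x2 + x3*x3 - x4*x4)*x8 - 2*x4*(x1*x5 + x2*x6 + x3*x7)) / d
    return (
        x1*y1 - x2*y2 - x3*y3 - x4*y4 + u1*y5 - u2*y6 - u3*y7 - u4*y8,
        x2*y1 + x1*y2 + x4*y3 - x3*y4 + u2*y5 + u1*y6 + u4*y7 - u3*y8,
        x3*y1 - x4*y2 + x1*y3 + x2*y4 + u3*y5 - u4*y6 + u1*y7 + u2*y8,
        x4*y1 + x3*y2 - x2*y3 + x1*y4 + u4*y5 + u3*y6 - u2*y7 + u1*y8,
        x5*y1 - x6*y2 - x7*y3 - x8*y4 + x1*y5 - x2*y6 - x3*y7 - x4*y8,
        x6*y1 + x5*y2 + x8*y3 - x7*y4 + x2*y5 + x1*y6 + x4*y7 - x3*y8,
        x7*y1 - x8*y2 + x5*y3 + x6*y4 + x3*y5 - x4*y6 + x1*y7 + x2*y8,
        x8*y1 + x7*y2 - x6*y3 + x5*y4 + x4*y5 + x3*y6 - x2*y7 + x1*y8,
    )

for _ in range(20):
    x = [Fraction(random.randint(-9, 9)) for _ in range(8)]
    if x[0] == x[1] == x[2] == x[3] == 0:
        x[0] = Fraction(1)          # keep the denominator d nonzero
    y = [Fraction(random.randint(-9, 9)) for _ in range(8)]
    z = pfister8(x, y)
    assert sum(t*t for t in z) == sum(t*t for t in x) * sum(t*t for t in y)
print("identity verified on 20 random rational points")
```

Note that the map $(x_5,\dots,x_8)\mapsto(u_1,\dots,u_4)$ is a Householder reflection ($u = (I - 2vv^T/d)\,x_{5..8}$ with $v=(x_1,\dots,x_4)$), which is exactly why the sub-identity $u_1^2+\dots+u_4^2 = x_5^2+\dots+x_8^2$ holds.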
|
3,331,865 |
<p>Find an orthonormal basis of the vector space <span class="math-container">$V$</span> of linear polynomials of the form <span class="math-container">$ax+b$</span>, regarded as functions <span class="math-container">$p:[0,1] \to \mathbb{R}$</span>, with inner product</p>
<p><span class="math-container">$$\langle p,q \rangle= \int_0^1 p(x)q(x) dx$$</span> for <span class="math-container">$q, p \in V$</span></p>
<p>The polynomials in <span class="math-container">$V$</span> are linear, so a basis of <span class="math-container">$V$</span> is <span class="math-container">$\left \{1,x \right \}$</span>; beyond this I am not sure how to proceed. </p>
<p>please help</p>
<p>Thankyou.</p>
|
ZAF
| 609,023 |
<p>With Gram-Schmidt</p>
<p><span class="math-container">$v_{1} = 1$</span>, <span class="math-container">$v_{2} = \frac{x - <x,v_{1}>v_{1}}{ \| x - <x,v_{1}> v_{1}\|} $</span></p>
<p>So <span class="math-container">$<x,1> = \int_{[0,1]}xdx = \frac{1}{2}$</span></p>
<p>And <span class="math-container">$\| x - <x,1> \|^{2} = \int_{[0,1]} (x - \frac{1}{2})^{2} = \frac{1}{12}$</span> </p>
<p>Then <span class="math-container">$v_{2} = \sqrt{12}\left(x - \frac{1}{2}\right)$</span>, since the norm is <span class="math-container">$\frac{1}{\sqrt{12}}$</span> and dividing by it multiplies by <span class="math-container">$\sqrt{12}$</span>.</p>
<p><span class="math-container">$\{v_{1},v_{2} \}$</span> is an orthonormal basis</p>
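<p>Note that the norm computed above is $\|x-\tfrac12\| = \tfrac{1}{\sqrt{12}}$, so normalizing gives $v_2 = \sqrt{12}\,(x-\tfrac12)$. A quick numerical check of orthonormality (my own sketch, via midpoint-rule quadrature on $[0,1]$):</p>

```python
import math

def inner(p, q, n=20000):
    """<p, q> = integral of p(x) q(x) over [0,1], midpoint rule."""
    h = 1.0 / n
    return sum(p((i + 0.5) * h) * q((i + 0.5) * h) for i in range(n)) * h

v1 = lambda x: 1.0
v2 = lambda x: math.sqrt(12) * (x - 0.5)   # (x - 1/2) divided by its norm

print(inner(v1, v1))  # ~ 1
print(inner(v1, v2))  # ~ 0
print(inner(v2, v2))  # ~ 1
```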
|
20,802 |
<p>Look at the following example:</p>
<p>Which picture has four apples?</p>
<p>A<a href="https://i.stack.imgur.com/Tpm46.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Tpm46.png" alt="enter image description here" /></a></p>
<hr />
<p>B <a href="https://i.stack.imgur.com/AOv29.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AOv29.png" alt="enter image description here" /></a></p>
<hr />
<p>C <a href="https://i.stack.imgur.com/lZNmQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lZNmQ.png" alt="enter image description here" /></a></p>
<hr />
<p>D <a href="https://i.stack.imgur.com/BWqpH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BWqpH.png" alt="enter image description here" /></a></p>
<p>B is the expected answer but should not the correct answer be BCD? Technically if a set has <strong>exactly</strong> <span class="math-container">$m$</span> elements, then it has <span class="math-container">$k$</span> elements if <span class="math-container">$k\leq m$</span>. This is also how we talk in everyday language:</p>
<blockquote>
<p>"Do you have three dollars?"
"Yes."</p>
</blockquote>
<p>The second speaker is not indicating he has exactly three dollars. He simply indicates that he has <strong>at least</strong> three dollars.</p>
<p>So I am wondering if we are teaching children correct logic here. Shouldn't the original question be rephrased as "which picture has <strong>exactly</strong> four apples"?</p>
|
JVC
| 13,160 |
<p>I think a lot of it has to do with the age of the child, and what the goal of the question is. If this is for children just learning their numbers, say 4 or 5 years old, then I think B is the correct answer, as they are not being asked to stretch their logical capabilities, but to simply recognize and call out the difference between 4 of something, or 5 of something, etc.</p>
<p>However if this was for older children who already have comfort with basic numbers concepts and the goal was to demonstrate more creative thinking, then I believe B, C, and D would be the correct answer.</p>
|
2,530,298 |
<p>I tried solving for y alone and got y=(-6x-5)/5, which I then put into the distance formula sqrt((x-1)^2+(y+5)^2), substituting the expression above in for y, but my answer never comes out correct. Wondering if I could get some help.</p>
|
Community
| -1 |
<p><strong>Hint:-</strong><br>
$(1,-5)$ is projected onto the point $(x, \frac{-5-6x}{5})$ on the line. The line passes through $(0,-1)$. The vector joining $(0,-1)$ and $(x, \frac{-5-6x}{5})$ is perpendicular to the vector joining $(x, \frac{-5-6x}{5})$ to $(1,-5)$. Find $x$ from this condition. </p>
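<p>Assuming (from the points appearing in the hint) that the task is the distance from (1,-5) to the line y=(-6x-5)/5, i.e. 6x+5y+5=0, here is a quick numerical check against the point-to-line formula:</p>

```python
import math

px, py = 1.0, -5.0                 # the point (1, -5)

def dist(x):
    """Distance from (px, py) to the point (x, (-6x-5)/5) on the line."""
    y = (-6 * x - 5) / 5
    return math.hypot(x - px, y - py)

# crude 1-D minimisation over a fine grid of x values
numeric = min(dist(i / 10000) for i in range(-50000, 50001))

# point-to-line formula |a*px + b*py + c| / sqrt(a^2 + b^2) for 6x + 5y + 5 = 0
formula = abs(6 * px + 5 * py + 5) / math.sqrt(6**2 + 5**2)

print(numeric, formula)   # both ~ 14 / sqrt(61) ~ 1.7925
```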
|
28,389 |
<p>I've been studying the axiomatic definition of the real numbers, and there's one thing I'm not entirely sure about.</p>
<p>I think I've understood that the Archimedean axiom is added in order to discard ordered complete fields containing infinitesimals like the hyperreal numbers. Additionally, this property clearly cannot be derived solely from the axioms of ordered field and completeness, since $^*\mathbb{R}$ and $\mathbb{R}$ are two complete ordered fields, two models of the axioms, one of them Archimedean and the other non-Archimedean. Are these ideas correct?</p>
<p>Thanks.</p>
|
Scott Carter
| 722 |
<p>There are non-archimedean completions of the rationals, called <a href="http://en.wikipedia.org/wiki/P-adic_number">p-adic</a> completions. The book <em>p-adic Numbers: An Introduction</em> (2nd ed., Springer, 2000) by Fernando Q. Gouvêa is an excellent introduction to these.</p>
|
619,526 |
<p>I'm studying up for my algebra exam, and I'm not exactly sure how to solve a problem like the following</p>
<blockquote>
<p>Let $f = X^2 + 1 \in \mathbb{F}_5[X]$, $R = \mathbb{F}_5[X]/\langle f \rangle$ and $\alpha = X + \langle f \rangle \in R$. Show that $\alpha \in R^*$ and that $\vert \alpha \vert = 4$ in $R^*$.</p>
</blockquote>
<p>I have already proven that R is not a field by proving that $\langle f \rangle$ is not a maximal ideal, but I'm unsure of how I should proceed.</p>
|
Andreas Caranti
| 58,401 |
<p>Working in $R$ means working with polynomials in $\mathbb{F}_5[X]$ modulo the polynomial $f = X^{2} + 1$.</p>
<p>So you have $\alpha^{2} = X^{2} \equiv -1 \pmod{f}$, and $\alpha^{4} \equiv (-1)^{2} = 1 \pmod{f}$. This shows that $\alpha$ is invertible, with inverse $\alpha^{3} \equiv -X \pmod{f}$. (This is because $1 \equiv \alpha^{4} \equiv \alpha \alpha^{3} \pmod{f}$.)</p>
<p>Also, since $\alpha^{4} \equiv 1 \pmod{f}$, the order of $\alpha$ is a divisor of $4$. But since $\alpha^{2} \equiv -1 \not\equiv 1 \pmod{f}$, the order does not divide $2$, and thus it is indeed $4$.</p>
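<p>A quick computational check, representing elements $a+bX$ of $R$ as pairs $(a,b)$ with the multiplication rule $X^2 = -1$ and coefficients reduced mod 5:</p>

```python
p = 5

def mul(u, v):
    """(a + bX)(c + dX) in F_5[X]/(X^2 + 1), using X^2 = -1."""
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

alpha = (0, 1)                       # the class of X
powers = [alpha]
for _ in range(3):
    powers.append(mul(powers[-1], alpha))

print(powers)   # alpha^2 = (4, 0) = -1, alpha^3 = (0, 4) = -X, alpha^4 = (1, 0) = 1
```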
|
363,881 |
<p>The problem gives the curl of a vector field, and tells us to calculate the line integral over $C$ where $C$ is the intersection of $x^2 + y^2 = 1$ and $z= y^2$. I know I should use Stokes Theorem, but how do I find $dS$? </p>
<p>I did $z = \frac{1}{2}(y^2 + 1 - x^2)$ and calculated $dS$ as $\langle-dz/dx,-dz/dy,1\rangle$ but apparently that was wrong.</p>
|
Glen O
| 67,842 |
<p>So long as there isn't any sort of problem point (such as the vector field you're integrating being undefined at the origin), you can integrate over any surface whose boundary is the given curve $C$. So let $z=y^2$ be the surface, and let $x^2+y^2=1$ describe the boundary of the surface (because $x^2+y^2=1$ on its own doesn't form a simple surface).</p>
<p>If you convert to cylindrical coordinates, then $z=y^2=r^2\sin^2\theta$ is the surface and the bound is given by $r\leq 1$, $0\leq \theta< 2\pi$. So
$$\mathbf{r}=x\mathbf{i}+y\mathbf{j}+y^2\mathbf{k}$$ and $$d\mathbf{S}=\frac{\partial\mathbf{r}}{\partial x}\times\frac{\partial\mathbf{r}}{\partial y}dxdy=\mathbf{i}\times\left(\mathbf{j}+2y\mathbf{k}\right)dxdy=\left(\mathbf{k}-2y\mathbf{j}\right)dxdy$$ and since $$dxdy=rdrd\theta$$ we have
$$
d\mathbf{S}=(\mathbf{k}-2r\sin\theta\mathbf{j})rdrd\theta
$$</p>
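<p>The cross product above can be spot-checked numerically:</p>

```python
def cross(a, b):
    """3-D cross product of tuples a and b."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# r_x = i and r_y = j + 2y k on the surface z = y^2
for y in (-1.0, 0.0, 0.5, 2.0):
    rx, ry = (1, 0, 0), (0, 1, 2 * y)
    assert cross(rx, ry) == (0, -2 * y, 1)   # = k - 2y j, as derived above
print("dS direction checked")
```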
|
128,695 |
<p>Is there any good guide on covering spaces for idiots? Like a really dumbed-down approach to it. I have an exam on this, but don't understand it, and it's about 1/6th of the exam.</p>
<p>So I'm doing Hatcher problem and stuck on 4.</p>
<ol>
<li>Construct a simply-connected covering space of the space $X \subset \mathbb{R}^3$ that is a union of a sphere and diameter. </li>
</ol>
<p>All I can think of is just connecting a bunch of spheres in a line.</p>
<p>But, yeah, I'm pretty scared I will fail my degree because of this. So I need a good guide to covering spaces that isn't Hatcher. The only other one I have uses heavy category theory, which is even worse to read.</p>
|
Eric O. Korman
| 9 |
<p>I think it will help if you "pull the diameter out of the sphere using the 4th dimension" (think about the analogous situation of a diameter in a circle) to see that space is homeomorphic to <img src="https://i.stack.imgur.com/eiMLK.jpg" alt="enter image description here"></p>
<p>Now this is similar to the wedge sum of a circle with a sphere which you might have seen before (I think there's a similar example in hatcher). If you want to see the solution to the problem, go here: <a href="https://i.imgur.com/afVPm.jpg" rel="nofollow noreferrer">http://i.imgur.com/afVPm.jpg</a> </p>
|
4,523,363 |
<p>A set <a href="https://en.m.wikipedia.org/wiki/Lattice_(order)#:%7E:text=A%20lattice%20is%20an%20abstract,greatest%20lower%20bound%20or%20meet" rel="nofollow noreferrer">lattice</a> is a lattice where the meet is the intersection, the join is the union, and the partial order is <span class="math-container">$\subseteq$</span>. The standard example is is <span class="math-container">$P(A)$</span> where <span class="math-container">$A$</span> is a set. I’ve thought about other possible examples, yet have failed to come up with any. Are there any set/distributive lattices that aren’t isomorphic to <span class="math-container">$P(A)$</span> for some <span class="math-container">$A$</span>?</p>
|
Dumper D Garb
| 1,076,376 |
<p>Total orders with <span class="math-container">$\land = \min$</span> and <span class="math-container">$\lor = \max$</span> are lattices. Any such order, call it <span class="math-container">$(X,\le)$</span>, writing <span class="math-container">$(-\infty,x)=\{y\in X\,:\, y<x\}$</span>, is isomorphic via <span class="math-container">$x \mapsto (-\infty,x)$</span> to the set lattice <span class="math-container">$(\{(-\infty,x)\,:\, x\in X\}, \subseteq)$</span>. However, <span class="math-container">$P(A)$</span> is hardly ever totally ordered.</p>
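<p>A finite illustration of this embedding (the chain 0 &lt; 1 &lt; 2 &lt; 3 &lt; 4 mapped to its strict down-sets, so that min becomes intersection and max becomes union):</p>

```python
X = range(5)                       # the chain 0 < 1 < 2 < 3 < 4

def down(x):
    """The strict down-set (-inf, x) = {y in X : y < x}."""
    return frozenset(y for y in X if y < x)

for a in X:
    for b in X:
        assert down(min(a, b)) == down(a) & down(b)   # meet -> intersection
        assert down(max(a, b)) == down(a) | down(b)   # join -> union
print("chain embeds as a set lattice")
```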
|
4,135,449 |
<p>A few years ago, I got interested in an apparently hard integration problem which fascinated me. It is a Sophomore's-Dream-like integral, except with the bounds over the positive reals, denoted by <span class="math-container">$\Bbb R^+$</span>, rather than from 0 to 1 over the unit interval. This quirky problem caught my attention for having its area be almost 2, as this <a href="https://www.desmos.com/calculator/xuntecgloz" rel="noreferrer">graph</a> shows. Here is my solution, although there are a couple of calculations that I could have done differently:
<a href="https://i.stack.imgur.com/fPxAi.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/fPxAi.jpg" alt="enter image description here" /></a>
<span class="math-container">$$ A=\mathrm{\int_{\Bbb R^+}x^{-x}dx=\int_0^{\infty}\sum_{n=0}^{\infty}\frac{(-1)^nx^n \ln^n(x)}{n!}dx}$$</span></p>
<p>I naively used Wolfram Alpha to integrate, but this can easily be still done with the substitution of <span class="math-container">$x=e^{-y}$</span> which is similar to the other <a href="https://en.m.wikipedia.org/wiki/Sophomore%27s_dream" rel="noreferrer">Sophomore Dream integrals</a>, so it will be ignored here for conciseness and should be able to use an interchanging theorem. Here I chose not to do the substitution as the same result appears anyways and this is simpler:</p>
<p><span class="math-container">$$\mathrm{\sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\int_0^{\infty}x^n \ln^n(x)\,dx= \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\left(-x^{-n}x^n\ln^n(x)\ln(x)\ln^{-(n+1)}(x)(-1)^{-(n+1)}(n+1)^{-(n+1)}Γ\big(n+1,-(n+1)\ln(x)\big)\right)\Big|_0^{\infty}= \sum_{n=0}^{\infty}\frac{Γ\big(n+1,-(n+1)\ln(\infty)\big)}{Γ(n+1)(n+1)^{n+1}}}$$</span></p>
<p>This -<span class="math-container">$\infty$</span> in the gamma function argument does not seem to exist, but taking a limit helps as well as some other forms with the <a href="https://functions.wolfram.com/GammaBetaErf/GammaRegularized/" rel="noreferrer">Regularized Incomplete Gamma Function</a>:
<span class="math-container">$$A=\mathrm{\sum_{n=1}^{\infty}n^{-n}\frac{Γ(n,-\infty n)}{Γ(n)}=\lim_{x\to \infty}\sum_{n\ge1}\frac{Q(n,-nx)}{n^n}=1.99545595…}$$</span></p>
<p>As seen from the link, the index n has to be a natural number which is true here. Also, there are other forms with the exponential sum function, and other series representations if non-elementary functions are not allowed. Here is <a href="https://www.wolframalpha.com/input/?i=sum+Q%28x%2C-3x%29%2Fx%5Ex+from+1+to+1000" rel="noreferrer">proof</a> of this answer from Wolfram Alpha.</p>
<p>Please do not give me any integral expressions of this constant, but rather special functions or other representations are encouraged. I will “check” the best answer. I just need a “nice” non-integral expression of the constant. Please give me feedback and correct me!</p>
<p>An alternate form solution is very needed for this problem also concerning power tower, so please check it out. I have left my context, attempts, and questions, therefore any help with alternate forms for either of these would be appreciated.</p>
<p>Using the definition of the Riemann Sum, we can rewrite this as:</p>
<p><span class="math-container">$$\mathrm{\int_{\Bbb R^+}x^{-x}dx=\lim_{b,n\to \infty}\frac bn \sum_{k=0}^n\left(\frac{bk}{n}\right)^{-\left(\frac{bk}{n}\right)},n\gg b}$$</span></p>
<p>Here is proof of this result: <a href="https://www.desmos.com/calculator/ictlmmzqwy" rel="noreferrer">Graph</a></p>
<p>Here are some unlikely <a href="https://www.wolframalpha.com/input/?i=1.99545595750013800041872469845272435208621663696797887278830006098303161714656636306695492778946387705816763770" rel="noreferrer">closed forms.</a></p>
<p>There is also a form of the constant using a summation expansion. Here is proof of a third form of this constant <a href="https://www.desmos.com/calculator/uyvdvtjpdq" rel="noreferrer">here</a>: <span class="math-container">$$\mathrm{A= \lim_{x\to \infty}\sum_{n\ge 1}\sum_{k=0}^{n-1}\frac{(-1)^k e^{nx} n^{k-n}x^k}{k!}}$$</span></p>
<p>Here is another sum and double sum answer <a href="https://www.desmos.com/calculator/tq5qjafgyg" rel="noreferrer">here</a>. This has a sophomore dream in the solution plus the same integrand from 1 to <span class="math-container">$\infty$</span>:
<span class="math-container">$$\mathrm{A=\sum_{n\ge 1}n^{-n}-\lim_{x\to \infty}\sum_{n\ge1}\sum_{k\ge1}\frac{(-x)^n(nx)^k}{(k+n)k!n!}}$$</span></p>
<p>Let <span class="math-container">$\bar F$</span> represent a Regularized Hypergeometric function then, as <span class="math-container">$\mathrm{Q(n,-nx)= 1-(-nx)^n\ _1\bar F_1(n,n+1,nx)}$</span>, the following can be proven with the <a href="https://mathworld.wolfram.com/ConfluentHypergeometricFunctionoftheFirstKind.html" rel="noreferrer">Kummer Confluent Hypergeometric Function</a>: <span class="math-container">$$\mathrm{\sum_{n\ge1}n^{-n}-\lim_{x\to\infty}\sum_{n\ge1}(-x)^n\ _1\bar F_1(n,n+1,nx)}$$</span></p>
<p>Also, feel free to use <a href="https://functions.wolfram.com/GammaBetaErf/Gamma2/06/01/04/01/02/" rel="noreferrer">these</a> identities to find an alternate form. I could go on, but I have found no form that does not need a limit in the answer.</p>
|
Nikos Bagis
| 223,191 |
<p><strong>This is a note</strong>. I will try to find more in the near future.
<span class="math-container">$$
I=\int^{\infty}_{0}x^{-x}dx=\int^{\infty}_{1}e^{-x\log x}dx+\int^{1}_{0}x^{-x}dx.
$$</span>
Using <span class="math-container">$\sum_{n\geq 1}n^{-n}=\int^{1}_{0}x^{-x}dx$</span> and making the change of variable <span class="math-container">$x\rightarrow e^{W(x)}$</span> where <span class="math-container">$W(x)$</span> is the well known Lambert function, we get
<span class="math-container">$$
I=\sum_{n\geq 1}n^{-n}+\int^{\infty}_{0}\exp\left(-e^{W(x)}W(x)\right)e^{W(x)}W'(x)dx=
$$</span>
<span class="math-container">$$
=\sum_{n\geq 1}n^{-n}+\int^{\infty}_{0}e^{-x}e^{W(x)}W'(x)dx=\sum_{n\geq 1}n^{-n}+\int^{\infty}_{0}e^{-x}d\left(e^{W(x)}\right)=
$$</span>
<span class="math-container">$$
\textrm{ integration by parts }
$$</span>
<span class="math-container">$$
=\sum_{n\geq 1}n^{-n}-1+\int^{\infty}_{0}e^{-x}e^{W(x)}dx=\sum_{n\geq 1}n^{-n}-1+\int^{\infty}_{0}e^{-x}\frac{x}{W(x)}dx
$$</span>
If we make the change of variable <span class="math-container">$x\rightarrow -\log t$</span> we arrive easily to
<span class="math-container">$$
I=\sum_{n\geq 1}n^{-n}-1+\int^{1}_{0}t\frac{-\log t}{W(-\log t)}\frac{-dt}{t}=\sum_{n\geq 1}n^{-n}-1+\int^{1}_{0}\frac{-\log t}{W(-\log t)}dt.
$$</span>
But the function <span class="math-container">$h(z)=\frac{W(-\log z)}{-\log z}$</span> is the analytic continuation of Euler's iterated exponential function <span class="math-container">$z^{z^{z^{\ldots}}}$</span>. Hence finally we can write
<span class="math-container">$$
\int^{\infty}_{0}\frac{1}{x^x}dx=\sum_{n\geq 1}\frac{1}{n^n}-1+\int^{1}_{0}\frac{1}{t^{t^{t^{ \ldots}}}}dt.
$$</span>
In the sense <span class="math-container">$z^{z^{z^{\ldots}}}=h(z)$</span>.</p>
<p><strong>REVISED.</strong></p>
<p>Assume that <span class="math-container">$W(t)$</span> is the well known Lambert function. Then for <span class="math-container">$k=0,1,2,\ldots$</span>, we have
<span class="math-container">$$
\int^{M}_{0}\frac{t^k}{1+W(t)}dt=\frac{(-1)^{k+1}k!}{(k+1)^{k+1}}+
$$</span>
<span class="math-container">$$
+\frac{M^{k+1} k!}{W(M)^{k+1}(k+1)^{k+1}}\sum_{0\leq j\leq k}(-1)^{j+k}\frac{(k+1)^jW(M)^j}{j!}.
$$</span>
Hence
<span class="math-container">$$
\int^{M}_{0}\frac{e^{-t}}{1+W(t)}dt=-\sum^{\infty}_{k=0}\frac{1}{(k+1)^{k+1}}+
$$</span>
<span class="math-container">$$
+\sum^{\infty}_{k=0}\frac{e^{-(k+1)W(M)} M^{k+1}W(M)^{-k-1}\Gamma(k+1,-(k+1)W(M))}{(k+1)^{k}(k+1)!}=
$$</span>
<span class="math-container">$$
=-\sum_{k\geq 1}k^{-k}+\sum_{k\geq 1}\frac{\Gamma(k,-kW(M))}{k^{k-1}k!},\tag 1
$$</span>
where
<span class="math-container">$$
\Gamma(a,z)=\int^{\infty}_{z}e^{-t}t^{a-1}dt
$$</span>
is the incomplete Gamma function. But one can see with the help of (1) easily that
<span class="math-container">$$
\int^{M}_{1}x^{-x}dx=\int^{M\log M}_{0}\frac{e^{-t}}{1+W(t)}dt=-\sum^{\infty}_{k=1}k^{-k}+\sum^{\infty}_{k=1}\frac{\Gamma(k,-kW(M\log M))}{(k!) k^{k-1}}.
$$</span>
Hence using <span class="math-container">$\int^{1}_{0}x^{-x}dx=\sum^{\infty}_{k=1}k^{-k}$</span>, we get
<span class="math-container">$$
\int^{M}_{0}x^{-x}dx=\sum^{\infty}_{k=1}\frac{\Gamma(k,-k\log M)}{(k!) k^{k-1}}\textrm{, }\forall M>0.\tag 2
$$</span></p>
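<p>A numerical check of (2) (my own sketch, not part of the original derivation): for positive integer <span class="math-container">$k$</span> one has the closed form <span class="math-container">$\Gamma(k,x)=(k-1)!\,e^{-x}\sum_{j=0}^{k-1}x^j/j!$</span>, valid for all real <span class="math-container">$x$</span>, which collapses the <span class="math-container">$k$</span>-th term of (2) to <span class="math-container">$M^k S_k/k^k$</span>. Comparing against Simpson quadrature of the integral (with the substitution <span class="math-container">$x=t^2$</span> to tame the endpoint):</p>

```python
import math

def series(M, terms=80):
    """Right-hand side of (2): sum of Gamma(k, -k log M)/(k! k^(k-1)).
    For integer k, Gamma(k, x) = (k-1)! e^{-x} sum_{j<k} x^j/j!,
    so each term reduces to M^k * S_k / k^k."""
    logM = math.log(M)
    total = 0.0
    for k in range(1, terms + 1):
        x = -k * logM
        s = t = 1.0                  # S_k = sum_{j=0}^{k-1} x^j / j!
        for j in range(1, k):
            t *= x / j
            s += t
        total += M**k * s / k**k
    return total

def quad(M, n=4000):
    """Simpson's rule for the integral of x^{-x} on [0, M] after x = t^2."""
    T = math.sqrt(M)
    def g(t):
        return 0.0 if t == 0.0 else 2.0 * t * math.exp(-t * t * math.log(t * t))
    h = T / n
    acc = g(0.0) + g(T)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * g(i * h)
    return acc * h / 3.0

print(series(1.0))              # sophomore's dream: sum n^{-n} ~ 1.2912859970627
print(series(2.0), quad(2.0))   # the two sides of (2) at M = 2
```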
|
2,439,863 |
<p>I was working on the series </p>
<p>$\sum_{n=1}^{\infty}{\frac{(-1)^n}{n}z^{n(n + 1)}}$ and I was to consider when $z = i$. I have that $$\sum_{n=1}^{\infty}{\frac{(-1)^n}{n}i^{n(n + 1)}} = \sum_{n=1}^{\infty}{\frac{(-1)^{\frac{3}{2}n+\frac{1}{2}n^2}}{n}} = 1 - \frac{1}{2} - \frac{1}{3} + \frac{1}{4} + \frac{1}{5} - \frac{1}{6} - \frac{1}{7} + \frac{1}{8} + ... \cong 0.43882457311697565541$$</p>
<p>I believe it converges. Does anyone have any suggestions to find an exact value for the infinite series?</p>
|
Claude Leibovici
| 82,404 |
<p>The number you give is very close to $$\frac{\pi }{4}-\frac{\log (2)}{2}$$ They differ by $5 \times 10^{-12}$.</p>
<p>I bet that you obtained this value summing up to $n=10^6$.</p>
<p>Now, prove it !</p>
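<p>A quick check of the guess (the partial sums converge slowly: the tail after <span class="math-container">$N$</span> terms is roughly <span class="math-container">$O(1/N)$</span>, so summing to <span class="math-container">$10^6$</span> gives a few extra digits):</p>

```python
import math

N = 10 ** 6
total = 0.0
for n in range(1, N + 1):
    # sign is (-1)^((3n + n^2)/2), producing the pattern + - - + + - - + ...
    sign = -1.0 if ((3 * n + n * n) // 2) % 2 else 1.0
    total += sign / n

target = math.pi / 4 - math.log(2) / 2
print(total, target)        # both ~ 0.43882457...
```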
|
4,356,938 |
<p>I'm reading through <a href="https://press.princeton.edu/books/hardcover/9780691151199/elliptic-tales" rel="nofollow noreferrer">Elliptic Tales</a>.</p>
<p>Addition of 2 points on an elliptic curve is described as follows:</p>
<p><a href="https://i.stack.imgur.com/Hifi9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hifi9.png" alt="Image of addition" /></a></p>
<p><span class="math-container">$L$</span> is the line between <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> and <span class="math-container">$R$</span></p>
<p><span class="math-container">$L'$</span> is the line between <span class="math-container">$O$</span> and <span class="math-container">$P + Q$</span> and <span class="math-container">$R$</span></p>
<p>The book describes the algebraic process of adding together 2 points on an elliptic curve.</p>
<p>First: It describes adding together <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> to get <span class="math-container">$R$</span>. It then says we need to connect <span class="math-container">$O$</span> and <span class="math-container">$R$</span> with a line, and where that line intersects <span class="math-container">$E$</span> will be the point <span class="math-container">$P + Q$</span>. So far so good.</p>
<p>It then says the line connecting <span class="math-container">$O$</span> and <span class="math-container">$R$</span> is vertical and is easy to describe in projective coordinates as <span class="math-container">$x = x_3z$</span> where <span class="math-container">$R$</span> is <span class="math-container">$(x_3, y_3)$</span>.</p>
<p>The line connecting <span class="math-container">$O$</span> and <span class="math-container">$R$</span> is <span class="math-container">$L'$</span> doesn't seem to be vertical. Clearly, in the picture it's slanted downwards.</p>
<p>Does anyone know what's going on?</p>
|
brainjam
| 1,257 |
<p>The construction and diagrams you're quoting are in Section 8.1 of the book.</p>
<p>In Section 8.2, the author changes the definition of <span class="math-container">$\mathcal O$</span> to <span class="math-container">$(0:1:0)$</span>, i.e. the point at infinity in the vertical direction. In this new context, <span class="math-container">$L'$</span> becomes a vertical line.</p>
<p>This group theoretic construction in 8.1 works for nonsingular cubic curves in general where <span class="math-container">$\mathcal O$</span> can be any point on the curve. Elliptic curves are a special case of these curves, and always pass through (0:1:0). This simplifies the construction somewhat, in that the final stage is a simple reflection across the x-axis, rather than the intersection of the curve and a line.</p>
|
2,203,770 |
<p>If I am asked to find all values of $z$ such that $z^3=-8i$, what is the best method to go about that?</p>
<p>I have the following formula:
$$z^{\frac{1}{n}}=r^\frac{1}{n}\left[\cos\left(\frac{\theta}{n}+\frac{2\pi k}{n}\right)+i\sin\left(\frac{\theta}{n}+\frac{2\pi k}{n}\right)\right]$$</p>
<p>for $k=0,\pm1, \pm2,...$</p>
<p>Applying this formula, I find the cubed root of $8$, which is $2$. And then when I apply it to the formula, I get the following:</p>
<p>$$z = 2\left[\cos\left(\frac{\pi}{3}+\frac{2\pi k}{3}\right)+i\sin\left(\frac{\pi}{3}+\frac{2\pi k}{3}\right)\right]$$ for $k=0,\pm1, \pm2,...$</p>
<p>I am confused, because the given solution is as follows:
$$z = 2\left[\cos\left(\frac{\pi}{2}+\frac{2\pi k}{3}\right)+i\sin\left(\frac{\pi}{2}+\frac{2\pi k}{3}\right)\right]$$ for $k=0,\pm1, \pm2,...$</p>
<p>Where did I go wrong? How would my approach changed if I was asked to find all values for $-8$, or $8i$?</p>
|
Warren Hill
| 86,986 |
<p>There are several ways to solve this, but one of the simplest is to write $ z $ in the form $ r \angle \theta $.</p>
<p>In this case $ z^3 = 8 \angle \frac{-\pi}{2} $. We want cube roots, and the cube root of 8 is 2. We get the first root's angle by dividing $ \theta$ by 3.</p>
<p>$ 2 \angle \frac{-\pi}{6}$ is thus one solution. There are three distinct solutions, and we get another by adding or subtracting $ \frac{2 \cdot \pi}{3} $; we can get a solution every time we do this, but to avoid repeated solutions we are only interested in the ones where $ -\pi \lt \theta \le \pi$.</p>
<p>I'll leave this as an exercise. If required, you can now convert these solutions back into $ a + i b$ form. </p>
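<p>The three roots can be checked with a few lines of Python:</p>

```python
import cmath, math

w = -8j                                     # we want all z with z**3 = -8i
r, theta = abs(w), cmath.phase(w)           # modulus 8, argument -pi/2
# cube root of the modulus is 2; spread the angles 2*pi/3 apart
roots = [2 * cmath.exp(1j * (theta / 3 + 2 * math.pi * k / 3)) for k in range(3)]

for z in roots:
    print(z, z ** 3)                        # each cube comes back to -8i
```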
|
675,695 |
<p>[<b>Added by PLC</b>: This question is a followup to <a href="https://math.stackexchange.com/questions/298870/inner-product-change-of-axioms/298891">this already answered question</a>.]</p>
<p>Keep the axioms for a real inner product (symmetry, linearity, and homogeneity).
But make the fourth be
$$\langle x,x \rangle = 0 \text{ if and only if } x = 0.$$</p>
<p>I want to prove that either $\langle x,x \rangle > 0$ or $\langle x,x \rangle < 0$ for all $x \neq 0$. </p>
<p>Note: $c_1 = \langle x,x \rangle > 0$ and and $c_2 = \langle y,y \rangle < 0$.</p>
<p>Here's the sketch of the proof I want: Assume $\langle x,x \rangle > 0$ for some $x$ and $\langle y,y \rangle < 0$ for some $y$. I'm trying to find a $z \neq 0$ such that $\langle z,z\rangle = 0$, where $z$ is in the space spanned by $\{x,y\}$. By contradiction, we know that $\langle x,x \rangle < 0$ or $\langle x,x \rangle > 0$ for all $x \neq 0$.</p>
<p>Unfortunately, my proof doesn't work that way, and I don't think it proves what I want it to prove.</p>
<p>I say let $\langle z,z \rangle = \langle ax + by, ax + by\rangle$, for $a,b \in \mathbb{R}$. Then
$$\langle z,z \rangle = a^2 \langle x,x \rangle + 2ab \langle x,y \rangle + b^2 \langle y,y \rangle = 0.$$
Let $a = \langle y,y \rangle$ and $b = \langle x,x \rangle$.
After plugging $a$ and $b$ in, I get:
\begin{align*}
& \langle y,y \rangle + 2\langle x,y \rangle + \langle x,x \rangle\langle y,y \rangle = 0 \\
\implies& c_2 + 2 \langle x,y \rangle + c_1 c_2 = 0 \\
\implies& 2\langle x,y \rangle = -(c_1 c_2 + c_2).
\end{align*}
Thus,
\begin{align*}
\langle z,z\rangle
&= c_1 c_2 + 2(-c_1 c_2 - c_2) + c_1 c_2 \\
&= -2c_1c_2 + 2(c_1c_2 c_2) \\
&= -c_1c_2+c_1c_2 +c_2 = 0
\end{align*}
Then $c_2 = 0$</p>
<p>The only thing I can think to do now is to claim a contradiction: we said $c_2 < 0$. But I don't think this proves what we want to prove that either $\langle x,x \rangle < 0$ OR $\langle x,x \rangle > 0$ for all $x \neq 0$.</p>
<p>I think my issue is that I don't know how to choose $a,b$ to make $\langle z,z \rangle = 0$. Someone please offer some help.</p>
|
Berci
| 41,488 |
<p>We can assume by scaling that $\langle x,x\rangle =1$ and $\langle y,y\rangle=-1$.</p>
<p>The method should work: let $z=x+\lambda y$. Then we have
$$\langle z,z\rangle\ =\ 1+2\lambda\langle x,y\rangle-\lambda^2$$
which has a real root in $\lambda$, since its discriminant $4\langle x,y\rangle^2+4$ is positive ($\langle x,y\rangle$ is considered constant, as it is given). For such a root, $z\ne0$: if we had $x=-\lambda y$, then $1=\langle x,x\rangle=\lambda^2\langle y,y\rangle=-\lambda^2\le0$, a contradiction. So $z$ is a nonzero vector with $\langle z,z\rangle=0$, contradicting the modified fourth axiom.</p>
|
1,329,539 |
<p>I'm trying to figure out how to solve this word problem. I'm pretty sure it involves calculus or something even harder, but I don't know how to solve <strong><em>the general form</em></strong>.</p>
<p>Let me start with the concrete form, however:</p>
<p><strong>Concrete Form:</strong></p>
<p>You start with 5 scientists. A scientist can train 50 students for 5 years, after which each student becomes a scientist. (Assume a perfect graduation rate always, and assume you have an infinite population from which to draw students). Or a scientist can work on a project.</p>
<p>The problem is you have 30 Type-A projects, 50 Type-B projects, and 75 Type-C Projects, and they all need to be completed in minimal time. Each Type-A Project requires at least 10 scientists and takes 200/x years to complete, where x is the number of scientists assigned to them. Type-B's require at least 18 scientists and take 150/x years to complete. Type-C's require at least 25 scientist and take 120/x to complete.</p>
<p>What is the minimum time necessary to complete all projects, and what is the "event-order" of such an optimal solution?</p>
<p>I could solve this numerically by doing simulations in a computer program (although that will still be a pain in the neck), but what I really need is <strong>how to solve this in its general form.</strong></p>
<p><strong>General Form:</strong></p>
<p>Just assign constants to everything. You start with s scientists, who can train t students for y years. There are A type-a projects, B type-b's, and C type-c's. Respectively, they require a minimum of d, e, and f scientists, and take g/x, h/x, and i/x years to complete.</p>
<p>How do you go about solving this? Is that even possible? Solving this requires finding an optimal solution (completing all projects in minimal time), and proving that no other solution exists that has a smaller finish time.</p>
<p><strong>EDIT:</strong> Thanks to @Paul for this clarification. For projects, scientists can join or leave at any time. This is indiscrete time. For training, however, only 1 scientist can train a group of 50. (Two scientists training 50 does not make it go 2x faster.) The training has to be "atomic", which I think is the right word.</p>
|
tomi
| 215,986 |
<p>I will introduce the following notation:</p>
<p>Let $s_0$ be the initial number of scientists.</p>
<p>Let $s$ be the number of scientists at a particular point.</p>
<p>Let $X_i$ be the different types of projects.</p>
<p>Let $W_i$ be the work required for project $X_i$ - the units will be "scientist years".</p>
<p>Let $N_i$ be the number of projects of type $X_i$.</p>
<p>Let $m_i$ be the minimum number of scientists needed to work on project $X_i$.</p>
<p>Let $r$ be the rate at which students are trained. In the example you have given, where $1$ scientist can train $50$ students, the value of $r$ would be $51$ because the $1$ scientist will become $51$.</p>
<p>Let $p$ be the period of time required to train the students.</p>
<p>Simplification:</p>
<p>Let $W$ be the total amount of work required by all the projects.</p>
<p>$W=\Sigma W_i N_i$</p>
<p>It can be shown that as soon as the total number of scientists is greater than $\max (m_i)$ we do not need to consider the separate values of $W_i$, but can just deal with the problem as being about the total amount of work $W$.</p>
<p>Consider the simplest scenario, "Scenario A":</p>
<p>$W$ is a fairly small number.</p>
<p>$s_0$ is a fairly small number, too, so that $s_0<\min(m_i)$.</p>
<p>You can't start on any projects immediately because you don't have enough scientists, so you must start training.</p>
<p>$p$ years later you have $s=rs_0$ scientists. Assuming that $s>\min(m_i)$, they can now start the projects.</p>
<p>If all the scientists get to work on the projects, they will finish the required work in $\frac {W}{s}=\frac {W}{rs_0}$ years. As $W$ is fairly small, $\frac {W}{rs_0}<p$, so there is no point in setting any of the scientists to train more scientists; the project will be completed before their training is completed.</p>
<p>Consider the next simplest scenario, "Scenario B":</p>
<p>$W$ is a larger number. Specifically, ${W}>prs_0$</p>
<p>As before, you can't start on any projects immediately because you don't have enough scientists, so you must start training.</p>
<p>$p$ years later you have $s=rs_0$ scientists. Assuming that $s \ge \min(m_i)$, they can now start the projects.</p>
<p>Option B.1</p>
<p>If all the scientists get to work on the projects, they will finish the required work in $\frac {W}{s}= \frac {W}{rs_0}$ years. In this case, $\frac {W}{s}>p$, so there are two other possible approaches:</p>
<p>Option B.2 Set all scientists to training students again.</p>
<p>$p$ years later you have $s=r^2s_0$ scientists. If all the scientists now get to work on the projects, they will finish the required work in $\frac {W}{s}= \frac {W}{r^2s_0}$ years. For the moment let us assume that $\frac {W}{r^2s_0}<p$.</p>
<p>Option B.3 Set some scientists to work on the projects and set some to training students.</p>
<p>Let $n$ be the number of scientists allocated to working. Clearly $n \ge \min(m_i)$</p>
<p>$p$ years later these scientists will have completed work equivalent to $np$ scientist years. There remains $W-np$ work to be done.</p>
<p>The $rs_0-n$ scientists who were set to train have now become $r^2s_0-rn$ scientists. They join the $n$ scientists working on the projects, so the time to complete the remaining work is $\frac {W-np}{r^2s_0-rn+n}$ years.</p>
<p>The times taken by these three approaches are:</p>
<p>Option B.1</p>
<p>$p+\frac {W}{rs_0}$</p>
<p>Option B.2</p>
<p>$2p+\frac {W}{r^2s_0}$</p>
<p>Option B.3</p>
<p>$2p+\frac {W-np}{r^2s_0-rn+n}$</p>
<p>Compare to see which is better.</p>
<p>In general, a good approach seems to be to throw all your resources into training until you have so many scientists that you can complete the projects in less time than it will take to train the next batch.</p>
<p>If $k$ is the number of times you run successive training sessions, then you will have $r^ks_0$ scientists, giving you a time to complete of $pk + \frac {W}{r^ks_0}$.</p>
<p>You find $k$ by solving $\frac {W}{r^ks_0}<p$</p>
<p>$\frac {W}{ps_0}<r^k$</p>
<p>$\ln(W) - \ln (p)- \ln (s_0) <k \ln (r)$</p>
<p>$k = \left\lceil\frac{\ln(W) - \ln (p)- \ln (s_0)}{\ln (r)}\right\rceil$, rounded up to the next integer, since $k$ must be a whole number of training rounds.</p>
<p>Then check whether you would have been better off allocating some scientists to start projects during the previous training season by considering $pk + \frac {W-pn}{r^ks_0-rn+n}$.</p>
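<p>To make the final recipe concrete, here is a small Python sketch (my addition, not part of the original answer) applied to the numbers in the question: $s_0=5$, $r=51$, $p=5$, and total work $W = 30\cdot200 + 50\cdot150 + 75\cdot120 = 22500$ scientist-years. It ignores the minimum-team-size constraints and the mixed Option B.3 discussed above.</p>

```python
import math

s0, r, p = 5, 51, 5                   # initial scientists, training multiplier, training period
W = 30 * 200 + 50 * 150 + 75 * 120    # total work in scientist-years: 22500

# Smallest whole number of training rounds k with W / (r**k * s0) < p,
# i.e. train until finishing the work outright beats another round of training.
k = math.ceil((math.log(W) - math.log(p) - math.log(s0)) / math.log(r))
finish_time = p * k + W / (r**k * s0)
print(k, finish_time)                 # k = 2, total time ≈ 11.73 years
```

So under this simplified model, two training rounds followed by everyone working finishes everything in under 12 years.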
|
106,775 |
<p>I don't get this, need some help, examples and information</p>
<blockquote>
<p>The linear function $f$ is given by
$$f(x) = 3x - 2 ,\quad -2 \leq x \leq 4.$$</p>
<ol>
<li><p>Enter the independent variable and the dependent variable.</p></li>
<li><p>Determine the function values $f (-2)$, $f (-1)$, $f (0)$ and $f (4)$.</p></li>
</ol>
<p>Enter the definitions and values crowd. </p>
</blockquote>
<p>I know what a function is, but how can you find the independent variable and the dependent variable?</p>
<p>How one can determines the function values and how you specify the definitions and values crowd?</p>
<p>What I know about functions:</p>
<p>Variables can have any name, $x$, $y$, $z$, or "maria", "girl", "young"; at a specific value and function can be called anything.</p>
<p>My own example of functions:</p>
<p>Age = 18</p>
<p>year = 6</p>
<p>Maria (age, years) = age + years = 24</p>
<p>in 6 years Maria is going be 24 years</p>
|
André Nicolas
| 6,312 |
<p>The independent variable is $x$. The dependent "variable" is probably intended to be $f(x)$. This is a somewhat unusual use of language. It is used more often when we write the relationship as $y=3x-2$. Then $y$ is called the dependent variable. In modern mathematics, the terms "independent variable" and "dependent variable" are used much less than in the past. </p>
<p>For the calculations, we answer a question that wasn't asked: What is $f(0.5)$? Well, $f(0.5)=3(0.5)-2=-0.5$. We just plug in $0.5$ everywhere that we see $x$, and then calculate. Similar calculations will deal with the questions of this type that you were asked.</p>
<p>The "definitions crowd" is the set of all numbers at which the function is defined. You were told in the problem what this set is. It is the set of all real numbers $x$ such that $-2\le x\le 4$. In English, this is usually called the <em>domain</em> (of definition) of the function.</p>
<p>What you call the "values crowd" is usually called in English the <em>range</em> of the function. It is the set of all values that $f(x)$ can take on as $x$ takes on all possible values in the domain of definition.</p>
<p>Note that $f(-2)=3(-2)-2=-8$, and that $f(4)=3(4)-2=10$. It is probably clear that as $x$ travels from $-2$ to $4$, $f(x)$ steadily increases from $-8$ to $10$. So the range of the function is the set of all real numbers $y$ such that $-8\le y\le 10$.</p>
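<p>A quick computational illustration of the calculations above (everything here comes straight from the problem statement):</p>

```python
def f(x):
    # f(x) = 3x - 2, defined only for -2 <= x <= 4 (the domain of definition)
    assert -2 <= x <= 4
    return 3 * x - 2

print([f(x) for x in (-2, -1, 0, 4)])   # [-8, -5, -2, 10]
# f is increasing, so the range is [f(-2), f(4)] = [-8, 10]
```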
|
1,773,303 |
<p>Let $k$ be a field and $L=k(\sqrt{a_1},\sqrt{a_2},\cdots,\sqrt{a_n})$ with $a_i\in k$. Suppose $[L:k]=2^n$. Let $a=\sqrt{a_1}+\sqrt{a_2}+\cdots\sqrt{a_n}$. Show that $L=k(a)$.</p>
|
egreg
| 62,967 |
<p>Let $x_0\in X$; for each $x\in X$, $x\ne x_0$, there is an open neighborhood $V_x$ of $x_0$ that doesn't contain $x$ (this is exactly the $T_1$ separation property). Since $X$ is finite,</p>
$$
\bigcap_{\substack{x\in X\\x\ne x_0}}V_x
$$
is an open set containing $x_0$ and no other element of $X$. So $\{x_0\}$ is open. As $x_0$ is arbitrary, you're done.</p>
<p>Note that just the $T_1$ property has been used.</p>
|
199,199 |
<p>Suppose a box contains 5 white balls and 5 black balls.</p>
<p>If you want to extract a ball and then another:</p>
<p>What is the probability of getting a black ball and then a black one?</p>
<p>I think that this is the answer:</p>
<p>Let $A:$ get a black ball in the first extraction, $B:$ get a black ball in the second extraction. Then: $P(A)=\frac{5}{10}$ and
$$P(B|A)=\frac{P(A\cap B)}{P(A)}=\frac{\frac{4}{9}}{\frac{1}{2}}=\frac{8}{9}$$</p>
<p>Is this correct? </p>
<p>Now, If you want to extract two balls at the same time:
what is the probability that both are black?
Is exactly the same probability of the first question?
Why?</p>
<p>Thanks for your help.</p>
|
Cameron Buie
| 28,900 |
<p>Not quite. Now, if the first ball <strong>isn't</strong> replaced before drawing the second, then $P(B|A)=\frac49$, from which $$P(A\cap B)=P(A)\cdot P(B|A)=\frac12\cdot\frac49=\frac29.$$ If we <em>are</em> replacing the first ball, $P(B|A)=P(B)=\frac12$, and that changes things.</p>
<p>As for the second question, let's use combinatorics. There are $5$ black balls, and we want to choose $2;$ $5$ white balls, choose $0$. Total, there are $10$ balls, and we're choosing $2$. Then the probability of choosing $2$ black balls simultaneously is $$\frac{_5C_2\cdot{}_5C_0}{_{10}C_2}=\frac{10\cdot 1}{45}=\frac29.$$ Thus, we have the same result (if I'm right about not replacing the first ball, anyway).</p>
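<p>Both computations can be checked exactly with Python's <code>fractions</code> and <code>math.comb</code> (a verification sketch, not part of the original answer):</p>

```python
from fractions import Fraction
from math import comb

# Without replacement, step by step: P(first black) * P(second black | first black)
sequential = Fraction(5, 10) * Fraction(4, 9)

# Drawing two at once: C(5,2) * C(5,0) / C(10,2)
simultaneous = Fraction(comb(5, 2) * comb(5, 0), comb(10, 2))

print(sequential, simultaneous)   # 2/9 2/9
```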
|
3,506,340 |
<p>I have no idea how to calculate <span class="math-container">$z_x+z_y$</span> at a point <span class="math-container">$\left( \frac{\pi +3}{3}, \frac{\pi+1}{2}\right)$</span>, if <span class="math-container">$z=uv^2$</span> and <span class="math-container">$x=u+\sin v$</span>, <span class="math-container">$y=v+\cos u$</span>. <span class="math-container">$z$</span> is not expressed in terms of <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. Maybe it is meant to be solved as <span class="math-container">$x_u=1$</span> and <span class="math-container">$x_v=\cos v$</span>, <span class="math-container">$y_u=-\sin u$</span> and <span class="math-container">$y_v=1,$</span> then <span class="math-container">$x_u=\cos^2v$</span> and <span class="math-container">$y_u=-\sin u$</span>. </p>
<p><span class="math-container">$$z_u=v^2 \Rightarrow v^2= \frac{\partial z}{\partial x} \cdot 1+\frac{\partial z}{\partial y} \cdot (-\sin u)$$</span></p>
<p><span class="math-container">$$z_v=2uv \Rightarrow 2uv=\frac{\partial z}{\partial x} \cos v+\frac{\partial z}{\partial y}\cdot 1$$</span></p>
<p><span class="math-container">$$\Rightarrow \frac{\partial z}{\partial x}=v^2+\sin(u) \frac{\partial z}{\partial y}$$</span></p>
<p><span class="math-container">$$\Rightarrow 2uv=\cos(v)\left( v^2+\sin(u)\frac{\partial z}{\partial y} \right)+\frac{\partial z}{\partial y}$$</span></p>
<p><span class="math-container">$$\Rightarrow 2uv-v^2\cos(v)=\frac{\partial z}{\partial y}\left(\sin(u)+1 \right)$$</span></p>
<p><span class="math-container">$$\Rightarrow \frac{\partial z}{\partial y}=\frac{2uv-v^2\cos(v)}{\sin(u)+1}$$</span></p>
<p>I know that what I have done looks just confusing.</p>
<p><span class="math-container">$$v^2=\frac{\partial z}{\partial x}+\frac{\sqrt{3}}{2}\Rightarrow \frac{\partial z}{\partial x}=\frac{\pi^2}{9}-\frac{\sqrt{3}}{2}$$</span></p>
<p><span class="math-container">$$2uv=\frac{\partial z}{\partial y} \Rightarrow \frac{\partial z}{\partial y}=\frac{\pi^2}{18}$$</span></p>
<p><span class="math-container">$$z_x+z_y=\frac{3\pi^2-9\sqrt{3}}{18}$$</span></p>
|
Keen-ameteur
| 421,273 |
<p><strong>Hint:</strong></p>
<p>Define <span class="math-container">$Z(u,v)=z\big( x(u,v),y(u,v) \big)$</span>. By the chain rule (assuming the conditions hold) you can write:</p>
<p><span class="math-container">$$ \frac{\partial Z}{\partial u}(u,v)= \frac{\partial z}{\partial x}\big( x(u,v),y(u,v) \big) \cdot \frac{\partial x}{\partial u}(u,v)+ \frac{\partial z}{\partial y}\big( x(u,v),y(u,v) \big) \cdot \frac{\partial y}{\partial u}(u,v)$$</span></p>
<p>and:</p>
<p><span class="math-container">$$ \frac{\partial Z}{\partial v}(u,v)= \frac{\partial z}{\partial x}\big( x(u,v),y(u,v) \big) \cdot \frac{\partial x}{\partial v}(u,v)+ \frac{\partial z}{\partial y}\big( x(u,v),y(u,v) \big) \cdot \frac{\partial y}{\partial v}(u,v) $$</span></p>
<p>And using this set of linear equations you want to find what is:</p>
<p><span class="math-container">$$ \frac{\partial z}{\partial x}\left( \frac{\pi +3}{3}, \frac{\pi+1}{2}\right)+ \frac{\partial z}{\partial y}\left( \frac{\pi +3}{3}, \frac{\pi+1}{2}\right) $$</span></p>
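<p>A numeric sanity check of this hint (my addition; identifying the point is an assumption I am making): the given point <span class="math-container">$(x,y)=\left(\frac{\pi+3}{3},\frac{\pi+1}{2}\right)$</span> corresponds to <span class="math-container">$(u,v)=(\pi/3,\pi/2)$</span>, since <span class="math-container">$\pi/3+\sin(\pi/2)=\frac{\pi+3}{3}$</span> and <span class="math-container">$\pi/2+\cos(\pi/3)=\frac{\pi+1}{2}$</span>. Solving the <span class="math-container">$2\times2$</span> linear system at that point:</p>

```python
import math

u, v = math.pi / 3, math.pi / 2

Zu, Zv = v**2, 2 * u * v            # partials of Z(u,v) = u * v**2
xu, xv = 1.0, math.cos(v)           # x = u + sin v
yu, yv = -math.sin(u), 1.0          # y = v + cos u

# Solve  [xu yu; xv yv] (zx, zy)^T = (Zu, Zv)^T  by Cramer's rule
det = xu * yv - yu * xv
zx = (Zu * yv - yu * Zv) / det
zy = (xu * Zv - Zu * xv) / det
print(zx + zy)                      # ≈ 8.606, i.e. 7*pi^2/12 + sqrt(3)*pi^2/6
```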
|
3,019,882 |
<p>I have already found a proof of the statement, but there is a step in it that I did not understand. I will write it down, so that maybe someone can explain that part to me. But if there are easier ways to prove the statement, I would be delighted to know.</p>
<p><span class="math-container">$\alpha:=\sqrt{2}+\sqrt{3}\notin U:= \mathbb{Q}(\sqrt{2})\cup\mathbb{Q}(\sqrt{3})\iff\alpha \notin \mathbb{Q}(\sqrt{2})$</span> and <span class="math-container">$\alpha \notin \mathbb{Q}(\sqrt{3})$</span></p>
<p>Proof: <span class="math-container">$\alpha \notin \mathbb{Q}(\sqrt{2})$</span></p>
<p>Assume <span class="math-container">$\exists_{r,s\in \mathbb{Q}}r+s\sqrt{2}=\sqrt{2}+\sqrt{3}$</span></p>
<p><span class="math-container">$\iff \sqrt{3}=r+(s-1)\sqrt2$</span></p>
<p><span class="math-container">$\iff 3=r^2+2r(s-1)\sqrt2+2(s-1)^2$</span></p>
<p>This implies that $3=0$, or $\sqrt3\in \mathbb{Q}$, or $\sqrt{3/2}\in \mathbb{Q}$, or $\sqrt{2}\in \mathbb{Q}$.</p>
<p>I don't understand the last implication, I Need some help here.</p>
<p>Many Thanks </p>
|
Sujit Bhattacharyya
| 524,692 |
<p>Instead of that, just check whether <span class="math-container">$U:=\mathbb{Q}(\sqrt 2)\cup \mathbb{Q}(\sqrt 3)$</span> is even a group under addition.</p>
<p>Take <span class="math-container">$\sqrt2,\sqrt3\in U$</span> (both exist in <span class="math-container">$U$</span>), but <span class="math-container">$\sqrt2+\sqrt3\notin U$</span>.</p>
<p>So definitely it cannot be a Subfield of <span class="math-container">$\mathbb{R}$</span>.</p>
|
2,682,531 |
<p>The definition of a convex set is the following:</p>
<blockquote>
<p>A set $\Omega \subset \mathbb R^n$ is convex if $\alpha x + (1 − \alpha) y \in \Omega, \forall x, y \in \Omega$ and $\forall \alpha \in [0, 1]$.</p>
</blockquote>
<p>With this it should be easy enough to prove that a set is not convex: just find a counterexample. But how do you prove that it <em>is</em> convex? How do I do it for the unit disk? </p>
<p>$$\Omega = \{(x, y) \in \mathbb R^2 \mid x^2 + y^2 \leq 1\}$$</p>
<p>Also what exactly does it mean for a set to be convex? </p>
|
Rodrigo de Azevedo
| 339,790 |
<p>If one can prove that a given set is a <a href="https://en.wikipedia.org/wiki/Spectrahedron" rel="nofollow noreferrer">spectrahedron</a>, then one can conclude that given set is <em>convex</em>.</p>
<p>For example, the unit disk can be represented by the following <a href="https://en.wikipedia.org/wiki/Linear_matrix_inequality" rel="nofollow noreferrer">linear matrix inequality</a> (LMI)</p>
<p><span class="math-container">$$\begin{bmatrix} 1 & x & 0\\ x & 1 & y\\ 0 & y & 1\end{bmatrix} \succeq \mathrm O_3$$</span></p>
<p>and, thus, is a (convex) spectrahedron. Note, however, that not all convex sets are spectrahedra.</p>
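<p>The claim can be verified numerically (a sketch of mine, not part of the original answer): a symmetric <span class="math-container">$3\times3$</span> matrix is PSD iff all of its principal minors are nonnegative, and for this matrix the only minors that can become negative are <span class="math-container">$1-x^2$</span>, <span class="math-container">$1-y^2$</span>, and <span class="math-container">$\det = 1-x^2-y^2$</span> (the remaining ones equal <span class="math-container">$1$</span>).</p>

```python
def in_disk_via_lmi(x, y, tol=1e-12):
    # Principal minors of [[1, x, 0], [x, 1, y], [0, y, 1]] that can be negative
    minors = [1 - x * x, 1 - y * y, 1 - x * x - y * y]
    return all(m >= -tol for m in minors)

for (x, y) in [(0, 0), (0.3, -0.4), (1, 0), (0.9, 0.9), (2, 0)]:
    print((x, y), in_disk_via_lmi(x, y))   # True for the first three points, False otherwise
```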
<hr />
<p><a href="/questions/tagged/convex-analysis" class="post-tag" title="show questions tagged 'convex-analysis'" rel="tag">convex-analysis</a> <a href="/questions/tagged/linear-matrix-inequality" class="post-tag" title="show questions tagged 'linear-matrix-inequality'" rel="tag">linear-matrix-inequality</a> <a href="/questions/tagged/spectrahedra" class="post-tag" title="show questions tagged 'spectrahedra'" rel="tag">spectrahedra</a></p>
|
2,606,999 |
<p>$f: X \to Y$ prove that if $A \subseteq X$, then A is a subset of the pre-image of the image of $A$, which is shown by these symbols respectively: $f^{-1}[f[A]]$</p>
<p>This is my proof:</p>
<p>If $A \nsubseteq f^{-1}[f[A]]$, then there exists "$x$" as an element of $A$, such that $f^{-1}[f[A]] \neq x$
which is a contradiction.</p>
<p>Is this the correct way of going about doing this? I'm also not sure if we can assume that $f^{-1}[f[A]] = x$ is always true</p>
|
The Phenotype
| 514,183 |
<p>$f^{-1}[f[A]]$ is a set, and $x$ is an element. They cannot be equal.</p>
<p>The correct way of proving this is: let $x\in A$, then $$f(x)\in \{f(x)\ |\ x\in A\}=f[A]$$ by the definition of image. Now because $f(x)\in f[A]$ (and obviously $x\in X$), we have that $$x\in\{x\in X\ |\ f(x)\in f[A]\}=f^{-1}[f[A]]$$ by the definition of preimage. Therefore $A\subseteq f^{-1}[f[A]]$.</p>
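<p>A tiny computational illustration (my addition): with a non-injective <span class="math-container">$f$</span> the inclusion can be strict, which is why only <span class="math-container">$\subseteq$</span> holds in general.</p>

```python
def image(f, A):
    return {f(x) for x in A}

def preimage(f, X, S):
    return {x for x in X if f(x) in S}

X = {-2, -1, 0, 1, 2}
f = lambda x: x * x              # not injective
A = {1, 2}

fA = image(f, A)                 # {1, 4}
back = preimage(f, X, fA)        # {-2, -1, 1, 2}
print(A <= back, A == back)      # True False — the inclusion is strict here
```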
|
4,489,898 |
<p>After 18 months of studying an advanced junior high school mathematics course, I'm doing a review of the previous 6 months, starting with solving difficult quadratics that are not easily factored, for example:
<span class="math-container">$$x^2+6x+2=0$$</span>
This could be processed via the quadratic equation but the course I'm working through asks me to use the complete the square method. I can do it, and I appreciate the geometric illustration of what is happening.</p>
<p>But it's so powerful and elegant, I can't help but wonder where else this method of adding something into an expression only to take it away in another is employed in mathematics. And is there a name for the general case of this kind of operation?</p>
<p><a href="https://i.stack.imgur.com/I0fmi.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/I0fmi.jpg" alt="geometric complete the square" /></a></p>
|
emacs drives me nuts
| 746,312 |
<p>One example is multiplying an expression by some term and dividing it out again, like:</p>
<p><span class="math-container">$$\begin{align}
\frac1{\sqrt{n+1}+\sqrt n}
&= \frac{\sqrt{n+1}-\sqrt n}{(\sqrt{n+1}+\sqrt n)(\sqrt{n+1}-\sqrt n)} \\
&= \frac{\sqrt{n+1}-\sqrt n}{n+1 - n} \\
&= \sqrt{n+1}-\sqrt n\\
\end{align}$$</span></p>
<p>This also involves a binomial formula, namely <span class="math-container">$(a+b)(a-b) = a^2-b^2$</span>. This is useful when calculating sums which can be turned into telescoping sums:</p>
<p><span class="math-container">$$\begin{align}
\sum_{k=1}^n\frac1{\sqrt{k+1}+\sqrt k}
&= \frac1{\sqrt2 + 1}+\frac1{\sqrt3 + \sqrt 2}+\frac1{\sqrt4 + \sqrt 3}+\cdots\\
&= \sum_{k=1}^n (\sqrt{k+1}-\sqrt k) \\
&= \sqrt{n+1}-1
\end{align}$$</span></p>
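<p>A quick numeric check of the telescoping identity (my addition, not part of the original answer):</p>

```python
import math

def partial_sum(n):
    return sum(1 / (math.sqrt(k + 1) + math.sqrt(k)) for k in range(1, n + 1))

for n in (1, 10, 1000):
    print(n, partial_sum(n), math.sqrt(n + 1) - 1)   # the two columns agree
```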
|
4,489,898 |
<p>After 18 months of studying an advanced junior high school mathematics course, I'm doing a review of the previous 6 months, starting with solving difficult quadratics that are not easily factored, for example:
<span class="math-container">$$x^2+6x+2=0$$</span>
This could be processed via the quadratic equation but the course I'm working through asks me to use the complete the square method. I can do it, and I appreciate the geometric illustration of what is happening.</p>
<p>But it's so powerful and elegant, I can't help but wonder where else this method of adding something into an expression only to take it away in another is employed in mathematics. And is there a name for the general case of this kind of operation?</p>
<p><a href="https://i.stack.imgur.com/I0fmi.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/I0fmi.jpg" alt="geometric complete the square" /></a></p>
|
Community
| -1 |
<blockquote>
<p>Evaluating recurrence relations using generating functions.</p>
</blockquote>
<p>To evaluate <span class="math-container">$F_n = F_{n-1} + F_{n-2}$</span> (<span class="math-container">$F_0 = 0, F_1 = 1$</span>), we can construct the generating function <span class="math-container">$F(x) = \sum_{n=0}^\infty F_n x^n$</span>. Note that the <span class="math-container">$x^n$</span> terms have been <em>added</em> and they will be <em>taken away</em> at the final step of the process.</p>
<p><span class="math-container">$$F(x) = \sum_{n=0}^\infty \color{green}{F_n} x^n = F_0 + F_1 x + \sum_{n = 2}^\infty F_n x^n$$</span>
<span class="math-container">$$xF(x) = \sum_{n = 1}^\infty F_{n-1}x^n = F_0x + \sum_{n=2}^\infty F_{n-1}x^n$$</span>
<span class="math-container">$$x^2F(x) = \sum_{n=2}^\infty F_{n-2}x^n$$</span></p>
<p>Hence,</p>
<p><span class="math-container">$$F(x) - xF(x) - x^2F(x) = F_0 + F_1 x - F_0x + \sum_{n = 2}^\infty (F_n - F_{n-1} - F_{n-2}) x^n$$</span></p>
<p>From the recurrence, it follows that <span class="math-container">$F_n - F_{n-1} - F_{n-2} = 0$</span>. Therefore, substituting values, we get</p>
<p><span class="math-container">\begin{align}
F(x) - xF(x) - x^2F(x) &= x\\
F(x) &=\frac{x}{1-x-x^2}\\
&= \frac{1}{\sqrt{5}}\left(\frac{1}{1-\varphi x} - \frac{1}{1 - \psi x}\right), \varphi = \frac{1 + \sqrt{5}}{2}, \psi = \frac{1 - \sqrt{5}}{2}\\
&= \frac{1}{\sqrt{5}}\left(\sum_{n=0}^\infty (\varphi x)^n - \sum_{n=0}^\infty (\psi x)^n\right)\\
&= \sum_{n=0}^\infty \color{green}{\frac{\varphi^n - \psi^n}{\sqrt{5}}}x^n
\end{align}</span></p>
<p>Now, all we have to do is <em>take away</em> the <span class="math-container">$x^n$</span> terms we previously put in. And we get the <a href="https://en.wikipedia.org/wiki/Fibonacci_number" rel="nofollow noreferrer">Fibonacci sequence</a></p>
<p><span class="math-container">$$\boxed{F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}}$$</span></p>
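<p>A quick floating-point check of the boxed formula against the recurrence (my addition; the <code>round</code> call absorbs the tiny float error, which is safe for small <span class="math-container">$n$</span>):</p>

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

def binet(n):
    return round((phi**n - psi**n) / sqrt(5))

fib = [0, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

print(all(binet(n) == fib[n] for n in range(20)))   # True
```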
|
4,489,898 |
<p>After 18 months of studying an advanced junior high school mathematics course, I'm doing a review of the previous 6 months, starting with solving difficult quadratics that are not easily factored, for example:
<span class="math-container">$$x^2+6x+2=0$$</span>
This could be processed via the quadratic equation but the course I'm working through asks me to use the complete the square method. I can do it, and I appreciate the geometric illustration of what is happening.</p>
<p>But it's so powerful and elegant, I can't help but wonder where else this method of adding something into an expression only to take it away in another is employed in mathematics. And is there a name for the general case of this kind of operation?</p>
<p><a href="https://i.stack.imgur.com/I0fmi.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/I0fmi.jpg" alt="geometric complete the square" /></a></p>
|
CR Drost
| 42,154 |
<p>With tensors/matrices, we often do this to provide various identities. For example, every matrix is the sum of a symmetric matrix <span class="math-container">$A_{ij} = A_{ji}$</span> and a skew-symmetric matrix <span class="math-container">$A_{ij} = -A_{ji}.$</span> The easiest proof is,
<span class="math-container">$$
\begin{align}
A_{ij} &= \frac12 A_{ij} + \frac12 A_{ij}\\
&= \left(\frac12 A_{ij} + \frac12 A_{ji}\right) + \left(\frac12 A_{ij} - \frac12 A_{ji}\right)
\end{align}
$$</span>
And the first term is manifestly symmetric while the second is manifestly antisymmetric.</p>
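<p>The decomposition is easy to check numerically (my illustration, not part of the original answer), here for a <span class="math-container">$2\times2$</span> matrix in plain Python:</p>

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[1.0, 2.0], [3.0, 4.0]]
At = transpose(A)
S = [[(A[i][j] + At[i][j]) / 2 for j in range(2)] for i in range(2)]  # symmetric part
K = [[(A[i][j] - At[i][j]) / 2 for j in range(2)] for i in range(2)]  # skew part

print([[S[i][j] + K[i][j] for j in range(2)] for i in range(2)] == A)  # True: S + K = A
print(S == transpose(S))                                               # True: S symmetric
print(all(K[i][j] == -K[j][i] for i in range(2) for j in range(2)))    # True: K skew
```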
<p>The product rule of calculus is sometimes used in this way, it says that <span class="math-container">$${\mathrm d\phantom t\over\mathrm dt}\big(u(t)~v(t)\big) = u~{\mathrm d v\over\mathrm dt} + v~{\mathrm d u\over\mathrm dt}$$</span>and the problem is that you usually have something of the form <span class="math-container">$u~\mathrm dv/\mathrm dt$</span> which by itself is not strong enough to use this, so you add the other term. So for instance
<span class="math-container">$$
\begin{align}
x~\frac{\mathrm d^2x}{\mathrm dt^2} &= x~\frac{\mathrm d^2x}{\mathrm dt^2} + \left(\frac{\mathrm dx}{\mathrm dt}\right)^2 - \left(\frac{\mathrm dx}{\mathrm dt}\right)^2\\
&= \frac{\mathrm d\phantom t}{\mathrm dt}\left( x~\frac{\mathrm dx}{\mathrm dt} \right) - \left(\frac{\mathrm dx}{\mathrm dt}\right)^2 \\
&= \frac{\mathrm d^2}{\mathrm dt^2}\left(\frac12 x^2 \right) - \left(\frac{\mathrm dx}{\mathrm dt}\right)^2
\end{align}$$</span>
The very last step is essentially the same process applied over again, that is one can observe that <span class="math-container">$x~\frac{\mathrm dx}{\mathrm dt} = \frac{\mathrm d}{\mathrm dt}(x^2) - \frac{\mathrm dx}{\mathrm dt}~x$</span> and then collect the like terms on the left-hand-side and divide by 2.</p>
<p>If this reminds you of something it is probably integration by parts, which is this exact procedure under an integral sign. A lot of times folks don't realize that the integral sign is kind of formally unnecessary and will insist on doing the above manipulation by first forming the definite integral, then manipulating it with integration by parts, and finally differentiating it. But yeah there it looks like:<span class="math-container">$$
\begin{align}
\int u~\frac{\mathrm dv}{\mathrm dt}\mathrm dt &= \int \left(u~\frac{\mathrm dv}{\mathrm dt} + v\frac{\mathrm du}{\mathrm dt}\right)\mathrm dt - \int v~\frac{\mathrm du}{\mathrm dt}~\mathrm dt\\
&= u~v- \int v~\frac{\mathrm du}{\mathrm dt}~\mathrm dt
\end{align}
$$</span>
although, as with the quadratic formula, many folks just memorize the result.</p>
|
2,550,655 |
<p>When we're integrating a one variable function, there is only one path to follow between points $a$ and $b$.</p>
<p>$F(b)=F(a)+h(f(a)+f(a+h)+f(a+2h)+........)$ where $h$ is very small. So, we approach from $a$ to $b$ in steps of $h$ along the unique path joining $a$ and $b$.</p>
<p>I was wondering what if we have a multi-variable function $f(x,y)$. What does it mean to integrate $f(x,y)$ from $(a,b)$ to $(c,d)$? Clearly, there isn't a unique path joining $(a,b)$ and $(c,d)$.</p>
<p>We have partial integration:</p>
<p>$$\int_{(a,b)}^{(c,d)} f(x,y)dxdy=\int_{a}^{c}f(x,b)dx+\int_{b}^{d}f(c,y)dy$$</p>
<p>But that's basically doing it in two steps, first keeping $y$ constant then $x$ constant. How can we vary both $x$ and $y$ simultaneously?</p>
<p>I thought of finding a curve which contains both the points $(a,b)$ and $(c,d)$ and varying both variables simultaneously along that path. Suppose $g(x)$ is a curve such that $g(a)=b$ and $g(c)=d$. Then we can replace $y$ with $g(x)$ to get:</p>
<p>$$\int_{(a,b)}^{(c,d)} f(x,y)dxdy=\int_a^c f(x,g(x))g'(x)dx$$</p>
<p>Does this integration of $f(x,y)$ from $(a,b)$ to $(c,d)$ depend on the choice of curve $g(x)$?</p>
|
user
| 505,767 |
<p>Only for some special functions the path-integral is path independent.</p>
<p>In general it gives different results.</p>
<p>Take a look here:</p>
<p><a href="https://math.stackexchange.com/questions/1734106/what-does-it-mean-for-an-integral-to-be-independent-of-a-path">What does it mean for an integral to be independent of a path?</a></p>
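<p>Here is a concrete numeric illustration of that path dependence (my addition): take <span class="math-container">$f(x,y)=xy$</span> and use the integral form from the question, <span class="math-container">$\int_0^1 f(x,g(x))\,g'(x)\,dx$</span>, along two curves joining <span class="math-container">$(0,0)$</span> and <span class="math-container">$(1,1)$</span>.</p>

```python
def integrate(h, a, b, n=10000):
    # simple midpoint rule
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x, y: x * y
line  = integrate(lambda x: f(x, x) * 1.0, 0, 1)       # g(x) = x,   g'(x) = 1
parab = integrate(lambda x: f(x, x**2) * 2 * x, 0, 1)  # g(x) = x^2, g'(x) = 2x

print(line, parab)   # ≈ 1/3 vs ≈ 2/5: different values for the same endpoints
```

Same endpoints, different paths, different values — so in general the integral does depend on the choice of <span class="math-container">$g$</span>.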
|
3,901,613 |
<p>Determine <span class="math-container">$\int_Axy\space\space d(x,y)$</span>, when <span class="math-container">$A$</span> is a closed set, bounded by the curve <span class="math-container">$y=x-1$</span> and the parabola <span class="math-container">$y^2 = 2x+6.$</span></p>
<p>I believe we only need to know the range, where <span class="math-container">$A$</span> is actually bounded by these two curves. I think this will be <span class="math-container">$-3\leq x \leq 5$</span>. But to be specific, isn't <span class="math-container">$A\subset Int ( G_h \cap G_g)$</span>, s.t <span class="math-container">$g(x)=x-1$</span>, and <span class="math-container">$h(x)=y^2 = 2x+6$</span>? Where <span class="math-container">$-3\leq x \leq 5$</span>. Can we determine the integral as <span class="math-container">$$\int_{-3}^5\Bigg(\int_{x-1}^{2x+6}xy\space\space dy\Biggr)dx\space\space?$$</span></p>
<p>Also, I'm assuming that <span class="math-container">$(x,y)\in \Bbb{R^2}$</span>.</p>
|
Taras
| 583,783 |
<p>Note that <span class="math-container">$2^{16^x} = 16^{2^x}$</span> can be rewritten as <span class="math-container">$2^{2^{4x}} = 2^{2^{x+2}}$</span>, since <span class="math-container">$16^x = 2^{4x}$</span> and <span class="math-container">$16^{2^x} = 2^{4\cdot 2^x} = 2^{2^{x+2}}$</span>.</p>
<p>Then <span class="math-container">$4x = x + 2$</span>, which gives <span class="math-container">$x = 2/3$</span>.</p>
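<p>(A one-line numeric confirmation, added as an illustration:)</p>

```python
import math

x = 2 / 3
lhs = 2 ** (16 ** x)
rhs = 16 ** (2 ** x)
print(math.isclose(lhs, rhs))   # True: both sides ≈ 81.5
```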
|
353,920 |
<p>I am studying a problem where a quadratic matrix equation emerges. The equation is as follow (all capital letters are n by n matrices)</p>
<p><span class="math-container">$(I-X^{\prime}L)X=D$</span></p>
<p>where <span class="math-container">$L$</span> and <span class="math-container">$D$</span> are both symmetric and positive definite. How much can I say about a solution <span class="math-container">$X$</span>?</p>
|
yc0000
| 153,034 |
<p>I figure out something and would like to share and check whether it is correct.</p>
<p>It is perfectly analogous to real quadratic equations.</p>
<p>Since <span class="math-container">$L$</span> is symmetric and positive definite, it can be decomposed as</p>
<p><span class="math-container">$L=U_L^{\prime} \Lambda_L U_L$</span> where <span class="math-container">$U_L$</span> is orthonormal, and <span class="math-container">$\Lambda_L$</span> is diagonal with positive entries. </p>
<p>Let <span class="math-container">$Q=U_L^{\prime} \Lambda_L^{-1/2} U_L$</span>, which is the inverse of the square root of <span class="math-container">$L$</span>, and it is symmetric.</p>
<p>Then the above equation is equivalent to </p>
<p><span class="math-container">$\tilde{X}^{\prime} \tilde{X}-Q\tilde{X}=D$</span> where <span class="math-container">$\tilde{X}=Q^{-1}X$</span></p>
<p>Suppose, in addition, that <span class="math-container">$D$</span> and <span class="math-container">$L$</span> are simultaneously diagonalizable; I conjecture that <span class="math-container">$\tilde{X}$</span> is symmetric. Therefore, the above is equivalent to</p>
<p><span class="math-container">$(\tilde{X}-\frac{1}{2}Q)^{\prime}(\tilde{X}-\frac{1}{2}Q)=\frac{1}{4}L^{-1}-D$</span></p>
<p>Note that the right-hand side is symmetric; hence, there exists a real solution of <span class="math-container">$\tilde{X}$</span> if and only if the right-hand side is positive semi-definite.</p>
<p>And after a bit of algebra, if solutions exist, it would be</p>
<p><span class="math-container">$X=\frac{1}{2}L^{-1}[I \pm (I-4LD)^{1/2}]$</span> where <span class="math-container">$(I-4LD)^{1/2}$</span> is the real p.s.d. square root of <span class="math-container">$(I-4LD)$</span>, which exists because <span class="math-container">$(I-4LD)$</span> is p.s.d. and symmetric.</p>
<p>Hence, there are at most two solutions. </p>
<p>Are all of these arguments correct?</p>
|
4,570,396 |
<p>Is the set <span class="math-container">$\{\frac 1n :n\in\mathbb{N}\}$</span> countable?</p>
<p>This was part of the Exercises in a book on Real Analysis. The answer given is that this set is uncountable.</p>
<p><strong>My work:</strong></p>
<p>I'm just starting on Real Analysis and, to my knowledge, this set is countable, since we can map every element of this set to natural numbers as
<span class="math-container">\begin{eqnarray}1&\to&1\\
2&\to&\frac12\\
3&\to&\frac13
\end{eqnarray}</span>
and so on.</p>
<p>So, is there a mistake in the book?</p>
|
Nicolás Atanes Santos
| 1,116,223 |
<p>Yes. Since a subset of a countable set is countable, and the rationals are countable (Countability of Rational Numbers), the set <span class="math-container">$\{\frac 1n :n\in\mathbb{N}\}$</span> is countable.
Another way to see it is that for every <span class="math-container">$n$</span> the next element exists: subtracting <span class="math-container">$\frac{1}{n^2+n}$</span> from <span class="math-container">$\frac 1n$</span> gives <span class="math-container">$\frac {1}{n+1}$</span>, so it always exists. You can use proof by induction or reductio ad absurdum.</p>
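<p>As a quick sanity check (an illustrative sketch, not part of the answer above): the step from one element to the next rests on the exact identity <span class="math-container">$\frac 1n - \frac{1}{n^2+n} = \frac{1}{n+1}$</span>, which exact rational arithmetic confirms:</p>

```python
from fractions import Fraction

# Verify 1/n - 1/(n^2 + n) == 1/(n + 1) exactly for the first fifty n.
identity_holds = all(
    Fraction(1, n) - Fraction(1, n * n + n) == Fraction(1, n + 1)
    for n in range(1, 51)
)
```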
|
719,056 |
<p>I'm writing a Maple procedure and I have a line that is "if ... then ..." and I would like it to be "if k is an integer" (or "if k^2 is a square") - how would one say that in Maple? Thanks!</p>
|
DanielV
| 97,045 |
<p>The relation
$$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$$</p>
<p>requires that $u$ has a nonzero relationship with $x$ and $y$. This doesn't have to be explicitly stated when the chain rule is written as
$$D_x f(g(x)) = f'(g(x)) \cdot g'(x)$$</p>
<p>Example:</p>
<p>Consider using $g = \text{gravitational force}$ and $r = \text{distance}$. You might know:</p>
<p>$$g = G\frac{m_1m_2}{r^2}$$</p>
<p>What if $u$ is your temperature? Perhaps your temperature does not change based on your location.
$$\frac{du}{dr} = 0$$
$$\frac{dg}{du} = 0$$</p>
<p>If you applied the chain rule without realizing that $u$ is an independent variable, you'd get:</p>
<p>$$\begin{align}\frac{dg}{dr} &= \frac{dg}{du} \cdot \frac{du}{dr}\\
&= 0 \cdot 0 \end{align}$$</p>
<p>...and I think we can agree that gravitational force is not universally constant. So when using the differential form of the chain rule, make sure you are not using an independent variable.</p>
|
758,950 |
<p>I have a pretty straightforward combinatorical problem which is an exercise to one paper about generating functions.</p>
<ol>
<li>How many ways are there to get a sum of 14 when 4 distinguishable dice are rolled? </li>
</ol>
<p>So, one die has numbers 1..6, and since the dice are distinguishable, we should use exponential generating functions (we count sequences of rolled dice), because $3,4,3,4$ differs from $3,3,4,4$. So, we end up with the answer
$$[\frac{x^{14}}{14!}](x+\frac{x^2}{2}+\frac{x^3}{3!}+\frac{x^4}{4!}+\frac{x^5}{5!}+\frac{x^6}{6!})^4$$
How can we nicely calculate the coefficient of $\frac{x^{14}}{14!}$? I don't want to do this brutally, because the next task is </p>
<p>2. Find the generating function for the number of ways a sum of $n$ can occur when rolling a die an infinite (or at least $n$) number of times.</p>
<p>I'd appreciate some help on such problems and how to "wrap" such limited exponential series.</p>
|
Stefan Gruenwald
| 149,416 |
<p>You can do it quite easily with Python:</p>
<pre><code>from sympy.abc import x
from sympy import expand
from math import factorial

d1 = expand((1 + x + x**2/2 + x**3/factorial(3) + x**4/factorial(4)
             + x**5/factorial(5) + x**6/factorial(6))**4).as_coefficients_dict()
print(d1[x**14])
</code></pre>
<p>It returns:</p>
<p>169/64800</p>
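<p>As a cross-check (an editorial sketch; the ordinary generating function is used here instead of the exponential one above): counting ordered outcomes of four distinguishable dice directly means reading off the coefficient of $x^{14}$ in $(x+x^2+\cdots+x^6)^4$, which a plain polynomial convolution computes without any libraries:</p>

```python
# One die contributes the polynomial x + x^2 + ... + x^6.
die = [0, 1, 1, 1, 1, 1, 1]  # coefficients of x^0 .. x^6

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

poly = [1]
for _ in range(4):          # four dice
    poly = poly_mul(poly, die)

ways_14 = poly[14]          # ordered outcomes summing to 14
```

<p>This gives <code>146</code> ordered outcomes, agreeing with the inclusion-exclusion count $\binom{13}{3}-4\binom{7}{3}=146$.</p>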
|
2,439,202 |
<p>I want to evaluate this limit, but I don't know which method to use:
$$\lim_{n\to \infty}a_n= \lim_{n\to\infty} \frac{\binom n 1} {n} +\frac {\binom n 2} {n^2} + \cdots + \frac {\binom n n}{n^n} $$</p>
|
user577215664
| 475,762 |
<p><strong><em>hint</em></strong></p>
<p>Newton's binomial $(x+y)^n=\sum_{k=0}^n \binom n k x^ky^{n-k}$</p>
<p>With $y=1$ and $x=\frac 1 n$</p>
<p>and the fact that $\lim_{ n\to \infty} (1+ \frac 1 n)^n =e$</p>
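<p>A numerical sketch of this hint (illustrative, not part of the original): by the binomial theorem the sum equals $(1+\frac 1 n)^n - 1$ exactly, so it approaches $e-1$:</p>

```python
from math import comb, e

n = 1000
# The expression inside the limit: sum of C(n, k) / n^k for k = 1..n.
partial = sum(comb(n, k) / n ** k for k in range(1, n + 1))
closed_form = (1 + 1 / n) ** n - 1  # binomial theorem with x = 1/n, y = 1

identity_gap = abs(partial - closed_form)  # float noise only
limit_gap = abs(partial - (e - 1))         # shrinks as n grows
```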
|
2,941,311 |
<p>Given diophantine equation <span class="math-container">$11x+17y +19z =2561$</span> , which <span class="math-container">$x,y,z \geq 1$</span></p>
<p>Find minimum and maximum value of <span class="math-container">$x+y+z$</span></p>
<p>I started by reducing the equation to <span class="math-container">$11x+17y +19z =2514$</span> (replacing each variable by one less), which guarantees <span class="math-container">$x,y,z \geq 1$</span>; I then set each variable to <span class="math-container">$0$</span> to find bounds on <span class="math-container">$x+y+z$</span>.</p>
<p>that comes to this<br>
<a href="https://i.stack.imgur.com/N3fwl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N3fwl.png" alt="enter image description here"></a></p>
<p>I'm not sure that my solution is correct. So, is the minimum <span class="math-container">$x+y+z = 137$</span> and the maximum <span class="math-container">$x+y+z = 231$</span>? Sorry for my English, and thank you in advance.</p>
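<p>Since the bounds are easy to check by machine, here is a brute-force sketch (an editorial aside; variable names are mine) enumerating all positive solutions of <span class="math-container">$11x+17y+19z=2561$</span>:</p>

```python
# Track the extreme values of x + y + z over all positive solutions.
best_min, best_max = None, None

for x in range(1, (2561 - 17 - 19) // 11 + 1):
    for y in range(1, (2561 - 11 * x - 19) // 17 + 1):
        rest = 2561 - 11 * x - 17 * y
        if rest >= 19 and rest % 19 == 0:
            s = x + y + rest // 19
            best_min = s if best_min is None else min(best_min, s)
            best_max = s if best_max is None else max(best_max, s)
```

<p>The search confirms a minimum of <span class="math-container">$137$</span> and a maximum of <span class="math-container">$231$</span>.</p>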
|
davidlowryduda
| 9,754 |
<p>My pocket example of such a limit is
<span class="math-container">$$ \lim_{x \to \infty} \frac{x + \sin x}{x + \cos x}.$$</span>
This limit is very clearly <span class="math-container">$1$</span>. But an application of l'Hopital's rule would lead to the consideration of
<span class="math-container">$$ \lim_{x \to \infty} \frac{1 + \cos x}{1 - \sin x},$$</span>
which doesn't exist! Thus existence of the left hand limit does not guarantee the existence of the right hand limit.</p>
|
4,465,528 |
<p>Solving the integral of <span class="math-container">$\cos^2x\sin^2 x$</span>:</p>
<p>My steps are: <span class="math-container">$(\cos x\sin x)^2=\left(\frac{\sin(2x)}{2}\right)^2$</span>. Now we know that</p>
<p><span class="math-container">$$\sin^2\alpha=\frac{1-\cos (2\alpha)}{2}\implies\left(\frac{\sin(2x)}{2}\right)^2=\frac 14\sin^2(2x)=\frac 14\cdot \frac{1-\cos (4x)}{2}=\frac 18(1-\cos (4x))$$</span>
Rewriting all the steps:</p>
<p><span class="math-container">$$\int \sin ^2\left(x\right)\cos ^2\left(x\right)dx=\frac{1}{8}\left(x-\frac{1}{4}\sin \left(4x\right)\right)+k, \,\, k\in \Bbb R \tag 1$$</span></p>
<p>Is there another method to solve this integral (1)?</p>
|
Gregory J. Puleo
| 183,812 |
<p>I disagree with your reasoning: you are assuming that the <span class="math-container">$300$</span> students who answered wrong to the third question and the <span class="math-container">$400$</span> students who answered wrong to the fourth question are totally separate from each other, but there's no guarantee of that. Ignoring for a moment the information about the first and second question, it seems possible that all the students who answered the third question wrong also answered the fourth question wrong, in which case there would be <span class="math-container">$600$</span> medals awarded. I believe <span class="math-container">$300$</span> is actually the <em>minimum</em> number of students that can get an award.</p>
<p>If that's true, then you need to show that <em>more</em> than <span class="math-container">$300$</span> students cannot have won a medal. You haven't used any information about the first and second questions yet; maybe that will be helpful in finishing things off.</p>
|
3,207,453 |
<p>I am studying the series <span class="math-container">$\sum\limits_{n=2}^\infty \frac{1}{n(\log n)^ {2}}$</span>.</p>
<p>I've tried with the root criterion</p>
<p><span class="math-container">$\lim_{n \to \infty} \sqrt[n]{\frac{1}{n(\log n)^ {2}}}>1$</span>
and the series should diverge.</p>
<p>But I'm not sure. Can someone help me understand?</p>
|
Bernard
| 202,857 |
<p><strong>Hint</strong>:</p>
<p>Use the integral test to show that the series <span class="math-container">$\;\displaystyle\sum\limits_{n=2}^\infty \frac{1}{n(\log n)^{\alpha}}$</span> converges if and only if <span class="math-container">$\alpha >1$</span>.</p>
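<p>To see the <span class="math-container">$\alpha=2$</span> case numerically (an added sketch): since <span class="math-container">$\frac{d}{dx}\left(-\frac{1}{\log x}\right)=\frac{1}{x(\log x)^2}$</span>, the tail integral from <span class="math-container">$2$</span> to <span class="math-container">$\infty$</span> equals <span class="math-container">$\frac{1}{\log 2}$</span>, and the partial sums indeed stay below the corresponding bound:</p>

```python
from math import log

def partial_sum(N, alpha=2):
    return sum(1.0 / (n * log(n) ** alpha) for n in range(2, N + 1))

s_small = partial_sum(10_000)
s_large = partial_sum(1_000_000)

# Integral test: the tail past n = 2 is below 1/log 2, so the whole
# series is below the n = 2 term plus 1/log 2.
upper_bound = 1.0 / (2 * log(2) ** 2) + 1.0 / log(2)
```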
|
3,784,889 |
<p><span class="math-container">$$\begin{array}{ll} \text{minimize} & f(x,y,z) := x^2 + y^2 + z^2\\ \text{subject to} & g(x,y,z) := xy - z + 1 = 0\end{array}$$</span></p>
<hr />
<p>I tried the Lagrange multipliers method, and the resulting system has no solution. So I posted it to see whether the question itself is wrong or I'm missing something.</p>
<p>So I made the Lagrangian equation <span class="math-container">$L(x,y,z,λ)=x^2 + y^2 + z^2 + λ(xy -z+1)$</span></p>
<p>and then</p>
<p><span class="math-container">$\partial L/\partial x = 2x + λy =0$</span></p>
<p><span class="math-container">$\partial L/\partial y = 2y + λx =0$</span></p>
<p><span class="math-container">$\partial L/\partial z = 2z - λ =0$</span></p>
<p><span class="math-container">$\partial L/\partial λ = xy -z +1 =0 $</span></p>
<p>The obvious solution for that system is x=0, y=0, z=1 and λ=2.</p>
<p>But solving it in an online solver for nonlinear systems of equation the answer I get is that it's unsolvable.</p>
<p>So my question is: what am I doing wrong?</p>
|
Rodrigo de Azevedo
| 339,790 |
<p>Since <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are polynomial, using <a href="http://www.sympy.org" rel="nofollow noreferrer">SymPy</a>'s <code>solve_poly_system</code>:</p>
<pre><code>>>> from sympy import *
>>> x, y, z, mu = symbols('x y z mu', real=True)
>>> L = x**2 + y**2 + z**2 + mu * (x*y - z + 1)
>>> solve_poly_system([diff(L,x), diff(L,y), diff(L,z), diff(L,mu)], x, y, z, mu)
[(0, 0, 1, 2), (-sqrt(2)*I, -sqrt(2)*I, -1, -2), (sqrt(2)*I, sqrt(2)*I, -1, -2)]
</code></pre>
<p>Hence, the <strong>only</strong> real solution is <span class="math-container">$(x,y,z,\mu) = (0, 0, 1, 2)$</span>. Not very insightful, however.</p>
<p>Let <span class="math-container">$\mathcal L$</span> be the Lagrangian. Computing <span class="math-container">$\partial_x \mathcal L$</span>, <span class="math-container">$\partial_y \mathcal L$</span> and <span class="math-container">$\partial_z \mathcal L$</span> and finding where they vanish,</p>
<p><span class="math-container">$$\begin{bmatrix} 2 & \mu & 0\\ \mu & 2 & 0\\ 0 & 0 & 2\end{bmatrix} \begin{bmatrix} x\\ y\\ z\end{bmatrix} = \begin{bmatrix} 0\\ 0\\ \mu\end{bmatrix}$$</span></p>
<p>Note that the matrix is singular when <span class="math-container">$\mu = \pm 2$</span>. Hence, we have three cases to consider.</p>
<hr />
<p><span class="math-container">$\color{blue}{\boxed{\mu = 2}}$</span></p>
<p>The solution set is the line parameterized by</p>
<p><span class="math-container">$$\begin{bmatrix} x\\ y\\ z\end{bmatrix} = \begin{bmatrix} t\\-t\\ 1\end{bmatrix}$$</span></p>
<p>and, since, <span class="math-container">$xy - z + 1 = 0$</span>, we obtain <span class="math-container">$t = 0$</span> and <span class="math-container">$\color{blue}{(x,y,z) = (0,0,1)}$</span>.</p>
<hr />
<p><span class="math-container">$\color{blue}{\boxed{\mu =-2}}$</span></p>
<p>The solution set is the line parameterized by</p>
<p><span class="math-container">$$\begin{bmatrix} x\\ y\\ z\end{bmatrix} = \begin{bmatrix} t\\ t\\ -1\end{bmatrix}$$</span></p>
<p>and, since, <span class="math-container">$xy - z + 1 = 0$</span>, we obtain the equation <span class="math-container">$t^2 = -2$</span>, which has no solution over the reals.</p>
<hr />
<p><span class="math-container">$\color{blue}{\boxed{\mu \neq \pm2}}$</span></p>
<p>The solution set is the line parameterized by</p>
<p><span class="math-container">$$\begin{bmatrix} x\\ y\\ z\end{bmatrix} = \begin{bmatrix} 0\\ 0\\ \frac{\mu}{2}\end{bmatrix}$$</span></p>
<p>and, since, <span class="math-container">$xy - z + 1 = 0$</span>, we obtain <span class="math-container">$\mu = 2$</span>, which contradicts the assumption.</p>
<hr />
<p><a href="https://i.stack.imgur.com/SabUG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SabUG.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/u3MiD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u3MiD.png" alt="enter image description here" /></a></p>
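<p>A small numerical check supporting this (an added sketch): on the constraint surface <span class="math-container">$z=xy+1$</span> the objective satisfies <span class="math-container">$x^2+y^2+(xy+1)^2-1=(x+y)^2+(xy)^2\ge 0$</span>, so <span class="math-container">$(0,0,1)$</span> is a global minimum with value <span class="math-container">$1$</span>:</p>

```python
# Objective restricted to the constraint z = xy + 1.
def f(x, y):
    return x * x + y * y + (x * y + 1) ** 2

# Sample a grid; every value should be >= f(0, 0) = 1.
grid = [(-3 + 0.25 * i, -3 + 0.25 * j) for i in range(25) for j in range(25)]
min_on_grid = min(f(x, y) for x, y in grid)
value_at_origin = f(0.0, 0.0)
```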
|
3,784,889 |
<p><span class="math-container">$$\begin{array}{ll} \text{minimize} & f(x,y,z) := x^2 + y^2 + z^2\\ \text{subject to} & g(x,y,z) := xy - z + 1 = 0\end{array}$$</span></p>
<hr />
<p>I tried the Lagrange multipliers method, and the resulting system has no solution. So I posted it to see whether the question itself is wrong or I'm missing something.</p>
<p>So I made the Lagrangian equation <span class="math-container">$L(x,y,z,λ)=x^2 + y^2 + z^2 + λ(xy -z+1)$</span></p>
<p>and then</p>
<p><span class="math-container">$\partial L/\partial x = 2x + λy =0$</span></p>
<p><span class="math-container">$\partial L/\partial y = 2y + λx =0$</span></p>
<p><span class="math-container">$\partial L/\partial z = 2z - λ =0$</span></p>
<p><span class="math-container">$\partial L/\partial λ = xy -z +1 =0 $</span></p>
<p>The obvious solution for that system is x=0, y=0, z=1 and λ=2.</p>
<p>But solving it in an online solver for nonlinear systems of equation the answer I get is that it's unsolvable.</p>
<p>So my question is: what am I doing wrong?</p>
|
Moko19
| 618,171 |
<p>This can be solved by at least two methods.
First, let's solve without Lagrange multipliers, using a convenient change of variables. Let <span class="math-container">$u=x+y, v=xy$</span>. This results in <span class="math-container">$u^2=x^2+y^2+2xy=x^2+y^2+2v$</span>.</p>
<p>We now need to minimize <span class="math-container">$u^2-2v+z^2$</span> under the constraint <span class="math-container">$v-z+1=0$</span>. We can rearrange this constraint to the form <span class="math-container">$z=1+v$</span>, and therefore <span class="math-container">$z^2=1+2v+v^2$</span>. Substituting this, we need to minimize <span class="math-container">$u^2+v^2+1$</span>, i.e. <span class="math-container">$u^2+v^2$</span>. The minimum of this is at <span class="math-container">$u=0, v=0$</span>, which returns <span class="math-container">$x=0, y=0, z=1$</span>.</p>
<p>Solving this using Lagrange:</p>
<p><span class="math-container">$$L=x^2+y^2+z^2-\lambda(xy-z+1)=x^2+y^2+z^2-\lambda xy+\lambda z-\lambda$$</span>
<span class="math-container">$$\frac{\partial L}{\partial z}=2z+\lambda=0\rightarrow\lambda=-2z$$</span>
<span class="math-container">$$\frac{\partial L}{\partial x}=2x-\lambda y=0\rightarrow x=\frac{\lambda y}{2}=-yz$$</span>
<span class="math-container">$$\frac{\partial L}{\partial y}=2y-\lambda x=0\rightarrow2y-2yz^2=0$$</span><span class="math-container">$$\rightarrow y=0, x=0\quad\cup\quad z=1,\lambda=-2,x=-y\quad\cup\quad z=-1,\lambda=2,x=y$$</span>
We have three possible cases. We will plug each into the equation for the constraint <span class="math-container">$xy-z+1=0$</span>.</p>
<p>If <span class="math-container">$x=0, y=0$</span>, our constraint becomes <span class="math-container">$-z+1=0$</span>, which has the solution <span class="math-container">$x=0, y=0, z=1$</span>, with the value of <span class="math-container">$x^2+y^2+z^2=1$</span></p>
<p>If <span class="math-container">$z=1, x=-y$</span>, our constraint becomes <span class="math-container">$-y^2-1+1=0$</span>, which has the exact same solution</p>
<p>If <span class="math-container">$z=-1, x=y$</span>, our constraint becomes <span class="math-container">$y^2+1+1=0$</span>, which has no real solutions. Hence the only real critical point is <span class="math-container">$(0,0,1)$</span>, with minimum value <span class="math-container">$1$</span>.</p>
|
1,364,417 |
<p>Find a real number k such that the limit $$\lim_{n\to\infty}\ \left(\frac{1^4 + 2^4 + 3^4 +....+ n^4}{n^k}\right)$$ has a positive value.
If I am not mistaken, every even $k$ could be the answer. But the given answer is 5.</p>
|
MadSax
| 253,866 |
<p>Observe that
$$
1^4+2^4+...+n^4 = \sum_{i=1}^n i^4 = \frac{1}{30}(6n^5+15n^4+10n^3-n)
$$ so if $k<5$ the limit is $+\infty$ (does not exist), and if $k>5$ the limit is $0$; only $k=5$ gives a positive value, namely $\frac{6}{30}=\frac15$.</p>
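<p>A quick numerical sketch (illustrative only) checking both the closed form and the resulting value at $k=5$:</p>

```python
def power_sum(n):
    return sum(i ** 4 for i in range(1, n + 1))

def closed_form(n):
    return (6 * n ** 5 + 15 * n ** 4 + 10 * n ** 3 - n) // 30

formula_ok = all(power_sum(n) == closed_form(n) for n in range(1, 50))

# With k = 5, the ratio tends to 6/30 = 1/5.
n = 10_000
ratio = power_sum(n) / n ** 5
```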
|
3,102,218 |
<p>Given a fraction:</p>
<p><span class="math-container">$$\frac{a}{b}$$</span></p>
<p>I now add a number <span class="math-container">$n$</span> to both numerator and denominator in the following fashion:</p>
<p><span class="math-container">$$\frac{a+n}{b+n}$$</span></p>
<p>The basic property is that the second fraction is supposed to be closer to <span class="math-container">$1$</span> than the first one. My question is: how can we prove that?</p>
<p>What I have tried:</p>
<p>I know <span class="math-container">$\frac{n}{n} = 1$</span> so now adding numbers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> to it would actually "move it away" from <span class="math-container">$1$</span>. But I cannot understand why <span class="math-container">$\frac{a}{b}$</span> is actually farther away from <span class="math-container">$1$</span> than <span class="math-container">$\frac{a+n}{b+n}$</span>.</p>
<p>Why is that? What does it mean to add a number to both the numerator and denominator?</p>
|
Claude Leibovici
| 82,404 |
<p><em>Just for the fun of it, since you already received very good answers.</em></p>
<p>Perform the long division to get
<span class="math-container">$$\frac{a+n}{b+n}=1+\frac{a-b}n\left(1-\frac{b}{n}+\frac{b^2}{n^2}-\frac{b^3}{n^3} +\cdots\right)=1+\frac{a-b}n\sum_{k=0}^\infty (-1)^k \left(\frac bn\right)^k$$</span></p>
|
299,795 |
<p>I seem to have completely lost my bearing with implicit differentiation. Just a quick question:</p>
<p>Given $y = y(x)$ what is $$\frac{d}{dx} (e^x(x^2 + y^2))$$</p>
<p>I think it's the $\frac d{dx}$ confusing me; I don't know what effect it has compared to $\frac{dy}{dx}$. Any help will be greatly appreciated.</p>
|
Community
| -1 |
<p>You could write the equation as
\begin{equation*}
x^7(x^2+x+1)=0.
\end{equation*}
Now, solve the equation $x^7=0$ and $x^2+x+1=0$.</p>
|
73,039 |
<p>I have a huge data file which I can't ListPlot.</p>
<p>This code generates similar kind of data:</p>
<pre><code>datatest =RandomSample[Join[RandomReal[{0.5, 15}, 20], RandomReal[.1, 10000]]];
datatest2 = 5 + Riffle[datatest, -datatest];
</code></pre>
<p>I want to filter (delete) the part of the data that is not necessary as follows:</p>
<pre><code>peaks = FindPeaks[datatest2, 0, 0, 5.2];
ListPlot[datatest2, PlotRange -> All, Joined -> True,
Epilog -> {Red, PointSize[0.01], Point[peaks]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/RpKLb.jpg" alt="enter image description here">
Currently I am using some kind of long way to do the task.</p>
<p>Is there any signal processing functionality in MMA that can do this easily?</p>
<p>Thank you</p>
|
yohbs
| 367 |
<p>I'm not sure I understand what you need, but here's my try: If you want to keep only <code>2d+1</code> data points around each peak, you can use</p>
<pre><code>toKeep = Map[# + Range[-d, d] &, peaks[[All, 1]]];
choppedData = Map[Part[datatest2, #] &, toKeep];
</code></pre>
<p><code>choppedData</code> is a list of lists, the <code>i</code>-th list contains the <code>2d+1</code> values around the <code>i</code>-th peak. </p>
<p>To speed up you can use <code>ParallelMap</code> instead of <code>Map</code>.</p>
|
5,927 |
<p>I have a problem with the binomial coefficient $\binom{5}{7}$. I know that the solution is zero, but I have problems to reproduce that:</p>
<p>${\displaystyle \binom{5}{7}=\frac{5!}{7!\times(5-7)!}=\frac{5!}{7!\times(-2)!}=\frac{120}{5040\times-2}=\frac{120}{-10080}=-\frac{1}{84}}$</p>
<p>Where is my mistake?</p>
|
Bill Dubuque
| 242 |
<p>By definition $\rm\binom{n}k$ is the coefficient of $\rm x^k$ in $\rm (1+x)^n$ so it is $0$ for $\rm k > n\:$.</p>
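<p>Consistent with this definition (an aside, using Python's standard library): <code>math.comb</code> adopts the same convention and returns $0$ whenever $\rm k > n$:</p>

```python
from math import comb

value = comb(5, 7)                       # 0, since 7 > 5
row5 = [comb(5, k) for k in range(8)]    # row n = 5, padded with zeros
```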
|
6,929 |
<p>For example, I have posted quite a few questions regarding mathematics in general, because I want to learn more about the field, and I just want to make sure I get an answer or an opinion from someone who has good knowledge of my question. I tag it with the advice tag and I have seen, more than once, that people want to close it or it gets downvoted. It's not like it's a bad question, but people don't want it on the Math.SE website. So if that's the case and that's how people are going to be, what's the point of the advice tag if people just want to close down the question? And I have also noticed that it's not just my questions, but others' as well.</p>
|
Mariano Suárez-Álvarez
| 274 |
<p>There are currently 106 (not deleted) questions tagged with the «advice» tag, and only a few of them are closed: exactly 12.</p>
<p>There is the possibility that your questions are being closed for reasons other than the tag!</p>
<p>Moreover, as far as I can tell, none of your three advice questions is closed (although you have an extra one which is deleted)</p>
|
592,963 |
<p>Find $x^4+y^4$ if $x+y=2$ and $x^2+y^2=8$</p>
<p>So I started the problem by noting that $x^2+y^2=(x+y)^2 - 2xy$, but that doesn't help!</p>
<p>I also noticed that $x+y=2^1$ and $x^2+y^2=2^3$, so maybe $x^3+y^3=2^5$ and $x^4+y^4=2^7$, but I think this is just a coincidence.</p>
<p>So how can I solve this problem?</p>
<p>PLEASE, I need some help - and thanks for all!</p>
|
Mercy King
| 23,304 |
<p>\begin{eqnarray}
x^4+y^4&=&(x^2+y^2)^2-2x^2y^2=(x^2+y^2)^2-\frac12(2xy)^2=(x^2+y^2)^2-\frac12\left[(x+y)^2-(x^2+y^2)\right]^2\\
&=&8^2-\frac12(2^2-8)^2=56.
\end{eqnarray}</p>
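<p>A numeric sanity check (an added sketch): the two givens force $xy=\frac{(x+y)^2-(x^2+y^2)}{2}=-2$, so $x,y$ are the roots $1\pm\sqrt 3$ of $t^2-2t-2=0$, and substituting reproduces $56$:</p>

```python
from math import sqrt, isclose

x = 1 + sqrt(3)   # roots of t^2 - 2t - 2 = 0
y = 1 - sqrt(3)

givens_ok = isclose(x + y, 2) and isclose(x * x + y * y, 8)
fourth_power_sum = x ** 4 + y ** 4
```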
|
2,913,156 |
<p>I'm working through the definitions given in <strong>2.18</strong> in Baby Rudin, and I was wondering how to prove that <span class="math-container">$E = [0,1] \cap \Bbb{Q}$</span> is closed in <span class="math-container">$\Bbb{Q}$</span>.</p>
<p>Clearly, the set <span class="math-container">$[0,1] \cap \Bbb{R}$</span> is closed in <span class="math-container">$\Bbb{R}$</span>, since for all neighborhoods of a point <span class="math-container">$p \in [0,1]$</span>, we can always find a <span class="math-container">$q \neq p$</span> such that <span class="math-container">$q \in [0,1]$</span>. Intuitively, this is because for all <span class="math-container">$p \in [0,1] \cap \Bbb{R}$</span>, we can always find a <span class="math-container">$q \in [0,1] \cap \Bbb{R}$</span> such that <span class="math-container">$p - \epsilon < q < p$</span> or <span class="math-container">$p < q < p+\epsilon$</span> for all <span class="math-container">$\epsilon >0$</span>.</p>
<p>However, that the set <span class="math-container">$E = [0,1] \cap \Bbb{Q}$</span> is closed in <span class="math-container">$\Bbb{Q}$</span> is sort of clear, but not intuitive to me, since <span class="math-container">$\Bbb{Q}$</span> is not uncountably infinite as <span class="math-container">$\Bbb{R}$</span> is. Does one have to use that fact that it is closed in <span class="math-container">$\Bbb{R}$</span> and that <span class="math-container">$\Bbb{Q}$</span> is dense in <span class="math-container">$\Bbb{R}$</span>? How would one solve it from first principles? i.e. Something along the lines of: Consider an arbitrary <span class="math-container">$p \in [0,1] \cap \Bbb{Q}$</span>, we show that it is a limit point by...</p>
<p>Any help would be greatly appreciated!</p>
|
José Carlos Santos
| 446,262 |
<p>The density of $\mathbb Q$ plays no role here. If $A$ and $B$ are subsets of $\mathbb R$ and $A$ is a closed subset of $\mathbb R$, then $A\cap B$ is a closed subset of $B$. That's so because the complement of $A\cap B$ in $B$ is equal to $A^\complement\cap B$. Now, since $A$ is a closed subset of $\mathbb R$, $A^\complement$ is an open subset of $\mathbb R$ and therefore $A^\complement\cap B$ is an open subset of $B$.</p>
|
54,856 |
<p>I'm trying to simplify an expression with sums of <code>ArcTan</code>, for which I've written a transformation rule that works for simple sums, so that <code>ArcTan[a] + ArcTan[b] -> ArcTan[(a + b)/(1 - a*b)]</code>. However, it fails for terms of the type <code>5 ArcTan[a] + ArcTan[b]</code>. One way to simplify this expression would be to expand the integer into a sum of units, then transform repeatedly until the expression consists of only one <code>ArcTan</code>, but I haven't been able so far to find the way.</p>
<p>I've tried <code>Hold</code>,<code>Distribute</code>, <code>Sum</code>,<code>Fold</code> with <code>Plus</code>to no avail.</p>
|
Michael E2
| 4,999 |
<p>How about:</p>
<pre><code>5 ArcTan[a] + ArcTan[b] /.
 {(m_Integer: 1) ArcTan[a_] + (n_Integer: 1) ArcTan[b_] /; Positive[m] && Positive[n] :>
   With[{min = Min[m, n]},
    (m - min) ArcTan[a] + (n - min) ArcTan[b] +
     min ArcTan[(a + b)/(1 - a*b)]]}
(* 4 ArcTan[a] + ArcTan[(a + b)/(1 - a b)] *)
</code></pre>
<p>Works on positive integral coefficients, which is how I read the question:</p>
<pre><code>7 ArcTan[a] + 4 ArcTan[b] /.
 {(m_Integer: 1) ArcTan[a_] + (n_Integer: 1) ArcTan[b_] /; Positive[m] && Positive[n] :>
   With[{min = Min[m, n]},
    (m - min) ArcTan[a] + (n - min) ArcTan[b] +
     min ArcTan[(a + b)/(1 - a*b)]]}
(* 3 ArcTan[a] + 4 ArcTan[(a + b)/(1 - a b)] *)
</code></pre>
|
1,109,706 |
<p>I have just started going over abstract algebra.</p>
<p>One of the questions is:</p>
<p>$*$ is defined on $\mathbb C$ such that $a*b=|ab|$.</p>
<p>I tried to check three axioms: 1) associativity, 2) identity, 3) inverses.</p>
<p>I found, after a long computation, that associativity holds.</p>
<p>For the identity, $a*e=e*a=a$, where $a*e=|ae|=a$.</p>
<p>I am quite confused about how to find the value of $e$, since this is a complex number.</p>
<p>Can anyone show me why this is a group or not a group?</p>
|
user207950
| 207,950 |
<p>It is not a group. The problem is with $a=-s$, $s\in \mathbb R^+$: then $a*e=|ae|\geq 0\neq a$, so no element $e$ can act as an identity on the negative reals.</p>
|
85,288 |
<p>For some reason, I'd like to use <code>ParallelTable</code> with a variable number of iterators.</p>
<ul>
<li><code>Table[a[1], {a[1], 0, 10}]</code> works fine:</li>
</ul>
<blockquote>
<p><em>Output:</em> <code>{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}</code></p>
</blockquote>
<ul>
<li><code>ParallelTable[a[1], {a[1], 0, 10}]</code> returns an error:</li>
</ul>
<blockquote>
<p><em>Output:</em>
ParallelTable::nopar1: ParallelTable[ci[1],{ci[1],0,10}] cannot be parallelized; proceeding with sequential evaluation. >></p>
<pre><code>{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
</code></pre>
</blockquote>
<hr />
<p>I am also interested in workarounds to use <code>ParallelTable</code> and a variable number of indices (typically: <code>ParallelTable[f[Table[a[i],{i,1,n}]],Table[a[j],{j,1,n}]]</code> where <code>f</code> is a function of a list with variable size).</p>
|
Albert Retey
| 169 |
<p>This seems to just be a limitation of <code>ParallelTable</code>; I can't say whether that has a deeper reason due to parallelism or is just a simple oversight. I think it was not possible to use expressions like <code>a[i]</code> as e.g. iterators in older versions, but in newer versions that has been added as a feature to many functions - though obviously not <code>ParallelTable</code> (as of 10.0.2; it still doesn't work in 10.1.0). That is somewhat inconsistent, and it <em>might</em> work with future versions, but for the moment you'd have to use a workaround as Kattern has suggested and use explicit symbols instead of the <code>a[i]</code>, like e.g.:</p>
<pre><code>asym[i_Integer] := Symbol["a" <> ToString[i]];
With[{
iter = Sequence @@ Table[{asym[j], -1, 1}, {j, 1, 3}]
},
ParallelTable[$KernelID -> f[Table[asym[i], {i, 1, 3}]], iter]
]
</code></pre>
<p>alternatives to create symbol names would include:</p>
<pre><code>asym[i_Integer] := asym[i] = Module[{a}, a];
asym[i_Integer] := asym[i] = Unique["a"];
</code></pre>
<p>Each has their advantages and disadvantages; a common problem is that you have to clean up manually, as they create a bunch of symbols instead of just one as in your original approach...</p>
|
2,900,014 |
<p>How would you solve for $a$ in this equation without using an approximation?
Is it possible?</p>
<p>where $x>0$ and $0<a<\infty$</p>
<p>$x=\sum_{i=1}^{n} i^a$</p>
<p>For example, if $120=\sum_{i=1}^{6} i^a$, what is $a$?</p>
|
Community
| -1 |
<p>If you set $a=\log t$, you can write the question as</p>
<p>$$x=\sum_{k=1}^n k^a=\sum_{k=1}^n t^{\log k}$$</p>
<p>where the RHS is a generalized polynomial (with irrational powers). So this is even less solvable than a polynomial and you can't avoid numerical methods.</p>
<p>The function (of $a$) is strictly increasing; it tends to $1$ as $a\to-\infty$ (only the $k=1$ term survives), equals $n$ for $a=0$, then grows like $n^a$ asymptotically. A reasonable starting value for Newton's iterations is given by $\log_n x$.</p>
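<p>To illustrate the numerical route on the example $\sum_{i=1}^{6} i^a=120$ (an added sketch; simple bisection is used here instead of Newton's method):</p>

```python
def f(a):
    # Strictly increasing in a, since every term i^a grows with a.
    return sum(i ** a for i in range(1, 7)) - 120

lo, hi = 0.0, 10.0            # f(0) = 6 - 120 < 0, f(10) > 0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

a = (lo + hi) / 2
residual = abs(f(a))
```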
|
2,588,408 |
<p>A question from <em>Introduction to Analysis</em> by Arthur Mattuck:</p>
<p>Suppose $f(x)$ is continuous for all $x$ and $f(a+b)=f(a)+f(b)$ for all $a$ and $b$. Prove that $f(x)=Cx$, where $C=f(1)$, as follows:</p>
<p>(a)prove, in order, that it is true when $x=n, {1\over n}$ and $m\over n$, where $m, n$ are integers, $n\ne 0$;</p>
<p>(b)use the continuity of $f$ to show it is true for all $x$.</p>
<p>I can show the statement is true when $x=n$. As for $x={1\over n},{m\over n}$, I don't know how.</p>
|
QuIcKmAtHs
| 515,633 |
<p>This is true for all $x$: $f(x)=Cx$. For instance, for $f({1\over n})$: since $f(1)=f(\underbrace{\tfrac1n+\cdots+\tfrac1n}_{n\text{ terms}})=nf(\tfrac1n)$, you need $n$ of these terms to make $C$, hence $f({1\over n})={C\over n}=C\cdot{1\over n}$. For $f({m\over n})$: since ${m\over n}$ is just $m\cdot{1\over n}$ and $f({1\over n})={C\over n}$, the value $f({m\over n})$ is $m$ times ${C\over n}$, that is, ${Cm\over n}=C\cdot{m\over n}$. Hence we have shown that $f(x)=Cx$ for every rational $x$.</p>
|
690,569 |
<p>Suppose a function is given by:
$$
f(x)=
\begin{cases}
\cos\left(\dfrac{1}{x}\right) & x\neq 0 \\
0 & x=0
\end{cases}
$$</p>
<p>Show that this function is not continuous. Please help - I don't know how to proceed with formally using the limits.</p>
|
BoZenKhaa
| 60,953 |
<p>To show that it is not continuous, show that the function is not continuous at $0$. To do that, think about what happens to $\cos(1/x)$ as $x$ goes to zero, i.e. what happens to $\cos(x)$ as $x$ goes to infinity. Does it have zero as a limit? No - it oscillates. So because a function $f$ is continuous at a point $c$ iff $f(c)$ equals the limit of $f(x)$ as $x$ goes to $c$, it is apparent that $f$ is not continuous.</p>
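<p>A short numerical illustration of the oscillation (an added sketch): along $x_k=\frac{1}{2\pi k}\to 0$ the values of $f$ are all $1$, while along $x_k=\frac{2}{(2k+1)\pi}\to 0$ they are all $0=f(0)$, so no single limit can match:</p>

```python
from math import cos, pi

def f(x):
    return cos(1.0 / x) if x != 0 else 0.0

xs_one = [1.0 / (2 * pi * k) for k in range(1, 6)]           # f(x) = cos(2*pi*k) = 1
xs_zero = [2.0 / ((2 * k + 1) * pi) for k in range(1, 6)]    # f(x) = cos((2k+1)*pi/2) = 0

vals_one = [f(x) for x in xs_one]
vals_zero = [f(x) for x in xs_zero]
```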
|
2,349,476 |
<p>Suppose $n$ and $k$ are positive integers. Are there any known conditions on $n$ and $k$ such that the polynomial $x^{k+1}-x^k-n$ has rational or integer roots?</p>
|
Dietrich Burde
| 83,966 |
<p>Because of the <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">Rational Root Theorem</a>, every rational root $x=\frac{p}{q}$ of $x^{k+1}-x^k-n$ must satisfy $p\mid n$ and $q\mid 1$ in $\mathbb{Z}$, so every rational root is an integer. An integer root $x=m$ exists precisely when $n=m^k(m-1)$; for most $k$, in practice, a given $n$ will not be of this form. </p>
<p>As an example, let us consider $n=12$. Then only for $k=1$ do we have an integer root:
\begin{align*}
x^2-x-12 &=(x+3)(x-4). \\
\end{align*}</p>
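<p>The search this licenses is finite and easy to mechanize. A small illustrative script (the function name is mine) that finds all integer roots of $x^{k+1}-x^k-n$ by testing only the divisors $m$ of $n$ against the condition $n=m^k(m-1)$:</p>

```python
def integer_roots(k, n):
    # Any integer root m of x^(k+1) - x^k - n must divide n (rational root
    # theorem) and must satisfy m^k * (m - 1) = n.
    return [m for m in range(-abs(n), abs(n) + 1)
            if m != 0 and n % m == 0 and m**k * (m - 1) == n]

print(integer_roots(1, 12))  # [-3, 4]: x^2 - x - 12 = (x + 3)(x - 4)
print(integer_roots(2, 12))  # []: no integer root when k = 2
```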
|
244 |
<p>I know that Hilbert schemes can be very singular. But are there any interesting and nontrivial Hilbert schemes that are smooth? Are there any necessary conditions or sufficient conditions for a Hilbert scheme to be smooth?</p>
|
G.G.
| 118,706 |
<p>For some more examples of smooth HS see <a href="https://arxiv.org/abs/1702.00080" rel="nofollow noreferrer">A.P.Staal: The ubiquity of smooth Hilbert schemes, arxiv AG 31.Jan. 2017</a>.</p>
|
496,882 |
<p>If I have an experiment that has $1000$ trials, and $10$% of the time there is an error, what is the approximate probability that I will have $125$ failures? I figured out that $\mu =100$ and $\sigma =9.4868$. I'm trying to approximate with a normal distribution, since with this many trials, and the fact that each experiment is independent the total number of failrues should be normally distributed. So I set up my problem as so $$P(X\le 125)=P\left(z\le \frac{125-100}{\frac{9.4868}{\sqrt{1000}}}\right)=\Phi\left(\frac{125-100}{\frac{9.4868}{\sqrt{1000}}}\right)$$ But there's no way that with probability $1$ there are at most $125$ errors. So what am I missing?</p>
|
Dilip Sarwate
| 15,941 |
<p>The DeMoivre-Laplace theorem says that if $X$ is a binomial random variable with parameters $(n,p)$, then
$$P\{a < X < b\} \approx \Phi\left(\frac{b-np}{\sqrt{np(1-p)}}\right) - \Phi\left(\frac{a-np}{\sqrt{np(1-p)}}\right)$$
and so, for your problem,
$$P\{X \leq 125\} \approx \Phi\left(\frac{125-100}{\sqrt{1000\cdot 0.1\cdot 0.9}}\right)
= \Phi\left(\frac{25}{\sqrt{90}}\right)$$
This is a form of the central limit theorem applied to binomial random variables.</p>
<p>For a more standard application of the central limit theorem, note that if
$Y_1, Y_2, \ldots, Y_{n}$ are independent random variables with mean
$\mu$ and variance $\sigma^2$, then the central limit theorem says
that the CDF of $(Y_1+Y_2+\cdots+Y_{n} - n\mu)/\sigma\sqrt{n}$
is approximated by the CDF of a standard normal random variable $Z$.
If the $Y_i$ are Bernoulli random variables
with parameter $p$ (and hence mean $p$ and variance $p(1-p)$, standard deviation
$\sigma = \sqrt{p(1-p)}\, $), then
$$\frac{(Y_1+Y_2+\cdots+Y_{1000})-1000p}{\sigma\sqrt{1000}}
= \frac{(Y_1+Y_2+\cdots+Y_{1000})-100}{\sqrt{0.1\cdot 0.9}\sqrt{1000}}
= \frac{X-100}{\sqrt{90}}$$
has approximately the same CDF as a standard normal random variable $Z$. Hence,
$$P\{X \leq 125\} \approx P\left\{Z \leq \frac{25}{\sqrt{90}}\right\}
= \Phi\left(\frac{25}{\sqrt{90}}\right)$$
just as we got from the Demoivre-Laplace theorem.</p>
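<p>A quick numerical check of the quality of this approximation, using only the standard library (the helper <code>phi</code> is mine; $\Phi$ is expressed through the error function):</p>

```python
import math

def phi(z):
    # standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

n, p = 1000, 0.1
mu, sigma = n * p, math.sqrt(n * p * (1 - p))   # 100 and sqrt(90)
approx = phi((125 - mu) / sigma)                # Phi(25 / sqrt(90))
# exact binomial tail P(X <= 125)
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(126))
print(round(approx, 4), round(exact, 4))  # both close to 0.996
```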
|
3,118,238 |
<p>I know there is a pretty simple way to check if a 2D point is inside a circle; I wanted to know how to do the same in the 3rd dimension. The variables are the point's X, Y, Z and the sphere's 3D center point and radius.</p>
<p>Let's say we know that X and Y is inside the circle, can we rotate it 90 degrees and pretend the Z is the X or Y again and use the same algorithm?</p>
<p>I will check in 2 for loops for X and Y if each point is inside the circle; I will use the X and Y to plot the pixel and the Z to determine the colour.</p>
<p>EDIT:</p>
<p>Thanks for the answer, I didn't realize I could just include the Z into the algorithm I was talking about. In case you were interested here is the code, it works great! It's in F#.</p>
<pre><code>open System.IO
open System.Drawing
type Vector = { X: int; Y: int; Z: int }
type Sphere = { Center: Vector; Radius: int }
let intersect v s =
let x0, y0, z0 = float(v.X), float(v.Y), float(v.Z)
let x1, y1, z1 = float(s.Center.X), float(s.Center.Y), float(s.Center.Z)
sqrt(pown(x0 - x1) 2 + pown(y0 - y1) 2 + pown(z0 - z1) 2) < float(s.Radius)
let sphere = { Center = { X = 127; Y = 127; Z = 127 }; Radius = 127 }
let bitmap = new Bitmap(256, 256)
for x in 0 .. 255 do
for y in 0 .. 255 do
for z in 0 .. 255 do
if intersect { X = x; Y = y; Z = z } sphere then
bitmap.SetPixel(x, y, Color.FromArgb(z, 0, 0, 255))
bitmap.Save(Path.Combine(__SOURCE_DIRECTORY__, "bitmap.png"))
</code></pre>
|
Parcly Taxel
| 357,390 |
<p>Rather than using two 2D tests to check (which is incorrect – points passing both tests are only guaranteed to lie inside a <a href="https://en.wikipedia.org/wiki/Steinmetz_solid" rel="noreferrer">mouhefanggai/Steinmetz solid</a> which strictly encloses the sphere), a simple extension of the 2D test will work. Let the sphere's centre coordinates be <span class="math-container">$(c_x,c_y,c_z)$</span> and its radius be <span class="math-container">$r$</span>, then point <span class="math-container">$(x,y,z)$</span> is in the sphere iff <span class="math-container">$(x-c_x)^2+(y-c_y)^2+(z-c_z)^2<r^2$</span>.</p>
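<p>A small numerical illustration of why two separate 2D tests are not enough (the function names are mine): the point $(0.6, 0.7, 0.7)$ passes both coordinate-circle tests for the unit sphere at the origin, yet lies outside the sphere.</p>

```python
def in_circle_2d(u, v, cu, cv, r):
    # 2D membership test for the circle of radius r centred at (cu, cv)
    return (u - cu)**2 + (v - cv)**2 < r**2

def in_sphere(x, y, z, cx, cy, cz, r):
    # squared-distance comparison; avoids computing a square root
    return (x - cx)**2 + (y - cy)**2 + (z - cz)**2 < r**2

x, y, z = 0.6, 0.7, 0.7
print(in_circle_2d(x, y, 0, 0, 1), in_circle_2d(x, z, 0, 0, 1))  # True True
print(in_sphere(x, y, z, 0, 0, 0, 1))                            # False
```

Such a point lies in the Steinmetz solid (the intersection of the two cylinders) but not in the sphere itself.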
|
16,740 |
<p>Excuse me for my unclear question title; math terms have never been my forte. However, being new here and knowing I shall be staying here for a while, I thought it's best to do things right with the community and site.</p>
<p>I just asked my first <a href="https://math.stackexchange.com/questions/920192/differentiation-with-respect-to-variable">question</a> here and got a great response and nice welcome. However I have no idea which <em>answer</em> I, as a community member should pick. </p>
<p>The reason is that the first one explains how to get there <em>(which is something I requested in the question itself)</em>, whilst the other one is the actual answer to the question but doesn't directly tell me how he got there, even though he did somewhat clarify that in the comments.</p>
<p>A general rule of thumb for such scenarios would be greatly appreciated!</p>
|
user642796
| 8,348 |
<p>In similarity to posters of link-only answers, users who post questions which rely on familiarity with or access to a <em>specific</em> external source to be understood and answered should be urged to include as much relevant information from the source so as to make the question self-contained (roughly meaning intelligible to an "expert" without familiarity with or access to the source, for suitably loose definitions of "expert"). </p>
<p>Leaving appropriate comments and downvoting are clearly part of this urging, but one should also consider voting to close the question, where either "unclear what you're asking" or "missing context or other details" seem appropriate.</p>
<p>In extreme situations, you can also attempt to edit this information in yourself.</p>
<p>This should apply to all questions, not just those of the <a href="https://math.stackexchange.com/questions/tagged/proof-verification" class="post-tag" title="show questions tagged 'proof-verification'" rel="tag">proof-verification</a> variety. It should also apply equally to all kinds of external sources.</p>
|
2,424,722 |
<blockquote>
<p>Let a,b,c be real numbers such that $a+2b+c=4$. What is the value of $\max(ab+bc+ac)$</p>
</blockquote>
<p>My attempt:</p>
<p>Squaring both the sides: </p>
<p>$a^2 +4b^2+c^2+2ac+4bc+4ab=16$</p>
<p>Then I tried factoring after bringing 16 to LHS but couldn't. It's not even a quadratic in one variable or else I could have directly found the maximum. </p>
<p>How do I proceed? </p>
|
GAVD
| 255,061 |
<p>Note that $4xy \leq (x+y)^2$ for all $x$, $y$.</p>
<p>You have $$ab+bc+ca = b(a+c) + ac \leq b(a+c) + \frac{1}{4}(a+c)^2 = (a+c) \frac{a+c+4b}{4} = \frac{1}{4}(a+c)(8-(a+c)) \leq \frac{1}{4}\frac{1}{4}8^2 = 4.$$</p>
<p>Equality holds when $a=c$ and $a+c = 8-(a+c)$, i.e. $a+c=4$; together with the constraint this gives $a=c=2$, $b=0$.</p>
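<p>As a quick sanity check (a random probe on the constraint plane, not a proof): sampling points with $a+2b+c=4$ never produces a value of $ab+bc+ca$ above $4$, while values close to $4$ do occur:</p>

```python
import random

random.seed(0)
best = float("-inf")
for _ in range(100_000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    c = 4 - a - 2 * b          # enforce the constraint a + 2b + c = 4
    best = max(best, a * b + b * c + c * a)

print(best <= 4.0)  # True: the bound is never exceeded
print(best > 3.5)   # True: the bound is nearly attained near a = c = 2, b = 0
```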
|
4,005,738 |
<p>I am interested in a method to compute the expected number of steps until absorption in a Markov chain with only one absorbing node, given the starting node and the pre-final node (the last node visited before absorption).</p>
<p>So, if the transition probability matrix is given, for example by:</p>
<p><span class="math-container">$P = \begin{bmatrix}
0 & 1/5 & 2/5 & 1/5 & 1/5\\
1/5 & 0 & 1/5 & 1/5 & 2/5\\
2/5 & 0 & 1/5 & 1/5 & 1/5\\
1/5 & 1/5 & 0 & 1/5 & 2/5\\
0 & 0 & 0 &0 & 1
\end{bmatrix}$</span></p>
<p>what will be the expected number of steps for being absorbed <strong>given that we start in node 1 (first line) and the last state before absorption is node 2 (second line)</strong> ?</p>
<p>So what I tried so far... I know the very known technique of using the fundamental matrix <span class="math-container">$N$</span> and then computing <span class="math-container">$N = (I-Q)^{-1}$</span> where <span class="math-container">$Q$</span> is the recurrent probabilities matrix (from the canonical form of <span class="math-container">$P$</span>) and the expected number of steps would be the first entry of the vector <span class="math-container">$(I-Q)^{-1}1$</span>, where <span class="math-container">$1$</span> is the five dimensional vector with all entries equal to <span class="math-container">$1$</span>, in the case where we dont know the information about where we are absorbed.</p>
<p>Then, as we know that we are absorbed from node two, I tried to replace all the probabilities of the last column by <span class="math-container">$0$</span> (except for line <span class="math-container">$2$</span> and for the absorbing node, where I kept <span class="math-container">$2/5$</span> and <span class="math-container">$1$</span>, respectively), but then I am not quite sure how to re-normalize all the others...</p>
<p>Thank you very much!</p>
|
Dieter Kadelka
| 882,293 |
<p>Let <span class="math-container">$S := \{1,2,3,4,5\}$</span> be the state space of this homogeneous Markov chain (MC) and <span class="math-container">$\xi_n$</span> be the random state at time <span class="math-container">$n \in \mathbb{N}_0$</span>. Let <span class="math-container">$\mathbb{P}_s$</span> be the joint distribution of <span class="math-container">$\xi_0,\xi_1,\ldots$</span> if the MC starts in <span class="math-container">$s \in S$</span>, <span class="math-container">$(\Omega,\mathcal{A},\mathbb{P}_s)$</span> be the underlying probability space and <span class="math-container">$\tau$</span> be the entrance time in <span class="math-container">$s_a = 5$</span>. Let <span class="math-container">$A := \{\xi_{\tau-1} = 2\}$</span> be the event that immediately before stopping the state is <span class="math-container">$2$</span> (with <span class="math-container">$\xi_{-1} := 5$</span>). Of interest is <span class="math-container">$$\mathbb{E}_s[\tau ~|~ A] = \frac{g(s)}{f(s)}$$</span>
with <span class="math-container">$f(s) = \mathbb{P}_s(A)$</span> and <span class="math-container">$g(s) = \mathbb{E}_s Y$</span> with <span class="math-container">$Y = \tau \cdot 1_A$</span> (<span class="math-container">$1_A$</span> the indicator function of <span class="math-container">$A$</span>) for <span class="math-container">$s \not= s_a$</span>.</p>
<p>Let <span class="math-container">$P = (p_{si})_{s,i \in S}$</span> be the transition matrix of the MC. By the Markov property (here we don't need the strong Markov property) we get
<span class="math-container">$$\mathbb{P}_s(A) = 2/5 \cdot \delta_{s2} + \sum_{i=1}^5 p_{si} \cdot \mathbb{P}_i(A)$$</span>
resp. <span class="math-container">$f = Pf + (0,2/5,0,0,0)'$</span>. Since <span class="math-container">$f(5) = 0$</span> we can omit the state <span class="math-container">$s = 5$</span> and with <span class="math-container">$\tilde P := (p_{si})_{s,i \not= 5}$</span> we have to solve the equation
<span class="math-container">$$(\mathbb{1} - \tilde P) \tilde f = (0, 2/5, 0, 0)'$$</span>. We get
<span class="math-container">$$\tilde f = (0.1897, 0.5, 0.1379, 0.1724)'$$</span></p>
<p>Similarly for <span class="math-container">$g$</span>: conditioning on the first step,
<span class="math-container">$$\mathbb{E}_s Y = \sum_{i \in S} p_{si}\, \mathbb{E}_s [Y ~|~ \xi_1 = i]$$</span>
with <span class="math-container">$\mathbb{E}_s [Y ~|~ \xi_1 = i] = 1$</span> if <span class="math-container">$s = 2,\ i = 5$</span> (then <span class="math-container">$\tau = 1$</span> and <span class="math-container">$A$</span> occurs), <span class="math-container">$= 0$</span> if <span class="math-container">$s \not= 2,\ i = 5$</span>, and <span class="math-container">$= \mathbb{E}_i[(1+\tau)1_A] = g(i) + f(i)$</span> if <span class="math-container">$i \not= 5$</span>; the extra step adds <span class="math-container">$1$</span> to <span class="math-container">$\tau$</span> only on the event <span class="math-container">$A$</span>, which from <span class="math-container">$i$</span> has probability <span class="math-container">$f(i)$</span>. Thus with <span class="math-container">$x = (g(1),\ldots,g(4))'$</span>
<span class="math-container">$$x = \tilde P (x+\tilde f) + (0,2/5,0,0)' = \tilde P x + \tilde f,$$</span>
where the last step uses <span class="math-container">$\tilde f = \tilde P \tilde f + (0,2/5,0,0)'$</span> from above, i.e. <span class="math-container">$x = (\mathbb{1} - \tilde P)^{-1} \tilde f$</span>, with the solution <span class="math-container">$x = (0.8015, 0.9375, 0.7357, 0.6503)'$</span>. In particular we get the solution <span class="math-container">$x(1)/\tilde f(1) = 4.2261$</span>.</p>
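<p>The linear system for <span class="math-container">$\tilde f$</span> is small enough to solve exactly. A sketch using only the Python standard library (exact rational arithmetic via <code>fractions</code>; the little <code>solve</code> helper is mine):</p>

```python
from fractions import Fraction as F

def solve(A, b):
    # Gauss-Jordan elimination on the augmented matrix [A | b]
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        pivot = M[col][col]
        M[col] = [v / pivot for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    return [row[-1] for row in M]

# transient part of P (states 1..4; state 5 is absorbing)
P = [[F(0), F(1, 5), F(2, 5), F(1, 5)],
     [F(1, 5), F(0), F(1, 5), F(1, 5)],
     [F(2, 5), F(0), F(1, 5), F(1, 5)],
     [F(1, 5), F(1, 5), F(0), F(1, 5)]]
I_minus_P = [[(F(1) if i == j else F(0)) - P[i][j] for j in range(4)]
             for i in range(4)]
# (1 - P~) f = (0, 2/5, 0, 0)'
f = solve(I_minus_P, [F(0), F(2, 5), F(0), F(0)])
print([float(v) for v in f])  # approximately [0.1897, 0.5, 0.1379, 0.1724]
```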
|
855,088 |
<p>Consider two vectors $V_1$ and $V_2$ in $\mathbb{R}^3$.
When we take their dot product we get a real number.
How is that number related to the vectors?
Is there any way we can visualize it? </p>
|
blue
| 34,139 |
<p>The dot product ${\bf a}\cdot{\bf b}$ measures the length of ${\bf a}$'s orthogonal projection onto $\bf b$ (the $1$-dimensional subspace it is a part of), scaled by the length of $\bf b$ itself; and conversely, with the roles of the two vectors exchanged. The scaling is nice to have because it means the dot product is bilinear in its two arguments.</p>
<p>Physically, it can measure how much of something moving in one direction moves in a different direction. For instance, suppose water is moving through a net (in the shape of a plane) put in the ocean and, for simplicity, the water is moving monolithically (the same direction at the same speed at every point), but the direction of movement is not perfectly perpendicular to the net. The dot product of the net's unit normal vector with the water's velocity vector will tell us how much is moving through the net. When this is applied on an infinitesimal scale and then integrated, we can determine how much of something moving with a vector field is displaced through a surface; this flux integral is half of the divergence theorem.</p>
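<p>The net example amounts to one line of arithmetic. A minimal sketch (the names are mine):</p>

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = (3.0, 4.0, 0.0)      # water velocity
n_hat = (1.0, 0.0, 0.0)  # unit normal of the net's plane
# component of the velocity passing straight through the net:
print(dot(v, n_hat))  # 3.0
# a vector parallel to the net contributes nothing:
print(dot((0.0, 5.0, 0.0), n_hat))  # 0.0
```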
|
2,485,447 |
<p>My attempt:</p>
<p>3x≡1 mod 7 (1)</p>
<p>4x≡1 mod 9 (2)</p>
<p>Multiply (1) by 5</p>
<p>Multiply (2) by 7</p>
<p>x≡5 mod 7</p>
<p>x≡7 mod 9</p>
<p>So x≡9k+7</p>
<p>9k+7=5(mod7)</p>
<p>k=5(mod7)</p>
<p>k=7j+5</p>
<p>x=9(7j+5)+7</p>
<p>=63j+52</p>
<p>x≡52(mod63)</p>
|
Deepak
| 151,732 |
<p>From the first equation (note that $5$ is the multiplicative inverse of $3 \pmod 7$),</p>
<p>$3(5)x \equiv 5 \pmod 7 \implies 15x \equiv 5 \pmod 7 \implies x \equiv 5 \pmod 7$. So $x = 7k + 5$</p>
<p>Substitute into the second to give $28k + 20 \equiv 1 \pmod 9 \implies k \equiv -19\pmod 9 \implies k \equiv -1 \pmod 9$. So $k = 9t -1$</p>
<p>So $x = 7(9t-1) + 5 = 63t -2$, giving the solution $x \equiv -2 \equiv 61 \pmod {63}$.</p>
<p>In asker's question, the error is here:</p>
<p>Going from "9k+7=5(mod7)"</p>
<p>to "k=5(mod7)"</p>
<p>is unjustifiable. From the first, we get $9k \equiv -2 \pmod 7$ or $2k \equiv -2 \pmod 7$. That gives $k \equiv -1 \equiv 6 \pmod 7$. </p>
<p>It is also equally unclear how "So x≡9k+7" became "x=9(k)+5" later on. The latter should be $x = 9(k) + 7$. </p>
<p>Correcting for those two errors gives the expected solution (identical to mine).</p>
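<p>The result is small enough to confirm by exhaustive search over one full period $7\cdot 9=63$ (a brute-force check, not the intended method):</p>

```python
# every x in [0, 63) satisfying both original congruences
solutions = [x for x in range(63) if (3 * x) % 7 == 1 and (4 * x) % 9 == 1]
print(solutions)  # [61]
```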
|
1,531,171 |
<p>if $fg$ and $f$ are differentiable at $a$, must $g$ be differentiable at $a$? If not, what condition is needed to imply that $g$ be differentiable at $a$?</p>
<p>I realized that people asked similar question before, like
"If $f+g$ and $f$ are differentiable at $a$, must $g$ be differentiable at $a?$" It can be easily proved by using limits. </p>
<p>However, when it comes to multiply, I find it no longer eligible to use limits to finish the proof.</p>
|
Tsemo Aristide
| 280,301 |
<p>No: take $f(x)=x$ and $g(x)=\sqrt{x}$ at $a=0$. Then $f$ and $fg = x\sqrt{x} = x^{3/2}$ are both differentiable at $0$, but $g$ is not. However, if $f(a)\neq 0$, then $g=\frac{fg}{f}$ is differentiable at $a$ by the quotient rule.</p>
|
2,233,185 |
<p>I'm finding how many integers under a limit, $L$, only have prime factors from a given set of prime numbers, $P$. The numbers that meet these conditions are called n-smooth numbers. (I've never used sets before so feel free to correct any mistakes I make). Take, for example, $P = \left\{2, 3 \right\}$, and $L = 25$, there are 10 numbers that are 3-smooth: 2, 3, 4, 6, 8, 9, 12, 16, 18, 24.</p>
<p>I'm trying to find a method/algorithm that easily finds <em>how many</em> n-smooth numbers there are for a given $P$ and $L$. Here's my work so far:</p>
<p>To start, a number from $P$ to any power will be n-smooth as long as it's < $L$. Using the same example from above, this would include the powers of two: 2, 4, 8, 16, and the powers of three: 3, 9. By using the floor function and log, I can essentially tell how many powers of a number there are that are less than or equal to L. The total amount of these numbers can be modeled with this expression:</p>
<p>$$\sum_{p \in P} \left\lfloor \log_p L \right\rfloor$$</p>
<p>But this leaves the other numbers 6, 12, 18, and 24 leftover. I don't know how to account for these "leftover" type of numbers. Does anyone have any ideas?</p>
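<p>One way to account for the "leftover" numbers such as $6, 12, 18, 24$ is to generate every $P$-smooth number directly, multiplying in one prime at a time, rather than counting pure prime powers. A sketch of such a depth-first enumeration (the function name is mine):</p>

```python
def count_smooth(primes, limit):
    # Count integers in [2, limit] all of whose prime factors lie in `primes`.
    # extend(value, start) counts smooth multiples of `value` built only from
    # primes[start:], so each smooth number is generated exactly once.
    def extend(value, start):
        total = 0
        for i in range(start, len(primes)):
            nxt = value * primes[i]
            while nxt <= limit:
                total += 1 + extend(nxt, i + 1)
                nxt *= primes[i]
        return total
    return extend(1, 0)

print(count_smooth([2, 3], 25))  # 10, matching 2,3,4,6,8,9,12,16,18,24
```

Since only smooth numbers are ever produced, the running time is proportional to the count itself rather than to $L$.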
|
Paramanand Singh
| 72,031 |
<p>We can directly prove that $M_{a}$ is maximal without using any theorems.</p>
<p>Let us take any ideal $N \supset M_{a}$ and we show that $N = R$. Clearly since $N \supset M_{a}$ there is an $f \in N$ such that $f(a) \neq 0$. Now let $g$ be any arbitrary member of $R$. Note that the function $h(x)$ given $$h(x) = g(x) - f(x)\cdot\frac{g(a)}{f(a)}$$ is continuous and hence $h \in R$. Also note that $h(a) = 0$ so that $h\in M_{a}\subset N$. It now follows that $g \in N$ as $g(x) = h(x) + f(x)g(a)/f(a)$.</p>
|
90,548 |
<p>Suppose we are handed an algebra $A$ over a field $k$. What should we look at if we want to determine whether $A$ can or cannot be equipped with structure maps to make it a Hopf algebra?</p>
<p>I guess in order to narrow it down a bit, I'll phrase it like this: what are some necessary conditions on an algebra for it to be a Hopf algebra?</p>
<p>Thoughts so far:</p>
<p>The first obvious condition is that $A$ must be augmented, i.e. there must be a nontrivial character $\varepsilon : A \to k$. Since this is generally not that hard to determine if we are given the algebra in some fairly concrete way, let's suppose that $A$ is given to us with an augmentation map.</p>
<p>If $A$ is finite-dimensional, then $A$ must be a Frobenius algebra. But not every finite-dimensional Frobenius algebra is a Hopf algebra, e.g. $\Lambda^\bullet(k^2)$ is not a Hopf algebra if the characteristic of $k$ is not 2. And generally I am more interested in the infinite-dimensional case.</p>
<p>All I can come up with is this: the category of finite-dimensional $A$-modules must be a (left) rigid monoidal category. But I don't know if that is a helpful observation: given a category with a forgetful functor to finite-dimensional vector spaces over some field, how can one prove that it can't be given the structure of a <s>braided</s> rigid monoidal category?</p>
<p>And perhaps there are some homological invariants that one can look at?</p>
<p>To sum up, the question is:</p>
<h3>Question</h3>
<p>Given a $k$-algebra $A$ and a nonzero character $\varepsilon : A \to k$, are there invariants we can look at in order to show that $A$ cannot be given the structure of a Hopf algebra?</p>
|
Qiaochu Yuan
| 290 |
<p>I'm a bit late, but here's a simple observation. Consider a topological version of the question: given a topological space $X$, how can we recognize when $X$ can be given the structure of a topological group? A simple necessary condition is that that $X$ must be homogeneous, and in particular each point should have homeomorphic neighborhoods, because $X$ must act transitively on itself.</p>
<p>An analogous statement about Hopf algebras is the following. Let $H$ be a commutative Hopf algebra over a field $k$. Then $G = \text{Spec } H$ is an affine group scheme over $k$; moreover, the group $G(k)$ acts on $G$, and this action is transitive on $k$-points. In particular the dimension of the Zariski tangent space at each $k$-point of $H$ must be the same. So any commutative algebra without this property can't be the underlying commutative algebra of a Hopf algebra. Examples are given by the ring of functions on any singular variety, such as the cuspidal cubic $k[x, y]/(y^2 - x^3)$ ($k$ a field of characteristic other than $2$ or $3$); in this case the dimension of the tangent space at any point is $1$ except at $(0, 0)$ where it's $2$. </p>
<p>I think this is essentially the point of David Speyer's answer, modulo some technicalities.</p>
<p>(A topological analogue of the observation about $\text{Ext}_A(k, k)$ being graded commutative is that if $X$ is a topological group then $\Omega X$ is an $E_2$-algebra, and in particular $\pi_1(X)$ is abelian by the Eckmann-Hilton argument.) </p>
|
1,760,148 |
<p>If I have a connected metric space $X$, is any ball around a point $x\in X$ also connected?</p>
|
Moishe Kohan
| 84,907 |
<p>On the positive side, if $(X,d)$ is a <a href="https://en.wikipedia.org/wiki/Intrinsic_metric" rel="nofollow noreferrer">length metric space</a> then every open ball $B(a, r)\subset (X,d)$ is path connected. (Every closed ball is connected as well, since, in this situation, each closed ball is either a singleton or is the closure of an open ball and the closure of a connected subset in a metric space is connected.) </p>
<p>Edit. The proof is utterly straightforward: Let $B(a,R)$ be an open ball centered at $a$ of radius $R$. For each point $x\in B(a,R)$, for every $\epsilon>0$ there exists a path $p_{ax}$ (connecting $a$ to $x$ of length $< d(x,a)+\epsilon$. Taking $\epsilon:= R- d(a,x)$, we obtain that this path is entirely contained in the open ball $B(a,R)$. Hence, $B(a,R)$ is path connected. </p>
|
4,164,069 |
<p>I guess it's true for functions that are Lipschitz or uniformly continuous, since we can bound the lengths of the intervals after the transformation.<br />
However, I don't know if it's true or not, and since <span class="math-container">$1/x$</span> is not one of those, I don't know how to solve this problem.</p>
|
Oliver Díaz
| 121,671 |
<p>Here is an interesting result that helps with your problem:</p>
<blockquote>
<p><strong>Theorem:</strong> If <span class="math-container">$f:(a,b)\rightarrow\mathbb{R}$</span> is measurable, and <span class="math-container">$f$</span> is differentiable on a measurable set <span class="math-container">$C\subset(a,b)$</span>, then
<span class="math-container">$$m^*(f(C))\leq \int_C |f'(x)|\,dx$$</span>
where <span class="math-container">$m$</span> is Lebesgue's measure and <span class="math-container">$m^*$</span> is the outer measure.</p>
</blockquote>
<p>See <strong>Bruckner, A. M</strong>. et. al., <em>Real Analysis</em>, 2nd ed 2008, section 7.3. for example, or <a href="https://math.stackexchange.com/a/4158230/121671">here</a></p>
<hr />
<p>For your OP, consider <span class="math-container">$f:(0,1)\rightarrow(1,\infty)$</span> given by <span class="math-container">$f(x)=\frac{1}{x}$</span>. Notice that <span class="math-container">$H=f(E)$</span>. Applying the result stated above gives you
<span class="math-container">$$
m^*(H)=m^*(f(E))\leq\int_E\frac{1}{x^2}\,dx=0
$$</span>
since <span class="math-container">$m(E)=0$</span>.</p>
<hr />
<p><strong>Edit:</strong> It is also possible to show <em>directly</em>, without using big machinery, that <span class="math-container">$m(H)=0$</span>, using only the concept of measure <span class="math-container">$0$</span> and the fact that a countable union of sets of measure zero also has measure <span class="math-container">$0$</span> (most Advanced Calculus or basic Analysis books discuss this; for example <strong>Apostol, T</strong>. <em>Mathematical Analysis</em>, 2nd ed., Section 7.26).</p>
<p>I leave many details to the OP. Start by splitting <span class="math-container">$(0,1]$</span> into countably many subintervals, for example <span class="math-container">$\{(\tfrac{1}{n+1},\frac1n]:n\in\mathbb{N}\}$</span>. The task is then to show that each <span class="math-container">$H_n=\{\tfrac1x:x\in E\cap(\tfrac{1}{n+1},\frac{1}{n}]\}$</span> has measure zero. Notice that the function <span class="math-container">$f(x)=\frac{1}{x}$</span> is Lipschitz on <span class="math-container">$I_n:=[\frac{1}{n+1},\frac{1}{n}]$</span>: there exists a constant <span class="math-container">$L_n$</span> such that <span class="math-container">$|f(x)-f(y)|\leq L_n|x-y|$</span> for all <span class="math-container">$x,y\in I_n$</span>. Since <span class="math-container">$E\cap I_n$</span> has measure zero, you can find a countable collection of intervals <span class="math-container">$J_{n,k}=[a_{n,k},b_{n,k}]$</span>, <span class="math-container">$k\in\mathbb{N}$</span>, covering <span class="math-container">$E\cap I_n$</span> and such that
<span class="math-container">$$\sum_k(b_{n,k}-a_{n,k})<\frac{\varepsilon}{L_n}.$$</span>
Then
<span class="math-container">$$\sum_k|f(b_{n,k})-f(a_{n,k})|\leq L_n\sum_k(b_{n,k}-a_{n,k})<\varepsilon.$$</span>
Notice that the images <span class="math-container">$f(J_{n,k}\cap I_n)$</span>, <span class="math-container">$k\in\mathbb{N}$</span>, form a collection of intervals that cover <span class="math-container">$H_n$</span>, of total length less than <span class="math-container">$\varepsilon$</span>. That shows that <span class="math-container">$H_n$</span> has measure zero.</p>
|
1,764,729 |
<blockquote>
<p>Let $x(t) : [0,T] \rightarrow \mathbb{R}^n$ be a solution of a differential equation
$$
\frac{d}{dt} x(t) = f(x(t),t).
$$
In addition we have functions $E :\mathbb{R}^n \rightarrow \mathbb{R}$ and $h:\mathbb{R}^{n+1}\rightarrow \mathbb{R}$ such that
$$
\frac{d}{dt}{E(x(t))} = h(x(t),t) E(x(t))
$$
for any solution $x(t)$ of the differential equation. Can we show that if $E$ is positive at the time $0$ then it is positive at all times? i.e.
$$
E(x(0))>0 \Longrightarrow E(x(t))>0 \qquad 0<t\leq T
$$
Assume that all functions are at least one continuously differentiable.</p>
</blockquote>
<p>It is easy to show that it is true if $h$ does not depend on $x$ or can be written in the form $h(x(t),t) =\hat h(E(x(t)),t)$. Then $E$ satisfy following differential equation
$$
\frac{d}{dt}{E(t)} = \hat h(E(t),t) E(t)
$$
Thanks to the uniqueness of the solution, any solution $E(t)$ cannot cross the trivial solution $E(t)=0$ and therefore it has to have the same sign at all times.</p>
<hr>
<p>Application: Let's define matrix $A(t)$ via differential equation
$$
\frac{d}{dt}{A(t)} = B(A(t),t) A(t) \qquad A(0) = I
$$
where $B$ is matrix valued function with arguments $A$ and $t$. Is $A(t)$ invertible for all $t\geq 0$?</p>
<p>We have
$$
\frac{d}{dt}{\det{A(t)}} = Tr(B(A(t),t)) \det{A(t)}
$$
If $B$ is just a function of $t$ and $\det{A(t)}$ then the answer is yes, in general I do not know if $A(t)$ is invertible or not.</p>
|
Salsifis
| 334,805 |
<p>Winther was right. You can use Gronwall's Lemma to understand it. Define the function $f:\mathbb{R}\rightarrow \mathbb{R}$ such that: $$f(t)=E(x(t))^2\times 1_{E(x(t))\leqslant 0}$$
This is a non-negative function, and $f(0)=0$ since $E(x(0))>0$ by hypothesis. Take the derivative:
\begin{align}
\partial_t f(t)&= 2h(x(t),t)E(x(t))^2 \times 1_{E(x(t))\leqslant 0} \\
&\leqslant 2|h(x(t),t)|f(t)
\end{align}
this implies that: $$f(t) \leqslant f(0)+\int_0^t 2|h(x(s),s)| f(s)\, ds $$
and Gronwall's lemma gives: $$f(t) \leqslant f(0)\exp\left(\int_0^t 2|h(x(s),s)|\, ds\right)=0 $$
By definition of $f$, this implies that $E(x(t))\geqslant 0$ for all $t\in[0,T]$. To get strict positivity, note that along the given solution $a(t):=h(x(t),t)$ is a continuous function of $t$, so $E(x(t))$ solves the scalar linear equation $\frac{d}{dt}E(x(t))=a(t)E(x(t))$ and hence $$E(x(t))=E(x(0))\exp\left(\int_0^t a(s)\,ds\right)>0.$$ Note that the derivative of $1_{E(x(t))\leqslant 0}$ has a meaning with the notion of weak derivative; its "derivative" is a Dirac mass carried by the set where $E(x(t))=0$, and it contributes nothing above because it is multiplied by $E(x(t))^2$, which vanishes there.</p>
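<p>A quick numerical illustration of the conclusion (a toy example, not a proof): for $\dot x = -x$ with $E(x)=x^2$ we have $\frac{d}{dt}E(x(t)) = -2E(x(t))$, i.e. $h\equiv -2$, and $E$ stays strictly positive along the flow:</p>

```python
# Euler-integrate x' = -x and track E(x) = x^2, which satisfies E' = -2E.
x, dt = 2.0, 1e-3
E_values = []
for _ in range(5000):  # integrate up to t = 5
    E_values.append(x * x)
    x += dt * (-x)     # explicit Euler step
print(min(E_values) > 0)  # True: E never reaches zero
```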
|
782,427 |
<p><strong>I - First doubt</strong> : free variables on open formulas . </p>
<p>I'm having a hard time discovering what different kinds of variables in an open formula of fol are refering to., For example, lets take some open formulas : </p>
<p>1: $"x+2=5"$<br>
2: $"x=x"$<br>
3: $"x+y = y+x"$<br>
4: $"x+2 = 5 -> x=3 "$ .</p>
<p>So let's suppose we have a domain D of a structure which is interpreting that fol language.
Even tho all those formulas are open formulas variables occuring free, i think the variables refer to different kind of sets ( obviously all sets with the same type of objects, be it numbers, or whatever ) depending on the kind of formula... to what exactly, in each case ? </p>
<p>The formula 1 would be refering to a fixed element on the domain (whatever kind of elements it has ) ? Would that be it ? Or would it be an arbitrary one ? </p>
<p>Should we always use the word "refering to" in open formula, instead of "ranging through" ?</p>
<p>The free variable on the formula 2 would be refering to a bunch of elements of the domain ? Or would it be ranging through a bunch of elements of the domain ? How does the truth-set ( set of individuals that satisfy the open sentence " of each formula enter in picture ? </p>
<p><strong>II - Second doubt</strong> : When to close a formula, and what does it provide us ? </p>
<p>The second point i'm having a hard time with is how to differentiate, and how to know the difference between the uses and the reasons of choosing to use either the formula<br>
$"x+2 =5 -> x = 3 "$ or $\forall x (x+2 = 5 -> x = 3 )$ . </p>
<p>And also between the difference of using ( and choosing to use ) the open formula $"x+y = y+x"$ or the closed formula $"\forall x \forall y ( x + y = y + x ) "$. </p>
<p>Same thing about $"x = x$ and $"\forall x (x = x) "$. </p>
|
Mauro ALLEGRANZA
| 108,274 |
<p><em>Question n°1</em> :</p>
<blockquote>
<p>let's suppose we have a domain <span class="math-container">$D$</span> of a structure which is interpreting that fol language. [...] the I think the variables refer to different kind of things depending on the kind of formula... to what exactly, in each case ?</p>
</blockquote>
<p>The answer is : <strong>NO</strong>.</p>
<p>If you are working in FOL, the domain <span class="math-container">$D$</span> contains only one "sort" of thing; if you "apply" (as in the example) first-order language to "speak of" numbers, the domain will be, for example, <span class="math-container">$\mathbb N$</span>, the set of <em>natural numbers</em> and <strong>all</strong> individual variables will "get value" from the domain <span class="math-container">$\mathbb N$</span>.</p>
<p><em>Question n°2</em> :</p>
<blockquote>
<p>how to interpret a formula like : <span class="math-container">$x+2=5$</span>, with a <em>free</em> variable ?</p>
</blockquote>
<p>One way (there are more than one, all producing the same result) is by considering an object <span class="math-container">$a \in D$</span> (an object in the domain) and enlarging the language with a "name" for it : call it <span class="math-container">$\overline a$</span>.</p>
<p>If <span class="math-container">$\mathcal A(x)$</span> is a formula with only one free var <span class="math-container">$x$</span>, we consider an <em>instance</em> of it : <span class="math-container">$\mathcal A(x/\overline a)$</span>, i.e.the formula (without free vars) obtained from <span class="math-container">$\mathcal A(x)$</span> by substitution of all occurrences of <span class="math-container">$x$</span> with <span class="math-container">$\overline a$</span>.</p>
<p>Now the formula <span class="math-container">$\mathcal A(x/\overline a)$</span> is <em>closed</em> (i.e. without free vars), and we call it a <em>sentence</em>, and we are able to evaluate its truth-value.</p>
<p>Consider the formula of your example (I omit the overline for simplicity) :</p>
<blockquote>
<p><span class="math-container">$x+2=5$</span>;</p>
</blockquote>
<p>if we substitute to <span class="math-container">$x$</span> the (name for the) number <span class="math-container">$3$</span>, we obtain a <em>true</em> sentence (i.e. <span class="math-container">$3+2=5$</span>).</p>
<p>If instead we substitute to <span class="math-container">$x$</span> the (name for the) number <span class="math-container">$4$</span>, we obtain a <em>false</em> sentence (i.e. <span class="math-container">$4+2=5$</span>).</p>
<p>A formula with free var is called <em>open</em> because it has <strong>no</strong> (fixed) meaning : it is "open to" different interpretations; in order to give it meaning, you have to transform it into a <em>sentence</em> (i.e. a <em>closed</em> formula).</p>
<p>We have two ways to do this :</p>
<ul>
<li><p>either use quantifiers; thus, from <span class="math-container">$x+2=5$</span>, we can obtain <span class="math-container">$\forall x(x+2=5)$</span>, which is plainly <em>false</em>;</p></li>
<li><p>or instantiate it (as above); of course, for different instances of the same formula, we will obtain different <em>sentences</em> with different truth-values.</p></li>
</ul>
<p><em>Question n°3 :</em></p>
<p>In the case of <span class="math-container">$x = x$</span>, the two ways above will give as results :</p>
<ul>
<li><p><span class="math-container">$\forall x (x = x)$</span>, which is clearly <em>true</em> (and more: it is <em>valid</em>, i.e. true in every domain)</p></li>
<li><p>in the domain <span class="math-container">$\mathbb N$</span>, we can instantiate it as, e.g. <span class="math-container">$1=1$</span>, <span class="math-container">$2=2$</span>, and so on; all will be true instances. Of course the <span class="math-container">$x$</span> must be instantiated to the same objects in both occurrences of it into <span class="math-container">$x=x$</span>.</p></li>
</ul>
<p><em>Question n°4 :</em></p>
<blockquote>
<p>When should we use formulae with free vars, and when closed ones?</p>
</blockquote>
<p><em>It depends</em>...</p>
<p>Consider : <span class="math-container">$x+2=5 \rightarrow x=3$</span></p>
<p>What happens with <span class="math-container">$3$</span> in place of <span class="math-container">$x$</span> ? </p>
<p>We get : <span class="math-container">$3+2=5$</span>, which is <em>true</em>, and <span class="math-container">$3=3$</span>, also <em>true</em>; thus, by truth-table for <span class="math-container">$\rightarrow$</span> :</p>
<blockquote>
<p><span class="math-container">$(x+2=5 \rightarrow x=3)[3/x]$</span>, i.e. <span class="math-container">$3+2=5 \rightarrow 3=3$</span>, that reduces to <em>True</em> <span class="math-container">$\rightarrow$</span> <em>True</em>, which is <em>True</em>.</p>
</blockquote>
<p>And with <span class="math-container">$4$</span> in place of <span class="math-container">$x$</span> ? We can easily check that we will have <em>False</em> <span class="math-container">$\rightarrow$</span> <em>False</em>, which is again <em>True</em>.</p>
<p>Thus, we have showed that <span class="math-container">$∀x(x+2=5 \rightarrow x=3)$</span> is <em>true</em> in <span class="math-container">$\mathbb N$</span>.</p>
<p>But this is not always so : <span class="math-container">$\forall x (2 \times x = 4)$</span> is not <em>true</em> in <span class="math-container">$\mathbb N$</span>.</p>
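<p>The instantiation and quantification steps above can be mimicked in code over a finite fragment of the domain; a minimal Python sketch (the bound 100 is an arbitrary cutoff of my choosing, since a program cannot range over all of <span class="math-container">$\mathbb N$</span>):</p>

```python
# Evaluate instances and quantified closures of an open formula
# over a finite fragment of the domain N (here 0..99).
domain = range(100)

# The open formula "x + 2 = 5" as a predicate on the domain.
phi = lambda x: x + 2 == 5

print(phi(3))                       # instance with x := 3 -> True
print(phi(4))                       # instance with x := 4 -> False
print(all(phi(x) for x in domain))  # "forall x (x + 2 = 5)" -> False

# "forall x (x + 2 = 5 -> x = 3)" holds on the fragment:
print(all((not phi(x)) or x == 3 for x in domain))

# "forall x (2x = 4)" fails on the fragment:
print(all(2 * x == 4 for x in domain))
```

<p>The open formula itself has no truth-value; only its instances and its quantified closures do, exactly as in the answer.</p>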
|
3,428,546 |
<p>Consider whether the following series converges or diverges <span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^n\sin\frac{n\pi}{3n+1}}{\sqrt{n+3}}$$</span></p>
<p>I have tried Leibniz's test but failed to show that <span class="math-container">$\sin\frac{n\pi}{3n+1} / \sqrt{n+3}$</span> is monotone, which I checked graphically that it is. I have managed to show that the series does not converge absolutely, which tells me nothing about conditional convergence. I have also tried using Cauchy's criterion, but failed to arrive at any useful result.</p>
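<p>Not a proof, but a quick numerical sanity check of the partial sums is easy to run; a rough Python sketch (the cutoffs 20000 and 40000 are arbitrary choices of mine):</p>

```python
import math

def partial_sum(N):
    """Partial sum of sum_{n=1}^{N} (-1)^n sin(n*pi/(3n+1)) / sqrt(n+3)."""
    s = 0.0
    for n in range(1, N + 1):
        s += (-1) ** n * math.sin(n * math.pi / (3 * n + 1)) / math.sqrt(n + 3)
    return s

s1, s2 = partial_sum(20000), partial_sum(40000)
print(s1, s2)  # the two values are close, consistent with convergence
```

<p>The terms alternate in sign with a factor <span class="math-container">$\sin\frac{n\pi}{3n+1}\to\sin\frac{\pi}{3}$</span>, so the numerics are consistent with convergence by a Dirichlet-type argument, while the absolute values behave like <span class="math-container">$c/\sqrt n$</span> and sum divergently.</p>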
|
eternalGoldenBraid
| 225,419 |
<p>Terence Tao has some interesting related commentary in his blog post "<a href="https://terrytao.wordpress.com/2010/01/01/254a-notes-0-a-review-of-probability-theory/" rel="noreferrer">A review of probability theory</a>":</p>
<blockquote>
<p>Elements of the sample space <span class="math-container">$\Omega$</span> will be denoted <span class="math-container">$\omega$</span>.
However, for reasons that will be explained shortly, we will try to
avoid actually referring to such elements unless absolutely required
to.</p>
</blockquote>
<p>... </p>
<blockquote>
<p>In order to have the freedom to perform extensions every time we need
to introduce a new source of randomness, we will try to adhere to the
following important dogma: <strong>probability theory is only “allowed” to
study concepts and perform operations which are preserved with respect
to extension of the underlying sample space.</strong> (This is analogous to how
differential geometry is only “allowed” to study concepts and perform
operations that are preserved with respect to coordinate change, or
how graph theory is only “allowed” to study concepts and perform
operations that are preserved with respect to relabeling of the
vertices, etc..) As long as one is adhering strictly to this dogma,
one can insert as many new sources of randomness (or reorganise
existing sources of randomness) as one pleases; but if one deviates
from this dogma and uses specific properties of a single sample space,
then one has left the category of probability theory and must now take
care when doing any subsequent operation that could alter that sample
space. This dogma is an important aspect of the probabilistic way of
thinking, much as the insistence on studying concepts and performing
operations that are invariant with respect to coordinate changes or
other symmetries is an important aspect of the modern geometric way of
thinking. <strong>With this probabilistic viewpoint, we shall soon see the
sample space essentially disappear from view altogether</strong>, after a few
foundational issues are dispensed with.</p>
</blockquote>
<p>I get the impression that for people who deeply understand probability theory, it is "bad form" to focus too much on the underlying sample space, because the sample space might be changed (extended) anyway. Personally, I always need to define the sample space explicitly in my mind (as in your step 1) in order to feel like I really understand what I'm doing.</p>
|
989,740 |
<p>How do I prove the following statement?</p>
<blockquote>
<p>If $x^2$ is irrational, then $x$ is irrational. The number $y = π^2$ is irrational. Therefore, the number $x = π$ is irrational</p>
</blockquote>
|
mfl
| 148,513 |
<p>The statement </p>
<p>"$x^2$ irrational $\implies$ $x$ irrational" </p>
<p>is logically equivalent to </p>
<p>"$x$ not irrational $\implies x^2$ not irrational".</p>
|
3,655,215 |
<p>There are many contradictions in the literature on tensors and differential forms. Authors use the words coordinate-free and geometric. For example, the book Tensor Analysis and Elementary Differential Geometry for Physicists and Engineers says differential forms are coordinate-free while tensors are dependent on coordinates. But when you look at the Wikipedia article on tensor calculus, it says that tensors are a coordinate-free representation. Another mention would be Kip Thorne's Modern Classical Physics, where he explains that he develops physics in a coordinate-free way using tensors. Other authors say they develop differential geometry, or physics, in a geometric way. Is geometry synonymous with coordinate-free? This is all very confusing. There are many more examples in the literature, but I don't see a definitive answer; the further I look, the more contradictions between authors I find. I am looking for an authoritative textbook that I can learn from. What do you think about Chris Isham's Modern Differential Geometry for Physicists? Also, is it better to use tensors or differential forms in theoretical physics? </p>
|
Lee Mosher
| 26,501 |
<p>I cannot suggest any particular textbook because mathematical physics is quite far from my area of expertise, although I will confess a fondness for Misner, Thorne and Wheeler.</p>
<p>But I will say that there does not exist any "authoritative textbook" in which all ambiguities of terminology and notation are erased and all terms and notations are used in a manner that all physicists and mathematicians throughout the world will agree upon. </p>
<p>If you study a good book on mathematical physics then you'll learn that author's point of view on terminology and notation. But more importantly, you'll learn some math and physics. Then you'll have a solid foundation for further studies, and you'll be in a good position to navigate around the inevitable variations of terminology and notation that you will encounter in your further readings.</p>
|
1,517,086 |
<p>I spent a long time trying to find a natural deduction derivation for the formula $\exists x(\exists y A(y) \rightarrow A(x))$, but I always got stuck at some point with free variables in the leaves. Could someone please help me or give me some hints to find a proof. </p>
<p>Thanks.</p>
|
oerpli
| 184,354 |
<p>If you prefer the tree style notation this is a valid proof:</p>
<p>$$\dfrac{\dfrac{\lower{1.5ex}{[\exists y~A(y)]^1}~~\dfrac{[A(s)]^2}{A(s)}{}}{\dfrac{A(s)}{\exists y~A(y)~\to~A(s)}{(\to i,1)}}{(\exists e,1)}}{\exists x~(\exists y~A(y)~\to~A(x))}{(\exists i)}$$</p>
|
121,791 |
<p>Now I'm trying to use </p>
<pre><code>Show[plota, Epilog->Inset[inset, Scaled->[{0.2, 0.7}]]]
</code></pre>
<p>operation to insert an inset of a figure, where plota is an Overlay of two plots</p>
<pre><code>plota = Overlay[{plotb, plotc}];
</code></pre>
<p>and then I cannot be successfully inserted in because "Overlay is not a type of graphics" so it cannot be showed. </p>
<p>What should I do to make the inset works? </p>
|
Mr.Wizard
| 121 |
<p>As JasonB temporarily commented you can <a href="http://reference.wolfram.com/language/ref/Rasterize.html" rel="nofollow noreferrer"><code>Rasterize</code></a> an <a href="http://reference.wolfram.com/language/ref/Overlay.html" rel="nofollow noreferrer"><code>Overlay</code></a> to convert it into a (rasterized) <code>Graphics</code> expression.</p>
<pre><code>plotB = Plot[2 Sin[x] + x, {x, 0, 15}, Filling -> Bottom];
plotC = Plot[Sinc[x], {x, 0, 10}, PlotStyle -> Green, Axes -> False];
plotA = Overlay[{plotB, plotC}];
inset = Graphics[{Red, Thick, Dashed, Circle[]}];
Show[Rasterize @ plotA,
Epilog -> Inset[inset, Scaled[{0.3, 0.6}]]
]
</code></pre>
<p><a href="https://i.stack.imgur.com/ZK7RL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZK7RL.png" alt="enter image description here"></a></p>
<p>Doing this you will however lose the scalability of (un-rasterized) vector graphics. You may instead want to <code>Inset</code> repeatedly:</p>
<pre><code>Show[plotB,
Epilog -> {
Inset[plotC, Automatic, Automatic, Scaled[{1, 1}]],
Inset[inset, Scaled[{0.3, 0.6}]]
}
]
</code></pre>
|
267,753 |
<p>A number of seemingly unrelated elementary notions can be defined uniformly with the help of the (iterated) Quillen lifting property
(a category-theoretic construction I define below), starting from a single (counter)example or a simple class of morphisms:
for example, a finite group being nilpotent, solvable, or a p-group, or a topological space being compact, discrete, or T4 (normal).</p>
<p>I would like to see more examples, to help me understand whether there is a bigger picture behind this.</p>
<p>Let me give the definitions.</p>
<p>For a property $C$ of arrows (morphisms) in a category, define
its <em>left and right orthogonals</em> as</p>
<p>$$ C^\perp := \{ f :\text{ for each }g \in C\ f \,\rightthreetimes\, g \} $$
$$ {}^\perp C := \{ g :\text{ for each }f \in C\ f \,\rightthreetimes\, g \} $$</p>
<p>here $f \,\rightthreetimes\, g$ reads " $f$ has the left lifting property wrt $g$ ",
" $f$ is (left) orthogonal to $g$ ",
i.e. for $f:A\longrightarrow B$, $g:X\longrightarrow Y$,
$f \,\rightthreetimes\, g$ iff for each $i:A\longrightarrow X$, $j:B\longrightarrow Y$ such that $ig=fj$ ("the square commutes"),
there is $j':B\longrightarrow X$ such that $fj'=i$ and $j'g=j$ ("there is a diagonal
making the diagram commute").</p>
<p>Examples:</p>
<p>In the category Sets of sets the right orthogonal
${}^\perp \{\emptyset \longrightarrow \{*\}\}$ of the simplest non-surjection
$\emptyset \longrightarrow \{*\}$ is the class of surjections.
The triple left orthogonal $ ((\{\emptyset \longrightarrow \{*\}\}^\perp)^\perp)^\perp$ is the class of functions which split.</p>
<p>The left and right orthogonals of $ \{x_{1},x_{2}\}\longrightarrow \{*\} $, the simplest non-injection, are both precisely the class of injections.</p>
<p>A finite group $H$ is nilpotent iff $H\longrightarrow H\times H$ is in ${}^\perp(\{ 0\longrightarrow G : G\text{ arbitrary} \}^\perp)$ </p>
<p>A Hausdorff space $K$ is compact iff $K\longrightarrow \{*\}$ is in ${}^\perp\left(\left({}^\perp(\{a\}\longrightarrow \{a{<}b\})_{<5}\right)^{\perp}\right)$;
here $^\perp(\{a\}\longrightarrow \{a{<}b\})_{<5}$ denotes maps in $^\perp(\{a\}\longrightarrow \{a{<}b\})$
between spaces of size less than 5.</p>
<p>I give more examples in the answers to my own question I posted,
as they require some notation.</p>
|
Tim Campion
| 2,362 |
<p>A word on the bigger picture. Examples are absolutely ubiquitous in category theory. First note that if <span class="math-container">$\mathcal{C} \perp \mathcal{D}$</span>, then <span class="math-container">$\mathcal{C}$</span> and <span class="math-container">$\mathcal{D}$</span> determine each other uniquely up to the closure operations <span class="math-container">${}^\perp(()^\perp)$</span> and <span class="math-container">$(^\perp())^\perp$</span>, so to understand what's going on largely boils down to understanding classes of morphisms of the form <span class="math-container">${}^\perp(\mathcal{C}^\perp)$</span>. Now consider:</p>
<p><strong>Theorem</strong>
Let <span class="math-container">$\mathcal{K}$</span> be a locally presentable category and <span class="math-container">$\mathcal{K}^{[1]}$</span> its category of morphisms. Then the following are equivalent:</p>
<ol>
<li><p>Accessible, accessibly-embedded, weakly-reflective full subcategories <span class="math-container">$\mathcal{L} \subseteq \mathcal{K}^{[1]}$</span>.</p>
</li>
<li><p>Accessible, accessibly-embedded full subcategories <span class="math-container">$\mathcal{L} \subseteq \mathcal{K}^{[1]}$</span> which are closed under coproduct, pushout along arbitrary morphisms of <span class="math-container">$\mathcal{L}$</span>, composition, transfinite composition, and retracts.</p>
</li>
<li><p>Full subcategories <span class="math-container">$\mathcal{L} \subseteq \mathcal{K}^{[1]}$</span> of the form <span class="math-container">$\mathcal{L} = {}^\perp(\mathcal{C}^\perp)$</span> for some small set <span class="math-container">$\mathcal{C} \subseteq \mathcal{K}^{[1]}$</span></p>
</li>
</ol>
<p>The proof is via the small object argument. Note that the "closure" conditions of (2) are always satisfied by a subcategory closed under <span class="math-container">${}^\perp(()^\perp)$</span> in an arbitrary category.</p>
<p>And consider this: if <span class="math-container">$f: A \to B$</span> is a morphism and you can form the pushout <span class="math-container">$B\cup_A B$</span> of <span class="math-container">$f$</span> along itself, then the lifting property with respect to the map <span class="math-container">$B \cup_A B \to B$</span> is equivalent to lifts with respect to <span class="math-container">$f$</span> being <em>unique</em>, which means that the orthogonality relation we're talking about (usually called <em>weak</em> orthogonality) can be used to express <em>strong</em> orthogonality (where <em>unique</em> lifts exist). And we get analogously:</p>
<p><strong>Theorem</strong>
Let <span class="math-container">$\mathcal{K}$</span> be a locally presentable category and <span class="math-container">$\mathcal{K}^{[1]}$</span> its category of morphisms. Then the following are equivalent:</p>
<ol>
<li><p>Accessible, accessibly-embedded, reflective full subcategories <span class="math-container">$\mathcal{L} \subseteq \mathcal{K}^{[1]}$</span>.</p>
</li>
<li><p>Accessible, accessibly-embedded full subcategories <span class="math-container">$\mathcal{L} \subseteq \mathcal{K}^{[1]}$</span> which are closed under colimits and pushouts and retracts along arbitrary morphisms of <span class="math-container">$\mathcal{L}$</span>.</p>
</li>
<li><p>Full subcategories <span class="math-container">$\mathcal{L} \subseteq \mathcal{K}^{[1]}$</span> of the form <span class="math-container">$\mathcal{L} = {}^\perp(\mathcal{C}^\perp)$</span> for some small set <span class="math-container">$\mathcal{C} \subseteq \mathcal{K}^{[1]}$</span> (where for the moment I'm using the more standard convention that <span class="math-container">${}^\perp$</span> denotes <em>strong</em> orthogonality).</p>
</li>
</ol>
<p>These theorems have implications for full subcategories of <span class="math-container">$\mathcal{K}$</span> rather than <span class="math-container">$\mathcal{K}^{[1]}$</span> by identifying an object <span class="math-container">$X$</span> with either <span class="math-container">$X \to 1$</span> or <span class="math-container">$\emptyset \to X$</span>, as appropriate.</p>
<p>These two example theorems are meant to be illustrative. The provisos "accessible, accessibly-embedded" are technical, and can be omitted in the presence of the set-theoretical Vopenka's Principle.</p>
<p>In another direction, in a non-locally-presentable category like <span class="math-container">$\mathsf{Top}$</span> many of the same principles still apply, allowing one to concluded from "closure" properties of a class of morphisms as in (2) above that it is generated under <span class="math-container">${}^\perp(()^\perp)$</span> by a small class of morphisms. <span class="math-container">$\mathsf{Top}$</span> itself also satisfies a weakening of the notion of local presentability called <a href="https://ncatlab.org/nlab/show/locally+bounded+category" rel="nofollow noreferrer">local boundedness</a>; see the references on the nlab page for analogs of the above theorems in the locally bounded case. There is also a way to apply the above theorems directly to <span class="math-container">$\mathsf{Top}$</span>, which also illustrates an example of the phenomenon you're looking for:</p>
<p><strong>Example</strong> <span class="math-container">$\mathsf{Top}$</span> is the union of a chain of full, coreflective subcategories <span class="math-container">$\mathsf{Top} = \cup_\kappa \mathsf{Top}_\kappa$</span> which are locally presentable. Here <span class="math-container">$\kappa$</span> is a regular cardinal and <span class="math-container">$\mathsf{Top}_\kappa$</span> is the category of spaces of <span class="math-container">$<\kappa$</span>-<a href="https://en.wikipedia.org/wiki/Countably_generated_space" rel="nofollow noreferrer">tightness,</a> (I may be off by taking a successor cardinal here) and in fact</p>
<p><span class="math-container">$\mathsf{Top}_\kappa = {}^\perp(\mathsf{Disc}_\kappa^\perp)$</span></p>
<p>where here <span class="math-container">$\perp$</span> denotes strong orthogonality and <span class="math-container">$\mathsf{Disc}_\kappa$</span> is the category of discrete spaces of cardinality <span class="math-container">$<\kappa$</span>, considered as a full subcategory of <span class="math-container">$\mathsf{Top}$</span>.</p>
<p>The moral is: any nice (~ "sufficiently cocomplete") subcategory <span class="math-container">$\mathcal{L}$</span> of a sufficiently-good cocomplete category <span class="math-container">$\mathcal{K}$</span> is definable in terms of lifting properties, unless some weird set-theoretical phenomenon is occurring.</p>
|
1,304,948 |
<p>If I solve $Tx=0$ where $T$ is some square matrix
then if I multiply both sides by $T$ and solve for $T^2x=0$, will my x be the same?
In other words if I were to multiply to both sides of the equation $Tx=0$ to any order of $T$, will my x be the same? or at least satisfy the first equation?</p>
|
Kevin
| 61,342 |
<p>As Prahlad Vaidyanathan stated, this is, in general, not true. However, it is true that $W\subseteq (W^\perp)^\perp$. </p>
|
1,304,948 |
<p>If I solve $Tx=0$ where $T$ is some square matrix
then if I multiply both sides by $T$ and solve for $T^2x=0$, will my x be the same?
In other words if I were to multiply to both sides of the equation $Tx=0$ to any order of $T$, will my x be the same? or at least satisfy the first equation?</p>
|
copper.hat
| 27,978 |
<p>In general you have $W^{\bot \bot} = \overline{\operatorname{sp} W}$.</p>
<p>For a simple example, take $l_2$ and let $W =\operatorname{sp} \{e_k \}$, a proper dense subspace of $l_2$.
Then $W^\bot = \{0\}$ and so $W^{\bot \bot} = l_2$.</p>
|
1,109,918 |
<p>Is it always possible to add terms into limits, like in the following example? (Or must certain conditions be fulfilled first, such as for example the numerator by itself must converge etc)</p>
<p>$\lim_{h \to 0} {f(x)} = \lim_{h \to 0} \frac{e^xf(x)}{e^x}$</p>
|
Pp..
| 203,995 |
<p>$$\frac{\partial f(x+h)}{\partial h}=f'(x+h)$$</p>
<p>$$\frac{\partial f(x-h)}{\partial h}=-f'(x-h)$$</p>
<p>$$\frac{\partial f(x)}{\partial h}=0$$</p>
<p>$$(h^2)'=2h$$</p>
|
1,423,456 |
<p>I am completely stuck on this problem: $C[0,1] = \{f: f\text{ is continuous function on } [0,1] \}$ with metric $d_1$ defined as follows:</p>
<p>$d_1(f,g) = \int_{0}^{1} |f(x) - g(x)|dx $.</p>
<p>Let the sequence $\{f_n\}_{n =1}^{\infty}\subseteq C[0,1]$ be defined as follows:</p>
<p>$$
f_n(x) = \begin{cases}
\min\{1,\ \tfrac{1}{2}-n(x -\tfrac{1}{2})\} & x\in [0, 1/2]\\[4pt]
\max\{0,\ \tfrac{1}{2}-n(x -\tfrac{1}{2})\} & x\in [1/2, 1]
\end{cases}
$$
Use this sequence to show that $C[0,1]$ is not complete.</p>
<p>I need help to solve this problem.</p>
|
DanielWainfleet
| 254,665 |
<p>Estimate $d_1(f_m,f_n)$ for $0<m<n$ as follows. For any $n>0$ let $p(n)=(1-1/n)/2$ and $q(n)=(1+1/n)/2$. For $0<m<n$ we have $0\le p(m)<p(n)<q(n)<q(m) \le 1$. Moreover $$f_m(x)=f_n(x)=1$$ for $x \in [0,p(m)]$ and $$f_m(x)=f_n(x)=0$$ for $x \in [q(m),1]$. Therefore $$d_1(f_m,f_n)=\int_{p(m)}^{q(m)} |f_m(x)-f_n(x)|\, dx.$$ A crude upper bound for $|f_m(x)-f_n(x)|$ on $[p(m),q(m)]$ is $$|f_m(x)-f_n(x)|\leq |f_m(x)|+|f_n(x)|\le 2.$$ Therefore $$d_1(f_m,f_n) \leq \int_{p(m)}^{q(m)} 2\, dx =2\big(q(m)-p(m)\big)=2/m.$$ So $(f_n)_{n \in \mathbb N}$ is a Cauchy sequence.</p>
<p>Now consider the function $g$ on $[0,1]$ where $g(x)=1$ for $x \in[0,1/2)$ and $g(x)=0$ for $x \in (1/2,1]$. (The value of $g(1/2)$ is irrelevant.) Observe that for any $h \in C[0,1]$ there exists $x \in (0,1/2) \cup (1/2,1)$ where $h(x) \ne g(x)$; otherwise $h$ would be discontinuous at the point $1/2$. Since $h$ is continuous and $g$ is constant near $x$, there exists $e>0$ such that $(x-e,x+e)$ is contained in $(0,1/2)$ or in $(1/2,1)$, and $$|g(y)-h(y)|> |g(x)-h(x)|/2$$ for all $y \in (x-e,x+e)$.</p>
<p>But for all but finitely many $n$ we have $f_n(y)=g(y)$ for all $y \in (x-e,x+e)$, because $$(x-e,x+e)\cap [p(n),q(n)]=\emptyset$$ for all but finitely many $n$. Therefore, for all sufficiently large $n$, $$d_1(f_n,h) \ge \int_{x-e}^{x+e} |g(y)-h(y)|\, dy > (2e)\,\frac{|g(x)-h(x)|}{2}>0.$$ So no $h \in C[0,1]$ is the limit of $(f_n)_{n \in \mathbb N}$: the Cauchy sequence has no limit in $C[0,1]$, hence the space is not complete. Note that we did not need an exact value for $d_1(f_m,f_n)$, which simplifies the calculation.</p>
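<p>Both estimates (the Cauchy bound $d_1(f_m,f_n)\le 2/m$ and the $L^1$-convergence of $f_n$ to the discontinuous step $g$) can be checked numerically; a rough midpoint-rule sketch in Python (the grid size is an arbitrary choice):</p>

```python
def f(n, x):
    """The piecewise-linear function f_n from the problem."""
    v = 0.5 - n * (x - 0.5)
    return min(1.0, v) if x <= 0.5 else max(0.0, v)

def d1(u, v, N=100000):
    """Midpoint-rule approximation of d_1(u, v) = int_0^1 |u - v| dx."""
    h = 1.0 / N
    return sum(abs(u((k + 0.5) * h) - v((k + 0.5) * h)) for k in range(N)) * h

# Cauchy estimate: d_1(f_m, f_n) <= 2/m for m < n.
for m, n in [(2, 5), (10, 100), (50, 400)]:
    assert d1(lambda x: f(m, x), lambda x: f(n, x)) <= 2 / m + 1e-6

# f_n converges in d_1 to the discontinuous step function g,
# so the would-be limit is not in C[0,1].
g = lambda x: 1.0 if x < 0.5 else 0.0
print(d1(lambda x: f(100, x), g))  # ~ 1/(4*100) = 0.0025
```

<p>The printed distance matches the exact value $d_1(f_n,g)=1/(4n)$ of the two small triangles between the ramp and the step.</p>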
|
2,538,553 |
<p>For example in statistics we learn that mean = E(x) of a function which is defined as</p>
<p>$$\mu = \int_a^b xf(x) \,dx$$</p>
<p>however in calculus we learn that</p>
<p>$$\mu = \frac {1}{b-a}\int_a^b f(x) \,dx $$ </p>
<p>What is the difference between the means in statistics and calculus and why don't they give the same answer?</p>
<p>thank you</p>
|
Michael Hardy
| 11,667 |
<p>This seems to be based on confusion resulting from resemblance between the notations used in the two situations.</p>
<p>In probability and statistics, one learns that $\displaystyle\int_{-\infty}^\infty x f(x)\,dx$ is the mean, <b>NOT</b> of the function $f$, but of a random variable denoted (capital) $X$ (whereas lower-case $x$ is used in the integral) whose probability density function is $f.$ This is the same as $\displaystyle \int_a^b xf(x)\,dx$ in cases where the probability is $1$ that the random variable $X$ is between $a$ and $b.$ (The failure, in the posted question, to distinguish betweeen the lower-case $x$ used in the integral and the capital $X$ used in the expression $\operatorname E(X)$ is an error that can make it impossible to understand expressions like $\Pr(X\le x)$ and some other things.)</p>
<p>In calculus, the expression $\displaystyle \frac 1 {b-a} \int_a^b f(x)\,dx$ is the mean, <b>NOT</b> of any random variable $X,$ but of the function $f$ itself, on the interval $[a,b].\vphantom{\dfrac11}$ </p>
<p>Notice that in probability, you necessarily have $\displaystyle \int_a^b f(x)\,dx=1$ and $f(x)\ge 0,$ and the mean $\displaystyle \int_a^b xf(x)\,dx$ is necessarily between $a$ and $b.$ But none of that applies to the calculus problem, since the quantity whose mean is found is on the $f(x)$-axis, not on the $x$-axis.
$$\S \qquad\qquad \S \qquad\qquad \S$$
<b>Postscript:</b> Nine people including me have up-voted "Jack M"'s comment, so just to satisfy that point of view I will add some things.</p>
<p>If $f$ is the density function of the probability distribution of the random variable (capital) $X,$ then the mean of $g(X)$ (where $g$ is some other function) is $$ \int_{-\infty}^\infty g(x) f(x)\,dx. $$ Applying that to the situation in calculus, one can say that the density function of the <b>uniform distribution</b> on the interval $[a,b]$ is $1/(b-a),$ so if $X$ is a random variable with that distribution, then $$ \operatorname E(f(X)) = \int_a^b f(x) \frac 1 {b-a} \, dx. $$ And a random variable $X$ itself can be regarded as a function whose domain is a sample space $\Omega,$ with the probability measure $P$ assigning probabilities to subsets of $\Omega,$ and then you have $$ \operatorname E(X) = \int_\Omega X(\omega)\, P(d\omega). $$</p>
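<p>The equivalence in the postscript is easy to test numerically; a Monte Carlo sketch in Python (the choice $f(x)=x^2$ on $[0,1]$ is my own example, not from the answer):</p>

```python
import random

random.seed(0)

a, b = 0.0, 1.0
f = lambda x: x * x

# Mean of the *function* f on [a, b]: (1/(b-a)) * int_a^b f(x) dx = 1/3 here.
analytic_mean = 1.0 / 3.0

# Mean of the *random variable* f(X), X uniform on [a, b]:
# E(f(X)) = int_a^b f(x) * (1/(b-a)) dx -- the same quantity.
n = 200000
mc_mean = sum(f(random.uniform(a, b)) for _ in range(n)) / n
print(abs(mc_mean - analytic_mean))  # small
```

<p>So the "calculus mean" of $f$ is the probabilistic mean of $f(X)$ for $X$ uniformly distributed, which is exactly why the two formulas in the question look alike but measure different things.</p>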
|
4,333,062 |
<p>I stumbled upon the following identity
<span class="math-container">$$\sum_{k=0}^n(-1)^kC_k\binom{k+2}{n-k}=0\qquad n\ge2$$</span>
where <span class="math-container">$C_n$</span> is the <span class="math-container">$n$</span>th Catalan number. Any suggestions on how to prove it are welcome!</p>
<p>This came up as a special case of a generating function for labeled binary trees. Actually I can directly prove the identity is zero by showing that certain trees don't exist, but I expect that seeing a direct proof will help me find nice closed formulae for other coefficients of the generating function.</p>
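<p>Not an answer, but a brute-force check with exact integer arithmetic is quick to run and confirms the identity for small <span class="math-container">$n$</span> (and shows it genuinely needs <span class="math-container">$n\ge 2$</span>):</p>

```python
from math import comb

def catalan(k):
    # C_k = binom(2k, k) / (k + 1), exactly in integers
    return comb(2 * k, k) // (k + 1)

def lhs(n):
    # sum_{k=0}^{n} (-1)^k C_k binom(k+2, n-k); comb(a, b) = 0 when b > a
    return sum((-1) ** k * catalan(k) * comb(k + 2, n - k)
               for k in range(n + 1))

print([lhs(n) for n in range(2, 12)])  # all zeros
print(lhs(0), lhs(1))                  # 1 1  (the identity starts at n = 2)
```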
|
Clyde Kertzer
| 1,026,367 |
<p>I'll sum up the concepts it seems a few have addressed in the comments.</p>
<p>Setting the square root of something equal to a negative will only result in an imaginary solution, as I'm sure you are aware. Thus, squaring both sides will ALWAYS result in an extraneous solution. So, if you can show that an equation is of the form "square root = a negative," then you know the equation has no real solutions. So yes, your logic is correct.</p>
<p>Good of you to be thinking about equations such as these in a more general sense!</p>
|
167,221 |
<p>I'm wondering if this is possible for the general case. In other words, I'd like to take $$\int_a^b{g(x)dx} + \int_c^d{h(x)dx} = \int_e^f{j(x)dx}$$ and determine $e$, $f$, and $j(x)$ from the other (known) formulas and integrals. I'm wondering what restrictions, limitations, and problems arise.</p>
<p>If this is not possible in the general case, I'm wondering what specific cases this would be valid for, and also how it could be done. It's a curiosity of mine for now, but I can think of some possible problems and applications to apply it to.</p>
|
Mercy King
| 23,304 |
<p>Assume $a< b$ and $c< d$. Let<br>
$$
e \le \min\{a,c\}, \ f \ge \max\{b,d\},
$$
and set $j=\tilde{g}+\tilde{h}$ where
$$
\tilde{g}:=g\cdot 1_{[a,b]},\quad \tilde{h}:=h\cdot 1_{[c,d]}: [e,f] \to \mathbb{R}.
$$
Then
$$
\int_a^b g+\int_c^d h=\int_e^f(\tilde{g}+\tilde{h})=\int_e^f j.
$$</p>
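<p>The construction can be sanity-checked numerically; a rough Python sketch with a fine midpoint rule (the particular $g$, $h$, and intervals are my own choices):</p>

```python
# g(x) = x on [0, 1] and h(x) = x^2 on [2, 3]; extend each by zero
# to [e, f] = [0, 3] and add, as in the answer.
a, b = 0.0, 1.0
c, d = 2.0, 3.0
g = lambda x: x
h = lambda x: x * x

e, f_up = min(a, c), max(b, d)

def j(x):
    s = 0.0
    if a <= x <= b:
        s += g(x)
    if c <= x <= d:
        s += h(x)
    return s

def midpoint(func, lo, hi, N=200000):
    step = (hi - lo) / N
    return sum(func(lo + (k + 0.5) * step) for k in range(N)) * step

total = midpoint(j, e, f_up)
exact = 0.5 + 19.0 / 3.0   # int_0^1 x dx + int_2^3 x^2 dx
print(abs(total - exact))  # tiny
```

<p>The jump discontinuities of $j$ at the interval endpoints are harmless for integration, which is the only limitation of the construction: $j$ need not be continuous even when $g$ and $h$ are.</p>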
|