hcju committed on
Commit 54d4412 · verified · 1 parent: c00b3ad

Upload folder using huggingface_hub

Files changed (5)
  1. README.md +75 -0
  2. corpus.jsonl +0 -0
  3. excluded_ids.jsonl +0 -0
  4. qrels/test.jsonl +0 -0
  5. queries.jsonl +76 -0
README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ language:
+ - en
+ multilinguality:
+ - monolingual
+ task_categories:
+ - text-retrieval
+ source_datasets:
+ - math stackexchange
+ task_ids:
+ - document-retrieval
+ config_names:
+ - corpus
+ tags:
+ - text-retrieval
+ dataset_info:
+ - config_name: default
+   features:
+   - name: query-id
+     dtype: string
+   - name: corpus-id
+     dtype: string
+   - name: score
+     dtype: float64
+   splits:
+   - name: test
+     num_examples: 11457
+ - config_name: corpus
+   features:
+   - name: _id
+     dtype: string
+   - name: title
+     dtype: string
+   - name: text
+     dtype: string
+   splits:
+   - name: corpus
+     num_examples: 9969
+ - config_name: queries
+   features:
+   - name: _id
+     dtype: string
+   - name: text
+     dtype: string
+   splits:
+   - name: queries
+     num_examples: 76
+ - config_name: excluded_ids
+   features:
+   - name: query-id
+     dtype: string
+   - name: excluded-ids
+     sequence:
+       dtype: string
+   splits:
+   - name: excluded_ids
+     num_examples: 76
+ configs:
+ - config_name: default
+   data_files:
+   - split: test
+     path: qrels/test.jsonl
+ - config_name: corpus
+   data_files:
+   - split: corpus
+     path: corpus.jsonl
+ - config_name: queries
+   data_files:
+   - split: queries
+     path: queries.jsonl
+ - config_name: excluded_ids
+   data_files:
+   - split: excluded_ids
+     path: excluded_ids.jsonl
+ ---
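The `configs` block above maps each config to a plain JSONL file, so all four files can be inspected without special tooling. The sketch below mimics the `default` (qrels) and `excluded_ids` schemas declared in `dataset_info`; the ids and scores are invented for illustration, and the reading that `excluded-ids` names corpus documents to be filtered out of a query's judged set is an assumption, not something the card states:

```python
import json

# Toy records shaped like the card's `default` (qrels) and `excluded_ids`
# features; the ids and scores here are invented for illustration only.
qrels_lines = [
    '{"query-id": "Q1", "corpus-id": "D1", "score": 1.0}',
    '{"query-id": "Q1", "corpus-id": "D2", "score": 1.0}',
    '{"query-id": "Q2", "corpus-id": "D3", "score": 1.0}',
]
excluded_lines = [
    '{"query-id": "Q1", "excluded-ids": ["D2"]}',
    '{"query-id": "Q2", "excluded-ids": []}',
]

def load_jsonl(lines):
    """Each file holds one JSON object per line."""
    return [json.loads(line) for line in lines]

# Assumed semantics: drop a query's excluded documents from its judged set.
excluded = {r["query-id"]: set(r["excluded-ids"])
            for r in load_jsonl(excluded_lines)}

relevant = {}
for r in load_jsonl(qrels_lines):
    if r["corpus-id"] not in excluded.get(r["query-id"], set()):
        relevant.setdefault(r["query-id"], set()).add(r["corpus-id"])

print(relevant)  # {'Q1': {'D1'}, 'Q2': {'D3'}}
```

With the `datasets` library installed, the same files would resolve through the config mapping instead, e.g. `load_dataset(<repo_id>, "queries")` for the 76 queries.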
corpus.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
excluded_ids.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
qrels/test.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
queries.jsonl ADDED
@@ -0,0 +1,76 @@
+ {"_id": "B.301", "text": "Formula: $\\|A\\|_2=\\sqrt{\\rho(A^TA)}$\nContext: Inequality between norm 1,norm 2 and norm $\\infty$ of Matrices Suppose $A$ is a $m\\times n$ matrix. Then Prove that, $\\begin{equation*} \\|A\\|_2\\leq \\sqrt{\\|A\\|_1 \\|A\\|_{\\infty}} \\end{equation*}$ I have proved the following relations: $\\begin{align*} \\frac{1}{\\sqrt{n}}\\|A\\|_{\\infty}\\leq \\|A\\|_2\\leq\\sqrt{m}\\|A\\|_{\\infty}\\\\ \\frac{1}{\\sqrt{m}}\\|A\\|_{1}\\leq \\|A\\|_2\\leq\\sqrt{n}\\|A\\|_{1} \\end{align*}$ Also I feel that somehow Holder's inequality for the special case when $p=1$ and $q=\\infty$ might be useful.But I couldn't prove that. Edit: I would like to have a prove that do not use the information that $\\|A\\|_2=\\sqrt{\\rho(A^TA)}$ Usage of inequalities like Cauchy Schwartz or Holder is fine. "}
+ {"_id": "B.302", "text": "Formula: $z=\\sqrt[n]{s}e^{\\frac{i\\varphi}{n}}$\nContext: $n$-th root of a complex number I am confused about the following problem. With $w=se^{i{\\phi}}$, where $s\\ge 0$ and $\\phi \\in \\mathbb{R}$, solve the equation $z^n=w$ in $\\mathbb{C}$ where $n$ is a natural number. How many solutions are there? Now my approach is simply taking the $n$-th root which gives $$z=\\sqrt[n]{s}e^{\\frac{i\\varphi}{n}}$$ However, it seems that this problem is asking us to show the existance of the $n$-th root. Can I assume that the $n$-th root of a complex number already exists? Moreoover, would I be correct to say that there is only one solution which is given above? "}
+ {"_id": "B.303", "text": "Formula: $[x,y] = x$\nContext: Number of non-commutative Lie algebras of dimension 2 Theorem- Up to isomorphism, the only noncommutative Lie algebra of dimension 2 is that with basis $x , y$ and bracket determined by $[x,y] = x$. I understand that all vector spaces of dimension 2 over the field $K$ are isomorphic to each other. So the number of lie algebras of dimension 2 in a field $K$ is determined by the number of possible bilinear operations [ ]$:\\ V \\ X \\ V \\rightarrow V$ satisfying the conditions $a)$ $[x,x]=0$ for all $x\\in V$ $b)$ $[x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0$ for all $x,y,z \\in V$ The bilinear operations on the other hand is determined by the elements to which the pair of base elements are mapped to in the bilinear operation. And since in a lie algebra $[x,x]=[y,y]=0$ and $[x,y]=-[x,y]$ we ony need to determine $[x,y]$. Now how do we prove that $[x,y]=x$ and $[y,x]=-x$ always and why can't it be [y,x]=y or any other vector ? "}
+ {"_id": "B.304", "text": "Formula: $[T_X^{0,1}, T_X^{0,1}] \\subset T_X^{0,1}$\nContext: Elementary proof that a non-orientable manifold of real dimension $2$ does not admit a quasi-complex structure. Is there an easy proof that a non-orientable real surface $X$ does not admit a quasi-complex structure? The proof I know is to observe that any quasi-complex structure on a real surface $X$ necessarily satisfies the integrability condition $$[T_X^{0,1}, T_X^{0,1}] \\subset T_X^{0,1}$$ of the Newlander-Nirenberg theorem, because $T_X^{0,1}$ is a $1$-dimensional complex vector bundle, and the bracket $[-,-]$ is alternating, i.e. it vanishes on $T_X^{0,1}$. So by the Newlander-Nirenberg theorem, $X$ admits a complex structure, and complex manifolds have to be orientable. However, the Newlander-Nirenberg theorem is a deep theorem and feels a bit overkill. Also I don't really see why there cannot be a quasi-complex structure. Is there a more elementary proof to convince myself? "}
+ {"_id": "B.305", "text": "Formula: $\\lim\\limits_{N\\to\\infty}\\left\\lfloor\\sum\\limits_{r=1}^N\\frac{1}{2^r}\\right\\rfloor$\nContext: What will be the value of floor function of $\\lim\\limits_{N\\to\\infty}\\left\\lfloor\\sum\\limits_{r=1}^N\\frac{1}{2^r}\\right\\rfloor$ What would be the value of floor function of $\\lim\\limits_{N\\to\\infty}\\left\\lfloor\\sum\\limits_{r=1}^N\\frac{1}{2^r}\\right\\rfloor$ would it be $1$ or would it be $0$ ? The formula I use for this is that of infinite summation series that is $\\frac{a}{1-r}$ but I have no clue how to find out what the floor value of the above expression would be. P.s I am a high school student so please explain in simple terms, and yes I do know basic calculus. EDIT: I'm sorry it was given $\\lim_{N \\to \\infty}$ in the problem "}
+ {"_id": "B.308", "text": "Formula: $\\zeta(s)=\\sum_{n=1}^\\infty\\frac{1}{n^s}=\\frac{1}{\\Gamma(s)}\\int_0^\\infty \\frac{x^{s-1}}{e^x-1}dx$\nContext: Riemann's definition of the zeta function I am having trouble understanding Riemann's definition of the zeta function, and I will need to give a brief summary here before I can get to my question. In his 1859 paper, Riemann derived the integral representation $$\\zeta(s)=\\sum_{n=1}^\\infty\\frac{1}{n^s}=\\frac{1}{\\Gamma(s)}\\int_0^\\infty \\frac{x^{s-1}}{e^x-1}dx$$ that is valid for $\\mbox{Re}(s)\\gt 1$, and then modified the integral in order to define a function that is defined for all complex values of $s$, except $s=1$, where it has a simple pole. The extension is given by $$\\zeta(s)=\\frac{\\Gamma(1-s)}{2\\pi i}\\int_C \\frac{(-z)^s}{e^z-1}\\frac{dz}{z}$$ where $C$ is a \"Hankel contour\", that is, a path that travels from $+\\infty $ at a small distance $\\epsilon$ above the positive $x$-axis, circles around the origin once in counterclockwise direction with a small radius $\\delta$, and returns to $+\\infty$ traveling at distance $\\epsilon$ below the positive real axis. Taking the limit as $\\epsilon\\rightarrow 0$ and $\\delta \\rightarrow 0$ one can see that the integral $$\\int_C \\frac{(-z)^s}{e^z-1}\\frac{dz}{z}$$ becomes $$(e^{i\\pi s}-e^{-i\\pi s})\\int_0^\\infty\\frac{x^{s-1}}{e^x-1}dx$$ and then the rest follows easily from known identities satisfied by the Gamma function. While the original real integral over $[0,\\infty)$ is clearly divergent if $\\mbox{Re}(s)\\leq 1$, the contour integral over $C$ is defined for all complex $s$, because the path stays away from the singularity at $s=0$ and from the branch cut along the positive $x$-axis. 
My problem is understanding why the integral over $C$ does not depend on $\\epsilon$ and $\\delta$, so that we can keep them at a safe positive distance from the singularities for the definition, but we can take the limit for the purpose of evaluating the integral. I know that by Cauchy's theorem we can modify a path of integration (without changing the value of the integral) starting and ending at the same point as long as we do not cross any singularity, but this path starts and ends at infinity, so I am not sure how to rigorously proceed using Cauchy's theorem. Even if I start the path at $R+i\\epsilon$ and end it at $R-i\\epsilon$ for some large $R$, the path starting and ending points change as $\\epsilon$ changes. "}
+ {"_id": "B.310", "text": "Formula: $\\frac{4}{x}+\\frac{10}{y}=1$\nContext: Finding positive integer solutions to $\\frac{4}{x}+\\frac{10}{y}=1$ Find the positive integer solutions for: $\\frac{4}{x} + \\frac{10}{y} = 1$ I had calculated the solutions manually but it was a very tedious process. Is there any better way to do this? "}
+ {"_id": "B.311", "text": "Formula: $f(x)=\\frac{9^{x}}{9^x+3}$\nContext: Given a function $f(x)=\\frac{9^{x}}{9^x+3}$, what is $f(\\frac{1}{27})+ f(\\frac{2}{27}) + f(\\frac{3}{27})+ ...+ f(\\frac{26}{27})$? While I was going through past Olympiad math papers, I found this question without any explanation. Here is the question: Given a function $f(x)=\\frac{9^{x}}{9^x+3}$, what is $f(\\frac{1}{27})+ f(\\frac{2}{27}) + f(\\frac{3}{27})+ ...+ f(\\frac{26}{27})$? The answer was 13. I took a really bad approach and converted $\\frac{9^{x}}{9^x+3}$ to $1+\\frac{9^{x}}{3}$, which I then noticed was wrong. I also accidentally multiplied $9^{\\frac{1}{27}}$ with $9^{\\frac{2}{27}}$, $9^{\\frac{2}{27}}$ with $9^{\\frac{3}{27}}$, and so on, before realizing that the functions were added and not multiplied. I suspect that there is something to the power of $\\frac{n}{27}$, because 9 is a multiple of 27. However, I am not completely sure. Is there a law that tells me how I can solve this question? Since this is a Math Olympiad question, there is probably a maximum time limit of five minutes to do this question. This means that I probably won’t have time for tedious mathematical calculations with a calculator and online tools, or something like that. Please give me a quick, fast solution that is probably suitable for an 8th grader, at most a solution at a 10th grader level. "}
+ {"_id": "B.312", "text": "Formula: $\\left\\lfloor \\frac{\\left\\lfloor a/b \\right\\rfloor}{c} \\right\\rfloor=\\left\\lfloor\\frac{a}{bc}\\right\\rfloor$\nContext: Proving $\\left\\lfloor \\frac{\\left\\lfloor a/b \\right\\rfloor}{c} \\right\\rfloor=\\left\\lfloor\\frac{a}{bc}\\right\\rfloor$ for positive integer $a$, $b$, $c$ How can we prove the following? $$\\left\\lfloor \\frac{\\left\\lfloor \\dfrac{a}{b} \\right\\rfloor}{c} \\right\\rfloor = \\left\\lfloor \\frac{a}{bc} \\right\\rfloor$$ for $a,b,c \\in \\mathbb{Z}^+$ I don’t know if I’m doing something wrong, but I can’t prove it even though I’m pretty sure it’s true. Obviously, because the concept of algebra isn’t aware of the fact that we are restricting the variables to positive integers, and given my assumption that the equality doesn’t necessarily hold for non-integers, an element of non-algebraic problem solving is needed, i.e. making a change to the expression given our knowledge of that condition, which then allows for algebraic maneuvers that show that the equality holds. I think that’s what I’m missing. Thanks. "}
+ {"_id": "B.313", "text": "Formula: $|ab|=rs$\nContext: Let $a,b\\in G$, a finite abelian group and $|a|=r, |b|=s$ with $\\gcd(r,s)=1$. Prove that $|ab|=rs$. Let $a,b\\in G$, a finite abelian group and $|a|=r, |b|=s$ with $\\gcd(r,s)=1$. Prove that $|ab|=rs$. My attempt: Let $|ab|=n$. Since $G$ is ableian, $(ab)^n=a^nb^n=1$. Thus $r\\mid n$ and $s\\mid n$. Together with $\\gcd(r,s)=1$, it follows that $rs\\mid n$. This is where I'm stuck; need to show that $rs=n$. Any hints on how to proceed? Edit: I've come up with a solution that is a somewhat different approach to what has been provided in the hints. Here it goes: Since $G$ is abelian, $n\\mid{\\rm lcm}(r,s)$. But since $\\gcd(r,s)=1$, ${\\rm lcm}(r,s)=rs$ by an elementary result in number theory. Thus $n\\mid rs$. Together with $rs\\mid n$, we have that $n=rs$, which is what we want to prove. "}
+ {"_id": "B.314", "text": "Formula: $\\sum \\|e_n-x_n\\| < 1$\nContext: Closed span of a sequence in Hilbert spaces. Suppose that you have an orthonormal basis $\\{e_n\\}$ in a Hilbert space such that $\\sum \\|e_n-x_n\\| < 1$. Is this condition enough to prove that the closed span of $\\{x_n\\}$ is $H$? My efforts to prove this have not led anywhere promising. I have tried showing that the only vector perpendicular to all of the $x_n$ would be $0$. Not sure which way I can proceed. Does anyone have an idea how to approach this? Thank you. "}
+ {"_id": "B.315", "text": "Formula: $P(m\\text{++})\\implies P(m)$\nContext: Backwards Induction (Exercise 2.2.6) Analysis 1 by Terence Tao I am new to the study of analysis and I decided to start with Terence's book in my endeavor. I want to show my \"proof\" of backwards induction since I have some difficulty in understanding this. I want to now if my proof is correct or have some error, because if have,$a $can't infer that. Any feedback is appreciated. Let $n$ be a natural number, and let $P(m)$ be a property pertaining to the natural numbers such that whenever $P(m\\text{++})$ is true, then $P(m)$ is true. Suppose that $P(n)$ is also true. Prove that $P(m)$ is true for all natural numbers $m ≤ n$; this is known as the principle of backwards induction. (Hint: apply induction to the variable $n$.) First i want to show $P(m)$ is true $\\forall$ $0\\geq m$. H1: $\\forall m$ $P(m\\text{++})\\implies P(m)$ H2: $P(0)$ C: $P(m)$ is true $\\forall$ $0\\geq m$. $0\\geq m$ means $0=m+a$ for some natural number $a$, then $m=a=0$ for corollary 2.2.9. But $P(0)$ is true for H2, then the case $n=0$ is proved. Suppose now that works for $n$ and prove $n\\text{++}$. then: H1: $\\forall m$ $P(m\\text{++})\\implies P(m)$ H2: $P(n)\\implies P(m)$ $\\forall$ $n\\geq m$ H3: $P(n\\text{++})$. In H1, for $m=n$ we have $P(n\\text{++})\\implies P(n)$ and for H2 we now $P(n)\\implies P(m)$ (specifically for $n=m$), then $P(n\\text{++})\\implies P(m)$ for $n\\text{++}>m$. We need to prove that works for $n\\text{++}=m$ but for that $P(n\\text{++})$ is true for H3. We conclude that $P(n\\text{++})\\implies P(m)$ $\\forall$ $n\\text{++}\\geq m$. "}
+ {"_id": "B.317", "text": "Formula: $\\int \\frac{1}{\\left(x^2+1\\right)^n}dx$\nContext: $\\int \\frac{1}{\\left(x^2+1\\right)^n}dx$ Let be $n\\in \\mathbb{Z_+}$. Compute the following integral: $$\\int \\frac{1}{\\left(x^2+1\\right)^n}dx$$ I obtained that for $$n=1$$ the value of the integral is $$\\tan^{-1}x+C$$ and for $$n=2$$ $$x\\left(\\frac{1}{2\\left(x^2+1\\right)}+\\frac{\\tan \\:^{-1}}{2x}\\right)+C$$ How to do the rest of the cases? "}
+ {"_id": "B.318", "text": "Formula: $e^{x} \\geq \\left(1+\\frac{x}{n}\\right)^{n}$\nContext: How do you prove $e^{x} \\geq \\left(1+\\frac{x}{n}\\right)^{n}$ for $n \\geq 1$ How do you prove $e^{x} \\geq \\left(1+\\frac{x}{n}\\right)^{n}$ for $n \\geq 1$? I can prove this for natural numbers only via induction, but how do you prove this for any real $n \\geq 1$? We start with the base case $n=1$. We have $e^x \\geq 1+x$ by a variety of methods. For the induction step, assume $e^{x} \\geq \\left(1+\\frac{x}{n}\\right)^{n}$. Notice that taking the derivative of $(1+\\frac{x}{n+1})^{n+1}$ gives us $(1+\\frac{x}{n+1})^{n}$ and thus $(1+\\frac{x}{n+1})^{n} < \\left(1+\\frac{x}{n}\\right)^{n} \\leq e^x = \\frac{d}{dx} e^x$. I'm not sure how to extend this to the non-integer case. Any help would be appreciated. "}
+ {"_id": "B.319", "text": "Formula: $A_1 \\subseteq A_2 \\subseteq \\ldots \\subseteq A_n \\subseteq A_{n+1} \\subseteq \\ldots$\nContext: A wrong argument for $\\mathbb{R}$ being countable We assume $A$ is the set of all countable subsets of the set of real numbers. We know $A$ is a partially ordered set $(A, \\subseteq)$. Suppose $$A_1 \\subseteq A_2 \\subseteq \\ldots \\subseteq A_n \\subseteq A_{n+1} \\subseteq \\ldots$$ is a chain in $A$. We can prove $B=\\bigcup_{n \\in \\Bbb{N}} A_n$ is a countable set. For each natural number $m$, we have $A_m \\subseteq B$. So $B$ is an upper bound for $A$. This shows each chain in $A$ has an upper bound according to Zorn's lemma. $A$ has a maximal element $X$, and we know $X$ is a countable set. Now we prove $X = \\Bbb{R}$. If $X \\neq \\Bbb{R}$, then there is an $x \\in \\Bbb{R}$ such that $x \\notin X$. Let $Y=X \\cup \\{x\\}$. It's obvious that $Y$ is a countable subset of the real numbers and $X \\subsetneq Y$. This contradicts $X$ being a maximal element. Thus, $X = \\Bbb{R}$ and $\\Bbb{R}$ is a countable set. What is wrong with this argument? "}
+ {"_id": "B.320", "text": "Formula: $\\int_{0}^{1}\\frac{\\sin^{-1}(x)}{x}$\nContext: An inverse trigonometric integral So my integral is $$\\int_{0}^{1}\\frac{\\sin^{-1}(x)}{x}$$ To avoid confusion let me re-write the integral as $$\\mathcal I = \\int_0^1 \\frac{\\arcsin(x)}{x}$$ I started off with a trig-substitution that is let $x = \\sin(t)$ and $t = \\arcsin(x)$ which means that $dx = \\cos(t) dt$ So our integrand becomes $$\\mathcal I = \\int_0^{\\frac{\\pi}{2}} \\frac{t}{\\sin(t)} \\cos(t) dt\\tag{Bounds have changed}$$ $$= \\int_0^{\\frac{\\pi}{2}} t\\space\\cot(t) dt$$ Then using Integration by Parts,$\\space$$u = t$ $\\implies du = dt$ and $dv = \\cot(t)$ $\\implies v = \\ln(\\sin(t))$ So our integrand thus becomes, $= t\\space\\ln(\\sin(t))$ from $0$ to $\\frac{\\pi}{2}$ $$-\\int_0^{\\frac{\\pi}{2}} \\ln(sin(t))dt\\tag{t*ln(sin(t)) = 0}$$ From here, I don't know how to proceed further. Any help/hint is appreciated :) Thanks in advance "}
+ {"_id": "B.321", "text": "Formula: $ax+by=d$\nContext: Solve Equation $ax+by=d$ where $d \\neq \\gcd(a,b)$ using Bézout I want to solve this equation: $3x+4y=14$ I present you what I have so far: $\\gcd(3,4)=1$ which is not $14$. I notice that: $3(6) + 4(-1) =14$ So using Bézout : $3(6-4k) + 4(-1+3k) = 14 (1)$, where $k$ integer. So we have $k>1/3$ and $k<3/2$. So $k = 1$. By replacing $k$ in equation $(1)$ we get: $a=2, b=2$, which indeed solves the equation. However, I dont get my previous solution $(a,b)=(4,-1)$ which is also correct. Am I applying Bézout wrong? Or am I not supposed to find the solution that I used to find the new ones, if they exist? Do I have $2$ solutions and that's it or am I missing something? Thank you. "}
+ {"_id": "B.322", "text": "Formula: $a_n=\\frac{n(n+1)}{2}$\nContext: How do I calculate the sum of sum of triangular numbers? As we know, triangular numbers are a sequence defined by $\\frac{n(n+1)}{2}$. And it's first few terms are $1,3,6,10,15...$. Now I want to calculate the sum of the sum of triangular numbers. Let's define $$a_n=\\frac{n(n+1)}{2}$$ $$b_n=\\sum_{x=1}^na_x$$ $$c_n=\\sum_{x=1}^nb_x$$ And I want an explicit formula for $c_n$. After some research, I found the explicit formula for $b_n=\\frac{n(n+1)(n+2)}{6}$. Seeing the patterns from $a_n$ and $b_n$, I figured the explicit formula for $c_n$ would be $\\frac{n(n+1)(n+2)(n+3)}{24}$ or $\\frac{n(n+1)(n+2)(n+3)}{12}$. Then I tried to plug in those two potential equations, If $n=1$, $c_n=1$, $\\frac{n(n+1)(n+2)(n+3)}{24}=1$, $\\frac{n(n+1)(n+2)(n+3)}{12}=2$. Thus we can know for sure that the second equation is wrong. If $n=2$, $c_n=1+4=5$, $\\frac{n(n+1)(n+2)(n+3)}{24}=5$. Seems correct so far. If $n=3$, $c_n=1+4+10=15$, $\\frac{n(n+1)(n+2)(n+3)}{24}=\\frac{360}{24}=15$. Overall, from the terms that I tried, the formula above seems to have worked. However, I cannot prove, or explain, why that is. Can someone prove (or disprove) my result above? "}
+ {"_id": "B.323", "text": "Formula: $(\\mathbb{R} [x]/(x^4 + 1))^*$\nContext: How to use the Euclidean Algorithm to find the inverse of $\\overline{x^2 -x}$ in $(\\mathbb{R} [x]/(x^4 + 1))^*$? I've tried a lot with the Euclidean Algorithm, but I still can't figure it out. Do you know how I can use the Euclidean Algorithm to find the inverse of $\\overline{x^2 -x}$ in $(\\mathbb{R} [x]/(x^4 + 1))^*$? Thanks in advance! "}
+ {"_id": "B.324", "text": "Formula: $\\sum_{n=1}^{\\infty} \\sum_{m=1}^\\infty \\frac{1}{{n^2 +m^2}}.$\nContext: The Double Basel Problem I have been playing with the series which I had been calling the 'Double Basel problem' for the past couple of hours $$ \\sum_{n=1}^{\\infty} \\sum_{m=1}^\\infty \\frac{1}{{n^2 +m^2}}. $$ After wrestling with this for awhile, I managed to generalize a result demonstrated here. This identity is: $$ \\sum_{m=1}^{\\infty}\\frac{1}{x^2+m^2} = \\frac{1}{2x}\\left[ \\pi \\coth{\\pi x} - \\frac{1}{x}\\right]. $$ Hence the original series becomes: $$ \\sum_{n=1}^{\\infty} \\frac{1}{2n}\\left[\\pi \\coth{\\pi n} - \\frac{1}{n} \\right]. $$ I have no idea where to go next with this problem. I seriously doubt that this series is convergent; however, I have been unable to prove it. Can you prove that this series is divergent? If it converges what is its value? Thanks so much! "}
+ {"_id": "B.325", "text": "Formula: $n!+2, n! +3, ..., n! + n$\nContext: Find consecutive composite numbers How to find 100 consecutive composite numbers? After many attempts I arrived at the conclusion that to find $m$ consecutive composite numbers we can use this $n!+2, n! +3, ..., n! + n$ where $n! + 2$ is divisible by $2$, $n! + 3$ is divisible by $3$ and so on... and where $m$ = $n-1$ Thus $n!+2, n! +3, ..., n! + n$ tells that there are $(n-1)$ consecutive numbers. However, there seems to be some gaps or incompetence. For example: $4!+2, 4! +3, 4! +4$ $→$ $26, 27, 28$. Although it's right there are for sure smaller numbers such as $8, 9, 10$ and $14, 15 ,16.$ Is there another method for solving such a problem mathematically? Is it a correct method or have I misunderstood it? "}
+ {"_id": "B.326", "text": "Formula: $F=P \\oplus T$\nContext: Every projective module is a direct summand of free module. I was reading \"Serial Rings\" by Gennadi Puninski. There it is written that , \"Since every module is a homomorphic image of a free module, every projective module is a direct summand of free module\".(ie. if $P$ is a projective module, there exists a free module F such that, $ F=P \\oplus T$ for some module $T$.) But I can't understand how \"Every module is a homomorphic image of a free module\" implies that \"Every projective module is a direct summand of free module\". (I have found a proof for \"Every projective module is a direct summand of free module\" but the first part of the above mentioned sentence wasn't used there.) "}
+ {"_id": "B.327", "text": "Formula: $\\begin{vmatrix}\n1 & 1 &1 \\\\ \nx & y & z \\\\\nx^2 & y^2 &z^2 \\\\\n\\end{vmatrix}$\nContext: Determinant not equal to volume error (closed) The determinant of a $3\\times 3$ matrix $\\begin{vmatrix} 1 & 1 &1 \\\\ x & y & z \\\\ x^2 & y^2 &z^2 \\\\ \\end{vmatrix} $ is the volume of a parallelopiped with its three sides as the vectors whose tails rest on origin and heads at the coordinates $(1,x,x^2),(1,y,y^2)$ and $(1,z,z^2)$ $^{[1]} $. The determinant of this matrix can be simplified to $(x-y)(y-z)(z-x)$. Proof: Subtracting column$1 $from column 2, and putting that in column 2, $\\begin{equation*} \\begin{vmatrix} 1 & 1 &1 \\\\ x & y & z \\\\ x^2 & y^2 &z^2 \\\\ \\end{vmatrix} = \\begin{vmatrix} 1 & 0 &1 \\\\ x & y-x & z \\\\ x^2 & y^2-x^2 &z^2 \\\\ \\end{vmatrix} \\end{equation*}$ $ = z^2(y-x)-z(y^2-x^2)+x(y^2-x^2)-x^2(y-x) $ rearranging the terms, $ =z^2(y-x)-x^2(y-x)+x(y^2-x^2)-z(y^2-x^2) $ taking out the common terms $(y-x)$ and $(y^2-x^2)$, $ =(y-x)(z^2-x^2)+(y^2-x^2)(x-z) $ expanding the terms $(z^2-x^2)$ and $(y^2-x^2)$ $ =(y-x)(z-x)(z+x)+(y-x)(y+x)(x-z) $ $ =(y-x)(z-x)(z+x)-(y-x)(z-x)(y+x) $ taking out the common term $(y-x)(z-x)$ $ =(y-x)(z-x) [z+x-y-x] $ $ =(y-x)(z-x)(z-y) $ $ =(x-y)(y-z)(z-x) $ As the $x$ coordinate of the heads of these three vectors is $1$, the head of these vectors lies in a plane perpendicular to the $x$-axis and a distance of $1$ unit away from the origin. (If we connect these three points, we get a triangle.) This plane will cut the parallelopiped into two equal triangular pyramids whose base lies in the plane. The perpendicular distance from the base of the pyramid to the tip is $1$ unit. The volume of the required parallelogram is the sum of the volume of the two triangular pyramids. $\\text{volume of a pyramid}=\\frac{1}{3}bh$. The height is $1$ units. 
The area of a triangle is, by Shoelace formula, $$A = \\frac{1}{2} \\begin{vmatrix} 1 & 1 &1 \\\\ x_1 & x_2 & x_3 \\\\ y_1 & y_2 & y_3 \\\\ \\end{vmatrix} $$ where the vertices of the triangle are $(x_1,y_1),(x_2.y_2),(x_3,y_3)$ $^{[2]}$ The vertices of the required traingle has the coordinates $(x,x^2),(y,y^2)$ and $(z,z^2)$. So the area of the triangle, $$A=\\frac{1}{2}\\begin{vmatrix} 1 & 1 &1 \\\\ x & y & z \\\\ x^2 & y^2 &z^2 \\\\ \\end{vmatrix}$$ which, as shown above, can be simplified to $\\frac{1}{2} (x-y)(y-z)(z-x)$ So, the volume is $$\\frac{1}{3}bh=\\frac{1}{3}\\times\\frac{1}{2}(x-y)(y-z)(z-x)\\times 1$$ $$= \\frac{1}{6}(x-y)(y-z)(z-x) $$ But, shouldn't the volume be equal to the determinant which is $(x-y)(y-z)(z-x)$ ? References [1]Youtube video by 3Blue1brown: https://youtu.be/Ip3X9LOh2dk?t=345 [2]Wikipedia article:https://en.wikipedia.org/wiki/Shoelace_formula "}
+ {"_id": "B.328", "text": "Formula: $\\sum_{k=1}^{n}\\cos\\frac{2\\pi k}{n}=0$\nContext: Proving $\\sum_{k=1}^{n}\\cos\\frac{2\\pi k}{n}=0$ I want to prove that the below equation can be held. $$\\sum_{ k=1 }^{ n } \\cos\\left(\\frac{ 2 \\pi k }{ n } \\right) =0, \\qquad n>1 $$ Firstly I tried to check the equation with small values of $n$ $$ \\text{As } n=2 $$ $$ \\cos\\left(\\frac{ 2 \\pi \\cdot 1 }{ 2 } \\right) + \\cos\\left(\\frac{ 2 \\pi \\cdot 2 }{ 2 } \\right) $$ $$ = \\cos\\left(\\pi\\right) + \\cos\\left(2 \\pi\\right) $$ $$ = -1+ 1 =0 ~~ \\leftarrow~~ \\text{Obvious} $$ But $$ \\text{As}~~ n=3 $$ $$ \\cos\\left(\\frac{ 2 \\pi \\cdot 1 }{ 3 } \\right) +\\cos\\left(\\frac{ 2 \\pi \\cdot 2 }{ 3 } \\right) + \\cos\\left(\\frac{ 2 \\pi \\cdot 3 }{ 3 } \\right) $$ $$ = \\cos\\left(\\frac{ 2 \\pi }{ 3 } \\right) + \\cos\\left(\\frac{ 4 \\pi }{ 3 } \\right) + \\cos\\left( 2\\pi \\right) $$ $$ = \\cos\\left(\\frac{ 2 \\pi }{ 3 } \\right) + \\cos\\left(\\frac{ 4 \\pi }{ 3 } \\right) + 1 =?$$ What formula(s) or property(s) can be used to prove the equation? "}
+ {"_id": "B.329", "text": "Formula: $A \\subseteq V \\subseteq \\overline{V} \\subseteq U$\nContext: How do I show that if $A$ is compact and $U \\supseteq A$ is open, then there is an open $V$ with $A \\subseteq V \\subseteq \\overline{V} \\subseteq U$? This question is from Wayne Patty's Topology Section 5.2. Consider $A$ be a compact subset of a regular space and let $U$ be an open set such that $A\\subseteq U$. Prove that there is an open set $V$ such that $A \\subseteq V \\subseteq \\overline{V} \\subseteq U$. Let $p \\in A$ which implies $p \\in U$. Then a result is given in the book (Theorem 5.11): A $T_1$-space $(X, \\mathcal T)$ is regular if and only if for each member $p$ of $X$ and each neighbourhood $U$ of $p$, there is a neighbourhood $V$ of $p$ such that $\\overline{V}\\subseteq U$. So, I got $ V \\subseteq \\overline{V} \\subseteq U$. But I am unable to prove that $A\\subseteq V \\subseteq \\overline{V}$. I thought that I should let $V\\subseteq \\overline{V} \\subseteq A$ but I am not able to find a contradiction. Can you please help with that? "}
+ {"_id": "B.330", "text": "Formula: $\\Delta=[x+a(n-1)](x-a)^{n-1}$\nContext: Shilov's Linear Algebra - Chapter 1, Problem 9 Calculate the $n$-th order determinant: $$\\Delta= \\begin{vmatrix} x&a&a&\\ldots&a\\\\ a&x&a&\\ldots&a\\\\ a&a&x&\\ldots&a\\\\ \\cdot&\\cdot&\\cdot&\\ldots&\\cdot\\\\ a&a&a&\\ldots&x \\end{vmatrix}$$ The answer is $\\Delta=[x+a(n-1)](x-a)^{n-1}$. If we add all the other columns to the first column, we get the first multiplicative factor of the answer, and are left with the following determinant: $$\\begin{vmatrix} 1&a&a&\\ldots&a\\\\ 1&x&a&\\ldots&a\\\\ 1&a&x&\\ldots&a\\\\ \\cdot&\\cdot&\\cdot&\\ldots&\\cdot\\\\ 1&a&a&\\ldots&x \\end{vmatrix}$$ How can we calculate this determinant to obtain the answer? "}
+ {"_id": "B.331", "text": "Formula: $4^x+6^x=9^x$\nContext: Finding roots of $4^x+6^x=9^x$ by hand The function $f(x)=4^x+6^x-9^x$ is such that $f(0)=1>0, f(1)=1>0, f(2)=-29$ and next $g(x)=(4/9)^x+(6/9)^x-1 \\implies f'(x)<0$ for all real values of $x$. So $g(x)$ being monotonic the equation $$4^x+6^x=9^x$$ has exactly one real solution. The question is whether this real root can be found analytically by hand. "}
+ {"_id": "B.333", "text": "Formula: $\\frac{\\sqrt{1-p^2}}{2\\pi(1-2p\\sin(\\varphi)\\cos(\\varphi))}$\nContext: Bivariate normal density $( X, Y)$ have a bivariate normal density centered at the origin with $E(X^2)$ = $E(Y^2) = 1$, and $E(XY) = p$ . In polar coordinates $(X, Y)$ becomes $(R,\\Phi)$ where $R^2 = X^2 + Y^2$. Prove that $\\Phi$ has a density given by $$\\frac{\\sqrt{1-p^2}}{2\\pi(1-2p\\sin(\\varphi)\\cos(\\varphi))}$$ And is uniformly distributed iff $p = 0$. (To this point everything is clear) what i do not understand is how to conclude that $P\\{XY > 0\\} = \\frac{1}{2} +\\pi^{-1} \\arcsin (p)$ and $P\\{XY < 0\\}= \\pi^{-1} \\arccos (p)$. "}
+ {"_id": "B.334", "text": "Formula: $a^{log_a(b)}=b$\nContext: logarithm proof for $a^{log_a(b)}=b$ I have tried proving for $a^{log_a(b)}=b$ , but I feel is incorrect, so how can I prove this? I have proved it as follows: $log_aa^{log_a(b)}=log_ab$ $log_a(b)log_aa= log_ab$ $log_a(b)= log_ab$ "}
+ {"_id": "B.338", "text": "Formula: $y = \\frac{a+bx}{b-x}$\nContext: Find all integer solutions of equation $y = \\frac{a+bx}{b-x}$ How to find all integer solutions for the equation $y = \\frac{a+bx}{b-x}$, where a and b are known integer values? P.S. x and y must be integer at the same time "}
+ {"_id": "B.341", "text": "Formula: $\\{a, a+b, a+2b, \\cdots\\}$\nContext: Do all arithmetic sequences with coprime coefficients contain a prime? Given $a, b \\in Z^+$, where $\\gcd(a, b) = 1$, we can define an arithmetic sequence $c_i = a + i \\cdot b$. The sequence is thus $\\{a, a+b, a+2b, \\cdots\\}$. Do all such sequences contain a prime? Do they contain an infinite number of primes? Example: $a=2, b=3$. Then, $c_1 = a+b = 5$, which is prime. Meanwhile, $a=4, b=2$ does not have primes, but $\\gcd(a, b) = 2 \\neq 1$, so this isn't a counterexample. "}
+ {"_id": "B.342", "text": "Formula: $f(x_4) =\\frac{2}{5}e^{\\frac{-x}{10}}\\left(1-e^{\\frac{-x}{10}}\\right)^3.$\nContext: Four students are giving presentations In four sections of a course, running (independently) in parallel, there are four students giving presentations that are each Exponential in length, with expected value of$10 $minutes each. How much time do we expect to be needed until all four of the presentations are completed? I'm a little thrown off by this question since it's in the chapter of order statistics in my book. But I believe that this is just gamma distribution. If each student has expected value of $10$ minutes each. Shouldn't the time needed till all four of the presentations are completed be $40$ minutes? $(10 \\cdot 4 = 40)$ Or is it the following. Calculate the density of the fourth order statistics $$f(x_4) =\\frac{2}{5}e^{\\frac{-x}{10}}\\left(1-e^{\\frac{-x}{10}}\\right)^3.$$ Then $$E(X_4) = \\int_0^\\infty\\frac{2x}{5}e^\\frac{-x}{10}\\left(1-e^\\frac{-x}{10}\\right)^3 \\,dx= 125/6.$$ So is the answer $40$ minutes or $125/6$ minutes? Any help is greatly appreciated. "}
+ {"_id": "B.343", "text": "Formula: $P(E) = 1/6$\nContext: When two dice are identical, why are ordered pairs considered for determining probability of getting sum x? Problem statement : +++++++++++++++ Given two identical unbiased dice, determine the probability of getting sum as 7. Event = Sum of dots on the top face of both dice is 7. $E = {(1,6),\\ (2,5),\\ (3,4),\\ (4,3),\\ (3,4),\\ (5,2),\\ (6,1)}$ $|Sample Space|$ = $36$. Hence, $P(E) = 1/6$ I have a doubt here. As the two dice are given identical, why do we have to consider ordered pairs? Shouldn't it be unordered consisting of only 3 possible pairs $\\{(1,6),\\ (2,5),\\ (3,4)\\} $? Hence, $|S| = 21$ and $P(E) = 3/21$. "}
+ {"_id": "B.344", "text": "Formula: $\\mathcal{A}=\\{\\{a\\}\\}$\nContext: Any collection of subsets of a set is a subbasis for a topology Theorem Any collection of subsets $\\mathcal{A}$ of a nonempty set $X$ forms the subbasis for a unique topology $\\tau$ on $X$. This theorem is absolutely amazing to me. I really enjoy the idea of it as a powerful tool, but I have come up with a counterexample that I just can't get over. So the theorem states that any collection of subsets of a nonempty $X$ form a subbasis for a unique topology on $X$. The emphasis there is any. So, consider the following counterexample: Let $X= \\{a,b,c,d,e\\}$ and let $\\mathcal{A}=\\{\\{a\\}\\}$. Clearly, this is a collection of subsets of $X$. Assume that, by our theorem, then $\\mathcal{A}=\\{\\{a\\}\\}$ is a subbasis for some topology on $X$. Okay, so since $\\mathcal{A}$ is a subbasis of some topology on $X$, let's try taking intersections of members of $\\mathcal{A}$. Well, $\\{a\\}\\cap\\{a\\}=\\{a\\}$. Then our basis for our topology is $\\mathcal{B} = \\{\\{a\\}\\}$ This is problematic because this means that our basis $\\mathcal{B}$ is just $\\{a\\}$, but note that $\\displaystyle\\bigcup \\mathcal{B} = \\{a\\}$ and $\\{a\\} \\neq X.$ Therefore, $X \\not \\in \\tau.$ How do we get $X$ in $\\tau$? Is my counterexample logically consistent? "}
+ {"_id": "B.345", "text": "Formula: $r\\equiv \\begin{cases} x=1 \\\\ y=1 \\\\z=\\lambda -2 \\end{cases}$\nContext: Elementary geometry question: How to calculate distance between two skew lines? I am helping someone with highschool maths but I got stacked in a elementary geometry problem. I am given the equation of two straigh lines in the space $r\\equiv \\begin{cases} x=1 \\\\ y=1 \\\\z=\\lambda -2 \\end{cases}$ and $s\\equiv\\begin{cases} x=\\mu \\\\ y=\\mu -1 \\\\ z=-1\\end{cases}$ and asked for some calculations. First I am asked the relative position of them so I get they are skew lines. After that I am asked for the distance between the two lines. In order to get the distance I have to calculate the line that is perpendicular to both of them in the \"skewing\" point, check the points where it touches the other two lines (sorry, not sure about the word in English) and calculate the module of this vector. Im having trouble calculating the perpendicular line. I know I can get the director vector using vectorial product, but I'm not sure how to find a point so that I can build the line. "}
+ {"_id": "B.347", "text": "Formula: $(a+b, a-b) = 1$\nContext: GCD (a,b) =1 prove GCD ( (a+b), (a-b) ) = 1 or 2 if GCD of $(a, b) = 1$, prove that GCD $(a+b, a-b) = 1$ or $2 .$ The proof goes like: Let GCD $( a+b, a-b ) = d$ and let there exist integers m and n such that $ a+b =md$ and $ a-b = nd.$ By adding and subtracting these two equations we get: $2a = (m+n)d$ and $2b = (m-n)d$ , because $a, b$ are coprime then $2$ GCD $(a,b)$ = GCD $(2a, 2b),$ and so on. My question is, why do we have to add and subtract above equations? I need to understand the concept of this prove in some more details. Thanks! "}
+ {"_id": "B.348", "text": "Formula: $\n\\begin{pmatrix}\nA & -B\\\\ \nB & A\n\\end{pmatrix}$\nContext: Question about determinant of a block matrix I was studying block matrices and suddenly this question came to my mind. Let $A, B \\in \\Bbb R^{n \\times n}$. From this Wikipedia page, $$\\det \\begin{pmatrix} A & B\\\\ B & A\\end{pmatrix} = \\det(A-B)\\det(A+B)$$ even if $A$ and $B$ do not commute. Does a similar condition hold for the following block matrix? $$\\begin{pmatrix} A & -B\\\\ B & A \\end{pmatrix}$$ "}
+ {"_id": "B.349", "text": "Formula: $x! = \\sqrt{2\\pi x} * (\\frac{x}{e})^x$\nContext: Inverse to Stirling's Approximation The equation for Stirling's Approximation is the following: $$x! = \\sqrt{2\\pi x} * (\\frac{x}{e})^x$$ Writing as a function for y gives us the following: $$y = \\sqrt{2\\pi x} * (\\frac{x}{e})^x$$ Is there a way to solve this equation for x, effectively finding an inverse to this function? "}
+ {"_id": "B.350", "text": "Formula: $x^3 - x, x \\in \\mathbb{N}$\nContext: Induction proof for natural numbers in a division operation I want to proove that 2 and 3 divide $x^3 - x, x \\in \\mathbb{N}$ and I'm stuck at the inductive step, here's where I'm at: For all $x \\in \\mathbb{N}$, let $P(x)$ be the proposition: 2 and 3 divide $x^3 - x$ Basic step: the first term in $\\mathbb{N}$ is $0$, then: $\\frac{0^3 - 0}{2} = 0$ et $\\frac{0^3 - 0}{3} = 0$, thus $P(0)$ is true. Inductive step: For the inductive hypothesis, we assume that $P(k)$ is true for an arbitrary nonnegative integer k bigger than 0. That is, we assume that 2 and 3 divide $k^3 - k$ To carry out the inductive step using this assumption, we must show that when we assume that $P(k)$ is true, then $P(k + 1)$ is also true. That is, we must show that 2 and 3 divide $(k+1)^3 - (k+1)$ Is the next step here is that we need to prove that $\\frac{(k+1)^3-(k+1)}{2}$ and $\\frac{(k+1)^3-(k+1)}{3}$ are integers? thus 2 and 3 divide $(k+1)^3 - (k+1)$? "}
+ {"_id": "B.351", "text": "Formula: $\\lim_{n\\to\\infty}\\inf \\mu (A_n) \\geq \\mu(A)$\nContext: Application of Fatou's Lemma but something simpler is better? The question Let $(X,\\mathcal{A},\\mu)$ be a measure space. Let $A_n$ be a sequence of sets in $\\mathcal{A}$. Define $A := \\{ x \\in X $ such that for all but finitely many $n \\in \\mathbb{N}$ it holds that $x ∈ A_n$ $ \\}$ Show that $\\lim_{n\\to\\infty}\\inf \\mu (A_n) \\geq \\mu(A)$ My attempt $ \\mu (A_n) = \\int_X \\chi_{A_n} d\\mu$ and so by Fatou's lemma: $\\lim_{n\\to\\infty}\\inf \\mu (A_n) = \\lim_{n\\to\\infty}\\inf \\int_X \\chi_{ A_n} \\geq \\int_X \\lim_{n\\to\\infty}\\inf \\chi_{A_n} d\\mu$ Now all I need to show is that $\\lim_{n\\to\\infty}\\inf \\chi_{A_n}(x) = \\chi_A(x)$ a.e Consider $x\\in A$ then eventually $x\\in A_n \\forall n $ eventually and so $\\chi_{A_n}(x) = 1 \\forall n $ and so $\\lim_{n\\to\\infty}\\inf \\chi_{A_n}(x) = \\lim_{n\\to\\infty} 1 = 1= \\chi_A(x)$ Now consider $x\\not\\in A$ then $\\forall N \\in \\mathbb{N} \\exists n >N$ such that $x \\not\\in A_n$ and so $\\inf_{m \\geq n} \\chi_{A_n}(x) = 0 \\forall n$ and hence $\\lim_{n\\to\\infty}\\inf \\chi_{A_n}(x) = \\lim_{n\\to\\infty} 0 = 0= \\chi_A(x)$ Ando so the required follows. However this feels awfully complicated and I was wondering if anyone has any tips for something simpler "}
+ {"_id": "B.352", "text": "Formula: $\\int_0^\\infty f(x) dx$\nContext: find a positive continuous function with a finite area : $\\int_0^\\infty f(x) dx$ , but the $f(x)\\rightarrow$ doesn't exist. find a positive continuous function with a finite area : $\\int_0^\\infty f(x) dx$ , but the limit of $f(x)$ as $x$ goes to infinity doesn't exist. I tried finding such a function but I failed . "}
+ {"_id": "B.353", "text": "Formula: $Cov(x,y) = 0$\nContext: Can someone explain why if two random variables, X and Y, are uncorrelated, it does not necessarily mean they are independent? I understand that two independent random variables are by definition uncorrelated as their covariance is equivalent to 0: $Cov(x,y) = E(xy)- E(x)E(y)$ $E(x)*E(y) = E(xy)$, when x and y are two random independent variables. Therefore, $Cov(x,y) = 0$. However, I am having trouble understanding if two random variables, X and Y, are uncorrelated, it does not necessarily mean they are independent. Could someone also give me a real world example of when two random variables are neither independent nor casually connected? I believe it will help me understand this concept better. "}
+ {"_id": "B.354", "text": "Formula: $p_1p_2\\mid a$\nContext: Prove that if $p_1\\mid a$ and $p_2\\mid a$ then $p_1p_2\\mid a$ So I am supposed to be proving that if $p_1$ and $p_2$ are distinct primes and $p_1\\mid a$ and $p_2\\mid a$ then $p_1p_2\\mid a$, and I need to use Euclid's Lemma except as far as I understand Euclid's lemma is the converse of this statement and I have tried for the last few hours to work with Euclid's and GCDs to figure this one out and I just don't know where to start since I can't wrap my head around this one. Can anyone help me out? "}
+ {"_id": "B.355", "text": "Formula: $f(f(x)^2+f(y))=xf(x)+y$\nContext: $f(f(x)^2+f(y))=xf(x)+y$ Find all functions $f:\\mathbb{R}\\rightarrow\\mathbb{R}$ such that $$f(f(x)^2+f(y))=xf(x)+y$$ for all $x,y\\in{\\mathbb{R}}$. Here is my approach to the problem: We see that $f(x)=x$ is an obvious solution (Just trying easy linear equations). I think this would be the only solution to the problem. Am I right? And how to prove that there is no other solution? (Note: I am a beginner at functional equations) "}
+ {"_id": "B.357", "text": "Formula: $f(n)=n^2-n+2$\nContext: How many functions can be used to describe to a finite series? I was learning more about series today and would like to know if there are existing proofs I could look at about this problem. Basically, if you are given an infinite series representing a function f : $\\Bbb N \\Rightarrow \\Bbb R$ but only shown the first n numbers, how many functions f, written in terms of n, could you write to represent that series. I'm not including piecewise functions, because I assume that would always be infinite. Take the series $(2, 4, ...)$ with 2 numbers given. $f(n)=2n$ , $f(n)=n^2-n+2$ , and $f(n)=2^n$ would all be functions that could fit this series, although they differ after the first two numbers. I believe there are more polynomials that fit this description but I'm not sure how many. My question is, essentially, are there an infinite number of functions for which $f(1) = 2$ and $f(2) = 4$, and if this is the case, does this also apply to any finite number of outputs? (e.g. the first n digits of pi written as $(3, 1, 4, 1, 5, 9...)$) If not, could you find out how many possible functions there are? "}
+ {"_id": "B.358", "text": "Formula: $ds = \\sqrt{1 + \\frac{dy}{dx}^2}$\nContext: Confusion about the formula of the area of a surface of revolution Before I read the formula of the area of revolution which is $\\int 2\\pi y \\,ds$, where $ds = \\sqrt{1 + \\frac{dy}{dx}^2}$, I thought of deriving it myself. I tried to apply the same logic used for calculating the volume of revolution (e.g., $\\int \\pi y^2 dx $). My idea is to use many tiny hollow cylinders (inspired from the shell method), each has a surface area of $(2\\pi y) (dx)$: $2\\pi y$ is the circumference of the cylinder, and $dx$ is the height of the cylinder Their product is the surface area of the hollow (e.g., empty from the inside) cylinder. With this logic, the area is $\\int 2\\pi y dx$. Where is my mistake? Also it's confusing why for the volume it was enough to partition the object using cylinders and for areas not. "}
+ {"_id": "B.359", "text": "Formula: $\\tan\\theta=\\frac{x}{2}$\nContext: Apparent inconsistencies in integration In a problem, the substitution $$\\tan\\theta=\\frac{x}{2}$$ was made. In the end, the answer was in terms of sines, and to convert back, $sin\\theta$ was defined as $$\\sin\\theta=\\frac{x}{\\sqrt{4+x^2}}$$ This is a typical example of some stuff about integration I'm struggling to understand; (1) Why are the absolute values of square roots never taken? This is something I keep seeing in every situation involving an integral. (Here, if $\\theta$ is in the third quadrant, sines would be negative and tans would be positive. So this definitely doesn't work for the third quadrant.) (2) Expanding upon the stuff in the parantheses up there, a possible explanation is that while doing trig substitutions, the angle is always a principal angle of the inverse trigonometric operation on whatever you're making the substitution. Is there such a rule? "}
+ {"_id": "B.360", "text": "Formula: $1/ \\vert x \\vert$\nContext: Fourier transform of function $1/ \\vert x \\vert$ What is the Fourier transform of function $$f(x) = \\frac{1}{\\vert x \\vert}?$$ This is not a homework. I would also appreciate help for calculating it myself. "}
+ {"_id": "B.361", "text": "Formula: $F(x)=\\int_a^xf(t)dt$\nContext: Is a Riemann-integrable function always differentiable? Let $f:[a,b]\\to\\mathbb{R}$ be Riemann-integrable and $F(x)=\\int_a^xf(t)dt$. Is this function $F$ always differentiable? Because the antiderivative is defined as $F'=f$ right, so you would think that it always holds. "}
+ {"_id": "B.362", "text": "Formula: $x R y \\lor y R x$\nContext: Kuratowski's Theorem using Axiom of Choice I can't seem to be able to prove Kuratowski's Theorem using the Axiom of Choice, although they are equivalent assertions. Kuratowski's Lemma: Every partial order has a maximal chain. Axiom of Choice: For every set X of disjoint nonempty sets there exists a set$Y $such that for every set $Z \\in X, Y \\cap Z$ is a singleton. My attempt: Consider any chain $C_0$ of the partial order. If $\\exists x \\in X \\setminus C_0$ which is comparable with some element of $C_0$, let $C_1 := C_0 \\cup \\{ x \\}$. Iterate this process . If at some point we cannot find such an x, then we have found a maximal chain. Suppose we can find such an $x$ infinitely, then the sets $i\\geq 1 \\Rightarrow X_i := C_{i+1} \\setminus C_i$ are disjoint singletons. Hence by axiom of choice there exists $Y$ for which $X_i \\subseteq Y$. Inorder to finish the proof, I need to prove something of the form \"If a is comparable with some element of $C_0$, then $\\exists j$ s.t. $a \\in C_j$\". I can't seem to prove this. P.S: x is comparable with y iff $x R y \\lor y R x$. "}
+ {"_id": "B.363", "text": "Formula: $P[X^* \\geq x | \\mathcal{F}_0]= 1 \\wedge X_0 / x$\nContext: Non-negative martingale $X_n \\rightarrow 0$ a.s. prove that $P[X^* \\geq x | \\mathcal{F}_0]= 1 \\wedge X_0 / x$ I need to prove the following statement. Let $X$ be a non negative martingale such that $X_n\\rightarrow 0$ a.s. when $n\\rightarrow \\infty$. Define $X^*=supX_n$. Prove that for all $x>0$ $$P[X^* \\geq x | \\mathcal{F}_0]= 1 \\wedge X_0 / x$$ I think I've got the easy case if $x\\leq X_0$ Then necessarily $x\\leq X^*$ for the sup property. Then it follows that for $1\\leq X_0 /x$ we have that $P[X^* \\geq x | \\mathcal{F}_0]= 1$. But I can't figure out the other case. "}
+ {"_id": "B.364", "text": "Formula: $d(x,A)\\leqslant d(x,y)+d(y,A)$\nContext: Inequality in metric space For a point $x$ and a non-empty subset $A$ of a metric space $(X, d)$, define $\\begin{align}\\inf\\left\\{ d(x,a):a\\in A\\right\\}\\end{align}$ Prove that if $y$ is another point in $X$ then $$d(x,A)\\leqslant d(x,y)+d(y,A)$$ "}
+ {"_id": "B.365", "text": "Formula: $\\alpha^+$\nContext: Are Infinite ordinals and their successor equinumerous? Ordinals in set theory are well-ordered by $\\in$ or equivalently $\\subset$. If we define all ordinals greater or equal to $\\omega$ as infinite ordinals. Is it true that every infinite ordinal is equinumerous to its successors. Basically my question is the proof or refutation of the following statement: Given infinite ordinal $\\alpha$. Does there exist an injection from $\\alpha^+$ to $\\alpha$. "}
+ {"_id": "B.366", "text": "Formula: $S=\\{ e_1,e_2,e_3,....,e_n\\}$\nContext: Show that if a normed space $X $ has a linearly independent subset of $n$ elements, so does the dual space $X'$ Show that if a normed space $X $ has a linearly independent subset of $n$ elements, so does the dual space $X'$ My attempt : $\\text{Given that a normed space $X$ has a linearly indepenedent susbset of $n-$ element}\\tag1$ let the subset be $S=\\{ e_1,e_2,e_3,....,e_n\\}$ Define $e_i \\in X$ by $f_j(e_i)= \\delta_{ij} = \\begin{cases} 1 & i=j \\\\0 , & i \\neq j \\end{cases}$ where $1\\le i\\le n$ and $1\\le j\\le n$ From $(1)$ we have $c_1e_1+...+c_ne_n=0\\implies c_1f(e_1)+...+c_nf(e_n)=0$ After that im not able to proceed further "}
+ {"_id": "B.367", "text": "Formula: $d(x,M)=\\frac{|\\langle f,x \\rangle|}{||f||}$\nContext: Prove that $d(x,M)=\\frac{|\\langle f,x \\rangle|}{||f||}$ I want to show that for $E$ a normed space, $f\\in E^*$ and $M=\\{x\\in E\\,:\\, f(x)=0\\}$: Write $M^\\perp$ Show that $d(x,M)=\\frac{|\\langle f,x\\rangle|}{||f||}$. This is my attempt: For the second part: We have that $f\\in E^*$ and for $x\\in E$ and $m\\in M$ $$\\ |\\langle f,x-m|\\rangle \\leq ||f|| ||x-m|| \\Rightarrow \\frac{|\\langle f,x\\rangle|}{||f||} \\leq ||x-m||. $$ Therefore, $$ \\frac{|\\langle f,x\\rangle|}{||f||} \\leq \\inf_{m\\in M}||x-m|| =d(x,M). $$ The second inequalyty is that I can't prove, I think that any corollary of Hanh-Banach could help me to prove that $$d(x,M)\\leq \\frac{|\\langle f,x\\rangle|}{||f||} $$ Does anyone have any idea and could check my proof? Update I found the same question in this link Orthogonality Relations Exercise, Brezis' Book Functional Analysis "}
+ {"_id": "B.368", "text": "Formula: $\\sum_{n\\geq1}\\frac{1}{n^2}=\\sum_{n\\leq x}\\frac1{n^2}+\\mathcal O(1/x).$\nContext: Basel Problem approximation error bounded by $\\mathcal O(1/x)$? In this answer it is stated that $$ \\sum_{n\\geq1}\\frac{1}{n^2}=\\sum_{n\\leq x}\\frac1{n^2}+\\mathcal O(1/x). $$ Is this statement true as $x\\to\\infty$? What I've done: If $x$ is fixed, then I think the answer is almost trivial, because we may set $C=\\pi^2x/6$, so $$ \\sum_{n=x}^\\infty\\frac1{n^2}\\leq\\sum_{n=1}^\\infty\\frac1{n^2}=\\frac{\\pi^2}{6}=\\frac{C}{x}, $$ therefore $$ \\sum_{n\\geq1}\\frac1{n^2}=\\sum_{n\\leq x}\\frac{1}{n^2}+\\sum_{n=x}^\\infty\\frac{1}{n^2}\\leq\\sum_{n\\leq x}\\frac{1}{n^2}+C/x=\\sum_{n\\leq x}\\frac{1}{n^2}+\\mathcal O(1/x). $$ But is there a constant independent of $x$ that makes this true? "}
+ {"_id": "B.369", "text": "Formula: $J_n=\\int_{-\\pi}^\\pi \\frac{\\sin{(nx)}}{(1+2^n) \\sin{x}}\\,\\mathrm{d}x\\:?$\nContext: Recurrent integral How to calculate integral $$J_n=\\int_{-\\pi}^\\pi \\frac{\\sin{(nx)}}{(1+2^n) \\sin{x}}\\,\\mathrm{d}x\\:?$$ I tried partial integration but did not succeed in finding a recurrent relation? Also, tried Moivre formula for $I_n+iJ_n$, where $I_n=\\int_{-\\pi}^\\pi \\frac{\\cos{(nx)}}{(1+2^n) \\sin{x}} dx$, but also without success. Any help is welcome. Thanks in advance. "}
+ {"_id": "B.370", "text": "Formula: $s(x) = \\sum\\limits_{j=1}^n c_j\\chi_{A_j}(x)$\nContext: What if we take step functions instead of simple functions in the Lebesgue integral When we define the Lebesgue integral, we first define it for simple functions $s(x) = \\sum\\limits_{j=1}^n c_j\\chi_{A_j}(x)$ (where $A_j$ are measurable) as $\\int sd\\mu = \\sum\\limits_{i=j}^n c_j \\mu(A_j)$ and then for $f\\ge 0$ as $\\int fd\\mu = \\sup\\{\\int sd\\mu$ : s simple and $0\\le s\\le f\\}$. But I was wondering what could go wrong if instead of taking simple functions in the supremum, we would take step functions, i.e. $s(x)=\\sum\\limits_{j=1}^nc_i\\chi_{I_j}(x)$ where $I_j$ are intervals (any type, like $(a,b), (a,b], [a,b), [a,b])$). "}
+ {"_id": "B.376", "text": "Formula: $\\lim\\limits_{n\\to\\infty}\\dfrac{\\sqrt1+\\sqrt2+\\sqrt3+\\ldots+\\sqrt n}{n\\sqrt n}$\nContext: Evaluating the limit of a sqrt function using Riemann Sums $\\lim\\limits_{n\\to\\infty}\\dfrac{\\sqrt1+\\sqrt2+\\sqrt3+\\ldots+\\sqrt n}{n\\sqrt n}$ I am having trouble doing this problem. I have attempted to take the Riemann Sum but cannot get past the square root. I also tried to upper-bound and lower-bound it, but I got stuck doing this. "}
+ {"_id": "B.377", "text": "Formula: $\\mathbb{R}^{n+1}-\\mathbb{R}^n$\nContext: What is $\\mathbb{R}^{n+1}-\\mathbb{R}^n$? In C.H. Edwards's Advanced Calculus of Several Variables he defines the ordinate set $\\mathcal{O}_f$ of a function $f:\\mathbb{R}^n\\to\\mathbb{R}$ as the set of points between $\\mathbb{R}^n$ and the graph of $f,$ including the points of evaluation, $\\mathbf{x}\\in\\mathbb{R}^n$ and the points in the graph $\\left\\{\\mathbf{x},f\\left(\\mathbf{x}\\right)\\right\\}\\in\\mathbb{R}^{n+1}$. Later on he defines a set $\\hat{\\mathcal{G}}=\\partial\\mathcal{O}_f-\\mathbb{R}^n,$ where $\\partial\\mathcal{O}_f$ is the boundary of $\\mathcal{O}_f.$ The intent seems clear. First $$\\mathbb{R}^{n+1}-\\mathbb{R}^n=\\mathbb{R}^n\\times\\left(\\mathbb{R}-\\left\\{0\\right\\}\\right)$$ where $\\times$ means Cartesian product. Then $$\\hat{\\mathcal{G}}=\\left(\\mathbb{R}^{n+1}-\\mathbb{R}^n\\right)\\cap\\partial\\mathcal{O}_f.$$ But long ago I learned that $\\mathbb{R}^n$ is the set of all real number n-tuples, and $\\mathbb{R}^{n+1}$ is the set of all (n+1)-tuples, so elements of $\\mathbb{R}^{n}$ are not elements of $\\mathbb{R}^{n+1}$ and $\\mathbb{R}^{n}$ is not a subset of $\\mathbb{R}^{n+1}.$ So am I correct in concluding that $\\mathbb{R}^{n+1}-\\mathbb{R}^n$ is not really the relative complement of the two sets? "}
+ {"_id": "B.380", "text": "Formula: $\\int_0^\\pi f(t) \\sin(t)dt =\\int_0^\\pi f(t) \\cos(t)dt = 0$\nContext: If $\\int_0^\\pi f(t) \\sin(t)dt =\\int_0^\\pi f(t) \\cos(t)dt = 0$, then $f(x)=0$ admits two solutions Let $f\\colon [0,\\pi]\\to\\mathbb{R}$ be a continuous function. If $\\int^{\\pi}_{0}f (t) \\sin(t)dt =\\int^{\\pi}_{0} f (t) \\cos(t)dt = 0$, then $f(x)=0$ admits two solutions in $[0,\\pi]$ I try to show if $f(x)>0$ and then get the contradiction but I failed to prove that, so maybe can someone help me with that? thanks in advance. "}
+ {"_id": "B.381", "text": "Formula: $A_1 \\times ... \\times A_n$\nContext: $A_1 \\times ... \\times A_n$ is countable if $A_1, ..., A_n$ are countable Suppose that $A_1, ..., A_n$ are countable sets. Show that the cartesian product $A := A_1 \\times ... \\times A_n$ is countable. My attempt: Sets are said to be countable if they are finite or if they have the same cardinality as some subset of $\\mathbb{N}$ (i.e. we can find some bijection $f: A \\rightarrow S$ or $f: S \\rightarrow A$ where $S \\subset \\mathbb{N}$). Assume that $A_1, ..., A_n$ are countable sets. Then, there exists bijections $fi: \\mathbb{N} \\rightarrow A_i$ for $i = 1, ..., n$. Define $g: \\mathbb{N} \\rightarrow A$ as follows My issue arises here in finding such a bijective function without it being too complicated. How would I go about finding one? I am also open to any suggestions. Any assistance is welcomed. "}
+ {"_id": "B.384", "text": "Formula: $f_3(n) = \\binom n2$\nContext: What does this bracket notation mean? I am currently taking MIT6.006 and I came across this problem on the problem set. Despite the fact I have learned Discrete Mathematics before, I have never seen such notation before, and I would like to know what it means and how it works, Thank you: $$ f_3(n) = \\binom n2$$ (Transcribed from screenshot) "}
+ {"_id": "B.386", "text": "Formula: $\\sum _{n=-\\infty }^{\\infty } e^{-n^2 \\pi x}=\\frac{1}{\\sqrt{x}}\\sum _{n=-\\infty }^{\\infty } e^{-\\frac{n^2 \\pi }{x}}$\nContext: Prove that $\\sum _{n=-\\infty }^{\\infty } e^{-n^2 \\pi x}=\\frac{1}{\\sqrt{x}}\\sum _{n=-\\infty }^{\\infty } e^{-\\frac{n^2 \\pi }{x}}$ In Wikipedia's proof of Riemann's functional equation for the zeta function (here, and click \"Show Proof\"), I find the assertion that $$\\sum _{n=-\\infty }^{\\infty } e^{-n^2 \\pi x} = \\frac{1}{\\sqrt{x}}\\sum _{n=-\\infty }^{\\infty } e^{-\\frac{n^2 \\pi }{x}}$$ I can't work out how this works. Is it to do with Jacobi theta functions? Mathematica (which seems to use Jacobi's original notation) simplifies the expression on the left hand side above to the Jacobi elliptic theta function (Wikipedia here, plus the 'Auxiliary Functions' section that follows) $$ \\begin{aligned} \\sum _{n=-\\infty }^{\\infty } e^{-n^2 \\pi x} &= \\vartheta_{3}(0,e^{-\\pi x}) \\\\&= \\vartheta_{00}(0,e^{-\\pi x}) \\\\&= \\vartheta(0,e^{-\\pi x}) \\end{aligned} $$ Wikipedia defines $$\\vartheta(z;\\tau) := \\sum _{n=-\\infty }^{\\infty } e^{\\pi i n^2 \\tau+2 \\pi i n z}$$ But setting $z = 0$ and $\\tau = e^{-\\pi x}$ then gives $$\\vartheta(0,e^{-\\pi x}) = \\sum _{n=-\\infty }^{\\infty } e^{\\pi i n^2 e^{-\\pi x}}$$ which is clearly not equivalent to the original expression. I suspect that this may have something to do with nomes, which Wikipedia mentions but I cannot get my head around. So, my two questions are: How do I prove the original equivalence? What am I doing wrong in relation to the Jacobi theta function? "}
+ {"_id": "B.388", "text": "Formula: $\\left[\\begin{array}{ccc|c}1&10&-6&1\\\\1&k&-1&2\\\\2&-1&k&5\\end{array}\\right]$\nContext: Linear algebra find $k$ Given the linear system: $$\\begin{cases} x_1 + kx_2 − x_3 = 2\\\\ 2x_1 − x_2 + kx_3 = 5 \\\\ x_1 +10x_2 −6x_3 =1 \\end{cases}$$ for which values of $k$ has the system (2): (a) No solutions (b) A unique solution. (c) Infinitely many solutions. I've been trying echelon form where i switched $R_1$ with $R_3$ and then i switched $R_2$ with $R_3$ So I have $\\left[\\begin{array}{ccc|c}1&10&-6&1\\\\1&k&-1&2\\\\2&-1&k&5\\end{array}\\right]$ but then I'm stuck and don't know how to get any further. "}
+ {"_id": "B.389", "text": "Formula: $\\frac{(1+r)^{N+1}-(1+r)-rN}{r^2(1+r)^N}$\nContext: Convergence of a Special Series as N is large I'm trying to find a general formula for the series and x is a constant: $$\\sum\\limits_{i=1}^N \\frac{i}{(1+r)^i}$$ I have deduced the general formula for the sum. $$\\frac{(1+r)^{N+1}-(1+r)-rN}{r^2(1+r)^N}$$ Will this sum converge to some value when N is very large? Could someone explain how to deal with it? "}
+ {"_id": "B.390", "text": "Formula: $t_n:=\\frac {x_1+x_2+...+x_n}{n}$\nContext: Prove that if $x_n$ converges to $\\omega$, $t_n$ converges to $\\omega$ too By the sequence $(x_n)_{n\\in\\Bbb{N}}$, define a new sequence $(t_n)_{n\\in\\Bbb{N}}$ such that $t_n:=\\frac {x_1+x_2+...+x_n}{n}$. If $\\lim_{n\\rightarrow\\infty}t_n=\\omega$, how can I show that $\\lim_{t\\rightarrow\\infty}x_n=\\omega$? Original post had ``If $\\lim_{n\\rightarrow\\infty}x_n=\\omega$, ...\" "}
+ {"_id": "B.391", "text": "Formula: $\\mathbb{E} |X+Y|^r \\leq c_r (\\mathbb{E} |X|^r + \\mathbb{E} |Y|^r),$\nContext: $C_r $ inequality Show that for each $r> 0$ $$\\mathbb{E} |X+Y|^r \\leq c_r (\\mathbb{E} |X|^r + \\mathbb{E} |Y|^r),$$ where $c_r$ is a constant given by $\\begin{equation} c_r = \\left\\{ \\begin{array}{ll} 1 & \\mathrm{if\\ } 0 < r \\le 1 \\\\ 2^{r-1} & \\mathrm{if\\ } 1 < r \\end{array} \\right. \\end{equation}$ I've tried to use other inequalities for the proof of this one but I still get stuck for the case of $2^{r-1}$. "}
+ {"_id": "B.393", "text": "Formula: $|f(z^{2})| \\leq 2|f(z)|$\nContext: Prove an entire function is a constant under an inequality f is an entire function, suppose $|f(z^{2})| \\leq 2|f(z)|$ for all C, then f is a constant. I 'm trying to use Liouville's theorem, but it seems that it isn't helpful. "}
+ {"_id": "B.394", "text": "Formula: $\\forall \\epsilon > 0, \\exists \\delta > 0, |x-a|<\\delta \\implies |f(x) - f(a)| < \\epsilon$\nContext: Trying to find the $\\delta$ in epsilon-delta continuity proof. I am trying to prove the following function is continuous for all irrationals: $f(x) = \\begin{cases} 0, & \\text{if $x$ is irrational} \\\\ 1/n, & \\text{if $x = m/n$} \\end{cases}$ The question assumes $m/n$ is in lowest terms. I have shown that it is discontinuous for all rationals, and now I believe I have to either use the $\\epsilon-\\delta$ definition of continuity or sequential continuity to show the function is continuous for when $x$ is irrational. I split my attempt into two cases: Our value of $x$ is irrational, then I want: $$\\forall \\epsilon > 0, \\exists \\delta > 0, |x-a| $a$ is irrational here. Using that $x$ is irrational I get that $f(x) = 0$ as does $f(a)$ so no matter the $\\delta$ we have our condition for continuity satisfied as $0 < \\epsilon$ for all $\\delta$ Our value of $x$ is rational i.e. $x = \\frac{m}{n}$ subbing in we want: $$\\forall \\epsilon > 0, \\exists \\delta > 0, |\\frac{m}{n}-a| I am struggling to find the $\\delta$ necessary. I am able to bound $|\\frac{m}{n}-a|$ by $1+2|a|$ if I say that $\\delta \\le 1$. However I do not know how to find a $\\delta$ to yield the second inequality. Should I change my approach to sequential continuity? "}
+ {"_id": "B.395", "text": "Formula: $f(a+b) = f(a) + f(b)$\nContext: Why we have to check both additivity and homogenity for linearity? $f: V \\to W $ over $K$ with $a,b \\in V$ and $k \\in K$. Additivity: $f(a+b) = f(a) + f(b)$ Homogenity: $k*f(b) =f(k * b)$ I have a visual understanding that a function is linear if the structure is kept while projecting it with $f$ but why it is not enough to check if the function is additive? I would be glad to have some easy examples and an intuition why we would have to check both conditions. "}
+ {"_id": "B.396", "text": "Formula: $f_X(x)=\\frac{2x}{\\theta^2} \\quad$\nContext: What is the MLE $\\theta^*$ of $\\theta$? I have that $x_1, x_2,...,x_n$ are from a rv $X$ that has the density function $f_X(x)=\\frac{2x}{\\theta^2} \\quad$ for $0 \\le x \\le \\theta \\quad$ and $f_X(x)=0 \\quad$ otherwise. Ihave to determine the MLE of $\\theta^*$ of $\\theta$ Here is how I have done it: $L(\\theta)= \\frac{2}{\\theta^{2n}}\\prod_{i=1}^nx_i$ $\\frac{\\partial L(\\theta)}{\\partial \\theta} =...=\\frac{-4n}{\\theta^{2n+1}}\\prod_{i=1}^nx_i + \\frac{2}{\\theta^{2n}}\\frac{\\partial(\\prod_{i=1}^nx_i)}{\\partial \\theta}$ Is this correct? and also how do I calculate the CDF $F_{\\theta^*}$, the pdf $f_{\\theta^*}$ and the expectation $E[\\theta^*]$ of the maximum likelihood estimator $\\theta^*$? "}
+ {"_id": "B.397", "text": "Formula: $1 + 2 + 3 + 4 + \\cdots = -\\frac{1}{12}$\nContext: If the limit does not converge, can the sum? Or, how could the sum converge? Alright, I thought I had seen everything but last night I saw this identity (`twas attributed to Ramanujan), $$ 1 + 2 + 3 + 4 + \\cdots = -\\frac{1}{12} $$ Then I saw a proof that was seemingly correct. So alright, I believe it, hey it is no crazier than having infinities of different sizes and I finally have some closure with that fact. But then, I recalled the benchmark induction proof everyone learns, $$ \\sum_{i=1}^{n} i = \\frac{n(n+1)}{2} $$ Then kicks in the remains of all those calculus courses I once took, making me thing that, hey wait! We have this, $$ \\lim_{n\\rightarrow\\infty} \\frac{n(n+1)}{2} = \\infty $$ I think in this case we said the limit does not exist or the function diverges (correct me if I am wrong!) But... but... according to the identity above, $$ \\sum_{i=1}^{n=\\infty} i = -\\frac{1}{12} $$ But then shouldn't, $$ \\lim_{n\\rightarrow\\infty} \\frac{n(n+1)}{2} \\stackrel{?}{=} -\\frac{1}{12} $$ So what I am seeing here is that even if the limit does not converge, the sum does. Also, a long time ago I remember being told that the sum of two positive integers is always positive. Furthermore, addition is suppose to be closed under integers right? Here we not only have a negative number as a result of the sum of positive integers but a negative non-integer at that. "}
+ {"_id": "B.398", "text": "Formula: $\\lim_{x \\rightarrow c} f'(x) = L = \\lim_{x \\rightarrow c^+} f'(x) =\\lim_{x \\rightarrow c^-} f'(x)$\nContext: A question on differentiability at a point Is a continuous function differentiable at $x=c$ if the limit of its derivative has a value at that point? That is, if $$\\lim_{x \\rightarrow c} f'(x) = L = \\lim_{x \\rightarrow c^+} f'(x) =\\lim_{x \\rightarrow c^-} f'(x)$$ Intuitively, the slopes of the tangents approach the same value and since the function is continuous a jump-discontinuity isn't possible so it appears the slope at the point should be $L$ too. However, I cannot seem to locate such a theorem, so I suspect my intuition is wrong. Is it? "}
+ {"_id": "B.399", "text": "Formula: $M = {\\{\\langle x,y \\rangle \\in \\mathbb{R^2} | a \\leq x \\leq b , c \\leq y \\leq d}\\}$\nContext: Disjoint axis-aligned rectangles in the plane Let $A$ be some set of axis-aligned rectangles in the plane, each pair of which has empty intersection. Prove that $A$ is a countable set. (An axis-aligned rectangle is a set of the form $$M = {\\{\\langle x,y \\rangle \\in \\mathbb{R^2} | a \\leq x \\leq b , c \\leq y \\leq d}\\}$$ for $a,b,c,d$ such that $a < b$ and $ c < d$.) Attempt: I tried using the density of the $\\mathbb{Q}$ in $(\\mathbb{R},\\leq)$, but without any success. "}
+ {"_id": "B.400", "text": "Formula: $\\left(\\frac{1}{2^{n-2}}\\right)^2$\nContext: What is the probability hat two particular players verse in Wimbledon if it begins with $16$ players? Sixteen people play in the quarter-finals at Wimbledon. The winner of the quarter-finals play again in the semi-final to decide who enters the finals. What is the probability that two particular people will play each other if the tournament begins with 16 players? So I have so far (case 1 + case 2) = verse player in quarter OR verse player in semi = $\\frac{1}{15} + \\frac{14}{15}...$ I'm not sure what else to include in the second case Also the third part of the question asks What is the probability when $2^n$ players begin? The worked solutions show $\\frac{1}{2^n-1}+\\frac{2^n-2}{2^n-1}\\left(\\frac{1}{2^{n-2}}\\right)^2\\:=\\:\\frac{1}{2^{n-1}}$ which I cannot get. Even symbolab doesn't show the same simplification. As with the previous question, I do not understand the $\\left(\\frac{1}{2^{n-2}}\\right)^2$ "}