Show that a positive operator on a complex Hilbert space is self-adjoint Let $(\mathcal{H}, (\cdot, \cdot))$ be a complex Hilbert space, and $A : \mathcal{H} \to \mathcal{H}$ a positive, bounded operator ($A$ being positive means $(Ax,x) \ge 0$ for all $x \in \mathcal{H}$). Prove that $A$ is self-adjoint. That is, prove that $(Ax,y) = (x, Ay)$ for all $x,y \in \mathcal{H}$. Here's what I have so far. Because $A$ is positive we have $\mathbb{R} \ni (Ax,x) = \overline{(x,Ax)} = (x,Ax)$ for all $x \in \mathcal{H}$. Next, I have seen some hints that tell me to apply the polarization identity: $$(x,y) = \frac{1}{4}((\lVert x+y \rVert^2 - \lVert x-y \rVert^2) - i(\lVert x + iy \rVert^2 - \lVert x - iy \rVert^2)),$$ where of course the norm is defined by $\lVert \cdot \rVert^2 = (\cdot, \cdot)$. So my guess is that I need to start with the expressions: $$(Ax,y) = \frac{1}{4}((\lVert Ax+y \rVert^2 - \lVert Ax-y \rVert^2) - i(\lVert Ax + iy \rVert^2 - \lVert Ax - iy \rVert^2)),$$ $$(x,Ay) = \frac{1}{4}((\lVert x+Ay \rVert^2 - \lVert x-Ay \rVert^2) - i(\lVert x + iAy \rVert^2 - \lVert x - iAy \rVert^2)),$$ and somehow show they are equal. But here is where I have gotten stuck. Hints or solutions are greatly appreciated.
You should apply the polarization identity in the form $$4(Ax,y) = (A(x+y),x+y) - (A(x-y),x-y) -i(A(x+iy),x+iy) + i(A(x-iy),x-iy).$$ Since you already know $(Az,z) = (z,Az)$ for all $z \in \mathcal{H}$, it is not difficult to deduce $A^\ast = A$ from that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/561636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Uniform convergence of a family of functions on $(0,1)$ Let the family of functions be $$f_n(x) = \dfrac{x}{1+nx}.$$ Is the sequence $f_n$ uniformly convergent in the interval $(0,1)$?
$\frac{x}{1 + nx} = \frac{1}{\frac{1}{x} + n} \leq \frac{1}{n}$, which doesn't depend on $x$; hence your sequence converges uniformly to $0$.
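A quick numerical sanity check of this bound (a Python sketch, not part of the original answer): the supremum of $f_n$ on a grid inside $(0,1)$ stays below $1/n$, the $x$-independent bound.

```python
# Sanity check: sup of f_n(x) = x / (1 + n x) on (0, 1) is at most 1/n,
# a bound independent of x, so f_n -> 0 uniformly on (0, 1).
def f(n, x):
    return x / (1 + n * x)

xs = [i / 1000 for i in range(1, 1000)]  # sample grid inside (0, 1)

for n in (1, 10, 100, 1000):
    assert max(f(n, x) for x in xs) <= 1 / n
```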
{ "language": "en", "url": "https://math.stackexchange.com/questions/561737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Mistake in simplification of large polynomial inequality? We are to solve for $p$, and the inequality to simplify is $$10p^3(1-p)^2 + 5p^4(1-p) + p^5 - 3p^2(1-p) - p^3 > 0$$ On the next line of the textbook, the author simplifies this expression to $$3(p-1)^2(2p-1) > 0 \implies p > \frac{1}{2}$$ Since no work was shown, I attempted to reach the same result but got $$3(p-1)^2(2p-1) \cdot \mathbb{p^2} > 0$$ Have I made a mistake in my work, or are the answers possibly equivalent? Might there be any reason to leave out the $p^2$ factor from the inequality I reached?
The original expression expands to $6p^5-15p^4+12p^3-3p^2$, which factorizes as $3p^2(p-1)^2(2p-1)$. So your extra factor of $p^2$ is correct, and the textbook's factorization is the same up to that factor. Since $p^2>0$ whenever $p\neq 0$, dropping it does not change the solution set: for $p \neq 0, 1$ the inequality holds exactly when $p>\frac{1}{2}$.
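A numerical check (a Python sketch I am adding): the original quintic agrees with $3p^2(p-1)^2(2p-1)$, the factorization the asker reached, at sample points.

```python
# Compare the original quintic with the factored form 3 p^2 (p-1)^2 (2p-1)
# at a spread of sample points; agreement at > degree+1 points means equality.
def original(p):
    return (10 * p**3 * (1 - p)**2 + 5 * p**4 * (1 - p) + p**5
            - 3 * p**2 * (1 - p) - p**3)

def factored(p):
    return 3 * p**2 * (p - 1)**2 * (2 * p - 1)

for p in [-2, -0.5, 0, 0.3, 0.5, 0.75, 1, 2, 10]:
    assert abs(original(p) - factored(p)) < 1e-9
```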
{ "language": "en", "url": "https://math.stackexchange.com/questions/561813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Mean and Var of a gamma distribution Let X have a Gamma distribution with a known scale parameter 1, but an unknown shape parameter, that itself is random, and has the standard exponential distribution. How do I compute the mean and the variance of X? Thanks!
If we call the unknown parameter $\theta$, then what you are seeking is $$ E(X)=\int_0^\infty x f(x)dx=\int_0^\infty x \int_0^\infty f(x, \theta)d\theta dx=\int_0^\infty x\int_0^\infty f(x|\theta)f(\theta)d\theta dx $$ where then it is given that $$ \begin{align*} x|\theta&\sim Gamma(1, \theta)\\ \theta&\sim Exp(1) \end{align*} $$ meaning that you can multiply them together and integrate out $\theta$ and then you have the marginal distribution of $x$. As pointed out in a comment to the original post, you can also use iterated expectations. $$ \begin{align*} E(X)&=E_\theta(E_{X|\theta}(X|\theta))\\ V(X)&=E_\theta (V_{X|\theta}(X|\theta))+V_\theta (E_{X|\theta}(X|\theta)) \end{align*} $$ To illustrate what this means, the expectation is thus $$ E(X)=E_\theta(E_{X|\theta}(X|\theta))=E_\theta(\theta)=1 $$ since the expectation of a $Gamma(1, \theta)$ distribution is $1\times\theta$ (i.e. $E_{X|\theta}(X|\theta)=\theta$). Since $\theta$ is Exp(1), its expectation is in turn 1. For the variance, you do the same thing.
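To back up the iterated-expectation computation, here is a small Monte Carlo sanity check (a numpy sketch I am adding; the target value $V(X)=2$ follows from the variance decomposition above, since $E(X\mid\theta)=V(X\mid\theta)=\theta$ and $E(\theta)=V(\theta)=1$):

```python
import numpy as np

# Monte Carlo check: theta ~ Exp(1), X | theta ~ Gamma(shape=theta, scale=1).
# Iterated expectations give E(X) = E(theta) = 1 and
# V(X) = E(V(X|theta)) + V(E(X|theta)) = E(theta) + V(theta) = 2.
rng = np.random.default_rng(0)
theta = rng.exponential(scale=1.0, size=1_000_000)
x = rng.gamma(shape=theta, scale=1.0)

assert abs(x.mean() - 1.0) < 0.01
assert abs(x.var() - 2.0) < 0.05
```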
{ "language": "en", "url": "https://math.stackexchange.com/questions/561876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show $\exp(A)=\cos(\sqrt{\det(A)})I+\frac{\sin(\sqrt{\det(A)})}{\sqrt{\det(A)}}A,A\in M(2,\mathbb{C})$ Show $$\exp(A)=\cos(\sqrt{\det(A)})I+\frac{\sin(\sqrt{\det(A)})}{\sqrt{\det(A)}}A$$ for $A\in M(2,\mathbb{C})$. In addition, $\operatorname{trace}(A)=0$. Can anyone give me a hint how this can connect with cosine and sine? Thanks!
The characteristic polynomial of a $2\times 2$ matrix $A$ is $$X^2-\mathrm{Tr}(A)X+\mathrm{det}(A),$$ so that a trace $0$ matrix satisfies the equation $$A^2=-\mathrm{det}(A)I_2.$$ Let $\lambda\in\Bbb C$ be a square root of $\det(A)$. It follows from the equation above that for every integer $p$ $$A^{2p}=(-1)^p\lambda^{2p}I_2\qquad\text{and}\qquad A^{2p+1}=(-1)^p\lambda^{2p}A$$ From the definition of the exponential function, $$\begin{align}\exp(A)&=\sum_{n\in\Bbb N}\frac{A^n}{n!}\\ &=\sum_{p=0}^{\infty}\frac{A^{2p}}{(2p)!}+\sum_{p=0}^{\infty}\frac{A^{2p+1}}{(2p+1)!}\\ &=\sum_{p=0}^{\infty}\frac{(-1)^p\lambda^{2p}}{(2p)!}I_2+\sum_{p=0}^{\infty}\frac{(-1)^p\lambda^{2p}}{(2p+1)!}A. \end{align}$$ Thus, if $\lambda\neq 0$, $$\exp(A)=\cos(\lambda)I_2+\frac{1}{\lambda}\sin(\lambda)A.$$ If $\lambda=0$, then $A^2=0$ and the series collapses to $\exp(A)=I_2+A$, which is the limit of the right-hand side as $\lambda\to 0$ (with $\frac{\sin\lambda}{\lambda}$ understood as $1$).
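A numerical spot check of the identity (a sketch; the series-based `expm_series` helper and the sample matrix are mine, not from the answer):

```python
import numpy as np

def expm_series(A, terms=60):
    # Matrix exponential via its defining power series (fine for small matrices).
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out += term
    return out

A = np.array([[1.0, 2.0], [3.0, -1.0]])    # trace zero, det(A) = -7
lam = np.sqrt(complex(np.linalg.det(A)))   # a square root of det(A), here imaginary
formula = np.cos(lam) * np.eye(2) + (np.sin(lam) / lam) * A

assert np.allclose(expm_series(A), formula)
```

Since $\det(A)=-7<0$, $\lambda$ is purely imaginary and $\cos(\lambda)$, $\sin(\lambda)/\lambda$ become $\cosh(\sqrt7)$, $\sinh(\sqrt7)/\sqrt7$; the formula still holds.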
{ "language": "en", "url": "https://math.stackexchange.com/questions/561971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Prove or disprove: if $A$ is nonzero $2 \times 2$ matrix such that $A^2+A=0$, then A is invertible if $A$ is nonzero $2 \times 2$ matrix such that $A^2+A=0$, then A is invertible I really can't figure it out. I know it's true but don't know how to prove it
This is not true. For example $$ A=\begin{pmatrix} -1&0\\ 0&0 \end{pmatrix} $$
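A quick check of this counterexample (a numpy sketch I am adding):

```python
import numpy as np

# The nonzero 2x2 matrix A = diag(-1, 0) satisfies A^2 + A = 0,
# yet is clearly not invertible (its second row is zero).
A = np.array([[-1.0, 0.0],
              [0.0, 0.0]])

assert np.all(A @ A + A == 0)
assert np.linalg.matrix_rank(A) == 1   # rank < 2, so not invertible
```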
{ "language": "en", "url": "https://math.stackexchange.com/questions/562007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How to prove this inequality $\sum\limits_{k=1}^{n}\frac{1}{k!}-\frac{3}{2n}<\left(1+\frac{1}{n}\right)^n<\sum\limits_{k=1}^{n}\frac{1}{k!}$ Show that $$\sum_{k=1}^{n}\dfrac{1}{k!}-\dfrac{3}{2n}<\left(1+\dfrac{1}{n}\right)^n<\sum_{k=0}^{n}\dfrac{1}{k!}\qquad(n\ge 3)$$ My try: I know $$\sum_{k=0}^{\infty}\dfrac{1}{k!}=e$$ and I can prove the right-hand inequality: $$\left(1+\dfrac{1}{n}\right)^n=\sum_{k=0}^{n}\binom{n}{k}\dfrac{1}{n^k}$$ and note $$\binom{n}{k}\dfrac{1}{n^k}=\dfrac{1}{k!}\dfrac{(n-k+1)(n-k+2)\cdots(n-1)n}{n^k}\le\dfrac{1}{k!}$$ so $$\left(1+\dfrac{1}{n}\right)^n<\sum_{k=0}^{n}\dfrac{1}{k!}$$ But how can the left-hand inequality be proved? Thank you
I think you mean this inequality$$\sum_{k=0}^{n}\dfrac{1}{k!}-\dfrac{3}{2n}<\left(1+\dfrac{1}{n}\right)^n.$$In fact, we can prove a sharper one $$\left(1+\frac{1}{n}\right)^n+\frac{3}{2n}>e.$$Let $f(x)=\left(1+\dfrac{1}{x}\right)^x+\dfrac{3}{2x}$, then$$f'(x)=\left(1+\dfrac{1}{x}\right)^x\left(\ln\left(1+\frac{1}{x}\right)-\frac{1}{1+x}\right)-\frac{3}{2x^2}.$$It suffices to show that $f'(x)<0$ for $x>1$. Actually, notice that $\left(1+\dfrac{1}{x}\right)^x<3$ and $$\ln\left(1+\frac{1}{x}\right)-\frac{1}{1+x}\leqslant \frac{1}{x}-\frac{1}{x^2}+\frac{1}{x^3}-\frac{1}{1+x}=\frac{1}{x^2(1+x)},$$thus for $x>1$, we have$$\left(1+\dfrac{1}{x}\right)^x\left(\ln\left(1+\frac{1}{x}\right)-\frac{1}{1+x}\right)-\frac{3}{2x^2}<\frac{3}{x^2(1+x)}-\frac{3}{2x^2}<0,$$therefore $f(x)$ is monotonically decreasing in $[1,+\infty)$. As $\displaystyle\lim_{x\rightarrow+\infty}f(x)=e$, we can get$$\left(1+\dfrac{1}{x}\right)^x+\dfrac{3}{2x}>e$$for all $x>1$, hence$$\left(1+\frac{1}{n}\right)^n+\frac{3}{2n}>e.$$
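A quick numerical check of both the sharper bound and the two-sided inequality from the question (a Python sketch, not part of the proof):

```python
import math

def partial_e(n):
    # sum_{k=0}^{n} 1/k!, computed incrementally to avoid huge factorials
    s, term = 0.0, 1.0
    for k in range(n + 1):
        s += term
        term /= k + 1
    return s

# The sharper bound (1 + 1/n)^n + 3/(2n) > e, for n >= 1:
for n in range(1, 500):
    assert (1 + 1 / n) ** n + 3 / (2 * n) > math.e

# The two-sided inequality from the question, for n >= 3:
for n in range(3, 500):
    s = partial_e(n)
    assert s - 3 / (2 * n) < (1 + 1 / n) ** n < s
```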
{ "language": "en", "url": "https://math.stackexchange.com/questions/562109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Largest triangle to fit in a circle will be isosceles triangle? Largest triangle to fit in a circle will be isosceles triangle? Or some other type?
Yes, what you say is true, but you can say more than that. Given a particular chord of a circle, you can maximize the area of the triangle by having the third vertex as far away as possible (area is half base times perpendicular height), which means that it will be on the perpendicular bisector of the chord where it crosses the circle, the other side of the circle's centre. So with this chord as an edge, the area will be maximised if the other two edges are equal, i.e. if the triangle is isosceles. But the same thing is true of the other two edges, and that implies ... (I will leave you to work this out)
{ "language": "en", "url": "https://math.stackexchange.com/questions/562179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Reflexive, separable containing all finite dimensional spaces almost isometrically Is there a separable, reflexive Banach space $Z$ such that for every finite dimensional space $X$ and every $a>0$, there is a $1+a$-embedding of $X$ into $Z$? I can do the question without the 'reflexive' (in which case it's true), but I'm totally stuck on how to find a reflexive space with this property. Please help? Thanks.
I thought I would mention a different answer to Norbert's since the paper containing the result I cite is not that well known and deserves to be advertised. Szankowski has shown that there exists a sequence of Banach spaces $X_m$, $m\in\mathbb{N}$, each isomorphic to $\ell_2$, with the following property: every finite dimensional Banach space is isometrically isomorphic to a contractively complemented subspace of $(\bigoplus_{m\in\mathbb{N}}X_m)_{\ell_2}$. The paper of Szankowski is An example of a universal Banach space, Israel Journal of Mathematics 11 (1972), 292-296. As an aside: When I was a postdoc in France a couple of years ago, some of the Banach space experts there thought it was an open problem whether a separable Banach space containing every finite dimensional Banach space isometrically necessarily contained a subspace isomorphic to $C([0,1])$. Szankowski's result (published almost 40 years earlier!) obviously shows that the answer to this question is negative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/562473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Which is larger? $20!$ or $2^{40}$? Someone asked me this question, and it bothers the hell out of me that I can't prove either way. I've sort of come to the conclusion that 20! must be larger, because it has 36 prime factors, some of which are significantly larger than 2, whereas $2^{40}$ has only factors of 2. Is there a way for me to formulate a proper, definitive answer from this? Thanks in advance for any tips. I'm not really a huge proof-monster.
You can also separate each one of these into 10 terms and note: $$1\times 20 > 2^4$$ $$2\times 19 > 2^4$$ $$\vdots$$ $$10 \times 11 >2^4$$ $$\Rightarrow 20! > 2^{40}$$ The idea is to break the factorial symmetrically into smaller pieces, which is not the most robust method for inequalities involving factorials, but is useful in certain obvious cases.
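Both the direct comparison and the pairing argument are easy to verify (a Python sketch, not part of the original answer):

```python
import math

# Direct comparison:
assert math.factorial(20) > 2 ** 40

# The pairing argument: 20! = prod_{k=1}^{10} k * (21 - k), and each pair
# product exceeds 2^4 = 16, so 20! > (2^4)^10 = 2^40.
assert all(k * (21 - k) > 2 ** 4 for k in range(1, 11))

prod = 1
for k in range(1, 11):
    prod *= k * (21 - k)
assert prod == math.factorial(20)
```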
{ "language": "en", "url": "https://math.stackexchange.com/questions/562538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62", "answer_count": 17, "answer_id": 12 }
Swap two integers in different ways This is a famous rudimentary problem : how to use mathematical operations (not any other temporary variable or storage) to swap two integers A and B. The most well-known way is the following: A = A + B B = A - B A = A - B What are some of the alternative set of operations to achieve this?
I'm not sure if you're asking for all solutions or not, but one of the most famous solutions is by using binary xor three times. $A=A\oplus B,B=A\oplus B,A=A\oplus B$.
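Written out as a small demonstration (a Python sketch I am adding; the inline comments track which original value each variable holds — note that in languages with aliasing, the trick breaks if A and B refer to the same storage location):

```python
def xor_swap(a, b):
    # Swap two integers using XOR, without a temporary variable.
    a = a ^ b   # a now holds a0 ^ b0
    b = a ^ b   # (a0 ^ b0) ^ b0 = a0, the original a
    a = a ^ b   # (a0 ^ b0) ^ a0 = b0, the original b
    return a, b

assert xor_swap(13, 42) == (42, 13)
```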
{ "language": "en", "url": "https://math.stackexchange.com/questions/562707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the value of sin(arcsin(4))? In this case arcsin() is the unrestricted sine inverse function. I know that it is either undefined or has the value of 4. It could be undefined because arcsin() only has a domain of $-1 \ldots 1$ and 4 is out of the domain. On the other hand, it could be that since they are inverses the intermediary result does not matter and they will cancel to give back 4. Which one is it?
Complex values aside, this expression cannot be evaluated, since $\arcsin$ can only be taken of values between $-1$ and $1$ (inclusive), so the cancellation property of the inverses cannot be applied here. The cancellation property can only be used for values that are in the respective domains of the functions. Think about $\sqrt{(-2)^2}$: naively cancelling the square root against the square would give $-2$, but the actual value is $2$.
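Over the complex numbers the story changes: with the complex $\arcsin$, $\sin(\arcsin 4)=4$ does make sense. A quick check with Python's `math` and `cmath` (a sketch, not part of the original answer):

```python
import cmath
import math

# Real arcsin rejects 4...
try:
    math.asin(4)
    raised = False
except ValueError:
    raised = True
assert raised

# ...but the complex arcsin gives a value whose sine is exactly 4.
w = cmath.asin(4)   # approximately pi/2 - 2.0634j
assert abs(cmath.sin(w) - 4) < 1e-9
```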
{ "language": "en", "url": "https://math.stackexchange.com/questions/562773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Verification of the existence of an inverse in a group In a group $(G,*)$ with neutral element $e$, does the relation $x*y=e$ imply $y*x=e$? I think it is true. Indeed, $y*x=y*(x*y)*x=(y*x)*(y*x)$, hence $(y*x)^n=y*x$ for each $n\geq 2$, which is true only if $y*x=e$. Is this correct? If yes, then to verify the existence of an inverse it suffices to verify that only one of the two products is equal to $e$.
I'd say that the step where you go from $(y * x)^n = y * x$ for each $n \geq 2$ to $y * x = e$ needs some reasoning. However, from $y * x = (y * x) * (y * x)$ you can immediately get $y * x = e$ by the following proposition. Proposition. Let $g$ be an idempotent element of some group $G$, i.e., $gg = g$. Then $g = e$. Proof. $g = ge = g(g g^{-1}) = (gg) g^{-1} = g g^{-1} = e$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/562988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Given a theorem can it always be reduced logically to the axioms? It's probably a silly question but I’ve been carrying this one since infancy so I might as well ask it already. Let $p \implies q$ be a theorem where $p$ is the hypothesis and $q$ is the conclusion. If stated in logical symbols, can it always be reduced to the axioms by logical manipulation? Does that mean that if different rules of logic are chosen (say quantum logic), one cannot prove theorems the "normal" (language and common sense) way, since intuition fails?
This question is quite natural, as the notion of "rigorous proof" largely depends on the context in which it is mentioned. The short answer is Yes. Every mathematical proof, if correct, can be formulated as a derivation starting from axioms (usually of ZFC), and using basic deduction rules. In practice, it is of course infeasible, so what we call a "proof" is a description containing enough information to convey the ideas necessary to build this rigorous proof. This is (almost) never done, but should be always possible. Therefore, what is sufficient to constitute a "proof" highly depends on the interlocutors exchanging it, because they have to be able to fill the gaps with their knowledge. For instance, a proof of most highschool problems is reduced to "trivial" for professional mathematicians. In the end, the answer is "yes", but you are wrong with your last sentence: a "human" proof is transformed into a "logical" one not by logical manipulation, but by filling in a lot of gaps, that are omitted in human formulations. If, as you mention, different rules for logic are chosen, the proofs need to be compatible with these new rules, i.e. extendable into logical proofs following these rules.
{ "language": "en", "url": "https://math.stackexchange.com/questions/563100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do I influence myself more than my neighbors? Consider relations between people defined by a weighted symmetric undirected graph $W$, where $w_{ij}$ is the weight $i$ assigns to $j$. Assume all weights are non-negative and less than $1$, i.e. $$0\leq w_{ij}<1, \forall{i,j}$$ and symmetric: $w_{ij}=w_{ji}$. We say $i$ and $j$ are friends if $w_{ij}>0$. Define the influence matrix $\Psi=[\text{I}+W]^{-1}$ (assume it is well-defined). Is it always the case that my influence on myself is greater than my influence on my friends? $$\Psi_{i,i}>\sum_{j\in N(i)}|\Psi_{i,j}|$$ where $N(i)$ is the set of friends of $i$.
Look at the inverse of $$\left[\begin{array}{ccc}1 & 0 & .9\\ 0 & 1 & .9\\ 0 & .9 & 1\end{array}\right]$$
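Computing the inverse numerically shows why this matrix is offered as a counterexample (a numpy sketch I am adding; note that this matrix is $I+W$ directly, and the corresponding $W$ is not quite symmetric, so it is best read as a pointer to the failure mode):

```python
import numpy as np

# Invert the suggested I + W and test the claimed inequality for node 1.
M = np.array([[1.0, 0.0, 0.9],
              [0.0, 1.0, 0.9],
              [0.0, 0.9, 1.0]])
psi = np.linalg.inv(M)

# Row 1: self-influence is 1, but the off-diagonal entries are roughly
# 4.26 and -4.74, so their absolute values sum to about 9.
assert psi[0, 0] < abs(psi[0, 1]) + abs(psi[0, 2])
```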
{ "language": "en", "url": "https://math.stackexchange.com/questions/563181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
consecutive convergents Problem: Let $\phi=\frac{1+\sqrt{5}}{2}$ be the golden ratio and let $a$, $b$, $c$, $d$ be positive integers so that $\frac{a}{b}>\phi>\frac{c}{d}$. It is also known that $ad-bc=1$. Prove that $a/b$ and $c/d$ are consecutive convergents of $\phi$. Numerical experimentation points towards the validity of this statement. The converse is well known (and easy to show) but I cannot seem to prove the direct statement. This is not a homework question; I came across it while investigating the geometric discrepancy of a certain lattice point set. Any help would be appreciated.
Note that $\phi=1+\frac{1}{1+\frac{1}{1+\ldots}}$ has convergents $\frac{f_{n+1}}{f_n}$, i.e. ratios of consecutive Fibonacci numbers. Note that I have used lower case for the Fibonacci numbers so as to avoid confusion with the Farey sequence $F_n$. The main idea is to appeal to the properties of Farey sequences. * *Suppose $a, b, c, d$ are positive integers with $a \leq b, c \leq d, \frac{c}{d}<\frac{a}{b}$. Then $\frac{a}{b}$ and $\frac{c}{d}$ are consecutive members of the Farey sequence $F_n$ for some $n$ if and only if $ad-bc=1$. *If $\frac{a}{b}<\frac{c}{d}$ are consecutive members of the Farey sequence $F_n$ for some $n$, then either they are consecutive members in $F_{n+1}$, or $\frac{a}{b}, \frac{a+c}{b+d}, \frac{c}{d}$ are consecutive members in $F_{n+1}$, in which case we have $b+d=n+1$. In other words, as we increase the order of the Farey sequence, $\frac{a+c}{b+d}$ is the first term to appear between $\frac{a}{b}$ and $\frac{c}{d}$. Consider $0<\frac{1}{\phi}<1$. For each $m$, suppose that the Farey sequence $F_m$ is given by $\frac{0}{1}=a_{m, 0}<a_{m, 1}< \ldots <a_{m, |F_m|-1}=\frac{1}{1}$. We partition the interval $[0, 1)$ into $|F_m|-1$ intervals $[a_{m, i}, a_{m, i+1})$. Note that $\frac{1}{\phi}$ must belong to exactly one such interval. Also note that $\frac{1}{\phi}$ is irrational and so cannot be equal to $a_{m, i}$. Thus for each $m$, there is a unique pair of rational numbers $r_m, s_m$ s.t. $r_m, s_m$ are consecutive members of the Farey sequence $F_m$ and $r_m<\frac{1}{\phi}<s_m$. Observe that $\frac{1}{\phi}$ has convergents $\frac{f_{n-1}}{f_n}$. We observe that for $f_n \leq m<f_{n+1}$, we have that $\frac{f_{n-2}}{f_{n-1}}$ and $\frac{f_{n-1}}{f_n}$ are consecutive members of $F_m$. Explanation: This is because we have the identity $f_{n-2}f_n-f_{n-1}^2=(-1)^{n-1}$, so by property $1$ $\frac{f_{n-2}}{f_{n-1}}$ and $\frac{f_{n-1}}{f_n}$ are consecutive members of some Farey sequence $F_k$. 
Note that we necessarily have $k \geq f_n$, since $\frac{f_{n-1}}{f_n}$ is a member of $F_k$. Therefore $\frac{f_{n-2}}{f_{n-1}}$ and $\frac{f_{n-1}}{f_n}$ will be consecutive members in $F_{f_n}$. (Removing elements doesn't affect the fact that they are consecutive) Now, as we increase the order of the Farey sequence, the first term that appears between them is $\frac{f_{n-2}+f_{n-1}}{f_{n-1}+f_n}=\frac{f_n}{f_{n+1}}$, which cannot appear for $m<f_{n+1}$. Therefore $\frac{f_{n-2}}{f_{n-1}}$ and $\frac{f_{n-1}}{f_n}$ remain as consecutive members in the Farey sequence $F_m$, for $f_n \leq m<f_{n+1}$. Also, as the convergents of $\frac{1}{\phi}$ are alternately greater and smaller than $\frac{1}{\phi}$, we see that $\frac{1}{\phi}$ is strictly between $\frac{f_{n-2}}{f_{n-1}}$ and $\frac{f_{n-1}}{f_n}$. Therefore $\frac{f_{n-2}}{f_{n-1}}$ and $\frac{f_{n-1}}{f_n}$ must be $r_m$ and $s_m$ in some order, i.e. $\{\frac{f_{n-2}}{f_{n-1}},\frac{f_{n-1}}{f_n}\}=\{r_m, s_m\}$. Finally, from the question, $\frac{a}{b}>\phi>\frac{c}{d}$ so $\frac{d}{c}>\frac{1}{\phi}>\frac{b}{a}$. Since $ad-bc=1$, $\frac{b}{a}$ and $\frac{d}{c}$ are consecutive members of some Farey sequence $F_m$. Thus $\{\frac{b}{a}, \frac{d}{c}\}=\{r_m, s_m\}$ Clearly we have $f_n \leq m<f_{n+1}$ for some $n$ so by above $\{r_m, s_m\}=\{\frac{f_{n-2}}{f_{n-1}},\frac{f_{n-1}}{f_n}\}$, so $\{\frac{b}{a}, \frac{d}{c}\}=\{\frac{f_{n-2}}{f_{n-1}},\frac{f_{n-1}}{f_n}\}$. Therefore $\{\frac{a}{b}, \frac{c}{d}\}=\{\frac{f_{n-1}}{f_{n-2}},\frac{f_n}{f_{n-1}}\}$ are consecutive convergents of $\phi$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/563268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $[K : \mathbb{Q}] = 2$, then $K = \mathbb{Q}(\sqrt{d})$ This isn't for homework, but I would just like a hint, please. The question asks If $K$ is an extension field of $\mathbb{Q}$ and $[K : \mathbb{Q}] = 2$ (the dimension of $K$ over $\mathbb{Q}$), then $K = \mathbb{Q}(\sqrt{d})$ for some square-free integer $d$. I started by considering the linearly independent set $\{ 1 \}$ in $K$. Now since $[K : \mathbb{Q}] = 2$, I can extend this set to a basis $\{ 1, v \}$ of $K$ over $\mathbb{Q}$, where $v \notin \text{span} \{ 1 \}$. I see that $v^2 \in \text{span} \{ 1, v \}$, so that $v^2 = a_0 + a_1 v$ for some $a_0, a_1 \in \mathbb{Q}$. Now $a_0 \neq 0$, for otherwise $v = a_1$ and $v \in \text{span} \{ 1 \}$. However, I'm not sure how to conclude that $v$ is a square-free integer. I feel like I am on the right track (hopefully), and would greatly appreciate a hint, please!
$v$ satisfies a quadratic equation $ax^2 + bx + c = 0$, where $a \ne 0, b, c \in \mathbb{Z}$. Solve this equation and deduce $K = \mathbb{Q}(\sqrt D)$, where $D = b^2 -4ac$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/563337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the relationship between vector spaces and rings? Could you show me an example of how vector addition and scalar multiplication work when the scalars come from a ring rather than a field? That would be really helpful.
The relationship between a ring and its modules is the analogue of the relationship of a field and its vector spaces. For a field (or even skewfield) $F$, the Cartesian product $F\times F\times \dots\times F$ of finitely many copies of $F$ is a vector space in the ways you are probably familiar with. There is no reason you can't do this for $R\times R\times\dots\times R$ for a general ring $R$: defining scalar multiplication is exactly the same. The deal is that for general rings $R$ you call this a module, and in general there are a lot more modules that don't look like finitely many copies of $R$ (or even infinitely many copies of $R$.) That is why vector spaces are nice: because their structure is completely understood in terms of direct sums of copies of the field/skewfield.
{ "language": "en", "url": "https://math.stackexchange.com/questions/563413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Proving $\lim _{x \to 0^-} \frac{1}{x}=-\infty$ How do I calculate the limit: $$\lim _{x \to 0^-} \frac{1}{x}$$ The answer is clearly $-\infty$, but how do I prove it? It's clear that as $x$ approaches $0$ from the left side, $x$ becomes infinitely small but negative, making the limit go to $-\infty$, but how do I prove this mathematically?
Try $x = -0.0001$, then $x = -0.00000001$, then $x = -0.00000000000000001$: choose values that get closer and closer to $0$ and graph them to see the trend. To turn this into a proof, use the definition: $\lim_{x \to 0^-} \frac{1}{x} = -\infty$ means that for every $M < 0$ there is a $\delta > 0$ such that $\frac{1}{x} < M$ whenever $-\delta < x < 0$. Given $M < 0$, take $\delta = -\frac{1}{M}$; then $\frac{1}{M} < x < 0$ implies $\frac{1}{x} < M$, since both are negative and $|x| < \frac{1}{|M|}$ forces $\left|\frac{1}{x}\right| > |M|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/563501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Trying to understand the equivalence of two definitions of a sieve. Let $\mathcal{C}$ be a small category, let $C$ be an object of $\mathcal{C}$ and let $\mathbf{y}:\mathcal{C}\to[\mathcal{C}^{op},\mathbf{Set}]$ be the Yoneda embedding. I am trying to derive the simple fact that a sieve $S$ on $C$ is a family of morphisms in $\mathcal{C}$, all with common codomain $C$, such that $f\in S\Rightarrow f\circ g\in S$ (whenever the composition makes sense) from the fact that $S$ is a subobject $S\subseteq \mathbf{y}(C)=\mathrm{Hom}_{\mathcal{C}}(-,C)$ but there must be something I do not get. $S\subseteq \mathbf{y}(C)$ really means a monic $S\to\mathbf{y}(C)$ in the presheaf category. Hence it is a natural transformation between the functors $S$ and $\mathbf{y}(C)$. Therefore it is a collection of set functions $$\lbrace t_A:SA\to \mathbf{y}(C)(A)\rbrace_{A\in \textrm{ob}\mathcal{C}}$$ such that for all $g:B\to A$ in $\mathcal{C}$ the following square is commutative $$ \require{AMScd} \begin{CD} SA @>{t_A}>> \mathrm{Hom}_{\mathcal{C}}(A,C);\\ @VVV @VVV \\ SB @>{t_B}>> \mathrm{Hom}_{\mathcal{C}}(B,C); \end{CD}$$ where the vertical arrows are $Sg$ and $-\circ g$. But I don't see from here how the first description of a sieve above follows...I am probably not looking at this correctly. Can someone help?
The monic $S \rightarrow y(C)$ determines subsets $S(A) \subseteq Hom_\mathcal{C}(A,C)$, in the standard sense (why?), so that composing with a morphism sends the subset $S(A)$ to $S(B)$. Since the category is small, you can also take the union $\bigcup_{A \in \mathcal{C}} S(A)$. You can then check that this union is precisely a sieve, and that an isomorphic subobject gives the same sieve. And of course, given a bunch of "equivariant subsets" as above, you can construct a subobject of $y(C)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/563564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
On $n\times n$ matrices $A$ with trace of the powers equal to $0$ Let $R$ be a commutative ring with identity and let $A \in M_n(R)$ be such that $$\mbox{tr}A = \mbox{tr}A^2 = \cdots = \mbox{tr}A^n = 0 .$$ I want to show that $n!A^n= 0$. Any suggestion or reference would be helpful. P.S.: When $R$ is a field of characteristic zero I can prove that $A^n=0$ but I have no idea for the general case.
The following argument also works in prime characteristic. The coefficients of the characteristic polynomial $$ \chi(t)=\sum^n_{j=0} (-1)^j \omega_j (A)\:t^{n-j}\; $$ of $A$ satisfy the following identities: $$ \sum^j_{i=1} (-1)^{i+1} {\rm tr}(A^i)\:\omega_{j-i} (A) =j\cdot \omega_j (A) \hbox{ with } \omega_0 (A)=1,\; \omega_{n+j}(A) = 0 \quad \forall \; j\in \mathbb{N} $$ Now suppose we work over a field of prime characteristic $p$. Either $p\mid n!$, or $p>n$; in the latter case every $j\le n$ is invertible mod $p$, so the identities together with ${\rm tr}(A^i)=0$ for $1\le i\le n$ give $\omega_j (A) = 0$ for all $j \ge 1$ by induction on $j$. In this case $A$ has characteristic polynomial $t^n$, so that $A$ is nilpotent with $A^n=0$. Together this means $n!A^n=0$.
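A small numerical illustration with a strictly upper-triangular matrix, which automatically has $\operatorname{tr}A^k=0$ for all $k$ (a numpy sketch I am adding, not part of the answer):

```python
import numpy as np

# A strictly upper-triangular 3x3 integer matrix: all traces of powers vanish,
# and indeed A^3 = 0, so certainly 3! * A^3 = 0.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])

assert all(np.trace(np.linalg.matrix_power(A, k)) == 0 for k in (1, 2, 3))
assert np.all(np.linalg.matrix_power(A, 3) == 0)
```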
{ "language": "en", "url": "https://math.stackexchange.com/questions/563674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find a non-principal ideal (if one exists) in $\mathbb Z[x]$ and $\mathbb Q[x,y]$ I know that $\mathbb Z$ is not a field so this doesn't rule out non-principal ideals. I don't know how to find them though besides with guessing, which could take forever. As for $\mathbb Q[x,y]$ I know $\mathbb Q$ is a field which would mean $\mathbb Q[x]$ is a principal ideal domain, but does this still apply for $\mathbb Q[x,y]$ ?
Here is a general result: If $D$ is a domain, then $D[X]$ is a PID iff $D$ is a field. One direction is a classic result. For the other direction, take $a\in D$, consider the ideal $(a,X)$, and prove that it is principal iff $a$ is a unit. This immediately answers both questions: $(2,X)$ is not principal in $\mathbb Z[X]$ and $(X,Y)$ is not principal in $\mathbb Q[X,Y]=\mathbb Q[X][Y]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/563784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Intuition/Real-life Examples: Pairwise Independence vs (Mutual) Independence Would someone please advance/discuss some real-life situations illustrating falsities $1, 2$? I'd like to intuit why these are false. As a neophyte, since I still need to compute the probabilities for the examples in the two answers to apprehend them, I haven't naturalised these concepts yet. Thus, I beg leave to ask about other real-life examples which require less or no computation. I tried and would appreciate less numerical examples than http://notesofastatisticswatcher.wordpress.com/2012/01/02/pairwise-independence-does-not-imply-mutual-independence/ and http://econ.la.psu.edu/~hbierens/INDEPENDENCE.PDF, and Examples 1.22 and 1.23 on P39 of Introduction to Pr by Bertsekas. $1.$ Pairwise Independence $\require{cancel} \cancel{\implies}$ (Mutual) Independence. $2.$ Pairwise Independence $\require{cancel} \cancel{\Longleftarrow}$ (Mutual) Independence. P38 defines (Mutual) Independence: For all $S \subseteq \{1, ..., n - 1, n\} $ and events $A_i$, $\Pr(\cap_{i \in S} A_i) = \prod_{i \in S} \Pr(A_i).$
Consider three fair coin tosses. A: Head appears in the first toss. B: Head appears in the second toss. C: Head appears in the third toss. D: A and B yield the same outcome. Mutual independence: Firstly we only consider $A, B, C$ (just treat $D$ as nonexistent). It is obvious that they are mutually independent. And here are two perspectives on this statement. * *$$P(A\cap B\cap C)=P(A)P(B)P(C)=\frac{1}{8}$$ $$P(A\cap B)=P(A)P(B)=\frac{1}{4}$$ $$P(A\cap C)=P(A)P(C)=\frac{1}{4}$$ $$P(B\cap C)=P(B)P(C)=\frac{1}{4}.$$ *We firstly calculate the joint probability: $$P(A\cap B\cap C)=P(A)P(B)P(C)=\frac{1}{8}$$ And then get the marginal probability $P(B, C)$ by summing out $A$: $$P(B, C) = P(A, B, C) + P(\neg A, B, C)=\frac{1}{4}$$ And we can verify the other two easily due to the symmetry of $A$, $B$ and $C$. Mutual independence implies pairwise independence because we can just marginalize out the variables not in each pair. Pairwise independent but not mutually independent Let's consider $A$, $B$ and $D$ (just treat $C$ as nonexistent). $$P(A\cap B\cap D)=\frac{1}{4}\neq P(A)P(B)P(D)=\frac{1}{8}$$ $$P(A\cap B)=P(A)P(B)=\frac{1}{4}$$ $$P(A\cap D)=P(A)P(D)=\frac{1}{4}$$ $$P(B\cap D)=P(B)P(D)=\frac{1}{4}.$$ We can see that $A$, $B$ and $C$ are mutually independent (hence pairwise independent), but $A$, $B$ and $D$ are only pairwise independent and fail mutual independence. References: * *Pairwise independence versus mutual independence by Isaac (Ed) Leonard. *Mutually Independent Events by Albert R Meyer *Probabilistic graphical model by Stefano Ermon
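The whole example can be checked by brute-force enumeration of the eight equally likely outcomes (a Python sketch I am adding):

```python
from itertools import product

# Enumerate the 8 outcomes of three fair coin tosses (1 = heads) and check:
# A, B, D are pairwise independent but NOT mutually independent.
outcomes = list(product([0, 1], repeat=3))

def pr(event):
    return sum(1 for w in outcomes if event(w)) / len(outcomes)

A = lambda w: w[0] == 1
B = lambda w: w[1] == 1
D = lambda w: w[0] == w[1]

for X, Y in [(A, B), (A, D), (B, D)]:
    assert pr(lambda w: X(w) and Y(w)) == pr(X) * pr(Y)   # pairwise: 1/4 = 1/2 * 1/2

joint = pr(lambda w: A(w) and B(w) and D(w))
assert joint == 0.25                                      # 1/4 ...
assert joint != pr(A) * pr(B) * pr(D)                     # ... but the product is 1/8
```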
{ "language": "en", "url": "https://math.stackexchange.com/questions/563887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Solve $\frac{d}{dx}f(x)=f(x-1)$ I am trying to find a function such that $\dfrac{d}{dx}f(x)=f(x-1)$. Is there such a function other than $0$?
Certainly. Let $b$ be the unique real number such that $b=e^{-b}.$ Then for any real $a,$ the function $$f(x)=ae^{bx}$$ satisfies the desired property. In fact, for real-valued functions on the reals, only functions of this form will satisfy the desired property. (As achillehui points out in the comments, there are other alternatives if we consider complex-valued functions.) In particular, $b=W(1)$, where $W$ is the Lambert W function. $W(1)$ is also sometimes known as the Omega constant.
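As a quick numerical sanity check (a Python sketch, not part of the original answer): solve $b=e^{-b}$ by fixed-point iteration and verify that $f(x)=ae^{bx}$ satisfies $f'(x)=f(x-1)$, using $f'(x)=abe^{bx}$ and $f(x-1)=ae^{bx}e^{-b}$.

```python
import math

# Solve b = e^{-b} by fixed-point iteration; it converges since the map
# b -> e^{-b} has derivative of magnitude < 1 near the fixed point.
b = 0.5
for _ in range(100):
    b = math.exp(-b)

# b is the Omega constant W(1) = 0.567143...
assert abs(b - math.exp(-b)) < 1e-12

a = 3.7  # an arbitrary constant
def f(x):      return a * math.exp(b * x)
def fprime(x): return a * b * math.exp(b * x)   # derivative of a*e^{bx}

# f'(x) = f(x-1), since e^{-b} = b.
for x in [-2.0, 0.0, 1.5, 10.0]:
    assert abs(fprime(x) - f(x - 1)) < 1e-9 * abs(f(x - 1))
```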
{ "language": "en", "url": "https://math.stackexchange.com/questions/564014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
$\sum_{i=1}^n \frac{1}{i(i+1)} = \frac{3}{4} - \frac{2n+3}{2(n+1)(n+2)}$ by induction. I am wondering if I can get some help with this question. I feel like this is false, as I have tried many ways even to get the base case working (for induction) and I can't seem to get it. Can anyone confirm that this is false? If I am wrong, I would really appreciate a hint. $$\sum_{i=1}^n \frac{1}{i(i+1)} = \frac{3}{4} - \frac{2n+3}{2(n+1)(n+2)}$$
Let $$F(n)=\sum_{i=1}^n\frac{1}{i(i+1)}$$ and $$G(n)=\frac{3}{4}-\frac{2n+3}{2(n+1)(n+2)}$$ Your task is to prove that $F(n)=G(n)$ for all $n$. To do this by induction, prove first that $F(1)=G(1)$. Then, assume $F(n)=G(n)$. Add $\frac{1}{(n+1)(n+2)}$ to both sides; this is because $F(n)+\frac{1}{(n+1)(n+2)}=F(n+1)$. We now have $$F(n+1)=G(n)+\frac{1}{(n+1)(n+2)}$$ Now you need to do some algebra to prove that $G(n)+\frac{1}{(n+1)(n+2)}=G(n+1)$, at which point you're done.
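For what it's worth, a quick exact-arithmetic check (a Python sketch, mine, not part of either post) supports the asker's suspicion about the statement exactly as written: the base case fails for the summand $\frac{1}{i(i+1)}$, while the claimed closed form does match the sum of $\frac{1}{i(i+2)}$, so the exercise was likely mistyped.

```python
from fractions import Fraction

def F(n, step):
    """Partial sum of 1/(i*(i+step)) in exact rational arithmetic."""
    return sum(Fraction(1, i * (i + step)) for i in range(1, n + 1))

def G(n):
    """The claimed closed form."""
    return Fraction(3, 4) - Fraction(2 * n + 3, 2 * (n + 1) * (n + 2))

# Base case fails for 1/(i*(i+1)): F(1) = 1/2 but G(1) = 1/3.
assert F(1, 1) == Fraction(1, 2)
assert G(1) == Fraction(1, 3)

# ...but the closed form is exactly the sum of 1/(i*(i+2)).
assert all(F(n, 2) == G(n) for n in range(1, 200))
```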
{ "language": "en", "url": "https://math.stackexchange.com/questions/564095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Each closed set is $f^{-1}(0)$ Let $X$ be some compact Hausdorff space (or even normal space). Is it true that each closed subset $X'$ is $f^{-1}(0)$ for some $f\in C(X,\mathbb{R})$? I know that there is Urysohn's lemma which gives us an opportunity to continue each function $X'\longrightarrow \mathbb{R}$ to a function $X\longrightarrow \mathbb{R}$, but it doesn't seem to be useful in my case.
No. The compact ordinal space $\omega_1+1$ with the order topology is an easy counterexample, as is $\beta\Bbb N$: each has at least one point $x$ such that $\{x\}$ is not a $G_\delta$-set and therefore cannot be $f^{-1}[\{0\}]$ for any continuous real-valued $f$. You’re asking for spaces in which every closed set is a zero-set (or as Engelking somewhat idiosyncratically calls it, a functionally closed set). Among $T_1$ spaces these are precisely the perfectly normal spaces, i.e., the normal spaces in which every closed set is a $G_\delta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/564167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Expectation value of certain number of trials of multinomial distribution. A player can extract a card from a deck (the size of the deck is infinite) to obtain one of $k$ kinds of cards, and the probability of obtaining each kind is given by $p_i$. (Obviously $\sum_{i=1}^{k} p_{i} = 1$.) If you collect AT LEAST one card of each kind, then you get 1 dollar. Actually, the amount of money you receive is the minimum number of collected cards over all kinds (in other words, the number of complete sets of cards of all kinds that you've assembled). My goal is to calculate the expected number of trials (namely $n$) to receive more than $m$ dollars, as a closed form in the $p_{i}$s, $k$ and $m$. How can I do this? Thanks.
Please search the web for the coupon collector's problem. Assume all kinds are equally likely, i.e. $p_i = 1/k$. Let $N$ be the number of cards to extract till we get all $k$ kinds, and let $N_{i}$ be the number of extracts needed to collect the $i$-th kind after $i-1$ kinds have been collected - note that $N=\sum_{i=1}^{k}N_{i}.$ We want to get more than $m$ dollars, so we'll calculate $n\geq E(m\cdot N)=m\cdot E(N)=m\sum_{i=1}^{k}E(N_{i}).$ We notice that: $$N_{1} = 1$$ $$N_{2} \sim Geom\left(\frac{k-1}{k}\right)$$ $$\vdots $$ $$N_{k} \sim Geom\left(\frac{1}{k}\right)$$ Also we know that $$X\sim Geom(p)\Rightarrow E(X)=\frac{1}{p}$$ therefore: $$m\sum_{i=1}^{k}E(N_{i})=m\cdot k\sum_{i=1}^{k}\frac{1}{i}=m\cdot k\cdot H_{k}$$
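In the equal-probability case ($p_i = 1/k$) the expectation $E(N)=kH_k$ is easy to check by simulation — a Python sketch (mine, not from the original answer):

```python
import random
from fractions import Fraction

def draws_to_complete(k, rng):
    """Number of uniform draws until all k kinds have been seen."""
    seen, n = set(), 0
    while len(seen) < k:
        seen.add(rng.randrange(k))
        n += 1
    return n

k = 5
H_k = sum(Fraction(1, i) for i in range(1, k + 1))   # harmonic number H_5 = 137/60
expected = float(k * H_k)                            # k * H_k ~ 11.4167

rng = random.Random(0)                               # fixed seed for reproducibility
trials = 20000
mean = sum(draws_to_complete(k, rng) for _ in range(trials)) / trials
assert abs(mean - expected) < 0.3                    # Monte Carlo, so only approximate
```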
{ "language": "en", "url": "https://math.stackexchange.com/questions/564248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that the Gale-Shapley algorithm terminates after at most $n^2 - n + 1$ proposals. How do you prove that the Gale-Shapley algorithm terminates after at most $n^2 - n + 1$ proposals by showing that at most one proposer receives his or her lowest-ranked choice?
Assuming you are using the same number of proposers and acceptors (because all of your problems are this way): If exactly one proposer (from now on man) gets his last-choice woman, he will have proposed $n$ times. The remaining $n-1$ men are able to propose a maximum of $n-1$ times, so $$(n-1)(n-1)+n=n^2-2n+1+n=n^2-n+1$$ This is possible as Preferences for four ladies and four gentlemen where one proposer receives his/her lowest-ranked choice,... shows. Per http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15251-f10/Site/Materials/Lectures/Lecture21/lecture21.pdf, the men always get their most preferable stable matching. Likewise the women always get their least preferable stable pairing, so if it does not matter who proposes, there is only one stable pairing of men and women for that list of preferences. If there were two men (Adam and Bob) who are paired with their worst-choice women (Alice and Beth respectively) by this GS method, then Adam and Beth and/or Alice and Bob would produce a stable pairing. This would mean that the dude would end up with someone he prefers. This is trivial for 2. For $n=3$, Charlie must be Christina's first choice or the other pairs would not be stable, as her first choice would prefer her to their current partner. If this is the case, the AB pairs would still be stable as Christina still prefers Charlie. When we throw in David and Danielle so that $n=4$, or increase $n$ to an arbitrarily high number, the arrangement of new women and men (everyone except the A's and B's) that makes the A-A and B-B pairings stable would make the A-B and B-A pairs stable. Therefore, via the proof by Gusfield and Irving, they would form instead. To confirm this, make any stable scenario of preferences and pairs that has two men with their last choices, let the men introduce the women to swinging by switching partners, and determine if the new setup is stable (it will be).
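The bound is easy to exercise in code. Below is a Python sketch (the function name and the small worst-case instance are my own construction, not from the post) of man-proposing Gale-Shapley that counts proposals; a 3x3 instance attains $n^2-n+1 = 7$, and brute force over all 2x2 preference profiles confirms the maximum of $3$ there.

```python
from itertools import permutations

def gale_shapley_proposals(men_prefs, women_prefs):
    """Man-proposing Gale-Shapley; returns the total number of proposals.

    men_prefs[m] / women_prefs[w] list the other side, most preferred first.
    """
    n = len(men_prefs)
    rank = [{m: r for r, m in enumerate(prefs)} for prefs in women_prefs]
    next_choice = [0] * n          # index into each man's list
    partner = [None] * n           # partner[w] = current man, or None
    free = list(range(n))
    proposals = 0
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        proposals += 1
        cur = partner[w]
        if cur is None:
            partner[w] = m
        elif rank[w][m] < rank[w][cur]:
            partner[w] = m
            free.append(cur)       # the displaced man proposes again later
        else:
            free.append(m)         # rejected; tries his next choice later
    return proposals

# A 3x3 instance (my construction) attaining the bound n^2 - n + 1 = 7.
men = [[0, 1, 2], [0, 1, 2], [1, 0, 2]]
women = [[2, 1, 0], [1, 0, 2], [0, 1, 2]]
assert gale_shapley_proposals(men, women) == 7

# Brute force for n = 2: the maximum over all 16 profiles is 2^2 - 2 + 1 = 3.
lists = [list(p) for p in permutations(range(2))]
best = max(gale_shapley_proposals([a, b], [c, d])
           for a in lists for b in lists for c in lists for d in lists)
assert best == 3
```

Note that the total proposal count equals the sum over men of the rank of their final partner, so it does not depend on the order in which free men propose.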
{ "language": "en", "url": "https://math.stackexchange.com/questions/564322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Quadratic residues solutions I have a homework question that I can't figure out. It says: If the prime $p > 5$, show that there are always two quadratic residues of $p$ that differ by two.
Note that $1$ and $4$ are QR. So if one of $2$ or $3$ is a QR, we are finished. If both $2$ and $3$ are NR, then $6$ is a QR. But then $4$ and $6$ are QR that differ by $2$.
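The case analysis is easy to confirm computationally — a Python sketch (mine, not the answerer's):

```python
def quadratic_residues(p):
    """Nonzero quadratic residues mod an odd prime p."""
    return {x * x % p for x in range(1, p)}

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in range(7, 500):
    if not is_prime(p):
        continue
    qr = quadratic_residues(p)
    # Two quadratic residues differing by two always exist for p > 5;
    # the proof guarantees one of the pairs (2,4), (1,3), or (4,6).
    assert any(r + 2 in qr for r in qr)
```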
{ "language": "en", "url": "https://math.stackexchange.com/questions/564572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A positive integer is divisible by $3$ iff 3 divides the sum of its digits I am having trouble proving the two following statements: * *If $p|N$, $q|N$ and $\gcd(p,q)=1$, then prove that $pq|N$ *If $x$ is a nonzero positive integer, then prove that $3|x$ if and only if 3 divides the sum of all digits of $x$. For both questions I tried to use theorems of discrete mathematics, but I could not find a way to solve them.
Here is an explanation of number 2. We will use a corollary from a theorem (both taken from Rosen's Discrete Mathematics text. Theorem: Let $m$ be a positive integer. If $a\equiv b \text{ mod } m$ and $c\equiv d \text{ mod } m,$ then $$ a+c \equiv b+d \text{ mod } m \qquad\text{and}\qquad ac \equiv bd \text{ mod } m. $$ From this theorem we get this corollary Corollary: Let $m$ be a positive integer and let $a$ and $b$ be integers. Then $$ (a+b) \text{ mod } m = ((a \text{ mod } m) + (b \text{ mod } m)) \text{ mod } m $$ and $$ ab \text{ mod } m = ((a \text{ mod } m)(b \text{ mod } m)) \text{ mod } m. $$ Note that "$\equiv$" is used to indicate congruence, while "=" is used to indicate equality. The problem says: If $x$ is a non zero positive integer number, then prove that $3|x$ if and only if $3$ divides the sum of all digits of $x$. We can represent $x$ as sum like so $$ \begin{aligned} x &= a_n\cdot 10^n + a_{n-1}\cdot 10^{n-1} + \cdots + a_1\cdot 10^{1} + a_0\cdot 10^0\\ &= \sum_{k=0}^n a_k10^k, \end{aligned} $$ where $a_0$ is in the ones place in $x$, $a_1$ is in the tens place in $x$, $a_2$ is in the hundreds place in $x$, etc $\dots$ Note that $3|x$ implies that $0 = x\text{ mod } 3.$ Then, $$ \begin{aligned} 0 &= x\text{ mod } 3\\ &=\left(\sum_{k=0}^na_k10^k\right)\text{ mod } 3 &&\text{substitute}\\ &=\left(\sum_{k=0}^n(a_k10^k\text{ mod } 3)\right)\text{ mod } 3 &&\text{ by corollary}\\ &=\left(\sum_{k=0}^n([(a_k\text{ mod } 3)(10^k\text{ mod } 3)]\text{ mod } 3)\right)\text{ mod } 3 &&\text{ by corollary}\\ &=\left(\sum_{k=0}^n([(a_k\text{ mod } 3)(1\text{ mod } 3)]\text{ mod } 3)\right)\text{ mod } 3 &&1 \equiv 10^k\text{ mod }3\\ &=\left(\sum_{k=0}^n(a_k\text{ mod } 3)\right)\text{ mod } 3 &&(a_k)(1)=a_k\text{ and corollary}\\ &=\left(\sum_{k=0}^na_k\right)\text{ mod } 3 &&\text{by corollary.}\\ \end{aligned} $$ The last equality states $$ 0 = \left(\sum_{k=0}^n a_k\right)\text{ mod }3, $$ which proves that $3$ divides the sum of all digits of $x$. 
To prove the other direction (if $3$ divides the sum of all digits of $x$, then $3|x$) just use the same equalities, but start at the bottom and work your way to the top.
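The equivalence is trivial to spot-check exhaustively on a range — a Python sketch (mine, not part of the answer):

```python
def digit_sum(x):
    return sum(int(d) for d in str(x))

# 3 | x  if and only if  3 | digit_sum(x), checked on 1..99999.
for x in range(1, 100000):
    assert (x % 3 == 0) == (digit_sum(x) % 3 == 0)
```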
{ "language": "en", "url": "https://math.stackexchange.com/questions/564662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
$x^a+y^b$ is divisible by $x^b+y^a$ for infinitely many $(x,y)$ Let $a\geq b>0$ be integers. For which $(a,b)$ do there exist infinitely many positive integers $x\neq y$ such that $x^a+y^b$ is divisible by $x^b+y^a$? If $a=b$, we certainly have $x^a+y^a$ divisible by itself. For $a>b$ maybe we can choose some form of $(x,y)$ in terms of $(a,b)$?
If $a$ is odd and $b=1$, then for any positive $x>1$ with $y=1$ you have the integer $(x^a+1)/(x+1).$ There are other families of pairs $(a,b)$ with some good choices of $x$ or $y$. For example, replacing $x$ above by $x^t$ gives the pair $(at,t)$ of exponents which, on choosing $y=1$, gives the integer $(x^{at}+1)/(x^t+1).$
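Both families are easy to verify numerically — a Python sketch (mine):

```python
# Family 1: a odd, b = 1, y = 1: then x^b + y^a = x + 1 divides x^a + y^b = x^a + 1.
for a in [3, 5, 7, 9]:
    for x in range(2, 60):
        assert (x ** a + 1) % (x + 1) == 0

# Family 2: exponents (a*t, t) with y = 1 and a odd:
# x^t + 1 divides x^{a*t} + 1.
a, t = 5, 3
for x in range(2, 60):
    assert (x ** (a * t) + 1) % (x ** t + 1) == 0
```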
{ "language": "en", "url": "https://math.stackexchange.com/questions/564754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Complex Analysis Proofs Let $f$ = $u + iv$ be an entire function satisfying $u(x,y) \geq v(x,y)-1$ for every $(x,y) \in R^2$. Prove that all functions $f, u, v$ are constant. Can someone please help me prove this...
With these types of questions, when I see entire and some bound on the function, I immediately try to apply Liouville's theorem. From there, it is just a matter of trying to get a bounded entire function. In this case, the following gives a bounded entire function: The condition that $u\geq v-1$ can be rewritten to say that $v-u\leq 1$. Now consider $$ |e^{-f-if}| = |e^{-u-iv-iu+v}| = e^{v-u}\leq e^1 $$ Note that the function $-f-if$ is entire, being the sum of entire functions. Also $e^{-f-if}$ is entire, being the composition of entire functions. The above inequality shows that we have a bounded entire function; Liouville's theorem now implies that $$ e^{-f-if} = c$$ Note that, since $e^z$ is never zero, $c\neq 0$. To conclude that $f$ must be constant, differentiate both sides to give $$ (-f'-if')e^{-f-if} =0$$ hence $(-1-i)f' = 0$ which gives $f'=0$, hence $f$ must be constant. It follows that $u$ and $v$ are constant as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/564870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Question about Ito integral I was wondering if the Ito integral $\int_0^T B(t)dB(t)$ is Gaussian (where $B(t)$ is Brownian motion)? Thank you so much, I appreciate any help ^^
$d(B_t^2) = 2B_t dB_t + dt$. Therefore your integral is $\frac12(B_T^2-T)$. $B_T$ is Gaussian $N(0,\sqrt T)$, therefore $B_T^2$ is $T$ times a $\chi^2_1$. In particular, $\frac12(B_T^2-T)$ takes values in $[-T/2,\infty)$, so the Ito integral is not Gaussian.
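A Monte Carlo sketch in Python (mine, not part of the answer) confirms the identity pathwise: the left-endpoint Riemann-Ito sum $\sum B_{t_i}(B_{t_{i+1}}-B_{t_i})$ tracks $\frac12(B_T^2-T)$, whose values are bounded below by $-T/2$.

```python
import random

random.seed(1)
T, N = 1.0, 100000
dt = T / N

# One Brownian path and its Ito integral of B dB (left-endpoint sum).
B, ito = 0.0, 0.0
for _ in range(N):
    dB = random.gauss(0.0, dt ** 0.5)
    ito += B * dB        # left endpoint, as the Ito integral requires
    B += dB

closed_form = 0.5 * (B * B - T)   # (B_T^2 - T)/2
assert abs(ito - closed_form) < 0.05
assert closed_form >= -T / 2      # support bounded below, unlike any Gaussian
```

The small discrepancy comes entirely from $\sum (\Delta B_i)^2$ approximating $T$; it shrinks like $O(1/\sqrt N)$.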
{ "language": "en", "url": "https://math.stackexchange.com/questions/564942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The Concept of Instantaneous Velocity The concept of instantaneous velocity really becomes counter-intuitive to me when I really think deeply about it. Instantaneous velocity is the velocity of something at an instant of time; however, at the very next instant the velocity changes. In general, speed tells us how quickly something is changing its position with respect to some point in space. Then by instantaneous velocity we mean that the object at some instant is changing its position with respect to time, but may change to a velocity much faster. But if it is changing its position with that rate then the time passes by and that instant as well and at another instant the velocity is different. Did it even move with that velocity for a moment? I’m sorry that I’m not exactly able to tell you what my problem is; however, it is something like I've described.
Anything that changes in speed will do so as a result of momentum transfer into the object. This requires some interaction with other forces. Force fields are the source of momentum transfer; this transfer takes some time, and thus we get acceleration.
{ "language": "en", "url": "https://math.stackexchange.com/questions/565003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Convergence of $\frac{1}{n}\sum_{i=1}^{n}\left[\frac{\left(\log\left(1+i/n\right)\right)^2}{1+i/n}\right]$ Sequence of real numbers $$S_n=\frac{1}{n}\sum_{i=1}^{n}\left[\frac{\left(\log\left(1+\frac{i}{n}\right)\right)^2}{1+\frac{i}{n}}\right]$$ Does $\lim\limits_{n \to \infty} S_n$ exist? If so, compute the value. My Solution: A) According to Cauchy's first theorem on limits, this limit will be the same as $$\lim_{n \to \infty}\left[\frac{(\log(1+\frac{n}{n}))^2}{1+\frac{n}{n}}\right] =\frac{(\log(2))^2}{2}$$ B) $S_n$ can also be written as $$\int_{0}^{1}\frac{(\log(1+x))^2}{1+x}dx=\frac{(\log(2))^3}{3}$$ Two answers by two methods. What am I missing here? Is this sequence not convergent?
Method B) is the correct one. $$\lim_{n\to\infty}\dfrac{1}{n}\sum_{i=1}^{n}\dfrac{\left(\ln\left(1+\dfrac{i}{n}\right)\right)^2}{1+\dfrac{i}{n}}=\lim_{n\to\infty}\dfrac{1}{n}\sum_{i=1}^{n}f\left(\dfrac{i}{n}\right)=\int_{0}^{1}f(x)dx$$ where $$f(x)=\dfrac{\left(\ln{(1+x)}\right)^2}{1+x}$$ Method A) does not apply: Cauchy's first theorem concerns averages of a fixed sequence $(a_n)$, whereas here the summands $f(i/n)$ change with $n$ (they form a triangular array), so the theorem gives no information about $S_n$.
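A numerical check of the Riemann-sum limit (a Python sketch, mine): for large $n$ the sum approaches $\frac{(\ln 2)^3}{3}$.

```python
import math

def S(n):
    """The Riemann sum S_n from the question."""
    return sum(math.log(1 + i / n) ** 2 / (1 + i / n) for i in range(1, n + 1)) / n

target = math.log(2) ** 3 / 3        # (ln 2)^3 / 3, about 0.111008
s_big = S(100000)
assert abs(s_big - target) < 1e-4
assert abs(s_big - target) < abs(S(1000) - target)   # error shrinks as n grows
```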
{ "language": "en", "url": "https://math.stackexchange.com/questions/565085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Does $\lim_{(x,y) \to (0,0)} xy^4 / (x^2 + y^8)$ exist? From this question on answers.yahoo, the guy says the following limit does not exist: $$\lim_{(x,y) \to (0,0)} \frac{xy^4}{x^2 + y^8},$$ then on wolfram, it says the limit is equal to $0$. When I did it myself, I tried approaching $(0,0)$ from the $x$-axis, $y$-axis, $y=x$, $y=x^2$. They all equal $0$. But when I tried the squeeze theorem, I got $y^8 \leq x^2 + y^8$, therefore $0 \leq |xy^4/(x^2+y^8)| \leq |\dfrac{x}{y^4}|$, and the latter does not exist for $(x, y) \to (0,0)$. So does the original limit exist or not? I'm getting contradicting information from various sources. Also, if it doesn't exist (it looks like it doesn't... I think), how would I go about proving that it doesn't?
Hint: let $x=ky^4$ $(k\neq 0)$, so along that path $$\lim_{(x,y)\to(0,0)}\dfrac{xy^4}{x^2+y^8}=\dfrac{k}{k^2+1}$$ Since the value depends on $k$, the limit does not exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/565187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Let p and q be distinct odd primes. Define $n=pq$ and$ \phi(n)=(p−1)(q−1)$ (a) Show that $p+q = n−\phi(n)+1$ and $p−q = \sqrt{(p+q)^2−4n}$. (b) Suppose you are given that $n = 675683$ and are told that $p−q = 2$. Explain how this information can help us factor $n$ quickly. (Hint: Try to use the result from part (a) to obtain the values of $p$ and $q$ instead. You may use a calculator to help you solve this problem.) (c) Suppose $n$ and $\phi(n)$ are known, but $p$ and $q$ are unknown primes. Use your answer from part (a) to express the primes $p$ and $q$ in terms of $n$ and $\phi(n)$ I'm just having a bit of trouble with part (c). I think I figured out part (a) and (b) but just in case here's my work. Part(a) $\phi(n) = (p-1)(q-1) = pq -(p+q) + 1 = n -(p+q) + 1$ So $p+q = n - \phi(n) + 1$ $(p-q)^2 = p^2 + q^2 -2n$ $(p+q)^2 = p^2 + q^2 + 2n$ $(p+q)^2 -(p-q)^2 = 4n $ So $(p+q)^2 -4n = (p-q)^2$ $p-q = \sqrt{(p+q)^2 -4n}$ (b) $n = pq = 675683$ and $p-q = 2$ So $p(p-2) = 675683 $ $p^2-2p - 675683 = 0$. Solving by the quadratic formula $p = 1 + \sqrt{675684} = 823$ $q = p-2 = 821. $
Since you already have expressions for $p+q$ and $p-q$ in terms of just $n$ and $\phi(n)$, all you have to do is use these expressions and note that $$p=\frac{(p+q)+(p-q)}{2}, \qquad q=\frac{(p+q)-(p-q)}{2}.$$
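Part (c) amounts to a tiny algorithm. A Python sketch (the function name is mine), tested on the part-(b) modulus:

```python
from math import isqrt

def factor_from_phi(n, phi):
    """Recover p, q from n = p*q and phi = (p-1)*(q-1), via part (a)."""
    s = n - phi + 1                 # s = p + q
    d = isqrt(s * s - 4 * n)        # d = p - q
    assert d * d == s * s - 4 * n   # sanity: must be a perfect square
    return (s + d) // 2, (s - d) // 2

n = 675683
phi = (823 - 1) * (821 - 1)         # pretend phi(n) is what we were given
p, q = factor_from_phi(n, phi)
assert (p, q) == (823, 821) and p * q == n
```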
{ "language": "en", "url": "https://math.stackexchange.com/questions/565273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $(\overline{E})^{'}= E^{'} \cup ( E^{'})^{'}$ hold? My question is as follows: Suppose $E$ is a set in a metric space $X$, let $\overline{E}$ denote the closure of $E$, and let $E^{'}$ be the set of all the limit points of $E$. We all know that $\overline{E}=E\cup E^{'}$. Then my question is: Does the following equality hold? $(\overline{E})^{'}= E^{'} \cup ( E^{'})^{'}$ If not, can you give me a counterexample in which the equality does not hold? Thanks so much!
We have more. In a Hausdorff space (even in a $T_1$ space, if you already know what that is), $x$ is a limit point of $A$ if and only if every neighbourhood of $x$ contains infinitely many points of $A$. Thus in such spaces, we have $$(\overline{E})' = E'.$$ Since evidently $(E')' \subset (\overline{E})'$, the equality holds in Hausdorff (or, more generally, $T_1$) spaces, in particular in metric spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/565420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Intuitive understanding of determinants? For an $n \times n$ matrix $A$:$$\det (A) = \sum^{n}_{i=1}a_{1i}C_{1i}$$ where $C_{1i}$ is the cofactor of $a_{1i}$. If the determinant is $0$, the matrix is not invertible. Could someone give an intuitive explanation of why a zero determinant means non-invertibility? I'm not looking for a proof, the book gives me one. I'm looking for intuition. For example, consider the following properties of zero: Property 1: If a factor of a polynomial is $0$ at $x$, then the polynomial is zero at $x$. Intuition: Anything times zero is zero, so if the remaining polynomial is being multiplied by zero, it has to be zero. Property: $x + 0 = x$ Intuition: If you have something and don't change anything about it, it remains the same. So why does having a zero determinant imply non-invertibility? Could someone give me a similar intuition for determinants? Thanks.
There is a simple property of determinants: $Det(A)Det(B)=Det(AB)$. So if you take $Det(A A^{-1})= Det(A)Det(A^{-1})= Det(I) = 1$, it follows that $Det(A)$ must be different from $0$ for $A$ to be invertible.
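For $2\times 2$ matrices both facts used here — multiplicativity and "$\det = 0$ means no inverse" — can be checked directly (a Python sketch, mine):

```python
from fractions import Fraction

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def mul2(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 3], [1, 4]]
B = [[0, 1], [5, 2]]
assert det2(mul2(A, B)) == det2(A) * det2(B)   # det(AB) = det(A) det(B)

def inverse2(m):
    dt = det2(m)
    if dt == 0:
        return None                            # dividing by det(m) breaks down
    (a, b), (c, d) = m
    return [[Fraction(d, dt), Fraction(-b, dt)],
            [Fraction(-c, dt), Fraction(a, dt)]]

assert inverse2([[1, 2], [2, 4]]) is None      # det = 0: not invertible
assert mul2(A, inverse2(A)) == [[1, 0], [0, 1]]
```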
{ "language": "en", "url": "https://math.stackexchange.com/questions/565502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
What's the limit of $(1+\frac{1}{8^n})^n$ What's the limit of $(1+\frac{1}{8^n})^n$? How do I find the answer for this? Thanks in advance.
$$ \left(1 + \frac{1}{8^{n}}\right)^{n} = e^{n\ln\left(1 + 8^{-n}\right)} \sim e^{n/8^{n}} \to \color{#0000ff}{\large 1} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/565562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Algebraic solution to: Do the functions $y=\frac{1}{x}$ and $y=x^3$ ever have the same slope? The exercise doesn't specify how it must be answered, so I chose a graphical proof because I couldn't come up with an algebraic one. Sketching the graphs of $y=\frac{1}{x}$ and $y=x^3$, I noticed that $y=x^3$ always has a nonnegative slope, whereas $y=\frac{1}{x}$ always has a negative slope. Therefore these two functions never have the same slope. However, I'm wondering if there's a algebraic way of showing this. I thought of differentiating each function and setting the values equal, but I think this would only prove that the two functions don't have the same slope at any particular x, and not that they don't have the same slopes anywhere over their domains.
Let $f(x)=x^3$ and $g(x)=x^{-1}$. We have $f'(x)=3x^2$ and $g'(x)=-x^{-2}$, now we do exactly what you said: $Im(f')\cap Im(g')=\left\{t\in\mathbb{R}|t\geq0 \right\}\cap\left\{t\in\mathbb{R}|t<0 \right\}=\left\{t\in\mathbb{R}|t\geq0\wedge t<0 \right\}=\emptyset$
{ "language": "en", "url": "https://math.stackexchange.com/questions/565653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
PDF of $Y - (X - 1)^2$ for $(X, Y)$ uniform on $[0, 2] \times [0, 1]$ I am trying to find the p.d.f (but will calculate the c.d.f first) of $Z = Y - {(X - 1)}^2$ knowing that $(X, Y)$ is distributed uniformly on $[0, 2] \times [0, 1]$. So, $$f_{X, Y}(x, y) = \begin{cases}\frac{1}{2} & (x, y) \in [0, 2] \times [0, 1] \\ 0 & \text{otherwise} \end{cases}$$ $$F_Z(z) = P_{X, Y}(\left\{(x, y): y - {(x - 1)}^2 \leq z\right\})$$ I understand that $z$ changes in: $z \leq - 1$, $- 1 < z \leq 0$, $0 < z \leq 1$ and $z > 1$ When $z \leq - 1$: $F_Z(z) = 0$ and when $z > 1$: $F_Z(z) = 1$. My question is regarding $- 1 < z \leq 0$ and $0 < z \leq 1$. This is what I got: $$F_Z(z) = \begin{cases}2 \cdot (\int\limits_0^{1 - \sqrt{-z}} \int\limits_0^{z + {(x - 1)}^2} \frac{1}{2}\,\mathrm{d}y\,\mathrm{d}x) & - 1 < z \leq 0 \\ \int\limits_{1 - \sqrt{1 - z}}^{1 + \sqrt{1 - z}} \int\limits_{z + {(x - 1)}^2}^1 \frac{1}{2}\,\mathrm{d}y\,\mathrm{d}x & 0 < z \leq 1 \end{cases}$$ Do you agree with the definition of the c.d.f? I am asking because finding the integrals (especially the integration limits) was a bit tricky. Finally, assuming that the c.d.f is correct, I did the derivative and got the following p.d.f: $$f_Z(z) = \begin{cases}1 - \sqrt{-z} & - 1 < z \leq 0 \\ \sqrt{-(z - 1)} & 0 < z \leq 1 \\ 0 & \text{otherwise} \end{cases}$$
Given: the joint pdf of $(X,Y)$ is $f(x,y)$: [image of the mathStatica output not available] Some neat solutions have been posted showing all the manual steps which involve some work. Alternatively (or just to check your work), most of this can be automated which makes solving it really quite easy. Basically, it is a one-liner: The cdf of $Z = Y-(X-1)^2$ is $P(Z<z)$: [image of the mathStatica output not available] where Prob is a function from the mathStatica add-on to Mathematica (I am one of the authors of the former). The pdf of $Z$ is, of course, just the derivative of the cdf: [image of the mathStatica output not available] All done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/565724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Accumulation Points for $S = \{(-1)^n + \frac1n \mid n \in \mathbb{N}\}$ I was recently asked to find the accumulation points of the set $$S = \{(-1)^n + \frac{1}{n} \mid n \in \mathbb{N}\}$$ I answered that the accumulation points are $\{-1,1\}$, because despite the fact that the sequence $\left((-1)^n + \frac{1}{n}\right)$ diverges, we can still use $\frac{1}{n}$ to find an element arbitrarily close to either $1$ or $-1$, i.e. to show that there is an element in an arbitrarily small neighborhood of $1$ or $-1$. (No proof was required for this question --- this is my intuition.) Am I right at all?
Let's denote $$a_n=(-1)^n+\frac{1}{n}$$ then the subsequences $(a_{2n})$ and $(a_{2n+1})$ converge to $1$ and $-1$ respectively, so $1$ and $-1$ are two accumulation points. There is no other accumulation point: since $\frac1n\to 0$, any convergent subsequence of $(a_n)$ has the same limit as the corresponding subsequence of $((-1)^n)$, hence either $1$ or $-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/565812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Elementary proof that $*$homomorphisms between C*-Algebras are norm-decreasing A lecturer once gave a very elementary proof that $*$-homomorphisms between C*-algebras are always norm-decreasing. It is well-known that this holds for a $*$-homomorphism between a Banach algebra and a C*-algebra, but all the proofs I find involve the spectral radius and so. If I remember it well, the proof he gave used the C*-algebra structure in the domain, and (as always) had something to do with a geometric series. Does anyone knows how to do so?
Let $f\colon A\to B$ be a *-homomorphism. Let us note that $f$ cannot enlarge spectra of self-adjoint elements in $A$, that is, for all $y\in A$ self-adjoint we have $\mbox{sp}(f(y))\setminus \{0\} \subseteq \mbox{sp}(y)\setminus \{0\}$. By the spectral radius formula, we have $\|y\|=r(y)$. Now, let $x\in A$. It follows that $\|f(x)\|^2 = \|f(x^*x)\| = r(f(x^*x))\leqslant r(x^*x)=\|x\|^2$. $\square$ Another strategy (involving geometric series) is to notice that $f$ is positive and $\|f\|=\|f(1)\|=1$ (in the unital case). In this case, you can tweak the proof given by julien in Why is every positive linear map between $C^*$-algebras bounded?
{ "language": "en", "url": "https://math.stackexchange.com/questions/565958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
For what values of $r$ does $y=e^{rx}$ satisfy $y'' + 5y' - 6y = 0$? For what values of $r$ does $y=e^{rx}$ satisfy $y'' + 5y' - 6y = 0$? Attempt: $y' = [e^{rx}] (r)$ $y''= r^2e^{rx}$
If you plug them in, you obtain: $$r^2+5r-6=0$$ Solving this equation you get $r=1$ or $r=-6$. That means that the general solution of the suggested ODE is: $$y(x)=ae^{x} + be^{-6x}, \quad (a,b) \in \Bbb R^2$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/566056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $Y$ and $Y \cup X$ are connected, must there be some $X$-component ($C$) where $C \cup Y$ is connected? This is a question I had while trying to solve a homework problem. My original approach was dependent upon the following statement being true. If $Y$ and $Y \cup X$ are connected, then there is some connected component of $X$, call it $C$, where $Y \cup C$ is connected. I eventually solved the homework problem using a very different method, but the question has been bugging me. I can't seem to come up with a proof or counterexample. I haven't been able to make more than trivial progress (for example, if $Y$ is not closed or some component $C$ is not closed, then $\overline{C} \cap \overline{Y} \neq \emptyset$ and you can show that one must contain a limit point of the other, therefore $Y \cup C$ is connected.) Does anyone have any insights?
There certainly is some connected component of $X$ where $Y \cup C$ is connected. If there wasn't, then you would get a disjoint union of $X$ and $Y$, a contradiction. Proof. Choose an arbitrary $x$ in $C$ with the restriction that it is the closest point to $Y$ (for some point $y \in Y$). For it to be connected there would have to exist an $\epsilon > 0$ such that $d(x, y) < \epsilon$. If there were no such component, that is, if $d(x, y) > \epsilon$ for all $x$ in $C$ (and since $C$ is an arbitrary subset of $X$, we can just claim $x \in X$ WLOG), then $X$ and $Y$ are clearly disconnected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/566124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $\gcd(a, b) = 1$ then $\gcd(ab, a+b) = 1$? In a mathematical demonstration, I saw: If $\gcd(a, b) = 1$ Then $\gcd(ab, a+b) = 1$ I could not find a counterexample, but I could not find a way to prove it either. Could you help me on this one?
First prove * *$\gcd(mn, k)=1$ if and only if $\gcd(m,k)=1$ and $\gcd(n,k)=1$. *If $\gcd(m,k)=1$ then $\gcd(m,m+k)=1$. The desired result follows from these like so: From $\gcd(a,b)=1$, we have $\gcd(a, a+b)=\gcd(b, a+b)=1$, implying $\gcd(ab, a+b)=1$.
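A brute-force confirmation of the statement over a range (a Python sketch, mine):

```python
from math import gcd

# gcd(a, b) = 1  implies  gcd(a*b, a+b) = 1, checked exhaustively.
for a in range(1, 200):
    for b in range(1, 200):
        if gcd(a, b) == 1:
            assert gcd(a * b, a + b) == 1
```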
{ "language": "en", "url": "https://math.stackexchange.com/questions/566193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 2 }
Infinite regular sets Would it be true that for all infinite regular subsets, each one contains subsets that are not c.e./r.e. (computably enumerable/recursively enumerable)? Intuitively this seems true because of sets that are uncountable.
Uncountability has nothing to do with it: none of the sets that you’re talking about in this context is uncountable. However, the statement is true; it follows from the pumping lemma for regular languages. If $L$ is regular and infinite, let $p$ be its pumping length, and let $w\in L$ be such that $|w|\ge p$. Then $w$ can be decomposed as $w=xyz$ in such a way that $|xy|\le p$, $|y|\ge 1$, and $xy^kz\in L$ for all $k\ge 0$. Let $A\subseteq\Bbb N$ be a non-r.e. set, and consider $\{xy^kz:k\in A\}\subseteq L$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/566250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compute $\int_0^{\infty}\frac{\cos(\pi t/2)}{1-t^2}dt$ Compute $$\int_0^{\infty}\frac{\cos(\pi t/2)}{1-t^2}dt$$ The answer is $\pi/2$. The discontinuities at $\pm1$ are removable since the limit exists at those points.
Define $$f(z)=\frac{e^{\pi iz/2}}{1-z^2}\;,\;\;C_R:=[-R,-1-\epsilon]\cup\gamma_{-1,\epsilon}\cup[-1+\epsilon,1-\epsilon]\cup\gamma_{1,\epsilon}\cup[1+\epsilon,R]\cup\Gamma_R$$ with $\;\epsilon, R\in\Bbb R^+\;$ and $$\gamma_{r,s}:=\{r+se^{it}\;;\;0\le t\le \pi\}\;,\;r\in\Bbb R\,,\,s\in\Bbb R^+\;,\;\Gamma_R:=\{Re^{it}\;;\;0\le t\le \pi\}\;$$ Now, as the two poles of the function are simple: $$\begin{align*}\text{Res}_{z=-1}(f)&=\lim_{z\to -1}(z+1)f(z)=\frac{e^{-\pi i/2}}{2}=-\frac i2\\ \text{Res}_{z=1}(f)&=\lim_{z\to 1}(z-1)f(z)=-\frac{e^{\pi i/2}}{2}=-\frac i2\end{align*}$$ so using the corollary to the lemma in the second answer here (watch the direction of the integration!), and the residue theorem for integration, and using also Jordan's Lemma, we get: $$0=\lim_{R\to\infty\,,\,\epsilon \to 0}\oint\limits_{C_R}f(z) dz=\int\limits_{-\infty}^\infty f(x)dx+\pi i(i)\implies$$ $$\int\limits_{-\infty}^\infty \frac{e^{\pi ix/2}}{1-x^2}dx=\pi$$ Now just compare real parts on both sides and take into account that your integrand function is an even one...
{ "language": "en", "url": "https://math.stackexchange.com/questions/566332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Closure of equivalence relations Show that the transitive closure of the symmetric closure of the reflexive closure of a relation R is the smallest equivalence relation that contains R. I can understand the statement intuitively but can't come up with a mathematical proof
HINT: Let $S$ be the transitive closure of the symmetric closure of the reflexive closure of $R$. You have to show three things: * *$R\subseteq S$. *$S$ is an equivalence relation. *If $E$ is an equivalence relation containing $R$, then $E\supseteq S$. The first of these is pretty trivial, and the second isn’t very hard: just show that the symmetric closure of a reflexive relation is still reflexive, and that the transitive closure of a symmetric, reflexive relation is still symmetric and reflexive. For (3), show that every ordered pair in $S$ must necessarily belong to any equivalence relation containing $R$.
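For a finite relation the three closures can be computed mechanically, which makes claims (1) and (2) concrete; here is a small sketch (the set $X$ and relation $R$ are made-up examples):

```python
def reflexive_closure(R, X):
    return R | {(x, x) for x in X}

def symmetric_closure(R):
    return R | {(b, a) for (a, b) in R}

def transitive_closure(R):
    S = set(R)
    while True:  # add compositions until a fixed point is reached
        new = {(a, d) for (a, b) in S for (c, d) in S if b == c}
        if new <= S:
            return S
        S |= new

X = {1, 2, 3, 4}
R = {(1, 2), (2, 3)}
S = transitive_closure(symmetric_closure(reflexive_closure(R, X)))

assert R <= S                                                      # contains R
assert all((x, x) in S for x in X)                                 # reflexive
assert all((b, a) in S for (a, b) in S)                            # symmetric
assert all((a, d) in S for (a, b) in S for (c, d) in S if b == c)  # transitive
print(sorted(S))   # 1, 2, 3 are all related to each other; 4 only to itself
```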
{ "language": "en", "url": "https://math.stackexchange.com/questions/566401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is $\int_0^1\frac{x^7-1}{\log(x)}\mathrm dx$? A problem from the 2012 MIT Integration Bee is $$ \int_0^1\frac{x^7-1}{\log(x)}\mathrm dx $$ The answer is $\log(8)$. Wolfram Alpha gives an indefinite form in terms of the logarithmic integral function, but times out doing the computation. Is there a way to do it by hand?
Yet another direct way forward is to use Frullani's Integral. To that end, let $I(a)$ be the integral given by $$I(a)=\int_0^1 \frac{x^a-1}{\log x}\,dx$$ Enforcing the substitution $\log x \to -x$ yields $$\begin{align} I(a)&=\int_{0}^{\infty} \frac{1-e^{-ax}}{x}\,e^{-x}\,dx\\\\ &=-\int_{0}^{\infty} \frac{e^{-(a+1)x}-e^{-x}}{x}\,dx \end{align}$$ whereupon using Frullani's Integral we obtain $$\bbox[5px,border:2px solid #C0A000]{I(a)=\log(a+1)}$$ For $a=7$, we have $$\bbox[5px,border:2px solid #C0A000]{I(7)=\log (8)}$$ as expected!
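A numerical cross-check of $I(7)=\log 8$ (a rough trapezoid-rule sketch; the endpoint values used are the limits of the integrand, which equal $0$ at $x=0$ and $7$ at $x=1$):

```python
import math

def g(x):
    if x == 0.0:
        return 0.0              # limit of (x^7 - 1)/log x as x -> 0+
    if x == 1.0:
        return 7.0              # limit of (x^7 - 1)/log x as x -> 1
    return (x**7 - 1) / math.log(x)

n = 100_000
h = 1.0 / n
val = h * ((g(0.0) + g(1.0)) / 2 + sum(g(i * h) for i in range(1, n)))
print(val, math.log(8))   # both ≈ 2.0794
```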
{ "language": "en", "url": "https://math.stackexchange.com/questions/566475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56", "answer_count": 7, "answer_id": 4 }
Show that $\sum_{n=0}^\infty (order\ {S_n})q^n=\prod_{m\ge 1}(1-q^m)^{-1}$ Let $T=(\mathbb C^*)^2$ act on $\mathbb C[x,y]$ via $(t_1,t_2)(x,y)=(t_1x,t_2y)$, and let $S_n$ be the set of ideals $I$ of $\mathbb C[x,y]$ such that $TI=I$ and $\mathbb C[x,y]/I$ is an $n$-dimensional $\mathbb C$-vector space. Note that order $S_0=1$, since the only such ideal with $n=0$ is $I=\mathbb C[x,y]$ itself. Show that $$\sum_{n=0}^\infty (order\ {S_n})q^n=\prod_{m\ge 1}(1-q^m)^{-1}.$$
Since this looks like a homework problem let me just give an outline. * *Prove that an ideal is invariant under the torus action if and only if it is generated by monomials. *Prove that monomial ideals $I \subset \mathbb C[x,y]$ can be identified with partitions. Hint: draw a square grid with squares $(i,j)_{i \geq 0, j \geq 0}$. Mark each square $(i,j)$ such that $x^i y^j \in I$ with a different color. What kind of shape do you get? *Apply Euler's generating function for partitions.
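Step 3 can be checked by machine for small $n$: expanding Euler's product $\prod_{m\ge1}(1-q^m)^{-1}$ as a power series and counting partitions directly give the same coefficients. A small illustrative sketch:

```python
def p_count(n, largest=None):
    """Number of partitions of n with all parts <= largest."""
    if largest is None:
        largest = n
    if n == 0:
        return 1
    return sum(p_count(n - k, k) for k in range(1, min(n, largest) + 1))

N = 12
coef = [1] + [0] * N
for m in range(1, N + 1):          # multiply the truncated series by 1/(1 - q^m)
    for i in range(m, N + 1):
        coef[i] += coef[i - m]

assert coef == [p_count(n) for n in range(N + 1)]
print(coef)   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77]
```

The in-place forward loop is the standard trick for multiplying a truncated power series by the geometric series $1+q^m+q^{2m}+\cdots$.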
{ "language": "en", "url": "https://math.stackexchange.com/questions/566551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finite extension of perfect field is perfect Let $E/F$ be a finite extension and $F$ be a perfect field. Here, perfect field means $char(F)=0$ or $char(F)=p$ and $F^p=F$. How to prove $E$ is also perfect field? For $char(F)=0$ case, it's trivial, but for $char(F)=p$, no improvement at all... Give me some hints
Hint: Recall that a field is perfect if and only if every finite extension is separable. Now, if $L/E$ finite weren't separable, then clearly $L/F$ is finite and isn't separable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/566619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prime Number Theorem estimate Update I have updated this question in light of the illuminating answers given below, which clearly point out my mistake. However, I still maintain that Legendre's original guess had some validity - it would seem, as the asymptotic starting point for the plot $\log x - \frac{x}{\pi(x)}$ as given below: It clearly will begin to converge to $1$ eventually (_Mathematica_ has a limit of not much more than PrimePi[$1 \times 10^{14}$]), but it does appear to have a starting point of around Legendre's first guess, so my initial question still holds in this regard. As an additional question, does anyone know how far along the number line one would have to go for $\log x - \frac{x}{\pi(x)}$ to come within (say) $\pm 0.01$ of $1$? Is there an asymptotic statement for this? Original question Could it be that the bounds for Stirling's approximation, given as $\frac{e}{\sqrt{2\pi}} = 1.0844375514...$ provide a more accurate 'guess' than Legendre's constant of $1.08366$ as an estimate for the convergence of the prime counting function as illustrated here? The discrepancy is illustrated below in a plot of $\log x - \frac{x}{\pi(x)}$: Compare: If this were the case, would it not give a closed form to the approximation? Edit A plot of $\frac{x}{\pi(x)}$, $\log x+1.084437\dots$ and $\log x+1$ in response to Eric Naslund's answers below.
No, your value is a worse approximation. The true approximation/value of Legendre's constant is $1$. By the prime number theorem, $$\pi(x)=\frac{x}{\log x}+\frac{x}{\log^{2}x}+O\left(\frac{x}{\log^{3}x}\right)=\frac{x}{\log x}\left(1+\frac{1}{\log x}+O\left(\frac{1}{\log^{2}x}\right)\right)$$ and so $$\frac{x}{\pi(x)}=\log x\left(1+\frac{1}{\log x}+O\left(\frac{1}{\log^{2}x}\right)\right)^{-1} $$ $$=\log x-1+O\left(\frac{1}{\log x}\right).$$ Your choice of $B$ approximates $\frac{x}{\pi(x)}$ by $\log x-1.084437\dots,$ and since $1.08366$ is closer to $1$, the true value, than $1.084437$ is, your constant is asymptotically the worse of the two.
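The slow drift of $\log x - x/\pi(x)$ toward $1$ is easy to observe with exact values of $\pi(x)$; a small sketch using a plain sieve:

```python
from math import log

def primepi(n):
    """Exact prime-counting function via the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

for x in (10**4, 10**5, 10**6):
    k = primepi(x)
    print(x, k, round(log(x) - x / k, 4))
# 10000 1229 1.0736
# 100000 9592 1.0876
# 1000000 78498 1.0763
```

At these small heights the quantity still hovers near Legendre's $1.08$, consistent with the question's plot; the convergence to $1$ only shows up much further out.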
{ "language": "en", "url": "https://math.stackexchange.com/questions/566725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Marking the point closest to each point We have $6000$ points in the plane. All distances between every pair of them are distinct. For each point, we mark red the point nearest to it. What is the smallest number of points that can be marked red? I divide the $6000$ points into $1000$ groups with $6$ points in each group. In each group I have one point being the center, and the other five points forming a regular pentagon around it. Then only the center points are marked red, for a total of $1000$ points. Note that using a regular hexagon is not possible, since the points will have equal distance, and using a heptagon or more will yield points other than the center being marked red.
This is a known problem, and I use its standard dramatic interpretation to solve it. Consider $n$ marksmen standing in a field (so that all their pairwise distances are different). Each marksman simultaneously shoots and kills the closest marksman. What is the smallest number $k(n)$ of marksmen that can be killed? Lemma 1. Each killed marksman has no more than 5 bullets in his body. Proof. Suppose that the marksman located at the point $O$ has $m\ge 6$ bullets in his body. Suppose that these bullets belong to marksmen located at the points $A_1,\dots, A_m$. Draw the rays $OA_1,\dots,OA_m$. Then there exist points $A_i$ and $A_j$, $i\not =j$, such that the angle $A_iOA_j\le \pi/3$. Therefore the angle $A_iOA_j$ is not the largest angle of the triangle $A_iOA_j$. Therefore the side $A_iA_j$ is not the largest side of the triangle $A_iOA_j$. But this is impossible, since, by the problem's conditions, $|A_iA_j|>|A_iO|$ and $|A_jA_i|>|A_jO|$.$\square$ Lemma 2. $k(n)\ge n/5$ for each $n$. Proof. Count the number $b$ of bullets shot in two ways. First, clearly, $b=n$. On the other hand, by Lemma 1, $5k(n)\ge b$. This implies the lemma.$\square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/566866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
The tangent to the curve $y=x^2+1$ at $(2,5)$ meets the normal to the curve at $(1, 2)$ Find the coordinates of the point where the tangent to the curve $y=x^2+1$ at the point $(2,5)$ meets the normal to the same curve at the point $(1,2).$ I tried to form 2 equations for each set of coordinates given, then solve then simultaneously to get the 'y and x' coordinates needed, however I didn't seem to get the answer right.
* *First, you need to find $f'(x)= 2x$. *Then to evaluate the slope of the tangent line at the point $(2, 5)$, $m_1 = f'(2) = 4$. With slope $m_1=4$, and the point $(x_0, y_0) = (2, 5)$, use the slope-point form of an equation to obtain the equation of that tangent line. $(y - y_0) = m_1(x - x_0)\tag{Point-Slope Equation Of Line}$ * *To find the slope of the normal line at the point $(1, 2)$, first evaluate $f'(1) = m_2= 2$, the slope of the tangent line at that point. Then the slope of the normal line to the curve at that same point is $m_\perp = -\dfrac 1{m_2} = -\dfrac 12$. Using the slope of the normal line $m_\perp = -\dfrac 12$ and the point $(x_0, y_0) = (1, 2)$, use the slope point form of an equation to obtain the equation of the line normal to the curve at $(1, 2)$: $$(y - y_0) = m_\perp(x - x_0)$$ *Finally, find where the tangent line and the normal line intersect by setting the equations equal to one another. Solve for $x$, then find the corresponding $y$ value using the equation of either line.
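Carrying the steps above through in exact arithmetic (a sketch; the fractions shown are the values the steps produce):

```python
from fractions import Fraction

m1 = 2 * 2                      # f'(2): slope of the tangent at (2, 5)
m_perp = Fraction(-1, 2 * 1)    # -1/f'(1): slope of the normal at (1, 2)

# tangent: y = 5 + m1 (x - 2);  normal: y = 2 + m_perp (x - 1)
# setting them equal and solving for x:
x0 = (2 - m_perp - 5 + 2 * m1) / (m1 - m_perp)
y0 = 5 + m1 * (x0 - 2)

print(x0, y0)   # 11/9 17/9
```

So the two lines meet at $\left(\frac{11}{9}, \frac{17}{9}\right)$.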
{ "language": "en", "url": "https://math.stackexchange.com/questions/566954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A lot of terms to calculate lim I'm trying to prepare for an exam and I came across a limit to calculate. $$ \lim_{n\to\infty} \frac{2^n + \left(1+\dfrac{1}{n^2}\right)^{n^3} + \dfrac {4^n}{n^4}}{\dfrac {4^n}{n^4} + n^3\cdot 3^n} $$ When I try to extract $4^n$ I end up with nothing. I managed to show that $(1+\frac{1}{n^2})^{n^3}$ goes to infinity by bounding $(1+\frac{1}{n^2})^{n^2}$ between $2$ and $3$ (so the whole expression eventually lies between $2^n$ and $3^n$), since I know that $(1+\frac{1}{n^2})^{n^2}$ goes to $e$ as $n$ increases. I'd appreciate some help or tips on this one, because I've been stuck on it for over an hour and couldn't find any clues on how to solve it online. Chris
First note that $\left(1+\frac{1}{n^2}\right)^{n^3}=e^{n^3\ln(1+1/n^2)}=e^{n+o(1)}$, so we may replace it by $e^n$ up to a factor tending to $1$. Then $$\frac{2^n + e^n + \frac{4^n}{n^4}}{\frac{4^n}{n^4} + 3^nn^3} = \frac{2^nn^4 + e^nn^4 + 4^n}{4^n + n^73^n} = \frac{\frac{n^4}{2^n} + (\frac{e}{4})^n\cdot n^4 + 1}{1 + (\frac{3}{4})^nn^7} $$ Now, since $a^n$ grows faster than $n^b$ for every $a>1$ and every $b$ (and $a^n\to 0$ very fast when $0<a<1$), each of those fractions tends to zero. That leaves $1$ both in the numerator and in the denominator, so the limit is $1$.
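The convergence is real but quite slow, because $(\tfrac34)^n n^7$ only becomes small for fairly large $n$; a quick numerical sketch of the reduced form:

```python
from math import e

def ratio(n):
    # reduced form of the expression after dividing top and bottom by 4^n
    num = n**4 / 2**n + (e / 4)**n * n**4 + 1
    den = 1 + (3 / 4)**n * n**7
    return num / den

print(ratio(50), ratio(300))   # ≈ 2.3e-06, then ≈ 1.0
assert abs(ratio(300) - 1) < 1e-10
```

At $n=50$ the denominator term $(\tfrac34)^n n^7$ is still about $4\times10^5$, which is why the value is nowhere near $1$ yet.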
{ "language": "en", "url": "https://math.stackexchange.com/questions/567018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that there is an infinite number of rationals between any two reals I just stumbled upon this question: Infinite number of rationals between any two reals.. As I' not sure about my idea of a proof, I do not want to post this as an answer there, but rather formulate as a question. My idea is as follows: * *$\mathbb{Q} \subset \mathbb{R}$ *$\forall a,b \in \mathbb{R}$ with $a>b, \exists q_0 \in \mathbb{Q}$ s.t. $a > q_0 > b$ (which is proven e.g. on Proofwiki) *As $\mathbb{Q} \subset \mathbb{R}, q_0 \in \mathbb{R}$ *For $a, q_0$, repeat step 2 to find $q_1 \in \mathbb{Q}$ s.t. $a > q_1 > q_0 > b$ *Repeat ad infinitum Thus, there have to be infinitely many rationals between any two reals. Can you argue like this, or is there anything wrong in my line of reasoning?
Since $q_0$ has been found such that $a > q_0 > b$, you can argue by induction: For every integer $n$, let $P(n)$ be: there exist $q_0, \cdots, q_n \in \Bbb Q$ such that $a>q_0> \cdots >q_n >b$. Then: (i) $P(0)$ is true. (ii) Suppose $P(n)$ is true for some $n\in \Bbb N$. Pick $q_{n+1} \in \Bbb Q$ such that $ q_n > q_{n+1} > b$ (possible by the density result in your step 2); then $a > q_0 > \cdots > q_n >q_{n+1} > b$ and $P(n+1)$ is true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/567126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
DPLL Algorithm $ \rightarrow $ Resolution proof $ \rightarrow $ Craig Interpolation I really need help here for an exam that I have tomorrow. Let's say I have a set of constraints: $ c1 = { \lnot a \lor \lnot b } \\ c2 = { a \lor c } \\ c3 = { b \lor \lnot c } \\ c4 = { \lnot b \lor d } \\ c5 = { \lnot c \lor \lnot d } \\ c6 = { c \lor e } \\ c7 = { c \lor \lnot e } \\ $ As I perform the DPLL algorithm on these constraints, the result is that they are unsatisfiable. However, I have to show a resolution proof. Just for convenience you can take a look at this proof (see picture below). The thing that's left to do is to compute the Craig interpolant - which is not really that hard - but I don't know how to define the sets $ \varphi_1 $ and $ \varphi_2 $. For this example I know that $ \varphi_1 = \{ c_1, c_2, c_3, c_5 \} $ and $ \varphi_2 = \{ c_4, c_6, c_7 \} $ But I don't know how I can come up with that. Can anybody help me here? PS: Feel free to add some keywords to this question. The keywords "McMillan-Interpolation", "DPLL-Algorithm", etc. do not exist yet .. EDIT: There is a little error in my notes: The arrows that point from DPLL to those graphs $\rightarrow$ the $\lnot a$ is $a$ and the other respectively not $a$ but $\lnot a$.
I'm not quite certain what your actual question is, but: * *The lower part of the page you scanned shows a resolution proof. I haven't checked every step, but at least it looks correct. *About choosing the variable sets in Craig interpolation: It is not possible to come up with these sets out of thin air - they must be given. The reason for this is that the Craig Interpolation Theorem is stated like this: Let $\varphi_1$ and $\varphi_2$ be formulas such that $\varphi_1 \land \varphi_2$ is unsatisfiable. Then there is a formula $\varphi_i$, containing only variables that occur in both $\varphi_1$ and $\varphi_2$, such that $\varphi_1 \models \varphi_i$ and $\varphi_i \land \varphi_2$ is unsatisfiable. If you don't know the formulas $\varphi_1$ and $\varphi_2$, the theorem cannot be applied.
{ "language": "en", "url": "https://math.stackexchange.com/questions/567192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
probability and combinatorics mixed question A bus follows its route through nine stations, and contains six passengers. What is the probability that no two passengers will get off at the same station? no detailed solution is required here but an idea of the general line of thought could be nice...
This is an occupancy problem. You need to count the number of ways that 6 balls can get put into 9 sacks, such that each sack has at most 1 ball in it. Hint: since at most one person gets of at each bus stop, you are putting an order on the bus stops.
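Under the usual assumption that each passenger independently picks one of the $9$ stops uniformly at random, the count can be brute-forced and matches the formula $9\cdot8\cdot7\cdot6\cdot5\cdot4/9^6$ (a sketch):

```python
from itertools import product
from math import perm

stops, passengers = 9, 6
# enumerate every assignment of passengers to stops and keep the injective ones
favorable = sum(1 for a in product(range(stops), repeat=passengers)
                if len(set(a)) == passengers)
total = stops ** passengers

assert favorable == perm(stops, passengers) == 60480
assert total == 531441
print(favorable / total)   # ≈ 0.1138
```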
{ "language": "en", "url": "https://math.stackexchange.com/questions/567391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Expression for arbitrary powers of a particular $2\times2$ matrix Given $$\mathbf{M}= \begin{pmatrix} 7 & 5 \\ -5 & 7 \\ \end{pmatrix}, $$ what is a closed-form formula for $\mathbf{M}^n$? The eigenvalues and eigenvectors are complex, but I need real-valued formulas for each component of the resulting matrix.
$\textbf M=\begin{bmatrix}7&5\\-5&7\end{bmatrix}$ The eigenvalues will be the roots of the characteristic polynomial, $\lambda^2-(\mathrm{tr}\ \textbf M)\lambda+\det\textbf M=0$. $\lambda^2-14\lambda+74=0$, so $\lambda=7\pm5i$. Thus, the diagonalization is $\textbf M=\textbf A\cdot\begin{bmatrix}7+5i&0\\0&7-5i\end{bmatrix}\cdot\textbf A^{-1}$, where the columns of $\textbf A$ are the corresponding eigenvectors, so we must solve $\textbf M \begin{bmatrix}v_1\\v_2\end{bmatrix}=\lambda\begin{bmatrix}v_1\\v_2\end{bmatrix}$. We obtain $$\begin{align} 7v_1+5v_2&=\lambda v_1 \\ -5v_1+7v_2&=\lambda v_2 \end{align}$$ Or $$\begin{align} \mp 5v_1i+5v_2&=0 \\ -5v_1\mp 5v_2i&=0 \end{align}$$ You should be able to solve it from here.
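A useful shortcut here: $\mathbf M = 7I + 5J$ with $J=\begin{bmatrix}0&1\\-1&0\end{bmatrix}$ and $J^2=-I$, so powers of $\mathbf M$ mirror powers of the complex number $7+5i$. Writing $7+5i=\sqrt{74}\,e^{i\theta}$ with $\theta=\arctan(5/7)$ gives the real closed form $\mathbf M^n = 74^{n/2}\begin{bmatrix}\cos n\theta & \sin n\theta\\ -\sin n\theta & \cos n\theta\end{bmatrix}$. A quick numerical check of this identification (sketch):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def M_power(n):
    """Closed form: read off (7+5j)**n as the matrix [[a, b], [-b, a]]."""
    w = (7 + 5j) ** n
    return [[w.real, w.imag], [-w.imag, w.real]]

M = [[7, 5], [-5, 7]]
P = [[1, 0], [0, 1]]
for _ in range(6):
    P = mat_mul(P, M)          # exact integer M^6 by repeated multiplication

W = M_power(6)
assert all(abs(P[i][j] - W[i][j]) < 1e-6 * 74**3
           for i in range(2) for j in range(2))
print(P)   # [[-338976, -222040], [222040, -338976]]
```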
{ "language": "en", "url": "https://math.stackexchange.com/questions/567475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove the estimate $|u(x,t)|\le Ce^{-\gamma t}$ Assume that $\Omega \subset \Bbb R^n$ is an open bounded set with smooth boundary, and $u$ is a smooth solution of \begin{cases} u_t - \Delta u +cu = 0 & \text{in } \Omega \times (0, \infty), \\ u|_{\partial \Omega} = 0, \\ u|_{t=0} = g \end{cases} and the function $c$ satisfies $c \ge \gamma \ge 0$ for some constant $\gamma$. Prove the estimate $|u(x,t)|\le Ce^{-\gamma t}$ for all $x \in \Omega, t \in [0,T]$ for any fixed $T > 0$. Could someone please explain to me how I can get the absolute value in the estimate? If I use an energy estimate then I'll have the $L_2$ norm... Any help will be much appreciated!
Let me elaborate on my comment that it is indeed possible to do this via energy methods. Let $p \geq 2$ be an even integer (so that $|u|^p=u^p$), differentiate under the integral, and apply the chain rule to get $$\partial_t \|u(t)\|_p^p = \int_\Omega \! p\,u^{p-1}u_t \, dx = p\int_\Omega \! u^{p-1}(\Delta u - cu) \, dx$$ Now integrate by parts (the boundary term vanishes since $u|_{\partial\Omega}=0$): $$= -p(p-1)\int_\Omega \! u^{p-2}\,|\nabla u|^2 \, dx - p\int_\Omega \! c\,u^{p} \, dx $$ The first term is less than or equal to $0$ (note $u^{p-2}\ge 0$ because $p$ is even) and $c \geq \gamma \geq 0$. So $$ \le 0 -p\gamma \int_\Omega \! |u|^p \, dx$$ Thus we have shown $$\partial_t \|u(t)\|_p^p \le - p\gamma\|u(t)\|_p^p.$$ By Gronwall's inequality this implies $$\|u(t)\|_p^p \le e^{-p\gamma t}\|u(0)\|_p^p$$ and taking $p$-th roots and inserting the initial condition $$\|u(t)\|_p \le e^{-\gamma t}\|g\|_p.$$ Since $u$ is a priori bounded, we can take the limit as $p \to \infty$ to get $$\|u(t)\|_\infty \le \|g\|_\infty e^{-\gamma t}.$$ This is what you want to prove with $C = \|g\|_\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/567566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Symbol for "is closest to"? I am writing a paper on probabilities and we have to find a $k$ such that $P_n(k)$ is "closest to" $P_0$. $P_0$ is the probability of getting 4-of-a-kind in a five card hand from a standard 52 card deck. $P_n(k)$ is the probability of getting $k$-of-a-kind in an $n$ card hand from some modified 88 card deck. I want to say that getting 5-of-a-kind (there are 11 suits) in a 5 card hand for our modified deck produces a probability "most similar" or "closest to" $P_0$. So would this be ok for a theorem? $P_5(k)$ is the probability closest to $P_0$ when $k=5$. That is, $\lvert P_0 - P_5(k)\rvert$ is minimized for $k=5$.
You could either say exactly what you said above, or, more formally: $|P_0-P_5(k)|$ is minimized over $k$ at $k=5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/567650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$1 + \frac{1}{1+2}+ \frac{1}{1+2+3}+ ... + \frac{1}{1+2+3+...+n} = ?$ How do I simplify the following series $$1 + \frac{1}{1+2}+ \frac{1}{1+2+3}+ \frac{1}{1+2+3+4} + \frac{1}{1+2+3+4+5} + \cdot\cdot\cdot + \frac{1}{1+2+3+\cdot\cdot\cdot+n}$$ From what I see, each term is the inverse of a sum of consecutive natural numbers. The $n$th term is $$a_n = \frac{1}{\sum\limits_{k = 1}^{n} k} = \frac{2}{n(n+1)}$$ $$\Rightarrow \sum\limits_{n=1}^{N} a_n = \sum\limits_{n=1}^{N} \frac{2}{n(n+1)}$$ ... and I'm stuck. I've never actually done this kind of problem before (I am new to sequences & series). So, a clear and detailed explanation of how to go about it would be most appreciated. PS- I do not know how to do a telescoping series!!
HINT: $$\frac {2} {n(n+1)} = \frac 2 n - \frac 2 {n+1}$$
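With that decomposition the sum telescopes to $2-\frac{2}{N+1}$, which tends to $2$; an exact-arithmetic sketch:

```python
from fractions import Fraction

def partial_sum(N):
    return sum(Fraction(2, n * (n + 1)) for n in range(1, N + 1))

for N in (5, 50, 500):
    # telescoping: sum_{n=1}^{N} (2/n - 2/(n+1)) = 2 - 2/(N+1)
    assert partial_sum(N) == 2 - Fraction(2, N + 1)

print(partial_sum(500))   # 1000/501, i.e. 2 - 2/501
```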
{ "language": "en", "url": "https://math.stackexchange.com/questions/567736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Solving for the zero of a multivariate How does one go about finding the roots of the following equation $$x+y+z=xyz$$ There are simply too many variables. Does anyone have an idea?
If we fix one of the variables, we get a hyperbola in that plane. So, for example, fixing any $z = z_0,$ this is your relationship: $$ \left(x - \frac{1}{z_0} \right) \left(y - \frac{1}{z_0} \right) = \; 1 + \frac{1}{z_0^2} $$ Makes me think the surface could be connected. Indeed, as $|z| \rightarrow \infty,$ the curve approaches the fixed curve $xy=1.$ Meanwhile, the surface is smooth. The gradient of the function $xyz-x-y-z$ is only the zero vector at $(1,1,1)$ and $(-1,-1,-1)$ but those are not part of the surface. And, in the plane $x+y+z = 0,$ the surface contains the entirety of three lines that meet at $60^\circ$ angles, these being the intersections with the planes $x=0$ or $y=0$ or $z=0.$ EDIT: the surface actually has three connected components, sometimes called sheets in the context of hyperboloids. One is in the positive octant $x,y,z > 0,$ with $xy>1, xz>1, yz > 1$ in addition. The nearest point to the origin is $(\sqrt 3,\sqrt 3,\sqrt 3 ),$ and the thing gets very close to the walls as $x+y+z$ increases. The middle sheet goes through the origin and, near there, is a monkey saddle. The third sheet is in the negative octant. About the middle sheet: you may recall that the real part of $(x+yi)^3$ is $x^3 - 3 x y^2.$ The graph $z =x^3 - 3 x y^2 $ is, near the origin, a monkey saddle, room for two legs and a tail for a monkey sitting on a horse. If we rotate coordinates by $$ x = \frac{u}{\sqrt 3 } - \frac{v}{\sqrt 2 } - \frac{w}{\sqrt 3 }, $$ $$ y = \frac{u}{\sqrt 3 } + \frac{v}{\sqrt 2 } - \frac{w}{\sqrt 3 }, $$ $$ z = \frac{u}{\sqrt 3 } + \frac{2w}{\sqrt 3 }, $$ the surface becomes $$ \color{magenta}{ u \; \left( 1 + \frac{v^2}{6} + \frac{w^2}{6} - \frac{u^2}{9} \right) = \frac{1}{9 \sqrt 2} \; \left( w^3 - 3 w \, v^2 \right).} $$ So, say within the ball $u^2 + v^2 + w^2 < 1,$ the multiplier of $u$ on the left is quite close to $1,$ and we have a monkey saddle. 
Took a bit of work to confirm: given constants $a+b+c = 0,$ and looking at $$ x=a+t, y = b+t, z = c+t, $$ there are exactly three distinct real values of $t$ that solve $xyz=x+y+z.$
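That last claim can be stress-tested numerically: the substitution turns $xyz=x+y+z$ into the depressed cubic $t^3+(ab+bc+ca-3)\,t+abc=0$, and with $a+b+c=0$ one can check that its discriminant stays strictly positive, which is exactly the condition for three distinct real roots. A sketch:

```python
import random

def discriminant(a, b):
    """Discriminant of t^3 + (ab+bc+ca-3) t + abc, with c = -a-b forced."""
    c = -a - b
    p = a * b + b * c + c * a - 3
    q = a * b * c
    return -4 * p**3 - 27 * q**2   # > 0  <=>  three distinct real roots

random.seed(1)
assert all(discriminant(random.uniform(-10, 10), random.uniform(-10, 10)) > 0
           for _ in range(10_000))
print(discriminant(0.0, 0.0))   # 108.0: even the symmetric point has margin
```

In fact, since $ab+bc+ca=-\frac12(a^2+b^2+c^2)\le 0$ when $a+b+c=0$, one can bound the discriminant below by a positive constant, matching the "exactly three distinct real values" statement.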
{ "language": "en", "url": "https://math.stackexchange.com/questions/567803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving that if these quadratics are equal for some $\alpha$, then their coefficients are equal Let $$P_1(x) = ax^2 -bx - c \tag{1}$$ $$P_2(x) = bx^2 -cx -a \tag{2}$$ $$P_3(x) = cx^2 -ax - b \tag{3}$$ Suppose there exists a real $\alpha$ such that $$P_1(\alpha) = P_2(\alpha) = P_3(\alpha)$$ Prove $$a=b=c$$ Equating $P_1(\alpha)$ to $P_2(\alpha)$ $$\implies a\alpha^2 - b\alpha - c = b\alpha^2 - c\alpha - a$$ $$\implies (a-b)\alpha^2 + (c-b)\alpha + (a-c) = 0$$ Let $$Q_1(x) = (a-b)x^2 + (c-b)x + (a-c)$$ This implies, $\alpha$ is a root of $Q_1(x)$. Similarly, equating $P_2(\alpha)$ to $P_3(\alpha)$ and $P_3(\alpha)$ to $P_1(\alpha)$, and rearranging we obtain quadratics $Q_2(x)$ and $Q_3(x)$ with common root $\alpha$: $$Q_2(x) = (b-c)x^2 + (a-c)x + (b-a)$$ $$Q_3(x) = (c-a)x^2 + (b-a)x + (c-b)$$ $$Q_1(\alpha) = Q_2(\alpha) = Q_3(\alpha) = 0$$ We have to prove that this is not possible for non-constant quadratics $Q_1(x), Q_2(x), Q_3(x)$. EDIT: I also noticed that for distinct $a, b, c \in \{1, 2, 3\}$: $$Q_a(x) + Q_b(x) = -Q_c(x)$$
Denote $$Q_1(x)=P_1(x)-P_2(x)=(a-b)x^2-(b-c)x-(c-a);$$ $$Q_2(x)=P_2(x)-P_3(x)=(b-c)x^2-(c-a)x-(a-b);$$ $$Q_3(x)=P_3(x)-P_1(x)=(c-a)x^2-(a-b)x-(b-c).$$ Then $\alpha$ is a real root of the equations $Q_i(x);$ so that $\Delta_{Q_i(x)}\geq 0 \ \ \forall i=1,2,3;$ where $\Delta_{f(x)}$ denoted the discriminant of a quadratic function $f$ in $x.$ Now, using this we have, $$(b-c)^2+4(a-b)(c-a)\geq 0;$$ $$(c-a)^2+4(a-b)(b-c)\geq 0;$$ $$(a-b)^2+4(b-c)(c-a)\geq 0.$$ Summing these up and using the identity $$\begin{aligned}(a-b)^2+(b-c)^2+(c-a)^2+2\left(\sum_\text{cyc}(a-b)(c-a)\right)\\=[(a-b)+(b-c)+(c-a)]^2=0;\end{aligned}$$ We obtain, $$\begin{aligned}0&\leq 2\left(\sum_\text{cyc}(a-b)(c-a)\right)=2\left(\sum_\text{cyc}(ca+ab-bc-a^2)\right)\\&=-\left[(a-b)^2+(b-c)^2+(c-a)^2\right];\end{aligned}$$ Possible if and only if $(a-b)^2+(b-c)^2+(c-a)^2=0\iff a=b=c.$ We are done
{ "language": "en", "url": "https://math.stackexchange.com/questions/567854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Rudin Example 3.35B Why the $n$th root of $a_n$ is $1/2$? For Baby Rudin Example 3.35(b), I understand how the $\liminf$ and $\limsup$ of the ratio test were found, but I am not clear why $\ \lim \sqrt[n]{a_n } = \frac{1}{2} $. Please help.
The sequence in question is $$\frac{1}{2} + 1 + \frac{1}{8} + \frac{1}{4}+ \frac{1}{32}+ \frac{1}{16}+\frac{1}{128}+\frac{1}{64}+\cdots$$ In case the pattern is not clear: we double the first term, then divide the next by $8$, then double, then divide by $8$, and so on. The general formula for an odd term is $a_{2k-1}=\frac{1}{2^{2k-1}}$. The formula for an even term is $a_{2k}=\frac{1}{2^{2k-2}}$. In the first case, $\sqrt[n]{a_n}=\frac{1}{2}$ exactly. In the second, the limit is $\frac{1}{2}$. Since the $n$th roots of both the even and odd terms converge to $\frac{1}{2}$, you have your desired result.
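The contrast between the two tests is easy to see numerically (a small sketch): the ratio $a_{n+1}/a_n$ forever oscillates between $2$ and $\frac18$, while $\sqrt[n]{a_n}$ settles down at $\frac12$.

```python
def a(n):                       # n >= 1
    return 2.0 ** (-n) if n % 2 == 1 else 2.0 ** (-(n - 2))

ratios = [a(n + 1) / a(n) for n in range(1, 100)]
roots = [a(n) ** (1.0 / n) for n in range(1, 100)]

assert min(ratios) == 0.125 and max(ratios) == 2.0   # ratio test: liminf 1/8, limsup 2
assert abs(roots[-1] - 0.5) < 0.02                   # n-th roots approach 1/2
```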
{ "language": "en", "url": "https://math.stackexchange.com/questions/567955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that any line passing through the intersection of two bisectors is also a bisector. Given an arbitrary closed shape $F$, a line, $H$, that bisects $F$ horizontally, and a line, $V$, that bisects $F$ vertically, is it true that any line that passes through the intersection of $H$ and $V$ also bisects $F$? I know that any given line $L$ through the intersection must divide $F$ into 6 (possibly empty) regions. It looks kinda like this: L\ V ┌\──┼───┐ │ \B│ C │ │A \│ │ H─┼───\───┼── │ │\ D│ │ F │E\ │ └───┼──\┘ │ \ So I have $$A+B+C = C+E+D = D+E+F = F+A+B$$ and $$A+B = C = D+E = F.$$ I just need to prove that $A = D$ or $B = E$, but no matter how I manipulate the given equations, I can't seem to isolate $A$ and $B$ or $D$ and $E$. I think I should be using the fact that $L$ is a straight line somehow, since the given equations could still be true even if $L$ were curved. My idea is that, any closed shape should have a centroid, like the center of mass, such that every line that passes through it should bisect the shape. Since there can only one such centroid, it must be the intersection of any two bisectors.
The lines which bisect the area of a triangle form an envelope as shown in this picture The blue medians intersect in the centroid of the triangle, but no other lines through the centroid bisect the area of the triangle; none of the green bisectors of the area of the triangle pass through the centroid. It is easiest to consider the horizontal green line parallel to the base. It must be $\frac1{\sqrt{2}}$ of the way down to bisect the area of the triangle, rather than two-thirds of the way down to pass through the centroid.
{ "language": "en", "url": "https://math.stackexchange.com/questions/568044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Countability (show set is countable) Show that the set $\mathbb{Z}_+\times\mathbb{Z}_+$ is countable. To solve this you have to show a one to one correspondence $\mathbb{Z}_+\times\mathbb{Z}_+\to\mathbb{Z}_+$. Then my book recommends using $f(m,n) = 2^m\times3^n$ (or any other primes) to show it is one to one. Where do these numbers come from, and how do you know to use primes? What is the thinking behind this?
It isn't important that you use the primes 2 or 3. The fact that there is a one to one correspondence between $\mathbb{Z}_+ \times \mathbb{Z}_+$ and $f(\mathbb{Z}_+,\mathbb{Z}_+)$ is a consequence of the unique factorization in $\mathbb{Z}$. The map $$f: \mathbb{Z}_+ \times \mathbb{Z}_+ \to f(\mathbb{Z}_+,\mathbb{Z}_+) $$ is clearly surjective. It is also injective because $2^{n_1}3^{m_1}$ can only be equal to $2^{n_2}3^{m_2}$ when $n_1=n_2$ and $m_1=m_2$, because of unique factorization. Therefore there is a bijective correspondence between $\mathbb{Z}_+ \times \mathbb{Z}_+$ and $f(\mathbb{Z}_+,\mathbb{Z}_+)$. That last set is countable as a subset of the countable set $\mathbb{Z}_+$, and therefore so is $\mathbb{Z}_+ \times \mathbb{Z}_+$.
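The unique-factorization argument is concrete enough to test: $f(m,n)=2^m3^n$ never collides, and $(m,n)$ can be recovered from $f(m,n)$ just by dividing out $2$s and $3$s (a sketch over a finite range):

```python
def f(m, n):
    return 2**m * 3**n

def f_inv(k):
    """Recover (m, n) from 2^m * 3^n by unique factorization."""
    m = n = 0
    while k % 2 == 0:
        k //= 2
        m += 1
    while k % 3 == 0:
        k //= 3
        n += 1
    return (m, n)

pairs = [(m, n) for m in range(1, 16) for n in range(1, 16)]
assert len({f(m, n) for m, n in pairs}) == len(pairs)     # no collisions here
assert all(f_inv(f(m, n)) == (m, n) for m, n in pairs)    # fully invertible
```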
{ "language": "en", "url": "https://math.stackexchange.com/questions/568127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Value of sum of telescoping series $$\sum_{n\geqslant1}\left(\frac{1}{\sqrt{n}} -\frac{1}{\sqrt{n+2}}\right)$$ In looking at the first five partial sums, I am not convinced the series is telescopic (the middle terms don't cancel out). Thanks in advance!
$\sum_{k=1} ^n \left(\frac{1}{\sqrt{k}}- \frac{1}{\sqrt{k+2}}\right)$ $= \frac{1}{\sqrt{1}}- \frac{1}{\sqrt{3}}+ \frac{1}{\sqrt{2}}- \frac{1}{\sqrt{4}} +\frac{1}{\sqrt{3}}- \frac{1}{\sqrt{5}}+ \frac{1}{\sqrt{4}}- \frac{1}{\sqrt{6}}+ \frac{1}{\sqrt{5}}- \frac{1}{\sqrt{7}}+...+ \frac{1}{\sqrt{n-2}}- \frac{1}{\sqrt{n}}+ \frac{1}{\sqrt{n-1}}- \frac{1}{\sqrt{n+1}}+ \frac{1}{\sqrt{n}}- \frac{1}{\sqrt{n+2}}$ The terms don't cancel immediately: each $-\frac{1}{\sqrt{k+2}}$ is cancelled by the $+\frac{1}{\sqrt{k+2}}$ appearing two terms later, so this is a telescoping sum with a gap of two rather than the textbook kind where consecutive terms cancel. After the cancellation only the first two positive terms and the last two negative terms survive, giving the partial sum $1+\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{n+1}}-\frac{1}{\sqrt{n+2}}$, which tends to $1+\frac{1}{\sqrt{2}}$.
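A numerical sketch confirms that the partial sums match the telescoped closed form $1+\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{n+1}}-\frac{1}{\sqrt{n+2}}$ and approach $1+\frac{1}{\sqrt{2}}$:

```python
from math import sqrt

def partial(N):
    return sum(1 / sqrt(n) - 1 / sqrt(n + 2) for n in range(1, N + 1))

N = 100_000
closed = 1 + 1 / sqrt(2) - 1 / sqrt(N + 1) - 1 / sqrt(N + 2)
assert abs(partial(N) - closed) < 1e-9
print(partial(N), 1 + 1 / sqrt(2))   # ≈ 1.7007 and ≈ 1.7071
```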
{ "language": "en", "url": "https://math.stackexchange.com/questions/568227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Can you factor out vectors? My prof introduced eigenvalues to us today: Let $A$ be an $n \times n$ matrix. If there is a scalar $\lambda$ and an $n\times 1$ non-zero column vector $u$ such that $$Au = \lambda u$$ then $\lambda$ is called an eigenvalue and $u$ is called an eigenvector. $$Au - \lambda u = 0$$ $$\implies (A - \lambda I)u = 0$$ $$\implies \det(A - \lambda I) = 0$$ How did he get from $Au - \lambda u = 0$ to $\implies (A - \lambda I)u = 0$? It looks like he factored out the vector, but I thought you could only factor out constants? If you can factor out vectors can you explain why?
The distributive laws apply to matrix (or matrix-vector) multiplication: $$\eqalign{(A+B) C &= AC + BC\cr A(C+D) &= AC + AD\cr}$$ whenever $A$, $B$, $C$, $D$ have the right dimensions for these to make sense. In particular, $\lambda u=(\lambda I)u$, so $Au-\lambda u = Au-(\lambda I)u = (A-\lambda I)u$ by the first law.
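Concretely, with a hand-picked toy eigenpair (a sketch): $A=\begin{bmatrix}2&1\\1&2\end{bmatrix}$ has eigenvalue $3$ with eigenvector $u=(1,1)^T$, and the "factored" form gives the same zero vector:

```python
A = [[2, 1], [1, 2]]
lam, u = 3, [1, 1]                       # a known eigenpair of A

Au = [sum(A[i][k] * u[k] for k in range(2)) for i in range(2)]
assert Au == [lam * u[k] for k in range(2)]          # A u = lam u

B = [[A[i][j] - lam * (i == j) for j in range(2)]    # B = A - lam I
     for i in range(2)]
Bu = [sum(B[i][k] * u[k] for k in range(2)) for i in range(2)]
assert Bu == [0, 0]                                  # (A - lam I) u = 0

det_B = B[0][0] * B[1][1] - B[0][1] * B[1][0]
assert det_B == 0                                    # hence det(A - lam I) = 0
```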
{ "language": "en", "url": "https://math.stackexchange.com/questions/568298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to find all vectors so that a vector equation can be solved? Unfortunately, my text book doesn't clarify this process at all. It's asking to find all vectors $[a, b]$ so that the vector equation can be solved. The vector equation is: $c_1 [3,1] + c_2 [6,2]=[a, b]$ The linear system would look like: $3c_1+6c_2=a$ $c_1+2c_2=b$ My text doesn't give any indication how to solve this and I'm stuck. Any help would be appreciated! Also, apologies since I can't get the vector forms inserted properly!
Hint: You have two equations, and two variables (we treat a, b as constants). Set up the associated augmented coefficient matrix, row reduce, and solve for $c_1, c_2$, which can each be expressed as functions of $a, b$. From that, you should also be able to express $a, b$ as functions of the constants $c_1, c_2$. Associated augmented coefficient matrix to your system of equations: $$\begin{pmatrix} 3 & 6 &\mid& a\\ 1 & 2 & \mid & b\end{pmatrix}$$ TIP: Be sure to choose a value for $b$ that will ensure the system is consistent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/568486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Recurrence sequence limit I would like to find the limit of $$ \begin{cases} a_1=\dfrac3{4} ,\, & \\ a_{n+1}=a_{n}\dfrac{n^2+2n}{n^2+2n+1}, & n \ge 1 \end{cases} $$ I tried to use this - $\lim \limits_{n\to\infty}a_n=\lim \limits_{n\to\infty}a_{n+1}=L$, so $L=\lim \limits_{n\to\infty}a_n\dfrac{n^2+2n}{n^2+2n+1}=L\cdot1$ What does this result mean? Where did I make a mistake?
Put $a_n=\dfrac{n+1}{n}\,b_n$. Since $\dfrac{n^2+2n}{n^2+2n+1}=\dfrac{n(n+2)}{(n+1)^2}$, the recurrence gives $b_{n+1}=b_n$, so $b_n=b_1=\dfrac{3}{8}$. Hence $a_n=\dfrac{3(n+1)}{8n}$ and $\lim \limits_{n\to\infty}a_n=\dfrac{3}{8}$.
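Iterating the recurrence numerically is a useful sanity check on any proposed closed form (this check is mine, not part of the original answer):

```python
def iterate(n):
    # returns a_n by applying a_{k+1} = a_k * (k^2 + 2k) / (k^2 + 2k + 1)
    a = 0.75  # a_1 = 3/4
    for k in range(1, n):
        a *= (k * k + 2 * k) / (k * k + 2 * k + 1)
    return a

# the product telescopes to a_n = 3(n+1)/(8n), so the limit is 3/8
assert abs(iterate(10**5) - 3 / 8) < 1e-4
```

Note that $\lim a_n = \lim a_{n+1} = L$ with $L = L\cdot 1$ is true but vacuous; it does not determine $L$, which is why the direct product computation is needed.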
{ "language": "en", "url": "https://math.stackexchange.com/questions/568572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Let $f:\mathbb{R}^2\to\mathbb{R}$ be a $C^1$ function. Prove that the restriction is not injective. Let $f:\mathbb{R}^2\to\mathbb{R}$ be a $C^1$ function. And let $D$ be an open subset of $\mathbb{R}^2$. Prove that the restriction of $f$ to $D$ is not injective. I'm trying to solve this but I don't know how... The problem has a hint: Use the inverse function theorem applied to an appropriate transformation in $\mathbb{R}^2$.
Assume by contradiction that $f|_D$ is injective. Let $U \subset D$ be an open connected subset of $D$. Then $f|_U$ is also injective. As $U$ is connected, $f(U)$ is connected, hence an interval. Let $d \in f(U)$ be an interior point. As $f|_U$ is injective, there exists a unique $e \in U$ so that $f(e)=d$. Then $f$ restricts to a continuous bijection between the connected set $U \backslash \{e \}$ (removing a point does not disconnect an open connected subset of $\mathbb{R}^2$) and the disconnected set $f(U)\backslash \{ d \}$, which is impossible, since the continuous image of a connected set is connected. This contradiction proves the claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/568899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Evaluating $\int \frac{\sin^2(x)}{\sqrt{\cos(x)}} \mathrm dx$ I would like to get some advice how to evaluate the integral, $$\int \frac{\sin^2{x}}{\sqrt{\cos{x}}} \mathrm dx$$
Integration by parts, with $g(x)=\sin x$, and $$f'(x)=\frac{\sin x}{\sqrt{\cos x}}=-2\cdot\frac{\cos'x}{2\cdot\sqrt{\cos x}}=-2\cdot(\sqrt{\cos x})'\iff f(x)=-2\cdot\sqrt{\cos x}$$ then recognizing the expression of the incomplete elliptic integral of the first kind in $\int f(x)g'(x)dx$
{ "language": "en", "url": "https://math.stackexchange.com/questions/569004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Take Laplace Transform of the integral J_0 I was just wondering how to use tables from Spiegel to solve $\int_0^\infty J_0(2\sqrt{ut}) J_0(u) du$ At the moment, I see similar transforms on page 244, but I don't actually know how to combine the Laplace transforms of the first $J_0$ and the second $J_0$ Any help is appreciated, Thank you!
Well, I have a way to do it without referring to the tables, but instead using two well-known representations of a Bessel function. First write $$J_0(u) = \frac{1}{\pi} \int_0^{\pi} d\theta \, e^{i u \cos{\theta}}$$ Then assuming we may interchange the order of integration, the integral is equal to $$\begin{align}\int_0^{\infty} du \, J_0 \left ( 2 \sqrt{u t}\right ) J_0(u) &= \frac{1}{\pi}\int_0^{\pi} d\theta \, \int_0^{\infty} du \, e^{i u \cos{\theta}} J_0 \left ( 2 \sqrt{u t}\right )\\ &= \frac{2}{\pi}\int_0^{\pi} d\theta \, \int_0^{\infty} dv \, v e^{i \cos{\theta}\; v^2}J_0 \left ( 2 \sqrt{t} v\right ) \end{align}$$ Here we use the well-known relation $$\int_0^{\infty} dv \, v \, e^{i a v^2} J_0(b v) = \frac{i}{2 a} e^{-i b^2/(4 a)}$$ so that we get that $$\begin{align}\int_0^{\infty} du \, J_0 \left ( 2 \sqrt{u t}\right ) J_0(u) &= \frac{i}{\pi} \int_0^{\pi} d\theta \, \sec{\theta} \, e^{-i t \, \sec{\theta}}\\ &= \frac{1}{\pi} \int_{-1}^1 dy \, \left (1-y^2\right )^{-1/2} e^{-i t y} \end{align}$$ This last integral is simply another well-known representation of a Bessel. Therefore, $$\int_0^{\infty} du \, J_0 \left ( 2 \sqrt{u t}\right ) J_0(u) = J_0(t)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/569146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a continuous function $f:S^1 \to \mathbb R$ which is one-one? Is there a continuous function $f:S^1 \to \mathbb R$ which is one-one?
Suppose such a function exists. Let $u : [0,1] \to S^1$ be a suitable path that traces around the circle, and consider $g = fu$. This $g : [0,1] \to \mathbb R$ is one-to-one except that $g(0) = g(1)$. Consider $y = g(1/2)$. It must be either greater than or less than $g(0)$. Pick some value $z$ between $g(0)$ and $y$. Apply the intermediate value theorem on both the intervals $[0,1/2]$ and $[1/2,1]$ and show that there must be some $c$ and $d$ in $(0,1/2)$ and $(1/2,1)$ respectively with $g(c) = g(d) = z$, contradicting $g$ being one-to-one. In simple terms: the function must go either up or down from where it starts, but then it needs to go back through territory it's already visited to get back to the first point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/569261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Finding $\operatorname{Ext}^{1}(\Bbb Q,\Bbb Z)$ I am trying to compute $\operatorname{Ext}^{1}(\Bbb Q,\Bbb Z)$ explicitly. Using $\Bbb Q/\Bbb Z$ I constructed a natural injective resolution of $\Bbb Z$, and I know that $\Bbb Q/\Bbb Z$ is injective. Please help me proceed from there.
We have the following exact sequence: $$\operatorname{Hom}_{\Bbb Z}(\Bbb Q,\Bbb Q)\to \operatorname{Hom}_{\Bbb Z}(\Bbb Q,\Bbb Q/\Bbb Z)\to\operatorname{Ext}^1_{\Bbb Z}(\Bbb Q,\Bbb Z)\to\operatorname{Ext}^1_{\Bbb Z}(\Bbb Q,\Bbb Q).$$ Since $\operatorname{Hom}_{\Bbb Z}(\Bbb Q,\Bbb Q)\cong\Bbb Q$ and $\operatorname{Ext}^1_{\Bbb Z}(\Bbb Q,\Bbb Q)=0$ we get $$\Bbb Q\to \operatorname{Hom}_{\Bbb Z}(\Bbb Q,\Bbb Q/\Bbb Z)\to\operatorname{Ext}^1_{\Bbb Z}(\Bbb Q,\Bbb Z)\to 0,$$ so $\operatorname{Ext}^1_{\Bbb Z}(\Bbb Q,\Bbb Z)\cong M/\Bbb Q$, where $M=\operatorname{Hom}_{\Bbb Z}(\Bbb Q,\Bbb Q/\Bbb Z)$. In order to determine $M$ recall that $\Bbb Q/\Bbb Z=\bigoplus_{p\text {prime}}\Bbb Z_{p^{\infty}}$. Now prove that $\operatorname{Hom}(\Bbb Q,\Bbb Z_{p^{\infty}})\cong\Bbb Q_p$. Note, however, that a family $(\alpha_p)_p\in\prod_p\Bbb Q_p$ defines a homomorphism into the direct sum $\bigoplus_p\Bbb Z_{p^{\infty}}$ only when $\alpha_p\in\Bbb Z_p$ for all but finitely many $p$. In the end we get $M\cong{\prod_{p\text{ prime}}}'\,\Bbb Q_p$, the restricted product with respect to the $\Bbb Z_p$ (the finite adeles $\Bbb A_f$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/569347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Orthogonal Projection of a matrix Let $V$ be the real vector space of $3 \times 3$ matrices with the bilinear form $\langle A,B \rangle=$ trace $A^tB$, and let $W$ be the subspace of skew-symmetric matrices. Compute the orthogonal projection to $W$ with respect to this form, of the matrix $$\begin{pmatrix} 1& 2 & 0\\ 0 & 0 & 1\\ 1 & 3 & 0\end{pmatrix}$$ Could someone show me how to proceed ?
Find the orthogonal complement to $W$, i.e. $$W^{\perp} := \{ X \in V : \langle X, A \rangle = 0 \text{ for all } A \in W\}.$$ Since $\dim W + \dim W^{\perp} = \dim V$, we can write any $X \in V$ uniquely as $$X = A + B$$ where $A \in W$ and $B \in W^{\perp}$. The orthogonal projection $\pi : V \twoheadrightarrow W$ is given by $$\pi : A + B \longmapsto A$$
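To carry out the recipe concretely (an addition, not part of the hint): since $\langle A,B\rangle=\operatorname{tr}(A^TB)=\sum_{i,j}A_{ij}B_{ij}$, the symmetric matrices are orthogonal to the skew-symmetric ones, so the projection of $M$ onto $W$ is its skew-symmetric part $\tfrac12(M-M^T)$. A short script verifies this for the given matrix:

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def skew_part(M):
    # orthogonal projection onto the skew-symmetric matrices
    T = transpose(M)
    return [[(M[i][j] - T[i][j]) / 2 for j in range(3)] for i in range(3)]

def inner(A, B):
    # <A, B> = trace(A^T B) = sum of entrywise products
    return sum(A[i][j] * B[i][j] for i in range(3) for j in range(3))

M = [[1, 2, 0], [0, 0, 1], [1, 3, 0]]
P = skew_part(M)
R = [[M[i][j] - P[i][j] for j in range(3)] for i in range(3)]  # residual, symmetric

assert P == [[0, 1.0, -0.5], [-1.0, 0, -1.0], [0.5, 1.0, 0]]
# the residual is orthogonal to W; check against one skew basis element
E = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]
assert inner(R, E) == 0
```

So the projection is $\begin{pmatrix}0&1&-1/2\\-1&0&-1\\1/2&1&0\end{pmatrix}$.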
{ "language": "en", "url": "https://math.stackexchange.com/questions/569426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finding the interval of convergence of a power series? How would I find the interval of convergence of this power series? $\sum\frac{x^k}{k^2 2^k}$ I performed the ratio test: $\frac{x^{k+1}}{(k+1)^2 2^{k+1}}\cdot\frac{k^2 2^k}{x^k}$ As $k\rightarrow\infty$ this gives $|x|\frac{k^2}{2(k+1)^2}\to\frac{|x|}{2}$, so $-1<\frac{1}{2}x<1$, with endpoints $x=2$ and $x=-2$. At $x=-2$, $\sum\frac{(-2)^k}{k^2 2^k}$ converges by the alternating series test, since the terms decrease to zero in absolute value. At $x=2$ the series becomes $\sum\frac{2^k}{k^2 2^k}$, but I am not sure whether it converges or diverges; I tried the ratio and root tests and they were inconclusive.
The series converges for $x=2$. You may ask why... Why not use the integral test, i.e. $\lim_{n\rightarrow \infty} \int_{1}^{n} x^{-2}\; dx = \lim_{n \rightarrow \infty} 1 - \frac{1}{n} = 1$.
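At $x=2$ the series reduces to $\sum 1/k^2$, and the integral test bounds the tail beyond $n$ by $1/n$; numerically the partial sums settle near $\pi^2/6$ (a quick added check, for illustration):

```python
from math import pi

def partial(n):
    # partial sum of sum_{k=1}^{n} 2^k / (k^2 * 2^k) = sum_{k=1}^{n} 1/k^2
    return sum(1 / k**2 for k in range(1, n + 1))

# the tail beyond n is at most 1/n, consistent with the integral test
assert abs(partial(10**5) - pi**2 / 6) < 1e-4
```

The limiting value $\pi^2/6$ is of course not needed for the convergence question; boundedness of the increasing partial sums already settles it.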
{ "language": "en", "url": "https://math.stackexchange.com/questions/569522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
No idea how to prove this property about symmetric matrices This is from homework, so please hints only. Suppose $A$ is symmetric such that all of its eigenvalues are 1 or -1. Prove that $A$ is orthogonal. The converse is really easy, but I really have no idea how to do this. Any hints?
Edited Saturday 16 November 2013 10:03 PM PST Well, it seems the "hints" have had their desired effect, so I'm editing this post to be an answer, pure and simple. That being said, try this: since $A$ is symmetric, there exists orthogonal $O$ such that $O^TAO = \Lambda$, with $\Lambda$ diagonal and $\Lambda_{ii} = \pm 1$ for all $i$. Then $\Lambda^T\Lambda = I$ and since $O^TA^TO = \Lambda^T$, we have $O^TA^TOO^TAO = O^TA^TAO = \Lambda^T\Lambda$ = I. Thus $A^TA = OIO^T = OO^T = I$ and $A$ is orthogonal. QED Whew! That feels better! Well, now I've certainly said too much! ;- )!!! Hope this helps. Cheerio, and as always, Fiat Lux!!!
{ "language": "en", "url": "https://math.stackexchange.com/questions/569606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How to separate a partial differential equation where R is a function of three variables? Using the method of separation of variables, how can I separate each X,Y,Z if the differential equation has a function of R(x,y,z)? Example: $ R_{xx} + R_{yy} + R_{zz} = 0 $ I understand how to apply the method if R is only a funtion of X and Y, but when it comes to three variables, I am completely lost.
The argument parallels the two variable case. Setting $R(x, y, z) = X(x)Y(y)Z(z), \tag{1}$ we have $X_{xx}(x)Y(y)Z(z) + X(x)Y_{yy}(y)Z(z) + X(x)Y(y)Z_{zz}(z) = 0, \tag{2}$ and dividing through by $X(x)Y(y)Z(z)$ we obtain $X_{xx} / X + Y_{yy} / Y + Z_{zz} / Z = 0, \tag{3}$ which we write as $X_{xx} / X = -Y_{yy} / Y - Z_{zz} / Z. \tag{4}$ Now we note that, since the two sides depend upon different independent variables, there must be a constant, call it $-k_x^2$, to which they are each equal, thus: $X_{xx} / X = -k_x^2, \tag{5}$ or $X_{xx} + k_x^2X = 0, \tag{5A}$ and $Y_{yy} / Y + Z_{zz} / Z = k_x^2. \tag{6}$ Having separated out the $x$ dependence, we write (6) as $Y_{yy} / Y = k_x^2 - Z_{zz} / Z, \tag{7}$ and once again observe that the two sides depend on different independent variables, so again each must equal some constant value, call it $-k_y^2$ this time: $Y_{yy} / Y = -k_y^2 = k_x^2 - Z_{zz} / Z, \tag{8}$ which leads to $Y_{yy} + k_y^2Y = 0 \tag{9}$ and $Z_{zz} + k_z^2Z = 0, \tag{10}$ where we have set $k_z^2 = -(k_x^2 + k_y^2). \tag{11}$ It should be noted that $k_x^2 + k_y^2 + k_z^2 = 0, \tag{12}$ so that at least one of the three numbers $k_x, k_y, k_z$ must be complex. In the typical case occurring in practical applications, the $k_x, k_y, k_z$ are either real or pure imaginary, leading to solutions of (5A), (9), (10) which are respectively periodic or exponential, again analogous to the two-dimensional case. Finally, it is worth noting that the techniques outlined above easily extend to the $n$-dimensional case of the equation $\sum_1^n R_{x_jx_j} = 0; \tag{13}$ if we set $R = \prod_1^nX_j(x_j), \tag{13A}$ we obtain $n$ equations of the form $d^2X_j / dx_j^2 + k_j^2X_j = 0 \tag{14}$ with $\sum_1^nk_j^2 = 0; \tag{15}$ the details are easy to execute and left to the reader. As is well-known, the solutions $X_j(x_j$) are of the form $X_j(x_j) = a_+e^{ik_jx_j} + a_-e^{-ik_jx_j} \tag{16}$ for suitably chosen $a_\pm$. Hope this helps. 
Cheerio, and as always, Fiat Lux!!!
{ "language": "en", "url": "https://math.stackexchange.com/questions/569685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Implicit differentiation I want to differentiate $x^2 + y^2=1$ with respect to $x$. The answer is $2x +2yy' = 0$. Can some explain what is implicit differentiation and from where did $y'$ appear ? I can understand that $2x +2yy' = 0$ is a partial derivative but then it becomes multi calc not single. This is in a chapter about chain rule so I assume there is some use of the chain rule here but I can't see any composite functions here. We can express y in terms of x but y is not composite. P.S: I am NOT looking for HOW to solve the problem, I am looking for the WHY as stated above.
The equation $x^2 + y^2 = 1$ defines $y$ implicitly as a function of $x$. In this case, we have $y^2 = 1 - x^2$. Thus, instead of writing $y$ in the equation we can write $f(x)$ where $f(x)^2 = 1 - x^2$. This leaves the problem of differentiating $x^2 + f(x)^2 = 1$. In this form, we can see how to apply the chain rule $$ \begin{align*} & \frac{d}{dx} \left( x^2 + f(x)^2 \right) && \\ ~ = ~ & \frac{d}{dx} x^2 + \frac{d}{dx} f(x)^2 && [\text{linearity}] \\ ~ = ~ & 2x + \frac{d}{dx} f(x)^2 && [\text{power rule}] \\ ~ = ~ & 2x + 2f(x)\frac{d}{dx}f(x) && [\text{chain rule}] \end{align*}$$ In the fourth line, we can apply the chain rule because $f(x)^2$ is the composition of two functions, namely the function $g(x) = x^2$ and $h(x) = f(x)$. The chain rule says that to differentiate $(g \circ h)(x)$, we first differentiate $g(x)$ with respect to $x$, then differentiate $h(x)$ with respect to $x$, then form the product $(g' \circ h)(x)h'(x)$ as the result. In this case, differentiating $g(x)$ with respect to $x$ gives $2x$, so $(g' \circ h)(x) = 2f(x)$. Differentiating $h(x)$ with respect to $x$ gives $h'(x) = f'(x)$. Thus, $(g \circ h)'(x) = 2f(x)f'(x)$.
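A numeric spot-check of the resulting formula $y' = -x/y$ on the upper half of the circle (an added illustration; the point $x_0 = 0.6$ is an arbitrary choice):

```python
from math import sqrt, isclose

def y(x):
    # upper branch of x^2 + y^2 = 1
    return sqrt(1 - x * x)

x0, h = 0.6, 1e-6
# centered finite-difference estimate of dy/dx at x0
slope = (y(x0 + h) - y(x0 - h)) / (2 * h)

# implicit differentiation predicts y' = -x / y
assert isclose(slope, -x0 / y(x0), rel_tol=1e-6)
```

Here $y(0.6)=0.8$, so the predicted slope is $-0.75$, and the finite difference agrees.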
{ "language": "en", "url": "https://math.stackexchange.com/questions/569787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
How to prove that a group of order $72=2^3\cdot 3^2$ is solvable? Let $G$ be a group of order $$72=2^3\cdot 3^2$$ Without using Burnside's Theorem, how to show that $G$ is solvable? Atempt: If we can show that $G$ has at least one non-trivial normal subgroup $N$, then it would be easy to show it is solvable. Indeed, $$1\longrightarrow N\longrightarrow G\longrightarrow G/N\longrightarrow 1$$ would be a short exact sequence with $N$ and $N/G$ of order $2^i\cdot3^j$ for some $i,j\in\{0,1,2\}$ and it is not too hard to show that such groups are always solvable. However, I can't find a way to show that $G$ is not simple. Added: If $G$ is not simple, then Sylow's Theorem implies that there are $4$ subgroups of order 9 and 3 or 9 subgroups of order 8. Then, I don't see how to use that to show that $G$ is not simple.
If $G$ has 4 Sylow-3 subgroups, $G$ acts on those subgroups via conjugation, inducing a homomorphism $G\to S_4$. Since $|S_4|=24<72=|G|$, this map must have a non-trivial kernel. If the morphism is not the trivial map, you are done. What can you say if the kernel is all of $G$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/569880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$\sqrt x$ is uniformly continuous Prove that the function $\sqrt x$ is uniformly continuous on $\{x\in \mathbb{R} | x \ge 0\}$. To show uniformly continuity I must show for a given $\epsilon > 0$ there exists a $\delta>0$ such that for all $x_1, x_2 \in \mathbb{R}$ we have $|x_1 - x_2| < \delta$ implies that $|f(x_1) - f(x_2)|< \epsilon.$ What I did was $\left|\sqrt x - \sqrt x_0\right| = \left|\frac{(\sqrt x - \sqrt x_0)(\sqrt x + \sqrt x_0)}{(\sqrt x + \sqrt x_0)}\right| = \left|\frac{x - x_0}{\sqrt x + \sqrt x_0}\right| < \frac{\delta}{\sqrt x + \sqrt x_0}$ but I found some proof online that made $\delta = \epsilon^2$ where I don't understand how they got? So, in order for $\delta =\epsilon^2$ then $\sqrt x + \sqrt x_0$ must $\le$ $\epsilon$ then $\frac{\delta}{\sqrt x + \sqrt x_0} \le \frac{\delta}{\epsilon} = \epsilon$. But then why would $\epsilon \le \sqrt x + \sqrt x_0? $ Ah, I think I understand it now just by typing this out and from an earlier hint by Michael Hardy here.
In fact the key estimate holds on all of $[0,\infty)$: $$ \left|\sqrt{x}-\sqrt{y}\right|^2 \le \left|\sqrt{x}-\sqrt{y}\right|\left(\sqrt{x}+\sqrt{y}\right) = |x-y|, $$ so $$ |f(x)-f(y)| = \left|\sqrt{x}-\sqrt{y}\right| \le \sqrt{|x-y|}, $$ and $\delta=\varepsilon^{2}$ works on the whole domain. On $[1, \infty)$ one can do better: $$ \frac {|x-y|}{\sqrt{x}+\sqrt{y}} \le \frac{|x-y|}{2}\:\:\text{ so }\:\:\delta=2\varepsilon $$ already suffices there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/569928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 6, "answer_id": 4 }
Elementary properties of integral binary quadratic forms Let $f = ax^2 + bxy + cy^2$ be a binary quadratic form over $\mathbb{Z}$. $D = b^2 - 4ac$ is called the discriminant of $f$. We say $f$ is positive definite if $a \gt 0$ and $D \lt 0$(cf. this question). We say $f$ is primitive if gcd$(a, b, c) = 1$. Let $\sigma = \left( \begin{array}{ccc} p & q \\ r & s \end{array} \right)$ be an element of $GL_2(\mathbb{Z})$. This means that $p,q,r,s$ are integers and det $\sigma = \pm 1$. We denote the quadratic form $f(px + qy, rx + sy)$ by $f^{\sigma}$. My question Is the following proposition correct? If yes, how do you prove it? Proposition Let $f$ and $\sigma$ be as above. * *The discriminant of $f^{\sigma}$ is the same as that of $f$. *If $f$ is positive definite, $f^{\sigma}$ is also so. *If $f$ is primitive, $f^{\sigma}$ is also so.
1) A binary quadratic form $f(x, y)=ax^2+bxy+cy^2$ can be written $$f(x, y)=\begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} a & \frac{b}{2} \\ \frac{b}{2} & c \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$$ so $f(x, y)$ corresponds to the $2 \times 2$ matrix $M$ in the center, with discriminant $D=-4\det M$. Note that $$\begin{pmatrix} px+qy \\ rx+sy \end{pmatrix}=\begin{pmatrix} p & q \\ r & s \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$$ so in fact $f^{\sigma}$ corresponds to the $2 \times 2$ matrix $\sigma ^T M \sigma$, with discriminant $$D'=-4\det(\sigma ^T M \sigma)=-4\det M (\det \sigma)^2=-4 \det M=D$$ 2) For $\begin{pmatrix} x \\ y \end{pmatrix} \not =\begin{pmatrix} 0 \\ 0 \end{pmatrix}$, we have $\begin{pmatrix} px+qy \\ rx+sy \end{pmatrix} \not =\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ as well. Then since $f$ is positive definite, $$f^{\sigma}(x, y)=f(px+qy, rx+sy)>0$$, so $f^{\sigma}$ is positive definite. 3)We prove the contrapositive statement. Suppose that $f^{\sigma}$ is not primitive, and the $\gcd$ of its coefficients is $d>1$. Put $f^{\sigma}(x, y)=g(x, y)d$, where $g(x, y)$ is also a binary quadratic form, then $f=(f^{\sigma})^{\sigma^{-1}}=(gd)^{\sigma^{-1}}=dg^{\sigma^{-1}}$, which is clearly not primitive.
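All three parts can be spot-checked for a concrete form and matrix (an added illustration; the values are arbitrary, $\det\sigma = 2\cdot1-1\cdot1 = 1$, and the coefficient formulas come from expanding $f(px+qy,\,rx+sy)$):

```python
from math import gcd

def transform(a, b, c, p, q, r, s):
    # coefficients of f(px+qy, rx+sy) for f = a x^2 + b xy + c y^2
    a2 = a * p * p + b * p * r + c * r * r
    b2 = 2 * a * p * q + b * (p * s + q * r) + 2 * c * r * s
    c2 = a * q * q + b * q * s + c * s * s
    return a2, b2, c2

def disc(a, b, c):
    return b * b - 4 * a * c

a, b, c = 3, 5, 7            # primitive, positive definite (D = -59)
p, q, r, s = 2, 1, 1, 1      # det sigma = 1

a2, b2, c2 = transform(a, b, c, p, q, r, s)
assert disc(a2, b2, c2) == disc(a, b, c) == -59
assert gcd(gcd(a2, b2), c2) == 1     # primitivity is preserved
```

The invariance of the discriminant is, of course, exactly the matrix identity $\det(\sigma^T M \sigma) = (\det\sigma)^2 \det M$ from part 1.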
{ "language": "en", "url": "https://math.stackexchange.com/questions/570007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How prove this $a_{n}>1$ let $0<t<1$, and $a_{1}=1+t$, and such $$a_{n}=t+\dfrac{1}{a_{n-1}}$$ show that $a_{n}>1$ My try: since $$a_{1}=1+t>1$$ $$a_{2}=t+\dfrac{1}{a_{1}}=t+1+\dfrac{1}{1+t}-1>2\sqrt{(t+1)\cdot\dfrac{1}{1+t}}-1=2-1=1$$ $$a_{3}=t+\dfrac{1}{a_{2}}=t+\dfrac{1}{t+\dfrac{1}{t+1}}=t+\dfrac{t+1}{t^2+t+1}=1+\dfrac{t^3+t}{t^2+t+1}>1$$ $$\cdots\cdots\cdots$$ But $a_{n}$ is very ugly,so this problem may use other methods.Thank you very much!
Let $\displaystyle \mu = \frac{t + \sqrt{t^2+4}}{2}$; we have $$\mu > 1\quad\text{ and }\quad\mu(t - \mu) = \left(\frac{t + \sqrt{t^2+4}}{2}\right)\left(\frac{t - \sqrt{t^2+4}}{2}\right) = -1$$ From this, we get $$a_{n+1} - \mu = t - \mu + \frac{1}{a_n} = \frac{1}{a_n} - \frac{1}{\mu} = \frac{\mu - a_n}{\mu a_n}$$ This implies if $a_n > 1$, then $$|a_{n+1}-\mu| = \frac{|a_n - \mu|}{\mu a_n} < \frac{|a_n -\mu|}{\mu} < |a_n - \mu|\tag{*1}$$ Notice $$\begin{align} ( 1 - \mu)^2 - (a_1 - \mu)^2 = & (1 - \mu)^2 - (1 + t - \mu)^2 = (1 - \mu)^2 - (1 - \frac{1}{\mu})^2\\ = & (1-\mu)^2(1 - \frac{1}{\mu^2}) > 0 \end{align}$$ We have $a_1 \in (1,2\mu - 1) = (\mu - (\mu - 1),\mu + (\mu - 1))$. Since all $x$ in this interval $> 1$, we can repeatedly apply $(*1)$ to conclude all $a_n$ belong to this interval and hence $> 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/570116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
If $G$ is a group and $N$ is a nontrivial normal subgroup, can $G/N \cong G$? I know $G/N$ is isomorphic to a proper subgroup of $G$ in this case, so the gut instinct I had was 'no'. But there are examples of groups that are isomorphic to proper subgroups, such as the integers being isomorphic to the even integers, so that reasoning doesn't work. However in this case the even integers are not a quotient of the integers. edit: I realize now that $G/N$ is not necessarily isomorphic to a proper subgroup of $G$, just a subgroup of $G$.
Yes. Let $G$ be the additive group of the complex numbers, and let $N$ be the subgroup consisting of the real numbers. Edit in response to comment by @GA316: $(\mathbb C,+)/\mathbb R$ is clearly isomorphic to $(\mathbb R, +)$, and it is well known (but this requires the Axiom of Choice) that $(\mathbb C,+)\cong(\mathbb R,+)$. My answer is a special case of the answer posted simultaneously by Asaf Karagila, considering $\mathbb C$ as a vector space over the field of rational numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/570177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 6, "answer_id": 0 }
How many rooted plane trees tn are there with n internal nodes? How many rooted plane trees tn are there with n internal nodes? Plane means that left and right are distinguishable (i.e. mirror images are distinguishable), and rooted simply means that the tree starts with a single root. For the sake of understanding, the below figure shows why t3 = 5.
In graph theory, and more specifically for rooted plane trees, there is a fundamental theorem: the number of rooted plane binary trees with $n$ internal nodes equals the $n$-th Catalan number, that is, $t_n = C_n = \frac{1}{n+1}\binom{2n}{n}$. Indeed $C_3 = \frac{1}{4}\binom{6}{3} = 5$, matching your figure. I hope to have helped you.
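The count also satisfies the recurrence $t_n = \sum_{i=0}^{n-1} t_i\, t_{n-1-i}$ (split at the root: its two subtrees carry $i$ and $n-1-i$ internal nodes), which is exactly the Catalan recurrence; a quick check against the closed form (added illustration):

```python
from math import comb

def catalan(n):
    # C_n = binom(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

def t(n, cache={0: 1}):
    # rooted plane binary trees with n internal nodes, counted by
    # splitting at the root's two subtrees
    if n not in cache:
        cache[n] = sum(t(i) * t(n - 1 - i) for i in range(n))
    return cache[n]

assert [t(n) for n in range(6)] == [1, 1, 2, 5, 14, 42]
assert all(t(n) == catalan(n) for n in range(10))
```

In particular $t_3 = 5$, agreeing with the figure in the question.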
{ "language": "en", "url": "https://math.stackexchange.com/questions/570288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the limit $ \lim_{x\to 0}\frac{(1-3x)^\frac{1}{3} -(1-2x)^\frac{1}{2}}{1-\cos(\pi x)}$ I cannot find this limit: $$ \lim_{x\to 0}\frac{(1-3x)^\frac{1}{3} -(1-2x)^\frac{1}{2}}{1-\cos(\pi x)}. $$ Please, help me. Upd: I need to solve it without L'Hôpital's Rule and Taylor expansion.
Write it as $\left(\frac{\cos\pi x-\cos0}{x-0}\right)^{-1}\times\frac{\left(1-3x\right)^{\frac{1}{3}}-\left(1-2x\right)^{\frac{1}{2}}}{x-0}$. Then $\lim_{x\rightarrow0}\frac{\cos\pi x-\cos0}{x-0}$ can be recognized as $f'\left(0\right)$ for $f\left(x\right)=\cos\pi x$ and $\lim_{x\rightarrow0}\frac{\left(1-3x\right)^{\frac{1}{3}}-\left(1-2x\right)^{\frac{1}{2}}}{x-0}$ as $g'\left(0\right)$ for $g\left(x\right)=\left(1-3x\right)^{\frac{1}{3}}-\left(1-2x\right)^{\frac{1}{2}}$. If you are accused of using de l'Hôpital after all, then just claim that you were not aware of that. They will believe you. Edit: This only works if $f'\left(0\right)\neq0$ and unfortunately that is not the case here. So to be honest: this does not answer your question. Keep it in mind however for next occasions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/570406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Taylor/Maclaurin Series Show that if x is small compared with unity, then $$f(x)=\frac{(1-x)^\frac{-2}{3}+(1-4x)^\frac{-1}{3}}{(1-3x)^\frac{-1}{3}+(1-4x)^\frac{-1}{4}}=1-\frac{7x^2}{36}$$ In my first attempt I expanded all four brackets up to second order of x, but this didn't lead me to something that could be expressed as the final result. In my second attempt I decided to find $f'(x)$ and $f''(x)$ and use these to find $f'(0)$ and $f''(0)$ to find the Maclaurin expansion of $f(x)$ but this was way too time consuming. Can someone lead me to right track and offer some assistance? Thank you
The development of the last fraction (the ratio of the two quadratic polynomials) is $$1 - \frac{7x^2}{36} + \frac{7x^3}{36} + \frac{35x^4}{144}.$$
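The coefficient $-7/36$ of $x^2$ can be checked numerically without expanding anything by hand, via a centered second difference of $f$ at $0$ (an added sanity check; $h$ is chosen small enough that higher-order terms are negligible):

```python
def f(x):
    num = (1 - x) ** (-2 / 3) + (1 - 4 * x) ** (-1 / 3)
    den = (1 - 3 * x) ** (-1 / 3) + (1 - 4 * x) ** (-1 / 4)
    return num / den

h = 1e-3
# f(h) - 2 f(0) + f(-h) = 2 c2 h^2 + O(h^4), so this estimates c2
c2 = (f(h) - 2 * f(0) + f(-h)) / (2 * h * h)
assert abs(c2 - (-7 / 36)) < 1e-4
```

This confirms $f(x)\approx 1-\frac{7x^2}{36}$ for small $x$; the higher-order terms above belong to the ratio of the quadratic truncations, not necessarily to $f$ itself.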
{ "language": "en", "url": "https://math.stackexchange.com/questions/570497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Evaluate the integral $\int_{\gamma}\frac{z^2+2z}{z^2+4}dz$ Evaluate $$\int_{\gamma}\frac{z^2+2z}{z^2+4}dz$$ where the contour $\gamma$ is 1.) the circle of radius $2$ centered at $2i$, traversed once anti-clockwise. 2.) the unit circle centered at the origin, traversed once anti-clockwise. So here we would have to use partial fractions: $$1+ \frac{2z-4}{z^2+4}.$$ Then for part 1.), $\gamma(t)=2 e^{it}+2i$. And for part 2.), $\gamma(t)= e^{it}$. I'm not sure what to do next to evaluate the integral for part 1.) and 2.).
$(1)$ Apply the residue theorem: $\int_{\gamma}\frac{z^2+2z}{(z-2i)(z+2i)}dz=2\pi i\sum_k \operatorname{res}_{z=z_k}f(z)$, summed over the singularities inside $\gamma$. The only singularity inside the circle $|z-2i|=2$ is $z_0=2i$, and $$\operatorname{res}_{z=z_0}f(z)=\frac{z_0^2+2z_0}{2z_0}=\frac{-4+4i}{4i}=1+i,$$ so the integral equals $2\pi i(1+i)=-2\pi+2\pi i$. $(2)$ Notice that neither singular point lies inside your contour, so $\int_{|z|=1}\frac{z^2+2z}{(z-2i)(z+2i)}dz=0$ by Cauchy's theorem.
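The value $2\pi i(1+i)$ for part (1) can be confirmed by direct numerical integration over the parametrization $\gamma(t)=2i+2e^{it}$ (an added check; the trapezoid rule converges geometrically for a smooth periodic integrand):

```python
import cmath

def f(z):
    return (z * z + 2 * z) / (z * z + 4)

# N-point trapezoid rule for the contour integral over gamma(t) = 2i + 2 e^{it}
N = 1000
total = 0j
for k in range(N):
    t = 2 * cmath.pi * k / N
    z = 2j + 2 * cmath.exp(1j * t)
    dz = 2j * cmath.exp(1j * t)          # gamma'(t)
    total += f(z) * dz * (2 * cmath.pi / N)

expected = 2j * cmath.pi * (1 + 1j)      # 2*pi*i times the residue 1+i at z=2i
assert abs(total - expected) < 1e-8
```

The agreement with $-2\pi + 2\pi i$ is to machine precision.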
{ "language": "en", "url": "https://math.stackexchange.com/questions/570572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Irreducible polynomial $f$ as quotient - effect on $\mathbb{Z}_5[x]$ I want to get a better understanding of quotient rings so I have two questions. Let $f(x) = x^2 + 2$ Let $R = \mathbb{Z}_{5}[x]/(f(x))$ Now as $f$ is irreducible over $\mathbb{Z}_{5}$ we have that $R$ is a field with elements being all polynomials over $\mathbb{Z}_{5}$ with degree less than $2$, i.e. $\{0, 1, 2, 3, 4,$ $x, x + 1, \dots, x + 4,$ $\dots,$ $4x, 4x + 1, ..., 4x + 4 \}$ * *First question - But how can, say, the element $x \in R$ be a unit? Also now if we let the quotient be a reducible polynomial, the ring is not supposed to be a field? I.e. Let $g(x) = x^2 + 1$ Let $S = \mathbb{Z}_{5}[x]/(g(x))$ * *Second question - It seems to me that $S$ will have exactly the same elements as $R$ and hence it will also be a field?
In $R$: $x^2 + 2 = 0$, so $(2x)x = 2 x^2 = -4 = 1$, i.e., $x^{-1} = 2x$. For the second question, $R$ does not have the same elements as $S$; $R$ and $S$ only happen to have the same number of elements and they can be represented by the same elements of ${\mathbb Z}_5[x]$, but that's it. Now because $x^2 + 1$ is reducible over ${\mathbb Z}_5$ ($x^2 + 1 = (x + 2)(x - 2)$), the ring $S$ is not an integral domain ($(x + 2)(x-2) = 0$ in $S$) and therefore not a field. It may be worthwhile to stress that elements of $R$ (and of $S$) are not polynomials over ${\mathbb Z}_5$ with degree less than 2, they can merely be represented by those polynomials. The elements are residue classes; the residue class of $h(x) \in {\mathbb{Z}_5[x]}$ is $\{ h(x) + a(x) f(x) \;\mid\; a(x) \in {\mathbb Z}_5[x] \}$ (and letting $h(x)$ range over the polynomials of degree less than 2, you get every residue class exactly once). Writing down these residue classes all the time is not really enlightening, which is why most of the time you just write $h(x)$ (saying that computations are "in $R$" or "modulo $(f(x))$") or maybe $\overline{h(x)}$.
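Both computations are easy to confirm with arithmetic mod 5 (an added check): in $R$, $x\cdot 2x = 2x^2 = 2\cdot(-2) = -4 \equiv 1$, while in $S$ the factorization $(x+2)(x-2) = x^2 - 4 \equiv x^2+1$ exhibits zero divisors.

```python
def mul_mod(p, q, c, m=5):
    # multiply p = p0 + p1*x and q = q0 + q1*x in Z_m[x]/(x^2 + c),
    # reducing via x^2 = -c
    p0, p1 = p
    q0, q1 = q
    const = (p0 * q0 - p1 * q1 * c) % m
    lin = (p0 * q1 + p1 * q0) % m
    return (const, lin)

# In R = Z_5[x]/(x^2 + 2):  x * 2x = 1, so x is a unit with inverse 2x
assert mul_mod((0, 1), (0, 2), c=2) == (1, 0)

# In S = Z_5[x]/(x^2 + 1):  (x + 2)(x - 2) = 0, so S has zero divisors
assert mul_mod((2, 1), (-2, 1), c=1) == (0, 0)
```

The pairs `(p0, p1)` encode the residue class of $p_0 + p_1 x$, which is exactly the "represented by polynomials of degree less than 2" picture above.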
{ "language": "en", "url": "https://math.stackexchange.com/questions/570643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$\gcd$ of polynomials over a field I have the polynomials $f,g\neq 0 $ over a field $F$. We know also that $\gcd(f,g)=1$ and $$ \det \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}\neq 0. $$ I need to prove that $\gcd(af+bg,cf+dg) = 1 $ for every $a,b,c,d \in F$. I really do not know how to start answer the question. Thanks for helpers!
Just work through the gcd manipulations (bearing in mind that you're working over the field $F$, hence nonzero constants are units and do not affect the gcd; if $b=0$ or $d=0$ the claim follows directly, so assume $bd\neq 0$): $ \gcd(af+bg, cf+dg) = \gcd(adf+bdg, cbf + bdg) = \gcd( (ad-bc)f, cbf + bdg) = \gcd( f, cbf+bdg) = \gcd(f, bdg) = \gcd(f, g) = 1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/570750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
About sum of three squares I am trying to find those $k$ for which the expression $1+(10k+4)^2 +(10m+8)^2$ is never a square number for any $m$. Thank you!
You are trying to solve $J + (10m+8)^2 = n^2$ or show that no solution exists, where $J=1+(10k+4)^2$. For any $J$, you can solve $J = n^2 - p^2$ by writing it as $J = (n-p)(n+p)$ and then finding all factorizations of $J$ into two factors of equal parity (both even or both odd). This gives you all possible choices for $n$ and $p$, and then you can check whether any of the choices has $p\equiv8\pmod{10}$ (so that you can write $p=10m+8$). This gives you an algorithm for deciding the answer for any given $k$. I don't know of any theoretical way you could determine the answer for large sets of $k$.
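The factorization procedure can be coded directly (an added illustration; for example $k=0$ yields $m=0$, since $1+4^2+8^2=81=9^2$):

```python
def solvable_m(k):
    # search m with 1 + (10k+4)^2 + (10m+8)^2 a perfect square, via the
    # factorization J = (n-p)(n+p) with p = 10m+8 and J = 1 + (10k+4)^2
    J = 1 + (10 * k + 4) ** 2
    hits = []
    for d in range(1, int(J**0.5) + 1):
        if J % d:
            continue
        e = J // d                 # J = d * e with d <= e
        if (d + e) % 2:
            continue               # need equal parity for integer n and p
        p = (e - d) // 2
        if p % 10 == 8:
            hits.append((p - 8) // 10)
    return hits

assert solvable_m(0) == [0]        # 1 + 16 + 64 = 81 = 9^2
```

For $k=1$ it finds $m=9$: $1+14^2+98^2 = 9801 = 99^2$.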
{ "language": "en", "url": "https://math.stackexchange.com/questions/570845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Velocity of a Particle Consider a particle moving in a straight line from the fixed point $(0,2)$ to another $(\pi,0)$. The only force acting on the particle is gravity. How would we parametrically define the motion of the particle with time? From kinematics, I found that $\hspace{150pt} y(t)=2-\dfrac{gt^2}{2}$ The slope can be found to be $\hspace{140pt} m=\dfrac{0-2}{\pi-0}=-\dfrac{2}{\pi}$ Since the path the particle moves on is a straight line, we have $\hspace{132pt}y=mx+b=-\dfrac{2}{\pi}x+2$ so $\hspace{150pt}x(t)=\dfrac{\pi gt^2}{4}$ Therefore the parametric equations for the position of the particle are $\hspace{150pt}x(t)=\dfrac{\pi gt^2}{4}$ $\hspace{150pt} y(t)=2-\dfrac{gt^2}{2}$ Does this seem correct? Any suggestions?
$\newcommand{\+}{^{\dagger}}% \newcommand{\angles}[1]{\left\langle #1 \right\rangle}% \newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}% \newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}% \newcommand{\dd}{{\rm d}}% \newcommand{\isdiv}{\,\left.\right\vert\,}% \newcommand{\ds}[1]{\displaystyle{#1}}% \newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}% \newcommand{\expo}[1]{\,{\rm e}^{#1}\,}% \newcommand{\fermi}{\,{\rm f}}% \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}% \newcommand{\ic}{{\rm i}}% \newcommand{\imp}{\Longrightarrow}% \newcommand{\ket}[1]{\left\vert #1\right\rangle}% \newcommand{\pars}[1]{\left( #1 \right)}% \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}}% \newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}% \newcommand{\sech}{\,{\rm sech}}% \newcommand{\sgn}{\,{\rm sgn}}% \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}}% \newcommand{\verts}[1]{\left\vert #1 \right\vert}% \newcommand{\yy}{\Longleftrightarrow}$ The Action $S$ is given by $\pars{~\mbox{the motion occurs along the line}\ y = 2\,\pars{1 - x/\pi}~}$ $$ S = \int_{t_{0}}^{t_{1}}\bracks{{1 \over 2}\, m\pars{\dot{x}^{2} + \dot{y}^{2}} - mgy - m\pi\mu\pars{y + {2 \over \pi}\,x - 2}}\,\dd t $$ The equations of motion are: $$ m\ddot{x} = - 2m\mu\,,\qquad m\ddot{y} = -mg - m\pi\mu \quad\imp\quad \pi\ddot{x} - 2\ddot{y} = 2g $$ $$ \pi\,\dot{x}\pars{t} - 2\dot{y}\pars{t} = \overbrace{\bracks{\pi\,\dot{x}\pars{0} - 2\dot{y}\pars{0}}}^{\ds{\equiv \beta}} + 2gt\,,\quad \pi\,x\pars{t} - 2y\pars{t} = \overbrace{\bracks{\pi\,x\pars{0} - 2y\pars{0}}}^{\ds{\equiv \alpha}} + \beta t + gt^{2} $$ Then, we have the equations $$ \left\lbrace% \begin{array}{rcrcl} \pi x\pars{t} & - & 2y\pars{t} & = & \alpha + \beta t + gt^{2} \\ 2 x\pars{t} & + & \pi y\pars{t} & = & 2\pi \end{array}\right. 
$$ \begin{align} x\pars{t} & = {\pars{\alpha + \beta t + gt^{2}}\pi + 4\pi\over \pi^{2} + 4} = {\pi \over \pi^{2} + 4}\,\pars{\alpha + 4} + {\pi \over \pi^{2} + 4}\,\beta t + {\pi \over \pi^{2} + 4}\,gt^{2} \\[3mm] y\pars{t} & = {2\pi^{2} - 2\pars{\alpha + \beta t + gt^{2}} \over \pi^{2} + 4} = {2 \over \pi^{2} + 4}\,\pars{\pi^{2} - \alpha} - {2 \over \pi^{2} + 4}\,\beta t - {2 \over \pi^{2} + 4}\,gt^{2} \end{align} $$ Using $\alpha = \pi\,x\pars{0} - 2y\pars{0}$ and $\beta = \pi\,\dot{x}\pars{0} - 2\dot{y}\pars{0}$, with initial position and velocity compatible with the constraint line, this reduces to $$ \color{#0000ff}{\large% \begin{array}{rcl} x\pars{t} & = & x\pars{0} + \dot{x}\pars{0}t + {\pi g \over \pi^{2} + 4}\,t^{2} \\[3mm] y\pars{t} & = & y\pars{0} + \dot{y}\pars{0}t - {2g \over \pi^{2} + 4}\,t^{2} \end{array}} $$
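A quick numerical sanity check (my addition, not part of the original answer): for a particle released from rest at $(0,2)$, the $t^{2}$ coefficients of the parametrization should be $\pi g/(\pi^{2}+4)$ for $x$ and $-2g/(\pi^{2}+4)$ for $y$. The sketch below verifies that this trajectory stays on the constraint line $2x + \pi y = 2\pi$ and that its acceleration equals gravity projected onto the line's unit tangent.

```python
import math

g = 9.81
pi = math.pi
denom = pi**2 + 4

# closed-form trajectory: release from rest at (0, 2)
def x(t):
    return pi * g * t**2 / denom

def y(t):
    return 2 - 2 * g * t**2 / denom

# 1) the motion stays on the constraint line 2x + pi*y = 2*pi
for t in (0.0, 0.2, 0.5, 1.0):
    assert abs(2 * x(t) + pi * y(t) - 2 * pi) < 1e-12

# 2) the acceleration matches gravity (0, -g) projected onto the
#    unit tangent u = (pi, -2)/sqrt(pi^2 + 4)
ax_closed = 2 * pi * g / denom        # d^2x/dt^2 from the closed form
ay_closed = -4 * g / denom            # d^2y/dt^2 from the closed form
u = (pi / math.sqrt(denom), -2 / math.sqrt(denom))
a_tan = -g * u[1]                     # (0, -g) . u
assert abs(ax_closed - a_tan * u[0]) < 1e-12
assert abs(ay_closed - a_tan * u[1]) < 1e-12
```

Both checks pass, which is a reassuring cross-check of the Lagrange-multiplier derivation against elementary inclined-plane kinematics.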
{ "language": "en", "url": "https://math.stackexchange.com/questions/570912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is Hoeffding's bound tight in any way? The inequality: $$\Pr(\overline X - \mathrm{E}[\overline X] \geq t) \leq \exp \left( - \frac{2n^2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right) $$ Is this bound (or any other form of Hoeffding's inequality) tight in any sense? E.g., does there exist a distribution for which the bound is no more than a constant multiple of the true probability for every $n$?
Because it fully answers the question, I will quote almost verbatim Theorem 7.3.1 from Matoušek, Jiří, and Jan Vondrák. "The probabilistic method." Lecture Notes, Department of Applied Mathematics, Charles University, Prague (2001). downloadable as of today at http://www.cs.cmu.edu/~15850/handouts/matousek-vondrak-prob-ln.pdf 7.3.1 Theorem. Let $S$ be a sum of i.i.d. $[0,1]$-valued random variables, and assume $\sqrt{\operatorname{Var}(S)} \ge 200$. Then, there exists a constant $c>0$ such that, for all $t\in\big[0,\operatorname{Var}(S)/100\big]$, we have $$ \mathbb{P}\big(S\ge \mathbb{E}[S] + t\big)\ge c \exp\bigg(-\frac{t^2}{3\operatorname{Var}(S)}\bigg).$$ In particular if $S=X_1+\dots+X_n$ for i.i.d. $X_i$, then for all $t\in \big[0,n\operatorname{Var}(X_1)/100\big]$ $$ \mathbb{P}\big(S\ge \mathbb{E}[S] + t\big)\ge c \exp\bigg(-\frac{t^2n}{3\operatorname{Var}(X_1)}\bigg)$$ Proposition 7.3.2 in the same reference has explicit values for $c$.
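A small Monte Carlo illustration (my addition; the particular $n$, $t$, and seed are arbitrary choices): for $X_i \sim \mathrm{Bernoulli}(1/2)$ on $[0,1]$, Hoeffding's bound reads $\exp(-2nt^2)$, and we can compare it with the simulated tail probability.

```python
import math
import random

random.seed(0)
n, t, trials = 100, 0.1, 20_000

hits = 0
for _ in range(trials):
    s = sum(random.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2)
    if s / n - 0.5 >= t:
        hits += 1

empirical = hits / trials
hoeffding = math.exp(-2 * n * t**2)   # = e^{-2}, about 0.135, for these parameters

# The bound holds; the true tail is smaller by a modest factor, i.e. the
# bound is loose by a constant in the exponent, as the quoted theorem suggests.
print(f"empirical tail ~ {empirical:.4f}, Hoeffding bound = {hoeffding:.4f}")
assert empirical <= hoeffding
```

Note the quoted theorem technically requires $\sqrt{\operatorname{Var}(S)} \ge 200$, which this small example does not meet; the simulation is only meant to show the qualitative gap between the bound and the true tail.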
{ "language": "en", "url": "https://math.stackexchange.com/questions/571032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
How to solve equation $\tau(123)(45)(67)\tau^{-1}=(765)(43)(21)$ and alike? Let $\sigma\in S_7$ be $(123)(45)(67)$. And Find $\tau\in S_7$ such that $\tau\sigma\tau^{-1}=(765)(43)(21)$. I understand that in a symmetric group conjugate elements have the same cycle structure. Hence, $\tau$ should share the structure of $\sigma$. Then I guess I could use trial and error to find a solution for $\tau$. However, that is not very pleasant to do. I am wondering whether there is a good way to solve such equations in the general case. Thank you!
There are more solutions than just $\tau=(17)(26)(35)$. Your task was only to find one such $\tau$, so that's fine, but it is not too hard to find them all. And in the process, see that the cycle structure of $\tau$ is unrelated to the cycle structure of your permutations. The cycles in your given permutations are all disjoint, and that is important. $\tau$ must map the three elements of the triple $(123)$ to $7$, $6$, and $5$, but not necessarily in that order, since $(765)=(657)=(576)$. $\tau(1)$ could be $5$, $6$, or $7$. Based on the choice made, $\tau(2)$ and $\tau(3)$ are determined. The other two cycles are both 2-cycles, and since they are disjoint they may be swapped. So $\tau$ can send $\{4,5\}$ to either $\{4,3\}$ or $\{2,1\}$, with $\{6,7\}$ going to the other pair, and there is no need to respect the internal order of these pairings ($(43)=(34)$). Considering the three-cycle, we have one of the following: $$\begin{align} \tau:&1\mapsto7&\tau:&1\mapsto6&\tau:&1\mapsto5\\ \tau:&2\mapsto6&\tau:&2\mapsto5&\tau:&2\mapsto7\\ \tau:&3\mapsto5&\tau:&3\mapsto7&\tau:&3\mapsto6\\ \end{align}$$ Considering the two-cycles, with the assignment $\tau(\{4,5\})=\{4,3\}$ and $\tau(\{6,7\})=\{2,1\}$, we have one of the following: $$\begin{align} \tau:&4\mapsto4&\tau:&4\mapsto4&\tau:&4\mapsto3&\tau:&4\mapsto3\\ \tau:&5\mapsto3&\tau:&5\mapsto3&\tau:&5\mapsto4&\tau:&5\mapsto4\\ \tau:&6\mapsto2&\tau:&6\mapsto1&\tau:&6\mapsto2&\tau:&6\mapsto1\\ \tau:&7\mapsto1&\tau:&7\mapsto2&\tau:&7\mapsto1&\tau:&7\mapsto2\\ \end{align}$$ The mirror assignment $\tau(\{4,5\})=\{2,1\}$, $\tau(\{6,7\})=\{4,3\}$ gives four more options. This gives $3\times 8=24$ possible maps for $\tau$ in total, which matches the order of the centralizer of $\sigma$ ($3\cdot2\cdot2\cdot2=24$). Calculating cycle structures for the twelve maps with $\tau(\{4,5\})=\{4,3\}$, we get: $$(17)(26)(35), (1726)(35), (17)(26)(354), (1726)(354), (162537), (16)(2537), (1625437), (16)(25437), (153627), (1536)(27), (1543627), (15436)(27)$$ The remaining twelve solutions come from the mirror assignment in the same way.
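Since it is easy to miss the assignment sending $\{4,5\}$ to $\{2,1\}$ and $\{6,7\}$ to $\{4,3\}$, here is a brute-force check (my addition) that enumerates all $7! = 5040$ permutations and counts the $\tau$ with $\tau\sigma\tau^{-1}$ equal to the target:

```python
from itertools import permutations

# sigma = (123)(45)(67) and the target (765)(43)(21), as maps on {1,...,7}
sigma = {1: 2, 2: 3, 3: 1, 4: 5, 5: 4, 6: 7, 7: 6}
target = {7: 6, 6: 5, 5: 7, 4: 3, 3: 4, 2: 1, 1: 2}

# tau sigma tau^{-1} = target  <=>  tau(sigma(i)) = target(tau(i)) for all i
solutions = []
for p in permutations(range(1, 8)):
    tau = dict(zip(range(1, 8), p))
    if all(tau[sigma[i]] == target[tau[i]] for i in range(1, 8)):
        solutions.append(tau)

print(len(solutions))   # 24 — the order of the centralizer of sigma
swap = {1: 7, 2: 6, 3: 5, 4: 4, 5: 3, 6: 2, 7: 1}   # (17)(26)(35)
assert swap in solutions
```

The count 24 agrees with the centralizer argument: the solution set is a coset of the centralizer of $\sigma$, which for cycle type $(3,2,2)$ has order $3\cdot2\cdot2\cdot2=24$.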
{ "language": "en", "url": "https://math.stackexchange.com/questions/571076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Showing that a sequence of random variables with $\mathcal{L}(X_{n})$ (I.e., law) uniform on $[-n,n]$ does not converge at all Let $(X_{n})_{n\geq 1}$ be a sequence of real valued random variables with $\mathcal{L}(X_{n})$ (that is, law or distribution) uniform on $[-n,n]$. In what sense(s) do $X_{n}$ converge to a random variable $X$? We are told that the answer is none. To prove this, are we supposed to show that $X_{n}$ does not converge under the weakest form of convergence possible (convergence in distribution/law)? And if we do, is that sufficient to show that there is no convergence? Also, how does the fact that the r.v.'s are uniformly distributed give us that?
Convergence in distribution means $$\lim_{n\rightarrow \infty}F_{X_n}(x) = F_X(x)$$ where the RHS is a distribution function, and the equality must hold at every $x$ at which $F_X$ is continuous. Our distribution function is $$F_{X_n}(x)= \begin{cases}0&\text{if $x<-n$}\\\frac{x+n}{2n}&\text{if $x\in[-n,n]$} \\1&\text{if $x> n$.}\end{cases}$$ For any fixed $x$, we eventually have $n > |x|$, so only the middle branch matters in the limit: $$\lim_{n\rightarrow \infty}F_{X_n}(x) = \lim_{n\rightarrow \infty}\Big (\frac{x}{2n} + \frac 12\Big) = 0+\frac 12$$ The constant function $1/2$ does not satisfy the properties of a distribution function, specifically $$\lim_{x\rightarrow -\infty}\frac 12 =\frac 12 \neq 0,\;\;\lim_{x\rightarrow \infty}\frac 12 =\frac 12 \neq 1$$ Carefully note that $n$ and $x$ do not "go together" at plus/minus infinity. To study convergence we first send $n$ to infinity for each fixed $x$, and only then examine the behavior of the limiting function as its argument ranges over $\mathbb{R}$. Finally, since almost sure convergence, convergence in probability, and $L^p$ convergence all imply convergence in distribution, the failure of this weakest mode rules out every standard mode of convergence — which answers the "is that sufficient" part of the question.
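To see the pointwise limit numerically (my addition): for each fixed $x$, $F_{X_n}(x)$ tends to $1/2$ as $n$ grows, regardless of the value of $x$.

```python
def F(n, x):
    """CDF of the uniform distribution on [-n, n]."""
    if x < -n:
        return 0.0
    if x > n:
        return 1.0
    return (x + n) / (2.0 * n)

# for every fixed x the limit is 1/2 -- a constant, not a valid CDF
for x in (-50.0, 0.0, 3.0, 1000.0):
    print(x, [round(F(n, x), 6) for n in (10**2, 10**4, 10**8)])
    assert abs(F(10**8, x) - 0.5) < 1e-4
```

The printed rows all drift toward $0.5$, illustrating why the limiting function cannot be the CDF of any random variable.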
{ "language": "en", "url": "https://math.stackexchange.com/questions/571180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$\mathbb{E}[e^{Xt}] = \mathbb{E}[\mathbb{E}[e^{Xt}\mid Y]] = \mathbb{E}[M_{X\mid Y}(t)]$? $$\mathbb{E}[e^{Xt}] = \mathbb{E}[\mathbb{E}[e^{Xt}\mid Y]] = \mathbb{E}[M_{X\mid Y}(t)]$$ How do I get the above statement? I don't understand how the 1st step gives $\mathbb{E}[e^{Xt}]=\mathbb{E}[\mathbb{E}[e^{Xt}\mid Y]]$, nor why in the 2nd step $\mathbb{E}[e^{Xt}\mid Y] = M_{X\mid Y}(t)$. This is from part (b) of the question below, and its provided solution (see the first 3 lines).
The first step is what is called the law of iterated expectations. Simply put, if $X,Y$ are random variables then $$\mathbb{E}_X[X] = \mathbb{E}_Y[\mathbb{E}_{X \mid Y}[X \mid Y]],$$ and by definition $\mathbb{E}_{X \mid Y}[X \mid Y]$ is a function of $Y$. Again by definition, $\mathbb{E}_{X \mid Y}[e^{Xt}\mid Y]$ is the moment generating function of the random variable defined by the conditional PDF $f_{X\mid Y}(x\mid y)$.
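The original exercise is an image, so here is a check on a stand-in model of my own choosing: $Y \sim N(0,1)$ and $X \mid Y \sim N(Y, 1)$. Then $M_{X\mid Y}(t) = e^{Yt + t^2/2}$, and marginally $X \sim N(0,2)$, so both sides should equal $e^{t^2}$. A Monte Carlo estimate confirms $\mathbb{E}[e^{tX}] = \mathbb{E}[M_{X\mid Y}(t)]$.

```python
import math
import random

random.seed(1)
t = 0.5
trials = 200_000

lhs = rhs = 0.0
for _ in range(trials):
    y = random.gauss(0.0, 1.0)            # Y ~ N(0, 1)
    x = random.gauss(y, 1.0)              # X | Y = y ~ N(y, 1)
    lhs += math.exp(t * x)                # sample of e^{tX}
    rhs += math.exp(y * t + t * t / 2.0)  # sample of M_{X|Y}(t) = E[e^{tX} | Y]
lhs /= trials
rhs /= trials

exact = math.exp(t * t)   # E[e^{tX}] for X ~ N(0, 2) is exp(2 t^2 / 2) = e^{t^2}
print(lhs, rhs, exact)
assert abs(lhs - exact) < 0.05
assert abs(rhs - exact) < 0.05
```

Both estimates land on $e^{t^2} \approx 1.284$, illustrating the tower property applied to the MGF without reference to the specific distributions in the exercise.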
{ "language": "en", "url": "https://math.stackexchange.com/questions/571264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }