Prove that $\int_0^\infty e^{-a^2 s^2} \cos(2 b s) \,\mathrm ds=\frac{\sqrt{\pi}}{2a}e^{-b^2/a^2} $ Prove that
$$I(a,b)=\int_0^\infty e^{-a^2 s^2} \cos(2 b s) \,\mathrm ds=\frac{\sqrt{\pi}}{2a}e^{-b^2/a^2}\quad a>0 $$
I can prove it by a differential-equations technique (differentiating with respect to $b$ to obtain a first-order equation), but I want to prove it directly.
|
Note that we can complete the square and get the following integral:
$$\frac12 \Re{\left [\int_{-\infty}^{\infty} ds \, e^{-a^2 s^2} \, e^{i 2 b s} \right ]} = \frac12 e^{-b^2/a^2} \Re{\left [\int_{-\infty}^{\infty} ds \, e^{-a^2 (s-i b/a^2)^2} \right ]}$$
We can prove that the integral on the RHS is simply $\sqrt{\pi}/a$ as you may expect (and hope) by using Cauchy's theorem. Consider the integral
$$\oint_C dz \, e^{-a^2 z^2}$$
where $C$ is the rectangle in the complex plane having vertices at $-R-i b/a^2$, $R-i b/a^2$, $R$, and $-R$ (in that order). This contour integral is equal to
$$\int_{-R}^R dx \, e^{-a^2 (x-i b/a^2)^2} + i \int_{-b/a^2}^0 dy \, e^{-a^2 (R+i y)^2}\\+\int_R^{-R} dx \, e^{-a^2 x^2} + i \int_0^{-b/a^2} dy \, e^{-a^2 (-R+i y)^2}$$
By Cauchy's theorem, the contour integral is zero. On the other hand, in the limit as $R\to\infty$, the second and fourth integrals above vanish. Thus,
$$\int_{-\infty}^{\infty} dx \, e^{-a^2 (x-i b/a^2)^2} = \int_{-\infty}^{\infty} dx \, e^{-a^2 x^2} = \frac{\sqrt{\pi}}{a}$$
The result follows.
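For a quick numerical sanity check, here is a minimal Python sketch (the parameter values $a=1.3$, $b=0.7$ are arbitrary):
import numpy as np
from scipy.integrate import quad

a, b = 1.3, 0.7  # arbitrary test values, a > 0
lhs, _ = quad(lambda s: np.exp(-a**2 * s**2) * np.cos(2*b*s), 0, np.inf)
rhs = np.sqrt(np.pi) / (2*a) * np.exp(-b**2 / a**2)
print(lhs, rhs)  # the two printed values agree to quadrature accuracy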
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Does the series $ \sum_{n=1}^{\infty}\left( 1-\cos\big(\frac{1}{n} \big) \right)$ converge? I'm having trouble determining whether the series:
$$
\sum_{n=1}^{\infty}\left[1-\cos\left(1 \over n\right)\right]
$$
converges.
I have tried the root test:
$$\lim_{n\rightarrow\infty}\sqrt[n]{1-\cos\frac{1}{n}}=\lim_{n\rightarrow\infty}\left(1-\cos\frac{1}{n}\right)^{1/n}=\lim_{n\rightarrow\infty}\mathrm{e}^{\frac{\log(1-\cos\frac{1}{n})}{n}}=\mathrm{e}^{\lim_{n\rightarrow\infty}\frac{\log(1-\cos\frac{1}{n})}{n}}$$
Now by applying the Stolz–Cesàro theorem, that upper limit is equal to:
\begin{align}
\lim_{n\rightarrow\infty}\frac{\log(1-\cos\frac{1}{n+1})-\log(1-\cos\frac{1}{n})}{(n+1)-n}&=\lim_{n\rightarrow\infty}\left(\log(1-\cos\frac{1}{n+1})-\log(1-\cos\frac{1}{n})\right)
\\&=\lim_{n\rightarrow\infty}\log{\frac{1-\cos{\frac{1}{n+1}}}{1-\cos{\frac{1}{n}}}}
\end{align}
Now I'm totally stuck, unless that quotient is actually $1$, in which case the limit would be $0$, the root test result would be $\mathrm{e}^0=1$ (inconclusive), and all this would have been to no avail.
I'm not sure this method was the best idea; the series sure seems way simpler than that, so probably another method is more appropriate?
|
We know that, as $t\to 0$,
$$\cos t = 1-\dfrac{t^2}{2}+O(t^4),$$
hence $1-\cos\left(\dfrac{1}{n}\right)\sim \dfrac{1}{2n^2}$ as $n \to +\infty$. Since
$$\sum_{n=1}^{+\infty}\dfrac{1}{2n^2}=\frac{\pi^2}{12}$$
converges, the given series converges by the limit comparison test.
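To illustrate the comparison numerically, a small Python sketch (the cutoffs are arbitrary):
import math

n = 10**4
print((1 - math.cos(1/n)) / (1 / (2*n**2)))         # ~ 1: the terms are equivalent
print(sum(1 - math.cos(1/k) for k in range(1, n)))  # partial sums stabilize near 0.78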
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 5,
"answer_id": 4
}
|
Do we know if there exist true mathematical statements that can not be proven? Given the set of standard axioms (I'm not asking for proof of those), do we know for sure that a proof exists for all unproven theorems? For example, I believe the Goldbach Conjecture is not proven even though we "consider" it true.
Phrased another way, have we proven that if a mathematical statement is true, a proof of it exists? That, therefore, anything that is true can be proven, and anything that cannot be proven is not true? Or, is there a counterexample to that statement?
If it hasn't been proven either way, do we have a strong idea one way or the other? Is it generally thought that some theorems can have no proof, or not?
|
Amongst the many excellent answers you have received, nobody appears to have directly answered your question.
Goldbach's conjecture can be true and provable, true but not provable using the "normal rules of arithmetic", or false.
There are strong statistical arguments which suggest it is almost certainly true.
Whether it is provable using the "normal laws of arithmetic" - like those used to prove Fermat's Last Theorem or the Prime Number Theorem and everything you learned in high school maths - is not known. Assuming it can't be proven is a complete dead-end. To be interested at all you have to either assume it is true and look for a proof, or assume it is false and look for a counterexample.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "166",
"answer_count": 10,
"answer_id": 7
}
|
Evaluating some Limits as Riemann sums. I really have difficulties with Riemann Sums, especially the ones as below:
$$\lim_{n\to\infty} \left(\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{3n}\right)$$
When I try to write this as a sum, it becomes $$\frac { 1 }{ n } \sum _{ k=1 }^{ 2n } \frac { 1 }{ 1+\frac { k }{ n } } .$$ The problem is, however, that to be able to compute this limit as an integral I need to have this sum from $1$ to $n$. There are some other questions like this; if I can understand this one, I will be able to solve the others.
|
Using the Euler–Mascheroni constant $\gamma$: $$\sum_{k = 1}^{n}\frac{1}{k} - \log{n} \rightarrow \gamma$$ $$\sum_{k = 1}^{3n}\frac{1}{k} - \log{3n} \rightarrow \gamma$$ Subtracting, $$\sum_{k = n+1}^{3n}\frac{1}{k} - \log{3n} +\log{n} \rightarrow 0$$ and since $\log 3n - \log n = \log 3$, $$\sum_{k = n+1}^{3n}\frac{1}{k} \rightarrow \log{3}$$
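A quick numerical check of the limit (Python; $n=10^6$ is arbitrary):
import math

n = 10**6
print(sum(1/k for k in range(n + 1, 3*n + 1)))  # ~ 1.098612
print(math.log(3))                              # 1.098612...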
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 0
}
|
The derivative of a holomorphic function on the boundary of the unit circle Let $f$ be holomorphic on $D(0,1)\cup \{1\}$, and $f(0)=0$, $f(1)=1$, $f(D(0,1))\subset D(0,1)$. Prove that $|f'(1)|\geq 1$.
I have no idea. Maybe $f'(\xi)=[f(1)-f(0)]/(1-0)$ for some $\xi \in (0,1)$, and the maximum modulus principle applies? But this is not exact.
|
It is easier to argue by contradiction. We are given that $f'(1)$ exists. Suppose $|f'(1)|<1$. Pick $c$ so that $|f'(1)|<c<1$. There is a neighborhood of $1$ in which $$|f(z)-f(1)|\le c|z-1|$$ By the reverse triangle inequality $$|f(z)|\ge |f(1)|-c|z-1| = 1-c|z-1| \tag{1}$$
On the other hand, by the Schwarz lemma
$$|f(z)|\le |z| \tag{2}$$
From (1) and (2) we get
$$1-|z|\le c|z-1|\tag{3}$$
which is impossible when $z\in (0,1)$: there $|z-1|=1-z$, so (3) reads $1-z\le c(1-z)$, i.e. $1\le c$, contradicting $c<1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
I've read that abelian categories can naturally be enriched in $\mathbf{Ab}.$ How does this actually work? Wikipedia defines the notion of an abelian category as follows (link).
A category is abelian iff
* it has a zero object,
* it has all binary products and binary coproducts,
* it has all kernels and cokernels, and
* all monomorphisms and epimorphisms are normal.
It later explains that an abelian category can naturally be enriched in $\mathbf{Ab},$ as a result of the first three axioms above.
How does this actually work?
|
The claim is not trivial and requires a bit of ingenuity. For a guided solution, see Q6 here. The four steps are as follows:
* Show that the category has finite limits and finite colimits.
* Show that a morphism $f$ is monic (resp. epic) if and only if $\ker f = 0$ (resp. $\operatorname{coker} f = 0$).
* Show that the canonical morphisms $A + B \to A \times B$ are isomorphisms, and then deduce that the category is enriched over commutative monoids.
* Show that every morphism has an additive inverse (by inverting an appropriate matrix).
A complete proof is given as Theorem 1.6.4 in [Borceux, Handbook of categorical algebra, Vol. 2].
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Manifolds and Topological Spaces From my understanding, manifolds are structures defined on topological spaces. So if $M$ is a manifold defined on a topological space $(X,\tau)$ and $X\subseteq\mathbb R^3$, does this mean $M$ is a $3$-manifold? If so, does this generalize to higher dimensions, e.g. if $X\subseteq\mathbb R^n$, would that mean $M$ is an $n$-manifold? Thanks.
|
It is usually said that the notion of manifolds was introduced by Riemann in 1854, but it wasn't until Whitney's work in 1936 that people knew what abstract manifolds are, other than being submanifolds of Euclidean space.
A plane in $\mathbb R^3$ is a very simple counterexample to your question: it is a $2$-manifold even though it is a subset of $\mathbb R^3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to find the minimum $a$ such that $x_{n}>0$ Let $a>0$ be a real number, and let the sequence $\{x_{n}\}$ satisfy
$$x_{1}=1,\qquad ax_{n}=x_{1}+x_{2}+\cdots+x_{n+1}.$$
Find the minimum value of $a$ such that $x_{n}>0$ for all $n\ge 1$.
My try: since
$$ax_{n}=x_{1}+x_{2}+\cdots+x_{n+1}$$
$$ax_{n+1}=x_{1}+x_{2}+\cdots+x_{n+2}$$
$$\Longrightarrow a(x_{n+1}-x_{n})=x_{n+2}$$
then we have
$$r^2-ar+a=0$$
case1: $$\Delta=a^2-4a>0\Longrightarrow a>4$$
then $$r_{1}=\dfrac{a+\sqrt{a^2-4a}}{2},r_{2}=\dfrac{a-\sqrt{a^2-4a}}{2}$$
so
$$x_{n}=Ar^n_{1}+Br^n_{2}$$
since
$$x_{1}=1,x_{2}=a-1$$
$$\Longrightarrow Ar_{1}+Br_{2}=1,Ar^2_{1}+Br^2_{2}=a-1$$
I guess $$a_{\min}=4$$
but from here the computations get very messy. Thank you.
|
You've done good work; do not feel bad about it, and I think your guess is true (I haven't checked it yet).
If you continue, you will reach the answer eventually.
However, this approach forces us to do many calculations.
I would like to present the following problem and its proof to reduce your work:
Problem
$\{x_n\}$ is a positive real sequence and $a$ is a positive number with $a<4$.
Then there exists $N \in \mathbb{N}$ such that:
$x_1+x_2+\cdots+x_N+x_{N+1} > a\, x_N$.
Proof
Assume that $\forall n \in \mathbb{N}: x_1+x_2+\cdots+x_n+x_{n+1} \le a\, x_n$. (1)
Denote $X_n=\sum_{k=1}^n x_k$; then $\{X_n\}$ is an increasing sequence.
$(1) \Leftrightarrow X_{n+1} \le a( X_n-X_{n-1})\ \forall n \ge 2$ (2)
Denote $a_n=\frac{X_{n+1}}{X_n}\ \forall n\ge 1$; then $a_n > 1\ \forall n$.
$(2) \Leftrightarrow a_n \le a\left( 1-\frac{1}{a_{n-1}}\right)\ \forall n \ge 2$
Thus, $a \ge a_n+\frac{a}{a_{n-1}} \ge 2\sqrt{\frac{a\,a_n}{a_{n-1}}} > a\sqrt{\frac{a_n}{a_{n-1}}}\ \forall n \ge 2$ (due to $2\sqrt{a}>a$, which holds since $a<4$).
Therefore $a_{n-1} > a_n\ \forall n \ge 2$.
In sum, $\{a_n\}$ is a decreasing sequence with $a_n >1$ for all $n$, so it has a limit.
Denote $A=\lim_{n \rightarrow \infty} a_n$.
We have $a_n \le a\left( 1-\frac{1}{a_{n-1}}\right)\ \forall n \ge 2 \Rightarrow A \le a\left( 1-\frac{1}{A}\right)$,
$\Leftrightarrow a \ge A+\frac{a}{A} \ge 2\sqrt{a},$
which is impossible because $a<4$.
Back to your problem: using this lemma, you only have to deal with the case $a\ge 4$. Keep up the good work!
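A quick numerical experiment supporting this (Python sketch; the horizon of 60 steps is arbitrary). Subtracting consecutive relations gives $x_{n+2}=a(x_{n+1}-x_n)$ with $x_1=1$, $x_2=a-1$:
def first_negative(a, steps=60):
    x_prev, x = 1.0, a - 1.0              # x_1 = 1, x_2 = a - 1
    for n in range(3, steps + 3):
        x_prev, x = x, a * (x - x_prev)   # x_{n+2} = a(x_{n+1} - x_n)
        if x <= 0:
            return n                      # index of the first non-positive term
    return None

print(first_negative(3.9))  # a finite index: positivity fails for a < 4
print(first_negative(4.0))  # None: here x_n = (n+1) 2**(n-2) > 0 for all n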
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Finding the improper integral $\int^\infty_0\frac{1}{x^3+1}\,dx$ $$\int^\infty_0\frac{1}{x^3+1}\,dx$$
The answer is $\frac{2\pi}{3\sqrt{3}}$.
How can I evaluate this integral?
|
$$x^3+1 = (x+1)(x^2-x+1)$$
Logic: do a partial fraction decomposition and find $A,B,C$.
$$\frac{1}{x^3+1} = \frac{A}{x+1}+\frac{Bx+C}{x^2-x+1}$$
By comparing the coefficients of the corresponding powers of $x$, you end up with equations in $A,B,C$. After solving you get:
$$A=\frac{1}{3},\quad B=-\frac{1}{3},\quad C=\frac{2}{3}$$
Then integrate term by term using
$$\int\frac{dx}{x}=\log x+c$$
for the first fraction; for the second, complete the square, $x^2-x+1=\left(x-\frac12\right)^2+\frac34$, and use $\int\frac{dx}{x^2+a^2}=\frac1a\arctan\frac{x}{a}+c$.
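A symbolic check (Python with SymPy):
from sympy import symbols, integrate, oo, sqrt, pi, simplify

x = symbols('x')
val = integrate(1/(x**3 + 1), (x, 0, oo))
print(val)                               # equals 2*pi/(3*sqrt(3))
print(simplify(val - 2*pi/(3*sqrt(3))))  # 0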
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Minimum distance between a disk in 3d space and a point above the disk How can I calculate the minimum distance between a point on the perimeter of a disk in 3d space and a point above the disk?
For example, there is a disk in 3d space with center [0,0,0]. It has radius 3 and lies flat on the x,y plane. If there is a particle above the disk at [5,5,5], how do I calculate the minimum distance from this particle to a point on the perimeter of the disk?
Here is my attempt so far:
vec1 = vector between disk center and particle
vec1 = [(5 - 0), (5 - 0), (5 - 0)]
vec1 = [5,5,5]
unitvec1 = unit vector in direction of vec1
unitvec1 = vec1/norm(vec1)
unitvec1 = [0.5774, 0.5774, 0.5774]
vec2 = vector between disk center and point on the perimeter closest to the particle
vec2 = disk radius * unitvec1, and make z element = 0
vec2 = 3 * [0.5774, 0.5774, 0]
vec2 = [1.7321, 1.7321, 0]
vec3 = vector between particle and point on the perimeter closest to the particle
vec3 = vec1 - vec2
vec3 = [3.2679, 3.2679, 5.0000]
So the min distance is
norm(vec3) = 6.8087
But this method doesn't always work. If I try it with disk center [0,0,0], particle location [0,0,6], and disk radius 9, it gives the minimum distance to be 6. This can't be correct, because the distance between the center of the disk and the particle is 6, so the distance to the perimeter must be larger.
What am I doing wrong, and how should I actually calculate this?
Thanks!
note: I am using pseudo code, not an actual programing language
|
Your logic is wrong because you make the wrong thing a unit vector. The length of your vec2 is not necessarily equal to the radius of the disk, because it equals the radius multiplied by the length of the projection of unitvec1 onto the disk's plane, which can be anything between 0 and 1.
Instead of normalizing vec1, you should zero out the z component first (i.e. project vec1 onto the disk's plane) and then normalize that projection before scaling by the radius.
In this case, be aware that the projection might be zero (as in your final example, where the particle lies on the disk's axis). When normalizing you should check whether this happens; if so, every point on the perimeter is equally close, so you may arbitrarily set vec2 to any point on the perimeter.
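For concreteness, here is a minimal Python/NumPy sketch of the corrected computation (our own helper, assuming the disk is centered at the origin in the z = 0 plane):
import numpy as np

def min_dist_to_perimeter(particle, radius):
    p = np.asarray(particle, dtype=float)
    proj = np.array([p[0], p[1], 0.0])        # project onto the disk's plane
    norm = np.linalg.norm(proj)
    if norm == 0.0:                           # particle on the axis:
        perim = np.array([radius, 0.0, 0.0])  # any perimeter point works
    else:
        perim = radius * proj / norm          # normalize the projection, then scale
    return np.linalg.norm(p - perim)

print(min_dist_to_perimeter([5, 5, 5], 3))   # ~6.448 (not 6.8087 as above)
print(min_dist_to_perimeter([0, 0, 6], 9))   # sqrt(81 + 36) ~ 10.817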
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Find the equation of the normal to the curve $y = 8/(4 + x^2)$ , at $x = 1$. When you first differentiate the above, you get $-8/25$, right? Then you derive the gradient for a normal and proceed so on and so forth.
The textbook I'm using says when you differentiate, you get $-16/25$. I believe that's wrong...
|
The textbook is correct.
You can use
$$\left\{\frac{f(x)}{g(x)}\right\}^\prime=\frac{f^\prime(x)g(x)-f(x)g^\prime(x)}{\{g(x)\}^2}.$$
Letting $$h(x)=\frac{8}{4+x^2},$$
we have
$$h^\prime(x)=\frac{0-8\cdot 2x}{(4+x^2)^2}=-\frac{16x}{(4+x^2)^2}.$$
Hence, we will have
$$h^\prime(1)=-\frac{16}{25}.$$
So...
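A one-line check with SymPy (Python):
from sympy import symbols, diff

x = symbols('x')
print(diff(8/(4 + x**2), x).subs(x, 1))  # -16/25, agreeing with the textbook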
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Classifying isolated singularities Can anyone help me out with finding the nature of the singularities of the following function:
$$g(z)=\frac{\cos z-1}{z^5}$$ without using Taylor expansions?
|
You are trying to figure out the nature of the singularity $z_0=0$. Write $p(z)=\cos(z)-1$ and $q(z)=z^5$. Then $g(z)=\frac{p(z)}{q(z)}$. Notice that $z_0$ is a zero of order $2$ for $p$ and a zero of order $5$ for $q$. Then you must have seen a theorem that states that $z_0$ is then a pole of order $5-2=3$.
Otherwise, write $g(z)=\frac{1}{z^3}f(z)$ with $f(z)=\frac{\cos(z)-1}{z^2}$. Use l'Hôpital's rule to compute that $f(z_0)=-\frac{1}{2}$ and notice that $f$ is holomorphic on $\mathbb{C}$. Then take its Taylor series and multiply by $\frac{1}{z^3}$ to find the Laurent series of $g$. I will let you compute its residue at $z_0$.
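The Laurent expansion is easy to inspect with SymPy (Python):
from sympy import symbols, cos, series

z = symbols('z')
print(series((cos(z) - 1)/z**5, z, 0, 2))
# -1/(2*z**3) + 1/(24*z) - z/720 + O(z**2): a pole of order 3, residue 1/24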
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Choose the branch for $(1-\zeta^2)^{1/2}$ that makes it holomorphic in the upper half-plane and positive when $-1<\zeta<1$ From Stein/Shakarchi's Complex Analysis page 232:
...We consider for $z\in \mathbb{H}$, $$f(z)=\int_0^z
\frac{d\zeta}{(1-\zeta^2)^{1/2}},$$ where the integral is taken from
$0$ to $z$ along any path in the closed upper half-plane. We choose
the branch for $(1-\zeta^2)^{1/2}$ that makes it holomorphic in the
upper half-plane and positive when $-1<\zeta<1$. As a result,
$$(1-\zeta^2)^{-1/2}=i(\zeta^2-1)^{-1/2}\quad \text{when }\zeta>1.$$
I'm a little confused about the branch they're choosing. Do they mean that we cut along $\mathbb{R}\setminus [-1,1]$? But then $(\zeta^2-1)^{-1/2}$ is undefined for $\zeta>1$...
|
Another (probably not as good) way to look at it is to think
$$
\sqrt{1-\zeta^2} = i \sqrt{\zeta -1} \sqrt{\zeta + 1}
$$
Choose the (possibly different) branches for each of the square roots on the right side to give you what you need. It might look at first like you will have problems on $z \in [-1,1]$, but the discontinuities cancel each other.
For example, use the principal branch of $\sqrt{w}$ on $\zeta - 1$, but for $\zeta + 1$ use the branch of $\sqrt{w}$ where $-2\pi < \arg(w) < 0$.
EDIT:
Use $-2 \pi \le \arg(\zeta+1) < 0$ and $-\pi < \arg(\zeta -1) \le \pi$.
Then $-\pi \le \arg(\sqrt{\zeta+1}) < 0$ and $-\pi/2 < \arg(\sqrt{\zeta -1}) \le \pi/2$.
So on $-1 < z < 1$ you get
$$
\arg(i \sqrt{\zeta -1} \sqrt{\zeta + 1} ) = \pi/2 + (-\pi) + \pi/2 = 0
$$
On $1 < z$ you get
$$
\arg(i \sqrt{\zeta -1} \sqrt{\zeta + 1} ) = \pi/2 + (-\pi) + 0 = -\pi/2
$$
so
$$
\arg(\frac{1}{i \sqrt{\zeta -1} \sqrt{\zeta + 1}} ) = \pi/2 = \arg(\frac{i}{\sqrt{t^2-1}})
$$
where $t = |\zeta| > 1$ and $\sqrt{t^2-1}$ uses the regular square root on $\mathbb{R}^+$.
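A numerical check of these branch choices (Python sketch; the helper sqrtB, implementing the $-2\pi \le \arg(w) < 0$ branch, is ours):
import numpy as np

def sqrtB(w):
    theta = np.angle(w)          # principal argument in (-pi, pi]
    if theta >= 0:
        theta -= 2*np.pi         # shift into [-2*pi, 0)
    return np.sqrt(abs(w)) * np.exp(1j*theta/2)

def f(zeta):
    # np.sqrt already uses the principal branch -pi < arg(w) <= pi
    return 1j * np.sqrt(complex(zeta - 1)) * sqrtB(zeta + 1)

print(f(0.5))    # ~ 0.866 + 0j: positive real on (-1, 1), equal to sqrt(1 - 0.25)
print(1/f(2.0))  # ~ 0.577j: equals i/sqrt(t**2 - 1) for t = 2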
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Express $\int_0^1\frac{dt}{t^{1/3}(1-t)^{2/3}(1+t)}$ as a closed path integral enclosing the interval $(0,1)$ From an old complex analysis qualifier:
Define $$I=\int_0^1\frac{dt}{t^{1/3}(1-t)^{2/3}(1+t)}.$$
* Express $I$ as a closed path integral enclosing the interval $(0,1)$.
* Evaluate $I$.
Ideas: At first I thought this was an exercise in the Schwarz-Christoffel integral, but now I'm not so sure. I'm thinking I'm just not aware of the method they're suggesting, and I was hoping someone could point me to it.
Thanks
|
Consider $$f(z) = e^{-1/3\mathrm{LogA} z} e^{-2/3\mathrm{LogB} (1-z)} \frac{1}{1+z}$$
where $\mathrm{LogA}$ is the logarithm with branch cut on the negative real axis and argument from $-\pi$ to $\pi$ and $\mathrm{LogB}$ has the branch cut on the positive real axis and argument from $0$ to $2\pi.$
Then we have continuity of $f(z)$ across the negative real axis because from above we have
$$e^{-1/3\mathrm{LogA} z} e^{-2/3\mathrm{LogB} (1-z)}
= e^{-1/3\log x - 1/3\times\pi i} e^{-2/3\log(1+x)-2/3\times 2\pi i} $$
and from below
$$e^{-1/3\mathrm{LogA} z} e^{-2/3\mathrm{LogB} (1-z)}
= e^{-1/3\log x + 1/3\times\pi i} e^{-2/3\log(1+x)}$$
because $$e^{- (1/3+4/3)\times\pi i} = e^{ 1/3\times\pi i}.$$
In this continuity argument we have used $x$ to denote the absolute value of $z$ on the negative real axis, so that $1-z=1+x$.
It follows by Morera's Theorem that $f(z)$ is actually analytic across the part of the cut on the negative real axis and $f(z)$ only has a cut on $[0,1].$
Now, considering the counterclockwise dog-bone (dumbbell) contour enclosing $(0,1)$, we get that the integral above the cut is
$$- e^{-2/3\times 2\pi i} I$$
where $I$ is the desired value and
below the cut we get the value $I.$
Now the residue of $f(z)$ at $z=-1$ is
$$e^{-1/3\times \pi i} e^{-2/3 \log 2 - 2/3\times 2\pi i}
= 2^{-2/3} e^{-5/3\pi i}.$$
We thus obtain
$$I = - 2\pi i\frac{2^{-2/3} e^{-5/3\pi i}}{1-e^{-2/3\times 2\pi i}}
= -2\pi i 2^{-2/3} \frac{e^{-\pi i}}{e^{2/3\times \pi i}-e^{-2/3\times\pi i}}\\
= \pi 2^{-2/3} \frac{2i}{e^{2/3\times \pi i}-e^{-2/3\times\pi i}}
= \pi \frac{2^{-2/3}}{\sin(2/3\pi)}
= \pi\frac{\sqrt[3]{2}}{\sqrt{3}}.$$
There is no residue at infinity to consider because $f(z)$ is $O(1/R^2)$ there on a ray to infinity at distance $R$ from the origin.
This technique is documented at Wikipedia. In the present case we get that
$$-\lim_{|z|\to\infty} z f(z) \sim -\lim_{|z|\to\infty} \frac{z}{|z|^2} = 0.$$
This concludes the argument.
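As a numerical sanity check (Python with mpmath, whose quadrature copes with the endpoint singularities):
from mpmath import mp, mpf, quad, pi, sqrt, cbrt

mp.dps = 25
I = quad(lambda t: 1/(t**(mpf(1)/3) * (1 - t)**(mpf(2)/3) * (1 + t)), [0, 1])
print(I)                    # 2.2853...
print(pi*cbrt(2)/sqrt(3))   # the same value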
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
}
|
For non-negative iid random variables, show that the max converges ip From Resnick's A Probability Path, exercise 6.7.11. (Studying for my comprehensive exam.)
Suppose $\{X_n, n \ge1\}$ are iid and non-negative and define $M_n=\bigvee^n_{i=1}X_i$.
a) Check that $P[M_n > x] \le nP[X_1 > x].$
b) Show that if $E(X^p_1) < \infty$, then $\frac{M_n}{n^{1/p}} \stackrel{p}{\to} 0.$
I did part a:
\begin{equation}
[M_n > x] \subseteq \bigcup^n_{i=1}[X_i>x] \\
P[M_n > x] \le P\left(\bigcup^n_{i=1}[X_i>x]\right) \le \sum^n_{i=1}P[X_i>x] = nP[X_1>x]
\end{equation}
Part b is giving me trouble--I must be missing something! I assumed that part (a) was meant to be used in part b:
\begin{align}
P\left(\left|\frac{M_n}{n^{1/p}}-0\right|>\epsilon\right)&=P[M_n > n^{1/p}\epsilon]\\
&\le nP[X_1>n^{1/p}\epsilon]\\
&=nP[X_1^p > n\epsilon^p]\\
&\le \frac{E(X_1^p)}{\epsilon^p} \nrightarrow0.
\end{align}
Now, I recognize that the fact that Markov's inequality does not push the probability to zero does not mean the probability actually fails to converge to zero. But it seems as if the author intends me to use the inequality from part (a), which naturally leads to Markov. What am I missing here?
|
Use this instead in the very last inequality:
$$ nP[X_1 > n^{1/p}\epsilon] \le \frac{E(X_1^pI_{X_1 > n^{1/p}\epsilon})}{\epsilon^p} .$$
You should have a theorem somewhere that if $Y \ge 0$ and $EY < \infty$, then $E(YI_{Y>\alpha}) \to 0$ as $\alpha \to \infty$.
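An illustrative simulation (Python/NumPy sketch; we take $p=2$ and a Pareto-type distribution with tail index $3$, so that $E(X_1^2)<\infty$):
import numpy as np

rng = np.random.default_rng(0)
p = 2
for n in (10**2, 10**4, 10**6):
    x = rng.pareto(3.0, size=n) + 1.0   # iid, non-negative, finite 2nd moment
    print(n, x.max() / n**(1/p))        # the ratio drifts (slowly) toward 0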
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What's the proof that the sum and multiplication of natural numbers is a natural number? I'm explaining the construction of the natural numbers to someone and I'm asking him to show where to find $C$ and $F$ with $a,b,g,h\in \mathbb{N}$ in:
$$a+b=C$$
$$g*h=F$$
I know intuitively (and from some readings) that $C$ and $F$ are natural numbers too, but I don't know the proof of it. I believe that every sum or product of natural numbers is a natural number; I just don't know why.
Until now I could only elaborate a geometric demonstration that seems poor to me, see:
The result of the sum of two natural numbers seems certainly to be a natural number:
$$a+b=\begin{matrix}
{*}&{*}&{...}&{+}&{*}&{*}&{...}\end{matrix}\tag{1.0}$$
For multiplication:
$$a \times b=\begin{matrix}
{*}&{*}&{\cdots}\\
{*}&{*}&{\cdots}\\
{\vdots}&{\vdots}&{\ddots}\\
\end{matrix}$$
Then moving each horizontal line to form a new line of height one:
$$a\times b=(* \;\; *\;\; \cdots)+(* \;\; *\;\; \cdots)+(\;\;\vdots \;\;\; \vdots\;\; \ddots)$$
We can use the property given in $(1.0)$ to show the desired result.
The problem for me is that I think I'm pushing $(1.0)$ through intuition and I don't feel the legitimacy of it.
|
What is the extent of the mathematical knowledge of your friend? That these statements are true is a consequence of the axiomatic construction of the natural numbers using the successor function. To say a little more: essentially, what makes $a$ and $b$ natural numbers? They are formed by adding $1$ to $1$ some number of times. Intuitively, one can see, then, that $a+b$ must be $1$ added to $1$ some number of times. This can be made rigorous using the successor function.
Successor Function
Once you prove that $a+b$ is a natural number, proving that $i\cdot j$ is one for $i,j \in \mathbb{N}$ can be done inductively using the definition of $i \cdot j$ and the fact that $a+b$ is a natural number.
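To make the inductive definitions concrete, here is a small Python sketch of Peano-style addition and multiplication (built-in integers merely stand in for the successor structure):
def succ(n):
    return n + 1                 # the successor operation

def add(a, b):
    # add(a, 0) = a ; add(a, succ(b)) = succ(add(a, b))
    return a if b == 0 else succ(add(a, b - 1))

def mul(a, b):
    # mul(a, 0) = 0 ; mul(a, succ(b)) = add(mul(a, b), a)
    return 0 if b == 0 else add(mul(a, b - 1), a)

print(add(3, 4), mul(3, 4))  # 7 12, built from 0 and succ alone
Since both definitions only ever apply succ (and previously defined operations) to natural numbers, closure follows by induction.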
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Why, conceptually, is it that $\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}$? Why, conceptually, is it that $$\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}?$$ I know how to prove that this is true, but I don't understand conceptually why it makes sense.
|
Sure. You're dividing into two cases. In the first, you commit to including one particular element; now you have $r-1$ more choices to make out of the remaining $n-1$ things. In the other case you exclude that element; you've eliminated an option, but must still pick all $r$ elements from the remaining $n-1$. These two cases are exhaustive and exclusive.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 6,
"answer_id": 1
}
|
Tangents at singularities Given an implicit polynomial curve $f=0$ with a singularity at the origin, how do I find the tangents to the curve at that point?
Wikipedia says to ignore all the terms except the ones of lowest degree. Why is this true?
Take for example the curve $x^3+x^4+y^3=0$. What are the tangents at the origin?
|
Wikipedia is right: the tangent cone to your curve $C$ is given by $x^3+y^3=0$.
If the base field is algebraically closed of characteristic $\neq 3$ and if $j\neq 1$ is a primitive cubic root of$1$, it consists of the three lines $x+y=0, x+jy=0, x+j^2y=0$, since $x^3+y^3=(x+y)(x+jy)(x+j^2y)$.
But where does the Wikipedia recipe come from?
Here is an explanation: we are looking at the intersection at $(0,0)$ of our curve with the line $L_{ab}$ given parametrically by $x=at, y=bt$ (with $a,b$ not both zero).
The values of $t$ corresponding to an intersection point $L_{ab}\cap C$ are those satisfying the equation $(at)^3+(at)^4+(bt)^3=0$ gotten by substituting $x=at, y=bt$ into the equation of $C$.
The equation for $t$ is thus $$t^3(a^3+b^3+a^4t)=0 \quad (MULT)$$ The result is that $t=0$ is a root of multiplicity $3$ for all values of $a,b$ except for those with $a^3+b^3=0$, for which the multiplicity of the zero root of $(MULT)$ is $4$.
So we decree that the lines $L_{ab}$ with $a^3+b^3=0$ are the tangents to $C$ at the origin because they cut $C$ with a higher multiplicity, namely $4$, than all the other lines which only cut $C$ with multiplicity $3$.
This calculation is easily generalized to arbitrary curves through the origin and justifies Wikipedia's recipe.
[Purists will notice that the above is purely algebraic: no limits are involved and we don't need a topology on the base field.
But assuming that the base field is $\mathbb C$ with its metric topology and using limits is fine with me if it helps in understanding the situation. In mathematics as in love and war all is fair...]
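A quick symbolic confirmation of the substitution step (Python with SymPy):
from sympy import symbols, factor

a, b, t = symbols('a b t')
print(factor((a*t)**3 + (a*t)**4 + (b*t)**3))
# t**3*(a**4*t + a**3 + b**3): the root t = 0 has multiplicity 3,
# jumping to 4 exactly when a**3 + b**3 = 0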
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
}
|
Use of L'Hôpital's rule Define $f:\mathbb{N} \to \mathbb{R}$ by $f(n)=\frac{\sin (\frac{n\pi}{4})}{n}.$
May I know if we can use L'Hôpital's rule to evaluate $\lim_{n \to 0} f(n)$? If not, how can we evaluate the limit without the use of series?
Thank you.
|
There is no such thing as $\lim_{n\to 0}f(n)$ if $f$ is only defined on $\mathbb N$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What's the digit "a" in this number? $a$ and $b$ are digits in the four-digit natural number $\overline{7a5b}$. If $\overline{7a5b}$ is divisible by $18$, how many different possible values can $a$ have?
|
A number is divisible by $18$ iff it's divisible by $2$ and $9$. So, we must have $b \in \{0,2,4,6,8\}$ and $7+a+5+b$ divisible by $9$, since a number is divisible by $9$ iff the sum of its digits is divisible by $9$. I think you can solve it by now.
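A brute-force confirmation (Python):
values = {a for a in range(10) for b in range(10)
          if int(f"7{a}5{b}") % 18 == 0}
print(sorted(values), len(values))  # [0, 2, 4, 6, 7, 9] -> 6 possible values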
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/626917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Solution of $\frac{\partial f}{\partial t}(t,x) = 2 \frac{\partial^2f}{\partial x^2}(t,x)$ Consider the PDE
$$\frac{\partial f}{\partial t}(t,x) = 2 \frac{\partial^2f}{\partial x^2}(t,x)\tag{1} $$
with $t\ge0,\ x\in\mathbb R,\ f(0,x)=e^x$. I want to find $f(t,x)$.
I know that the heat equation
$$\frac{\partial p}{\partial t}(t,x) = \frac{1}{2}\frac{\partial^2p}{\partial x^2}(t,x)\tag{2}$$
with $t\ge0,\ x\in\mathbb R,\ p(0,x)=h(x)$ has the solution $p(t,x) =\mathbb E[h(x+W_t)]$ where $W_t$ is a Brownian motion.
I have tried things like setting $p(t,x)=f(2t,x)$, but I do not seem to be able to put $(1)$ into the form of $(2)$. How can I find $f(t,x)$ from using the general solution of the heat equation?
|
Let's define the Laplace transform in $t$,
$$\tilde{f}(s,x) = \int_{0}^{\infty}f(t,x)\,e^{-st}\,\mathrm{d}t.$$
Then,
$$
-f(0,x) + s\,\tilde{f}(s,x)
= 2\,\frac{\partial^{2}\tilde{f}(s,x)}{\partial x^{2}}\quad\Longrightarrow\quad
\left(\frac{\partial^{2}}{\partial x^{2}} - \frac{s}{2}\right)\tilde{f}(s,x) = -\,\frac{1}{2}\,e^{x}
$$
Then $\tilde{f}(s,x) = A\,e^{x}$ with $A - sA/2 = -1/2$, which gives $A = -1/\left[2(1 - s/2)\right] = 1/(s - 2)$ and leads to:
$$
\tilde{f}(s,x) = {e^{x} \over s - 2}
\quad\mbox{and}\quad
f(t,x) = e^{x}\int_{\gamma - i\infty}^{\gamma + i\infty}
{e^{st} \over s - 2}\,{\mathrm{d}s \over 2\pi i}\quad\mbox{with}\quad\gamma > 2
$$
$$
\color{#0000ff}{\large f(t,x) = e^{x\ +\ 2t}}
$$
Even simpler: write $f(t,x) \equiv e^{x}\varphi(t)$ and you get
$\dot{\varphi}(t) = 2\varphi(t)$ with $\varphi(0) = 1$. Then $\varphi(t) = e^{2t}$.
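A direct symbolic verification (Python with SymPy):
from sympy import symbols, exp, diff, simplify

t, x = symbols('t x')
f = exp(x + 2*t)
print(simplify(diff(f, t) - 2*diff(f, x, 2)))  # 0: the PDE holds
print(f.subs(t, 0))                            # exp(x): the initial condition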
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Why are compact sets called "compact" in topology? Given a topological space $X$ and a subset of it $S$, $S$ is compact iff for every open cover of $S$, there is a finite subcover of $S$.
Just curiosity:
I've searched the Internet for why compact sets are called compact, but found no good results. For someone with no knowledge of topology, facing compactness creates the mentality that a compact set is a compressed set!
Does anyone know or have any information on the question?
|
If you are curious about the history of compact sets (the definition of which dates back to Fréchet as mentioned in another response) and you can read French, then I suggest checking out the following historical article:
Pier, J. P. (1980). Historique de la notion de compacité. Historia mathematica, 7(4), 425-443. Retrieved from http://www.sciencedirect.com/science/article/pii/0315086080900063.
I came across this article when posting a response on MO here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
}
|
Proving that there are at most $n-1$ roots of $a_1x^{b_1}+a_2x^{b_2}+...+a_nx^{b_n}=0 $ on $(0,\infty)$
We know that:
$a_1,...,a_n\in \mathbb R$, with $a_i\neq0$ for all $i$, and
$b_1,...,b_n\in \mathbb R$ with $b_j\neq b_k$ for all $j\neq k$.
Prove that there are at most $n-1$ roots in $(0,\infty)$ of:
$$a_1x^{b_1}+a_2x^{b_2}+...+a_nx^{b_n}=0 $$
Using induction: for $n=1$ it's obvious that there are $0$ roots.
Suppose the statement is true for $n-1$ terms, and prove it for $n$:
from here on I'm not sure how to continue. I noticed that $x^{b_1}(a_1 + a_2x^{b_2-b_1}+\ldots + a_n x^{b_n - b_1})=0$, but at what point can this enter the induction? Also, how is the IVT applied here?
|
Hint:
So now just work with $\;a_1 +a_2x^{b_2-b_1}+\ldots+a_nx^{b_n-b_1}\;$; differentiate, and you get fewer than $\;n\;$ summands (the constant $a_1$ disappears). Now use what the other question mentions about the relation between the zeros of the derivative of a function and those of the function itself (Rolle's theorem).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Conditional probability with 3 events I'm struggling to understand how this answer from a past paper is correct.
Question:
In a lake there are 10 fish, 3 of which are tagged. 3 fish are caught randomly from the lake without replacement. What is the probability that the first two fish that are caught are tagged but not the third?
It also gives us that $P(A \cap B \cap C) = P(C \mid A \cap B)\,P(B \mid A)\,P(A)$.
I understand that we need to use the above formula, but the answers suggest that $P(C \mid A \cap B)$ is $1/7$, and I cannot deduce why.
Any help is greatly appreciated.
|
Define $A=\mbox{the first fish is tagged}$, $B = \mbox{the second fish is tagged}$ and $C = \mbox{the third fish isn't tagged}$. We want to calculate $P(A \cap B \cap C)$. Now, $P(A)=3/10$, because $3$ of the $10$ fish are tagged. Similarly, $P(B\mid A) = 2/9$, because there are now $2$ tagged fish out of $9$. Finally, $P(C \mid A \cap B) = 7/8$, because there are now $7$ non-tagged fish out of $8$. This gives the result $P(A \cap B \cap C)=(7/8)(2/9)(3/10)=42/720=7/120$.
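A quick simulation agreeing with the exact value $7/120 \approx 0.0583$ (Python sketch):
import random

trials, hits = 10**5, 0
pond = [1]*3 + [0]*7                  # 1 = tagged, 0 = untagged
for _ in range(trials):
    draw = random.sample(pond, 3)     # three catches without replacement
    hits += (draw[0], draw[1], draw[2]) == (1, 1, 0)
print(hits / trials)                  # ~ 0.058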
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How should I denote "undefined" in a functions definition While solving some limits, I thought this might be a nice strategy.
$$
\lim_{x \to -2} \frac{x^2-4}{x^2+3x+2} = \lim_{x \to -2} f(x) \\
f(x) = \frac{(x-2)(x+2)}{(x+2)(x+1)} =
\begin{cases}
\frac{x-2}{x+1},& x \neq -2 \\
\varnothing,& x = -2
\end{cases} \\
\lim_{x \to -2} f(x) = \frac{-2-2}{-2+1}=\frac{-4}{-1}=4
$$
First, you define a function equivalent to the one given, isolating any undefined points/ranges, and then find the limits of the defined parts.
My question is - how should I mathematically denote the value for undefined points/ranges in the equivalent function? Here, I wrote $\varnothing$ as in there is no result, the result is an empty set, but I'm not sure this is appropriate.
Edit 1
I do realise this comes naturally
$$f(x)=\frac{x-2}{x+1}\;,\;\;x\neq -2$$
But isn't there a loss of information as a result? From this definition alone, one cannot tell whether the function has a value at $-2$. That's just my intuition.
|
You don't have to. This is all you need to write.
$$\lim_{x \to -2} \frac{x^2 - 4}{x^2 + 3x + 2} = \lim_{x \to -2} \frac{(x-2)(x+2)}{(x+1)(x+2)} = \lim_{x \to -2} \frac{x-2}{x+1} = \frac{-4}{-1}=4.$$
The point is, the functions $\frac{x^2 - 4}{x^2 + 3x + 2}$ and $\frac{x-2}{x+1}$ are not the same function, as the first is undefined at $-2$ and the second is not. But the limits of the two functions are the same, so what I wrote is perfectly correct.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Two different problems with similar solutions Problem 1 :
Calculate the sum
$$S(n):=\sum_{s=0}^{\infty}s^nx^s$$
The solution is $S(n)=\frac{P(n)}{(1-x)^{n+1}}$, where the polynomials satisfy the recurrence
$$P(0)=1,\qquad P(1)=x,\qquad P(n+1) = x(1-x)P(n)'+(n+1)x\,P(n).$$
Problem 2 :
Calculate the probabilities that the sum of $n$ independent random variables $X_i\sim U[0,1]$ lies in the ranges $[0,1],[1,2],\ldots$
The coefficients of the polynomials are
1
1 1
1 4 1
1 11 11 1
1 26 66 26 1
The probabilities of problem 2 are
$$ 1 $$
$$ \frac{1}{2} \frac{1}{2}$$
$$ \frac{1}{6} \frac{4}{6} \frac{1}{6}$$
$$ \frac {1}{24} \frac{11}{24} \frac{11}{24} \frac{1}{24}$$
The pattern is apparently similar to that of problem 1.
Can it be proven that the same numbers also occur for greater values of $n$?
I tried the convolution ("Faltung") theorem for the sum of random variables, but I could not completely prove that the same values as in problem 1 occur.
|
This continues to be true. The coefficients of your polynomials are the Eulerian Numbers (and the polynomials the Eulerian polynomials).
Compare the formula in terms of binomial coefficients for the Eulerian numbers given in the above link with the integral of the pdf of the Irwin-Hall Distribution between consecutive integers (you'll need to simplify a bit to get the identity).
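A numerical cross-check (Python/NumPy sketch; Eulerian numbers from their standard recurrence, Irwin–Hall probabilities by Monte Carlo):
import numpy as np
from math import factorial

def eulerian_row(n):
    # A(n,k) = (k+1) A(n-1,k) + (n-k) A(n-1,k-1)
    row = [1]
    for m in range(2, n + 1):
        row = [(k + 1)*(row[k] if k < len(row) else 0)
               + (m - k)*(row[k - 1] if k >= 1 else 0) for k in range(m)]
    return row

n = 4
rng = np.random.default_rng(1)
sums = rng.random((200000, n)).sum(axis=1)
print(np.histogram(sums, bins=range(n + 1))[0] / 200000)  # ~ [.042 .458 .458 .042]
print(np.array(eulerian_row(n)) / factorial(n))           # [1, 11, 11, 1] / 24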
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Maximising with multiple constraints I have $$Z=f(x_1 ,x_2 ,x_3 ,... ,x_n)$$ function and $$\left[\begin{array}{r}c_1=g_1(x_1 ,x_2 ,x_3 ,... ,x_n) \\c_2=g_2(x_1 ,x_2 ,x_3 ,... ,x_n)\\c_3=g_3(x_1 ,x_2 ,x_3 ,... ,x_n) \\...\\c_m=g_m(x_1 ,x_2 ,x_3 ,... ,x_n)
\end{array}\right]$$
constraints.
How can I tell, using the (bordered) Hessian matrix, whether critical points are maxima or minima? I need this for solving numerical problems only, not for proofs.
In other words: how can I tell whether the critical points (found from the first-order conditions, i.e. setting the derivatives of the Lagrange function to zero) are maxima, minima, or inflection points via the Hessian matrix?
|
I'm going to use the notation
\begin{equation}
f(x) := f(x_1, x_2, \ldots, x_n)
\end{equation}
and
\begin{equation}
g(x) = \left[\begin{array}{c} g_1(x) \\ \vdots \\ g_m(x) \end{array}\right]
\end{equation}
along with
\begin{equation}
c = \left[\begin{array}{c} c_1 \\ \vdots \\ c_m\end{array}\right]
\end{equation}
to represent all of your constraints compactly as
\begin{equation}
g(x) = c.
\end{equation}
You are correct in forming the bordered Hessian matrix because this is a constrained problem. Note that the bordered Hessian differs from the Hessian used for unconstrained problems and takes the form
\begin{equation}
H = \left[\begin{array}{ccccc}
0 & \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} & \cdots & \frac{\partial g}{\partial x_n} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2}{\partial x_1 \partial x_n} \\
\frac{\partial g}{\partial x_2} & \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\
\frac{\partial g}{\partial x_3} & \frac{\partial^2 f}{\partial x_3 \partial x_1} & \frac{\partial^2 f}{\partial x_3 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_3 \partial x_n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{\partial g}{\partial x_n} & \frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{array}\right]
\end{equation}
where the $0$ in the upper-left represents an $m\times m$ sub-matrix of zeros and we've added $m$ columns on the left and $m$ rows on top.
To determine if a point is a minimum or a maximum, we look at $n - m$ of the bordered Hessian's principal minors. First we examine the minor made up of the first $2m+1$ rows and columns of $H$ and compute its determinant. Then we look at the minor made up of the first $2m + 2$ rows and columns and compute its determinant. We do the same for the first $2m+3$ rows and columns, and we continue doing this until we compute the determinant of the bordered Hessian itself.
A sufficient condition for a local minimum of $f$ is to have all of the determinants we computed above have the same sign as $(-1)^m$. A sufficient condition for a local maximum of $f$ is that the determinant of the smallest minor have the same sign as $(-1)^{m-1}$ and that the determinants of the principal minors (in the order you computed them) alternate in sign.
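To make the recipe concrete, here is a small numerical sketch (Python/NumPy; the toy problem is ours, not from the question): maximize $f(x,y)=xy$ subject to $x+y=1$, so $n=2$, $m=1$, with critical point $(1/2,1/2)$. The constraint is linear, so the second derivatives of $f$ and of the Lagrangian coincide.
import numpy as np

# bordered Hessian at (1/2, 1/2): border = gradient of g = x + y,
# interior block = Hessian of f = x*y
H = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

n, m = 2, 1
# inspect the n - m = 1 largest leading principal minors: sizes 2m+1, ..., n+m
minors = [np.linalg.det(H[:k, :k]) for k in range(2*m + 1, n + m + 1)]
print(minors)  # [2.0]: same sign as (-1)**(m-1) = +1, so a local maximum
And indeed $xy$ restricted to $x+y=1$ attains its maximum at $x=y=1/2$.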
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Every compact metric space is complete I need to prove that every compact metric space is complete. I think I need to use the following two facts:
*
*A set $K$ is compact if and only if every collection $\mathcal{F}$ of closed subsets with finite intersection property has $\bigcap\{F:F\in\mathcal{F}\}\neq\emptyset$.
*A metric space $(X,d)$ is complete if and only if for any sequence $\{F_n\}$ of non-empty closed sets with $F_1\supset F_2\supset\cdots$ and $\text{diam}~F_n\rightarrow0$, $\bigcap_{n=1}^{\infty}F_n$ contains a single point.
I do not know how to arrive at my result that every compact metric space is complete. Any help?
Thanks in advance.
|
Let $X$ be a compact metric space and let $\{p_n\}$ be a Cauchy sequence in $X$. Then define $E_N$ as $\{p_N, p_{N+1}, p_{N+2}, \ldots\}$. Let $\overline{E_N}$ be the closure of $E_N$. Since it is a closed subset of compact metric space, it is compact as well.
By definition of a Cauchy sequence, we have $\lim_{N\to\infty} \text{diam } E_N = \lim_{N\to\infty} \text{diam } \overline{E_N} = 0$. Let $E = \cap_{n=1}^\infty \overline{E_n}$. Because $E_N \supset E_{N+1}$ and $\overline{E_N} \supset \overline{E_{N+1}}$ for all $N$, the $\overline{E_N}$ form a nested family of nonempty compact sets, so by the finite intersection property $E$ is not empty. $E$ cannot have more than $1$ point because otherwise $\lim_{N\to\infty} \text{diam } \overline{E_N} > 0$, which is a contradiction. Therefore $E$ contains exactly one point $p \in \overline{E_N}$ for all $N$. Therefore $p \in X$.
For all $\epsilon > 0$, there exists an $N$ such that $\text{diam } \overline{E_n} < \epsilon$ for all $n > N$. Thus, $d(p,q) < \epsilon$ for all $q \in \overline{E_n}$. Since $E_n \subset \overline{E_n}$, we have $d(p,q) < \epsilon$ for all $q \in E_n$. Therefore, $\{p_n\}$ converges to $p \in X$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 10,
"answer_id": 2
}
|
Interesting but short math papers? Is it ok to start a list of interesting, but short mathematical papers, e.g. papers that are in the neighborhood of 1-3 pages? I like to read them here and there throughout the day to learn a new result.
For example, I recently read and liked On the Uniqueness of the Cyclic Group of Order n (Dieter Jungnickel, The American Mathematical Monthly Vol. 99, No. 6 (Jun. - Jul., 1992), pp. 545-547, jstor, doi: 10.2307/2324062).
|
On the Cohomology of Impossible Figures by Roger Penrose: Leonardo Vol. 25, No. 3/4, Visual Mathematics: Special Double Issue (1992), pp. 245-247.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46",
"answer_count": 17,
"answer_id": 6
}
|
Proof - There're infinitely many primes of the form 3k + 2 — origin of $3q_1..q_n + 2$ Origin — Elementary Number Theory — Jones — p28 — Exercise 2.6
To instigate a contradiction, postulate $q_1,q_2,\dots,q_n$ as all the primes $\neq 2$ ($=$ the only even prime) of the form $3k+2$. Consider $N=3q_1q_2\dots q_n+2.$ None of the $q_i$ divides $N$, and $3 \nmid N$.
$N = 3(\text{a product of odd numbers}) + 2 = \text{odd} + \text{even} = \text{odd}$. Because $N \ge 2 $ and,
$\color{brown}{♯}$, because $N$ is odd, $2 \nmid N$,
by $\color{brown}{♯}$ and the Fundamental Theorem of Arithmetic, $N$ is a product of one or more $\color{brown}{\text{odd}}$ primes.
The prime divisors of $N$ cannot all be of the shape $3k+1$. At least one of these primes is of the form $3k+2$. Why? $(3a + 1)(3b + 1) = 3(...) + 1$, thence any product of (not necessarily distinct) primes of the form $3k+1$ is itself of the form $3k+1$.
But $N$ is not of the form $3k+1$. So some prime $p$ of the form $3k+2$ divides $N$. Above, in the first paragraph, we proved $q_i \nmid N$ for all $i$. Therefore $p \notin \{q_1,\dots,q_n\}$, the list of all odd primes of the form $3k+2$: contradiction.
* The general proof just starts with primes. How could one foresee this proof's different start with odd primes?
* Where did this choice of $N$ hail from? It feels uncanny.
* I don't understand why none of the $q_i$ divides $N$.
|
Let's see if the following observations help:
* Different statement $\Rightarrow$ different proof :-)
* $N$ was chosen to be of the form $3k+2$ precisely to guarantee that it must be divisible by at least one prime of the form $3k+2$.
* Look at the form of $N$: it was chosen so that $(N-2)$ is divisible by all of the $q_i$, so if $N$ were divisible by $q_i$ too, $q_i$ would have to divide $2$. But that's impossible, since $q_i$ is an odd prime and thus strictly greater than $2$.
* If you multiply several natural numbers and at least one of them is even, the product will be even. Thus, if we're looking at an odd number and its factorization into primes, no even prime (i.e. no $2$) can occur in it.
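The construction is easy to play with numerically (Python with SymPy; the starting list of primes is arbitrary):
from math import prod
from sympy import factorint

qs = [5, 11, 17]                      # some odd primes congruent to 2 mod 3
N = 3*prod(qs) + 2                    # = 2807
print(factorint(N))                   # {7: 1, 401: 1}
print([p % 3 for p in factorint(N)])  # contains 2: a new prime of the form 3k+2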
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
The 6 generals problem 6 generals propose locking a safe containing secret stuff with a number of different locks. Each general will get a certain set of keys to these locks. How many locks are required and how many keys must each general have so that, unless 4 generals are present, the safe can't be opened. Generalize to $n$ generals and $m$ minimum number of generals required.
Here's where I've gotten to so far. Define a function $$f(n,m)=k$$ where $n$ and $m$ are as defined above and $k$ is the number of locks required. I've figured out $f(1,1)$, $f(2,1)$, $f(2,2)$ and so on until $f(4,4)$. I've noticed that if I arrange these values in a Pascal-like triangle, I can get the values in the lower row by summing the 2 numbers above it (I can't figure out how to display it using LaTeX). Doing this, I get the number of locks required as $24$ but I'm still working on the key distribution.
My question is whether I'm on the right track, and if so, how do I go about proving my solution. Thanks for your help.
[EDIT] To make it clear, the arrangement must be such that the safe can be opened when any $4$ generals are present and not if the number of generals is $3$ or less.
|
Sorry not to post this as a comment, but I don't have enough points.
Are they supposed to be able to open the safe if and only if at least four generals are present?
Edit: Here is the outline of a solution for the case of four out of six generals, but it generalizes easily.
Let the locks be numbered $1, \ldots, p$, and the generals $1, \ldots, 6$. For each $i = 1, \ldots, p$, let $K_i$ be the set of generals with key number $i$. Thus $K_i$ is a subset of $\{1, \ldots, 6\}$.
The condition that four generals should always be able to open the safe amounts to saying that:
(1): Each $K_i$ should have at least three elements. (6 - 4 + 1 = 3.)
The condition that three generals should never be able to open the safe amounts to saying that:
(2): For each set S = $\{ a, b, c \}$ of three generals, at least one of the sets $K_i$ should be disjoint from $S$.
If we want to minimize the number of sets $K_i$ necessary, we should make the $K_i$'s as small as possible, because of point 2. However, because of point 1, they should have at least three elements. Thus they should all have three elements.
By point 2, for every $3$-element set $S$ some $K_i$ must be disjoint from $S$; since $|K_i|=3$, this forces $K_i$ to be exactly the complement of $S$. Letting $S$ range over all $3$-element subsets, the $K_i$'s should in fact constitute all subsets of $\{1, \ldots, 6\}$ with three elements.
Since we obviously wish to avoid repetition of $K_i$'s, the minimal number of $K_i$'s is the number of subsets of $\{1, \ldots, 6\}$ with three elements, which is $\binom{6}{3} = 20$.
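The scheme is small enough to verify by brute force (Python):
from itertools import combinations

generals = range(6)
locks = list(combinations(generals, 3))   # K_i = the 3 generals holding key i

def can_open(group):
    return all(set(K) & set(group) for K in locks)

print(all(can_open(g) for g in combinations(generals, 4)))  # True
print(any(can_open(g) for g in combinations(generals, 3)))  # False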
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
By finding solutions as power series in $x$ solve $4xy''+2(1-x)y'-y=0 .$ By finding solutions as power series in $x$ solve
$$4xy''+2(1-x)y'-y=0 .$$
What I did is the following. First I let the solution $y$ be equal to
$$y =\sum_{i=0}^{\infty} b_ix^i =b_0 +b_1x+b_2x^2+\ldots$$
for undetermined $b_i$. Then I found the expression for $y'$ and $y''$,
$$y' =\sum_{i=0}^{\infty} ib_ix^{i-1} =b_1 + 2b_2x+3b_3x^2+\ldots.$$
and
$$y'' =\sum_{i=0}^{\infty} i(i-1)b_ix^{i-2} =2b_2+6b_3x+12b_4x^2\ldots.$$
Now I put these in the original DE to get
$$4\sum i(i-1)b_ix^{i-1}+2\sum ib_i(x^{i-1}-x^i) - \sum b_ix^i =0 $$
where all sums range from $0$ to infinity. Finally this becomes
$$\sum \left\{ (4i(i-1)b_i+2ib_i )x^{i-1}+(-2ib_i-b_i)x^i \right\}=0.$$
At this point I am fairly certain I have already made a mistake somewhere, probably in working out the power series of $y'$ or $y''$. Who can help point it out to me, I am pretty sure in the last sum there should be terms like $b_{i+1}$ or $b_{i+2}$. Thanks for any help or tips!
EDIT I have gotten further by realizing that $$y' =\sum_{i=0}^{\infty} ib_ix^{i-1} =\sum_{i=1}^{\infty} ib_ix^{i-1}=\sum_{i=0}^{\infty} (i+1)b_{i+1}x^{i}$$
and
$$y'' =\sum_{i=0}^{\infty} (i+2)(i+1)b_{i+2}x^{i}.$$
Putting these in the original DE I get
$$\sum \left\{ [4(i+2)(i+1)b_{i+2}-2(i+1)b_{i+1}]x^{i+1} + [2(i+1)b_{i+1}-b_i]x^i \right\}=0.$$
This must be true for all $x$ and thus we have
$$4(i+2)(i+1)b_{i+2}=2(i+1)b_{i+1}$$
and
$$2(i+1)b_{i+1} = b_i.$$
After simplifying, these two conditions are seen to be identical. Now I've set $b_0=1$ to obtain the solution
$$ y = 1 + \frac{x}{2}+ \frac{x^2}{8} +\frac{x^3}{48}+\ldots + \frac{x^i}{2^i(i!)}+\ldots.$$
Now I've arrived at the awkward position where, in working out the question here, I have actually managed to solve it. My last question is then: does anyone recognize this power series? Thanks!
|
You have made your mistake in the power series. In particular, you need to end up with a recurrence relation and solve that.
$$y'=\sum_{i=0}^\infty{ib_ix^{i-1}}=0+b_1+2b_2x+3b_3x^2+...=\sum_{i=1}^\infty{ib_ix^{i-1}}$$
Now you need to get your lower bound so that it starts at $0$. Rewriting the sum using $i=0$, we get that
$$\sum_{i=1}^\infty{ib_ix^{i-1}}=\sum_{i=0}^\infty{(i+1)b_{i+1}x^i}$$
Similarly,
$$y''=\sum_{i=0}^\infty{i(i-1)b_ix^{i-2}}=0+0+2(1)b_2+3(2)b_3x+...=\sum_{i=2}^\infty{i(i-1)b_ix^{i-2}}$$
Now rewrite that also with an index of 0.
$$\sum_{i=2}^\infty{i(i-1)b_ix^{i-2}}=\sum_{i=0}^\infty{(i+2)(i+1)b_{i+2}x^i}$$
Since all the indices are now $0$, you can rewrite the equation as
$$4x\sum_{i=0}^\infty{(i+2)(i+1)b_{i+2}x^i}+2(1-x)\sum_{i=0}^\infty{(i+1)b_{i+1}x^i}-\sum_{i=0}^\infty{b_ix^i}=0$$
You now have one more issue to resolve. You have to include the factors of $x$ in both the $y''$ and $y'$ sums and this gives you two higher powers of $x$. You'll again have to rewrite the sums so that each sum contains sums of $x^i$, not $x^{i+1}$.
$$\sum_{i=0}^\infty{4i(i+1)b_{i+1}x^i}+\sum_{i=0}^\infty{2(i+1)b_{i+1}x^i}-\sum_{i=0}^\infty{2ib_ix^i}-\sum_{i=0}^\infty{b_ix^i}$$
$$=\sum_{i=0}^\infty{[2(2i+1)(i+1)b_{i+1}-(2i+1)b_i}]x^i$$
So, we then see that $2(2i+1)(i+1)b_{i+1}=(2i+1)b_i$ and thus
$$2(i+1)b_{i+1}=b_i\Rightarrow b_{i+1}=\frac{b_i}{2(i+1)}$$
Setting $b_0=1$, and replacing the $b_i$'s in the series expansion of $y$, we get
$$y=\sum_{i=0}^\infty{\frac{x^i}{2^ii!}}=\sum_{i=0}^\infty{\frac{(\frac{x}{2})^i}{i!}}=e^{\frac{x}{2}}$$
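A one-line verification of the final answer (Python with SymPy):
from sympy import symbols, exp, diff, simplify

x = symbols('x')
y = exp(x/2)
print(simplify(4*x*diff(y, x, 2) + 2*(1 - x)*diff(y, x) - y))  # 0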
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/627944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Number of possibilities of placing people of different nationalities How many ways are there to seat 3 people of nationality A, 3 of nationality B and 3 of nationality C in a row, if no two people of the same nationality may sit next to each other (so placings such as these are prohibited: AABACBCBC, BCCAABCAB)?
I came to such result:
$\frac{9!}{(3!)^3}- {3 \choose 1} {3 \choose 2}\frac{8!}{(3!)^3} + {3 \choose 2} \frac{7!}{(3!)^3} - {3 \choose 3}{3 \choose 2} \frac{6!}{(3!)^3}$
Which is most likely bad - can anyone help me get the correct answer?
|
This may not be a good way to solve it, but it will give the answer.
We have
$$\frac{6!}{3!3!}=20$$
patterns to arrange three $A$s and three $B$s. Each inserted $C$ occupies one of the gaps between letters (or at the ends); every gap between two identical letters must receive a $C$, and no gap can receive more than one $C$ (two $C$s in the same gap would be adjacent).
1) In each of $BBBAAA,AAABBB$, there are four places where two identical letters are next to each other, so three $C$s cannot break them all: $0$ ways.
2) In each of $ABBBAA, AABBBA,BBAAAB,BAAABB$, there are exactly three such places, so all three $C$s are forced:
$$ABCBCBACA,\ ACABCBCBA,\ BCBACACAB,\ BACACABCB$$
So each case has $1$ way to arrange the $C$s.
3) In each of $BBABAA,BBAABA,BABBAA$, $BAABBA,ABBAAB,ABAABB,AABABB,AABBAB$, two $C$s are forced:
$$BCBABACA, BCBACABA,BABCBACA,BACABCBA,$$$$ABCBACAB,ABACABCB,ACABABCB,ACABCBAB$$
So each case has $\binom{5}{1}=5$ ways to place the last $C$.
4) In each of $BABABA,ABABAB$, we have $\binom{7}{3}=35$ ways to arrange three $C$s.
5) In each of $BABAAB,BAABAB,ABBABA,ABABBA$, one $C$ is forced:
$$BABACAB,BACABAB, ABCBABA, ABABCBA.$$
So each case has $\binom{6}{2}=15$ ways to arrange the remaining two $C$s.
Thus, the number we want is
$$2\times 0+4\times 1+8\times 5+2\times 35+4\times 15=174.$$
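The count is small enough to confirm by brute force (Python):
from itertools import permutations

words = set(permutations("AAABBBCCC"))
ok = [w for w in words if all(u != v for u, v in zip(w, w[1:]))]
print(len(ok))  # 174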
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
What is this geometric Probability In a circle of radius $R$ two points are chosen at random(the points can be anywhere, either within the circle or on the boundary). For a fixed number $c$, lying between $0$ and $R$, what is the probability that the distance between the two points will not exceed $c$?
|
Unfortunately I don't have enough time to write down the whole thing. Basically you have to distinguish two cases:
If the circle of radius $c$ around the first point lies completely inside the circle of radius $R$, then the conditional probability is $$\frac{c^2}{R^2},$$ and this case happens with probability $$\frac{(R-c)^2}{R^2}$$ (the first point must lie within distance $R-c$ of the center).
Therefore the total probability will be $$\frac{c^2}{R^2}\cdot\frac{(R-c)^2}{R^2}+K\cdot\left(1-\frac{(R-c)^2}{R^2}\right)$$
with $K$ being the conditional probability that the distance between the two points does not exceed $c$ in the case where the circle of radius $c$ around the first point intersects the outer circle, and $\left(1-\frac{(R-c)^2}{R^2}\right)$ being the probability of that case.
In order to calculate K you need the formula for circle-circle intersections:
Given two circles with radii $R$ and $r$ whose centers are a distance $d$ apart, the formula for the intersection area $A$ is $$A(R,r,d)=r^2\cos^{-1}\left(\frac{d^2+r^2-R^2}{2dr}\right)+R^2\cos^{-1}\left(\frac{d^2+R^2-r^2}{2dR}\right)-\frac{1}{2}\sqrt{(-d+r+R)(d+r-R)(d-r+R)(d+r+R)}$$
Please make sure I don't have any typos, the original formula can be found here:
wolfram
The $d$ in the formula is the distance between the two circle's centers.
We will now integrate over this $d$.
$$K\cdot\left(1-\frac{(R-c)^2}{R^2}\right) = \int_{d=R-c}^{d=R}\frac{A(R,c,d)}{\pi R^2}\cdot\frac{2d}{R^2}\,\mathrm{d}d$$
Here $\frac{2d}{R^2}\,\mathrm{d}d$ is the probability that the first point lies at distance between $d$ and $d+\mathrm{d}d$ from the center (this density grows with $d$), and dividing the overlap area $A(R,c,d)$ by $\pi R^2$ turns it into a probability for the second point. If you plug this into Mathematica or something similar you should have your result!
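Since the bookkeeping above is easy to get wrong, here is a short Monte Carlo sketch in Python (my own addition, useful for validating whatever closed form the integral produces; the values of R, c and the trial count are arbitrary choices):

```python
import math
import random

def prob_within(R, c, trials=200_000):
    """Monte Carlo estimate of P(distance <= c) for two uniform points in a disk of radius R."""
    hits = 0
    for _ in range(trials):
        pts = []
        while len(pts) < 2:
            # Rejection sampling gives a uniform point in the disk.
            x, y = random.uniform(-R, R), random.uniform(-R, R)
            if x * x + y * y <= R * R:
                pts.append((x, y))
        (x1, y1), (x2, y2) = pts
        if math.hypot(x1 - x2, y1 - y2) <= c:
            hits += 1
    return hits / trials

print(prob_within(1.0, 0.5))  # compare with the formula above
```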
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Solving an irrational equation Solve for $x$ in:
$$\frac{\sqrt{3+x}+\sqrt{3-x}}{\sqrt{3+x}-\sqrt{3-x}}=\sqrt{5}$$
I used the property of proportions ($a=\sqrt{3+x}$, $b=\sqrt{3-x})$:
$$\frac{(a+b)+(a-b)}{(a+b)-(a-b)}=\frac{2a}{2b}=\frac{a}{b}$$
I'm not sure if that's correct.
Or maybe the notations $a^3=3+x$, $b^3=3-x$ ? I don't know how to continue. Thank you.
|
Here is another simple way, exploiting the innate symmetry.
Let $\ \bar c = \sqrt{3\!+\!x}+\sqrt{3\!-\!x},\,\ c = \sqrt{3\!+\!x}-\sqrt{3\!-\!x}.\,$ Then $\,\color{#0a0}{\bar c c} = 3\!+\!x-(3\!-\!x) = \color{#0a0}{2x},\ $ so
$\,\displaystyle\sqrt{5} = \frac{\bar c}c\, \Rightarrow \color{#c00}{\frac{6}{\sqrt{5}}} = {\frac{1}{\sqrt{5}}\!+\!\sqrt{5}} \,=\, \frac{c}{\bar c}+\frac{\bar c}c \,=\, \frac{(c+\bar c)^2}{\color{#0a0}{c\bar c}} - 2 \,=\,\frac{4(3\!+\!x)}{\color{#0a0}{2x}}-2 \,=\, \color{#c00}{\frac{6}x}\ $ so $\ \color{#c00}x = \ldots$
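As a numerical check of the value that $\color{#c00}{6/\sqrt{5}=6/x}$ gives, here is a two-line Python sketch (not part of the argument):

```python
import math

x = math.sqrt(5)  # the value the last equation yields
lhs = (math.sqrt(3 + x) + math.sqrt(3 - x)) / (math.sqrt(3 + x) - math.sqrt(3 - x))
print(lhs, math.sqrt(5))  # both ~2.2360679...
```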
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Is there anything "special" about elementary functions? I just found an article on Liouville's integrability criterion, which gave me a thought.
What makes functions like $\mathrm{Si}(x)$, $\mathrm{Ei}(x)$, $\mathrm{erfc}(x)$, etc. inherently different from $\sin{x}$, $\log{x}$, etc. ?
Related question: What is the exact definition for the elementary field, and what is its significance to this question?
|
Yes. Beginning from constants and the identity function, the different elementary functions are obtained by applying closure with respect to different elementary operations. Close under sums and products and we get polynomials. Close under solving linear differential equations of order one with constant coefficients and we get the trigonometric functions and exponentials. Close under taking inverse functions and we get the logarithm and the arc-trigonometric functions. We get those you mention after further closing under integration.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Prove uniform continuity (probably by Lipschitz continuity) Prove uniform continuity on $(0,\infty)$ for:
$$f(x) = x + \frac{\sin (x)}{x}$$
Derivative is:
$$f'(x) = \frac{x\cos (x) - \sin (x) + x^2}{x^2}$$
so, taking the limit at $\infty$ I got the value of $1$.
Looking at the graph I see infinite numbers of maximum points converges to $1$ of course.
Wolfram-alpha is telling me there is no global maximum (this is kinda weird, isn't it?).
Anyway, what should I do next? by looking at the graph it is sure can be bounded by a line. I just don't know how to describe it mathematically.
|
$f'(x) = \frac{x\cos (x) - \sin (x) + x^2}{x^2}$ can be extended to a continuous function on $\mathbb{R}$ (check that $\lim _{x \rightarrow 0}f'(x)=1$). Thus it is bounded on the interval $[-1,1]$.
To show that $f'$ is bounded on $\mathbb{R}$ it remains to show that it is bounded on $[1,\infty)$ and $(-\infty,-1]$.
Let us first consider $[1,\infty)$: for $x\geq1$ we have $|x\cos(x)|\le x$ and $|\sin(x)|\le 1$, so that
$$\left|\frac{x\cos (x) - \sin (x) + x^2}{x^2}\right|\leq\frac{x^2+x+1}{x^2}\leq\frac{3x^2}{x^2}=3.$$
It is equally easy to see that $f'$ is bounded on $(-\infty,-1]$; I leave this case for you to work out. Since $f'$ is bounded on $(0,\infty)$, the Mean Value Theorem shows that $f$ is Lipschitz there, hence uniformly continuous.
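For what it's worth, a quick numerical sanity check of the bound on $[1,\infty)$ (a Python sketch using only the standard library):

```python
import math

def fprime(x):
    return (x * math.cos(x) - math.sin(x) + x * x) / (x * x)

# Sample |f'| on [1, 100]; every value stays inside the bound 3.
samples = [abs(fprime(1 + 0.01 * k)) for k in range(9901)]
print(max(samples))
```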
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Consider $R[x]$ and let $S$ be the subring generated by $rx$, where $r \in R$ is some non-invertible element. Then $x$ is not integral over $S$
Consider $R[x]$ and let $S$ be the subring generated by $rx$, where $r \in R$ is some non-invertible element. Then I want to show that $x$ is not integral over $S$
I'm not seeing why this is the case.
Note : Here R is a commutative ring.
|
Let $\bar{R} = R / rR$. Then $R[x] / rR[x] \cong \bar{R}[X]$ and the image $\bar{S}$ of $S$ in $\bar{R}[X]$ is contained in $\bar{R}$ by assumption.
Suppose $x$ is integral over $S$. Then $X$ is integral over $\bar{S}$. Hence $X$ is integral over $\bar{R}$. The indeterminate $X$ is integral over $\bar{R}$ inside the polynomial ring $\bar{R}[X]$ only when $\bar{R}$ is the zero ring. So $rR = R$ and $r$ is a unit in $R$, a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Proof $\lim\limits_{n \rightarrow \infty}n(a^{\frac{1}{n}}-1)=\log a$ I want to show that for all $a > 0$
$$\lim_{n \rightarrow \infty}n(a^{\frac{1}{n}}-1)=\log a$$
So far I've got $\lim\limits_{n \rightarrow \infty}ne^{(\frac{1}{n}\log a)}-n$, but when i go on to rearrange this, i come after a few steps back to the beginning...
We have no L'Hôpital and no differential and integral calculus so far.
|
This is essentially the inverse of
$$
e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n
$$
The sequence of functions $f_n(x)=\left(1+\frac xn\right)^n$ converges equicontinuously; simply note that $f_n'(x)=\left(1+\frac xn\right)^{n-1}\sim e^x$. Thus, we get
$$
\lim_{n\to\infty}n\left(e^{x/n}-1\right)=x
$$
which is the same as
$$
\lim_{n\to\infty}n\left(x^{1/n}-1\right)=\log(x)
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Solve $7x^3+2=y^3$ over integers I need to solve the following
solve $7 x^3 + 2 = y^3$ over integers.
How can I do that?
|
To solve this kind of equations, we have several 'tools' such as
using mod, using inequalities, using factorization...
In your question, using mod will help you.
Since we have
$$y^3-2=7x^3,$$
the following has to be satisfied :
$$y^3\equiv 2\ \ \ (\text{mod $7$}).$$
However, in mod $7$,
$$0^3\equiv 0,$$
$$1^3\equiv 1,$$
$$2^3\equiv 1,$$
$$3^3\equiv 6,$$
$$4^3\equiv 1,$$
$$5^3\equiv 6,$$
$$6^3\equiv 6.$$
So, there is no integer $y$ such that $y^3\equiv 2\ \ \ (\text{mod $7$}).$
Hence, we know that there is no solution.
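A one-line Python check of the table of cubes mod $7$:

```python
# The cubes mod 7 only take the values {0, 1, 6}; in particular 2 never occurs.
print(sorted({y**3 % 7 for y in range(7)}))  # [0, 1, 6]
```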
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
every element of $V_{\omega}$ is definable My attempt by $\in$-induction. I am trying find formula that will work:
$N=(V_{\omega},\in)\models rank(\varnothing) =0<\omega$
Assume, given $x\in V_\omega$, that all $y\in x$ are definable too: $N\models rank(y)<\omega$. Then since $x\in V_\omega$, $|x|<\omega\Rightarrow x$ is finite $\Rightarrow rank(x)=rank(y_{1})+...+rank(y_{n})<\omega$. Is the last equality valid, i.e. $rank(x)=rank(y_{1})+...+rank(y_{n})$?
thanks
|
Let $N$ be an arbitrary structure of an arbitrary language. Recall that $n\in N$ is called definable (without parameters) if there exists a formula $\varphi(x)$ such that $N\models\varphi(u)\iff u=n$.
We want to show that in $V_\omega$ in the language including only $\in$, every element is definable. We do this by $\in$-induction:
Suppose that $x$ is such that for all $y\in x$, $y$ is definable. Then for every such $y$ there is some formula defining it, $\varphi_y$. We can therefore define $x$ to be the unique set whose elements are defined by one of these $\varphi_y$'s. That is: $$\varphi_x(u):=\forall v(v\in u\leftrightarrow\bigvee_{y\in x}\varphi_y(v))$$
The disjunction occur in the meta-theory, where we know that $x$ is finite, and what are its elements. Therefore there is no circularity arguments here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Proving $\displaystyle\lim_{n\to\infty}a_n=\alpha$ $a_1=\frac\pi4, a_n = \cos(a_{n-1})$
Let $a_1=\frac\pi4, a_n = \cos(a_{n-1})$
Prove $\displaystyle\lim_{n\to\infty}a_n=\alpha$.
Where $\alpha$ is the solution for $\cos x=x$.
Hint: check that $(a_n)$ is a cauchy sequence and use Lagrange's theorem.
Well I tried to show that it's a cauchy sequence: $|a_m-a_n|<\epsilon , \ \forall m,n$ but I just don't see how it's done with a trig recursion sequence...
|
Note that
$$
\cos(x):\left[\frac1{\sqrt2},\frac\pi4\right]\to\left[\frac1{\sqrt2},\frac\pi4\right]
$$
and on $\left[\frac1{\sqrt2},\frac\pi4\right]$,
$$
\left|\frac{\mathrm{d}}{\mathrm{d}x}\cos(x)\right|=|\sin(x)|\le\frac1{\sqrt2}\lt1
$$
Thus, the Mean Value Theorem ensures that $\cos(x)$ is a contraction mapping on $\left[\frac1{\sqrt2},\cos\left(\frac1{\sqrt2}\right)\right]$. The Contraction Mapping Theorem says that $\cos(x)$ has a unique fixed point in $\left[\frac1{\sqrt2},\cos\left(\frac1{\sqrt2}\right)\right]$. Furthermore, for $n\ge3$,
$$
\left|a_n-a_{n-1}\right|\le\left(\frac1{\sqrt2}\right)^{n-3}\left|a_3-a_2\right|
$$
Thus, $a_n$ is a Cauchy sequence. The point to which it converges must be the unique fixed point where $\cos(x)=x$.
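To see the contraction in action, one can iterate the map numerically (Python sketch; the limit is the so-called Dottie number, about $0.739085$):

```python
import math

a = math.pi / 4
for _ in range(100):  # contraction factor < 1/sqrt(2) per step
    a = math.cos(a)
print(a, math.cos(a))  # the two agree: a is (numerically) the fixed point
```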
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/628885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Find all homomorphisms from a quotient polynomial ring $\mathbb{Z}[X] /(15X^2+10X-2)$ to $\mathbb{Z}_7$ I'm completely lost, what my problem is I don't get the gist of a quotient polynomial ring nor ANY homomorphisms between it and some $\mathbb{Z}_n$, much less ALL of them.
I know there is something to be done with an ideal, but I really have no clue how to do it. I would be grateful for a full solution or at least anything that may help me understand these structures.
EDIT:
Ok, so this particular polynomial was inreducible in $\mathbb{Z}_7$, but I still have no clue how to deal with other cases, where it is reducible, for instance $X^2+3X+3$. I really would like to know how to solve problems of this kind.
|
Hint: a ring homomorphism $\Bbb Z[X]/(\cdots)\to\Bbb Z/7\Bbb Z$ will be determined by where $X$ is sent. It can't be sent just anywhere; it still has to satisfy $15X^2+10X-2=0$ (does this have roots in $\Bbb Z/7\Bbb Z$?).
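Following the hint, a quick computational check (Python sketch) that $15X^2+10X-2$, which reduces to $X^2+3X+5$ mod $7$, has no roots in $\Bbb Z/7\Bbb Z$:

```python
roots = [x for x in range(7) if (15 * x * x + 10 * x - 2) % 7 == 0]
print(roots)  # [] -- no roots, hence no such ring homomorphism
```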
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Proving that expression is equivalent to the definition of derivative Let $f$ be differentiable at $x=a$.
Prove that if $x_n \to a^+$ and $y_n \to a^-$ then:
$$\lim_{n\to \infty} \frac{f(x_n)-f(y_n)}{x_n-y_n}=f'(a).$$
Every option that I think about seems to my very trivial, so I believe that I am doing something wrong.
Both numerator and denominator approach zero as $n\to\infty$, as in the formal definition of the derivative, but that alone does not guarantee that the limit exists or equals $f'(a)$ (“$\frac{0}{0}$”).
Any direction?
|
First remark that the result is easy if you assume that $f$ is derivable with continuous derivative in a neighborhood of $a$. Indeed, by the mean value theorem, you can write $\frac{f(x_n)-f(y_n)}{x_n - y_n}$ as $f'(c_n)$ for some $c_n$ between $y_n$ and $x_n$ and the result follows by continuity of the derivative.
For the general case, I would do the following: write $\frac{f(x_n)-f(y_n)}{x_n - y_n}$ as $\frac{f(x_n) - f(a)}{x_n - a} \frac{x_n - a}{x_n - y_n} + \frac{f(a)-f(y_n)}{a-y_n} \frac{a-y_n}{x_n-y_n}$. Then we would like to compute the limits of these quantities and find that this gives $f'(a)$ by addition and multiplication.
The problem is the following: the quantity $\frac{x_n-a}{x_n - y_n}$ may well have no limit at all. For instance, take $a=0$, $x_n = 1/n$, and $y_n = -1/n$ for odd $n$ and $-1/n^2$ for even $n$. Then this quantity is $1/2$ for odd $n$ and tends to $1$ along the even $n$, hence does not converge.
Here is how I resolve this issue. Write $u_n$ for $\frac{f(x_n)-f(y_n)}{x_n - y_n}$. Then, $u_n$ is bounded since in the above writing, the quantites $\frac{x_n-a}{x_n - y_n}$ and $\frac{a-y_n}{x_n-y_n}$ are between $0$ and $1$. It is a standard fact that a bounded sequence converges if and only if it has a unique adherence value.
By the Bolzano-Weierstrass theorem, let $u_{\phi(n)}$ be an extracted sequence that converges to some $l_u$. By further extracting, we can also assume that the sequences $\frac{x_n-a}{x_n - y_n}$ and $\frac{a-y_n}{x_n-y_n}$ also converge, to $l_x$ and $l_y$. Remark that necessarily, $l_x + l_y = 1$ since $\frac{x_n-a}{x_n - y_n} + \frac{a-y_n}{x_n-y_n} = 1$. By the above writing, this shows that $l_u = f'(a)l_x + f'(a)l_y = f'(a)$. Hence, the only adherence value of $u_n$ is $f'(a)$, showing that $f'(a)$ is the limit of $u_n$.
Maybe there is a shortcut to the extracting argument...
Edit: Indeed, there is a shortcut:
Write $u_n$ as $\frac{f(a)-f(y_n)}{a-y_n} + \frac{x_n - a}{x_n - y_n}(\frac{f(x_n) - f(a)}{x_n - a} - \frac{f(a)-f(y_n)}{a-y_n})$. The first term tends to $f'(a)$, both terms in the parenthesis have the same limit and the coefficient before the parenthesis is bounded.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
1-st countability of a uncountable topological product Let $X = \displaystyle\prod_{\lambda \in \Lambda} X_{\lambda}$, where each $X_\lambda$ is $T_2$ and has at least two points. Prove that if $\Lambda$ is uncountable then $X$ is not 1st-countable.
I dont even know how to start proving. Maybe in the opposite direction, $X$ 1st-countable $\implies$ $\Lambda$ countable...
I know I have to use the $T_2$ assumption, but I dont know wheter its as simple as noticing that X is also $T_2$, or do I have to do something with the projections to ceratin $\lambda$, and taking into account that the projections are continious.
Thanks
|
Hint: Membership in a countable family of open sets depends only on countably many coordinates, as does being a superset of a member of such a family. Being a neighbourhood of a point in the product depends on $\lvert \Lambda\rvert$ many coordinates (due to each $X_\lambda$ being nontrivial).
In fact, if you choose your point carefully, all you need about $X_\lambda$ is for each to be nontrivial (as in nontrivial topology), so that there's an $x_\lambda\in X_\lambda$ with a neighbourhood distinct from $X_\lambda$ itself.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
An explicit imbedding of $(R\mathbf{-Mod})^{op}$ into $S\mathbf{-Mod}$ Given a ring $R$ consider $(R\mathbf{-Mod})^{op}$, the opposite category of the category of left $R$-modules. Since it is the dual to an abelian category and the axioms of abelian categories are self-duals, it is an abelian category itself and thus, by the Freyd-Mitchell Imbedding Theorem, has to be a full subcategory of $S$-Mod, for some ring $S$.
Is it possible to describe $S$ and the embedding in a particular nice form? At least for some special rings, I would like to see a construction of $S$ and the embedding which is as concrete as possible.
|
I think that if Vopěnka's principle is true, then $\mathsf{Mod}(R)^{op}$ can't fully embed in $\mathsf{Mod}(S)$ for any non-zero $R$ and $S$ (all the references that follow are to "Locally Presentable and Accessible Categories" by Adámek and Rosický): If it did, then $\mathsf{Mod}(R)^{op}$ would be bounded (Theorem 6.6), and since it is also complete it would be locally presentable (Theorem 6.14). But if a category and its opposite are both locally presentable, then the category is equivalent to a complete lattice (Theorem 1.64).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 1
}
|
Proving a determinant inequality Let $A$ be a square matrix in $M_n(\mathbb R)$. Prove that:
$$det(A^2+I_n) \ge 0$$
I wrote $A^2+I_n=A^2 I_n+I_n=I_n(A^2+1)$:
$$det(I_n)\cdot det(A^2+1)=det(A^2+1)$$
How can I prove that is $\ge 0$ ? Thank you.
|
The problem in the OP approach is that we don't know what $A^2+1$ is.
It's better to use complex numbers and the relatioships:
*
*$A^2+I_n=(A-iI_n)(A+iI_n)$, and
*$\det(\overline B)=\overline{\det B}$ for any complex square matrix $B$.

Since $A$ is real, $\overline{A+iI_n}=A-iI_n$, hence $$\det(A^2+I_n)=\det(A+iI_n)\det(A-iI_n)=\det(A+iI_n)\,\overline{\det(A+iI_n)}=\left|\det(A+iI_n)\right|^2\ge 0.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why does $\lim_{x\to 0^+} x^{\sin x}=1$? I tried writing $y=x^{\sin x}$, so $\ln y=\sin x\ln x$. I tried to rewrite the function as $\ln(x)/\csc x$ and apply l'Hopital to the last function, but it's a mess.
Is there a by hand way to do it?
|
Write $x^{\sin{x}} = \exp{\frac{\log{x}}{\frac{1}{\sin{x}}}}$ and apply l'Hôpital's rule:
$$\lim_{x\rightarrow 0^+}x^{\sin{x}} = \exp{\left(\lim_{x\rightarrow 0^+}\frac{\log{x}}{\frac{1}{\sin{x}}}\right)}$$
Now (by two applications of l'Hôpital) $$\lim_{x\rightarrow 0^+}\frac{\log{x}}{\frac{1}{\sin{x}}} = \lim_{x\rightarrow 0^+}\frac{1/x}{-\cos{x}/\sin^2{x}} = \lim_{x\rightarrow 0^+}\frac{-\sin^2{x}}{x\cos{x}} = \lim_{x\rightarrow 0^+}\frac{-\sin{2x}}{\cos{x}-x\sin{x}} = 0$$
Thus
$$\lim_{x\rightarrow 0^+}x^{\sin{x}} = \exp{\left(\lim_{x\rightarrow 0^+}\frac{-\sin{2x}}{\cos{x}-x\sin{x}}\right)} = \exp(0) = 1$$
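A quick numerical illustration (Python sketch):

```python
import math

for x in (1e-2, 1e-4, 1e-8):
    print(x ** math.sin(x))  # tends to 1 as x -> 0+
```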
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 4
}
|
Find $\displaystyle \int_0^1x^a \ln(x)^m \mathrm{d}x$ Find $$\int_0^1x^a \ln(x)^m\ \mathrm{d}x$$ where $a>-1$ and $m$ is a nonnegative integer. I did a substitution and changed this into a multiple of the gamma function. I get $(-1)^m m! e^a$ as the solution but Mathematica does not agree with me. Can someone confirm my answer or provide a solution?
|
\begin{align}
&\color{#0000ff}{\int_{0}^{1}x^{a}\ln^{m}(x)\,\mathrm{d}x}
= \lim_{\mu \to 0^{+}}\frac{\mathrm{d}^{m}}{\mathrm{d}\mu^{m}}\int_{0}^{1}x^{a}x^{\mu}\,\mathrm{d}x
=\lim_{\mu \to 0^{+}}\frac{\mathrm{d}^{m}}{\mathrm{d}\mu^{m}}\left[\frac{1}{\mu + a + 1}\right]
\\[3mm]&=\lim_{\mu \to 0^{+}}\left[\frac{(-1)^{m}\,m!}{(\mu + a + 1)^{m + 1}}\right]
=\color{#0000ff}{\frac{(-1)^{m}\,m!}{(a + 1)^{m + 1}}}
\end{align}
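As a numerical cross-check of the closed form against Mathematica, here is a Python sketch (it assumes SciPy is available for the quadrature):

```python
from math import factorial, log
from scipy.integrate import quad  # assumed available

for a, m in [(0.5, 1), (2.0, 3)]:
    numeric, _ = quad(lambda x: x**a * log(x)**m, 0, 1)
    closed = (-1)**m * factorial(m) / (a + 1)**(m + 1)
    print(numeric, closed)  # the two columns agree
```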
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Simple proof Euler–Mascheroni $\gamma$ constant I'm searching for a really simple and beautiful proof that the sequence $(u_n)_{n \in \mathbb{N}} = \sum\nolimits_{k=1}^n \frac{1}{k} - \log(n)$ converges.
At first I want to know if my answer is OK.
My try:
$\lim\limits_{n\to\infty} \left(\sum\limits_{k=1}^n \frac{1}{k} - \log (n)\right) = \lim\limits_{n\to\infty} \left(\sum\limits_{k=1}^n \frac{1}{k} + \sum\limits_{k=1}^{n-1} [\log(k)-\log(k+1)]\right)$
$ = \lim\limits_{n\to\infty} \left(\frac{1}{n} + \sum\limits_{k=1}^{n-1} \left[\log(\frac{k}{k+1})+\frac{1}{k}\right]\right) = \sum\limits_{k=1}^{\infty} \left[\frac{1}{k}-\log(\frac{k+1}{k})\right]$
Now we prove that the last sum converges by the comparison test:
$\frac{1}{k}-\log(\frac{k+1}{k}) < \frac{1}{k^2} \Leftrightarrow k<k^2\log(\frac{k+1}{k})+1$,
which holds for $k\geqslant 1$: since $\log(1+\frac1k)\geqslant\frac1k-\frac1{2k^2}$, we get $k^2\log(\frac{k+1}{k})+1\geqslant k+\frac12>k$.
As $ \sum\limits_{k=1}^{\infty} \frac{1}{k^2}$ converges $ \Rightarrow \sum\limits_{k=1}^{\infty} \left[\frac{1}{k}-\log(\frac{k+1}{k})\right]$ converges and we name this limit $\gamma$
q.e.d
|
Upper Bound
Note that
$$
\begin{align}
\frac1n-\log\left(\frac{n+1}n\right)
&=\int_0^{1/n}\frac{t\,\mathrm{d}t}{1+t}\\
&\le\int_0^{1/n}t\,\mathrm{d}t\\[3pt]
&=\frac1{2n^2}
\end{align}
$$
Therefore,
$$
\begin{align}
\gamma
&=\sum_{n=1}^\infty\left(\frac1n-\log\left(\frac{n+1}n\right)\right)\\
&\le\sum_{n=1}^\infty\frac1{2n^2}\\
&\le\sum_{n=1}^\infty\frac1{2n^2-\frac12}\\
&=\sum_{n=1}^\infty\frac12\left(\frac1{n-\frac12}-\frac1{n+\frac12}\right)\\[9pt]
&=1
\end{align}
$$
Lower Bound
Note that
$$
\begin{align}
\frac1n-\log\left(\frac{n+1}n\right)
&=\int_0^{1/n}\frac{t\,\mathrm{d}t}{1+t}\\
&\ge\int_0^{1/n}\frac{t}{1+\frac1n}\,\mathrm{d}t\\[3pt]
&=\frac1{2n(n+1)}
\end{align}
$$
Therefore,
$$
\begin{align}
\gamma
&=\sum_{n=1}^\infty\left(\frac1n-\log\left(\frac{n+1}n\right)\right)\\
&\ge\sum_{n=1}^\infty\frac1{2n(n+1)}\\[3pt]
&=\sum_{n=1}^\infty\frac12\left(\frac1n-\frac1{n+1}\right)\\[6pt]
&=\frac12
\end{align}
$$
A Better Upper Bound
Using Jensen's Inequality on the concave $\frac{t}{1+t}$, we get
$$
\begin{align}
\frac1n-\log\left(\frac{n+1}n\right)
&=\frac1n\left(n\int_0^{1/n}\frac{t\,\mathrm{d}t}{1+t}\right)\\
&\le\frac1n\frac{n\int_0^{1/n}t\,\mathrm{d}t}{1+n\int_0^{1/n}t\,\mathrm{d}t}\\
&=\frac1{n(2n+1)}
\end{align}
$$
Therefore, since the sum of the Alternating Harmonic Series is $\log(2)$,
$$
\begin{align}
\gamma
&=\sum_{n=1}^\infty\left(\frac1n-\log\left(\frac{n+1}n\right)\right)\\
&\le\sum_{n=1}^\infty\frac1{n(2n+1)}\\
&=\sum_{n=1}^\infty2\left(\frac1{2n}-\frac1{2n+1}\right)\\[6pt]
&=2(1-\log(2))
\end{align}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 4,
"answer_id": 0
}
|
How would you explain to a 9th grader the negative exponent rule? Let us assume that the students haven't been exposed to these two rules: $a^{x+y} = a^{x}a^{y}$ and $\frac{a^x}{a^y} = a^{x-y}$. They have just been introduced to the generalization: $a^{-x} = \frac{1}{a^x}$ from the pattern method: $2^2 = 4, 2^1 = 2, 2^0 = 1, 2^{-1} = \frac{1}{2}$ etc. However, some students confuse $2^{-3}$ to be $(-2)(-2)(-2)$ since they are familiar with $2^{3} = 2 \cdot 2 \cdot 2$. This is a low-income urban school and most kids in this algebra class struggle with math dealing with exponents, fractions and decimals. What would be the best approach to reach all 32 students?
|
To a 9th grader, I would say "whenever you see a minus sign in the exponent, you always flip the number."
$$
2^{-3} = \frac{1}{2^{3}} = \frac{1}{8}
$$
I would simply do 10-20 examples on the board, and hammer the point until they start to get it.
You may have to review fractions with them here too.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61",
"answer_count": 24,
"answer_id": 0
}
|
Algebra 2-Factoring sum of cubes by grouping Factor the sum of cubes: $81x^3+192$
After finding the prime factorization of both numbers I found that $81$ is $3^4$
and $192$ is $2^6 \cdot 3$.
The problem is I tried grouping and found $3$ is the common factor, so it goes outside the parentheses. The formula for the sum of cubes is $(a+b) (a^2-ab+b^2)=a^3+b^3$
So I tried writing it this way and got $(3x+2)(9x^2-6x+4)$, and then I realized I was wrong because $b$ from the formula (sum of cubes) was wrong. So how do I find $b$? And why did I get $2$ for $b$ in the formula?
|
$$81x^3+192 = 3 (27 x^3 + 64) = 3 ((3x)^3+4^3) \\= 3 (3x + 4) ((3x)^2 - 3x\cdot 4 + 4^2) = 3 (3x+4)(9x^2-12x+16)$$
Since the discriminant $12^2-4\cdot9\cdot16=-432$ is negative, the quadratic factor has no real roots, and no further factorization over the reals is possible.
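One can confirm the factorization with a computer algebra system, e.g. this Python sketch (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(81 * x**3 + 192))  # 3*(3*x + 4)*(9*x**2 - 12*x + 16)
```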
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is a supremum? I'm reading here about sequence of functions in Calculus II book,
and there's a theorem that says:
A sequence of functions $\{f_n(x)\}_0^\infty$ converges uniformly to $f(x)$ in domain $D$ $\iff$ $\lim_{n \to \infty} \sup_{x \in D} |f_n(x) - f(x)| = 0.$
I really serached a lot , in Google, Wikipedia and Youtube,
And I'm still having difficulties to understand what is sup.
I'll be glad if you can explain me. thanks in advance!
|
The supremum is the least upper bound. Let $S$ be a subset of $\mathbb{R}$:
$$
x = \sup(S) \iff ~ x \geq y~\forall y \in S \mbox{ and } \forall \varepsilon > 0, x - \varepsilon \mbox{ is not an upper bound of } S
$$
You may also define $\sup(S) = +\infty$ when $S$ is not bounded above.
The reason why we work with the supremum instead of simply the maximum is that some subsets of $\mathbb{R}$ have no maximum element. Take the open interval $(0,1)$ as an example: $\max\{(0,1)\}$ does not exist, but $\sup\{(0,1)\} = 1$.
Supremum of a nonempty subset having an upper bound always exists by the completeness property of the real numbers.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/629955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Rank of homology basis in Ahlfors' Complex Analysis In Ahlfors' Complex Analysis book Section 4.4.7, he decomposes the complement in the extended plane of a region $\Omega$ into connected components. He then constructs a collection of cycles $\gamma_i$ in $\Omega$, one for each component, such that every cycle $\gamma$ in $\Omega$ is homologous to an integer linear combination of the $\gamma_i$.
Each homology class for $\Omega$ is represented uniquely as such an integer linear combination. He then mentions that there are other such collections of cycles that also serve as a homology basis, but that
...by an elementary theorem in linear algebra we may conclude that every homology basis has the same number of elements.
I think I understand at least partially why this should be true: Homology classes of cycles in $\Omega$ form a module over the integers, and because we've already found the basis $\gamma_i$ we know that it's in fact a free module on these generators. We know from slightly more than elementary algebra that the rank of a finitely generated free module over a commutative ring is well-defined.
Question: What is the elementary linear algebra argument that he's using?
EDIT: Does the fact that it's a module over the integers help out? Can you embed the module of homology classes into its module of fractions (without using a basis to do so), show that the homology basis for the module becomes a basis for the module of fractions as a vector space over the rationals. Then you could use the linear algebra result for dimension of a vector space?
Thank you!
|
Concerning your edit, yes, you can embed the integer homology group into the homology with rational coefficients. You can define chains with rational coefficients for the paths, a cycle is a chain with empty boundary (the boundary of a path is $\text{endpoint} - \text{startpoint}$, the boundary of a chain is the corresponding linear combination of the boundaries of the constituent paths). The integral of $f$ over $\Gamma = \sum c_i\gamma_i$ is
$$\sum c_i\cdot \int_{\gamma_i} f(z)\,dz,$$
the winding number of a chain around a point $a$ not on the trace of the chain is
$$n(\Gamma,a) = \frac{1}{2\pi i}\sum c_i\int_{\gamma_i} \frac{dz}{z-a},$$
two cycles are homologous if $n(\Gamma_1,a) = n(\Gamma_2,a)$ for all $a\notin \Omega$.
A more or less tedious verification shows that $[\Gamma]_\mathbb{Z} \mapsto [\Gamma]_\mathbb{Q}$ is a well-defined embedding of the integer homology into the rational homology.
But determining the rank of a finitely generated free abelian group $B$ by counting the elements of the $\mathbb{F}_p$-vector space $B/pB$ for a prime $p$ ($p = 2$ for example) seems easier and more elementary to me.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/630138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is $\sin{(\log{x})}$ uniformly continuous on $(0,\infty)$? Is $\sin{(\log{x})}$ uniformly continuous on $(0,\infty)$?
Let $x,y \in (0,\infty)$.
$$
|f(x)-f(y)| = |\sin{\log{x}} - \sin{\log{y}}| = \left|2 \cos{\frac{\log{xy}}{2}}\sin{\log{\frac{x}{y}}{2}} \right| \leq 2 \left|\sin{\frac{\log{\frac{x}{y}}}{2}} \right| \leq \left|\log{\frac{x}{y}} \right| = |\log{x} - \log{y}|
$$
If we consider the interval is $(1,\infty)$, then it is easy to finish the above inequality.
But now the interval is $(0,\infty)$, I guess it is not uniformly continuous and now I don't have any idea to show it is not uniformly continuous.
Can anyone give a favor or some hints? Thank you!
|
No, it is not uniformly continuous. As $x\to0^+$ we have $\log x\to-\infty$, so $\sin(\log x)$ runs through full oscillations on shorter and shorter intervals; the underlying reason is that $\log x$ is not uniformly continuous on $(0,1)$.
Concretely, take $x_n=e^{\pi/2-2\pi n}$ and $y_n=e^{-2\pi n}$. Then $|x_n-y_n|=e^{-2\pi n}(e^{\pi/2}-1)$ tends to zero, but $|f(x_n)-f(y_n)|=|\sin(\frac\pi2-2\pi n)-\sin(-2\pi n)|=|1-0|=1$.
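Here is a small Python sketch exhibiting those witness pairs numerically (my own addition):

```python
import math

f = lambda x: math.sin(math.log(x))
for n in range(1, 4):
    x = math.exp(math.pi / 2 - 2 * math.pi * n)
    y = math.exp(-2 * math.pi * n)
    print(x - y, f(x) - f(y))  # the gap shrinks, but f(x) - f(y) stays 1
```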
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/630236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Applying Open Mapping Theorem Let $X$ and $Y$ Banach spaces and $F: X \to Y$ a linear, continuous and surjective mapping. Show that if $K$ is a compact subset of $Y$ then there exists an $L$, a compact subset of $X$ such that $F(L)= K$.
I know by the Open Mapping Theorem that $F$ is open. What else can I do? Thank yoU!
|
I suspect there must be a simpler proof for Banach spaces, but as I don't see one, here is what I came up with, using a faint recollection of the proof of Schwartz' lemma (simplified, since we're dealing with Banach spaces, not Fréchet spaces):
By the open mapping theorem, we know that there is a $C > 1$ such that
$$\bigl( \forall y \in Y\bigr)\bigl(\exists x \in X\bigr)\bigl(\lVert x\rVert \leqslant C\cdot \lVert y\rVert\land y = F(x)\bigr).\tag{1}$$
Without loss of generality, assume that $K$ is contained in the (closed, not important) unit ball of $Y$ and nonempty.
Since $K$ is compact, there is a finite set $M_1 = \{ y_{1,\nu} : 1 \leqslant \nu \leqslant n_1\}$ such that
$$K \subset \bigcup_{\nu=1}^{n_1} B_{1/C}(y_{1,\nu}).$$
For every $1\leqslant \nu \leqslant n_1$, choose an $x_{1,\nu} \in X$ with $\lVert x_{1,\nu}\rVert \leqslant C$ and $F(x_{1,\nu}) = y_{1,\nu}$. Let $L_1 = \{ x_{1,\nu} : 1 \leqslant \nu \leqslant n_1\}$.
For $k \geqslant 2$, if $M_1,\dotsc,M_{k-1}$ and $L_1,\dotsc,L_{k-1}$ have already been constructed, let $M_k = \{y_{k,\nu} : 1 \leqslant \nu \leqslant n_k\}$ be a finite set such that
$$K \subset \bigcup_{\nu = 1}^{n_k} B_{1/C^k}(y_{k,\nu}).$$
For each $\nu$, choose a $\mu_k(\nu) \in \{ 1,\dotsc,n_{k-1}\}$ such that $\lVert y_{k,\nu} - y_{k-1,\mu_k(\nu)}\rVert < C^{1-k}$, and by $(1)$ a $z_{k,\nu} \in X$ with $\lVert z_{k,\nu}\rVert \leqslant C\cdot \lVert y_{k,\nu} - y_{k-1,\mu_k(\nu)}\rVert$ and $F(z_{k,\nu}) = y_{k,\nu} - y_{k-1,\mu_k(\nu)}$. Let $x_{k,\nu} = x_{k-1,\mu_k(\nu)} + z_{k,\nu}$
and $L_k = \{x_{k,\nu} : 1 \leqslant \nu \leqslant n_k\}$.
By construction, $F(L_k) = M_k$, and hence, setting $L_\ast = \bigcup\limits_{k=1}^\infty L_k$, we see that
$$F(L_\ast) = F\left(\bigcup_{k=1}^\infty L_k\right) = \bigcup_{k=1}^\infty F(L_k) = \bigcup_{k=1}^\infty M_k$$
is dense in $K$.
$L_\ast$ is also totally bounded: Let $\varepsilon > 0$ arbitrary. Choose a $k \geqslant 1$ such that $C^{-k} < \varepsilon(C-1)$. Then
$$L_\ast \subset \bigcup_{j=1}^{k+2}\bigcup_{\nu=1}^{n_j} B_{\varepsilon}(x_{j,\nu}).\tag{2}$$
That is clear for $x \in L_m$ where $m \leqslant k+2$, so let $x'_m\in L_m$ where $m > k+2$. By construction, there is an $x'_{m-1} \in L_{m-1}$ with $\lVert x'_m-x'_{m-1}\rVert \leqslant C\cdot \lVert F(x'_m) - F(x'_{m-1})\rVert$ and $\lVert F(x'_m) - F(x'_{m-1})\rVert \leqslant C^{1-m}$, so $\lVert x'_m-x'_{m-1}\rVert \leqslant C^{2-m}$. If $m-1 > k+2$, again by construction, there is an $x'_{m-2} \in L_{m-2}$ with $\lVert x'_{m-1} -x'_{m-2}\rVert \leqslant C^{3-m}$. We thus obtain a finite sequence $(x'_{r})_{k+2 \leqslant r \leqslant m}$ with $\lVert x'_r - x'_{r+1}\rVert \leqslant C^{1-r}$ and $x'_r \in L_r$, whence
$$\lVert x'_m - x'_{k+2}\rVert \leqslant \sum_{r=k+2}^{m-1} \lVert x'_{r+1} - x'_r\rVert \leqslant \sum_{r=k+2}^{m-1} C^{1-r} < \frac{1}{C^{k}}\frac{1}{C-1} < \varepsilon,$$
showing $(2)$. Since $\varepsilon > 0$ was arbitrary, $L_\ast$ is totally bounded.
Hence $L := \overline{L_\ast}$ is compact, and therefore
$$F(L) \subset \overline{F(L_\ast)} = K$$
is a dense compact subset of $K$, i.e. $F(L) = K$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/630314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
calculation of Stefan's constant In the calculation of Stefan's constant one has the integral
$$J=\int_0^\infty \frac{x^{3}}{\exp\left(x\right)-1} \, dx$$
which according to Wikipedia is equal to $\frac{\pi^4}{15}$.
In this page of Wikipedia there is a (long) method of calculation using the Taylor expansion of $f(k) = \int_0^\infty \frac{\sin\left(kx\right)}{\exp\left(x\right)-1} \, dx$ obtained with contour integration.
In this article it is written that $J=\Gamma(4)\,\mathrm{Li}_4(1) = 6\,\mathrm{Li}_4(1) = 6 \zeta(4)$. How does one obtain these equalities?
It is also written that there are a number of ways to obtain the result. Does someone know another way?
|
Writing the integral as
$$
J = \int_0^\infty \frac{x^3}{\mathrm{e}^{x} -1} \mathrm{d}x = \int_0^\infty \frac{x^3 \mathrm{e}^{-x} }{ 1 - \mathrm{e}^{-x}} \mathrm{d}x = \lim_{\epsilon \downarrow 0} \int_\epsilon^\infty \frac{x^3 \mathrm{e}^{-x} }{ 1 - \mathrm{e}^{-x}} \mathrm{d}x
$$
Now for $x >\epsilon$, $\left(1-\mathrm{e}^{-x}\right)^{-1} = \sum_{k=0}^\infty \mathrm{e}^{-k x}$. Interchanging the summation and integration, justified by Tonelli's theorem:
$$ \begin{eqnarray}
J &=& \lim_{\epsilon \downarrow 0} \sum_{k=0}^\infty \int_{\epsilon}^\infty x^3 \mathrm{e}^{-(k+1)x} \mathrm{d}x \\ &=& \lim_{\epsilon \downarrow 0} \sum_{k=0}^\infty \frac{\Gamma(4)}{(k+1)^4} \mathrm{e}^{-(k+1) \epsilon} \left(1 + (k+1) \epsilon + \frac{1}{2} (k+1)^2 \epsilon^2 + \frac{1}{3!} (k+1)^3 \epsilon^3 \right) \\ &=& \Gamma(4) \lim_{\epsilon \downarrow 0} \left( \operatorname{Li}_4(\mathrm{e}^{-\epsilon}) + \epsilon \operatorname{Li}_3(\mathrm{e}^{-\epsilon}) + \frac{\epsilon^2}{2!} \operatorname{Li}_2(\mathrm{e}^{-\epsilon}) + \frac{\epsilon^3}{3!} \operatorname{Li}_1(\mathrm{e}^{-\epsilon}) \right)
\end{eqnarray}
$$
Since $\operatorname{Li}_1(\mathrm{e}^{-\epsilon}) = -\log\left(1-\mathrm{e}^{-\epsilon}\right)$ and $\lim_{\epsilon \downarrow 0} \epsilon^3 \log\left(1-\mathrm{e}^{-\epsilon}\right) = 0$, and due to finiteness of $\operatorname{Li}_{k}(1)$ for $k \geqslant 2$ we have
$$
J = \Gamma(4) \operatorname{Li}_4(1) = \Gamma(4) \zeta(4) = \frac{\pi^4}{15}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/630406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Help with a contour integration I've been trying to derive the following formula
$$\int_\mathbb{R} \! \frac{y \, dt}{|1 + (x + iy)t|^2} = \pi$$
for all $x \in \mathbb{R}, y > 0$. I was thinking that the residue formula is the way to go (and would prefer a solution by this method), but I keep getting stuck either proceeding with the function as is and choosing the correct contour or finding a substitution which makes things easier. I would greatly appreciate some help on how best to proceed. Thanks in advance.
|
Note that
$$|1+(x+i y)t|^2=1+2 x t +(x^2+y^2) t^2$$
The integral is therefore a straightforward application of the residue theorem, if you want. That is, evaluate
$$\int_{-\infty}^{\infty} \frac{dt}{1+2 x t +(x^2+y^2) t^2}$$
The poles are at $t_{\pm}=(-x \pm i y)/(x^2+y^2)$. If we close in the upper half plane with a semicircle of radius $R$, and let $R\to\infty$, the integral about the circular arc vanishes as $\pi/((x^2+y^2)R)$, and by the residue theorem, the integral is
$$i 2 \pi \frac1{2 (x^2+y^2) t_++2 x} = \frac{i 2 \pi}{2 (-x+i y)+2 x} = \frac{\pi}{y}$$
as claimed.
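A quick numerical confirmation for a sample point (Python sketch, assuming SciPy for the quadrature; the values of $x$ and $y$ are arbitrary):

```python
from math import pi
from scipy.integrate import quad  # assumed available

x, y = 0.7, 1.3
val, _ = quad(lambda t: y / abs(1 + (x + 1j * y) * t)**2, -float('inf'), float('inf'))
print(val, pi)  # both ~3.14159
```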
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/630563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
What's the explanation for why n^2+1 is never divisible by 3? What's the explanation for why $n^2+1$ is never divisible by $3$?
There are proofs on this site, but they are either wrong or overcomplicated.
It can be proved very easily by imagining 3 consecutive numbers, $n-1$, $n$, and $n+1$. We know that exactly one of these numbers must be divisible by 3.
$$(n-1)(n)(n+1)=(n)(n-1)(n+1)=(n)(n^2-1)$$
Since one of those first numbers had to have been divisible by $3$, this new product $(n)(n^2-1)$ must also be divisible by $3$. That means that either $n$ (and by extension $n^2$) or $n^2-1$ is divisible by $3$. If one of those has to be divisible by $3$, then $n^2+1$ cannot be.
So it is definitely true. My question is why is this true, what is inherent about $1$ more than a square number that makes it not divisible by $3$? Another way of saying this might be to explain it to me as if I don't know algebra.
|
One of $n-1$, $n$ or $n+1$ is divisible by $3$.
If it is $n$ then so is $n^2$.
If it is not $n$, then one of $n-1$ or $n+1$ is divisible by $3$, and hence so is their product $n^2-1$.
Thus, either $n^2$ or $n^2-1$ is a multiple of $3$. If $n^2+1$ were a multiple of three, then one of $2=(n^2+1)-(n^2-1)$ or $1=(n^2+1)-n^2$ would be a multiple of three, which is impossible.
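The residue computation in two lines of Python:

```python
# Squares mod 3 are only 0 or 1, so n^2 + 1 is 1 or 2 mod 3, never 0.
print({n * n % 3 for n in range(3)})          # {0, 1}
print({(n * n + 1) % 3 for n in range(100)})  # {1, 2}
```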
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/630742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 8,
"answer_id": 1
}
|
How to rewrite $7-\sqrt 5$ in root form without a minus sign? How to rewrite $7-\sqrt 5$ in root form without a minus sign ?
For clarity "root form " means an expression that only contains a finite amount of positive integers , additions , substractions , multiplications and root extractions (sqrt, cuberoot etc).
For example some quintic equations cannot be solved in root form.
A " root form without a minus sign " means an expression that only contains a finite amount of positive integers , additions , multiplications and root extractions (sqrt , cuberoot etc).
So the solution could look something like this :
$$ 7-\sqrt5 = \sqrt{...+1+(...)^{\frac{2}{3}}}+\sqrt{...+2(...)^{\frac{1}{3}}}$$
How to solve such problems ?
EDIT
Warning: $\dfrac{44}{7+\sqrt 5}$ is not a solution, since no divisions are allowed!
I got that answer 3 times now so I put it in the OP as a warning , not just the comments.
|
Let $n$ be some positive integer.
Suppose, for contradiction, that $(7-\sqrt 5)^n = r$ where $r$ is in the desired form and there is no root symbol over the entire expression on the RHS.
Now every root symbol in $r$ denotes a principal root.
The number of expressions obtained from $r$ by replacing one or more of its principal roots by nonprincipal ones is at most the product $a_1 a_2 \cdots a_m = k$, where $m$ is the number of root symbols and the $i$-th root is of index $a_i$, i.e. of the form $*^{1/a_i}$.
We denote the original $r$ by $r_0 = r$ and the nonprincipal variants by $r_i$ ($i$ is just an index).
Therefore (by very basic Galois theory) there exists a polynomial $P$ with rational coefficients such that $P(x) = (x-r_0)(x-r_1)(x-r_2)\cdots(x-r_i)\cdots$, the product running over all the $r_i$.
(Notice that $P$ is completely factored, and hence we have unique factorization.)
Now clearly $r = r_0$ is the maximum in absolute value of all the $r_i$: since $r$ is built only from additions, multiplications and roots of positive quantities, replacing principal roots by nonprincipal roots of the same modulus can only decrease the absolute value, by the triangle inequality.
Call this PROPERTY 1.
It is easy to show with some elementary ring theory or number theory that if
$(7-\sqrt 5)^n$ is a zero of a polynomial with rational coefficients, then so is its conjugate $(7+\sqrt 5)^n$.
Therefore some $r_i$ must be equal to $(7+\sqrt 5)^n$.
This is PROPERTY 2.
Now notice that $|(7+\sqrt 5)^n| > |(7-\sqrt 5)^n|$.
This is PROPERTY 3.
Combining properties 1, 2 and 3, we see that $r$ must be both the largest in absolute value of all the $r_i$ and NOT the largest in absolute value of all the $r_i$, which is the desired contradiction.
QED.
Sorry if I did not use much Galois theory; I'm not so good with it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/630791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 5,
"answer_id": 4
}
|
I flip M coins, my opponent flips N coins. Who has more heads wins. Is there a closed form for probability? In this game, I flip M fair coins and my opponent flips N coins. If I get more heads from my coins than my opponent, I win, otherwise I lose. I wish to know the probability that I win the game.
I came to this:
$$P(victory) = \sum\limits_{i=1}^M \left(\binom{M}{i}\left(\frac{1}{2}\right)^i\left(\frac{1}{2}\right)^{M-i}
\times
\sum\limits_{j=0}^{i-1} \left(\binom{N}{j}\left(\frac{1}{2}\right)^j\left(\frac{1}{2}\right)^{N-j} \right)
\right)$$
My questions would be: Is the above formula correct? And can a closed form formula exist, and if not, is there a simple proof?
|
If you toss $M$ coins, there are ${M \choose m}$ ways to get exactly $m$ heads.
Then, the number of ways you can win when your opponent tosses $N$ coins, given that you've tossed $m$ heads, is
$${M \choose m}\sum_{i=0}^{m-1}{N \choose i}.$$
Now, just sum over all possible values of $m$, and divide by the total number of outcomes:
$$P_{win}(M, N) = 2^{-(M+N)}\sum_{m=0}^{M}{M \choose m}\sum_{i=0}^{m-1}{N \choose i}.$$
This looks pretty close to what you have.
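For concreteness, here is the formula as a small Python function (math.comb needs Python 3.8+); note the classical fact that with $M=N+1$ coins the game is exactly fair:

```python
from math import comb

def p_win(M, N):
    """P(M fair coins show strictly more heads than N fair coins)."""
    favorable = sum(comb(M, m) * sum(comb(N, i) for i in range(m))
                    for m in range(M + 1))
    return favorable / 2**(M + N)

print(p_win(3, 3))  # 11/32 = 0.34375
print(p_win(4, 3))  # 0.5 exactly
```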
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/630873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Necessary condition for have same rank Let $P,Q$ real $n\times n$ matrices such that $P^2=P$ , $Q^2=Q$ and $I-P-Q$ is an invertible matrix.
Prove that $P$ and $Q$ have the same rank.
Some help with this please , happy year and thanks.
|
Let $V$ be the vector space on which all these matrices act. First, note that $V = P(V) \oplus (I-P)V$ (and in fact, $P(V) = \ker (I-P)$, $(I-P)V = \ker P$). Similarly for $Q$ instead of $P$.
Now, notice that the third condition states that for no nonzero vector is it true that $(I-P)v = Q v$. This means that $\dim Q (V) \leq \dim P(V)$: otherwise $\dim \ker P + \dim Q(V) > \dim V$, so some nonzero $v$ would lie in $\ker P \cap Q(V)$, and for such $v$ we would have $(I-P)v = v = Qv$. But by a symmetric argument, $\dim P(V) \leq \dim Q(V)$. So, $\dim Q(V) = \dim P(V)$. Now the result follows, since the rank of $P$ (or $Q$) is equal to the dimension of the image.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/630942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Direct proof of empty set being subset of every set Recently I finished my first pure mathematics course but with some intrigue about some proofs of definitions by contradiction and contrapositive but not direct proofs (the existence of infinite primes for example), I think most of them because the direct proof extends away from a first mathematics course or the proofs by contradiction/contrapositive are more didactic. The one that most bothers me in particular is the demonstration that the empty set is a subset of every set, and it is unique. I understand the uniqueness and understand the proof by contradiction:
"Suppose $\emptyset \subsetneq A$ where $A$ is a set. So it exists an element $x \in \emptyset$ such that $x \notin A$ wich is absurd because $\emptyset$ does not have any elements by definition."
but I would like to know if there exists a direct proof of this and if indeed extends from a first course. Thanks beforehand.
|
Yes, there's a direct proof:
The way that we show that a set $A$ is a subset of a set $B$, i.e. $A \subseteq B$, is that we show that all of the elements of $A$ are also in $B$, i.e. $\forall a \in A, a\in B$.
So we want to show that $\emptyset \subseteq A$. So consider all the elements of the empty set. There are none. Therefore, the statement that they are in $A$ is vacuously true: $\forall x \in \emptyset, x \in A$. So $\emptyset \subseteq A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 9,
"answer_id": 0
}
|
Evaluate the limit $\lim_{x\to 0} \frac{(\tan(x+\pi/4))^{1/3}-1}{\sin(2x)}$ Evaluate the limit
$$\lim_{x\to 0} \frac{(\tan(x+\pi/4))^{1/3}-1}{\sin(2x)}$$
I know the limit is $1\over3$ by looking at the graph of the function, but how can I algebraically show that that is the limit. using this limit: $$\lim_{x \rightarrow 0} \frac{(1+x)^c -1}{x} =c$$? (without L'Hopital Rule)
|
Use the definition of derivative: $$f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}.$$ Second hint: multiply numerator and denominator by $x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Proving that two sets are diffeomorphic I have the following two sets
$\mathcal{S}= \left\lbrace (x,y,z,w) \in \mathbb{R}^4 \mid x^2+y^2- z^2-w^2 = 1 \right\rbrace$
and $\mathcal{S}' = \left\lbrace (x,y,z,w) \in \mathbb{R}^4 \mid x^2+y^2- z^2-w^2 = r \right\rbrace$
for some non-zero real number $r$.
I need to show that these two sets are diffeomorphic.
I considered taking two cases for $r$ ($r>0$ and $r<0$);
then considering the map $f :\mathcal{S} \rightarrow \mathcal{S}' $ where $f( x,y,z,w) = \sqrt{|r|} ( x,y,z,w) $ when $r>0$.
Is this correct?
Thank you.
|
As you wrote yourself, your argument works only for $r\gt 0$.
If $r\lt 0$ the trick is to notice that the change of variables $x=Z,y=W,z=X, w=Y$ shows that $S$ is diffeomorphic to $Z^2+W^2-X^2-Y^2=1$ and thus to $x^2+y^2-z^2-w^2=-1$.
You can then apply your argument to show that you can replace the $-1$ on the right hand side of the equation $x^2+y^2-z^2-w^2=-1$ by any negative number $r$ to prove that $S$ is also diffeomorphic to the manifold $S'$ given by $x^2+y^2-z^2-w^2=r$ for any $r\lt 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Intersection of conics By conic we understand a conic on the projective plane $\mathbb{P}_2=\mathbb{P}(V)$, where $V$ is $3$-dimensional. I'd like to ask how to find the number of points in the intersection of two given smooth conics $Q_1$ and $Q_2$ (given by $q_1,q_2\in S^2V^*$ respectively). So, we are to find the set of all $(x_0,x_1,x_2)\in V$ such that $q_1(x_0,x_1,x_2)=q_2(x_0,x_1,x_2)=0$. First, is this number necessarily finite? And second, how to describe such intersection of quadrics (i.e. 'conics' in $\mathbb{P}^n$) in higher dimensions?
I'm sorry for the lack of my 'research', I'm really stuck on this problem.
|
First, is this number necessarily finite?
No: a degenerate conic factors into two lines. Two such degenerate conics might have a line in common. This line would be a continuum of intersections. Barring such a common component, the number of intersections is limited to four.
And second, how to describe such intersection of quadrics (i.e. 'conics' in $\mathbb P^n$) in higher dimensions?
If you want to count them, then Bézout's Theorem, as mentioned in a comment, is indeed the way to go. Wikipedia includes a generalization to higher dimensions. Applied to quadrics, you get the result that barring shared components, $n$ quadrics intersect in up to $2^n$ points. In exactly $2^n$ if you include complex points of intersection and take multiplicities into account.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Book request: mathematical logic with a semantical emphasis. Suppose I am interested in the semantical aspect of logic; especially the satisfaction $\models$ relation between models and sentences, and the induced semantic consequence relation $\implies,$ defined by asserting that $\Gamma \implies \varphi$ iff whenever $M \models \Gamma$ we have $M \models \varphi$. Suppose, however, that I am not currently interested in any of the following (admittedly very important) issues.
*
*Founding mathematics.
*Formal systems, and the limitations of formal systems.
*Recursive axiomatizability.
*Computability theory.
Given my particular mathematical interests, can anyone recommend a good, fairly elementary mathematical logic book?
|
The book "Éléments de Logique Mathématique" by Kreisel and Krivine (which I believe has an English translation, probably with the obvious title "Elements of Mathematical Logic") takes a fiercely semantical approach to the basic parts of mathematical logic. The material you don't want, about the axiomatic method and foundations of mathematics, is relegated to a couple of appendices.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Prove that $\lim\limits_{n \rightarrow \infty} \left(\frac{23n+2}{4n+1}\right) = \frac{23}{4} $. My attempt:
We prove that $$\displaystyle \lim\limits_{n \rightarrow \infty} \left(\frac{23n+2}{4n+1}\right) = \frac{23}{4} $$
It is sufficient to show that for an arbitrary real number $\epsilon\gt0$, there is a $K$
such that for all $n\gt K$, $$\left| \frac{23n+2}{4n+1} - \frac{23}{4}
\right| < \epsilon. $$
Note that $$ \displaystyle\left| \frac{23n+2}{4n+1} - \frac{23}{4} \right| = \left| \frac{-15}{16n+4} \right| $$ and for $ n > 1 $ $$ \displaystyle \left| \frac{-15}{16n+4} \right| = \frac{15}{16n+4} < \frac{1}{n}. $$
Suppose $ \epsilon \in \textbf{R} $ and $ \epsilon > 0 $. Take $ K = \displaystyle \frac{1}{\epsilon} $ and let $ n > K $. Then $ n > \displaystyle \frac{1}{\epsilon} $, so $ \epsilon >\displaystyle \frac{1}{n} $.
Thus
$$ \displaystyle\left| \frac{23n+2}{4n+1} - \frac{23}{4} \right| = \left| \frac{-15}{16n+4} \right| = \frac{15}{16n+4} < \frac{1}{n} < \epsilon. $$ Thus $$ \displaystyle \lim\limits_{n \rightarrow \infty} \left(\frac{23n+2}{4n+1}\right) = \frac{23}{4}. $$
Is this proof correct? What are some other ways of proving this? Thanks!
|
Is this proof correct? What are some other ways of proving this?
Thanks!
Your proof is correct with the caveat that you are a bit more precise about what $\epsilon$ and $K$ mean. Another way to prove this is using l'Hôpital's rule. Let $f(n)=23n+2$, $g(n)=4n+1$, then we can see that
$$
\lim_{n\rightarrow\infty} f(n) = \lim_{n\rightarrow\infty} g(n) = \infty
$$
In this case, the rule applies (treating $n$ as a continuous variable) because you have an indeterminate form of $\infty/\infty$. Then the rule is that
$$
\lim_{n\rightarrow\infty}\frac{f(n)}{g(n)} = \lim_{n\rightarrow\infty}\frac{f'(n)}{g'(n)}
$$
All that remains is to evaluate $f'(n)=23$, $g'(n)=4$,
$$
\lim_{n\rightarrow\infty}\frac{f(n)}{g(n)} = \lim_{n\rightarrow\infty}\frac{23}{4} = \frac{23}{4}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Square of Bernoulli Random Variable I was wondering about the distribution of the square of a Bernoulli RV. My background in statistics is not too good, so maybe this doesn't even make sense, or it is a trivial problem.
Let, $Z\sim X^2$, where $X\sim \text{Ber}(p)$.
$F_Z(z)=\Pr(X^2\leq z)$
$=\Pr(-z^{1/2}\leq X\leq z^{1/2})$
$=F_X(z^{1/2})-F_X(-z^{1/2})$
At this point I'm pretty confused: the CDF is right-continuous, while I know $Z$ is a discrete RV.
$\implies P_Z(z)=\frac{ d}{dz}\{F_X(z^{1/2})-F_X(-z^{1/2})\}$
I guess you can define the derivative to be:
$\frac{d}{dx}f(x)=\frac{f(x+1)-f(x)}{1}$ or something... and we have
$F_X(x)=\begin{cases}
0, & \text{if }x<0 \\
1-p, & \text{if }0\leq x\lt 1 \\ 1, & \text{if } x\geq1
\end{cases}$
Any help is appreciated (is my approach correct?)
|
If $X$ is Bernoulli, then $X$ takes only the values $0$ and $1$; since $0^2=0$ and $1^2=1$, we get $X^2=X$. So $Z=X^2$ has exactly the same $\text{Ber}(p)$ distribution as $X$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Does this orbifold embed into $\mathbb{R}^3$? Let $X$ be the space obtained by gluing together two congruent equilateral triangles along corresponding edges.
Note that $X$ has the structure of a Riemannian manifold except at the three cone points. In particular, $X$ is a Riemannian orbifold.
Is there an isometric embedding of $X$ into $\mathbb{R}^3$?
|
If you want the image to be convex, then no. Otherwise, my guess is that Kuiper's theorem gives you an embedding...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
What is the purpose of universal quantifier? The universal generalization rule of predicate logic says that whenever formula M(x) is valid for its free variable x, we can prefix it with the universal quantifier,
M(x) ⊢ ∀x M(x).
But then it seems to make no sense. Why introduce a notion that does not mean anything new?
|
$M(x)$ doesn't strictly have a truth value; $\forall x.M(x)$ does. So, different beasts. Not to mention "$T\vdash M(x)$" is not a sentence in the language, while again, "$\forall x.M(x)$" is.
The universal generalization rule connects a piece of metatheory ("for any variable '$x$', '$M(x)$' is provable", or some such), with a statement in the theory itself. The distinction is a subtle one, and won't often trip one up in everyday logic use, but when studying some truly abstract reaches of logic like model theory, you can end up in some tangled fallacies if you don't keep theory and metatheory apart.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Fixed point of tree automorphism Given the tree $T$ and its automorphism $\phi$ prove that there exists a vertex $v$ such that $\phi(v)=v$ or an edge $\{{u,v\}}$ such that $\phi(\{{u,v}\}) = \{{u,v}\}$
|
See this wikipedia article. The center is preserved by any automorphism. There is also the barycenter of a tree (which is a vertex for which the sum of the distances to the other points is minimized). Again, there is either one such or two adjacent ones. Are the center and the barycenter the same? For you to find out.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/631970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Can someone explain how this linear equation was solved [proof provided] Can someone please explain to me the steps taken in the proof provided to solve the linear equation? [1]: http://i.imgur.com/2N52occ.jpg "Proof"
What I don't understand is how he removed the denominator of both fractions (3,5 respectively). I know that 3 * 5 is 15, so he just multiplied everything by 15?
Now I understand that he found the least common multiples of 3 and 5 (which I kinda already knew he was trying to do), however, did he use the 5 from the 2nd part of the equation (2m / 5) or from the first part (5/1). I think that is where I lost track.
|
Yes. The proof just multiplied everything by $15$ which is the least common multiple of $3,5$.
Note that in this case $3\times 5=15$ is the least common multiple of $3,5$, but if you have two denominators $3,6$, then the least common multiple is $6$. ($3\times 6=18$ is not the least common multiple.)
In general, if you have several fractions in an equation, you can multiply everything by the least common multiple of all of the denominators to get an equation without any fraction.
So, you'll be able to solve it more easily.
EDIT: To answer another question.
The $5$ comes from the denominator of the second part, $\frac{2m}{5}$ (the first part is $\frac 51$, whose denominator is $1$). You can write the equation as
$$\frac 51-\frac{2m+7}{3}=\frac{2m}{5}.$$
Hence, the denominators are only $1,3,5$.
$$15\times \frac 51-15\times \frac{2m+7}{3}=15\times \frac{2m}{5}$$
$$15\times 5-5(2m+7)=3(2m).$$
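Finishing the arithmetic from here: $75-10m-35=6m$, so $40=16m$ and $m=\frac{5}{2}$.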
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Invertible function $f(x) = \frac{x^3}{3} + \frac{5x}{3} + 2 $ How can I prove that $f(x) = \frac{x^3}{3} + \frac{5x}{3} + 2 $ is invertible.
First I swapped the variables $x$ and $y$ and tried to simplify the resulting equation, but I am stuck. Need some help please.
|
Both $g(x)=\frac{x^3}3$ and $h(x)=\frac{5x}3+2$ are increasing functions. (This should be clear if you know graphs of some basic functions.)
Sum of two (strictly) increasing functions is again an increasing function, therefore $f(x)=g(x)+h(x)$ is strictly increasing.
If a function is strictly increasing, then it is injective. Since $f$ is also continuous with $f(x)\to\pm\infty$ as $x\to\pm\infty$, it is surjective onto $\mathbb{R}$ as well, hence invertible.
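Alternatively, one can check monotonicity directly from the derivative:
$$f'(x)=x^2+\frac{5}{3}>0\quad\text{for all }x\in\mathbb{R},$$
so $f$ is strictly increasing.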
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Ansatz of particular solution, 2nd order ODE Find the particular solution of $y'' -4y' +4y = e^{x}$
I'm helping a student with single-variable calculus, but perhaps I need some brushing up myself. I suggested $y$ should have the form $Ce^{x}$. This produced the correct answer, but the solution sheet said the correct ansatz would be $z(x)e^x$. I don't understand the point of the $z$ here when $e^x$ isn't accompanied by a polynomial or whatever. Am I missing something?
|
$e^x$ alone is a particular solution of the ODE. On the other hand, the characteristic equation has a double root $r=2$. So the homogeneous solution contains a term $e^{2x}$ and, because of the degeneracy, a term $x\,e^{2x}$.
Hence the general solution of the ODE is $y(x) = e^{x} + (C_1 + C_2 x)\, e^{2 x}$.
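If you want to double-check this, here is a minimal sketch using SymPy (assuming it is available):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')
    ode = sp.Eq(y(x).diff(x, 2) - 4*y(x).diff(x) + 4*y(x), sp.exp(x))
    print(sp.dsolve(ode, y(x)))  # y(x) = (C1 + C2*x)*exp(2*x) + exp(x)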
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Number of zeros equals number of poles The following is an old qualifying exam problem I cannot solve:
Let $f$ be a meromorphic function (quotient of two holomorphic functions) on an open neighborhood of the closed unit disk. Suppose that the imaginary part of $f$ does not have any zeros on the unit circle, then the number of zeros of $f$ in the unit disk equals the number of poles of $f$ in the unit disk.
It seems this problem begs Rouche's theorem, but I cannot seem to apply it correctly.
|
This is not quite Rouché, though everything in the end boils down to the Cauchy integral formula. To count zeros and poles you pass to the logarithmic derivative, i.e. $\frac{f'}{f}$, and then integrate over the unit circle (which you are allowed to do since $f$ does not vanish there by hypothesis). The key observation is that $\operatorname{Im} f$ is continuous and nonvanishing on the circle, hence of constant sign there; so the curve $f(e^{i\theta})$ stays in a single open half-plane, which does not contain $0$, and winds zero times around the origin. By the argument principle this winding number equals $Z-P$, so $Z=P$. Why you can take a continuous branch of the log is explained in the answer by @Sourav.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Prove or disprove the implication: Prove or disprove the implication:
$a^2\cdot \tan(B-C)+ b^2\cdot \tan(C-A)+ c^2\cdot \tan(A-B)=0 \implies$
$ ABC$ is an isosceles triangle.
I tried to break the left-hand side down into factors, but all efforts were in vain.
Does anyone have a suggestion?
Thank you very much!
|
I believe I have an example in which the identity is true but the triangle is not isosceles. All the algebra in the following was done by Maple.
Let $a=1$ and $b=2$, and take $c^2$ to be a root of the cubic $x^3-5x^2-25x+45$. The cubic has two roots between $1$ and $9$, either of which will give a valid set of sides $\{a,b,c\}$ for a triangle.
Now $c^2\ne4$ since $4$ is not a root of the cubic; so $c\ne2$; likewise $c\ne1$; so the triangle is not isosceles. Further forbidden values of $c$ are checked in the same way and will be given without comment.
Now use the cosine rule to find $\cos A$, $\cos B$ and $\cos C$ in terms of $a,b,c$. Then use the sine rule to write $\sin A$ and $\sin B$ as multiples of $\sin C$. Check that $\cos C\ne\pm1$ and so $\sin C\ne0$. Check that $\cos(B-C)$ and so on are not zero, then calculate $\tan(B-C)$ and so on, replacing $\sin^2C$ by $1-\cos^2C$; they all have a factor of $\sin C$.
Form the given expression and cancel $\sin C$; after simplification, Maple gives the numerator of the result as a constant times
$$(c-1)(c-2)(c+1)(c+2)(c^6-5c^4-25c^2+45)\ ,$$
which is zero.
This also fits in with the answer given by @Blue: with the values of $a,b,c$ I have specified, the first factor in his/her product is zero and not the others.
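A quick numerical sanity check of this counterexample (a sketch assuming NumPy; the variable names are mine):

    import numpy as np

    # c^2 is a root of x^3 - 5x^2 - 25x + 45 lying between 1 and 9
    roots = np.roots([1, -5, -25, 45])
    c2 = [r.real for r in roots if abs(r.imag) < 1e-12 and 1 < r.real < 9][0]
    a, b, c = 1.0, 2.0, np.sqrt(c2)

    # angles from the cosine rule
    A = np.arccos((b*b + c*c - a*a) / (2*b*c))
    B = np.arccos((a*a + c*c - b*b) / (2*a*c))
    C = np.pi - A - B

    # the expression vanishes although a, b, c are pairwise distinct
    print(a*a*np.tan(B - C) + b*b*np.tan(C - A) + c*c*np.tan(A - B))  # ≈ 0 up to rounding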
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 3
}
|
$\mathbb P$-convergence implies $L^2$-convergence for Gaussian sequences Consider $(X_n)_{n \in \mathbb N}$, a sequence of Gaussian random variables whose limit in probability exists and is given by $X$.
I was interested in showing that in this particular case the sequence always converges also in $\mathbb L ^2$ (in fact in $\mathbb L ^p$) towards $X$.
Is the following argument sufficient to show this fact (since $X$ is Gaussian, being a limit in probability of the Gaussian $X_n$, and $(X_n), X \in \mathbb L^p$)?
Given an arbitrary $\epsilon >0$
\begin{align} \mathbb E [ (X_n -X)^2]&= \mathbb E [ (X_n -X)^2\mathbf 1_{|X_n -X|> \epsilon}]+\mathbb E [ (X_n -X)^2\mathbf 1_{|X_n -X|\le \epsilon}]\\& \leq\mathbb E [ (X_n -X)^4]^{1/2}\,\mathbb P(|X_n-X|> \epsilon)^{1/2}+ \epsilon^2
\end{align}
Also, I have a vague memory of a minimal condition for this implication in the general case, when the sequence is not necessarily normally distributed. Should it be that the sequence is almost surely bounded?
Many thanks for your help.
|
We have to show that for any subsequence $(X_{n_k})$ we can extract a further subsequence $(X_{n'_k})$ such that $\mathbb E|X_{n'_k}-X|^p\to 0$. We can assume that $X_{n_k}\to X$ almost surely by passing to a further subsequence. Then $X$ is Gaussian.
Write $X_n=a_nN+b_n$, $X=aN+b$, where $N\sim N(0,1)$.
If $a\gt 0$, then $a_n\gt a/2$ for $n$ large enough, hence we can prove that $\sup_n\mathbb E|X_n|^p$ is finite for each $p$. We then conclude as in the OP's attempt.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
The number $n^4 + 4$ is never prime for $n>1$ I am taking a basic algebra course, and one of the proposed problems asks to prove that $n^4 + 4$ is never a prime number for $n>1$.
I am able to prove it in some particular cases, but I am not able to do it when $n$ is an odd multiple of $5$.
|
$n^4 + 4 = (n^2 - 2 n + 2)(n^2 + 2 n + 2)$; this is the Sophie Germain identity. For $n>1$ both factors are at least $2$, since $n^2-2n+2=(n-1)^2+1\ge 2$, so $n^4+4$ is composite.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Why are mathematical proofs that rely on computers controversial? There are many theorems in mathematics that have been proved with the assistance of computers, take the famous four color theorem for example. Such proofs are often controversial among some mathematicians. Why is it so?
In my opinion, shifting from manual proofs to computer-assisted proofs is a giant leap forward for mathematics. Other fields of science rely on computers heavily. Physics experiments are simulated on computers. Chemical reactions are simulated on supercomputers. Even evolution can be simulated on an advanced enough computer. All of this can help us understand these phenomena better.
But why are mathematicians so reluctant?
|
I think a computer-assisted or computer-generated proof can be less convincing if it consists of a lot of dense information that all has to be correct. A proof should have one important property: a person reading it must be convinced that the proof really proves the proposition it is supposed to prove.
Now with most computer-generated proofs that is in fact not the case. What often happens is that the proof is the output of an algorithm that, for instance, checks all possibilities in a huge state space. And then we say the computer proved it, since all states have been checked and they were all OK.
An important question here is: can there be a simple, elegant proof if we have to check a million cases separately and cannot use an elegant form of induction or other technique to reduce the number of logical cases?
I would say no, so in this case it does not matter if the output is from a computer or not. However, most mathematicians would look for a more elegant solution where the number of distinct cases to check fits one page of paper, and would only consider a proof of that size a nice proof.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "111",
"answer_count": 12,
"answer_id": 3
}
|
Complex numbers - proof Let z and w be complex numbers such that $|z| = |w| = 1$ and $zw \neq -1$. Prove that $\frac{z + w}{zw + 1}$ is a real number.
I let z = a + bi and w = c+ di so we have that $\sqrt{a^2+b^2} = \sqrt{c^2+d^2} = 1$ and $a^2+b^2 = c^2 + d^2 = 1$. I plugged it into the equation but I didn't really get anything worth noting. Does anyone know how to solve this? Thanks
|
Since $|z|=|w|=1$,
$$z\bar z=w\bar w=1.$$
Letting $u$ be the given number,
$$u=\frac{z+w}{zw+1}=\frac{(1/\bar z)+(1/\bar w)}{zw+1}=\frac{\bar w+\bar z}{\bar z\bar w(zw+1)}=\frac{\bar z+\bar w}{\bar z\bar w+1}=\bar u.$$
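A quick numerical illustration (a sketch assuming NumPy):

    import numpy as np

    rng = np.random.default_rng(0)
    # two random points on the unit circle (almost surely zw != -1)
    z, w = np.exp(1j * rng.uniform(0, 2*np.pi, 2))
    u = (z + w) / (z*w + 1)
    print(u.imag)  # ≈ 0 up to floating-point error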
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 0
}
|
Proving that if $|f''(x)| \le A$ then $|f'(x)| \le A/2$
Suppose that $f(x)$ is differentiable on $[0,1]$ and $f(0) = f(1) = 0$. It is also
known that $|f''(x)| \le A$ for every $x \in (0,1)$. Prove that
$|f'(x)| \le A/2$ for every $x \in [0,1]$.
I'll explain what I did so far. First using Rolle's theorem, there is some point $c \in [0,1]$ so $f'(c) = 0$.
EDIT: My first preliminary solution was wrong so I tried something else.
EDIT2: Another revision :\
I define a Taylor series of a second order around the point $1$:
$$ f(x) = f(1) + f'(1)(x-1) + \frac12 f''(d_1)(x-1)^2 $$
$$ f(0) = f(1) + f'(1)(-1) + \frac12 f''(d_1)(-1)^2 $$
$$ |f'(1)| = \frac12 |f''(d_1)| \le \frac12 A $$
Now I develop a Taylor series of a first order for $f'(x)$ around $1$:
$$ f'(x) = f'(1) + f''(d_2)(x-1) $$
$$ |f'(x)| = |f'(1)| + x\cdot|f''(d_2)|-|f''(d_2)| \leq \frac{A}{2} + A - A = \frac{A}{2} $$
It looks correct to me, what do you guys think?
Note: I cannot use integrals, because we have not covered them yet.
|
If $f''$ exists on $[0,1]$ and $|f''(x)| \le A$, then $f'$ is Lipschitz continuous, which implies that $f'$ is absolutely continuous. So $f'(a)-f'(b)=\int_{a}^{b}f''(t)\,dt$ for any $0 \le a, b \le 1$. And, of course, the same is true of $f$ because $f$ is continuously differentiable.
Because $f(0)=0$, one has the following for $0 \le x \le 1$:
$$
f(x) = \int_{0}^{x}f'(t)\,dt = tf'(t)|_{0}^{x}-\int_{0}^{x}tf''(t)\,dt=xf'(x)-\int_{0}^{x}tf''(t)\,dt.
$$
Similarly, because $f(1)=0$, one has the following for $0 \le x \le 1$:
$$
f(x)=\int_{1}^{x}f'(t)\,dt = (t-1)f'(t)|_{1}^{x}-\int_{1}^{x}(t-1)f''(t)\,dt=(x-1)f'(x)-\int_{1}^{x}(t-1)f''(t)\,dt.
$$
Subtracting the second equation from the first gives
$$
0=f'(x)-\int_{0}^{x}tf''(t)\,dt+\int_{1}^{x}(t-1)f''(t)\,dt.
$$
Therefore,
$$
|f'(x)| \le A\left(\int_{0}^{x}t\,dt + \int_{x}^{1}(1-t)\,dt\right)=\frac{A}{2}(x^{2}+(1-x)^{2}).
$$
The maximum of the expression on the right occurs at $x=0$ and $x=1$, with a value of $A/2$. Therefore, $|f'(x)| \le A/2$.
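Incidentally, the constant $A/2$ is sharp: the function $f(x)=\frac{A}{2}\,x(1-x)$ satisfies $f(0)=f(1)=0$ and $|f''(x)|\equiv A$, while $f'(x)=\frac{A}{2}-Ax$ gives $|f'(0)|=|f'(1)|=\frac{A}{2}$.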
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 1
}
|
What if $\operatorname{div}f=0$? Say, we have a function $f\in C^1(\mathbb R^2, \mathbb R^2)$ such that $\operatorname{div}f=0$. According to the divergence theorem the flux through the boundary surface of any solid region equals zero.
So for $f(x,y)=(y^2,x^2)$ the flux through the boundary surface on the picture (sorry for its thickness, please treat it as a line) is zero.
The result (if I interpret the theorem correctly) seems to be quite surprising.
It looks like we can also get non-zero flux with zero divergence. For example, $$g(x,y)=(-\frac{x}{x^2+y^2},-\frac{y}{x^2+y^2})$$ (see the next picture) has $\operatorname{div}g=0$ yet the flux is clearly negative.
The function $g$ isn't continuous at $(0,0)$ and therefore not $C^1$.
My first question is: are there any other cases where divergence is zero yet the flux isn't?
The reasons I'm asking is the exercise I came across:
Compute the surface integral
$$\int_{U}F \cdot dS$$
where $F(x,y)=(y^3, z^3, x^3)$ and $U$ is the unit sphere.
I didn't expect the exercise to be doable mentally (by simply noting that $\operatorname{div}F=0$ and concluding the integral is zero) yet $F$ is clearly $C^1$ so the divergence theorem seems to be applicable. My second question is: am I overlooking something in the exercise?
|
The divergence theorem is a statement about 3-dimensional vector fields, the 2-dimensional version sometimes being called the normal version of Green's theorem. In your second example, the vector field
$$g(x,y) = \left( -\frac{x}{x^2+y^2}, -\frac{y}{x^2+y^2} \right)$$
is not even defined at the origin, which is why Green's theorem doesn't apply to it. In the final problem though, the vector field $F$ is well-defined (and $C^1$) everywhere on $\mathbb{R}^3$, so you can apply the divergence theorem, and conclude that the flux is $0$.
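To confirm the divergence computation, a minimal sketch with SymPy (assuming it is available):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    F = (y**3, z**3, x**3)
    div = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
    print(div)  # 0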
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
If $0\leq f_n$ and $f_n\rightarrow f$ a.e. and $\lim\int_Xf_n=\int_X f$, is it true that $\lim\int_Ef_n=\int_E f$ for all $E\in\mathcal{M}$? If $0\leq f_n$ and $f_n\rightarrow f$ a.e. and $\lim\int_Xf_n=\int_X f$, prove or disprove that $\lim\int_Ef_n=\int_E f$ for all $E\in\mathcal{M}$.
I think it is true. It is easy to see $\lim\int_Ef_n\geq\int_E f$ using Fatou's Lemma, yet I could not show that $\lim\int_Ef_n\leq\int_E f$.
To disprove, each time I am frustrated by either a.e. convergence or non-negativity.
Thanks in advance,
|
Take $X= [0,\infty)$ and define $f_n=n^2 \chi_{[0, \frac{1}{n}]}$.
Then $\lim _{n\rightarrow \infty} \int _{X} f_n =\infty$, but $f = 0$ almost everywhere, so
$\int _{X} f =0$ and the hypothesis fails. To fix that, define $\tilde{f}_n =n^2 \chi_{[0, \frac{1}{n}]} + \chi _{[1,\infty)} $. Now $\tilde {f} = \chi _{[1, \infty)}$ almost everywhere.
The hypothesis now is satisfied (both integrals over $X$ equal $\infty$), but if you restrict the integral to $E=[0,1]$ you get $\int_E \tilde f_n = n \to \infty \neq 0 = \int_E \tilde f$, so you do not have the desired limiting behavior.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/632957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
prove that $a^{nb}-1=(a^n-1)((a^n)^{b-1}+...+1)$ Prove that $a^{nb}-1=(a^n-1)\cdot ((a^n)^{b-1}+(a^n)^{b-2}+...+1)$
We can simplify it, like this:
$$(a^{n})^b-1=(a^n-1)\cdot \sum_{i=1}^{b}(a^{n})^{b-i}$$
How can we prove this?
|
$$
\begin{align}
&(x-1)(x^{n-1}+x^{n-2}+x^{n-3}+\dots+x+1)\\
=&\quad\,x^n+x^{n-1}+x^{n-2}+x^{n-3}+\dots+x\\
&\quad\quad\;\:\,-x^{n-1}-x^{n-2}-x^{n-3}-\dots-x-1\\
=&\quad\,x^n\qquad\qquad\qquad\qquad\qquad\qquad\quad\;\;\;-1
\end{align}
$$
Substituting $x=a^n$ and taking the exponent to be $b$ instead of $n$ gives $(a^n-1)\sum_{i=1}^{b}(a^n)^{b-i}=(a^n)^b-1=a^{nb}-1$, which is the claim.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Simple pendulum as Hamiltonian system I am unable to understand how to write the equations of the simple pendulum in generalized coordinates and generalized momenta in order to check whether or not it is a Hamiltonian system.
Having
$$E_T = E_k + E_u = \frac{1}{2}ml^2\dot\theta^2 + mgl(1-\cos\theta)$$
How can I find the $p$ and $q$ for $H(q,p)$ in order to check that the following holds, i.e. that the system is a Hamiltonian system?
$$\frac{dq}{dt}=\frac{\partial H}{\partial p}~~~~~~~~~~~~~~\frac{dp}{dt}=\frac{-\partial H}{\partial q}$$
|
The Lagrangian is
$${\cal L}=\frac{1}{2}ml^2\dot{\theta}^2-mgl(1-\cos\theta).$$
The conjugate momentum is
$$p_\theta=\frac{\partial{\cal L}}{\partial\dot{\theta}}=ml^2\dot{\theta}$$
and so the Hamiltonian is
$${\cal H}=\sum_q \dot{q}p_q-{\cal L}=\frac{1}{2}ml^2\dot{\theta}^2+mgl(1-\cos\theta)=\frac{p_\theta^2}{2ml^2}+mgl(1-\cos\theta).$$
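With $q=\theta$ and $p=p_\theta$, Hamilton's equations then read
$$\dot{\theta}=\frac{\partial \cal H}{\partial p_\theta}=\frac{p_\theta}{ml^2},\qquad \dot{p}_\theta=-\frac{\partial \cal H}{\partial \theta}=-mgl\sin\theta,$$
which combine to the familiar pendulum equation $\ddot{\theta}=-\frac{g}{l}\sin\theta$.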
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Where is the world of Imaginary numbers? Complex numbers have two parts, a real part and an imaginary part.
The real world is the base of the real numbers,
but
where is (or what is) the world of imaginary numbers?
|
Complex numbers can be thought of as 'rotation numbers'. Real matrices with complex eigenvalues always involve some sort of rotation. Multiplying two complex numbers composes their rotations and multiplies their lengths. They find frequent use with alternating current which fluctuates periodically. Also, $e^{ix}=\cos x+i \sin x$, parametrizing a circle.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
How to evaluate the following summation I am trying to find the definite integral of $a^x$ between $b$ and $c$ as the limit of a Riemann sum (where $a > 0$):
$I = \displaystyle\int_b^c \! a^{x} \, \mathrm{d}x.$
However, I'm currently stuck on the following part, namely finding $S$:
$S = \displaystyle\sum\limits_{i=1}^n \displaystyle{a^{\displaystyle\frac{i(c-b)}{n}}}$
Is there a formula for this kind of expression? Thank you for your help.
|
Note that:
$$\begin{align}
S &= \sum\limits_{i=1}^n \displaystyle{a^{\displaystyle\frac{i(c-b)}{n}}} \\
&= \sum\limits_{i=1}^n \left(a^{\left(\dfrac{(c-b)}{n}\right)}\right)^i
\end{align}$$
Now, we can use the finite form of the geometric series formula:
$$S = \frac{\left(a^{\left(\dfrac{(c-b)}{n}\right)}\right)-\left(a^{\left(\dfrac{(c-b)}{n}\right)}\right)^{n+1}}{1-a^{\left(\dfrac{(c-b)}{n}\right)}}$$
...and that limit will be pretty nasty, but do-able. (I think.)
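For what it's worth, the limit can be finished (a sketch, assuming $a\neq 1$ and the right-endpoint partition $x_i=b+\frac{i(c-b)}{n}$). The full Riemann sum for $I$ is $\frac{c-b}{n}\,a^b\,S$ with $r=a^{(c-b)/n}$; since $r^{n}=a^{c-b}$, $r\to1$, and $1-r\sim-\frac{(c-b)\ln a}{n}$ as $n\to\infty$,
$$I=\lim_{n\to\infty}\frac{c-b}{n}\,a^b\,\frac{r\left(1-a^{c-b}\right)}{1-r}=a^b\cdot\frac{a^{c-b}-1}{\ln a}=\frac{a^c-a^b}{\ln a},$$
in agreement with the antiderivative $\frac{a^x}{\ln a}$.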
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
What does this modular relation mean, if anything?
Let $x$ and $y$ be real numbers.
Suppose $\dfrac{x}{y \bmod x}$ is a natural number.
What does that say about the relationship between $x$ and $y$?
If $x$ and $y$ are naturals themselves, then I think it means that $y$ is some multiple of $x$ plus some divisor of $x$, but I'm a little fuzzy on what it would mean if $x$ and $y$ are reals.
Thank you.
|
I think it means that $y$ is $x$ divided by an integer plus some integral multiple of $x$.
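A numeric illustration of that reading (the specific values here are hypothetical):

    x = 2.5
    k, q = 4, 3         # pick any positive integers
    y = q*x + x/k       # then y mod x == x/k
    print(x / (y % x))  # 4.0, a natural number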
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How to compute $\lim_{n\to \infty}\sum\limits_{k=1}^{n}\frac{1}{\sqrt{n^2+n-k^2}}$ Find the following limit:
$$I=\lim_{n\to \infty}\sum_{k=1}^{n}\dfrac{1}{\sqrt{n^2+n-k^2}}$$
since
$$I=\lim_{n\to\infty}\dfrac{1}{n}\sum_{k=1}^{n}\dfrac{1}{\sqrt{1+\dfrac{1}{n}-\left(\dfrac{k}{n}\right)^2}}$$
I guess we have
$$I=\lim_{n\to\infty}\dfrac{1}{n}\sum_{k=1}^{n}\dfrac{1}{\sqrt{1-(k/n)^2}}=\int_{0}^{1}\dfrac{1}{\sqrt{1-x^2}}dx=\dfrac{\pi}{2}$$
But I can't prove that the following is true:
$$\lim_{n\to\infty}\dfrac{1}{n}\sum_{k=1}^{n}\dfrac{1}{\sqrt{1+\dfrac{1}{n}-\left(\dfrac{k}{n}\right)^2}}=\lim_{n\to\infty}\dfrac{1}{n}\sum_{k=1}^{n}\dfrac{1}{\sqrt{1-(k/n)^2}}$$
I have only proved that
$$\dfrac{1}{\sqrt{1+\dfrac{1}{n}-\left(\dfrac{k}{n}\right)^2}}<\dfrac{1}{\sqrt{1-\left(\dfrac{k}{n}\right)^2}}$$
Thank you
|
The term-wise difference, for $1\le k\le n-1$, is
\begin{align}
\frac{1}{ \sqrt {1-(\frac{k}{n})^2}} - \frac{1}{ \sqrt {1 + \frac{1}{n} - (\frac{k}{n})^2}}&= \frac{\frac{1}{n}}{ \sqrt {1-(\frac{k}{n})^2}\cdot \sqrt {1 + \frac{1}{n} - (\frac{k}{n})^2}\, \Bigl( \sqrt {1-(\frac{k}{n})^2}+ \sqrt {1 + \frac{1}{n} - (\frac{k}{n})^2}\Bigr) } \\
&\leq \frac{\frac{1}{n}}{\sqrt{\frac{1}{n}}\,\bigl(1-(\frac{k}{n})^2\bigr)}=\frac{1}{\sqrt{n}\,\bigl(1-(\frac{k}{n})^2\bigr)},
\end{align}
using $1+\frac{1}{n}-(\frac{k}{n})^2\geq\frac{1}{n}$. Moreover $1-(\frac{k}{n})^2\geq 1-\frac{k}{n}=\frac{n-k}{n}$, so when we take the difference of the two sums we get
$$ \frac{1}{n}\sum_{k=1}^{n-1}\frac{1}{\sqrt{n}\,\bigl(1-(\frac{k}{n})^2\bigr)}\leq\frac{1}{\sqrt{n}}\sum_{k=1}^{n-1}\frac{1}{n-k}=\frac{1}{\sqrt{n}}\sum_{j=1}^{n-1}\frac{1}{j}\leq\frac{1+\ln n}{\sqrt{n}}\to 0.$$
Finally, the $k=n$ term of the original sum is $\frac{1}{n}\cdot\frac{1}{\sqrt{1/n}}=\frac{1}{\sqrt{n}}\to 0$ (note that $k=n$ must be excluded from the comparison sum, where that term would be infinite). Hence the two sums have the same limit, which proves your claim.
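A numerical sanity check of the limit (a plain-Python sketch):

    import math

    for n in [100, 10_000, 1_000_000]:
        s = sum(1/math.sqrt(n*n + n - k*k) for k in range(1, n + 1))
        print(n, s, math.pi/2)  # s approaches pi/2 ≈ 1.5707963...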
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Compute $\lim_{n \rightarrow \infty} \left(\left(\frac{9}{4} \right)^n+\left(1+\frac{1}{n} \right)^{n^2} \right)^{1/n}$ Could someone show how to compute $\lim_{n \rightarrow \infty} \left(\left(\frac{9}{4} \right)^n+\left(1+\frac{1}{n} \right)^{n^2} \right)^{\frac{1}{n}}$?
According to W|A it's e, but I don't even know how to start...
Please help, thank you.
|
Since $\frac94<\mathrm e$ and $\left(1+\frac1n\right)^n<\mathrm e$ for every $n$, clearly
$$
\left(1+\frac{1}{n}\right)^{\!n^2}<
\left(\frac{9}{4}\right)^n+\left(1+\frac{1}{n}\right)^{\!n^2}<2\,\mathrm{e}^n,
$$
and therefore
$$
\left(1+\frac{1}{n}\right)^{\!n}<
\left(\left(\frac{9}{4}\right)^n+\left(1+\frac{1}{n}\right)^{n^2}\right)^{1/n}\le 2^{1/n}\mathrm{e}
$$
which implies that
$$
\mathrm{e}=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{\!n}\le\lim_{n\to\infty}\left(\left(\frac{9}{4}\right)^n+\left(1+\frac{1}{n}\right)^{n^2}\right)^{1/n}\le \lim_{n\to\infty}2^{1/n}\mathrm{e}=\mathrm{e}.
$$
Hence the limit of $\,\left(\left(\frac{9}{4}\right)^n+\left(1+\frac{1}{n}\right)^{n^2}\right)^{1/n},\,\,$ as $\,n\to\infty$, exists and it is equal to e.
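A numerical check, working with logarithms to avoid overflow (a plain-Python sketch):

    import math

    for n in [10, 100, 1000, 10000]:
        la = n * math.log(9/4)        # log of (9/4)^n
        lb = n*n * math.log1p(1/n)    # log of (1 + 1/n)^(n^2)
        # log(e^la + e^lb) = lb + log1p(exp(la - lb)), valid since lb >= la here
        val = math.exp((lb + math.log1p(math.exp(la - lb))) / n)
        print(n, val)  # approaches e ≈ 2.71828...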
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
What is the name of this function $f(x) = \frac{1}{1+x^n}$? $f(x\in\mathbb{R}) = \frac{1}{1+x^n}$
|
In the particular case $n=2$, this is (up to normalization) the pdf of the Cauchy distribution, so for even exponents you might want to say that $f(x) = \frac{1}{1+x^{2n}}$ is some kind of a generalized Cauchy...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Find a solution for $f\left(\frac{1}{x}\right)+f(x+1)=x$ Title says all. If $f$ is an analytic function on the real line, and $f\left(\dfrac{1}{x}\right)+f(x+1)=x$, what, if any, is a possible solution for $f(x)$?
Additionally, what are any solutions for $f\left(\dfrac{1}{x}\right)-f(x+1)=x$?
|
A few hints that might help...
*
*$1/x = x+1$ when $x = \frac{\pm\sqrt{5}-1}2$
*Differentiating gives: $-\frac{f'(1/x)}{x^2}+f'(1+x)=1$
*Differentiating again gives: $f''(1+x)+\frac{f''(1/x)}{x^4}+\frac{2f'(1/x)}{x^3}=0$ - this can then be continued.
*An "analytic function" has a Taylor series at any point that is convergent within a non-zero region around the point. So what would the Taylor series look like at the points given in hint 1?
ADDED:
A consideration of limits may also be useful. Indeed, with a substitution of $x=1/y-1$, you have $$f\left(\frac{y}{1-y}\right)+f\left(\frac1y\right)=\frac1y-1$$
We can then cancel out the $\frac1y$ term by first replacing $y$ with $x$, and limits from here may be useful.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
}
|
Prove $e^x, e^{2x},\dots, e^{nx}$ are linearly independent in the vector space of functions $\mathbb{R} \to \mathbb{R}$ Prove that $e^x, e^{2x},\dots, e^{nx}$ are linearly independent in the vector space of functions $\mathbb{R} \to \mathbb{R}$.
Isn't it sufficient to say that $e^y$ for any $y \in \mathbb{R}$ is in $\mathbb{R}^+$?
Therefore, there are no $\gamma_1, \dots, \gamma_n$, not all zero, such that $\gamma_1e^x+\gamma_2e^{2x}+\dots+\gamma_ne^{nx}=0$.
Therefore, they're not linearly dependent.
I've seen a proof that goes as follows:
take the first $(n-1)$ derivatives of the equation. Then you get $n$ equations in $n$ unknowns. Arranging them in a matrix (which turns out to be a Vandermonde matrix),
calculate the determinant, which is $\ne 0$. Therefore, only the trivial solution exists, and hence there is no linear dependence.
Is all that necessary?
|
The exercise is:
$$
f_\alpha = \left\{
\begin{array}{ll}
\mathbb{R}\rightarrow\mathbb{R} \\
\ t\mapsto e^{\alpha t}
\end{array}
\right.
$$
Prove that $(f_\alpha)_{\alpha \in\mathbb{R}}$ is linearly independent.
Let $(f_{\alpha_k})_{1\leq k \leq n}$ be a finite family of these vectors with $\alpha_1<\alpha_2<\dots<\alpha_n$, and suppose that $\sum_{k=1}^n \lambda_k f_{\alpha_k}=0$, i.e. $\forall t\in\mathbb{R},\ \sum_{k=1}^n \lambda_k e^{\alpha_k t}=0$. Then
$$\sum_{k=1}^{n-1} \lambda_k e^{\alpha_k t}=-\lambda_n e^{\alpha_n t}$$
$$\Rightarrow\ \forall t\in\mathbb{R},\quad \sum_{k=1}^{n-1} \lambda_k e^{(\alpha_k-\alpha_n)t}=-\lambda_n,$$
where $\alpha_k-\alpha_n<0$ for $1\leq k \leq n-1$. Letting $t\rightarrow +\infty$ gives
$$0=-\lambda_n.$$
By repeating this process we obtain $\lambda_1=\lambda_2=\dots=\lambda_n=0$. QED
A better proof uses the fact that eigenvectors with distinct eigenvalues are linearly independent.
Indeed, the vectors $f_\alpha$ are linearly independent because they are eigenvectors, with pairwise distinct eigenvalues $\alpha$, of the derivative operator
$$D = \left\{
\begin{array}{ll}
C^\infty(\mathbb{R})\rightarrow C^\infty(\mathbb{R}) \\
\ f\mapsto f'
\end{array}
\right.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 0
}
|
How to calculate a complicated geometric series? I have a series (I don't know whether it's a geometric series or not):
$$
\sum_{n=1}^{\infty }n\rho ^{n}(1-\rho)
$$
How can I simplify it? (Assume that $0 \le \rho < 1$.)
The final answer in my calculation should be $\frac{\rho}{1-\rho}$, but I really don't know how to get there.
|
$$\sum_{n=1}^{\infty }n\rho ^{n}(1-\rho)=(1-\rho)\rho\frac{d(\sum_{n=1}^{\infty }\rho^n)}{d \rho}$$
Now use the infinite geometric series $$\sum_{n=1}^{\infty }\rho^n=\frac \rho{1-\rho}$$ (valid since $|\rho|<1$), so that
$$\frac{d}{d\rho}\left(\frac{\rho}{1-\rho}\right)=\frac{1}{(1-\rho)^2}
\quad\Longrightarrow\quad
\sum_{n=1}^{\infty }n\rho ^{n}(1-\rho)=(1-\rho)\,\rho\cdot\frac{1}{(1-\rho)^2}=\frac{\rho}{1-\rho},$$
as expected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Sum of weighted squared distances is minimized by the weighted average? Let $x_1, \ldots, x_n \in \mathbb{R}^d$ denote $n$ points in $d$-dimensional Euclidean space, and $w_1, \ldots, w_n \in \mathbb{R}_{\geq 0}$ any non-negative weights.
In some paper I came across the following equation:
$$\arg\min_{c \in \mathbb{R}^d} \sum_{i=1}^n w_i \|c - x_i\|^2 = \frac{\sum_{i=1}^n w_i x_i}{\sum_{i=1}^n w_i}$$
However, I cannot see why this holds.
Googling suggests that this is related to minimizing the Moment of Inertia by the Center of Mass, as well as to the Fréchet mean for Euclidean distances, but both these contexts seem to be much too general to let me get the key insight here.
So, is there a simple straightforward proof of the above?
|
Let $f(c) = \sum_i w_i (c - x_i) \cdot (c - x_i)$. Then the partial derivative of $f$ with respect to $c_j$ is
$$
2\sum_i w_i (c - x_i)\cdot e_j
$$
where $e_j$ is the $j$th standard basis vector. Setting this to zero gives
$$
\sum_i w_i (c_j - x_{i,j}) = 0 \\
c_j\sum_i w_i = \sum_i w_i x_{i,j} \\
c_j= \frac{\sum_i w_i x_{i,j}}{\sum_i w_i }
$$
where $x_{i, j}$ denotes the $j$th entry of vector $x_i$.
That shows that there's a unique critical point. The function clearly goes to infinity as $\|c\|$ gets large, so this critical point must be a min. You're done.
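A quick empirical check of the claim (a sketch assuming NumPy; the names are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 3))   # five points in R^3
    w = rng.random(5)             # nonnegative weights

    c_star = (w[:, None] * x).sum(axis=0) / w.sum()   # claimed minimizer
    f = lambda c: (w * ((c - x)**2).sum(axis=1)).sum()

    # random perturbations should never decrease f
    print(all(f(c_star) <= f(c_star + 0.1*rng.normal(size=3)) for _ in range(1000)))  # True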
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Lower bound on the size of a maximal matching in a simple cycle Let $C_n$ denote an undirected simple cycle of $n$ nodes. I want to determine a lower bound on the size of a maximal matching $M$ of $C_n$.
Please note: A subset $M$ of the edges in $C_n$ is called a matching $\Leftrightarrow$ every node $v\in C_n$ is incident to at most one edge in $M$. The matching $M$ is called maximal $\Leftrightarrow$ it cannot be extended to a larger matching by adding an edge which is not already contained in $M$.
Proof: Let $M$ denote a maximal matching in $C_n$. Since
$(v_{2i-1},v_{2i})_{1\le i\le \lfloor\frac{n}{2}\rfloor}$ is a matching in $C_n$, |M| must be greater or equal to $\lfloor\frac{n}{2}\rfloor$.
The Assumption $|M|\ge\lfloor\frac{n}{2}\rfloor +1$ leads to $|\left\{v\in C_n\;|\;\exists w\in C_n : (v,w)\in M\vee (w,v)\in M \right\}|\ge \lfloor\frac{n}{2}\rfloor+2$, contrary to $|C_n|=n$.
Am I missing something or is everything as it should be? I'm a little bit confused, because the exercise only asked for a lower bound.
EDIT:
Okay, I was confused by the terms maximum matching and maximal matching. Let me try to provide a proof for a lower bound of a maximal matching:
I think it's easier to construct a maximal matching and show that it can't be reduced while maintaining its maximality.
Let $C_n=(v_0,\dots,v_{n+1})$ with $v_{n+1}=v_0$. Please consider the sequence $S:=(v_{3i},v_{3i+1})_{0\le i\le \lfloor\frac{n}{3}\rfloor -1}$; if $n$ is odd, extend $S$ by $(v_{n-1},v_n)$. $S$ induces a matching $M$ of $C_n$ with $|M|=\lceil\frac{n}{3}\rceil$.
If $M'$ is also a matching of $C_n$ with $|M'|<|M|$, then there exists a natural number $k\le n-1$ such that neither $v_k$ nor $v_{k+1}$ participates in any edge of $M'$. So, $M'$ cannot be maximal.
|
A maximal matching can have less than $n/2$ edges. Imagine a $C_6$ with the vertices named in order $a,b,c,d,e,f$. Then, the $ab$ and $ed$ edges form a maximal matching, and here it's $n/3$ edges.
So what's the most stupid way of choosing your matching?
Let $X$ denote the set of vertices of $C_n$ that are incident to an edge of some matching $M$.
Then $|M| = |X| / 2$. So how can you minimize $|X|$ while keeping $M$ maximal?
You should be able to do the rest by observing that if $M$ is maximal, then for two adjacent vertices $v_1, v_2$, at least one has to be in $X$, because otherwise you could add the $v_1v_2$ edge to $M$.
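For instance, here is a quick check that the $C_6$ example above is indeed a maximal matching (a plain-Python sketch):

    n = 6
    edges = [(i, (i + 1) % n) for i in range(n)]
    M = [(0, 1), (3, 4)]                      # the 'ab' and 'de' edges
    matched = {v for e in M for v in e}
    # maximal <=> every edge outside M touches a matched vertex
    print(all(u in matched or v in matched for (u, v) in edges if (u, v) not in M))  # True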
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/633947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that there does not exist an integer $n\in\mathbb{N}$ s.t $\phi(n)=\frac{n}{6}$ Show that there does not exist an integer $n\in\mathbb{N}$ s.t $$\phi(n)=\frac{n}{6}$$.
My solution:
Using the Euler's product formula:
$$\phi(n)=n\prod_{p|n}\Bigl(\frac{p-1}{p}\Bigr)$$
We have:
$$\frac{\phi(n)}{n}=\prod_{p|n}\Bigl(\frac{p-1}{p}\Bigr)=\frac{1}{6}$$
But $6=3\cdot 2$, hence
if $p=2,\;\;\;\;\; \Bigl(\frac{p-1}{p}\Bigr)=\Bigl(\frac{2-1}{2}\Bigr)=\frac{1}{2}$
if $p=3,\;\;\;\;\; \Bigl(\frac{p-1}{p}\Bigr)=\Bigl(\frac{3-1}{3}\Bigr)=\frac{2}{3}$
But $\Bigl(\frac{2-1}{2}\Bigr)\Bigl(\frac{3-1}{3}\Bigr)\neq \frac{1}{6}$
Is this correct?
Thanks
|
Let $n=p_{1}^{a_1}\cdots p_{r}^{a_r}$, so $\;\;\phi(n)=p_{1}^{a_{1}-1}(p_{1}-1)\cdots p_{r}^{a_{r}-1}(p_{r}-1)$.
If $n=6\phi(n)$, then $\;\;p_{1}^{a_1}\cdots p_{r}^{a_r}=6\big[p_{1}^{a_{1}-1}(p_{1}-1)\cdots p_{r}^{a_{r}-1}(p_{r}-1)\big]$, so
$\;\;p_{1}\cdots p_{r}=6(p_1-1)\cdots(p_{r}-1)=2\cdot3(p_1-1)\cdots(p_{r}-1)$.
$\;\;$We can take $p_1=2$ and $p_2=3$, since $2$ and $3$ divide the right-hand side and therefore must occur among the $p_i$; so
$\;\;p_{3}\cdots p_{r}=1\cdot2 (p_3-1)\cdots(p_{r}-1)$; and this gives a contradiction since $p_i\ne2$ for $i=3,\cdots,r$.
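An empirical sanity check with SymPy (assuming it is available):

    from sympy import totient

    print([n for n in range(1, 100001) if 6*totient(n) == n])  # [] (no solutions up to 100000)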
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
calculator issue: radians or degrees for inverse trig It's a simple question but I am a little confused. The value of $\cos^{-1}(-0.5)$: is it $2.0943$ or $120$?
|
It helps to understand that there are several different functions called cosine. I find it useful to refer to "cos" (the thing for which $\cos^{-1}(0) = \pi/2$) and "cosd" for which cosd(90) = 0.
Your calculator (if you're lucky) will mean "cos" when you press the "cos" button; if you've got the option of "degrees/radians", then in "degrees" mode, the "cos" button is actually computing the function "cosd".
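The same distinction shows up in most programming languages, where the trig functions work in radians (a plain-Python sketch):

    import math

    print(math.acos(-0.5))                # 2.0943951023931957 (radians, i.e. 2*pi/3)
    print(math.degrees(math.acos(-0.5)))  # 120.0 (degrees)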
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
PDF of a sum of exponential random variables Let $X_i$ for $i=1,2,...$ be a sequence of i.i.d. exponential random variables with common parameter $\lambda$. Let $N$ be a geometric random variable with parameter $p$ that is independent of the sequence $X_i$. What is the pdf of the random variable $Y=\sum_{i=1}^N X_i$?
|
We can also answer this with the following consideration:
The expected value of $Y$ is
$$E\left(\sum_{i=1}^N X_i\right) = E_{geom}\left(E_{exp}\left(\sum_{i=1}^N X_i \,\Big|\, N\right)\right) = \frac{1}{p\lambda}.$$
So if $Y$ is exponentially distributed, it is so with parameter $p\lambda$. That is, we are left with proving that it is exponentially distributed. We get help from the following theorem:
A continuous random variable $Y : \Omega \to (0, \infty]$ has an exponential distribution if and only if it has the memoryless property
$$P(Y>s+t|Y>s) = P(Y>t) \text{ for all } s,t \ge 0.$$
We know that the geometric distribution is the only discrete distribution with the same property.
Here is where I struggle to give a formal proof. Imagine however a "clock" that ticks at exponentially distributed time intervals (i.e., a Poisson process). At any time point, independent of ticks in the past, there is no added information: the clock does not know how often it will still tick, because the geometric distribution is memoryless, and it also does not know when the next tick will be, because the exponential distribution is memoryless. And so, the whole process is memoryless and $Y$ is exponentially distributed with parameter $p\lambda$.
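A Monte Carlo sanity check of the claim (a sketch assuming NumPy; the parameter values are mine):

    import numpy as np

    rng = np.random.default_rng(1)
    lam, p, trials = 2.0, 0.3, 100_000

    N = rng.geometric(p, size=trials)                 # geometric, support {1, 2, ...}
    Y = np.array([rng.exponential(1/lam, n).sum() for n in N])

    print(Y.mean(), 1/(p*lam))                        # means should agree
    t = 1.0
    print((Y > t).mean(), np.exp(-p*lam*t))           # tails of Exp(p*lam) should agree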
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
}
|