Q
stringlengths 18
13.7k
| A
stringlengths 1
16.1k
| meta
dict |
---|---|---|
Probability inequality proof I'm stuck on a homework question and don't even know where to start. Here it goes:
If A and B are two events which are not impossible, prove that $$P(A\land B)\times P(A\lor B)\le P(A)\times P(B)$$
|
In general if $d+a=c+b$ and $a\leq b\leq c\leq d$ then:
$$ad=\frac{1}{4}\left(d+a\right)^{2}-\frac{1}{4}\left(d-a\right)^{2}\leq\frac{1}{4}\left(c+b\right)^{2}-\frac{1}{4}\left(c-b\right)^{2}=bc$$
This as a direct consequence of:$$d-a\geq c-b\geq 0$$
This can be applied by taking $a=P(A\cap B)$, $b=P(A)$, $c=P(B)$ and $d=P(A\cup B)$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/644009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
a question about double integral Let $a,b$ be positive real numbers, and let $R$ be the region in $\Bbb R^2$ bounded by $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$. Calculate the integral
$$
\int\int_R\left(1-\frac{x^2}{a^2}-\frac{y^2}{b^2}\right)^{3/2}dx\,dy
$$
my question is I don't know anything about $R$, the function $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ is not the function of $R$, so then how can I get the answer? Could somebody give me some hints.
|
It sounds like you're just a bit confused about notation. $R$ is simply the name of the region. The notation
$$\iint\limits_{R} f(x,y) \, dA$$
simply means that we should integrate over the region $R$. In your case, $R$ is defined to be the region contained inside the ellipse
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.$$
For $a=3$ and $b=2$, this situation could be illustrated as follows:
As the other answers already indicate, the integral can be evaluated easily by a change of variables $x=ar\cos(\theta)$ and $y=br\sin(\theta)$. It can also be expressed as an iterated integral in Cartesian coordinates
$$4\int_0^3 \int_0^{\frac{b}{a}\sqrt{a^2-x^2}} \left(1-\frac{x^2}{a^2} - \frac{y^2}{b^2}\right) \, dy \, dx,$$
which evaluates to $2\pi a b/5$, though it's certainly more algebraically cumbersome than the change of variables approach.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/644064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Intuition of why $\gcd(a,b) = \gcd(b, a \pmod b)$? Does anyone have a intuition or argument or sketch proof of why $\gcd(a,b) = \gcd(b, a \pmod b)$?
I do have a proof and I understand it, so an intuition would be more helpful.
The proof that I already have:
I show $\gcd(a,b) \mid \gcd(b, a \pmod b)$ and $\gcd(b, a \pmod b) \mid \gcd(a, b)$ which implies $\gcd(a,b) = \gcd(b, a \pmod b)$ and stuff is non-negative.
WLOG $a \geq b$
$\gcd(a,b) \mid a$
$\gcd(a,b) \mid b$
so it divides any linear combination of a and b
Since $a \pmod b = a - qb$ then:
$\gcd(b, a - qb) = bx + (a-qb)y$
$\gcd(b, a - qb) = bx + ay - qby $
$\gcd(b, a \pmod b) = b(x-qy) + ay$
which is a LC of $a$ and $b$.
So $\gcd(a,b) \mid \gcd(b, a \pmod b)$.
Other direction is nearly identical.
|
Suppose each of $a$ and $b$ is an integer number of miles. Then so is $a\bmod b$.
If a mile is a "common measure" (as Euclid's translators say) of both distances, then a mile is a common measure of what's left when $b$ has been taken from $a$ as many times as possible.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/644252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
Is a function determined by its integrals over open sets? If $f \in L^1(\mathbb R)$ satisfies
$$
\int_U f = 0
$$
for every open set $U \subset \mathbb R$, then is it true that $f = 0$ a.e.?
|
Since $f$ is measurable, the set $A=\{x\in\mathbb{R}\mid f(x)>0\}$ is measurable. Therefore, by regularity of the Lebesgue measure, $m$, for every $\varepsilon>0$ there exists an open set $U$ such that $A\subset U$ and $m(U\setminus A)<\varepsilon$. Let $f^+$ and $f^-$ denote the positive and negative part if $f$. Then, we have
$$
0=\int_U f\, dm= \int_A f^+\, dm-\int_{U\setminus A} f^-\, dm
$$
Note that the measure $\mu$ on the Lebesgue measurable subsets of $\mathbb{R}$ defined by
$$
\mu(A)=\int_{A} f^-\, dm
$$
Is absolutely continuous with respect to $m$. Hence, for every $n$ and $B$ measurable there exists a $\delta_n$ such that $\mu(B)<1/n$ if $m(B)<\delta_n$. Taking $\varepsilon=\delta_n$ yields a sequence of open sets $U_n$ such that $m(U_n\setminus A)<\delta_n$. Hence, we have
$$
0\leq \int_A f^+\, dm=\int_{U_n\setminus A} f^-\, dm< \frac{1}{n}
$$
And this is valid for every $n$. Therefore, we should have $\int_A f^+\, dm=0$ and since $f^+\geq 0$, this implies that $f^+=0$ a.e.
Using a similar argument, we can find that $f^-=0$ a.e. which gives the result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/644347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Find the volume of the solid bounded by $z=x^2+y^2+1$ and $z=2-x^2-y^2$. Question: Find the volume of the solid bounded by $z=x^2+y^2+1$ and $z=2-x^2-y^2$.
Setting the 2 equations equal w.r.t. $z$, $x^2+y^2+1=2-x^2-y^2 \rightarrow x=\pm\sqrt{\frac 12-y^2}$
Therefore the boundary of $y=\pm\frac {1}{\sqrt2}$.
So to find the volume of the solid, take the integration by subtracting the volume above and below the boundaries.
$\displaystyle V=\int_{-\frac {1}{\sqrt2}}^{+\frac {1}{\sqrt2}}\int_{-\sqrt{\frac 12-y^2}}^{+\sqrt{\frac 12-y^2}}(2-x^2-y^2)dxdy-\int_{-\frac {1}{\sqrt2}}^{+\frac {1}{\sqrt2}}\int_{-\sqrt{\frac 12-y^2}}^{+\sqrt{\frac 12-y^2}}(x^2+y^2+1)dxdy$
This is what I did. Without solving the equation, can someone tell me if it is correct?
Thank you!
|
Your setup is right. Here is the method you could have done to compute the volume.
Assume the density is $f(x,y,z) = 1$, so
$$V = \iiint_D \,dx\,dy\,dz$$
We are given that the solid is bounded by $z = x^2 + y^2 + 1$ and $z = 2 - x^2 - y^2$. As I commented under your question, you need to use cylindrical coordinates to evaluate the volume integral. Using the substitutions $x = r\cos(\theta)$, $y = r\sin(\theta)$ and $z = z$, we have $z = r^2 + 1$ and $z = 2 - r^2$. With some knowledge in graphs and functions, we see that the bounds are
$$\begin{aligned}
r^2 + 1 \leq z \leq 2 - r^2\\
0 \leq \theta \leq 2\pi\\
0 \leq r \leq \dfrac{1}{\sqrt{2}}
\end{aligned}$$
where $r = \frac{1}{\sqrt{2}}$ is found by solving for $r$ when $r^2 + 1 = 2 - r^2$. So for the volume triple integral, we have
$$\begin{aligned}
V &= \iiint_D \,dx\,dy\,dz\\
&= \iiint_D r\,dr\,d\theta\,dz\\
&= \int_{0}^{2\pi}\int_{0}^{\frac{1}{\sqrt{2}}}\int_{r^2 + 1}^{2 - r^2}r\,dz\,dr\,d\theta\\
&= \int_{0}^{2\pi}\int_{0}^{\frac{1}{\sqrt{2}}} r(2 - r^2 - r^2 - 1)\,dr\,d\theta\\
&= \int_{0}^{2\pi}\int_{0}^{\frac{1}{\sqrt{2}}} r(1 - 2r^2)\,dr\,d\theta\\
&= \int_{0}^{2\pi}\int_{0}^{\frac{1}{\sqrt{2}}} (r - 2r^3)\,dr\,d\theta\\
&= \int_{0}^{2\pi}\,d\theta\left.\left(\dfrac{1}{2}r^2 - \dfrac{1}{2}r^4 \right)\right\vert_{r = 0}^{r = \frac{1}{\sqrt{2}}}\\
&= 2\pi \left.\left(\dfrac{1}{2}r^2 - \dfrac{1}{2}r^4 \right)\right\vert_{r = 0}^{r = \frac{1}{\sqrt{2}}}\\
&= 2\pi \cdot \dfrac{1}{8}\\
&= \dfrac{\pi}{4}
\end{aligned}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/644444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to prove $\left(\frac{n}{n+1}\right)^{n+1}<\sqrt[n+1]{(n+1)!}-\sqrt[n]{n!}<\left(\frac{n}{n+1}\right)^n$
Show that:
$$\left(\dfrac{n}{n+1}\right)^{n+1}<\sqrt[n+1]{(n+1)!}-\sqrt[n]{n!}<\left(\dfrac{n}{n+1}\right)^n$$
where $n\in \Bbb N^{+}.$
If this inequality can be proved, then we have
$$\lim_{n\to\infty}\sqrt[n+1]{(n+1)!}-\sqrt[n]{n!}=\dfrac{1}{e}.$$
But I can't prove this inequality. Thank you.
|
Here is a partial answer :
If a sequence $u=(u_n)_{n\ge1}$ of real numbers converges, then the sequence $\left(\frac{1}{n}\sum_{k=1}^nu_k\right)_{n\ge1}$ converges to the same limit. This is the well known Cesaro's lemma.
It can be proved that the converse is false (consider the sequence $u=((-1)^n)_{n\ge1}$) but becomes true if we assume that $u$ is monotonic.
As a consequence, we get the following result :
If $t=(t_n)_{n\ge1}$ is a monotonic sequence of real numbers such that $\lim_{n\to\infty}\frac{u_n}{n}=L\in\mathbb{R}$ then $\lim_{n\to\infty}\left(u_{n+1}-u_n\right)=L$.
Let's apply this last result to the sequence $t=(\left[n!\right])^{1/n})_{n\ge1}$.
It's easy to show ($\color{red}{\mathrm{see}\,\,\mathrm{below}}$) that
$$\lim_{n\to\infty}\frac{\left[n!\right]^{1/n}}{n}=\frac{1}{e}$$
Therefore, if we prove that $t$ is monotonic (at least ultimately, which is a sufficient condition), we will reach the conclusion that :
$$\lim_{n\to\infty}\left(\left[(n+1)!\right]^{1/(n+1)}-\left[n!\right]^{1/n}\right)=\frac{1}{e}$$
It is a consequence of Cesaro's lemma that if a sequence of positive real numbers $(x_n)_{n\ge1}$ verifies $\lim_{n\to\infty}\frac{x_{n+1}}{x_n}=L$ some $L>0$ then $\lim_{n\to\infty}\left(x_n\right)^{1/n}=L$.
Applying this to sequence $x_n=\left(\frac{n!}{n^n}\right)$ we obtain $\lim_{n\to\infty}\frac{\left[n!\right]^{1/n}}{n}=\frac{1}{e}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/644526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 4,
"answer_id": 2
}
|
Convert the power series solution of $(1+x^2)y''+4xy'+2y=0$ into simple closed-form expression $(a)$Use two power series in $x$ to find the general solution of
$$(1+x^2)y''+4xy'+2y=0$$
and state the set of $x$-values on which each series solution is valid.
$(b)$ Convert the power series solutions in $(a)$ into simple closed-form expressions.
$(c)$ Use $(b)$ to find the general solution of the equation above on the whole real line.
For $(a)$, I used $y(x)=\sum\limits_{n=0}^\infty a_nx^n$,and solved the recurrence relation to be $a_{2k} = (-1)^ka_0$ and $a_{2k+1} = (-1)^ka_1$, where $a_0$ and $a_1$ are arbitrary.
And for the solution I got $y(x)=a_0\sum\limits_{k=1}^\infty (-1)^kx^{2k} +a_1\sum\limits_{k=1}^\infty (-1)^kx^{2k+1}$.
How do I convert the solution into simple closed-from expressions, and how should I solve $(c)$?
Thanks for any help.
|
Hint: Your power series are both geometric series.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/644602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
pdf for non-central gamma distribution I have a given gamma distribution as:
$f(x;k,\theta) = \frac{1}{\Gamma(k)\theta^{k}}x^{k-1}e^{\frac{-x}{\theta}}$ and a non-centrality parameter $\delta$.
Now, I need to find the pdf of this non-central gamma distribution $f(x;k,\theta,\delta)$?
I have found an expression of this in a paper by Oliveira and Ferreira. However, the pdf expression is in terms of shape parameter and non-centrality parameter only, which is given as
$f(x;k,\delta) = \displaystyle\sum_{i=0}^{\infty}e^{\frac{-\delta}{2}}\left(\frac{\delta}{2}\right)^i \left[ \frac{1}{\Gamma(k+i)}e^{-x}x^{k+i-1}\right]$.
Is there an expression for pdf that incorporates x,$\theta$,k, and $\delta$? Or, any approximations to make the non-central distribution to the central distributions?
|
As far as my monte-carlo simulation and closed form expression match, the non-central gamma could be well approximated by Amoroso distribution i.e.,
$f(x;k,\theta,\delta) = \frac{1}{\theta^k\Gamma(\theta)}(x-\delta)^{k-1}e^{\left(\frac{-(x-\delta)}{\theta} \right)}$
where $\delta$ is the location parameter.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/644804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Given two odd primes, $p\neq q$, prove that there are no primitive roots $\mod(pq)$
Given two odd primes, $p\neq q$, prove that there are no primitive roots $\mod(pq)$
I don't know where to start with this, any help would be appreciated.
|
Hints:
*
*$\phi(pq)$ is an even multiple of both $\phi(p)=p-1$ and $\phi(q)=q-1$.
*Check what happens when you raise a residue class to power $\phi(pq)/2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/644925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Equation with the variable in the exponent and also in the base Does anyone know how to solve this equation, with the variable in the exponent and also in the base?
$$1.05^{2y}-0.13y-1=0$$
Thank you very much.
|
Equations like this can sometimes be "solved" using the Lambert W function, but many do not define that as a solution. Usually you are reduced to numeric rootfinding, which is discussed in any numerical analysis book. This one has a root at $y=0$ and another near $5.5$ as shown by this Alpha plot. Alpha gives this solution, but I don't feel much smarter. I haven't gotten it to solve it numerically
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Convergence of the integral $\int\limits_{1}^{\infty} \left( \frac{1}{\sqrt{x}} - \frac{1}{\sqrt{x+3}} \right) \, dx$ Would someone please help me prove that the integral
$$
\int\limits_{1}^{\infty} \left( \frac{1}{\sqrt{x}} - \frac{1}{\sqrt{x+3}} \right) \, dx
$$
is convergent?
Thank you.
|
Use
$$\frac{1}{\sqrt{x}}-\frac{1}{\sqrt{x+3}}=\frac{\sqrt{x+3}-\sqrt{x}}{\sqrt{x^2+3x}}=\frac{3}{(\sqrt{x+3}+\sqrt{x})\sqrt{x^2+3x}}$$
So, the integrand is positive and $\le \frac{3}{2x\sqrt{x}}$.
Here I use $\sqrt{x^2+3x}>x$ and $\sqrt{x+3}>\sqrt{x}$
So the integral converges by the comparison criterium.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
exponential equation with different bases We have $3^x-5^\frac{x}{2}=4$ My question is what we can do here ? Can we solved it algebraically or we need to notice that $x=2$ and then show that for $x \neq 2$ there aren't any other solutions?
|
Your second approach, viz. showing there are no other solutions would also require some algebra. With $2t = x$, you can write the equation as
$$(4+5)^t = 4 + 5^t$$
This is obvious for $t=1$ i.e $x=2$.
For $t > 1$, we have $(4+5)^t > 4^t+5^t > 4+5^t$
for $0 < t < 1$, let $y = \frac1t > 1$, then $(4+5^t)^y > 4^y+5 > 4+5 = \left((4+5)^t\right)^y$
and for $t \le 0$, $(4+5)^t \le 1 < 4 < 4+5^t $
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Absolute continuity of a nondecreasing function Can anyone give me a hint on how to approach this problem? It's another problem from an old qualifying exam.
Suppose that $f\colon \mathbb R \to \mathbb R$ is nondecreasing, $\int_{\mathbb R} f' = 1$, $f(-\infty) = 0$, and $f(\infty) = 1$. Prove that $f$ is absolutely continuous on any interval $[a,b]$.
|
One approach is as follows:
Since $f$ is non-decreasing, it is differentiable ae. [$m$] and $f(x)-f(y) \ge \int_y^x f'(t) dt$ for all $x>y$.
Use the fact that $1 = \lim_{x \to + \infty} f(x) - \lim_{x \to - \infty} f(x) = \int_{\infty}^\infty f'(t) dt$ to show that,
$f(x)-f(y) = \int_y^x f'(t) dt$ for all $x>y$.
Now use the fact that $f'$ is integrable and positive to conclude absolute continuity (note that $\lim_{M \to \infty} \int_{\{x | f'(x) \ge M\}} f' = 0$, and if $f'(t) \le M$, the result is straigtforward).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove $1/x$ is not uniformly continuous $f: (0,+\infty) \to (0,+\infty)$ $f(x) = 1/x$, prove that f is not uniformly continuous.
Firstly, I negated the definition of uniform convergence obtaining:
$\exists \epsilon > 0 $ s.t. $\forall \delta > 0 $ with $|x-y| < \delta$ & $x,y \in (0, + \infty)$ and $|f(x) - f(y)| = \left|\dfrac{x-y}{xy}\right| \geq \epsilon$
so I choose $\epsilon = 1$ and $ x = \delta/2 \in (0,+\infty)$ and $y = \delta /4 \in(0,+\infty)$ so $|x-y| = \delta/2 < \delta$ and $|f(x) - f(y)| = |2/\delta|$ How do I show that this is greater than or equal to epsilon?
|
Here a full answer (that i writte too to practice) but take into account that I am just a student so I hope it is correct.
1 - First let recall the definition of a non uniformly continuous function.
It exists at least one $\epsilon_0>0$ such that for every $\delta>0$ that we can choose it will always exists at least $x$ and $y$ that verifies $|x-y|<\delta$ but $|f(x)-f(y)|>\epsilon_0$.
More formally: $\exists \epsilon_0>0 \; , \forall \delta>0 \; : \; \exists |x-y|< \delta \Rightarrow |f(x)-f(y)| \geq \epsilon_0$
2 - Now let pay attention to the following inequalities:
(1): for any $\delta>0$ given it exists $N=Max(1; \left \lceil 1/ \delta \right \rceil)$ s.t. $1/N < \delta$ . Moreover all $n \geq N$ verifies too this inequality (by assumption we are in $(0; \infty)$ ).
(2): $\forall n \in \mathbb{N} $ we have $|\frac{1}{1/n}-\frac{1}{n+1/n}|=|n-\frac{n}{n^2+1}|=|n(1-\frac{1}{n^2+1})|=|\frac{n^3}{n^2+1}| \geq 1/2$
3 - Now we can writte:
$\exists \epsilon_0 = \frac{1}{4}>0$ such that for any $\delta>0$ it will always exists ,with $n \geq N$ as define in (1), at least two points $x_n=n$ and $y_n=n+1/n$ that despite that verifying $|x_n-y_n|=|1/n| < \delta$ (by (1)) $ \Rightarrow|f(x_n)-f(y_n)|=|\frac{n^3}{n^2+1}|>1/4$.
More formally: $\exists \epsilon_0 = \frac{1}{4}>0 \; , \forall \delta>0 \; : \; \exists \; x_n = n, \; y_n=n+\frac{1}{n}$ with $n \geq max(1; \left \lceil 1/ \delta \right \rceil)$
By (1): $|x_n-y_n|< \delta$
By (2): $\Rightarrow |f(x_n)-f(y_n)|=|\frac{n^3}{n^2+1}| \geq \epsilon_0 = \frac{1}{4}$
Q.E.D.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
if $g^k=e$ then $\chi(g)=\sum_j^n \zeta_k$
Let $G$ be a group. Let $g \in G$ and $g^k=e.$ Let $\chi$ be an
$n$-dimensional character of the group $G.$ Let $\zeta_k$ be $k$-th root of unity.
Prove that $\chi(g)$ is equal to sum of a $k$-th roots of unity.
My trying. Consider the cyclic subgroup $C_k \in G,$ Such that $C_k=\{g^i| i<k\}.$
Then any representation $\rho$ of $C_k$ is equal to prime sum of 1-dimensional representations $\rho_j$ of the group $C_k.$ Them number is $n.$ and $\rho_j(g)=\zeta_k^r$ as $r|k.$
I'm not sure of the correctness of my proof
|
If $g^k = e$ then since the function $\rho: G \to GL_n(\mathbb C)$ that constitutes your representation is a homomorphism you have that $\rho(g^k) = \rho(g)^k = $ I, $ $thus in particular you know that all of the eigenvalues of $\rho(g)$ must be kth roots of unity (as for any $v \in \mathbb C^n$ you have that $\rho(g)^kv = v$, including its eigenvectors). But the character $\chi_\rho(g) = \text{Trace}(\rho(g))$ and the trace of a matrix is the sum of its eigenvalues as you know from linear algebra. Thus it must be a sum of kth roots of unity.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What makes a condition unary vs. n-ary (n>1)? For any two disjoint sets $A$ and $B$, a set $W$ is a connection of $A$ with $B$ if
*
*$Z\in W\implies (\exists x\in A)(\exists y\in B)[Z=\{x,y\}]$
*$(\forall x\in A)(\exists !y\in B)[\{x,y\}\in W]$
*$(\forall y\in B)(\exists !x\in A)[\{x,y\}\in W]$
I know that each of the conditions 1-3 are definite conditions (and indeed being a connection in general is a definite condition). But I'm having trouble telling when I should consider something an $n$-ary condition $(n>1)$ vs a unary condition.
I want to use the Axiom of Separation (Zermelo axioms) which requires a definite condition to be unary. The condition that I want to use is that of a set being a connection. But, to me, that would be a ternary condition, say $P(A,B,W)$. So I would have to "decompose" this ternary condition into unary conditions and apply the separation axiom multiple times, no?
|
Say, $P(A,B,W)$ is the first order proposition that expresses that $W$ is a connection between $A$ and $B$, Then consider
$${\rm isConn}(W):= " \exists A\exists B:P(A,B,W)"\,.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What is the difference between a point and a vector? I understand that a vector has direction and magnitude whereas a point doesn't.
However, in the course notes that I am using, it is stated that a point is the same as a vector.
Also, can you do cross product and dot product using two points instead of two vectors? I don't think so, but my roommate insists yes, and I'm kind of confused now.
|
There is a difference of definition in most sciences, but what I suspect you're asking about is a rather nice one-to-one correspondence between points in real space (say perhaps $\mathbb{R}^n$) and vectors between $(0, 0, 0)$ and those points in the space of $n$-dimensional vectors.
So, for every point $(a, b, c)$ in $\mathbb{R}^3$, there's a vector $(a, b, c)$ in the space of all 3-dimensional vectors.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "152",
"answer_count": 19,
"answer_id": 10
}
|
Prove this is a subspace Let $ W_1, W_2$ be subspace of a Vector Space $V$.
Denote $W_1+W_2$ to be the following set
$$W_1+W_2=\left\{u+v, u\in W_1, v\in W_2\right\}$$
Prove that this is a subspace.
I can prove that the set is non emprty (i.e that it houses the zero vector).
pf: Since $W_1 , W_2$ are subspaces, then the zero vector is in both of them.
$$\mathbb{O}_V+\mathbb{O}_V=\mathbb{O}_V$$
but I can't wrap my head around the closure of addition and scalar multiplication.
|
If $w_1,w_2 \in W_1+W_2$, then $w_k=u_k+v_k$ for some $u_k \in W_1$ and $v_k \in W_2$. Since $u_1+u_2 \in W_1$ and $v_1+v_2 \in W_2$, we have $w_1+w_2=(u_1+u_2) + (v_1+v_2) \in W_1+W_2$.
Similarly, if $w \in W_1+W_2$, then $w=u+v$ for some $u \in W_1$ and $v \in W_2$, since $\lambda u \in W_1$ and $\lambda v \in W_2$, we see that $\lambda w = (\lambda u) + (\lambda v) \in W_1+W_2$.
Alternatively, note that the range space of a linear operator is a linear space
and $W_1 \times W_2$ is a vector space with componentwise addition and multiplication. If $L:W_1 \times W_2 \to V$ is the linear operator given by $L((w_1,w_2)) = w_1+w_2$, we see that $W_1+W_2 = L(W_1 \times W_2 )$, hence it is a linear space.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Are all $\mathbb{Z}/(6)$-modules injective? I'm trying to show that every $\mathbb{Z}/(6)$ module is injective. My strategy is to use Baer's Criterion.
The only nontrivial ideals of $\mathbb{Z}/(6)$ are $(2)=(4)=\{0,2,4\}$ and $(3)=\{0,3\}$. Suppose $g:(2)\to Q$ is a morphism into any $\mathbb{Z}/(6)$ module $Q$. I try to extend it some $G$ on all of $\mathbb{Z}/(6)$. I know $G$ will be uniquely determined by $G(1)$. From
$$
g(2)=G(2)=2G(1)
$$
and $2g(4)=g(8)=g(2)=2G(1)$, I want to define $G(1)=g(4)$ and extend homomorphically. It seems to check out that $G$ is an extension of $g$. My concern is, is there some guarantee that just setting $G(1)=g(4)$ is ok? Will $G(1)$ be well-defined to extend to a module homomorphism on all of $\mathbb{Z}/(6)$?
|
*
*Every module over a field (i.e. vector space) is injective.
*If $C_1,C_2$ are categories with injective objects $I_1 \in C_1$, $I_2 \in C_2$, then $(I_1,I_2)$ is an injective object of $C_1 \times C_2$.
*If $R_1,R_2$ are rings, there is an equivalence of categories $\mathsf{Mod}(R_1 \times R_2) \simeq\mathsf{Mod}(R_1) \times \mathsf{Mod}(R_2)$.
*Chinese Remainder Theorem.
Combining these basic facts, we get that in $\mathsf{Mod}(\mathbb{Z}/6) \simeq \mathsf{Mod}(\mathbb{F}_2) \times \mathsf{Mod}(\mathbb{F}_3)$ every object is injective.
In order to make your proof work, use $\hom(\mathbb{Z}/n,A) \cong \{a \in A : n \cdot a = 0\}$, $f \mapsto f(1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove $\int \limits_0^b x^3 = \frac{b^4}{4} $ by considering partitions $[0, b]$ in $n$ equal subinvtervals. I was given this question as an exercise in real analysis class. Here is what I came up with. Any help is appreciated!
Prove $\int \limits_0^b x^3$ = $\frac{b^4}{4} $ by considering partitions [0, b] in $n$ equal subinvtervals.
Consider $f$ on the interval $[0, b]$ where $b > 0$. For a partition $ P = \{0=t_0 < t_1 < ...< t_n = b\}$ we have:
$U(f, P) = $$\sum_{k=1}^n t^3_k (t_k - t_1) $
If we choose $t_k = \frac{kb}{n} $ then we can say
$U(f, P) = $$\sum_{k=1}^n \frac{k^3b^3}{n^3}\ . \frac{b}{n}$
=> $U(f, P) = \frac{b^4}{n^4} \sum_{k=1}^n k^3$
=> $U(f, P) = \frac{b^4}{n^4}\ . \frac{n^2(n + 1)}4$
so $L(f) \ge \frac{b^4}{n^4}$ . Therefore $f(x) = x^3$ is integrable on $[0, b]$ and $\int \limits_0^b x^3$ = $\frac{b^4}{4} $.
|
Say that you want to use $n$ subintervals. So you want to integrate $x^3$ over the range b(i-1)/n and b i/n, index $i$ running fron $0$ to $n$.
The result of this integration for this small range is simply given by
b^4 (-1 + 4 i - 6 i^2 + 4 i^3) / (4 n^4)
You must now add up all these terms from $i=0$ to $i=n$ which means that you need to compute the sum of the $i$, the sum of the $i^2$ and the sum of the $i^3$ form $i=0$ to $i=n$. These sums are known and applying, you arrive after simplifcations to
b^4 (n^4 - 1) / (4 n^4)
Now, I suppose you want to move $n$ to infinity.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/645932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is the following set is compact Consider the set of all $n \times n$ matrices with determinant equal to one in the space of $\mathbb R^{n\times n}$.
My idea is compact because determinant function is continous ant it is bijective from the given set to $\mathbb R$ and $\mathbb R$ is Hausdorff, so image of compact set is compact.
|
In $\mathbb R^m$, with the usual metric topology, a set is compact iff it is closed and bounded. The set you describe is closed (since it is the inverse image of a closed set under a continuous function) but it fails to be bounded if $n>1$ (and the case $n=1$ is trivial, do you see why?). Try to find matrices with arbitrarily large entries, but still having determinant $1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be multiplicative. Is it exponential? For function $f:\mathbb{R}\rightarrow\mathbb{R}$ that satisfies $f\left(x+y\right)=f\left(x\right)f\left(y\right)$
and is not the zero-function I can prove that $f\left(1\right)>0$
and $f\left(x\right)=f\left(1\right)^{x}$ for each $x\in\mathbb{Q}$.
Is there a way to prove that for $x\in\mathbb{R}$?
This question has been marked to be a duplicate of the question whether $f(xy)=f(x)f(y)$ leads to $f(x)=x^p$ for some $p$. I disagree on that. Both questions are answered by means of construction of a function $g$ that suffices $g(x+y)=g(x)+g(y)$. In this question: $g(x)=\log f(x)$ and in the other $g(x)=\log f(e^x)$. So the answers are alike, but both questions definitely have another startpoint.
|
Continuity (or continuity in some point or measurability) is required. See Cauchy's functional equation. Your problem is reducible to this.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
MATLAB: Approximate tomorrow's temperature with 2nd, 3rd and 4th polynomial using the Least Squares method. The following is Exercise 3 of a Numerical Analysis task I have to do as part of my university course on the subject.
Find an approximation of tomorrow's temperature based on the last 23
values of hourly temperature of your city ( Meteorological history for
Thessaloniki {The city of my univ} can be found here:
http://freemeteo.com)
You will approximate the temperature function with a polynomial of
2nd, 3rd and 4th degree, using the Least Squares method. Following
that, you will find the value of the function at the point that
interests you. Compare your approximations qualitatively and make a
note to the time and date you're doing the approximation on.
Maybe it's due to fatigue due to doing the first two tasks without break, or it's my lack of experience on numerical analysis, but I am completely stumped. I do not even know where to start.
I know it's disgusting to ask for a solution without even showing signs of effort, but I would appreciate anything. Leads, tutorials, outlines of the things I need to work on, one after the other, anything.
I'd be very much obliged to you.
NOTE: I am not able to use any MATLAB in-built approximation functions.
|
Let $\mathbf{t}\in\mathbb{R}^{23}$ be the last 23 samples you have. To fit these to an $N^\mathrm{th}$ order polynomial in terms of the hour, i.e., $t = \sum_i p_i h^i$, where $\mathbf{p}\in\mathbb{R}^{N}$ is the vector of coefficients and $h\in\mathbb{R}$ is the time in hours, first set up the system using the measurements you have and the times at which they were taken. According to the problem, the times were taken at $h = 0,1,\dots$, so let us define $h_i$ to be equal to the value $i$. Then
$$
\left[
\begin{array}{ccccc}
h_0^0 & h_0^1 & h_0^2 & h_0^3 & \cdots \\
h_1^0 & h_1^1 & h_1^2 & h_1^3 & \cdots \\
\vdots & \vdots & & \ddots
\end{array}
\right]
\mathbf{p} = \mathbf{H} \mathbf{p} = \mathbf{t}
$$
If you write the above out in long form, you'll see that all it says is that
$$
p_0 \cdot h_i^0 + p_1 \cdot h_i^1 + p_2 \cdot h_i^3 + \dots = t_i
$$
Now to solve for $\mathbf{p}$, you just use the normal equation:
$$
\mathbf{p} = \left( \mathbf{H}^{\mathrm{T}} \mathbf{H} \right)^{-1} \mathbf{H}^{\mathrm{T}} \mathbf{t}
$$
In Matlab, setting up and solving for the above is straightforward. Once you have $\mathbf{H}$ and $\mathbf{t}$ defined,
p = (H' * H) \ H' * t;
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How do I convince my students that the choice of variable of integration is irrelevant? I will be TA this semester for the second course on Calculus, which contains the definite integral.
I have thought this since the time I took this course, so how do I convince my students that for a definite integral
$$\int_a^b f(x)\ dx=\int_a^b f(z)\ dz=\int_a^b f(☺)\ d☺$$
i.e. The choice of variable of integration is irrelevant?
I still do not have an answer to this question, so I would really hope someone would guide me along, or share your thoughts. (through comments of course)
NEW EDIT: I've found a relevant example from before, that will probably confuse most new students. And also give new insights to this question.
Example: If $f$ is continuous, prove that
$$\int_0^{\pi/2}f(\cos x)\ dx = \int_0^{\pi/2}f(\sin x)\ dx$$
And so I start proving...
Note that $\cos x=\sin(\frac{\pi}{2} -x)$ and that $f$ is continuous, the integral is well-defined and
$$\int_0^{\pi/2}f(\cos x)\ dx=\int_0^{\pi/2}f(\sin(\frac{\pi}{2}-x))\ dx $$
Applying the substitution $u=\frac{\pi}{2} -x$, we obtain $dx =-du$ and hence
$$\int_0^{\pi/2}f(\sin(\frac{\pi}{2}-x))\ dx=-\int_{\pi/2}^{0}f(\sin u)\ du=\int_0^{\pi/2}f(\sin u)\ du\color{red}{=\int_0^{\pi/2}f(\sin x)\ dx}$$
Where the red part is the replacement of the dummy variable. So now, students, or even some of my peers will ask: $u$ is now dependent on $x$, what now? Why is the replacement still valid?
For me, I guess I will still answer according to the best answer here (by Harald), but I would love to hear more comments about this.
|
Maybe it helps to investigate the wording: integration variable is just a fancy name for what we used to call placeholder in elementary school when we solved
3 + _ = 5
and used an underscore or an empty box as the placeholder. Isn't it obvious then that the symbol (or variable name) cannot have an effect on the solution?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62",
"answer_count": 19,
"answer_id": 15
}
|
Showing that a map defined using the dual is a bounded linear operator from X' into X' I have trouble answering the second part of the following exercise. Any help would be appreciated!
Let $(X, \| \cdot \|)$ be a reflexive Banach space. Let $\{ T_n \}_{n = 1}^\infty$ be a sequence of bounded linear operators from $X$ into $X$ such that $\lim_{n \to \infty} f(T_n x)$ exists for all $f \in X'$ and all $x \in X$.
(a) Show that $\sup_{n} \|T_n' \| < + \infty$
(b) Show that the map $S$ defined by $(Sf)(x) := \lim_{n \to \infty} (T_n'f)(x)$ is a bounded linear operator from $X'$ into $X'$.
What I've done so far:
(a): I have done this using the Uniform Boundedness Principle twice after first showing that $\sup_n \| T_n' (f)(x) \| < + \infty$
(b): I think that $\displaystyle \sup_n\|T_n' \|$ would make a good bound since we have just seen that it is finite, but I have not succeeded in proving that so far...
Thank you very much for any hints you can offer me!
|
After you have shown that $A := \sup\limits_n \lVert T_n'\rVert < \infty$, you have a known bound on $S(f)(x)$ for every $x\in X$ and $f\in X'$, namely
$$\lvert S(f)(x)\rvert = \lim_{n\to\infty} \lvert T_n'(f)(x)\rvert \leqslant \limsup_{n\to\infty} \lVert T_n'(f)\rVert\cdot \lVert x\rVert \leqslant \limsup_{n\to\infty} \lVert T_n'\rVert\cdot \lVert f\rVert\cdot \lVert x\rVert.$$
From $\lvert S(f)(x)\rvert \leqslant N\lVert f\rVert\cdot \lVert x\rVert$, we deduce by the definition of the norm on $X'$ that $\lVert S(f)\rVert \leqslant N\cdot \lVert f\rVert$, and this in turn shows that $S$ is a continuous (bounded) operator on $X'$ with $\lVert S\rVert \leqslant N$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Equations of planes and lines in 3-space I'm reading Strang's book "Linear Algebra and it's applications" and he writes in the first chapter that an equation involving two variables in still a plane in 3-space.
"The second plane is 4u - 6v = -2. It is drawn vertically, because w can take any value. The coefficient of w is zero, but this, remains a plane in 3-space. (The equation 4u = 3, or even the extreme case u = 0, would still describe a plane.)"
I don't quiete understand why, I always thought that you would need three variables to create a plane. E.g. the intersection of the equations 2u+ v+ w= 5 and 4u - 6v = -2 is a line, presumably because when you solve those two equations you get an equation with two variables, but then why isn't 4u - 6v = -2 a line as well?
|
A linear equation reduces the dimension of the ambient space by 1. You can think of it as restricting one variable, as a function of all the others.
Hence, a linear equation in 2 dimensions is a one-dimensional space, or a line. A linear equation in 3 dimensions is a 2-dimensional space, or a plane. And so on.
Followup, as requested. A system of 2 linear equations generally reduces the dimension of the ambient space by 2 -- each of them reduces the dimension by 1. In a plane, this means a point (0-dimensional). In 3-space, this means a line (1-dimensional).
The reason it's "generally" is that the two equations may actually be the same, such as $$x+y=1, 2x+2y=2$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Hyperplane sections on projective surfaces I am studying Beauville's book "Complex Algebraic Surfaces".
At page 2 he defines the intersection form (.) on the Picard group of a surface.
For $L, L^\prime \in Pic(S)$
$$(L.L^\prime)=\chi(\mathcal{O}_S)-\chi(L^{-1})-\chi(L^{\prime-1})+\chi(L^{-1}\otimes L^{\prime-1})$$
Why the self-intersection (i.e $(H.H)=H^2$) of an hyperplane section $H$ on $S$ is always positive?
|
This self-intersection is exactly the degree of $S$.
Concretely, choose $H$ and $H'$ in general position, then $S \cap H$ and $S \cap H'$ are two curves on $S$, and they intersect in some number of points. This
already shows that the intersection is non-negative. The fact that it is positive
is a general fact about projective varieties: projective varieities of complementary dimension always have a positive number of intersection points.
(Apply this to $S$, which is of dimension $2$, and $H \cap H'$, which is a linear subspace of codimension $2$, i.e. of complementary dimension to $S$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Get the rotation matrix from two vectors Given $v=(2,3,4)^t$ and $w=(5,2,0)^t$, I want to calculate the rotation matrix (in the normal coordinate system given by orthonormal vectors $i,j$ and $k$) that projects $v$ to $w$ and to find out which is the rotation axis.
I Started by calculating the vector product $v \times w$, which I need in order to calculate the angle between the two vectors as $\varphi=arcsin({||v \times w|| \over ||v||||w||})$. This equals to $arcsin(\frac{3\sqrt{65}}{29})$.
I now that the rotation in the plane spanned by v and w is given by $A=\pmatrix{cos(arcsin(\frac{3\sqrt{65}}{29}) & - \frac{3\sqrt{65}}{29} \\ \frac{3\sqrt{65}}{29} & cos(arcsin(\frac{3\sqrt{65}}{29}))}$ and $v \times w$ ist he rotation axis, but I'm lost when it comes to determining the rotation matrix in 3D.
|
Try writing the matrix $A_{B_2}$ in terms of the basis $B_2 = \{\hat v, \hat w, \widehat{v \times w}\}$. This is very similar to the matrix for the plane that you've already written.
Then, think about if you know a way to change from the standard basis $B_1 = \{\hat i, \hat j, \hat k\}$ to this basis. That is, let $P_{1 \to 2}$ be a linear map such that $P_{1 \to 2}(\hat i) = \hat v$, $P_{1 \to 2}(\hat j)= \hat w$, and so on. Then the matrix representation of $P$ can be used to find the matrix representation $A_{B_1}$ (in the standard basis) like so:
$$A_{B_1} = P^{-1}_{1 \to 2} A_{B_2} P_{1 \to 2}$$
You should feel comfortable with the idea that $P^{-1}_{1 \to 2}$ is just the change of basis that goes from $B_2 \to B_1$, which is what you need.
Think about how you would write the change of basis matrix $P_{1 \to 2}$ and its inverse. Remember that this is an orthogonal matrix, and as such, its inverse is equal to its transpose (greatly simplifying things). All that should be left once you know that matrix is to do the matrix multiplications: a tedious process by hand, but that's all.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Correct way to get average price probably a basic question for a lot of you guys, but it was a subject of a friendly debate at my work earlier - needless to say none of the involved were accountants. In short, we are thinking about which is the proper way to get the average price of a sold item. Here is a simple example:
I sell part xyz 3 times this month.
I sell 100 for $67 each for $6700
I sell 80 at $70 each for $5600
and I sell 60 at $72 each for $4320
Is the average price of the product 69.67 ((67+70+72)/3)?
Or is it 69.25 (6700+5600+4320)/(100+80+60)?
Minor, I know, but still.
Is this just one of those things that you can say "it depends on how look at it". I tend to think it is the latter simply because it seems more accurate, you are actually figuring in all the individual items sold.
Thanks.
|
The latter is better, because it actually tells you that what you have earned is the same as if you've sold all the items at this average price.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
A question about degree of a polynomial Let $R$ be a commutative ring with identity $1 \in R$, let $R[x]$ be the ring of polynomials with coefficients in $R$, and let the polynomial $f(x)$ be invertible in $R[x]$. If $R$ is an integral domain, show that $\text{deg}(f(x))=0$
|
Go for a contradiction. Assume that $\deg(f)\geq 1$ Write out the polynomial, which has nonzero leading coefficient $a_n$
$$f(x)=a_nx^n+a_{n-1}x^{n-1}+...+a_1x+a_0$$
Write the inverse function with nonzero lead coefficient
$$f^{-1}(x)=b_mx^m+b_{m-1}x^{m-1}+...+b_1x+b_0$$
$f(x)f^{-1}(x)$ has the lead term $a_nb_mx^{n+m}$ where $\deg(f(x)f^{-1}(x))= n+m\geq1$. because $a_nb_m\neq0$ since we are in an integral domain. Also, $f(x)f^{-1}(x)=1$, which has degree 0. Contradication. Therefore, $\deg(f)=0$ if $f$ has an inverse.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/646910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Why $\frac{|1-z|}{1-|z|}\le K$ corresponds to the region defined by the Stolz angle? In his presentation of Abel's theorem, Ahlfors mentions that for a fixed positive number $K$, the region defined by \begin{equation}
\frac{|1-z|}{1-|z|}\le K
\end{equation} corresponds to the region inside the unit circle and in a certain angle with vertex $1$, symmetric around the x-axis.
This should not be too difficult, but I cannot actually see how that inequality is related to that geometric picture.
Can someone give a hint?
Thanks!
|
It's not an equality. The Stolz angle with opening $\alpha > 0$ and radius $r$ is
$$S(\alpha,r) = \{1 - \rho e^{i\varphi} : 0 < \rho < r,\; \lvert\varphi\rvert < \alpha\},$$
a circular sector that for $\alpha < \pi/2$ and small enough $r$ (depending on $\alpha$) is contained in the unit disk. Its boundary consists of two straight line segments, and one circular arc.
For $0 < K < \infty$, the region
$$R(K) = \left\lbrace z \in \mathbb{D} : \frac{\lvert 1-z\rvert}{1-\lvert z\rvert} < K \right\rbrace$$
is bounded by a curve that contains no straight line segment. But as $z$ approaches $1$ on the boundary of $R(K)$, the angle between between the real axis and $1-z$ approaches a limit $< \pi/2$ (namely $\arccos K^{-1}$).
The equivalence of the two conditions is to be understood in the sense that if we consider $R(K,r) = \{z \in R(K) : \lvert 1-z\rvert < r\}$, then for every $K \in (0,\infty)$, there is an $\alpha < \pi/2$ with $R(K,r) \subset S(\alpha,r)$, and for every angle $\alpha \in (0,\pi/2)$, there is a $K < \infty$ with $S(\alpha,r) \subset R(K,r)$ (where $r$ is always supposed small enough that $\overline{S(\alpha,r)}\setminus \{1\}$ is contained in the unit disk). Demonstrations at wolfram.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/647025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How prove exists a sequence $\{a_{n}\}$ of real numbers such that $\sum_{n=1}^{\infty}a^2_{n}<\infty,\sum_{n=1}^{\infty}|a_{n}b_{n}|=\infty$ Suppose that the series $\displaystyle\sum_{n=1}^{\infty}b^2_{n}$ of postive numbers diverges. Prove that
there exists a sequence $\{a_{n}\}$ of real numbers such that
$$
\sum_{n=1}^{\infty}a^2_{n}<\infty
\quad\text{and}\quad
\sum_{n=1}^{\infty}|a_{n}b_{n}|=\infty.
$$
My try: maybe this Cauchy-Schwarz inequality have usefull
$$\Big(\sum_{n=1}^{\infty}a^2_{n}\Big)\Big(\sum_{n=1}^{\infty}b^2_{n}\Big)\ge
\Big(\sum_{n=1}^{\infty}a_{n}b_{n}\Big)^2$$
|
Define
$$
s_n=\sum_{k=1}^nb_k^2
$$
and
$$
a_n=\frac{b_n}{\sqrt{s_ns_{n-1}}}
$$
Without loss of generality, assume $b_1\ne0$.
Since $u-1\ge\log(u)$ for $u\gt0$,
$$
\begin{align}
\sum_{k=2}^n a_kb_k
&=\sum_{k=2}^n\frac{s_k-s_{k-1}}{\sqrt{s_ks_{k-1}}}\\
&=\sum_{k=2}^n\sqrt{\frac{s_k}{s_{k-1}}}-\sqrt{\frac{s_{k-1}}{s_k}}\\
&\ge\sum_{k=2}^n\sqrt{\frac{s_k}{s_{k-1}}}-1\\
&\ge\frac12\sum_{k=2}^n\log\left(\frac{s_k}{s_{k-1}}\right)\\[8pt]
&=\frac12(\log(s_n)-\log(s_1))
\end{align}
$$
Therefore,
$$
\sum_{k=1}^\infty a_kb_k=\infty
$$
$$
\begin{align}
\sum_{k=2}^n a_k^2
&=\sum_{k=2}^n\frac{s_k-s_{k-1}}{s_ks_{k-1}}\\
&=\sum_{k=2}^n\frac1{s_{k-1}}-\frac1{s_k}\\
&=\frac1{s_1}-\frac1{s_n}
\end{align}
$$
Therefore,
$$
\sum_{k=2}^\infty a_k^2=\frac1{b_1^2}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/647085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Real- Valued Random Variable This is from Ross Ihaka's notes about Time Series Analysis.
Any random variable which has probability 1 of being zero will have $\langle X,X \rangle = 0$, which violates the requirement that this only happen when $X=0$.
*
*How can a real-valued random variable has probability 1 of being zero and probability 0 of being other values?
*Why is there uniqueness problem here?
Thank you.
|
A random variable is a (measurable) function from a sample space to the real numbers. There is nothing wrong with mapping (almost) everything to value 0. This is still a random variable.
When $X=0$ with probability $1$, $\langle X,X\rangle$ equals to 0.
Here is an example, let the sample space be the interval $[0,1]$ with uniform measure. For a $x$ in the sample space, $X(x) = 0$ for all $x$ in the sample space except $X(0.5)=1$, then $EX^2=0$ but $X$ is not 0 everywhere. This violates uniqueness that there is only one $X$ such that $EX^2=0$
This is quite standard in probability theory: you just treat two r.v. as the same if they equal almost everywhere, i.e. they differ only on a set of measure $0$. You 'quotient' out the equivalent classes of random variables which are the same almost everywhere, then the 0 is unique. This is known as $L^2$ space.
from a practical point of view, just ignore it, it does not matter.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/647314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Basic differentiation: second derivative I'm currently teaching myself some differential equations by watching the MIT OCW series on the topic. In This video, at 21:50mins, the lecturer calculates the following derivatives:
1st $y'=x^2-y^2$
2nd $y''=2x-2yy'$
My simple question is, how he came to the second one. Is this a "total derivative" and why is it required? If I try to calculate the total derivative of y', I get:
$y''=(2x-1)dx+(1-2y)dy=2x dx - 2y dy$
I'm pretty sure that I made a silly mistake. Thanks for your help!
|
What's going on in the video, and in your posted problem, is what we refer to as implicit differentiation.
We view $y$ as a function of $x$, and thus, need to use the chain rule: $$y'(x) = x^2 - [y(x)]^2 \implies y''(x) = 2x - 2y(x)y'(x)$$ The author simply omits the parenthetical argument $(x)$: $$y'' = 2x - 2yy''$$
You can also see that your suggested answer just stops short one step: divide your $y''$ through by $dx$ and simplify! So you are not wrong; you "simply" haven't simplified!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/647507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove: If $\gcd(a,b,c)=1$ then there exists $z$ such that $\gcd(az+b,c) = 1$ I can't crack this one.
Prove: If $\gcd(a,b,c)=1$ then there exists $z$ such that $\gcd(az+b,c) = 1$ (the only constraint is that $a,b,c,z \in \mathbb{Z}$ and $c\neq 0)$
|
This question is closely related to this one. There is a small difference, but apparently both solutions provided there fit in here too, after a very small modification:
Assume $c\neq0$.
Let $p_1,p_2,\ldots,p_i$ be the common prime divisors of $c$ and $b$, with their respective powers $e_1,\ldots,e_i$ in the prime factorisation of $c$.
If we set $d=p_1^{e_1}\cdots p_i^{e_i}$, then $z=\frac cd$ will satisfy $\gcd(az+b,c)=1$.
To see why, suppose $q$ is prime and $q$ divides both $c$ and $az+b$.
If $q\mid b$, then we should have $q\mid az$ which is impossible since $\gcd(a,b,c)=1$ and $\gcd(z,b)=1$. If $q\nmid b$, then $q\mid z$ by the definition of $z$, and therefore $q\nmid az+b$, a contradiction again.
An alternative with induction:
Clearly the desired theorem is true for $a,c=\pm 1$.
Now suppose we know it's true for all $a,c$ satisfying $|a|+|c|\leq s$. (We will do induction on $s$.)
Let $|a|>1$. (The case $a=\pm1$ is trivial and needs no induction, in fact.)
If $a\mid c$, let $c=c'a$ and then $(za+b,c)=(za+b,c'a)=(za+b,c')$ for all $z$, because we are given $(a,b,c)=1$. Now $|a|+|c'|<|a|+|c|$ and we conclude, by the induction hypothesis, there is a $z$ satisfying $(za+b,c)=1$.
If $a\nmid c$, let $g=(a,c)$.
We are looking for an integer $d$ with $(d,c)=1$ such that $az+kc=d-b$ has a solution for $z$ and $k$. (This is simply rewriting $(az+b,c)=1$ where $d\equiv az+b\pmod c$.)
It is well know that such a linear equation has a solution if and only if $g\mid d-b$, or equivalently $d=z'g+b$ for some $z'$. Because $|g|<|a|$ we can again use the induction hypothesis, and conclude that there exists such a $z'$, and therefore there exists such $z$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/647600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 4,
"answer_id": 2
}
|
Graph Theory. Prove that $\sum_{v}^{} \frac{1}{1+d(v)} \ge \frac{n^2}{2e+n} $ Let e denote the number of edges and n the number of vertices. We can assume that the graph G is simple. Prove that
$\sum_{v}^{} \frac{1}{1+d(v)} \ge \frac{n^2}{2e+n} $
Any help/hints would be very much appreciated!
|
Recast the right-hand side as follows:
$$\frac{n^2}{2e+n} = \sum_v\frac{n}{2e+n} = \sum_v \frac{1}{1+ \frac{2e}{n}}.$$
From the handshaking lemma, it follows that $\frac{2e}{n} = \overline{\delta}$, the average degree. Instead of working with the above quantity, we work with the harmonic mean, which is incidentally equal to the arithmetic mean since we are summing over identical quantities:
$$\frac{n}{\sum_v \frac{1}{1+ \overline{\delta}}} = 1+\overline{\delta}=\frac{\sum_v(1+\overline{\delta})}{n}.$$
Notice that since $\overline{\delta}$ is the average degree, we also have
$$\frac{\sum_v(1+\overline{\delta})}{n} = \frac{\sum_v(1+\delta(v))}{n}.$$
By the arithmetic-harmonic inequality, we therefore have
$$\frac{\sum_v\left(1+\delta(v)\right)}{n} \ge \frac{n}{\sum_v\frac{1}{1+\delta(v)}}.$$
Putting everything together, we finally have
$$\frac{n}{\sum_v \frac{1}{1+ \overline{\delta}}} \ge \frac{n}{\sum_v\frac{1}{1+\delta(v)}}.$$
Cancelling $n$s and taking the reciprocal yields the desired inequality. Incidentally, we have also shown that equality occurs if and only if the graph is regular through the equality case of the arithmetic-harmonic inequality.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/647704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Proving that $(a_n) $ defined by induction to be: > $a_1=2.2$, $a_{n+1}=5-\frac6{a_n}$ is converging and finding the limit
Let there be the sequence $(a_n) $ defined by induction to be:
$a_1=2.2$, $a_{n+1}=5-\frac6{a_n}$ $\forall n\ge 1$
Prove that the sequence is converging and calculate it's limit.
So what needs to be done is to show that it's increasing and bounded and to find the limit.
Finding the limit we know that $a_n\to \alpha $ which leads to $ \alpha=5-\frac6{\alpha}$ after solving the quadratic equation we get: $ \alpha_{1,2}=-1,6$ so we know that the limit is 6 because it can't be negative.
Increasing, by induction, we need to show that $a_n\le a_{n+1}$ so we get to: $a_{k+2}= 5-\dfrac6{a_{k+1}} \ge 5-\dfrac6{a_k}=a_{k+1}\to...\to a_{k}\le a_{k+1}$.
Is my way correct ? how do I show that it's bounded ? anything missing ?
NOTE: I need to know how to solve this rigoursly as I study for a test so please let me know of anything that needs to be mentioned in the test.
|
Ok. I don't know how you resolved this in class but here's one way.
Let $f$ be defined by $$f(x) = 5-\frac{6}{x}$$
So we have, $a_1 = \frac{11}{5}$ and $$\forall n \geqslant 1, \qquad a_{n+1} = f(a_n)$$
Study of $f$
$f$ is defined for $x\neq 0$. We are looking for stable intervals by $f$ (interval $I$ such that $f(I) \subset I$).
If we plot the function we can have some idea of what are some stable intervals.
For example, if $x \geqslant 2$, then $\frac{6}{x} \leqslant 3$ and finally $f(x) \geqslant 2$. Idem, if $x \leqslant 3$, then we can show that $f(x) \leqslant 3$. So $I= [2, 3]$ is stable by $f$.
Let's also study the sign of $$g(x) = f(x) - x = 5-x-\frac{6}{x}$$ on $I$.
For $x\in I$, we know that we can differentiate $g$ and got $$g'(x) = \frac{6}{x^2}-1$$
So, if $x^2 < 6$ ($\Leftrightarrow |x| < \sqrt{6}$ and $\sqrt{6} > 2.2$) $g$ is increasing and decreasing otherwise.
We can also (by solving a simple equation) that the roots of $g$ are $2$ and $3$.
So. On $I$, $f(x) \geqslant x$ if $x\in [2,3]$ and $f(x) \leqslant x$ if $x > 3$
We've got everything to end this.
$a_n$ is bounded
We're going to show that $\forall n\in \mathbb{N}, a_n \in I=[2,3]$.
This is clear for $a_1 = 2.2$. Then if we assume that $a_n \in I$ then
$$a_{n+1} = f(a_n) \in I$$ because $a_n \in I$ and $I$ is stable by $f$.
So the property is proved.
$a_n$ is increasing
We now know that $a_n \in [2,3]$ for all $n$.
But, we also know that $$\forall x\in [2,3], \qquad f(x) \geqslant x$$ If we say $x = a_n$ (which is allowed because we're in the right interval) we have $$f(a_n) \geqslant a_n$$
So $$a_{n+1} \geqslant a_n$$ and $a_n$ is increasing.
The end
So we now know that $a_n$ has a limit, say $\ell$. $\ell$ has to have the property $$f(\ell) = \ell$$ We have prove that there's only $\ell =2$ or $\ell =3$. And since for $n > 1$, $a_n >a_1 >2$, we know we have $$a_n \to 3$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/647783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
In $\ell^p$, if an operator commutes with left shift, it is continuous? Our professor put this one in our exam, taking it out along the way though because it seemed too tricky. Still we wasted nearly an hour on it and can't stop thinking about a solution.
What we have: The left shift $L : \ell^p \to \ell^p$
$$L(x_1,x_2,x_3,\ldots) = (x_2,x_3,\ldots)$$
and another operator $T$. We should prove that if $TL=LT$, then $T$ is continuous.
We had defined subspaces
$$ X_k = \{ (x_i) : x_i = 0 \text{ for } i>k \} $$
and seen that these are $T$-invariant and the restrictions $T : X_k \to X_k$ continuous (obvious). The hint was to use closed-graph-theorem to show that $T$ is continuous. Of course we can truncate any sequence to then lie in $X_k$, however I do not see how convergence of the truncated sequences relates to convergence of the images under $T$.
Any help please?
|
The statement is false, as I discovered here. Since not everybody has access to the paper, let me provide a summary of the argument:
Let $R=\mathbb C[t]$, $L$ the left shift operator, and view $\ell^p$ as an $R$-module by defining $t\cdot x=Lx$. Let $X=\sum \ker L^i \subset \ell^p$ be the subspace of eventually-zero sequences.
Lemma: Given a PID $P$, a $P$-module $M$ is injective if and only if it is divisible, i.e., if for every $p\in P$, $pM=M$.
Lemma: $X$ is an injective $R$-module.
Proof. Since a PID is a UFD, we can check divisibility (and hence injectivity) using irreducible elements. Over $R$, the irreducible elements are the linear polynomials. Thus, we need to show that if $x$ is an eventually-null sequence and $\lambda\in \mathbb C$, there exists a $y$ such that $Ly-\lambda y=x$. This is straightforward.
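To spell the straightforward step out: since $(Ly)_n = y_{n+1}$, the equation $Ly-\lambda y = x$ reads $y_{n+1}-\lambda y_n = x_n$ for all $n$. If $x_n=0$ for $n>N$, set $y_n=0$ for $n>N$; for $\lambda\neq 0$ solve backwards via $y_n=(y_{n+1}-x_n)/\lambda$ for $n\le N$, and for $\lambda=0$ simply take $y_{n+1}=x_n$. In every case $y$ is again eventually zero, so it lies in $X$.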
Because $X$ is injective, the inclusion $X\subset \ell^p$ splits: there exists a (non-unique) projection map $P:\ell^p\to X$. Since this is a map of $R$-modules, it commutes with $L$.
Lemma: $P$ is not continuous.
Proof. Suppose that $P$ were continuous. Then $X=\ker (P-\operatorname{Id})$ is closed. However, this is absurd, as every sequence can be approximated by its truncations to finitely many terms (i.e., $X$ is a dense proper subspace).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/647866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 2,
"answer_id": 0
}
|
Bipartite proof Let $G$ be a graph of order $5$ or more. Prove that at most one of $G$ and "$G$ complement" is bipartite.
I'm lost as to what needs to be done. I know that A nontrivial graph $G$ is bipartite if and only if $G$ contains no odd cycles.
|
Let me give you a hint towards a much, much more elementary solution:
Hint: It is enough to show that if $G$ is bipartite, then $\bar{G}$ (the complement of $G$) is not bipartite.
To that end, suppose that $V(G)=A\cup B$ is a bipartition of $G$. Then in $E(G)$, there are no edges inside $A$, no edges inside $B$, and there may be some edges between $A$ and $B$.
Based on this, what does $E(\bar{G})$ look like? Since $G$ contains no edges inside $A$ and no edges inside $B$, it must be the case that $\bar{G}$ contains EVERY edge inside $A$ and EVERY edge inside $B$.
So, if $\bar{G}$ WERE bipartite, it couldn't be the case that any two vertices from $A$ or any two vertices from $B$ fell in the same part of the bipartition.
Can you see how to finish it from here, using the assumption that there are at least 5 vertices distributed between $A$ and $B$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/647954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Evaluate the contour integral $\int_{\gamma(0,1)}\frac{\sin(z)}{z^4}dz.$ Let $\gamma(z_0,R)$ denote the circular contour $z_0+Re^{it}$ for $0\leq t \leq 2\pi$. Evaluate
$$\int_{\gamma(0,1)}\frac{\sin(z)}{z^4}dz.$$
I know that the integrand expands as
\begin{equation}
\frac{\sin(z)}{z^4} = \frac{1}{z^4}\left(z-\frac{z^3}{3!}+\frac{z^5}{5!}-\cdots\right)
= \frac{1}{z^3}-\frac{1}{6z}+\cdots
\end{equation}
but I'm not sure if I should calculate the residues and poles or to use Cauchy's formula?
Using Cauchy's formula would give $$ \frac{2\pi i}{1!} \frac{d}{dz}\sin(z),$$
evaluated at $0$ gives $2\pi i$? I'm not sure though, any help will be greatly appreciated.
|
Cauchy's integral formula is
$$f^{(n)}(z) = \frac{n!}{2\pi i} \int_\gamma \frac{f(\zeta)}{(\zeta-z)^{n+1}}\,d\zeta,$$
where $\gamma$ is a closed path winding once around $z$, and enclosing no singularity of $f$.
Thus in your example, $n = 3$, and you need the third derivative,
$$\int_{\gamma(0,1)} \frac{\sin z}{z^4}\,dz = \frac{2\pi i}{3!} \sin^{(3)} 0 = \frac{2\pi i}{6} (-\cos 0) = - \frac{\pi i}{3}.$$
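As a cross-check via residues, using the Laurent expansion from the question: the coefficient of $\frac1z$ in $\frac{\sin z}{z^4}$ is $-\frac{1}{3!}=-\frac16$, so the residue theorem gives the same value,
$$\int_{\gamma(0,1)}\frac{\sin z}{z^4}\,dz = 2\pi i\left(-\frac16\right) = -\frac{\pi i}{3}.$$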
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Homogenous polynomial and partial derivatives I'm struggling to understand this part in a book I'm reading:
Let $F$ be a projective curve of degree $d$ with $P\in F$. Wlog,
suppose $P=(a:b:1)$. Let's look the affine chart $(a,b)\mapsto
(a,b,1)$.
Let $f$ be the dehomogenization of $F$; we can write $f$ in this way:
(WHY?)
$$f=F(x,y,1)=f_1(x-a,y-b)+\ldots +f_d(x-a,y-b)$$
Where $f_l$ is a homogeneous polynomial of degree $l$ and we have
(WHY?)
$$f_l=\sum_{i+j=l}\frac{1}{i!j!}\frac{\partial^lf}{\partial^ix\partial^jy}x^iy^j$$
I know this should be a silly question, but I'm a beginner in this subject and I really need help, if anyone could help me I would be grateful.
Thanks
EDIT
I'm thinking about Taylor's formula, but The formula of the post doesn't match with the Taylor's one of several variables, see for example this link, maybe there is some mistake in the formula of my post?
|
Question: "I know this should be a silly question, but I'm a beginner in this subject and I really need help, if anyone could help me I would be grateful. Thanks"
Answer: When trying to prove a formula you should verify the formula in an explicit "elementary " example first, then try to generalize.
Example: Let $F:=x^2+y^2+z^2$ and let $p:=(a,b,1) \in D(z)$ with $u:=x/z, v:=y/z$ local coordinates. Let $f(u,v):=u^2+v^2+1$.
It follows
$$f(u,v)=f(a+u-a, b+v-b)=(a+u-a)^2+(b+v-b)^2+1= $$
$$a^2+b^2+1+ 2a(u-a)+2b(v-b)+(u-a)^2+(v-b)^2=$$
$$=f(a,b)+\frac{\partial f}{\partial u}(a,b)(u-a)+\frac{\partial f}{\partial v}(a,b)(v-b) + $$
$$\frac{1}{2}\frac{\partial^2 f}{\partial u^2}(a,b)(u-a)^2 +\frac{\partial^2 f}{\partial u \partial v}(a,b)(u-a)(v-b)+\frac{1}{2}\frac{\partial^2 f}{\partial v^2}(a,b)(v-b)^2.$$
In general you get for any polynomial $g(u,v)$ the formula
$$g(u,v)=T_p(g(u,v)):=\sum_{k\geq 0} \sum_{i+j=k}\frac{1}{i!\,j!}\frac{\partial^{k} g}{\partial u^i\,\partial v^j}(a,b)\,(u-a)^i(v-b)^j,$$
which is the Taylor expansion of $g(u,v)$ at the point $p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Prove by induction that $2^{2n} – 1$ is divisible by $3$ whenever n is a positive integer. I am confused as to how to solve this question.
For the Base case $n=1$, $(2^{2(1)} - 1)\,/\, 3 = 1$, base case holds
My induction hypothesis is:
Assume $2^{2k} -1$ is divisible by $3$ when $k$ is a positive integer
So, $2^{2k} -1 = 3m$
$2^{2k} = 3m+1$
after this, I'm not quite sure where to go. Can anyone provide any hints?
|
Hint: If $2^{2k} - 1$ is divisible by $3$, then write
\begin{align*}
2^{2(k + 1)} - 1 &= 2^{2k + 2} - 1 \\
&= 4 \cdot 2^{2k} - 1 \\
&= 4 \cdot \Big(2^{2k} - 1\Big) + 3
\end{align*}
Do you see how to finish it up?
This technique is motivated by attempting to shoehorn in the term $2^{2k} - 1$, since that's the only piece we really know anything meaningful about.
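To spell out the finish: with the hypothesis $2^{2k}-1=3m$,
$$2^{2(k+1)}-1 = 4\big(2^{2k}-1\big)+3 = 4\cdot 3m+3 = 3(4m+1),$$
which is divisible by $3$, completing the induction.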
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 1
}
|
Are there any nontrivial ways to factor n-cycles into a product of cycles? I was reading a proof here about the simplicity of $A_n (n \ge 5)$. It states (and proves) a lemma about 3-cycles:
A 3-cycle $(a, b, c)$ may be written as $(a, b, c) = (1, 2, a)^{-1}(1, 2, c)(1, 2, b)^{-1}(1,2, a)$ (here, multiplication is right to left).
Later on, the author shows that a 3-cycles $(1, 2, 3)^{-1}$ and $(1, 2, k)$ are conjugate in $A_n$, decomposing the former into a product of the latter and some transpositions: $$(1, 2, k) = ((1, 2)(3, k))(1, 2, 3)^{-1}((1, 2)(3, k))^{-1}.$$
This certainly makes sense now that I see it, but how is this arrived at a priori? Is there a way to "factorize" cycles into other cycles?
Thank you so much.
|
I believe you need another way of looking on conjugation:
If the group $S_n$ is realized as acting on $n$ points, then conjugating by some permutation is just relabeling the points (i.e. if you have a cycle $\sigma=(123)$ and you relabel via $\tau=(12)$, then the relabeled cycle is $(213)$, and that is exactly the conjugate $\tau\sigma\tau^{-1}$). Now it is very easy to construct the element of $S_n$ that carries one $3$-cycle to the other, and all you need is to check that this element is even. Well, you have a concrete formula for the element, so that's easy.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
An inequality concerning triangle inequality. $a_1,a_2,a_3,b_1,b_2,b_3,c_1,c_2,c_3 \ge 0$, and given that $a_i+b_i \ge c_i$ for $i = 1,2,3$. I'd like the following inequality to hold, but can't find a proof, so I'd appreciate some help. $$\sqrt{a_1^2 + a_2^2 + a_3^2} + \sqrt{b_1^2 + b_2^2 + b_3^2} \ge \sqrt{c_1^2 + c_2^2 + c_3^2} $$
|
Good news. It does hold.
Note that applying Minkowski's inequality to the LHS, we have:
$$\sqrt{a_1^2 + a_2^2 + a_3^2} +\sqrt{b_1^2 + b_2^2 + b_3^2} \ge \sqrt{(a_1 +b_1)^2+(a_2 +b_2)^2 +(a_3 +b_3)^2}$$
Now all we need to do is show that
$$\sqrt{(a_1 +b_1)^2+(a_2 +b_2)^2 +(a_3 +b_3)^2} \ge \sqrt{c_1^2+c_2^2+c_3^2}$$
Which is trivial since:
$$a_i+b_i \ge c_i$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Way to find volume of the solid A solid has a square base of side $s$ . The upper edge is parallel to the base and has length $2s$. All other edges have length $s$ . What is the volume of the solid ?
NB : The volume of the tetrahedron with all sides length l is $ V = \dfrac{\sqrt2}{12}l^3$
|
Converting comments to answer.
Especially given your "NB", it seems like you're describing a solid formed by slicing a regular tetrahedron by a plane parallel to (and half-way between) two opposite edges. In that case, the volume of the solid is half the volume of a regular tetrahedron of side $2s$.
Imagine a regular tetrahedron balancing on an edge, with the opposite edge parallel to the table. A plane parallel to the table will be parallel to those opposite edges. If the plane passes halfway between the table and the "upper" edge, then that plane will pass through the midpoints of the four remaining edges; those midpoints determine a square whose edges have half the length of the tetrahedron's edges (since each edge of the square is a "mid-line" of one of the tetrahedron's faces). And clearly the plane cuts the volume of the tetrahedron in half.
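Carrying out the computation with the formula from the NB, the solid is half of a regular tetrahedron of edge $2s$:
$$V = \frac12\cdot\frac{\sqrt2}{12}\,(2s)^3 = \frac12\cdot\frac{2\sqrt2}{3}\,s^3 = \frac{\sqrt2}{3}\,s^3.$$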
Here's a cool animation (though without the "balancing on an edge" thing):
Image credit: http://www.mmmlib.com/anne%20tyng%20maryland.html .
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Show that $-3$ is a primitive root modulo $p=2q+1$ This was a question from an exam:
Let $q \ge 5$ be a prime number and assume that $p=2q+1$ is also prime. Prove that $-3$ is a primitive root in $\mathbb{Z}_p$.
I guess the solution goes something like this:
Let $k$ be the multiplicative order of $-3$ modulo $p$. Using Euler's theorem we see that: $(-3)^{2q} \equiv1\ (mod\ p)$.
Hence $k\ |\ 2q$, which means (since $q$ is prime) that $k=1,\ 2,\ q,$ or $2q$. Obviously $k \neq 1$ and $k \neq 2$, since otherwise we'll have $[-3]_p=[1]_p$ and $[(-3)^2]_p = [9]_p = [1]_p$, respectively, which is wrong since $ p \ge 11$. What remains is to show that $k \neq q$. Unfortunately, I could not figure out how to show this.
|
Hint: If $(-3)^q \equiv 1 \pmod p$ then $-3$ is a quadratic residue modulo $p$ (because if $\xi$ is a primitive root and $(\xi^k)^q \equiv 1$ then $k$ is even and $\xi^k$ is a quadratic residue)
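One way to finish from the hint, for the record: since $q\ge5$ is prime, $3\nmid q$; and $q\equiv 1\pmod 3$ would force $p=2q+1\equiv 0\pmod 3$, impossible for a prime $p\ge 11$. Hence $q\equiv 2\pmod 3$ and $p\equiv 2\pmod 3$. By quadratic reciprocity $\left(\frac{-3}{p}\right)=\left(\frac{p}{3}\right)$, which equals $1$ only when $p\equiv 1\pmod 3$. So $-3$ is a non-residue mod $p$, ruling out $k=q$; therefore $k=2q$ and $-3$ is a primitive root.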
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Should $f(x) \equiv 0$ if $0\le f'(x)\le f(x)$ and $f(0)=0$? Assume $f(x)$ is a real-function defined on $[0,+\infty)$ and satisfies the followings:
*
*$f'(x) \geq 0$
*$f(0)=0$
*$f'(x) \leq f(x)$
Should we always have $f(x) \equiv 0$ ? Thanks for any solution.
|
As $f(0)=0$ and for all $x\geq0$, $f'(x)\geq0$, we have $f(x)\geq0, \forall x\geq0$.
Define
$$g(x)=\mathrm e^{-x}f(x)$$
and compute
$$g'(x)=\mathrm e^{-x}\left(f'(x)-f(x)\right)\leq 0$$
as $g(0)=0$ we have $g(x)\leq 0$ for all $x\geq0$.
Therefore we have
$$f(x)=\mathrm e^xg(x)\leq 0,\quad \forall x\geq0,$$
which, combined with $f(x)\geq 0$ from the first step, proves $f\equiv0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
}
|
Why is $\vec a\downarrow\vec c=\vec a\downarrow(\vec b\downarrow\vec c)$? I know how to draw a driagram to show that it's true, but I can't really explain it mathematically / algebraically. This is about projection vectors, if the notation is unclear.
EDIT:
This is the notation I've learned, but after having read about more standard notation, I see that more people would prefer it this way:
$$Proj_\vec c\vec a=Proj_{proj_\vec c\vec b}\vec a$$
|
The projection can be written as $$\vec a \downarrow \vec c=\frac{\vec c\cdot \vec a}{\vec c \cdot \vec c}\vec c$$
What do you get if you let $\vec c=\vec b \downarrow \vec c$?
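Carrying out that substitution (assuming $\vec b\downarrow\vec c\neq\vec 0$): write $\vec w=\vec b\downarrow\vec c=t\,\vec c$ with $t=\frac{\vec c\cdot\vec b}{\vec c\cdot\vec c}\neq 0$. Then
$$\vec a\downarrow\vec w=\frac{\vec w\cdot\vec a}{\vec w\cdot\vec w}\,\vec w=\frac{t\,(\vec c\cdot\vec a)}{t^2\,(\vec c\cdot\vec c)}\,t\,\vec c=\frac{\vec c\cdot\vec a}{\vec c\cdot\vec c}\,\vec c=\vec a\downarrow\vec c,$$
i.e. projecting onto any nonzero multiple of $\vec c$ is the same as projecting onto $\vec c$.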
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Financials Maths- Credited Interest You invest £1,000 in an account for 5 years at 9% pa nominal. How much will you get at the end of the 5 years if the interest is is credited:
a) annually; b) 6 monthly; c) 3 monthly; d)monthly?
Approximate how much you would get if interest was credited daily. Which method would you prefer.
So I have managed to do it for a) annually using $1000(1.09)^5 = 1538.62$
However I have no idea what to for parts b, c or d. I originally thought that I could do $1000(1.09)^{10}$ because of it being split into 6 months but it did not come out right.
Any help would be greatly appreciated.
|
Hint: You are correct for the annual calculation. If the interest is credited every half year, you get half the amount of interest ($4.5\%$) each time, so it would be $1000(1.045)^{10}$ You should find that shorter crediting periods are better.
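A quick sketch of the arithmetic in Python (not part of the original hint; the names are illustrative):

    P, r, years = 1000.0, 0.09, 5
    for m in (1, 2, 4, 12, 365):              # crediting periods per year
        amount = P * (1 + r / m) ** (m * years)
        print(m, round(amount, 2))            # 1538.62, 1552.97, ..., ~1568.2

The daily figure is close to the continuous-compounding bound $1000\,e^{0.45}\approx 1568.31$.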
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/648888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Extending a holomorphic function defined on a disc Suppose $f$ is a non-vanishing continous function on $\overline{D(0,1)} $ and holomorphic on ${D(0,1)} $ such that $$|f(z) | = 1$$ whenever $$|z | = 1$$
Then I have to prove that f is constant.
We can extend $f$ to all $\mathbb{C}$ by setting $$f(z) = \frac{1}{\overline{f(\frac{1}{\bar{z}})}}$$ and the resulting function is holomorphic on ${D(0,1)} \ $, $\mathbb{C} - \overline{D(0,1)}$ and continous on $\partial D(0,1)$.
But how can we say that the resulting function is holomorphic in $z \in \partial D(0,1)$ ?
|
To prove that $f$ is identically constant, you'd better employ the maximum principle, according to which the maximum of $|f|$ on $\overline{D}$ equals 1. But $f$ is non-vanishing on $\overline{D}$, hence $\frac{1}{f}$ is holomorphic on $D$ while $\bigl|\frac{1}{f}\bigr|=1$ on $\partial{D}$. By the same maximum principle now follows that the minimum of $|f|$ on $\overline{D}$ also equals 1. Hence $|f|=1$ on $\overline{D}$, whence readily follows the desired result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Bijection from a set of functions to a Cartesian product of sets Let S be an arbitrary set. Let $F=\{f:\{0,1\}\to S\}$ be the set of functions from $\{0,1\}$ to S. Construct a bijection $F→S \times S$.
I think I would define the function $a(f)=(f(0),f(1))$ because we know that both $f(0)$ and $f(1)$ are in $S$, but I don't know where to go from there.
|
Consider the following mapping $\phi:F\to S\times S$: $$\phi(f)=(f(0),f(1))\quad\forall f\in F.$$ It's injective, since if $\phi(f)=\phi(g)$ for $f,g\in F$, then $(f(0),f(1))=(g(0),g(1))$. This means that $f(0)=g(0)$ and $f(1)=g(1)$, so that $f=g$ identically. It's also surjective, since if $(s,t)\in S\times S$, then construct a function $f:\{0,1\}\to S$ as $f(0)=s$ and $f(1)=t$. Then, $f\in F$ and $\phi(f)=(s,t)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Irrationals: A Group? I understand that the set of irrational numbers with multiplication does not form a group (clearly, $\sqrt{2}\sqrt{2}=2$, so the set is not closed). But is there a proof or a counter-example that the irrationals with addition form (or do not form) a group?
Thank you!
Edit: In particular, I am wondering if the set is closed with respect to addition.
|
To speak to the spirit of your question a bit: the rational numbers are a so-called normal subgroup of the reals (since the reals form an abelian group, and all subgroups of an abelian group are normal), so we can talk about the quotient group of the reals by the rationals, $\mathbb{R}\ /\ \mathbb{Q}$. Each element of this group is a set of the form $\{r+q, q\in\mathbb{Q}\}$, and the sum of two elements $s_0=\{r_0+\mathbb{Q}\}$ and $s_1=\{r_1+\mathbb{Q}\}$ is $s_0+s_1 = \{r_0+r_1+\mathbb{Q}\}$; you can convince yourself that addition of two elements doesn't depend on which representative we choose for a given element. The identity element of this group is just the rationals $\mathbb{Q}$ themselves. This is a complicated object; for instance, any collection of representatives is a so-called Vitali set, and is non-measurable. (And just building such a collection of representatives requires the Axiom of Choice!)
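As for the edit's direct question about closure: the irrationals are not closed under addition, and they do not even contain the identity $0$. For instance
$$\sqrt2+(1-\sqrt2)=1\notin\mathbb{R}\setminus\mathbb{Q},\qquad \sqrt2+(-\sqrt2)=0\notin\mathbb{R}\setminus\mathbb{Q},$$
so the irrationals with addition do not form a group.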
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Set Theory and Equality Let $A$ and $X$ be sets. Show that $X\setminus(X\setminus A)\subseteq A$, and that equality holds if and only if $A\subseteq X$.
I understand why this holds but am not sure how to 'show' this. Any advice would be appreciated.
|
Start with the definition/alternate expression for "setminus": $A\setminus B=A\cap B^c$, where $B^c$ is the complement of $B$:
$$X\setminus(X\setminus A)=X\cap (X\cap A^c)^c = X\cap (X^c \cup A) = (X\cap X^c) \cup (X \cap A)= X\cap A$$
Does that help?
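To finish from there: $X\cap A\subseteq A$ always holds, which gives $X\setminus(X\setminus A)\subseteq A$; and $X\cap A=A$ holds if and only if every element of $A$ lies in $X$, i.e. if and only if $A\subseteq X$.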
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find greatest value of $y(x) = (0.9^x)(300x + 650)$ Question and attempt
$y(x) = (0.9^x)(300x + 650)$
Estimate at what x value that y reaches its maximum value
The only way I could think of would be to use derivatives, so I tried it:
$y'(x) = (0.9^x)' \times (300x + 650) + 0.9^x \times (300x + 650)'$
$=$ [$\ln(0.9) \times 0.9^x \times 1$] $\times (300x + 650) + 0.9^x \times 300$
So then I subbed in 0 to find where the turning point is:
$y(0) =\ln0.9 \times 0.9^0 \times (300(0) + 650) + 0.9^0 \times 300$
$=\ln0.9 \times 650 + 300$
$\approx 231.51$
The problem is that $231.51$ is not the correct answer.
The real answer
This is a table of values of the answer (see that it increases up to somewhere between x = 7 and 8, then decreases):
x = 5: 1269.5535
x = 6: 1302.03045
x = 7: 1315.316475
x = 8: 1312.9249905
x = 9: 1297.85863815
Here is what the graph actually looks like (which tells me there is a turning point):
And here is the picture of the vertex:
So the x value at which the y value is at its maximum value is $7.3$
I'd like to know where I went wrong with the derivative idea, but I still would encourage answers describing any other way to solve the problem.
|
You were asked to give an approximation, not an exact answer. If you take the derivative and compare it to zero, you get the rather difficult expression:
$$(300x+650)[\ln(0.9)×0.9^x]+0.9^x×300=0$$
We know $\ln (1+x) \sim x$ for small $x$. So, we get:
$$(300-30x-65)0.9^x=0$$
This will only be zero when $235=30x$, or when $x$ is approximately $7.8$
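For the exact critical point, a quick check in Python (a sketch, not part of the approximation above):

    import math
    # solve y'(x) = 0 exactly: (300x + 650) ln(0.9) + 300 = 0
    x_star = (-300 / math.log(0.9) - 650) / 300
    print(x_star)   # ~7.3246, matching the vertex near x = 7.3

As for where the original attempt went wrong: substituting $x=0$ into $y'$ computes the slope of the curve at $x=0$ (about $231.5$), whereas the turning point is found by solving $y'(x)=0$ for $x$.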
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Help with recurrence relation It 's been a long time since I touched this kind of math , so please help me to solve the relation step by steps :
$V_k = (1+i)*V_{k-1}+P$
I know the answer is $V_k = (P/i)*((1+i)^k-1) $
Thanks in advance.
|
If you want a methodical way to find the answer that you don't know yet, you could use generating functions. Defining $F(x) = \sum_{k=0}^\infty V_k x^k$, and assuming the initial condition is $V_0=0$, the recursion gives
\begin{align*}
F(x) &= V_0 x^0 + \sum_{k=1}^\infty \big( (1+i)V_{k-1} + P \big) x^k \\
&= 0 + (1+i)x \sum_{k=1}^\infty V_{k-1} x^{k-1} + Px \sum_{k=1}^\infty x^{k-1} \\
&= (1+i)x \sum_{k=0}^\infty V_k x^k + Px \sum_{k=0}^\infty x^k \\
&= (1+i)x F(x) + \frac{Px}{1-x}.
\end{align*}
Solving for $F(x)$ gives $\big(1 - (1+i)x \big) F(x) = Px/(1-x)$, and so
\begin{align*}
F(x) &= \frac{Px}{(1 - (1+i)x )(1-x)} \\
&= \frac Pi \bigg( \frac1{(1-(1+i) x)}-\frac1{1-x} \bigg) \\
&= \frac Pi \bigg( \sum_{k=0}^\infty ((1+i) x)^k - \sum_{k=0}^\infty x^k \bigg).
\end{align*}
Comparing coefficients of $x^k$ then yields $V_k = \frac Pi \big( (1+i)^k - 1 \big)$.
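A quick numerical check of the closed form (a sketch, assuming $V_0=0$ and illustrative values of $i$ and $P$):

    i, P = 0.09, 100.0
    V = 0.0
    for k in range(10):            # V_k = (1+i) V_{k-1} + P
        V = (1 + i) * V + P
    closed = (P / i) * ((1 + i) ** 10 - 1)
    print(V, closed)               # both ~1519.29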
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Difference between a "topology" and a "space"? What do we mean when we talk about a topological space or a metric space? I see some people calling metric topologies metric spaces and I wonder if there is some synonymity between a topology and a space? What is it that the word means, and if there are multiple meanings how can one distinguish them?
|
I think of a "space" as the conceptually smallest place in which a given abstraction makes sense. For example, in a metric space, we have distilled the notion of distance. In a topological space, we are in the minimal setting for continuity.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 0
}
|
Show $\forall x \exists y F(x,y)$ does not imply $\exists y \forall x F(x,y)$ Show:
$\exists y \forall x R(x,y) \rightarrow \forall x \exists y R(x,y)$
$\forall x \exists y F(x,y)$ does not imply $\exists y \forall x F(x,y)$
How do proofs of this nature usually work? When I try to prove the first one by saying:
$\mathcal{M} \models R(n,m)$ for all $n$ and some $m$ in the domain, so $\mathcal{M} \models \forall x \exists y R(x,y)$, I don't see why the same can't be applied for the second problem with $F(x,y)$. Can someone help me to intuitively understand these operations?
|
Consider the sentences $\forall x\exists y\ x=y$ and $\exists y\forall x\ x=y$ in a domain with two elements.
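Spelling the hint out: in a domain $\{a,b\}$ with $a\neq b$, the sentence $\forall x\exists y\ x=y$ is true (given $x$, take $y:=x$), while $\exists y\forall x\ x=y$ is false, since no single $y$ equals both $a$ and $b$. The point is that in $\forall x\exists y$ the witness $y$ may depend on $x$, whereas in $\exists y\forall x$ it may not.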
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 2
}
|
Proving that Spec(α) and Spec(ß) partition positive integers iff α and ß are irrational and 1/α + 1/ß = 1 From Concrete Math, problem 3.13 asks:
"Let α and ß be positive real numbers. Prove that Spec(α) and Spec(ß) partition positive integers if and only if α and ß are irrational and 1/α + 1/ß = 1"
The solution claims:
"If they form a partition, the text's formula for N(α,n) implies that 1/α + 1/ß = 1, because the coefficients of n in the equation N(α,n) + N(ß,n) = n must agree if the equation is to hold for large n" (it goes on to the next part of the proof but I only care about this part)
In this chapter, they define Spec(α) to mean an infinite multiset of integers: $\{\lfloorα\rfloor,\lfloor2α\rfloor, ...\}$, and define N(α,n) to be the number of elements in Spec(α) that are $\le$ n. They show that N(α,n) = $\lceil(n+1)/α\rceil - 1$. They also show that a necessary condition for Spec(α) and Spec(ß) to partition the positive integers is N(α,n) + N(ß,n) = n
I can write out the equation N(α,n) + N(ß,n) = n by substituting that equation:
$$\lceil(n+1)/ß\rceil + \lceil(n+1)/α\rceil - 2 = n$$
then converting to floor
$$\lfloor (n+1)/ß\rfloor + \lfloor (n+1)/α\rfloor = n$$
then splitting the floor into the fractional and actual part of its arguments
$$n(1/ß + 1/α) + 1/α + 1/ß - \{(n+1)/ß\} - \{(n+1)/α\} = n$$
...but, at that point, I don't see how I can conclude that 1/α + 1/ß = 1. I see that 1/α + 1/ß appears as a coefficient of n, but there's the problem of the fractional stuff on the right. For their claim to be correct, the fractional parts would have to add up to 1 to cancel out the 1 from 1/α + 1/ß. I know that the fractional parts both have values less than one (since they are fractional parts of real numbers), but I don't see how to conclude that they sum to 1. Can I just claim that, since n appears on the right hand side with a coefficient of 1, that the coefficient for n on the left must be 1?
|
The hint is in the phrase "for large $n$".
As you proved, we must have that for all $n$,
$$n(1/\beta + 1/\alpha) + 1/\alpha + 1/\beta - \{(n+1)/\beta\} - \{(n+1)/\alpha\} = n.$$
Let us denote $1/\beta + 1/\alpha$ by $c$, to rewrite this as
$$nc + c - \{(n+1)/\beta\} - \{(n+1)/\alpha\} = n.$$
As $0 \le \{x\} < 1$ for all $x$ and here we have $\{(n+1)/\beta\} + \{(n+1)/\alpha\} = nc + c - n$, this means that
$$0 \le nc + c -n < 2.$$
Now consider what happens for very large $n$. If $c < 1$, then $nc + c - n = c - n(1-c)$ eventually becomes negative (in fact, it becomes negative for $n > \frac{c}{1-c}$), violating the "$0 \le$" inequality. And if $c > 1$, then for very large $n$, we'll have $nc + c - n > 2$: specifically, for $n > 2/(c-1)$, we'll have $nc + c - n > nc - n > 2$. So we must have $c = 1$.
Another way of writing the same thing is to divide $0 \le nc + c - n < 2$ by $n$ and say that $$0 \le c + \frac{c}{n} - 1 < \frac{2}{n},$$
and taking the limit as $n \to \infty$ gives $c - 1 = 0$, as it is sandwiched between the left- and right-limits, both equal to $0$.
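A numerical illustration of the theorem (a sketch in Python, using the classical pair $\alpha=\sqrt2$, $\beta=2+\sqrt2$, which satisfies $1/\alpha+1/\beta=1$):

    import math
    alpha = math.sqrt(2)
    beta = alpha / (alpha - 1)            # = 2 + sqrt(2)
    N = 30
    A = {math.floor(k * alpha) for k in range(1, N + 1)}
    B = {math.floor(k * beta) for k in range(1, N + 1)}
    print(sorted(A & B))                  # []  (the spectra are disjoint)
    print(sorted(x for x in A | B if x <= N) == list(range(1, N + 1)))  # True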
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
Finding limit of a sequence in product form \begin{equation} \prod_{n=2}^{\infty} \left (1-\frac{2}{n(n+1)} \right )^2 \end{equation}
I need to find limit for the following product..answer is $\frac{1}{9}$.
I have tried cancelling out but can't figure out.
Its a monotonically decreasing sequence so will converge to its infimum..
how to find the infimum?
|
Note that $1-\frac{2}{k(k+1)}=\frac{(k-1)(k+2)}{k(k+1)}$, so the partial products telescope:
$$\prod_{k=2}^{n}\left(1-\frac{2}{k(k+1)}\right)=\prod_{k=2}^{n}\frac{k-1}{k}\cdot\prod_{k=2}^{n}\frac{k+2}{k+1}=\frac{1}{n}\cdot\frac{n+2}{3}=\frac{n+2}{3n}\longrightarrow\frac13.$$
Squaring, the partial products of the given product are $\left(\frac{n+2}{3n}\right)^2$, and the limit is $1/9$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
open map equivalent definition $f : (X,\tau_X) \to (Y,\tau_Y) $ continuous and surjective. I need to prove that
$f$ is open $\Longleftrightarrow \forall U\in\tau_X, f^{-1}(f(U))\in \tau_X$
Proof:
$\implies)$ By definition, $f$ is open if $\forall U\in\tau_X, f(U)\in\tau_Y$. As f is continuous, $f^{-1}(f(U))\in \tau_X$
$\Longleftarrow)$ Now we suppose $\forall U\in\tau_X, f^{-1}(f(U))\in \tau_X$
I know that $U\subseteq f^{-1}(f(U))\in \tau_X$ and the equality holds when f is injective, but I just know that f is surjetive
How can I continue?
Thank you for your help and time
|
This isn't true. Let $X$ denote the real line with the discrete topology, let $Y$ denote the real line with the usual topology, and let $f : X \to Y$ be the identity map. Clearly $f$ is a continuous surjection, and is not an open mapping. However $f^{-1} [ f [ U ] ] = U$ for all (open) $U \subseteq X$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can a function have overlapping range? Consider the function $f : \mathbb{R}\rightarrow\mathbb{R}$ defined by
$$f(x)=\left\{\begin{align}x^2 - 2 & \text{if}\,x > 0,\\
x - 1 & \text{if}\, x \le 0.\end{align}\right.$$
Find a right inverse of $f$.
The answer that I came up to this question was this:
$$f(g(x)) = x$$
$$g(x) = \left\{\begin{align} \sqrt{x+2} & \text{if}\, x > -2,\\
x + 1 & \text{if}\, x\le -1.\end{align}\right.$$
However, the two branches overlap on the range $-2 < x \le -1$. Since the definition of a function is that each $x$ maps to only one $y$, if $x = -1$ then there are potentially two values of $y$ that it could map to: $1$ and $0$. So is this therefore not a function? How can it not be the right answer?
|
You are right in that your $g$ is not a function.
In the overlapping range, you need to decide which one of the two branches to choose.
Solution:
For example, you can choose
$$g(x)=\left\{\begin{align}\sqrt{x+2} & \text{if}\,x>-2\\
x+1 & \text{if}\,x\le -2\end{align}\right.$$
Now suppose that $x>-2$. Then $\sqrt{x+2}>0$ and therefore $f(g(x))=f(\sqrt{x+2})=(\sqrt{x+2})^2-2=x$.
Suppose that $x\le -2$. Then $x+1\le -1\le 0$. So $f(g(x))=f(x+1)=(x+1)-1=x$.
Altogether, $g$ is a right inverse of $f$.
In the range from $-2$ to $-1$ you are free to choose any of the two branches.
Every choice (even if it's different at every point, i.e. if you jump between branches all the time) will give you a right inverse.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Picking a correct answer in a multiple choice test answering randomly I was reading a book about techniques to pass a multiple choice test and I found a passage that seems strange to me.
Every question has 5 possible answers; you get 1 point for each correct answer, 0 for each unanswered question, and $-0.25$ for each wrong answer.
The book reasoning was like this:
if you choose your answers randomly you have a probability of 1/5 to
get the correct answer, so if you randomly answer five question, one
will be correct.
I feel something is wrong about this deduction. I mean, since each question will be different, how could you be sure that randomly answering five questions will get you one correct answer?
I think the correct deduction would be something like this:
if you choose your answers randomly you have a probability of 1/5 to
get the correct answer, so if you randomly answer the same question for five times, you will get the correct answer with one of your answers.
The question is: is the deduction contained in the original passage from the book correct from a probabilistic standpoint?
|
No, it's technically incorrect as you can't say "one WILL be correct." It should read something like "you are expected to get one fifth of the questions right"
However, if I may deduce where the author is going with this, he will probably say something of the sort: "So, you will get one question right ($+1$) and four incorrect answers ($4 \times -0.25 = -1$); in total you get $0$ points. This way, it doesn't matter if you completely guess or skip a question, etc." If the book is following this reasoning, the conclusion is correct, despite the poor wording.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/649992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
For a non-constant entire function which property is possible? Let $f$ be a non-constant entire function.Which of the following properties is possible for each $z \in \mathbb{C}$
$(1) \ \ \mathrm{Re} f(z) =\mathrm{Im} f(z)$
$(2) \ \ |f(z)|<1$
$(3)\ \ \mathrm{Im} f(z)<0$
$(4)\ \ f(z) \neq 0 $
I tried options $(2)$ and $(3)$. For $(2)$: $f$ is entire and bounded, so by Liouville's theorem it has to be constant, which contradicts the hypothesis. For $(3)$: if the imaginary part or real part is bounded above or below, then the function has to be constant. How do I eliminate $(1)$ and $(4)$? I don't know which are the right options.
Here it is possible that there is more than one correct answer. Please help me; thanks in advance.
|
Hints: For $1,$ note that $f$ can't be $0$ everywhere, nor can its derivative. Hence, there is some non-empty open set that $f$ maps to an open set. (Why?) Can the line $\operatorname{Re}(w)=\operatorname{Im}(w)$ contain any non-empty open set?
For $4,$ try to think of an example of a non-constant entire function that is never $0.$ (A basic example should do.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Small question on relative homology if $Y\subset X$, what is $\ker \delta$, where $\delta: H_k(X,Y)\rightarrow H_{k-1}(Y)$?
is it $\ker \delta = H_k(X,Y)$ ?
$\delta$ is the usual connecting homomorphism from the long exact sequence of relative pairs
Please help; thank you.
|
There's not really much more I can say than is on Wikipedia or can be found in any intro to algebraic topology text. The map $\delta$ which appears in the long exact sequence of a relative pair $(X,Y)$ as $$\cdots \to H_n(Y) \to H_n(X) \stackrel{f_*}{\to} H_n (X,Y) \stackrel{\delta}{\to} H_{n-1}(Y) \to \cdots$$ is the map which takes a relative cycle $\alpha$ in $H_n(X,Y)$ to its boundary which is in $H_{n-1}(Y)$ because $\alpha$ is a cycle relative to $Y$.
The kernel of $\delta$ is simply those relative cycles which have zero boundary in $Y$. By the exactness of the above sequence, we can also say that the kernel of $\delta$ is equal to the image of $f_*$, which is induced from the usual quotient homomorphism $f\colon C_n(X)\to C_n(X)/C_n(Y)$ appearing in the relative short exact sequence of degree $n$, after passing to homology.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Which methods different than the natural one can one devise to confirm that the limit is $\;2/\pi\;$? Good evening,
I have found this exercise (https://math.stackexchange.com/questions/633509/which-methods-different-than-the-natural-lim-n-to-infty-frac-cos1-cos)
What is the limit of:
$$\lim_{n\to\infty}\dfrac{|\cos{1}|+|\cos{2}|+|\cos{3}|+\cdots+|\cos{n}|}{n}$$
I am trying to solve it (without probability; I have no knowledge of these subjects) but I didn't succeed. Is it possible to solve it? If so, how can I find the limit?
Thanks in advance.
|
Hint: If one replaces $n$ with $n\bmod {2\pi}$, the quotient turns into something like a "distorted" Riemann sum. How distorted can it be? What does it mean if $n_1\bmod {2\pi}$ and $n_2\bmod{2\pi}$ are "unusually" close to each other?
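A numerical sanity check of the claimed value $2/\pi\approx 0.6366$ (a sketch, certainly not a proof):

    import math
    n = 10**6
    print(sum(abs(math.cos(k)) for k in range(1, n + 1)) / n)   # ~0.6366
    print(2 / math.pi)                                          # ~0.6366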
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Group with an even number of elements. If $G$ is a group such that $|G|=2n$, prove that there's an odd number of elements of order 2, and hence there's an element which is its own inverse, besides the identity.
If we consider all the elements of $G$ that have order different than 2, we have two cases: 1) the elements with order greater than 2, and 2) elements with order 1.
For 1). Let $A=\{g\in G: \operatorname{ord}(g)>2\}$, the set of elements of order greater than $2$. Then if $g\in A$ then $g^{-1}\in A$ (should I prove that the inverse has the same order?), and also $e\not\in A$. Then there's an even number of elements in $A$, because the elements pair off as $\{g,g^{-1}\}$ with $g\not = g^{-1}$. So $|A|=2m$ for some $m$.
For 2). The only element of order 1 is the identity, so $B=\{g\in G: g^k=e, k=1\}=\{e\}$ then $|B|=1$.
So $|A|+|B|=2m+1$ like we wanted, but if that happens, the second part of the problem doesn't make sense.
So doing this in a different way, instead of directly counting the elements of B, we consider the complement of A, so $|A^c|=2k$, that way $|G|=|A|+|A^c|=2n$ checks out right. But $A^c=\{g\in G: \operatorname{ord}(g)\leq 2\}=\{g\in G: \operatorname{ord}(g)\in\{1,2\}\}$, so because the identity is the only element of order 1 and $|A^c|$ is even, there must be at least one element $g_o$ in $A^c$ of order 2; that means $g_o^2=g_o*g_o=e$, hence $g_o=g_o^{-1}$.
Is this right? What am I doing wrong in the first try?
I know this problem has been solved before in this site, but I really wanted to make it on my own, sorry if this is redundant.
|
Your first try is correct. By your proof, you know the number of elements of order 2 is odd, hence can not be zero. This means there exists at least one, say $g$, which is just the element you wanted in the second part.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Convex function in its interior Let $f$ be a convex function on an open subset of $R^{n}$. How to prove $f$ is continuous in the interior of its domain.
For $n=1$, let $f$ be convex on the set $(a,b)$ with $a<s<t<u<b$
Then using the inequality $\frac{f(t)-f(s)}{t-s} \leq \frac{f(u)-f(s)}{u-s} \leq \frac{f(u)-f(t)}{u-t} $
We can prove it for $n=1$.
|
You can find a proof in standard lecture notes on convex analysis. The key points are that a convex function is locally bounded on the interior of its domain, and that a locally bounded convex function is locally Lipschitz (hence continuous) there; the one-dimensional slope inequality you quoted is the main ingredient.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to find this ODE solution $\frac{f(x)}{f(a)}=\left(\frac{x}{a}\right)^2e^{(x-a)\left(\frac{f'(x)}{f(x)}-\frac{2}{x}\right)}$ Let $a>0$ be a constant, and let the function $f(x)$ satisfy the following ODE:
$$\dfrac{f(x)}{f(a)}=\left(\dfrac{x}{a}\right)^2e^{(x-a)\left(\dfrac{f'(x)}{f(x)}-\dfrac{2}{x}\right)}$$
Find $f(x)$.
Thank you
My try:
$$\dfrac{f(x)}{x^2}=\dfrac{f(a)}{a^2}e^{(x-a)\left(\dfrac{f'(x)}{f(x)}-\dfrac{2}{x}\right)}$$
then
$$\ln{\dfrac{f(x)}{x^2}}-\ln{\dfrac{f(a)}{a^2}}=(x-a)\left(\dfrac{f'(x)}{f(x)}-\dfrac{2}{x}\right)$$
and then I'm stuck. Thank you very much
|
You could turn it into a first order linear differential equation by substituting $\ln(f)=\phi$, since $\phi'=\ln(f)'=\frac{f'}{f}$ appears on the right side.
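Carrying the substitution through, for the record: with $\psi=\ln\frac{f(x)}{x^2}$ the relation becomes $\psi(x)-\psi(a)=(x-a)\psi'(x)$. Setting $h=\psi-\psi(a)$ gives $h(x)=(x-a)h'(x)$, whose solutions are $h=C(x-a)$ for a constant $C$. Therefore
$$f(x)=f(a)\left(\frac{x}{a}\right)^2 e^{C(x-a)},$$
and indeed $\frac{f'}{f}-\frac2x=C$, so this family satisfies the original equation.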
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
what is the minimum number of points in affine plane. What is the minimum number of points in an affine plane?
By the way: Here are the $\textbf{Three Axioms}$ for affine plane.
*
*Given two distinct points $\textbf{P}$ and $\textbf{Q}$, there is only one line passing through them
*Given a point $\textbf{P}$ and a line $\textit{l}$, if $\textbf{P}\not\in\textit{l}$,
there is only one line passing through point $\textbf{P}$ and parallel to line $\textit{l}$
*There exist three points $\textbf{P}$, $\textbf{Q}$, $\textbf{R}$ non-collinear
|
HINT: If $K$ is any field, the vector space $V=K^2$ has always the structure of affine plane.
Now take for $K$ the smallest field you know.
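Spelling the hint out: with $K=\mathbb{F}_2$, the plane $V=\mathbb{F}_2^2$ has exactly $4$ points, $(0,0),(0,1),(1,0),(1,1)$, and its lines are the six $2$-element subsets; the three axioms are easy to verify directly. One can also check that no model with fewer than $4$ points satisfies all three axioms, so the minimum number of points is $4$.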
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Check whether the function is continuous at 0 - what went wrong? I have to check whether the following function is continuous:
$$ \
f:\mathbb{R}\rightarrow \mathbb{R},~f(x)=\left\{
\begin{array}{lll}
e^{1/x} &\text{if} & x < 0,
\\
0 & \text{if}& x \ge 0.
\end{array}\right.
\
$$
Now it is obvious that the function is continuous in $(-\infty, 0)$ as a combination of 2 continuous functions as well as in $(0, \infty)$ as a constant function.
To prove continuity at $0$, I took a sequence that approaches $0$, such as $a_n = \frac{1}{n}$.
Now if $g(x) = e^\frac{1}{x}$ is continuous at $0$ it must be possible to get $\lim_{n\rightarrow \infty}~ g(a_n) = g(0)$. So:
$$e^\frac{1}{x} = \exp\frac{1}{x} \rightarrow g(a_n) = \exp(1/\frac{1}{n}) = \exp(n)$$
$$\rightarrow \lim_{n\rightarrow \infty}~ g(a_n) = \lim_{n\rightarrow \infty}~ \exp(n) = \exp(\infty) \ne g(0)$$
Note: I am approaching $0$ from negative $n$; I just couldn't find a proper way to typeset that in LaTeX.
Now my question is: why is this proof is wrong? The solution should be that the function $f$ is continuous at $0$, but I don't see what went wrong. Also because the function is not defined at $0$, I am not sure whether I could perform the limit to $0$.
Thank you for advice!
FunkyPeanut
|
If $x_n \to 0$ and $x_n<0$ then $\frac{1}{x_n} \to -\infty$ and $e^{1/x_n} \to 0$.
So the left limit is $0$. The right limit at $0$ is evidently $0$. Therefore the function is continuous at $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove that this sequence converge. I am obliged to prove that this sequence:
$\large {a_n=(1+\frac{1}{3})(1+\frac{1}{9})(1+\frac{1}{27})...(1+\frac{1}{3^n})}$
is a convergent sequence.
I mean I was thinking about this and I know that $\large\lim_{n \to \infty} (1+\frac{1}{3^n})=1 $
From this I suspect it is probably a convergent sequence, but I know that this is not a well-written proof and probably does not prove anything. I would be glad for any tips on how to prove this.
|
Take $\log$:
$$\log\lim_{n\to\infty} a_n = \lim_{n\to\infty}\log a_n = \lim_{n\to\infty}\sum_{k=1}^n\log(1+{1\over 3^k}) = \sum_{k=1}^\infty\log(1+{1\over 3^k});$$
now, as
$$\lim_{k\to\infty}{\log(1+{1\over 3^k})\over{1\over 3^k}}=1,$$
the behavior of $\sum\log(1+{1\over 3^k})$ is the same as the behavior of $\sum {1\over 3^k}$, and this series is convergent (geometric with ratio $<1$).
(Edited for greater clarity)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is this expression a quadratic form I have a matrix expression that basically is of the form:
\begin{equation}
tr(B X BX )
\end{equation}
where $B$ and $X$ are nonsquare matrices: $B$ is $p \times n$, $X$ is $n \times p$.
It seems to me this trace expression is a quadratic form because I got it as part of a longer matrix expression which represents the Hessian term of a Taylor series approximating a matrix function. However, I can't see to get it into the form:
\begin{equation}
\text{vec}(X)^T [H] \text{vec}(X)
\end{equation}
Where $[H]$ is the square matrix representation of this quadratic form. I want to get it into this form so I can get the eigenvectors of this matrix $[H]$. $\text{vec}(X)$ is the vector of length $pn$ where the columns of $X$ are on top of each other.
Can someone help me confirm whether this trace expression can indeed be converted into an explicit matrix representation?
I know that I can make this into:
\begin{equation}
\text{vec}(X^T)^T (B^T \otimes B) \text{vec}(X)
\end{equation}
But this is not what I want because it gives me $\text{vec}(X^T)$ on the left! It seems so close yet so far.
Thanks.
|
You are almost there. Notice that the elements of $\mathrm{vec}(X^T)$ are just a reordering of $\mathrm{vec}(X)$. The two are related by a permutation matrix that is sometimes known as a stride permutation. If $X\in \mathbb{R}^{m\times n}$ then the stride permutation matrix $L_m^{mn}$ satisfies the equation $L_m^{mn}\mathrm{vec}(X)=\mathrm{vec}(X^T)$. Therefore you have $\mathrm{trace}(BXBX)=\mathrm{vec}^\top(X)\left((L_m^{mn})^\top (B^\top\otimes B)\right)\mathrm{vec}(X)$. There is also another interesting way to obtain this result that will make future calculations simpler. For matrices $A\in\mathbb{R}^{m_1\times n_1}$ and $B\in\mathbb{R}^{m_2\times n_2}$
define the box product
$A\boxtimes B\in\mathbb{R}^{(m_1m_2)\times(n_1n_2)}$
by
$$ (A\boxtimes B)_{(i-1)m_2+j,(k-1)n_1+l} = a_{il}b_{jk} "=
(A\boxtimes B)_{(ij)(kl)}" $$
then $I_m\boxtimes I_n = L_{m}^{mn}$ and you can also write
$$\mathrm{trace}(BXBX)=\mathrm{vec}^\top(X)(B\boxtimes B)\mathrm{vec}(X)$$
The box-product essentially behaves like the Kronecker product and satisfies the following properties:
\begin{eqnarray}
A\boxtimes(B\boxtimes F) &=& (A\boxtimes B)\boxtimes F \\
(A\boxtimes B)( C\boxtimes D) &=& (A D)\otimes(BC)\\
( A\boxtimes B)^\top &=& B^\top\boxtimes A^\top\\
(A\boxtimes B)^{-1} &=& B^{-1}\boxtimes A^{-1}\\
\mathrm{trace}(A\boxtimes B) &=& \mathrm{trace}(AB)\\
(A\boxtimes B)\mathrm{vec}(X) &=& \mathrm{vec}(BX^\top A^\top).
\end{eqnarray}
In addition to these Kronecker and box products can easily be multiplied using the following rules:
\begin{eqnarray}
(A\boxtimes B)(C\otimes D) &=& (AD)\boxtimes(BC)\\
(A\otimes B)(D\boxtimes C) &=& (AD)\boxtimes(BC)\\
(A\boxtimes B)(C\otimes D) &=& (A\otimes B)(D\boxtimes C)\\
(A\boxtimes B)(C\boxtimes D) &=& (A\otimes B)(D\otimes C)
\end{eqnarray}
For two two-by-two matrices the box product can explicitly be written
$$
A\boxtimes B = \begin{pmatrix}
a_{11}b_{11} & a_{12}b_{11} & a_{11}b_{12} & a_{12}b_{12} \\
a_{11}b_{21} & a_{12}b_{21} & a_{11}b_{22} & a_{12}b_{22} \\
a_{21}b_{11} & a_{22}b_{11} & a_{21}b_{12} & a_{22}b_{12} \\
a_{21}b_{21} & a_{22}b_{21} & a_{21}b_{22} & a_{22}b_{22} \\
\end{pmatrix}.
$$
and here's an example stride-permutation
$$
L_2^6=I_2\boxtimes I_3 = \left(
\begin{array}{llllll}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{array}
\right).
$$
Finally, I should mention that there is actually a mistake in the answer. My convention has been to use row-wise concatenation of matrices and the definition above, from which easily follows:
\begin{eqnarray*}
\mathrm{trace}(BXBX) &=& \mathrm{vec}^\top((BXB)^\top)\mathrm{vec}(X)\\
&=& \mathrm{vec}^\top(B^\top X^\top B^\top)\mathrm{vec}(X)\\
&=& \mathrm{vec}^\top(X) B\boxtimes B^\top \mathrm{vec}(X)
\end{eqnarray*}
This answer makes more sense, as the matrix $B\boxtimes B^\top$ is always symmetric, whereas $B\boxtimes B$ is not.
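A numerical sanity check of the Kronecker/commutation-matrix form, as a sketch in Python with the usual column-stacking $\operatorname{vec}$ (the box-product identities above use the row-wise convention and are not re-derived here):

    import numpy as np
    rng = np.random.default_rng(0)
    p, n = 3, 4
    B = rng.standard_normal((p, n))
    X = rng.standard_normal((n, p))
    vec = lambda M: M.flatten(order="F")          # column-stacking vec
    lhs = np.trace(B @ X @ B @ X)
    rhs = vec(X.T) @ np.kron(B.T, B) @ vec(X)     # the question's identity
    # commutation (stride) matrix K with K vec(X) = vec(X^T)
    K = np.zeros((n * p, n * p))
    for j in range(n * p):
        e = np.zeros(n * p); e[j] = 1.0
        K[:, j] = vec(e.reshape((n, p), order="F").T)
    rhs2 = vec(X) @ K.T @ np.kron(B.T, B) @ vec(X)
    print(np.allclose(lhs, rhs), np.allclose(lhs, rhs2))   # True True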
I struggled with questions similar to the one you just made, and the box product has been a great help to me. I hope this was of help to you.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/650995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Continuity of sum of functions Suppose we have that $f_n:\mathbb{R}\rightarrow\mathbb{R}$ is continuous and $f_n\geq0$ for all $n\in\mathbb{N}$. Assume that $f_n(x)\leq1$ for all $x\in[-n,n]$. My question is: Why is the function $f:\mathbb{R}\rightarrow\mathbb{R}$ defined by $f=\sum_{n\in\mathbb{N}}{\frac{f_n}{2^n}}$ continuous? I thought about to use the Weierstrass M-test, but you do not know that all functions are bounded. Can someone help me with this problem?
|
Apply the M-test to the tail of the series $\sum_{k=n}^\infty{f_k\over 2^k}$ on $[-n,n]$: there $0\le f_k\le 1$ for $k\ge n$, so the tail is dominated by $\sum 2^{-k}$ and converges uniformly to a continuous function. Then $f$ = (finite sum of continuous functions on $\Bbb R$) + (continuous tail on $[-n,n]$) is continuous on $[-n,n]$. Since $n$ was arbitrary, $f$ is continuous on $\Bbb R$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
how to find $ \lim\limits_{x \to \infty} \left(\sqrt{x^2 +1} +\sqrt{4x^2 + 1} - \sqrt{9x^2 + 1}\right)$ How can I find this?
$ \lim\limits_{x \to \infty} \left(\sqrt{x^2 +1} +\sqrt{4x^2 + 1} - \sqrt{9x^2 + 1}\right)$
|
Since for any $A>0$
$$\sqrt{A^2 x^2+1}-A|x| = \frac{1}{A|x|+\sqrt{A^2 x^2+1}}<\frac{1}{2A|x|}$$
holds, we have:
$$\left|\sqrt{x^2+1}+\sqrt{4x^2+1}-\sqrt{9x^2+1}\right|=\left|\sqrt{x^2+1}-|x|+\sqrt{4x^2+1}-2|x|-\sqrt{9x^2+1}+3|x|\right|\leq \left|\sqrt{x^2+1}-|x|\right|+\left|\sqrt{4x^2+1}-2|x|\right|+\left|\sqrt{9x^2+1}-3|x|\right|<\frac{1}{|x|}\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}\right),$$
hence the limit is $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Continuous curve, traps itself outside the unit circle. Let's say I have an injective continuous curve $\sigma$ in $\mathbb{C}$, indexed on $[0,\infty)$ and converging to $\infty$. If $\vert \sigma(0)\vert>0$, is it possible that it can trap itself outside the unit circle? By that I mean that there doesn't exist an extension of the curve so that the beginning point is on the unit circle? My intuitive guess is of course no, but I wonder if there is a simple proof that doesn't require more than a first course in topology. I would also appreciate it if someone could confirm that my guess is correct.
thanks for reading.
|
Let's compactify $\mathbb C$ to $S^2=\mathbb C\cup\{\infty\}$, and add the point at $\infty$ to the image of $\sigma$. Your question amounts to whether the complement of a simple arc $\gamma$ in $S^2$ is path-connected. (A simple arc is a homeomorphic image of $[0,1]$). This is equivalent to asking whether the complement of a simple arc in $\mathbb R^2$ is connected, because we can apply a Möbius transformation to $S^2$ to move some point of the complement of $\gamma$ to $\infty$.
The above reformulations are easy but the crux of the matter is still in algebraic topology.
The fact that a Jordan arc does not disconnect the plane is a standard result in algebraic topology, usually proved using homology theory. All texts in algebraic topology have this, but I also like the proof in: Albrecht Dold, A simple proof of the Jordan-Alexander complement theorem, Amer. Math. Monthly 100 (1993), 856-857. A more elementary proof can be found in a Monthly paper by Carsten Thomassen, downloadable from Andrew Ranicki's website. – Robin Chapman Jul 23 '10 at 9:21
Here is a direct link to the paper The Jordan-Schoenflies Theorem and the Classification of Surfaces by Carsten Thomassen, which does not use the language of algebraic topology. The result is Proposition 2.11; its proof uses Lemma 2.10, which itself uses Lemma 2.8, which uses Lemmas 2.3 and 2.4... it would be impractical to reproduce it here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Positive elements of a $C^*$ (MURPHY, ex 2-2). I'm studying "MURPHY, $C^*$-Algebras and Operator Theory" thoroughly and got stuck in the following exercise:
Exercise 2, chapter 2. Let $A$ be a unital $C^*$-algebra.
(a) If $a,b$ are positive elements of $A$, show that $\sigma(ab)\subseteq\mathbb{R}^+=\left\{x\in\mathbb{R}:x\geq 0\right\}$.
(b)If $a$ is an invertible element of $A$, show that $a=u|a|$ for a unique unitary $u$ of $A$. Give an example of an element of $B(H)$ for some Hilbert space $H$ that cannot be written as a product of a unitary times a positive operator.
(c)Show that if $a\in Inv(A)$, then $\Vert a\Vert=\Vert a^{-1}\Vert=1$ if and only if $a$ is a unitary.
Item (b) is easy: the only possibility for $u$ is $u=a(a^{-1})(a^*)^{-1}|a|=(a^*)^{-1}|a|$, which is readily verified to be unitary. For the example, let $a$ be any non-invertible isometry in a Hilbert space $H$ (e.g. $a$ is the unilateral shift in $l^2(\mathbb{Z}_+)$). If $a=up$, $u$ unitary and $p$ positive, then $1=a^*a=p^*u^*up=p^2$, so $p=(1)^{1/2}=1$, hence $a=u$, absurd.
Now, I really have no idea how to proceed with items (a) and (c). If $A$ were abelian, both would be trivial (using the Gelfand representation). I believe that in (c) we have to use the representation given in (b). Actually, to use the Gelfand representation in (c), it suffices to show that $a$ is normal, but I don't see how the hypothesis on the norms would imply that.
|
For (a), you have to use that $\sigma(xy)\cup\{0\}=\sigma(yx)\cup\{0\}$ for any two operators $x,y$. Then
$$
\sigma(ab)=\sigma((ab^{1/2})b^{1/2})\subset\sigma(b^{1/2}ab^{1/2})\cup\{0\}\subset\mathbb R^+.
$$
The point is that $b^{1/2}ab^{1/2}$ is positive.
For (c),
\begin{align}
\max\sigma(a^*a)&=\|a^*a\|=\|a\|^2=1=\|a^{-1}\|^2\\ &=\|(a^{-1})^*a^{-1}\|=\|(aa^*)^{-1}\|=1/\min\sigma(aa^*)=1/\min\sigma (a^*a).
\end{align}
Thus $a^*a $ is positive with spectrum $\{1\} $, so $a^*a=1$. Similarly, $aa^*=1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Nonpiecewise Function Defined at a Point but Not Continuous I make a big fuss that my calculus students provide a "continuity argument" to evaluate limits such as $\lim_{x \rightarrow 0} 2x + 1$, by which I mean they should tell me that $2x+1$ is a polynomial, polynomials are continuous on $(-\infty, \infty)$, and therefore $\lim_{x \rightarrow 0} 2x + 1 = 2 \cdot 0 + 1 = 1$.
All the examples they encounter where it is not correct to simply evaluate at $a$ when $x \rightarrow a$ fall into one of two categories:
*
*The function is not defined at $a$.
*The function is piecewise and expressly constructed to have a discontinuity at $a$.
I'd like to find a function $f$ with the following properties:
*
*$f(a)$ exists
*$f(a)$ is not (obviously) piecewise defined
*$f(x)$ is not continuous at $a$
*$f$ is reasonably familiar to a Calculus I student - trigonometry would be admissible, but power series would not (though they might
still make for interesting reading)
|
There's always $[x]$ (the greatest-integer function, discontinuous at every integer), and the characteristic function $\chi_S(x)$ for any nonempty proper subset $S$ of $\mathbb{R}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 10,
"answer_id": 5
}
|
Rank of a subgroup of $\mathbb{Z}^3$ given a generating set $(2,-2,0)$, $(0,4,-4)$, and $(5,0,-5)$? Is there some standard approach to finding the rank of a subgroup given a generating set?
In particular, I'm considering the subgroup of $\mathbb{Z}^3$ generated by $v_1=(2,-2,0)$, $v_2=(0,4,-4)$, and $v_3=(5,0,-5)$. This is a submodule of a free $\mathbb{Z}$-module of rank $3$, so the subgroup has rank at most $3$. Also, it is clear that any two distinct generating vectors are independent. So the rank is at least $2$. However, $10v_1+5v_2-4v_3=0$, so the generating set is not a basis.
Is there some procedure to determine what the rank is?
|
The submodule generated by $v_i$ is contained in the rank $2$ submodule of $\mathbb Z^3$ consisting of those tuples whose entries add up to zero.
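To turn this into a procedure: a subgroup of $\mathbb{Z}^n$ is free, and its rank equals the dimension of the $\mathbb{Q}$-span of its generators, so one can simply compute the rank of the matrix of generators over $\mathbb{Q}$ (Smith normal form would additionally produce a basis). A quick check, as a sketch in Python:

    import numpy as np
    V = np.array([[2, -2, 0], [0, 4, -4], [5, 0, -5]])
    print(np.linalg.matrix_rank(V))   # 2

Combined with the observation in the question that any two generators are independent, the rank is exactly $2$.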
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is an isomorphism? I'm familiar with the concepts of group isomorphism, ring isomorphism, and graph isomorphism, but it's never been presented to me what an isomorphism is in general: given any X, what is an X isomorphism?
Informally, I understand isomorphism as "preservation of structure", where "preservation" is domain specific. Is there a formal definition?
|
As the comment suggests, looking up the word "morphism" in the context of categories and objects will give you a more rigorous and general idea about isomorphisms. But I always like to think of an isomorphism as something that allows you to make copies of a given "object", be it rings or groups, or topological spaces or manifolds, etc.
Conveniently, you can also look at it as a transformation that preserves the algebraic/geometric/topological structure, or at least the most basic and desired properties pertaining to these.
Edit: The word "copies" might be very misleading, actually. It's more prevalent in the context of "product spaces" than here. But I could not find a better word. Actually, I think it might be quite wrong to say copies. Instead I would say that it allows you to collect spaces which, though not essentially copies of the original space, still basically carry the same "bloodline", if you will. I realise that this is not a technically sound answer, but I was trying to see how best I could put this without resorting to too many technical terms. I apologise if this isn't what you need.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Problem of continuous real valued function Which of the following statements are true?
a. If $f:\mathbb R\to\mathbb R$ is injective and continuous, then it is strictly monotonic.
b. If $f\in C[0,2]$ is such that $f(0)=f(2)$, then there exist $x_1,x_2\in[0,2]$ such that $x_1-x_2=1$ and $f(x_1)=f(x_2)$.
c. Let $f$ and $g$ be continuous real valued functions on $\mathbb R$ such that for all $x\in \mathbb R$ we have $f(g(x))=g(f(x))$. If there exists $x_0\in \mathbb R$ such that $f(f(x_0))=g(g(x_0))$, then there exists $x_1\in\mathbb R$ such that $f(x_1)=g(x_1)$.
|
For (b): let $g(x)=f(x+1)-f(x)$ on $[0,1]$. Then $g(0)+g(1)=f(2)-f(0)=0$, so $g$ either vanishes at an endpoint or takes values of opposite sign; by the intermediate value theorem there exists $c \in [0,1]$ with $g(c)=0$, i.e. $f(c)=f(c+1)$. Take $x_1=c+1$, $x_2=c$.
For (c): suppose $f(x) \neq g(x)$ for all $x$. Then $r(x)=f(x)-g(x)$ is continuous and never zero, hence of constant sign, so $r(f(x))+r(g(x))\neq 0$ for every $x$.
But $r(f(x))+r(g(x))=f(f(x))-g(f(x))+f(g(x))-g(g(x))=f(f(x))-g(g(x))$ (using $f(g(x))=g(f(x))$), and this vanishes at $x_0$ — a contradiction. Hence $f(x_1)=g(x_1)$ for some $x_1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What does $a$ mean in Taylor series formula? I'm trying to code the Taylor summation in MATLAB, Taylor's formula being $$f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n.$$
I've also seen $a$ denoted as $x_0$ in distinct bibliography.
Problem is that I'm not sure how should I evaluate or assign for $a$.
At lecture I studied the exponential function's Taylor representation, $$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!},$$
and I arrived at that same summation by evaluating at $a=0$, which I know gives the Maclaurin formula. But what does it mean to evaluate at a different value, or to let $a=x$?
What should I do in order to code properly Taylor summation for distinct functions?
|
$a$ is the point at which you calculate the derivatives that you plug into the expansion, together with the displacement from $a$ raised to the corresponding power, as you seem to understand yourself.
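As a minimal sketch of the coding side (written in Python for illustration — the same loop ports directly to MATLAB), here is the partial Taylor sum of $e^x$ about a center $a$; since every derivative of $e^x$ equals $e^a$ at the center, the example stays self-contained:

```python
import math

def taylor_exp(x, a, terms):
    """Partial Taylor sum of e^x centered at a.

    Every derivative of e^x equals e^a at the center,
    so the n-th coefficient is e^a / n!.
    """
    return sum(math.exp(a) * (x - a) ** n / math.factorial(n)
               for n in range(terms))

x = 2.0
for a in (0.0, 1.0, 2.0):
    approx = taylor_exp(x, a, terms=10)
    print(f"a={a}: {approx:.10f}  (exact {math.exp(x):.10f})")
```

Note that the closer $a$ is to $x$, the fewer terms are needed; letting $a=x$ makes every term after the first vanish and returns $f(x)$ exactly, which is why $a=x$ is not useful as an approximation scheme.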
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
Why is the tight closure tightly closed? Let $R$ be a commutative noetherian ring containing a field of characteristic $p\gt0.$
For an ideal $I\subset R,$ the tight closure $I^*$ is defined as $$\{f\in R\mid \exists t\in R \text{ lying outside every minimal prime of } R \text{ such that } tf^q\in I^{[q]} \text{ for almost all }q=p^e\},$$ where $I^{[q]}$ is the extended ideal of $I$ under the $e$-fold Frobenius homomorphism $R\to R$; the set of elements lying outside every minimal prime is usually denoted $R^0$. An ideal is said to be tightly closed if it coincides with its tight closure. And my question is
Why is the tight closure of an ideal $I$ tightly closed?
I tried writing out the definitions and see why this holds, but to no avail: there seems to miss something to conclude that $(I^*)^*\subset I^*.$
Any hint is welcomed and thanked, as well as any references.
P.S. I saw this statement in the book Three lectures on commutative algebra.
|
Take $x_1,\dots,x_n$ a system of generators for $I^*$. By definition there exist $c_i\in R^0$ such that $c_ix_i^q\in I^{[q]}$ for $q\gg0$. Set $c=c_1\cdots c_n$. Since $(I^*)^{[q]}$ is generated by the $x_i^q$, this gives $c(I^*)^{[q]}\subset I^{[q]}$ for $q\gg0$.
Now let $x\in (I^*)^*$. There exists $c'\in R^0$ such that $c'x^q\in (I^*)^{[q]}$ for $q\gg0$. Then $cc'x^q\in I^{[q]}$ for $q\gg0$ (note $cc'\in R^0$), so $x\in I^*$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
finding a solution for $m$ given $(1+i)z^2-2mz+m-2=0$ Given the equation: $(1+i)z^2-2mz+m-2=0$, where $z$ is complex and $m$ is a parameter.
For which values of $m$ the equation has one solution?
So my idea was to use: $b^2-4ac=0$ for $ax^2+bx+c=0$
But it leads to difficult computations which I could not finish.
Is there any other way? or any way to solve this question?
Thanks.
|
Your idea makes sense. Since $a=1+i\neq 0$, the equation is genuinely quadratic, and it has exactly one solution if and only if the discriminant $b^2-4ac$ is zero.
$$b^2-4ac=(-2m)^2-4(m-2)(1+i)=4m^2-(4+4i)m+(8+8i)$$
This is a quadratic equation in $m$ which has the two solutions $2i$ and $1-i$.
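Filling in the computation (after dividing by $4$): the roots of $m^2-(1+i)m+(2+2i)=0$ are
$$m=\frac{(1+i)\pm\sqrt{(1+i)^2-4(2+2i)}}{2}=\frac{(1+i)\pm\sqrt{-8-6i}}{2},$$
and since $(1-3i)^2=-8-6i$, this gives $m=\frac{(1+i)+(1-3i)}{2}=1-i$ or $m=\frac{(1+i)-(1-3i)}{2}=2i$.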
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/651964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Graph Theory Vertex Problem Let $G$ be a graph of order $8$ with $V(G)=\{v_1, v_2,...,v_8\}$ such that deg $v_i=i$ for $1 \leq i \leq 7$. What is deg $v_8$.
Any help or hints would be greatly appreciated.
|
Since $\deg v_7=7$ in a graph of order $8$, you must join $v_7$ with all other vertices.
After that, $v_1$ already has its full degree. What can you now say about $v_6$? What happens then to $v_2$? And so on.
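For a quick independent check (a Python sketch using the Havel–Hakimi criterion, not part of the intended pencil-and-paper argument), one can test which values of $\deg v_8$ make the degree sequence realizable by a simple graph:

```python
def is_graphical(seq):
    """Havel-Hakimi test: can seq be the degree sequence of a simple graph?"""
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):
            return False
        # connect the highest-degree vertex to the next d vertices
        for i in range(d):
            seq[i] -= 1
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)
    return True

for d in range(8):
    if is_graphical([1, 2, 3, 4, 5, 6, 7, d]):
        print("deg v8 =", d, "is realizable")  # only d = 4 passes
```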
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Solve the equation $|x-7|-3|2x+1|=0$ This type of equation is unfamiliar to me; I have never seen anything like it, because I have only ever solved equations of the form $|\text{something}|=\text{things}$, never equations that look like $|\text{something}|=|\text{things}|$. If I learn how to solve it, I will be able to solve similar questions. Thank you.
|
$$|x-7|=|6x+3|$$
Then either
$$x-7=6x+3$$
or
$$x-7=-(6x+3).$$
Solve these two equations.
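Carrying the two cases through: $x-7=6x+3$ gives $5x=-10$, i.e. $x=-2$, while $x-7=-6x-3$ gives $7x=4$, i.e. $x=\frac{4}{7}$; both values check in the original equation.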
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 9,
"answer_id": 5
}
|
Derivative of $(\ln x)^{\ln x}$ How can I differentiate the following function? $$f(x)=(\ln x)^{\ln x}.$$ Is it a composition of functions? And if so, which functions?
Thank you.
|
Note that $$a^b = {\left(e^{\ln a}\right)}^b = e^{b\ln a}$$
so $$\left(\ln x\right)^{\ln x} = {\left(e^{\ln\ln x}\right)}^{\ln x}\\
= e^{\ln x\cdot\ln\ln x} = x^{\ln \ln x},$$ either of which you should be able to differentiate with methods you already know.
We can apply this technique generally to calculate the derivative of $f(x)^{g(x)}$:
$$f(x)^{g(x)} = e^{g\ln f}$$
so $$\begin{align}
\frac d{dx} f^g = \frac d{dx} e^{g\cdot\ln f} & = \left(\frac d{dx} \left(g\cdot\ln f\right)\right)e^{g\cdot\ln f}\qquad\text{(chain rule)}\\& = \left(\frac d{dx} \left(g\cdot\ln f\right)\right)f^g \\
& = \left(g'\ln f + \frac {gf'}f\right)f^g\qquad\text{(product rule)}
\end{align} $$
where $f' =\frac{df}{dx}$ and $g' = \frac{dg}{dx}$.
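With $f=g=\ln x$ the formula yields $\frac{d}{dx}(\ln x)^{\ln x}=\frac{\ln\ln x+1}{x}\,(\ln x)^{\ln x}$; a quick symbolic check of this (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = sp.log(x) ** sp.log(x)

derivative = sp.diff(expr, x)
# formula from the answer, with f = g = ln x
formula = (sp.log(sp.log(x)) + 1) / x * sp.log(x) ** sp.log(x)

print(sp.simplify(derivative - formula))  # 0
```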
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Find the value of the expression The expression $ax^2 + bx + 1$ takes the values $1$ and $4$ when $x$ takes the values $2$ and $3$ respectively. How can we find the value of the expression when $x$ takes the value of $4$?
|
Hint $\,\ f(3)=2^2,\ f(2) = 1^2,\ f(0) = (-1)^2\ $ (recall $f(0)=1$), so $\,f(x)-(x-1)^2\,$ is a polynomial of degree at most $2$ with the three roots $\,x=3,2,0$ — hence it is identically zero, $f(x)=(x-1)^2$, and $f(4)=3^2=9$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Gaussian random variable in $\mathbb{R}^n$ question Let $X=(X_1,...,X_n)$ be a Gaussian random variable in $\mathbb{R}^n$ with mean $\mu$ and covariance matrix $V$.
I want to show that we can write $X_2$ in the form $X_2 = aX_1 + Z$, where $Z$ is independent of $X_1$, and I want to find the distribution of $Z$.
Any help with this would be really appreciated.
Thanks!
|
Assuming that $X_2 = aX_1 + Z$ holds (with $Z$ and $X_1$ independent), you can find $a$ in terms of particular entries of the covariance matrix. Once you have $a$, you know that $Z=X_2 - aX_1$, so you can find the distribution of $Z$, and check that it is independent of $X_1$.
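Carrying the hint out (writing $V_{ij}$ for the entries of $V$, and assuming $V_{11}>0$): independence forces
$$0=\operatorname{Cov}(X_1,Z)=\operatorname{Cov}(X_1,\,X_2-aX_1)=V_{12}-aV_{11}\implies a=\frac{V_{12}}{V_{11}},$$
and $Z=X_2-aX_1$, being a linear function of a Gaussian vector, is Gaussian with
$$Z\sim N\!\left(\mu_2-a\mu_1,\;V_{22}-\frac{V_{12}^2}{V_{11}}\right).$$
For jointly Gaussian variables, $\operatorname{Cov}(X_1,Z)=0$ is equivalent to independence, which closes the argument.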
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
$X^n-Y^m$ is irreducible in $\Bbb{C}[X,Y]$ iff $\gcd(n,m)=1$
I am trying to show that $X^n-Y^m$ is irreducible in $\Bbb{C}[X,Y]$ iff $\gcd(n,m)=1$ where $n,m$ are positive integers.
I showed that if $\gcd(n,m)$ is not $1$, then $X^n-Y^m$ is reducible. How do I show the other direction? Please help.
|
Assume $f(X,Y) =X^n-Y^m=g(X,Y)h(X,Y)$. Then $f(Z^m,Z^n)=0$ implies that one of $g(Z^m,Z^n)$ or $h(Z^m,Z^n)$ is the zero polynomial. Suppose that $g(Z^m,Z^n)=0$. That means that for all $k$, the monomials cancel, i.e. if $$g(X,Y)=\sum a_{i,j}X^iY^j $$
then $$\sum_{mi+nj=k}a_{i,j}=0.$$
Can we ever have $mi+nj=mi'+nj'$ for distinct pairs $(i,j)\neq(i',j')$? That would mean $m(i-i')=n(j'-j)$, hence $n\mid i-i'$ (because none of $n$'s prime factors divide $m$) and likewise $m\mid j'-j$; so two colliding monomials differ by at least $n$ in the $X$-exponent and by at least $m$ in the $Y$-exponent. Since $g(Z^m,Z^n)=0$, every nonzero monomial of $g$ must cancel against another one in the same class, which forces $\deg_X g\geq n$ and $\deg_Y g\geq m$. But $\deg_X g+\deg_X h=n$ and $\deg_Y g+\deg_Y h=m$, so $h$ has degree $0$ in both variables, i.e. $h$ is constant, and $X^n-Y^m$ is irreducible.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 5,
"answer_id": 3
}
|
How to evaluate the integral $\int_1^n\frac{1}{(\ln x)^{\ln x}}dx$ I'm stuck with this integral while testing the convergence of a series. $$\int_1^n\frac{1}{(\ln x)^{\ln x}}dx.$$ Could you give me a couple of hints to handle this integral, please? Is it a simple integral, or do I need to know something special to solve it?
|
Repeated integration by parts does not lead anywhere here: it only cycles (and note that $\frac{d}{dx}(\ln x)^{-\ln x}=-(\ln x)^{-\ln x}\,\frac{\ln\ln x+1}{x}$, not $-(\ln x)^{-\ln x}/x$). In fact the integrand has no elementary antiderivative — but none is needed for a convergence test.
Write
$$(\ln x)^{-\ln x}=e^{-\ln x\,\ln\ln x}=x^{-\ln\ln x}.$$
As $x\to 1^+$ we have $\ln x\cdot\ln\ln x\to 0$, so the integrand extends continuously to $x=1$ with value $1$; there is no problem at the lower limit. For $x\geq e^{e^2}$ we have $\ln\ln x\geq 2$, hence
$$(\ln x)^{-\ln x}\leq x^{-2},$$
and comparison with $\int^{\infty}x^{-2}\,dx$ shows that $\int_1^{\infty}(\ln x)^{-\ln x}\,dx$ converges. By the integral test, the corresponding series converges as well.
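For a quick numerical confirmation that the improper integral settles to a finite value, here is a scipy sketch (the cutoff values are arbitrary):

```python
from scipy.integrate import quad
import numpy as np

def integrand(x):
    # (ln x)^(-ln x) = x^(-ln ln x); continuous at x = 1 with value 1
    return np.exp(-np.log(x) * np.log(np.log(x))) if x > 1 else 1.0

for upper in (10, 100, 1000, 10000):
    value, _ = quad(integrand, 1, upper, limit=200)
    print(upper, value)  # the values stabilize, consistent with convergence
```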
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Elliptic curves in projective form question Let $K$ be any field with $\operatorname{Char} K \neq 2, 3$, and let $\varepsilon : F(X_0, X_1, X_2) = X_1^2 X_2 - (X_0^3 + A X_0 X_2^2 + B X_2^3)$, with $A, B \in K$, be an elliptic curve. Let $P$ be a point on $\varepsilon$.
(a). Show that $3P = \underline{o}$, where $\underline{o}$ is the point at infinity ($(0,1,0)$), if and only if the tangent line to $\varepsilon$ at $P$ intersects $\varepsilon$ only at $P$.
(b). Show that if $3P = \underline{o}$ then the $3 \times 3$ matrix $\left( \frac{\partial ^2 F}{\partial X_i \partial X_j} \right)$ has determinant $0$. [This matrix is called the Hessian matrix.]
(c). Show that there are at most nine 3-torsion points over $K$.
I'm having trouble getting to grips with the projective notation - any help greatly appreciated!
|
(a) Note that $3P=O$ iff $2P=-P$. To compute $2P$ you intersect the tangent line $t$ at $P$ with $\varepsilon$. Besides $P$, the line $t$ meets $\varepsilon$ in exactly one further point, say $Q$: a line meets a cubic in three points counted with multiplicity, and $t$ already meets $\varepsilon$ at $P$ with multiplicity $2$. In any case, we know that $2P=-Q$. Therefore, if $2P=-P$ then $Q=P$ and so $t$ meets $\varepsilon$ only at $P$; conversely, if $t$ meets $\varepsilon$ only at $P$ then $Q=P$ and $2P=-P$.
(c) The entries of the Hessian matrix $H$ are linear polynomials, because you're taking second derivatives of a homogeneous polynomial of degree $3$. By point (b), a necessary condition for $P$ to be a $3$-torsion point is that $(\det H)(P)=0$. Now, $\det(H)$ is a homogeneous polynomial of degree $3$, so a $3$-torsion point must be a common zero of two homogeneous polynomials of degree $3$: one is $\det (H)$ and the other is $F$. So you're looking at the intersection points of two cubics. By Bezout's theorem, there are at most $9$ such points provided that the two cubics don't have a component in common. Since $F$ is an irreducible curve, a common component could occur only if $\det (H)$ and $\varepsilon$ defined the same curve. This cannot happen, as you can check that $(1\colon 0\colon 0)\notin \varepsilon$ while it belongs to the cubic defined by $\det (H)=0$.
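The last claim is easy to verify symbolically; here is a sympy sketch:

```python
import sympy as sp

X0, X1, X2, A, B = sp.symbols('X0 X1 X2 A B')
F = X1**2 * X2 - (X0**3 + A * X0 * X2**2 + B * X2**3)

vars_ = (X0, X1, X2)
H = sp.Matrix(3, 3, lambda i, j: sp.diff(F, vars_[i], vars_[j]))

point = {X0: 1, X1: 0, X2: 0}
print(F.subs(point))        # -1, so (1:0:0) is not on the curve
print(H.det().subs(point))  # 0, so (1:0:0) lies on det(H) = 0
```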
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
conditional expectation? I'm trying to solve an expected value problem where a biased coin is flipped until a run of five heads is achieved. I need to compute the $E(X)$ where $X$ is the number of tails expected before the run of five heads.
Would this require conditional expectation, since $E(X)$ is dependent on $P(Y)$ which is the probability of a run of five heads?
I know how to calculate the expected value of flips, but I'm pretty lost on counting the number of tails.
If $E(Y)$ has the value $n$, would I then solve it like so?
$P(X = k \mid X \in \{n\})$
$E(X) = P(X)E(Y)$
|
The following is a conditional expectation argument. We first deal with an unbiased coin, and then a biased coin. Let $e$ be the required expectation.
Unbiased Coin: If the first toss is a tail (probability $\frac{1}{2}$) then the expected number of tails is $1+e$.
If the first toss is a head and the second is a tail (probability $\frac{1}{4}$), then the expected number of tails is $1+e$.
If first two tosses are head and the third is a tail, then the expected number of tails is $1+e$.
Same for first three heads, and fourth a tail.
Same for first four heads, and fifth a tail.
If first five tosses are heads, then expected number of tails is $0$.
Thus
$$e=\frac{1}{2}(1+e)+\frac{1}{4}(1+e)+\cdots +\frac{1}{32}(1+e).$$
Solve for $e$.
Biased Coin: The same idea works for a biased coin. Let the probability of head be $p\ne 0$. Then the probability of tail is $1-p$, the probability of head followed by tail is $p(1-p)$, the probability of two heads followed by tail is $p^2(1-p)$, and so on. Thus
$$e=(1-p)(1+e)+p(1-p)(1+e)+\cdots +p^4(1-p)(1+e).$$
Solve for $e$.
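Solving the biased-coin equation gives the closed form $e=\frac{1-p^5}{p^5}=p^{-5}-1$, which is $31$ for a fair coin. A Monte Carlo sketch (in Python; the trial count is arbitrary) to sanity-check this:

```python
import random

def tails_before_run(p, run=5):
    """Simulate flips until `run` consecutive heads; count the tails seen."""
    tails = streak = 0
    while streak < run:
        if random.random() < p:   # head
            streak += 1
        else:                     # tail
            streak = 0
            tails += 1
    return tails

p = 0.5
trials = 200_000
estimate = sum(tails_before_run(p) for _ in range(trials)) / trials
print(estimate, "vs exact", (1 - p**5) / p**5)  # both close to 31
```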
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Fast calculation for $\int_{0}^{\infty}\frac{\log x}{x^2+1}dx=0$ I want to show that $\int_{0}^{\infty}\frac{\log x}{x^2+1}dx=0$, but is there a faster method than finding the contour and doing all computations?
Otherwise my idea is to do the substitution $x=e^t$; the integral then changes to $\int _{-\infty}^{\infty}\frac{t e^t}{1+e^{2t}}dt$. The next step is to take the rectangular contour with vertices $-r,r,r+i\pi,-r+i\pi$ and integrate over it...
|
Here is a general approach. Consider the integal
$$ F(s) = \int_{0}^{\infty}\frac{x^{s-1}}{1+x^2}\,dx=\frac{\pi}{2\sin(s\pi/2)},\quad 0<\operatorname{Re}(s)<2, $$
which is the Mellin transform of the function $\frac{1}{1+x^2}$. Now, our integral can be evaluated as
$$ I = \lim_{s\to 1} F'(s) = 0.$$
Note:
1) To evaluate the above integral, substitute $u=x^2$ and use $\int_0^\infty \frac{u^{a-1}}{1+u}\,du=\frac{\pi}{\sin(\pi a)}$ (or use a keyhole contour directly).
2)
$$ F(s)= \int_{0}^{\infty} x^{s-1}f(x)dx \implies F'(s)= \int_{0}^{\infty} x^{s-1}\ln(x)f(x)dx .$$
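For comparison, the vanishing of this particular integral can also be seen directly from the substitution $x=1/t$:
$$I=\int_0^\infty\frac{\log x}{1+x^2}\,dx=\int_0^\infty\frac{\log(1/t)}{1+1/t^2}\,\frac{dt}{t^2}=-\int_0^\infty\frac{\log t}{1+t^2}\,dt=-I,$$
so $I=0$.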
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Prove $\lim_{x \to \infty} f(x) = L \iff \lim_{x \to 0} g(x) = L$ Let $f(x) = g(1/x)$ for $x>0$. Prove: $\lim_{x \to \infty} f(x) = L \iff \lim_{x \to 0} g(x) = L$ for some $L \in \mathbb{R}$.
I assume I am supposed to use l'Hopital's rule in some way (considering that is what section we are in). I've tried looking at the definition of a limit and the sequential criterion for limits, but I have no idea where to go.
Just a push in the right direction would be awesome. Thanks in advance.
|
Here is the right direction:
Given $\lim_{x \rightarrow \infty} f(x) = L$, we know that for each $\epsilon>0$ there exists $N>0$ with $x> N \Rightarrow |f(x) -L| < \epsilon$ (we may take $N>0$, since enlarging $N$ does no harm).
Set $x' = \frac 1 x$ and $\delta = \frac 1 N$. Then, for $x, x'>0$, $x>N \iff x' < \frac 1 N = \delta$.
Thus $0<x' < \delta \Rightarrow x> N \Rightarrow |f(x) -L| = |f(\frac 1 {x'}) -L| = |g(x') -L| < \epsilon$. And so $\lim_{x' \rightarrow 0^+}g(x') = L$. The other direction is proved the same way, exchanging the roles of $N$ and $\delta$.
Note: We are actually assuming $x'>0$, so the limit only works as $x'$ approaches $0$ from the right.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Questions concerning the differential operator Consider the differential equation:-
$a \phi + (bD^3 - cD)w =0$, where $a, b$ and $c$ are constants, $D$ denotes the differential operator $\dfrac{d}{dx}$, and $w$ is a function of $x$.
I'm defining $w = Lw'$ and $x=Lx'$, where $L$ is a constant.
I'm trying to obtain $\phi$ in terms of $x'$. But I've two questions that pop into my mind immediately:-
$1.$ How do I change the differential operator from $\dfrac{d}{dx}$ to $\dfrac{d}{dx'}$, so that I can obtain $\phi$ correctly in terms of $x'$?
$2.$ Let's say I'm keeping the differential operator as such, and differentiating $w$ with respect to $x$. After the differentiation, if I substitute $x$ with $Lx'$, is $\phi$ the same as the one obtained by changing the differential operator?
|
Here is a start. First make the change of dependent variable $w=Lz$ (I use $z$ instead of $w'$ to avoid confusion with derivatives):
$$ w=Lz \implies D^n w= L D^n z,\quad D=\frac{d}{dx}, $$
so, the differential equation becomes
$$ a \phi(x) + L(bD^3 - cD)z =0 \longrightarrow (1).$$
Now, we use the other change of variables $x=Lt$ (again I let $t=x'$ avoiding the confusion) in $(1)$ as
$$ \frac{dz}{dx} = \frac{dz}{dt}\frac{dt}{dx} = \frac{1}{L} \frac{dz}{dt} $$
$$ \implies D^2 z = \frac{d^2z}{dx^2} = \frac{d}{dx}\left(\frac{1}{L}\frac{dz}{dt}\right)\frac{dt}{dx} = \frac{d}{dt}\left(\frac{1}{L}\frac{dz}{dt}\right)\frac{dt}{dx}=\frac{1}{L^2}\frac{d^2z}{dt^2}$$
$$ \implies D^3 z = \frac{1}{L^3}\frac{d^3z}{dt^3}. $$
Now, just go back and make substitutions in $(1)$. I think you can do that.
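Carrying the substitutions through, equation $(1)$ becomes
$$a\phi+L\left(\frac{b}{L^3}\frac{d^3z}{dt^3}-\frac{c}{L}\frac{dz}{dt}\right)=0,\qquad\text{i.e.}\qquad a\phi+\frac{b}{L^2}\frac{d^3z}{dt^3}-c\,\frac{dz}{dt}=0,$$
which expresses $\phi$ in terms of derivatives with respect to $t=x'$. This also answers the second question: differentiating with respect to $x$ first and substituting $x=Lx'$ afterwards produces exactly the chain-rule factors above, so the resulting $\phi$ is the same.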
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does $\{u_1, u_2, u_3, u_4\}$ spanning $\mathbb R^3$ mean that $\{u_1,u_2,u_3\}$ also does? Since it's a subset? Does $\{u_1, u_2, u_3, u_4\}$ spanning $\mathbb R^3$ mean that $\{u_1,u_2,u_3\}$ also does? Since it's a subset? A little unclear about this...
|
A concrete example:
$$u_1=(1,0,0),u_2=(2,0,0),u_3=(0,1,0),u_4=(0,0,1)$$
$\{u_1,u_2,u_3,u_4\}$ clearly spans $\mathbb{R}^3$.
On the other hand, $u_4 \notin \operatorname{span}\{u_1,u_2,u_3\}$ (the first three vectors all have third coordinate $0$), and therefore $\{u_1,u_2,u_3\}$ does not span $\mathbb{R}^3$.
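A quick numerical check of the same facts (a numpy sketch):

```python
import numpy as np

u1, u2, u3, u4 = (1, 0, 0), (2, 0, 0), (0, 1, 0), (0, 0, 1)

# rank 3 means the rows span R^3
print(np.linalg.matrix_rank(np.array([u1, u2, u3, u4])))  # 3
print(np.linalg.matrix_rank(np.array([u1, u2, u3])))      # 2: u1, u2 are parallel
```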
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/652999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 1
}
|
Is the ring $\mathbb{C}[t^2,t^3]$ integrally closed? I am trying to understand whether the ring $\mathbb{C}[t^2,t^3]$ is integrally closed (in its field of fractions), but I have no idea how to proceed. All I have tried so far has failed. Any hints/ideas?
|
The field of fractions is $\mathbb{C}(t)$ (note $t=t^3/t^2$). The element $t$ is integral over $\mathbb{C}[t^2,t^3]$ because it is a root of $x^2-t^2$. But $t \notin \mathbb{C}[t^2,t^3]$, since elements of $\mathbb{C}[t^2,t^3]$ are exactly the polynomials with no degree-$1$ term. So the ring is not integrally closed; in fact, the integral closure is $\mathbb{C}[t]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is $\nabla u \in L^{\infty}$ if $u$ is bounded $C^{0}$? I would like to prove something of the form $|A_{1}(u)| \leq c \lVert u \rVert_{L^{\infty}}$ and $|A_{2}(u)| \leq c \lVert \nabla u \rVert_{L^{\infty}}$ for some operators $A_{1},\ A_{2}$ and arbitrary constant $c$. I am working on domain $\Omega \subset \mathbb{R}^{2}$ which is bounded and open, and has $C^{1}$ boundary. I know that $u \in W^{1,p}(\Omega)$ for $p > 2$. From this I can use general Sobolev embeddings to show that $W^{1,p} \hookrightarrow C^{0,\gamma}$ for some $\gamma < 1$. If I now also assume that $u$ is a bounded function on $\Omega$ such that $u \in L^{\infty}(\Omega)$ I would like to show that $\nabla u \in L^{\infty}(\Omega)$, which seems as though it should also be valid. I only have intuition on this, but it seems as though a continuous function should have a bounded derivative a.e. if it is bounded itself.
I am not sure how to show the above result, though. I think that what I need to do is to show that if the Cauchy sequence $\{u_{n}\}$ for $u_{n} \in L^{\infty}(\Omega) \cap C^{0,\gamma}(\Omega)$ converges to $u \in L^{\infty} \cap C^{0,\gamma}(\Omega)$ (i.e. $u_{n} \rightarrow u$ in $L^{\infty}$) then this also implies that $\{\nabla u_{n}\}$ is a Cauchy sequence with $\nabla u_{n} \rightarrow \nabla u^{*}$ and then show that $u^{*} = u$. However, I am not sure how to do this at all. Any help would be appreciated.
|
Let $\Omega = \overline{B_1(0)}$ and $u(x) := \sqrt{1-\Vert x\Vert^2}$.
Then $u\in C^0(\Omega) \cap L^\infty(\Omega) \cap C^\infty(\Omega^\circ)$, but
$\nabla u \notin L^\infty(\Omega)$, thus this $u$ serves as a counter-example.
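Explicitly, on the interior one computes
$$\nabla u(x)=\frac{-x}{\sqrt{1-\Vert x\Vert^2}},\qquad \Vert\nabla u(x)\Vert=\frac{\Vert x\Vert}{\sqrt{1-\Vert x\Vert^2}}\to\infty \quad\text{as } \Vert x\Vert\to 1^-,$$
so $\nabla u$ admits no essential bound on $\Omega$.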
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$H_1\times H_2<G_1\times G_2$? Let $H_1, H_2, G_1, G_2$ be groups. Clearly if $H_1<G_1$ and $H_2<G_2$, then $H_1\times H_2<G_1\times G_2$.
I'm wondering if the converse statement is true. I'm quite sure it's not. Can you find a counterexample?
|
It depends even on the very construction of products and on the notion of natural inclusion.
For example, is $A\times (B\times C)=(A\times B)\times C$, or merely $A\times (B\times C)\cong(A\times B)\times C$? Is $A\times B<A\times (B\times C)$? If yes, this easily leads to a counterexample: take $G_1=A$, $G_2=B\times C$, $H_1=A$, and $H_2=B$; then $H_1\times H_2<G_1\times G_2$, yet $H_2=B$ is not literally a subgroup of $G_2=B\times C$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Absolute continuity preserves measurability In studying absolutely continuous functions, I learned that if $f:[a,b]\to\mathbb{R}$ is absolutely continuous, then $f(N)$ has measure zero whenever $N$ does, and $f(E)$ is measurable whenever $E$ is.
Suppose a continuous function $f:[a,b]\to\mathbb{R}$ is such that if $E$ is measurable, then $f(E)$ is measurable. Does it then follow that if $N$ has measure zero, then $f(N)$ has measure zero?
|
Suppose $N\subset[a,b]$ is of measure $0$, but $f(N)$ is not. Then, as $f(N)$ is measurable with positive measure, it contains a non-measurable set $B$. One can then find a subset $A$ of $N$ with $f(A)=B$, namely $A=N\cap f^{-1}(B)$. As $A$ is measurable with measure $0$ (every subset of a null set is Lebesgue measurable), its image $f(A)=B$ should be measurable — a contradiction.
In fact, a continuous function on $[a,b]$ maps every measurable set onto a measurable set if and only if it maps measure zero sets to measure zero sets. This is Exercise 18.39 b) in Hewitt and Stromberg's Real and Abstract Analysis.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|