Q | A | meta
---|---|---
Limit of $\lim\limits_{x \rightarrow \infty}(\sqrt{x^8+4}-x^4)$ I have to determine the following:
$\lim\limits_{x \rightarrow \infty}(\sqrt{x^8+4}-x^4)$
$\lim\limits_{x \rightarrow \infty}(\sqrt{x^8+4}-x^4)=\lim\limits_{x \rightarrow \infty}\left(\sqrt{x^8\left(1+\frac{4}{x^8}\right)}-x^4\right) = \lim\limits_{x \rightarrow \infty}\left(x^4\sqrt{1+\frac{4}{x^8}}-x^4\right) = \lim\limits_{x \rightarrow \infty}x^4\left(\sqrt{1+\frac{4}{x^8}}-1\right)= \infty$
Could somebody please check, if my solution is correct?
|
A short way to (non-rigorously) find the limit is to observe that for large $x$,
$$
\sqrt{x^8+4} \approx \sqrt{x^8}=x^4
$$
so that for large $x$ (especially in $\lim_{x \to \infty}$)
$$
\sqrt{x^8+4}-x^4 \approx x^4-x^4=0
$$
So the limit must be $0$.
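One way to make this rigorous is to multiply and divide by the conjugate:
$$
\sqrt{x^8+4}-x^4=\frac{4}{\sqrt{x^8+4}+x^4}\;\xrightarrow[x\to\infty]{}\;0.
$$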
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/598928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Help using the C-S-B theorem Let $\Bbb R$ denote the set of real numbers. Let $H\subseteq\Bbb R$ and assume that there are real numbers $a,b$ with $a<b$ such that the open interval $(a,b)$ is a subset of $H$. Prove that the cardinality of $H$ equals $\mathfrak{c}$.
|
$w\mapsto x+ \dfrac{y-x}{1+2^w}$ is an injective mapping in one direction.
(As $w\to\infty$, this function goes to $x$; as $w\to-\infty$, this function goes to $y$; for other values of $w$, it's between $x$ and $y$.)
$v\mapsto v$ is an injective mapping in the other direction.
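(Here $x,y$ play the roles of the endpoints $a<b$: since $2^w$ is strictly increasing, the first map is strictly monotone and injects $\mathbb R$ into $(a,b)\subseteq H$; the second injects $H$ into $\mathbb R$; Cantor-Schröder-Bernstein then gives $|H|=|\mathbb R|=\mathfrak c$.)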
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why is this combination of a covariant derivative and vector field a (1,1)-tensor? I have a question regarding something Penrose says in section 14.3 of The Road to Reality.
It says '...when $\nabla$ acts on a vector field $\xi$, the resulting quantity $\nabla \xi$ is a $(1,1)$-valent tensor.'
I understand that $\nabla$ is a $(0,1)$-tensor and $\xi$ as a vector field is a $(1,0)$-tensor, so it sort of makes sense that $\nabla \xi$ is a $(1,1)$-tensor. However, what's throwing me off is the wording "acts on". I understand that the idea of $\nabla$ is that it gives a notion of derivative not just for scalar fields, but vector fields and general tensors. It seems "acts on" implies that $\nabla \xi$ should be a $0$-tensor.
So my question is: is this just a poor choice of language, and is $\nabla \xi$ just $\nabla$ with respect to a vector field (instead of just a single vector) $\xi$, or is the actual evaluation of $\xi$ by $\nabla$ not a scalar field as I would expect, but actually a $(1,1)$-tensor?
Thanks.
EDIT: I have received an answer to my question (see comment below), but now I'd really like more detail behind how, if I may ask in this same question, the $(1,1)$-tensor $\nabla \xi$ describes how $\xi$ changes from point to point (intuitively and concretely, preferably, but at least intuitively).
|
A $(1,1)$-tensor can be thought of as a linear map that sends vectors to vectors; so given a vector $X$ based at $p$, $\nabla\xi(X)=\nabla_X \xi$ will be another vector based at $p$, which you should think of as the change in the vector field $\xi$ when you move a small amount in the direction $X$ starting from $p$.
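In coordinates (a standard formula, independent of Penrose's notation), the components of this $(1,1)$-tensor are
$$
(\nabla\xi)^a{}_b=\nabla_b\,\xi^a=\partial_b\xi^a+\Gamma^a{}_{bc}\,\xi^c,
$$
with the lower index $b$ contracting against the direction vector $X$ and the upper index $a$ giving the components of the resulting vector.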
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How do you solve this exponential equation? $3(16)^x+2(81)^x=5(36)^x$
How do you change the bases to combine the terms? The correct answer should be 0 and 0.5.
Edit: So this equation can't be solved algebraically? I have to use creative logic to solve it?
|
Note that
$$3(16)^x=3(4)^{2x}, 2(81)^x=2(9)^{2x}, \text{and}\; 5(36)^x=5(6)^{2x}$$
If $2x=1$, you get that $12+18=30$, a true statement. Thus, $x=\frac{1}{2}$.
The case where $x=0$ is trivial.
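In fact the equation can also be solved purely algebraically, answering the edit. Dividing by $81^x$ and setting $u=\left(\frac49\right)^x$ turns it into a quadratic:
$$3u^2-5u+2=0\quad\Longrightarrow\quad (3u-2)(u-1)=0\quad\Longrightarrow\quad u=1 \text{ or } u=\tfrac23,$$
i.e. $x=0$ or $\left(\frac23\right)^{2x}=\left(\frac23\right)^{1}$, giving $x=\tfrac12$. These are the only solutions.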
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Distance from point in circle to edge of circle The situation is as follows:
I have a circle with a diameter of $20$ and a center at $(0,0)$.
A point $P$ inside that circle is at $(2,0)$.
How do I calculate the distance from $P$ to the edge of the circle for a given angle $\theta$?
|
Let the centre of the circle be $O$, and let the point $(2,0)$ be $P$. Draw a line $PQ$ to the periphery of the circle, making an angle $\theta$ with the positive $x$-axis. We want to find the length of $PQ$.
Consider the triangle $OPQ$. We have $\angle OPQ=180^\circ-\theta$. By the Cosine Law, with $x=PQ$, we have
$$100=x^2+4-(2x)(2\cos(180^\circ-\theta))=x^2+4+4x\cos\theta.$$
This is a quadratic equation in $x$: Solve.
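Explicitly, the quadratic $x^2+4x\cos\theta-96=0$ has the positive root
$$x=-2\cos\theta+2\sqrt{\cos^2\theta+24},$$
which gives, e.g., $x=8$ at $\theta=0$ and $x=12$ at $\theta=180^\circ$, as expected.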
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Continuity correction: Change P(2 ≤ x < 9) to continuous? Convert a discrete probability into a continuous probability using a continuity correction:
attempt:
Discrete: P(2 ≤ X < 9)
therefore the continuous version should be
Continuous: P(1.5 < X < 8.5)
Is this right? Or should it be P(1.5 < X < 9)?
|
I will assume that your discrete random variable takes integer values.
I find it difficult to remember a bunch of rules, so I remember only one: That if $k$ is an integer, and we are approximating the discrete $X$ by a continuous $Y$, then $\Pr(X\le k)$ is often better approximated by $\Pr(Y\le k+0.5)$.
Now $\Pr(2\le X\lt 9)=\Pr(2\le X\le 8)=\Pr(X\le 8)-\Pr(X\le 1)$.
We are ready to apply the only rule that I remember. Our probability is approximately $\Pr(Y\le 8.5)-\Pr(Y\le 1.5)$. (Of course, for the continuous $Y$, there is no difference in probability between $\le$ and $\lt$.)
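Concretely, if the approximating $Y$ is normal with mean $\mu$ and standard deviation $\sigma$ (the usual setting for a continuity correction), this reads
$$\Pr(2\le X<9)\approx\Phi\Big(\frac{8.5-\mu}{\sigma}\Big)-\Phi\Big(\frac{1.5-\mu}{\sigma}\Big).$$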
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Where is $k$ algebraically closed used? Suppose $k$ is algebraically closed, $A$, $B$ are $k$-algebras and $A$ is an affine $k$-algebra. It is known that then $A\otimes_k B$ is a domain if $A$ and $B$ are domains. This can be found in Milne's Algebraic Geometry notes as Proposition 4.15(b). I do not see where the assumption $k$ algebraically closed is used.
He gives an example showing that the above fails if $k$ is not algebraically closed, but I don't see where this assumption is being used in the proof.
I think that for an affine $k$-algebra the Jacobson radical is the nilradical, so here we do not need $k$ to be algebraically closed.
|
I like this question a lot. While he does give a counterexample when $k$ is not algebraically closed, it is hard to see in his proof where this property is used. Where Milne uses algebraic closure is in the lines
For each maximal ideal $\mathfrak{m}$ of $A$, we know $(\sum\overline{a}_ib_i)(\sum \overline{a}_i'b_i')=0$ in $B$, and so either $\sum \overline{a}_ib_i=0$ or $\sum \overline{a}_i'b_i'=0$. Thus either all the $a_i\in \mathfrak{m}$ or all the $a_i'\in \mathfrak{m}$.
We know that $\{b_1,b_2,\dots\}$ and $\{b_1',b_2',\dots\}$ are linearly independent over $k$ (emphasis on $k$). The elements $\overline{a}_i$ and $\overline{a}_i'$ live in $A/\mathfrak m$, which is a priori only an algebraic field extension of $k$. It is possible for elements of a $k$-algebra to be linearly independent over $k$ but not over some extension (consider $1,i\in \mathbb C$ over $\mathbb R$ as opposed to $1,i\in \mathbb C$ over $\mathbb C$). The fact that $k$ is algebraically closed forces $A/\mathfrak{m}=k$, so we can apply the linear independence condition.
Great question! I think Milne could stand to write a line or two more for clarity, especially in an introductory treatment.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Proof of Proposition IV.3.8 in Hartshorne Proposition IV.3.8 in Hartshorne's book states:
Let $X$ be a curve in $\mathbb{P}^3$, which is not contained in any plane.
Here, a curve means a complete, nonsingular curve over an algebraically closed field $k$.
Suppose either
(a) every secant of $X$ is a multisecant, or
(b) for any two points $P,Q$ in $X$, the tangent lines $L_P,L_Q$ are coplanar.
Then there is a point $A$ in $\mathbb{P}^3$ which lies on every tangent line of $X$.
In the proof, one fixes a point $R$ in $X$ and considers the projection from $R$, $\phi:X-R \rightarrow \mathbb{P}^2$.
My questions are:
(1) If $\phi$ is inseparable, why does the tangent line $L_P$ at $P$ pass through $R$ for every point $P$ in $X$?
(2) If $\phi$ is separable, does there exist a nonsingular point $T$ of $\phi(X)$ over which $\phi$ is not ramified?
|
Here is the proof of (1). If $P =R$, it is trivial. If $P \ne R$, then since $\phi$ is inseparable, $\phi$ must be ramified at $P$; this implies the line $\overline {PR}$ must be $L_P$. Check Figure 13 on page 299 of Hartshorne's book.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Geometric Slerp - Calculating Points along an Arc I'm trying to understand how to use Geometric Slerp, as seen here.
Having looked at the following equation:
How can $P_0$ and $P_1$ be used in this equation? Aren't $P_0$ and $P_1$ each represented by two numbers, namely the $x$ and $y$ coordinates? Or have I misunderstood the equation?
Below is what I'm trying to achieve: in a program, I have a camera following a car, and when the car turns, the camera's position needs to update to stay behind it (I think using a geometric slerp is the way to go).
Below are two doodles to help you understand my description above. The first image shows the car and camera; the second shows the details:
Do I need to calculate $P_1$ from $P_0$'s position to use this? Either way, I'm unsure how this can be implemented. Thanks.
Edit:
I've tried to implement it using $P_0$ and $P_1$ as $x$ coordinates, but it doesn't work as expected:
slerp = (((sin((1-t)*Omega))/(sin(Omega)))*p0)+(((sin(t*Omega))/(sin(Omega)))*p1)
|
Your equation is a vector equation. So, yes, $P_0$ and $P_1$ are 2D points. You multiply these points by scalars, and then add together.
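A minimal sketch in Python (the names are illustrative; it assumes $P_0$ and $P_1$ are given as 2D points relative to the centre of the arc, e.g. the car's position):

import math

def slerp(p0, p1, t):
    # p0, p1: 2D points on the arc, as vectors from the arc's centre; t in [0, 1]
    dot = p0[0]*p1[0] + p0[1]*p1[1]
    omega = math.acos(max(-1.0, min(1.0,
        dot / (math.hypot(*p0) * math.hypot(*p1)))))
    if omega < 1e-9:                  # endpoints (nearly) coincide
        return p0
    s0 = math.sin((1 - t)*omega) / math.sin(omega)
    s1 = math.sin(t*omega) / math.sin(omega)
    return (s0*p0[0] + s1*p1[0], s0*p0[1] + s1*p1[1])

Applying the same scalar weights to the $x$ and $y$ components separately is exactly the vector equation above.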
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is the matrix corresponding to an equivalence relation positive semidefinite? Let $|X| < \infty$ and let $R$ be an equivalence relation on $X$. Define the $|X| \times |X|$ matrix $A$ by
$$(A)_{ij} = \begin{cases}1 & (i,j) \in R,\\0 & \text{ otherwise}.\end{cases}$$
Is this matrix positive semidefinite? Is there a simple way to prove it?
|
Notice that $A$ is equal to $I$ plus the adjacency matrix of a graph consisting of a disjoint union of cliques. The eigenvalues of a clique $K_n$ are well-known to be $n-1$ and $-1$ and the spectrum of a disjoint union of graphs is the union of the spectra of the connected components. It follows that $A$ is positive semidefinite.
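Alternatively, one can see positive semidefiniteness directly from the quadratic form: grouping the coordinates by equivalence class,
$$
x^TAx=\sum_{\text{classes }C}\Big(\sum_{i\in C}x_i\Big)^{2}\ \geq\ 0 .
$$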
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Equivalent Conditions of Projection Map I am stuck on the following problem:
Let $A_1,\dots,A_k$ be linear operators on a vector space $V$ with dimension $n<+\infty$ such that
$$A_1+\cdots+A_k=I.$$
Prove that the following conditions are equivalent:
1) the operators $A_i$ are projections, i.e. $A_i^2=A_i$;
2) $A_iA_j=0$ for $i\neq j$;
3) $\operatorname{rank}(A_1)+\cdots+\operatorname{rank}(A_k)=n$.
I have proved that 2) implies 1) and that 1) implies 3), but I have difficulty proving that 3) implies 2). It would be good if someone could provide me a hint on that.
|
Let $V_i$ be the range of $A_i$. Suppose (3) holds. Then the map $(V_1\oplus \dots\oplus V_k)\to V$ given by $(x_1,\dots,x_k)\mapsto x_1+\dots +x_k$ is a surjective map between spaces of the same dimension; thus, the map is an isomorphism.
So, every element of $V$ has a unique representation as a sum of elements of $V_i$, $i=1,\dots,k$. For any $x\in V$, the vector $A_j x$ can be represented in the above form either as $A_j x$ itself (in the $j$-th summand), or as $\sum_{i} A_i A_j x$. The uniqueness gives you (1) and (2) at once.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Any two norms on finite dimensional space are equivalent Any two norms on a finite dimensional linear space are equivalent.
Suppose not, and that $||\cdot||$ is a norm such that for any other norm $||\cdot||'$ and any constant $C$, $C||x||'<||x||$ for all $x$. Define $||x||''=\sum_i |x_i|\cdot||e_i||$ (*). This is a norm and attains at least as large values as $||\cdot||$ for all $x$.
Could this be used as part of a proof? That two norms $||\cdot||_1,||\cdot||_2$ are equivalent means that there are $m,M$ such that $m||x||_1 \leq ||x||_2 \leq M||x||_1$ for all $x$. In the above I only say that there cannot be a norm such that there does NOT exist an $M$ such that $||x||\leq M||x||'$ for any other norm $||\cdot||'$, but I'm really not sure that proves the entire assertion.
If this is utter nonsense, some hints would be appreciated, thank you.
(*The $e_i$ form a base and sloppily assumed the space to be real.)
|
This lecture note answers this question quite well.
https://math.mit.edu/~stevenj/18.335/norm-equivalence.pdf
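In brief, the standard argument (presumably the one carried out in the note): it suffices to compare an arbitrary norm $\|\cdot\|$ with $\|\cdot\|_2$. For $x=\sum_i x_ie_i$, the triangle and Cauchy-Schwarz inequalities give
$$\|x\|\le\sum_i|x_i|\,\|e_i\|\le\Big(\sum_i\|e_i\|^2\Big)^{1/2}\|x\|_2=:M\|x\|_2,$$
so $\|\cdot\|$ is continuous with respect to $\|\cdot\|_2$; being positive on the compact unit sphere $\{\|x\|_2=1\}$, it attains a minimum $m>0$ there, which gives $m\|x\|_2\le\|x\|\le M\|x\|_2$.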
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
}
|
How to calculate the partial derivative of matrices' product Let $U = \frac{1}{2}u^TKu$,
then $\frac{\partial U}{\partial u} = Ku$.
How could I get this answer? Is there any book that explains how to calculate derivatives of matrices?
|
By definition
$$
U=\frac{1}{2}u^TKu=\frac{1}{2}\sum_{j=1}^n\sum_{i=1}^n K_{ij}u_iu_j.
$$
Differentiating with respect to the $l$-th element of $u$ we have
$$
2\frac{\partial U}{\partial u_l}=\sum_{j=1}^n K_{lj}u_j+\sum_{i=1}^n K_{il}u_i
$$
for all $l=1,\,\ldots,n$ and consequently
$$
\frac{\partial U}{\partial u}=\frac{1}{2}(Ku+K^Tu)=\frac{1}{2}(K+K^T)u
$$
If $K$ is symmetric, $K^T=K$ and
$$
\frac{\partial U}{\partial u}=Ku
$$
There are many books, for example
Golub, Gene H., and Charles F. Van Loan, Matrix Computations
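As a quick numerical sanity check of the formula (an illustrative sketch, not from the original answer), one can compare $\frac12(K+K^T)u$ against a finite-difference gradient:

import numpy as np

rng = np.random.default_rng(0)
n = 4
K = rng.standard_normal((n, n))       # a generic, non-symmetric K
u = rng.standard_normal(n)
U = lambda v: 0.5 * v @ K @ v
grad_formula = 0.5 * (K + K.T) @ u
eps = 1e-6
grad_fd = np.array([(U(u + eps*e) - U(u - eps*e)) / (2*eps) for e in np.eye(n)])
print(np.allclose(grad_formula, grad_fd, atol=1e-5))  # True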
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Connected Subsets If $C$ is a collection of connected subsets of $M$, all having a point in
common, prove that the union of $C$ is connected.
I know a set is connected if it is not disconnected. Also, from the above, I know the intersection of all the sets in $C$ is nonempty. I am not sure where to go from there.
|
Hint: Let $C=(C_i)_{i\in I}$ and let $A=\cup_{i\in I} C_i$.
Now, (by contradiction) suppose that $A$ is not connected. Then there is a function $g:A\to \{0,1\}$ which is continuous and onto. Let $x_0\in \cap_{i\in I} C_i$. Then $x_0\in A$. Suppose that $g(x_0)=0$. Because $A$ is disconnected, there is an $x_1\in A$ with $g(x_1)=1$. Also there is an $i_1\in I$ with $x_1\in C_{i_1}$. So $x_0,x_1\in C_{i_1}$..... your turn :)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/599973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to determine the matrix of adjoint representation of Lie algebra? My questions will concern two pages:
http://mathworld.wolfram.com/AdjointRepresentation.html
and
http://mathworld.wolfram.com/KillingForm.html
In the first page, we know the basis of four matrix $\{e_1,e_2,e_3,e_4\}$, and my try to find their adjoint representations is (taking example of $e_2$):
$$\hbox{ad}_{e_2}e_1=-e_2,\\\hbox{ad}_{e_2}e_2=0,\\\hbox{ad}_{e_2}e_3=e_1-e_4,\\\hbox{ad}_{e_2}e_4=-e_3.$$
Then in the basis $\{e_1,e_2,e_3,e_4\}$, we can write the matrix of adjoint representation of $e_2$ as:
$$\hbox{ad}(e_2)=\left[\begin{array}{cccc}0 & 0 & 1 & 0\\-1 & 0 & 0 & 1\\0 & 0 & 0 & 0\\0 & 0 & -1 & 0\end{array}\right]$$
just like the result in the page. Now my questions:
Q1. If my try is right, now we read the second page ("killing form") and let's do the same calculations with the basis $[X,Y,H]$. I find the matrix of $\hbox{ad}(Y)$ as
$$\hbox{ad}(Y)=\left[\begin{array}{ccc}0 & 0 & 2\\0 &0 & 0\\-2 & 0 & 0\end{array}\right]$$ but not the result in the page (just its transposition). If this page is right, my precedent result should be
$$\hbox{ad}(e_2)=\left[\begin{array}{cccc}0 & -1 & 0 & 0\\0 & 0 & 0 & 0\\1 & 0 & 0 & -1\\0 & 1 & 0 & 0\end{array}\right].$$ What should it be?
Q2. We have the fomula of Lie algebra: $\hbox{ad}_XY=[X,Y]$. What are the relationships between $\hbox{ad}(X)$ and $\hbox{ad}_X(Y)$?
Q3. In the page of "killing form", how does he get $B=\left[\begin{array}{ccc}8 & 0 & 0\\0 & -8 & 0\\0 & 0 & 8\end{array}\right]$?
Thanks!
|
$\newcommand{\ad}{\operatorname{ad}}$
Answer to Q1:
You shouldn't bother too much with this, it's just a matter of notation. Anyway, I think there's a mistake in their $\ad(Y)$ in the sense that, if they want to be coherent with the first page, they should have your $\ad(Y)$ and not the transpose of it.
Answer to Q2:
The relation is simply that $\ad_X(Y)$ is the second column of $\ad(X)$. In your example $\ad_X(Y)=[X,Y]$ is the $2\times 2$ matrix
$
XY-YX =
\begin{pmatrix}
2 & \phantom{-}0 \\ 0 & -2
\end{pmatrix}
$, which corresponds to the vector $\begin{pmatrix}0 \\ 0 \\ 2\end{pmatrix}$ in the basis $X,Y,H$. This means that $\ad_X(Y)$ is expressed as the linear combination
$$ 0\cdot X + 0\cdot Y + 2\cdot H = \begin{pmatrix}0 \\ 0 \\ 2\end{pmatrix} \cdot \begin{pmatrix}X \\ Y \\ H\end{pmatrix} $$
Answer to Q3:
They just use the defining formula $B(X,Y)=Tr(\ad(X)\cdot\ad(Y))$. By the basic theory of bilinear forms we know that $(i,j)$-entry of the resulting matrix is given by
$$ Tr(\ad(e_i)\cdot\ad(e_j)) $$
where in our case $e_1=X$, $e_2=Y$ and $e_3=H$. As an example, the entry $(2,2)$ is computed by
$$ Tr(\ad(Y)\cdot\ad(Y)) = Tr\;
\begin{pmatrix}
-4 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -4 \\
\end{pmatrix}
= -8
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
For all but finitely many $n \in \mathbb N$ In my book I have the following theorem:
A sequence $\langle a_n \rangle$ converges to a real number $A$ if and only if every neighborhood of $A$ contains $a_n$ for all but finitely many $n \in \mathbb N$.
Can anyone clarify what the phrase, "All but finitely many", means?
|
In your sentence, "but" is not a conjunction but a preposition: it means "except". At first, I didn't know that the word "but" has these two uses, and thus I had exactly the same doubt as you.
Notice that the second "but" in the previous sentence (which is a conjunction) doesn't have the same meaning as the "but" in your sentence.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Deriving the formula of a summation Derive the formula for
$$
\sum_{k=1}^n k^2
$$
The solution that I was given has $k^3 + (k-1)^3$ as the first step but doesn't say how it got there. Any help?
|
Hint:
For nonnegative integers $n,r$ with $r\leq n$ it is surprisingly
easy to prove by induction that
$\sum_{k=r}^{n}\binom{k}{r}=\binom{n+1}{r+1}$
This result allows you to find formulas for $\sum_{k=1}^{n}k^{r}$
for $r=1,2,3,\ldots$
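For instance, writing $k^{2}=\binom k1+2\binom k2$ and applying the identity termwise gives
$$
\sum_{k=1}^{n}k^{2}=\binom{n+1}2+2\binom{n+1}3=\frac{n(n+1)(2n+1)}6 .
$$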
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Show that if $(a,b)=1$, $a\mid c$ and $b\mid c$, then $a\cdot b\mid c$
Show that if $(a,b)=1$, $a\mid c$ and $b\mid c$, then $a\cdot b\mid c$.
My attempt:
$c=a\cdot k$ and $c=b\cdot j$ with $k,j\in\mathbb{N}$; then $a\cdot b\mid c^2=c\cdot c$.
|
$c=a\cdot k=b\cdot j$.
But $(a,b)=1$ and $a$ divides $b\cdot j$, so $a$ divides $j$ (Euclid's lemma). Hence $a\cdot b$ divides $b\cdot j=c$.
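Alternatively, via Bézout: since $(a,b)=1$ there are integers $x,y$ with $ax+by=1$, and then
$$
c=c(ax+by)=(bj)ax+(ak)by=ab\,(jx+ky),
$$
so $ab\mid c$.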
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
How to prove $\sum_{n=0}^\infty \left(\frac{(2n)!}{(n!)^2}\right)^3\cdot \frac{42n+5}{2^{12n+4}}=\frac1\pi$? In an article about $\pi$ in a popular science magazine I found this equation printed in light grey in the background of the main body of the article:
$$
\color{black}{
\sum_{n=0}^\infty \left(\frac{(2n)!}{(n!)^2}\right)^3\cdot \frac{42n+5}{2^{12n+4}}=\frac1\pi
}
$$
It's true; I checked it at Wolfram, which gives an even more cryptic answer at first glance, but finally confirms the result.
The appearance of $42$ makes me confident that there is someone out there in this universe who can help to prove this.
|
$\newcommand{\+}{^{\dagger}}%
\newcommand{\angles}[1]{\left\langle #1 \right\rangle}%
\newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}%
\newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}%
\newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}%
\newcommand{\dd}{{\rm d}}%
\newcommand{\ds}[1]{\displaystyle{#1}}%
\newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}%
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}%
\newcommand{\fermi}{\,{\rm f}}%
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}%
\newcommand{\half}{{1 \over 2}}%
\newcommand{\ic}{{\rm i}}%
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}%
\newcommand{\isdiv}{\,\left.\right\vert\,}%
\newcommand{\ket}[1]{\left\vert #1\right\rangle}%
\newcommand{\ol}[1]{\overline{#1}}%
\newcommand{\pars}[1]{\left( #1 \right)}%
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}%
\newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}%
\newcommand{\sech}{\,{\rm sech}}%
\newcommand{\sgn}{\,{\rm sgn}}%
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}%
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
$\large\tt Hint:$
$\ds{%
{1 \over \pi}
=
\sum_{n=0}^{\infty}\bracks{\pars{2n}! \over \pars{n!}^{2}}^{3}
{42n + 5 \over 2^{12n + 4}}
=
{21 \over 8}\sum_{n=0}^{\infty}{2n \choose n}^{3}n\pars{2^{-12}}^{n}
+
{5 \over 16}\sum_{n=0}^{\infty}{2n \choose n}^{3}\pars{2^{-12}}^{n}\,,
\qquad{\large ?}}$
Let's consider the function
$\ds{{\cal F}\pars{x} \equiv \sum_{n=0}^{\infty}{2n \choose n}^{3}x^{n}}$ and we
have to evaluate
$\ds{\braces{\bracks{{21 \over 8}\,x\,\partiald{}{x}
+ {5 \over 16}}{\cal F}\pars{x}}_{x = 2^{-12}}}$ $\ds{\pars{~\mbox{this expression returns the value}\ {1 \over \pi}~}}$:
\begin{align}
{\cal F}\pars{x} &\equiv \sum_{n=0}^{\infty}x^{n}\int_{\verts{z_{1}} = 1}
{\dd z_{1} \over 2\pi\ic}\,{\pars{1 + z_{1}}^{2n} \over z_{1}^{n + 1}}
\int_{\verts{z_{2}} = 1}
{\dd z_{2} \over 2\pi\ic}\,{\pars{1 + z_{2}}^{2n} \over z_{2}^{n + 1}}\int_{\verts{z_{3}} = 1}
{\dd z_{3} \over 2\pi\ic}\,{\pars{1 + z_{3}}^{2n} \over z_{3}^{n + 1}}
\\[3mm]&=
\prod_{i = 1}^{3}\pars{\int_{\verts{z_{i}} = 1}
{\dd z_{i} \over 2\pi\ic}\,{1 \over z_{i}}}\sum_{n = 0}^{\infty}\bracks{%
x\pars{1 + z_{1}}^{2}\pars{1 + z_{2}}^{2}\pars{1 + z_{3}}^{2}
\over
z_{1}z_{2}z_{3}}^{n}
\\[3mm]&=
\prod_{i = 1}^{3}\int_{\verts{z_{i}} = 1}
{\dd z_{i} \over 2\pi\ic}\,
{1 \over
z_{1}z_{2}z_{3} - x\pars{1 + z_{1}}^{2}\pars{1 + z_{2}}^{2}\pars{1 + z_{3}}^{2}}
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 0
}
|
Prove the lecturer is a liar... I was given this puzzle:
At the end of the seminar, the lecturer waited outside to greet the attendees. The first three seen leaving were all women. The lecturer noted " assuming the attendees are leaving in random order, the probability of that is precisely 1/3." Show the lecturer is lying (or badly mistaken).
I've puzzled it out to proving that there is no ratio of $\binom{a}{3}/\binom{a+b}{3}$ that equals $1/3$, where $a,b\in\mathbb{N}$ with $a\ge3$ and $b\ge0$, $a$ being the number of women and $b$ the number of men.
I'm stuck at this point (but empirically pretty convinced).
Any help/pointers appreciated.
Rasher
PS- as an amusing aside, the first 12 values in the sequence of values for $\binom{3+b}{3}$ are the total number of gifts received for each day of the "12 days of Christmas" song.
I've narrowed it down to proving that, in the sequence generated by $n^3+3 n^2+2 n$ with $n\in\mathbb{N}$, $n\ge1$, it is impossible for $3(n^3+3 n^2+2 n)$ to again be of the form $m^3+3 m^2+2 m$. Still stymied at this point.
I found today a (somewhat) similar question at MathOverflow. Since my question seems to boil down to showing that the Diophantine equation $6 a - 9 a^2 + 3 a^3 - 2 b + 3 b^2 - b^3=0$ has no solutions for $a,b\in\mathbb{N}$ with $a,b\ge 3$, would it be appropriate to close this here and ask for help at MathOverflow to determine if this can be proved?
An update: I asked a post-doc here at Stanford if he'd have a look (he's done some heavy lifting in the area of bounds on ways $t$ can be represented as a binomial coefficient). To paraphrase his response "That's hard...probably beyond proof in the general case". Since I've tested for explicit solutions to beyond 100M, I'm settling with the lecturer is lying/mistaken at least in spirit unless one admits lecture halls the size of a state.
|
Let $a$ = the number of women, $b$ = the number of men, and $n = a + b$ be the total number of attendees.
The probability that the first 3 students to leave are all female is $\frac{a}{n} \cdot \frac{a-1}{n-1} \cdot \frac{a-2}{n-2}$. Setting this expression equal to $\frac{1}{3}$ and cross-multiplying gives $3a(a-1)(a-2)=n(n-1)(n-2)$.
The product of any three consecutive integers is divisible by $6$, so the left-hand side is divisible by $18$; in particular $9$ must divide $n(n-1)(n-2)$. Since at most one of three consecutive integers is divisible by $3$, that one factor must be divisible by $9$, and so we must have $n \equiv 0, 1, \text{ or } 2 \pmod 9$.
This doesn't solve your puzzle, but it does rule out (informally) 2/3 of the domain.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
}
|
Change $y= (1/2)x +1$ into standard form and get the answer $x-2y=-2$ I know the answer to the problem because I can check the answers in the back of the book, but when I do the work myself I get
$$
-\frac{1}{2}x +y= 1
$$
when I attempt to change it into standard. I need a step by step explanation on how the book got
$$
x-2y=-2.
$$
|
For standard form your $x$-coefficient needs to be positive and all coefficients must be integers, so multiply the entire equation by $-2$:
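$$
-\tfrac{1}{2}x +y= 1\ \xrightarrow{\ \times(-2)\ }\ x-2y=-2 .
$$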
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Complete representative set of squares modulo $15$.
$2.\,\,$Do the following computations.
$\text{(a)}$ Solve the equation $x^2\equiv 1\mod15$
Solution: We only need to choose a complete representative set modulo $15$ and verify the equation over such a set. In the following table, we choose the representative set
$$\{0,\pm1,\pm2,\pm3,\pm4,\pm5,\pm6,\pm7\}$$
and verify the equation as follows: $$
\begin{array}{c|c}
x & 0 & \pm 1 & \pm 2 & \pm 3 & \pm 4 & \pm 5 & \pm 6 & \pm 7 \\
\hline
x^2 & 0 & 1 & 4 & -6 & 1 & -5 & 6 & 4 \\
\end{array}
$$
We see that the equation has four solutions: $\pm1$ and $\pm 4$.
(Note that $15$ is not a prime, so we do not just have two square roots!)
I understand that if you do $8^2 \mod 15 $, $9^2 \mod 15$, $10^2 \mod 15$, and so forth, you get repetitions of the above representative set. What I don't understand is how you could know this before doing those calculations manually.
|
If you really wanted to, you could choose the set $\{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14\}$, but notice that $8 \equiv -7 \mod 15$, $9 \equiv -6 \mod 15$, etc. So then it is more convenient to choose the representative set $\{0, \pm 1, \pm 2, \pm 3, \pm 4, \pm 5, \pm 6, \pm 7\}$. This cuts down the amount of work from $15$ computations to $8$, as clearly $a^2 \equiv (-a)^2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Let $T:\mathbb{R}^{p} \rightarrow \mathbb{R}$ linear and $K=\left \{ \overline{x}\in \mathbb{R}^{p}:||x||_{2}\leq1 \right \}$. Show that T(K)=[-M,M] Let $T:\mathbb{R}^{p} \rightarrow \mathbb{R}$ linear (i.e. $T(x)=a_{1}x_{1}+\dots +a_{p}x_{p} $ ) and $K=\left \{ \overline{x}\in \mathbb{R}^{p}:||x||_{2}\leq1 \right \}$.
Show that $T(K)=[-M,M]$ with $M=\sqrt{(T(\overline{e}_{1}))^{2}+ \cdots + (T(\overline{e}_{p}))^{2}}$ where $\mathfrak{B}= \left \{ \overline{e}_{1}, ... , \overline{e}_{p} \right \}$ is the canonical basis.
The linearity of $T$, the symmetry of $K$ (i.e. $-K=K$), the continuity of $T$, and the compactness of $K$ are useful.
What I've done so far:
I know that any $\overline{x} \in \mathbb{R}^{p}$ can be written as
$\overline{x}=b_{1}\overline{e}_{1}+ \cdots + b_{p}\overline{e}_{p}$
$\Rightarrow T(\overline{x})=T(b_{1}\overline{e}_{1})+ \cdots + T(b_{p}\overline{e}_{p})=b_{1}T(\overline{e}_{1})+ \cdots + b_{p}T(\overline{e}_{p})=b_{1}a_{1}+ \cdots +b_{p}a_{p}$ (*)
on the other hand
$M=\sqrt{(T(\overline{e}_{1}))^{2}+ \cdots + (T(\overline{e}_{p}))^{2}}=\sqrt{a_1^{2}+\cdots+a_p^{2}}$ (**)
I have the feeling that I can connect these two (* and **) using the Hölder inequality and then use symmetry to get $-M$ from $M$. Is this a correct idea?
|
To find $M$, maximize $T(x)$ subject to $||x||_{2}=1$. The Lagrangian is
$\mathcal{L}=a_{1}x_{1}+ \cdots + a_{p}x_{p}-\lambda[(x_{1}^{2}+ \cdots + x_{p}^{2} )^{\frac{1}{2}}-1]$
First-order conditions:
$x_{i}:\text{ } a_i=\lambda x_i,\ \text{so } x_i=\frac{a_{i}}{\lambda}$
$\lambda: \text{ }(x_{1}^{2}+ \cdots + x_{p}^{2} )^{\frac{1}{2}}=1$
Substituting into the constraint gives $\lambda=\sqrt{a_1^2+\cdots+a_p^2}$ (taking the positive root for the maximum), so the maximizer is $x^{*}=\frac{1}{\lambda}(a_1,\dots,a_p)$ and
$$T(x^{*})=\frac{a_1^2+\cdots+a_p^2}{\lambda}=\sqrt{a_1^2+\cdots+a_p^2}=\sqrt{(T(\overline{e}_{1}))^{2}+ \cdots + (T(\overline{e}_{p}))^{2}}=M.$$
Since $K$ is symmetric ($-K=K$) and $T$ is linear, $\min_{K}T=-\max_{K}T=-M$; and since $T$ is continuous and $K$ is compact and connected, $T(K)$ is an interval, namely
$T(K)=[-M,M]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
A diffusion partial differential equation, or Sturm-Liouville eigenvalue ODE What is the analytical solution for the following diffusion partial differential equation (initial value problem)?
$$\frac{\partial f}{\partial t} = (ax^2+b)\frac{\partial f}{\partial x}+\frac{\partial^2 f}{\partial x^2},$$
where $a$ and $b$ are real number constants.
We can separate the variables or take the Fourier transform $\tilde f(x)$ of $f$ in the time domain $t$, and turn the above into an ordinary differential equation eigenvalue problem in $x$:
$$k\tilde f= (ax^2+b)\frac{d\tilde f}{d x}+\frac{d^2 \tilde f}{d x^2}.$$
where $k$ can be viewed as an eigenvalue for the differential operator on the right-hand side. Now we can further transform this into Sturm-Liouville form.
However, I cannot immediately recognize a transformation that turns the above into a known form that admits an analytic solution. Can someone help?
|
Using Maple we get that the solution is $$f(x,t)=F1(x)\cdot F2(t)$$ where $F1$ and $F2$ are functions such that $$F1_{xx}=c_1\cdot F1-(a\cdot x^2-b)F1_x \quad\text{ and } $$$$F2_t=c_1\cdot F2, $$ where $c_1$ is an arbitrary constant. The ODE for $F1$ has an "explicit" solution in terms of the Heun triconfluent function (very ugly! Maple is not making some simplifications because it is assuming $a$ and $b$ are complex numbers) and the second ODE just gives $F2(t)= c_2 \exp(c_1\cdot t)$.
$$\begin{aligned} F1(x) ={}& c_3\cdot HeunT\!\left(-\frac{3^{2/3} a^2 c_1}{(a^2)^{4/3}},\ -\frac{3\sqrt{a^2}}{a},\ \frac{3^{1/3}\, a b}{(a^2)^{2/3}},\ \frac{3^{2/3}(a^2)^{1/6}\, x}{3}\right)\cdot \exp\!\left(-\frac{x\,(a x^2+3 b)\left((a^2)^{1/6} a+(a^2)^{2/3}\right)}{6\,(a^2)^{2/3}}\right) \\ &+ c_4\cdot HeunT\!\left(-\frac{3^{2/3} a^2 c_1}{(a^2)^{4/3}},\ \frac{3\sqrt{a^2}}{a},\ \frac{3^{1/3}\, a b}{(a^2)^{2/3}},\ -\frac{3^{2/3}(a^2)^{1/6}\, x}{3}\right)\cdot \exp\!\left(\frac{x\,(a x^2+3 b)\left((a^2)^{1/6} a-(a^2)^{2/3}\right)}{6\,(a^2)^{2/3}}\right) \end{aligned}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/600936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove a set is closed
Suppose $f\colon \mathbb R \to \mathbb R$ is a continuous function and $K$ is a closed subset of $\mathbb R$. Prove that the set $A = \{x \in \mathbb R : f(x) \in K\}$ is also closed.
Could someone point me in a direction, as I am lost?
|
A characterization of continuous functions: a function is continuous if and only if the inverse image of every open (closed) set is open (closed).
Thus your set $A=f^{-1}(K)$ is closed, being the inverse image of the closed set $K$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
$X$ is homeomorphic to $X\times X$ (TIFR GS $2014$) The question is:
Suppose $X$ is a topological space of infinite cardinality which is homeomorphic to $X\times X$. Then which of the following is true:
*
*$X$ is not connected.
*$X$ is not compact
*$X$ is not homeomorphic to a subset of $\mathbb{R}$
*None of the above.
I guess the first two options are false.
The product of two connected spaces is connected,
so $X\times X$ is connected if $X$ is connected. So I guess there is no problem there.
Likewise, the product of two compact spaces is compact,
so $X\times X$ is compact if $X$ is compact. So I guess there is no problem there either.
I understand that this is not a proof excluding the first two options, but I guess they are more likely to be false.
So the only thing I have a problem with is the third option,
and I could do nothing for that third option.
I would be thankful if some one can help me out to clear this.
Thank you :)
|
The Cantor set is a counter-example to the second and third statement. Note that the Cantor set is homeomorphic to $\{0,1\}^{\mathbb N}$, hence it is homeomorphic to the product with itself.
An infinite set with the smallest topology (exactly two open sets) is a counter-example to the first statement. Martini gives a better counter-example in a comment.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
}
|
When does an eigendecomposition result in a Q with det(Q)=1? With an eigendecomposition I can decompose a symmetric real matrix $A$ into $Q\Lambda Q^T$, where $Q$ is orthogonal. If $\det(Q)=1$, $Q$ is a rotation matrix, and if $\det(Q)=-1$, $Q$ is a rotation matrix composed with a reflection.
Is there a way to know beforehand whether $\det(Q)=1$ or $-1$?
|
You can always arrange that $\det Q = 1$. If $q_1, \ldots, q_n$ is your eigenbasis (the columns of $Q$), note that
$\det(q_1, \ldots, q_n) = -\det(q_2,q_1,\ldots, q_n)$ and $(q_2,q_1, \ldots, q_n)$ is also an orthonormal eigenbasis.
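Equivalently, negating a single eigenvector flips the sign of the determinant while leaving $\Lambda$ untouched. A minimal sketch in Python/NumPy (illustrative only):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w, Q = np.linalg.eigh(A)          # A = Q diag(w) Q^T, Q orthogonal
if np.linalg.det(Q) < 0:          # det is -1: negate one eigenvector
    Q[:, 0] = -Q[:, 0]
assert np.allclose(Q @ np.diag(w) @ Q.T, A)
assert np.isclose(np.linalg.det(Q), 1.0)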
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Question on probability in hashing Consider a hash table with $n$ buckets, where external (overflow) chaining is used to resolve collisions. The hash function is such that the probability that a key value is hashed to a particular bucket is $1/n$. The hash table is initially empty and $K$ distinct values are inserted in the table.
(a) What is the probability that bucket number 1 is empty after the $K$th insertion?
(b) What is the probability that no collision has occurred in any of the $K$ insertions?
(c) What is the probability that the first collision occurs at the $K$th insertion?
|
(a) The probability that bucket 1 is empty after ONE insertion is $(n-1)/n$. That's the probability that the first item didn't hash to bucket 1. The event that it's empty after TWO insertions is defined by "first item missed bucket 1" AND "2nd item missed bucket one". With this, you can (I hope) compute the probability that the bucket's empty after two insertions. From this, you can generalize to $K$ insertions.
(b) For $K = 1$, it's 1. For $K = 2$, the second item must miss the bucket of the first item. So it has $n-1$ places it can safely go. The probability of success is therefore $\frac{n-1}{n}$. What about the third item? It has only $n-2$ places it can go. So the probability for $K = 3$ is $1 \cdot \frac{n-1}{n}\cdot \frac{n-2}{n}$. I'll bet you can generalize. Be careful of the case $K > n$.
(c) Once you work out the details of the first two parts, you can probably make progress on this one as well. Hint: the first collision occurs on the $k$th insertion if (i) the first $k-1$ insertions didn't collide (see part b) and (ii) the $k$th insertion DOES cause a collision (see the complement of part b).
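Putting the three parts together in code (a small Python sketch; the closed forms are the ones derived above, and the function names are just illustrative):

def p_bucket1_empty(n, K):
    # (a): each of the K keys independently misses bucket 1
    return ((n - 1) / n) ** K

def p_no_collision(n, K):
    # (b): the i-th key must land in one of the n-i remaining free buckets;
    # the product is zero as soon as K > n
    p = 1.0
    for i in range(K):
        p *= (n - i) / n
    return p

def p_first_collision_at(n, K):
    # (c): no collision in the first K-1 insertions, then the K-th key
    # hits one of the K-1 occupied buckets
    return p_no_collision(n, K - 1) * ((K - 1) / n)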
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Weierstrass $\wp$ function question Given the Weierstrass $\wp$ function with periods $1,\tau$ and $\wp(z) := \sum\limits_{n^2+m^2\ne 0} \frac{1}{(z+m+n\tau)^2}-\frac{1}{(m+n\tau)^2}$, I am trying to show $\wp = (\pi^2 \sum\limits^\infty_{n=-\infty} \frac{1}{\sin^2(\pi(z+n\tau))})+K$ for some constant $K$. Note I am not trying to prove what $K$ is as I know what it is but it is unimportant here. I am just trying to show this must be true for some constant. The way I am attempting to solve is the following.
I know $\wp^\prime(z)=-2\sum\limits_{n,m\in \mathbb{Z}} \frac{1}{(z+m+n\tau)^3}$. Next I want to integrate both sides. Thus, if I can take the integration inside the sum I will be able to complete the problem easily from there. However, that is my problem. I do not know why I can take the integration inside. I either never learned those rules or more likely have forgotten them. So essentially my larger problem boils down to a small one. Thanks for the help.
Prove $-2\int \sum\limits_{n,m\in \mathbb{Z}} \frac{1}{(z+m+n\tau)^3} = \sum\limits_{n,m\in \mathbb{Z}} \frac{1}{(z+m+n\tau)^2}$.
|
The sum
$$\sum_{m,n\in\mathbb{Z}} \frac{1}{(z+m+n\tau)^2}$$
does not converge absolutely, so working with that is not easy, you have to explicitly prescribe the order of summation to get a well-defined sum, and need to justify each manipulation of the sum accordingly. That can be done here, but I think it's easier to prove
$$\wp(z) = \sum_{n=-\infty}^\infty \frac{\pi^2}{\sin^2 (\pi(z+n\tau))} + K$$
by considering the function
$$h(z) = \sum_{n=-\infty}^\infty \frac{\pi^2}{\sin^2 (\pi(z+n\tau))},$$
and either reach the conclusion by differentiating it, or arguing that it is an elliptic function with poles only in the lattice points, and whose principal parts coincide with that of the poles of $\wp$, whence $h-\wp$ is an entire elliptic function, hence constant.
First, one has to see that $h(z)$ is a meromorphic function. Since each term in the sum is evidently holomorphic in $\mathbb{C}\setminus \Omega$, where $\Omega$ is the lattice spanned by $1$ and $\tau$, it suffices to see that the sum converges locally uniformly.
For real $x,y$, we have $\lvert \sin (x+iy)\rvert^2 = \sin^2 x + \sinh^2 y$, so the terms in the sum of $h$ decay exponentially, which shows the locally uniform convergence of the sum. To make it precise, consider $A(M) := \{ z : \lvert\operatorname{Im} z\rvert \leqslant M\}$. For a given $M > 0$, choose $N \in \mathbb{N}$ such that $N\cdot \lvert \operatorname{Im}\tau\rvert > 2M$. Then for $\lvert n\rvert \geqslant N$ we have $\lvert \operatorname{Im} (z+n\tau)\rvert \geqslant \lvert n\rvert \cdot\lvert \operatorname{Im}\tau\rvert - M \geqslant \frac12\lvert n\rvert\cdot \lvert \operatorname{Im}\tau\rvert$, and hence
$$
\lvert \sin (\pi(z+n\tau))\rvert^{2}
\geqslant \sinh^2 (\pi n\operatorname{Im}\tau/2) \geqslant Ce^{\pi\lvert n \operatorname{Im}\tau\rvert},
$$
thus $$\sum_{\lvert n\rvert \geqslant N} \frac{\pi^2}{\sin^2 (\pi(z+n\tau))}$$ converges uniformly on $A(M)$ by the Weierstraß $M$-test.
Hence $h$ is a holomorphic function on $\mathbb{C}\setminus\Omega$. Since the sum converges compactly, termwise differentiation is legitimate, and we have
$$h'(z) = \sum_{n=-\infty}^\infty \frac{d}{dz}\left(\frac{\pi^2}{\sin^2 (\pi(z+n\tau))}\right).$$
Now we can use the partial fraction decomposition of $\dfrac{\pi^2}{\sin^2 \pi w}$ to obtain
$$h'(z) = \sum_{n=-\infty}^\infty \frac{d}{dz}\left(\sum_{m\in \mathbb{Z}} \frac{1}{(z+m+n\tau)^2}\right).$$
The partial fraction decomposition of $\dfrac{\pi^2}{\sin^2 \pi w}$ converges compactly, hence can be differentiated term by term, and thus
$$h'(z) = \sum_{n=-\infty}^\infty \sum_{m\in\mathbb{Z}} \frac{-2}{(z+m+n\tau)^3} = \wp'(z).$$
The nested sum here converges absolutely and compactly, thus here we can rearrange the sum as we please.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Explain complex numbers My cousin asked me if I could provide him with a practical example involving complex numbers. I found it hard to do, so does anyone have an easy practical example of the use of complex numbers?
I tried to show him that complex numbers is needed to solve $x^2 = -1$, but he was not impressed.
|
Prelude
You mentioned in your comments that he is 13 years old.
I'm only a couple years older than that, and don't have any knowledge of practical uses.
Short Answer
However, I can tell you what imaginary numbers are used for (more generically): to describe numbers that aren't real.
I think it is best described with a quadratic equation that has no real solutions.
Example
We have the equation $ y = x^2 + 1 $.
We want to find the zeros (where the parabola intersects the $x$-axis).
We end up with the equation $ 0 = x^2 + 1 $. Solving for $x$, we get $ x = \pm\sqrt{-1} $. Using imaginary numbers, we can rewrite this as $ x = \pm i $.
If we were to graph the equation, we would see that the parabola never intersects the $x$-axis. Our solution, which is composed of imaginary numbers, tells us that there is no real solution.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 5
}
|
Begging the question in Rudin? I read this in Theorem 2.35 of Baby Rudin:
Corollary. (In the context of metric spaces.) If $F$ is closed and $K$ is compact then $F \cap K$ is compact.
Proof. Because intersections of closed sets are closed and because compact subsets of metric spaces are closed, so is $F \cap K$; since $F \cap K \subset K$, theorem 2.35 shows $F \cap K$ is compact.
He assumes that $F \cap K$ is a compact subset in order to prove $F \cap K$ is compact.
|
I have a Second Edition (1964) of Rudin in which the proof is given this way:
Theorems $2.26(b)$ and $2.34$ show that $F\cap K$ is closed; since
$F\cap K \subset K$, Theorem $2.35$ shows that $F\cap K$ is compact.
Theorem $2.26(b)$ says that intersections of closed sets are closed, $2.34$ says that compact subsets of metric spaces are closed, and $2.35$ that closed subsets of compact sets are compact.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
What can be said about a function that is odd (or even) with respect to two distinct points? This question is a little open-ended, but suppose $f : \mathbb R \to \mathbb R$ is odd with respect to two points; i.e. there exist $x_0$ and $x_1$ (and for simplicity, let's take $x_0 = 0$) such that
$$
(1): \quad f(x) = -f(-x)
$$
and
$$
(2): \quad f(x_1 + x) = -f(x_1 - x)
$$
for all $x$.
Then, the vague question I'd like to answer is
What else can we conclude about this function?
It seems maybe I can conclude it is periodic, with period $2x_1$ (in general $2\left|x_1 - x_0\right|$): given the values of $f$ on $(0,x_1)$, $(1)$ determines the values on $(-x_1,0)$, then this and $(2)$ determine the values on $(x_1,3x_1)$, then this and $(1)$ determine the values of $f$ on $(-3x_1,-x_1)$, this and $(2)$ determine the values on $(3x_1,5x_1)$, and so forth.
Is there anything else to say here?
|
No. There's really not much else to say. You get a periodic function that's got a certain symmetry (about some point it's "odd", and about a point offset from this by a half-period, it's also odd) from your conditions. But if I give you a periodic function satisfying this "double oddness" property on some period, then it'll turn out to be doubly-odd as a function on the reals. So the two ideas are really one and the same.
The double-oddness probably implies something about the Fourier coefficients, esp. since one of the "symmetry points" is the origin. Oddness about the origin tells you that all the "cosine" coefficients are zero. The other oddness probably says something like all the odd (or all the even?) sine coefficients are also zero, but I haven't worked out the details.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Proof that there is an order relation For an arbitrary set $M$ there is a relation $R \subseteq 2^M \times 2^M$ given by
$$ A \mathrel R B \Leftrightarrow A \cup \{x\} = B$$ The union is a disjoint union; there are no more details about what $x$ is.
Show that $R^*$, the reflexive and transitive hull of $R$, is an order relation.
So I have to show that $R^*$ is reflexive, transitive and anti-symmetric.
I'm a little confused.
To show that $R^*$ is reflexive: how can $(A,A)\in R^*$ for a set $A$? For that, $x$ would have to be an element of $A$. Or do I understand something wrong? I really have problems understanding what I should do now, though I know what reflexive, transitive and antisymmetric mean. How do I write it down formally correctly?
This is an old exercise. Please do not give me a complete solution, just a hint, perhaps for the reflexive part.
|
Since the union is specified to be a disjoint union, $R$ is not itself reflexive: you cannot choose $x\in A$, since in that case $A$ and $\{x\}$ are not disjoint. In fact $A\,R\,B$ if and only if $B$ is obtained from $A$ by adding one extra element of $M$ that was not in $A$.
I would begin by taking the reflexive closure of $R$, which I’ll call $R^+$: $$R^+=R\cup\left\{\langle A,A\rangle:A\in 2^M\right\}\;,$$ so for $A,B\in 2^M$ we have $A\,R^+\,B$ if and only if $A\subseteq B$ and $|B\setminus A|\le 1$. What needs to be added to $R^+$ to get the transitive closure $R^*$ of $R^+$? It’s a standard result that it’s obtained by adding $R^+\circ R^+$, $R^+\circ R^+\circ R^+$, and so on, so that
$$R^*=\bigcup_{n\ge 1}(R^+)^n\;,$$
where $(R^+)^1=R^+$ and $(R^+)^{n+1}=(R^+)^n\circ R^+$ for each $n\ge 1$. For $n\ge 1$ let $$R_n=\bigcup_{k=1}^n(R^+)^k$$ and show by induction on $n$ that for any $A,B\in 2^M$, $A\,R_n\,B$ if and only if $A\subseteq B$ and $|B\setminus A|\le n$. It follows that $A\,R^*\,B$ if and only if $A\subseteq B$ and $B\setminus A$ is finite. Writing this in the same style as the original definition of $R$, we can say that $A\,R^*\,B$ if and only if there is a finite $F\subseteq M$ such that $A\sqcup F=B$, where $\sqcup$ denotes disjoint union.
By construction $R^*$ is reflexive and transitive, so all that remains is to show that $R^*$ is antisymmetric, but that’s immediate: if $A\,R^*\,B$ and $B\,R^*\,A$, then $A\subseteq B$ and $B\subseteq A$, so $A=B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Why these are equivalent? Situation: operator theory, spectrum of a operator.
We consider this as definition:
$\lambda$ is an eigenvalue if $\lambda x=Tx$ for some $x\ne 0$,
but I see someone saying this:
$\lambda x-Tx=0\not \Rightarrow x=0$, so $\lambda$ is an eigenvalue.
I cannot see why the latter sentence implies that $\lambda$ is an eigenvalue. Some help? Is this a very basic logic problem?
|
The statement "$\lambda x = Tx$ for some nonzero $x$" is the same as "$\lambda x - Tx = 0$ for some nonzero $x$." So if $\lambda x - Tx = 0$ doesn't imply $x = 0$, then there's a nonzero $x$ satisfying the equation, so you're back to the first statement.
It might help to see that both formulations are equivalent to the third formulation
$\lambda$ is an eigenvalue iff $\lambda I - T$ has a nontrivial kernel.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is $\sqrt x \sin\frac{1}{x}$ continuous at $0$? If it is not defined, does it count as continuous? Is $\sqrt x \sin\frac{1}{x}$ continuous at $0$?
I found the limit of the function which is $0$, but the function is not defined at $0$. Is it continuous then?
|
If the function is undefined, it cannot be continuous. However, if the limit exists, you can define $g(x)$ to be $\sqrt{x} \sin(1/x)$ for $x \neq 0$ and let $g(0)=0$. Then $g$ would be continuous (provided you took the limit correctly).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/601971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Minimal time to ride all ski slopes
Suppose we want to know what the minimum time is to ride all ski slopes on a mountain. We know the time it takes to ride a slope, and we know the time it takes to take a ski lift to get from one ski station to another, given that we have to end up where we started.
This screams Minimum-Cost Flow Problem to me. Thus my first idea was to construct a directed graph whose vertices represent the ski stations and whose edges represent the slopes and lifts. Let the cost of an edge equal the time it takes to ride that slope/lift. Now I want to find the minimal time it takes to ride every slope (note: not necessarily every lift), but I can't seem to find a way to ensure that we ride every slope at least once.
Is there a chance to do it this way, or am I completely wrong with my attempt?
|
As far as I can tell, your problem is a lightly disguised version of Traveling Salesman (note that since you have to ski all the slopes, for algorithmic purposes the skiing time is irrelevant, all that matters is the time to travel between slopes.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Why is the Cech nerve $C(U)$ of a surjective map $U\to X$ weakly equivalent to $X$? Let $f:U\to X$ be a surjective map of sets and
$$
\cdots\ U\times_XU\times_XU\ \substack{\textstyle\rightarrow\\[-0.6ex]
\textstyle\rightarrow \\[-0.6ex]
\textstyle\rightarrow}\ U\times_XU\ \substack{\textstyle\rightarrow\\[-0.6ex]
\textstyle\rightarrow}\ U
$$
the simplicial set given by its Cech nerve $C(U)$. When I consider $X$ as a discrete simplicial set, there is a map $f':C(U)\to X$ of simplicial sets.
Why is $f'$ a weak equivalence?
I see that $\operatorname{colim}(U\times_XU \rightrightarrows U)\cong X$ and that $\pi_0(C(U))$ is that colimit, but I don't understand why there is a weak equivalence between the simplicial set $C(U)$ and the discrete one $\pi_0(C(U))$. Is $C(U)$ itself discrete? I don't think so.
Update: Is $C(U)$ weakly equivalent to $\pi_0(C(U))$ even for a non-surjective map $f:U\to X$ of sets?
|
Show that $f': C(U)\to X$ is an acyclic fibration. Using the fact that $C(U)$ is a groupoid, it suffices to verify two conditions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Another question on continuous functions and Cauchy's Integral formula. Let $C(z,r)$ denote the circle centered at $z$ with radius $r$. Let $f$ be a continuous function defined on a domain $D$. For $n=1,2$ and each $z \in D$ let
$A_n(z)=\lim_{r\to 0} \frac{1}{2\pi ir^n} \int_{C(z,r)} f(\zeta) d\zeta$ if the limit exists.
Find an example of a continuous function $f$ for which $A_2(z)$ does not exist for some $z \in D$
$\textbf{My Attempt:}$
Let $\gamma(\theta)=z+re^{i\theta}$, $\theta\in[0,2\pi]$.
I have tried a lot of functions, and for all of them the limit exists.
For example, let $f(z)=\bar{z}$. Then
$A_2(z)=\lim_{r \to 0} \frac{1}{2\pi i r^2}\int_{0}^{2\pi} (\bar{z}+re^{-i\theta})ire^{i\theta}\,d\theta=\lim_{r \to 0} \frac{1}{2 \pi i r^2}\left(ir\bar{z}\int_0^{2\pi}e^{i\theta}\,d\theta+2\pi i r^2\right)=1$, and so $A_2(z)$ exists.
I have also tried $f(z)=z\bar{z}$,$f(z)=Re(z)$, $f(z)=Im(z)$.
Also, the integral for any analytic $f$ will be zero and I am not sure what to conclude from $\frac{0}{0}$.
|
Hint: Consider
$$f(z) := \frac{\overline{z}}{|z|^{\frac{1}{2}}}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Cohomology groups for the following pair $(X,A)$ Let $X=S^1\times D^2$, and let $A=\{(z^k,z)\mid z\in S^1\}\subset X$. Calculate the groups and homomorphisms in the cohomology of the exact sequence of the pair $(X,A)$.
I know that theoretically one has $$0\rightarrow C_n(A)\rightarrow C_n(X)\rightarrow C_n(X,A)\rightarrow 0$$ then applies Hom$(-,\mathbb{Z})$, and then applies the snake lemma to obtain the long exact sequence $$...\rightarrow H^n(X,A)\rightarrow H^n(X)\rightarrow H^n(A)\rightarrow H^{n+1}(X,A)\rightarrow ...$$
but I have never seen an example done for an actual space (I'm using Hatcher), so my idea was to compute the homology groups instead and, using the universal coefficient theorem, obtain the cohomology groups; but even then I am not quite sure how I would obtain the maps.
If anyone could explain how to do this, or even give a link where they work out examples I would be very grateful :)
|
As both $X$ and $A$ are homotopy equivalent to $\mathbb S^1$, $H_1(X) = H_1(A) = \mathbb Z$, $H_0(X)=H_0(A)=\mathbb Z$, and all higher homology groups vanish. The long exact sequence of the pair is
$$0 \to H_2(X, A) \to \mathbb Z \overset{f}{\to} \mathbb Z \to H_1(X, A)\to \mathbb Z \overset{g}{\to} \mathbb Z \to H_0(X, A)\to 0,$$
where the first two $\mathbb Z$'s correspond to $H_1$ and the second two correspond to $H_0$. Then $f$ is multiplication by $k$, as the generator $z$ of $H_1(A)$ is mapped to $z^k$ in $H_1(X)$. On the other hand, $g$ is the identity, as $A$ and $X$ are both path connected and $g$ is the map induced by the inclusion $A\subset X$. Thus you have $H_2(X, A) = 0=H_0(X, A)$ and $H_1(X, A) = \mathbb Z/k\mathbb Z$. Thus you can use the universal coefficient theorem to find $H^i(X, A)$.
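Carrying out that last step: with $H_0(X,A)=H_2(X,A)=0$ and $H_1(X,A)=\mathbb Z/k$, the universal coefficient theorem gives
$$H^0(X,A)=0,\qquad H^1(X,A)=\operatorname{Hom}(\mathbb Z/k,\mathbb Z)=0,\qquad H^2(X,A)=\operatorname{Ext}(\mathbb Z/k,\mathbb Z)=\mathbb Z/k,$$
and all higher cohomology groups vanish.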
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Describe units and maximal ideals in this ring
If $p$ is a fixed prime integer, let $R$ be the set of all rational numbers that can be written in a form $\frac{a}{b}$ with $b$ not divisible by $p$. I need to describe all the units in $R$ and all maximal ideals in $R$.
$\mathbb{Z} \subset R$, because $n=\frac{n}{1}$ for every integer $n$. It also seems to me that $R$ is a PID, since the addition and multiplication operations are just usual addition and multiplication of rational numbers.
So if $\frac{a}{b} \in R$ and $p \nmid a$, then $\frac{a}{b}$ is a unit in $R$ since $\frac{a}{b} \cdot \frac{b}{a}=1_R$. Am I missing anything?
But how about maximal ideals in $R$?
|
You have an example of something called a local ring, a ring with a unique maximal ideal. You are taking the ring $\mathbb{Z}$ and localizing at $(p)$. (In short, localizing a ring $R$ at a prime ideal $I$ just means inverting everything outside of $I$. The fact that $I$ is prime tells us that $R\backslash I$ is multiplicatively closed; we write the result $R_I$.)
It is well known that the unique maximal ideal of the localization $R_I$ is the ideal generated by $I$, so in your case the unique maximal ideal is generated by $p$: everything of the form $p\cdot\frac{a}{b}\in R$ (fractions whose numerator is a multiple of $p$).
You could finish your problem using the following lemma:
Lemma: If $R$ is a local ring with unique maximal ideal $I$, then $R\backslash I=R^\times$.
Proof: Let $x$ be a unit. Then $x\notin I$ because the ideal generated by $x$ is $R$ and $I$ is maximal (in particular proper). Hence $x\in R\backslash I$.
Now let $x\in R\backslash I$, consider the ideal $(x)$. Assume for sake of contradiction that $x$ is not a unit, meaning that $(x)$ is proper. Hence, $(x)$ is contained in a maximal ideal, but there is only one such an ideal, so we get $(x)\subset I$, so $x\in I$, a contradiction with the fact that $x\in R\backslash I$, so we get that $(x)$ is not proper, meaning that $x$ is a unit.
Using the above lemma you could finish your problem. You already found all the units. Hence you found $R\backslash I=R^\times=\{\frac{a}{b}: p\nmid a\}$, taking complements: $I=\{\frac{a}{b}:p\mid a\}$.
Here is a reference if you want to see why localization at a prime ideal yields a local ring.
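To make the description concrete, here is a small illustrative sketch (the names and the choice $p=5$ are my own), using exact rational arithmetic:
```python
from fractions import Fraction

def in_R(q, p):
    """q = a/b in lowest terms lies in R = Z_(p) iff p does not divide b."""
    return q.denominator % p != 0

def is_unit(q, p):
    """a/b is a unit iff both it and its inverse lie in R,
    i.e. p divides neither numerator nor denominator."""
    return in_R(q, p) and q.numerator % p != 0

p = 5
for q in [Fraction(3, 7), Fraction(10, 7), Fraction(3, 10)]:
    print(q, "in R:", in_R(q, p), " unit:", is_unit(q, p))
# 3/7 is a unit; 10/7 lies in R but is in the maximal ideal (numerator
# divisible by 5); 3/10 is not in R at all.
```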
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
how does $\sum_{n=0}^{\infty} (-1)^n \frac{1}{1+n}$ diverge? I thought that to show an alternating series converges, two conditions needed to be verified:
$$a_n \ge a_{n+1}$$
which is true and
$$ \lim_{n\to\infty} a_n = 0, \qquad \text{and indeed} \qquad \lim_{n\to\infty}\frac{1}{1+n}=0$$
yet sources (wolfram alpha) indicate that it does not converge
|
In fact it converges. Let $b_n=(-1)^n$ and $a_n=\frac {1}{n+1}$. Then $a_n>0$ is decreasing and tends to $0$. Also $b_n$ has bounded partial sums, because $\left|\sum_{k=0}^{n} b_k\right|\leq 1$. So by Dirichlet's test the series converges. See here http://en.wikipedia.org/wiki/Dirichlet's_test
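A quick numerical check: the partial sums indeed settle down, to $\ln 2$ in fact, since $\sum_{n\geq 0}(-1)^n/(1+n)=\ln 2$.
```python
import math

s = 0.0
for n in range(100000):
    s += (-1) ** n / (1 + n)
print(s, math.log(2))  # partial sums approach ln 2 ~ 0.693147
```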
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Compute $\lim_{x\to 0}\dfrac{\sqrt[m]{\cos x}-\sqrt[n]{\cos x}}{x^2}$ For two positive integers $m$ and $n$, compute
$$\lim_{x\to 0}\dfrac{\sqrt[m]{\cos x}-\sqrt[n]{\cos x}}{x^2}$$
Without loss of generality I consider $m>n$ and multiply the numerator with its conjugate. But what next? Cannot proceed further! Help please!
|
Since $x$ is going to zero, expand $\cos x$ as a Taylor series (one term is sufficient) and use the fact that, for small values of $y$, $(1-y)^a$ is close to $1-ay$ (this also comes from a Taylor series). So you will easily establish that
$\cos^{1/m}(x) = 1- \dfrac{x^2}{2m}+O(x^4)$
Doing the same for $n$, you end up with $\dfrac{m - n}{2mn}$
When $m=n$ the difference vanishes identically, and the limit is $0$, which the formula also gives.
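If you want to double-check symbolically (a sketch; the values $m=5$, $n=3$ are arbitrary):
```python
import sympy as sp

x = sp.symbols('x')
m, n = 5, 3  # any distinct positive integers
expr = (sp.cos(x) ** sp.Rational(1, m) - sp.cos(x) ** sp.Rational(1, n)) / x**2
print(sp.limit(expr, x, 0), sp.Rational(m - n, 2 * m * n))  # both 1/15
```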
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Topology - Projections I'm pretty sure I have this right, but want to double check and make sure.
Let $X_1$ = $X_2$ = $\mathbb{R}$ and let $p_1: X_1 \times X_2 \rightarrow X_1$ and $p_2: X_1 \times X_2 \rightarrow X_2$ be the projections. Let $A = \{(x,y): 1 \le x \le 2,\ 3 \le y \le 3x\}$. Find $p_1(A)$ and $p_2(A)$.
Answer?
$p_1(A)= [1,2]$
$p_2(A)= [3,6]$
|
Your solution is correct.
Since $A=\{(x,y)\mid 1\le x\le2, 3\le y \le3x \}$, its projection $p_1(A)$ is a subset of $[1,2]$. On the other hand, for each $x\in[1,2]$ there is a $y\in[3,3x]$, so there is a point in $A$ which projects to $x$, thus $p_1(A)=[1,2]$
Since $3\le y\le3x\le6$, we have $p_2(A)\subseteq[3,6]$. And if $3\le y\le6$, then for $x=y/3$ the point $(x,y)\in A$ and projects to $y$, so $p_2(A)=[3,6]$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many different colourings are there of the cube using 3 different colours? A cube is called coloured if each of its faces is coloured by one of 3 given colours. Two colourings are considered to be the same if there is a rotation of the cube carrying one colouring to the other. How would you prove there are exactly 57 different colourings of the cube?
I think the approach is to use Burnsides orbit counting theorem?
|
Yes your approach is correct, and do you see that you should use a group of order 24? Which one? See also here for the full answer. But first try it yourself!
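Once you have identified the rotation group of order $24$ and the number of face-cycles of each rotation type, the Burnside computation is a one-liner. Here is a sketch (the cycle counts below are the standard ones, but do verify them yourself first):
```python
# Burnside count for face 3-colourings of the cube under the rotation group
# (order 24), using the cycle counts of each rotation type on the 6 faces.
rotations = [
    (1, 6),  # identity: 6 face-cycles
    (6, 3),  # six 90-degree face rotations: 3 cycles each
    (3, 4),  # three 180-degree face rotations: 4 cycles each
    (8, 2),  # eight 120-degree vertex rotations: 2 cycles each
    (6, 3),  # six 180-degree edge rotations: 3 cycles each
]
k = 3
total = sum(count * k**cycles for count, cycles in rotations)
print(total // 24)  # 57
```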
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Hamming distance for a linear code I want to solve this exercise:
"""
Prove the equality $d_{min}(D)=\min\{wt_H(z) \mid z \in D,\ z\neq 0 \}$ for a linear code $D$.
"""
$wt_H$ denotes the Hamming weight. What is $d_{min}$? I read that it is the minimum distance of the code. What do I have to calculate to get this value $d_{min}$?
Does one of you know an ansatz for how to prove this equality?
|
Let $x\neq y\in D$ be such that $d_{min}(D)=d(x,y)$. Note that $d(a,b)=d(a-c,b-c)$ for all $a,b,c\in D$. Hence $d_{min}(D)=d(x,y)=d(x-y,0)=wt_H(x-y)$, and $x-y$ is a nonzero codeword by linearity. Conversely, for any nonzero $z\in D$ we have $wt_H(z)=d(z,0)\geq d_{min}(D)$, since $0\in D$.
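To make the statement concrete, here is a brute-force sketch on a small example (the $[7,4]$ binary Hamming code, my own choice) confirming that the minimum distance equals the minimum nonzero weight:
```python
from itertools import product

# Generator matrix of the [7,4] binary Hamming code.
G = [(1,0,0,0,0,1,1), (0,1,0,0,1,0,1), (0,0,1,0,1,1,0), (0,0,0,1,1,1,1)]

codewords = []
for msg in product([0, 1], repeat=4):
    cw = tuple(sum(m * g for m, g in zip(msg, col)) % 2
               for col in zip(*G))  # columns of G give the codeword entries
    codewords.append(cw)

d_min = min(sum(a != b for a, b in zip(x, y))
            for x in codewords for y in codewords if x != y)
min_wt = min(sum(c) for c in codewords if any(c))
print(d_min, min_wt)  # both 3
```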
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Given the probability that X > Y and X < Y If $P[X>Y]=\frac{1}{2}$ and $P[X<Y]=\frac{1}{2}$, then is $E[X]=E[Y]$?
How can I visualize this problem?
|
Assume that $X$ is standard normal and consider
$$Y=2X+X^+.$$
Then $E[X]=0$, $[Y\gt X]=[X\gt0]$ hence $P[Y\gt X]=\frac12$, and $[Y\lt X]=[X\lt0]$ hence $P[Y\lt X]=\frac12$.
Furthermore, $E[Y]=2E[X]+E[X^+]=2\cdot0+\frac1{\sqrt{2\pi}}$. Thus, $E[Y]\ne E[X]$.
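A Monte Carlo sketch of this counterexample (sample size is my own choice):
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(10**6)
Y = 2 * X + np.maximum(X, 0)          # Y = 2X + X^+

print((Y > X).mean(), (Y < X).mean()) # both about 0.5
print(X.mean(), Y.mean())             # ~0 versus ~1/sqrt(2*pi) ~ 0.3989
```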
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/602971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
When can a metric space be embedded in the plane? It's easy to check if a graph can be embedded in the plane: just check for forbidden minors. Is it also easy to check if a "distance function" can be embedded?
Are there any necessary and sufficient conditions one can check?
I know there's a lot of research into approximate embeddings, but clearly in some cases that's not necessary.
|
I have the following
Conjecture. A metric space $(X,d)$ can be isometrically embedded in the plane $\mathbb R^2$ endowed with the standard metric $\rho$ iff each four-point subspace of $(X,d)$ can be isometrically embedded in the plane.
To build such an embedding $i$ we fix three different points $x$, $y$ and $z$ of $X$ for which the triangle inequality is strict, and fix an arbitrary embedding $i_0:(\{x,y,z\},d|\{x,y,z\})\to (\mathbb R^2, \rho )$ (the case where no such triple $\{x,y,z\}$ exists has to be considered separately). Let $t\in X$ be an arbitrary point. It seems that there exists a unique point $t'\in \mathbb R^2$ such that $d(x,t)=\rho(i_0(x),t')$, $d(y,t)=\rho(i_0(y),t')$, and $d(z,t)=\rho(i_0(z),t')$. Put $i(t)=t'$. Then it seems that $i$ is an isometric embedding.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Are there simple methods for calculating the determinant of symmetric matrices? I've seen that there are lots of exercises about determinants of symmetric matrices in my algebra books. Some are easy and others are a bit more twisted, but the basic problem is almost always the same. I have been trying to come up with a method to calculate these a bit more quickly, since—at least for me—they invariably end with a very ugly stream of numbers and letters.
For example I started with a $3\times 3$ matrix like this:
$$A= \begin{pmatrix}
a & b & c \\
b & a & b \\
c & b & a \end{pmatrix}$$
which looks fairly simple, but the best I could come up with for the determinant was:
$$2b^2(c-a)+a(a^2-c^2)
\quad
\text{ or }
\quad
a(a^2-2b^2-c^2)+2b^2c$$
These look horrific and absolutely not what anyone in his right mind would use. It goes without saying that I haven't even tried this with matrices bigger than $3\times 3$. Is there something I have been missing, or is there nothing to do about it?
|
Edit (July 2021): As suggested in the comment, the answer here calculated the determinant of
$$\begin{pmatrix}
\ a & b & c \\
b & c & a \\
c & a & b \end{pmatrix},$$
instead of the one in the post.
Original answer:
do R1 --> R1+R2+R3
take out $(a+b+c)$
you will end up with
$$=(a+b+c)\begin{vmatrix}
1 & 1 & 1 \\
b & c & a \\
c & a & b \end{vmatrix}$$
c1 --> c1-c3
c2 --> c2-c3
$$=(a+b+c)\begin{vmatrix}
0 & 0 & 1 \\
b-a & c-a & a \\
c-b & a-b & b \end{vmatrix}$$
expanding along R1:
$$=-(a+b+c)(a^2 +b^2 + c^2 -ab-bc-ca)$$
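A symbolic cross-check of both determinants (the matrix from the post and the circulant treated above); this is my own sketch, with the factored forms indicated up to SymPy's ordering:
```python
import sympy as sp

a, b, c = sp.symbols('a b c')
posted    = sp.Matrix([[a, b, c], [b, a, b], [c, b, a]])
circulant = sp.Matrix([[a, b, c], [b, c, a], [c, a, b]])

print(sp.factor(posted.det()))     # (a - c)*(a**2 + a*c - 2*b**2), up to ordering
print(sp.factor(circulant.det()))  # -(a + b + c)*(a**2 + b**2 + c**2 - a*b - a*c - b*c)
```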
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 7,
"answer_id": 0
}
|
How to show that every $\alpha$-Hölder function, with $\alpha>1$, is constant? Suppose $f:(a,b) \to \mathbb{R} $ satisfy $|f(x) - f(y) | \le M |x-y|^\alpha$ for some $\alpha >1$
and all $x,y \in (a,b) $. Prove that $f$ is constant on $(a,b)$.
I'm not sure which theorem should I look to prove this question. Can you guys give me a bit of hint? First of all how to prove some function $f(x)$ is constant on $(a,b)$? Just show $f'(x) = 0$?
|
Hint: Show that $f'(y)$ exists and is equal to $0$ for all $y$. Then as usual by the Mean Value Theorem our function is constant.
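To spell out the bound behind the hint (a one-line sketch): for $x\neq y$ in $(a,b)$,
$$\left|\frac{f(x)-f(y)}{x-y}\right|\le M\,|x-y|^{\alpha-1}\xrightarrow[x\to y]{}0,$$
since $\alpha-1>0$, so $f'(y)$ exists and equals $0$ at every $y$.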
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 4
}
|
Radius of Convergence for analytic functions I know that the radius of convergence of any power series can be found by simply using the root test, ratio test etc.
I am confused as to how to find the radius of convergence for an analytic $f$ such as
$f(z)=\frac{4}{(z-1)(z+3)}$.
I can't imagine that I would have to find the power series representation of this, find the closed form, and then use one of the convergence tests. I am fairly certain that the radius of convergence would have to do with the singularities at $1$ and $-3$, however, I can't find a formula for the radius of convergence..
|
It is very useful to remember that the radius of convergence of a power series in the complex plane is basically the distance to the nearest singularity of the function. Thus if a function has poles at $i$ and $-i$ and you do a power series expansion about the point $3+i$, then the radius of convergence will be $3$, since the nearest singularity is $i$, at distance $|3+i-i|=3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Recursive Square Root Futility Closet This post on Futility Closet the other day: http://www.futilitycloset.com/2013/12/05/emptied-nest/
asked for the solution to this equation:
\begin{equation}\sqrt{x+\sqrt{x+\sqrt{x...}}} = 2\end{equation}
The problem can be described recursively as
\begin{equation} \sqrt{x + 2} = 2 \end{equation}
and then solved for x = 2. This is very similar to the solution on Futility Closet.
It can be extended into the more general
\begin{equation} \sqrt{x + y} = y \end{equation}
and then
\begin{equation} x = y^2 - y \end{equation}
which works, as far as I'm aware for all x>1.
When 1 is plugged in for y, however, the output becomes 0. This is not altogether unfathomable (though it is quite strange) in and of itself, but it is the same result as when 0 is plugged in. That means that either
\begin{equation}\sqrt{0+\sqrt{0+\sqrt{0...}}} = \sqrt{1+\sqrt{1+\sqrt{1...}}} \end{equation}
or one of these values is undefined. I expect that either imaginary numbers or dividing by zero is involved, but I can't quite figure out how. Why does the reasoning fall apart here?
|
Some steps for further investigation.
*
*If $x > 0$ is fixed, show that the nested radical
$$
\sqrt{x+\sqrt{x+\sqrt{x+\cdots}}}
$$
converges to a positive number.
*Define the function
$$
f(x) = \sqrt{x+\sqrt{x+\sqrt{x+\cdots}}}.
$$
Show that $f(x) > 1$ for all $x > 0$.
*Calculate
$$
\lim_{x\to 0^+} f(x).
$$
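A numerical sketch of steps 1 to 3 (iterating the radical from the inside out; the depth and sample points are my own choices):
```python
import math

def nested_sqrt(x, depth=60):
    """Evaluate sqrt(x + sqrt(x + ...)) by iterating v -> sqrt(x + v)."""
    v = 0.0
    for _ in range(depth):
        v = math.sqrt(x + v)
    return v

for x in [1.0, 0.1, 0.01, 0.0001]:
    closed = (1 + math.sqrt(1 + 4 * x)) / 2   # fixed point of v = sqrt(x + v)
    print(x, nested_sqrt(x), closed)
# As x -> 0+ the value tends to 1, not 0; this jump resolves the puzzle.
```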
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Uniform convergence of $\sum\limits_{n=1}^\infty \frac{(-1)^n}{ne^{nx}}$ Prove that
$$ f(x) = \sum\limits_{n=1}^\infty \frac{(-1)^n}{ne^{nx}} = \sum\limits_{n=1}^\infty \frac{(-e^{-x})^n}{n}$$
is uniformly convergent for $x \in [0,\infty)$.
Attempt:
At first, this looked a lot like the alternating series. At $x=0$, it is! And for $ x > 0$, $f (x)$ appears to satisfy $|f(x)| < \frac{1}{n}$, but I haven't been able to strengthen this assumption in any way. As the sum of $\frac{1}{n}$ diverges, this isn't useful to apply the M-test for uniform convergence.
Next, I've simply tried to find a way to prove pointwise convergence to $0$ for $f(x)$, but haven't been successful. If it could be shown that $f(x)$ has a pointwise limit function over an arbitrary closed interval $[a,b], 0 \le a$, then Dini's Theorem gives uniform convergence. But, I haven't been able to reach that conclusion.
|
The series converges uniformly on $[1,\infty)$ by the Weierstrass M-Test (thanks to the exponential term). To prove it converges uniformly on $[0,1]$, use properties of convergent alternating series. For any convergent alternating series of reals with terms nonincreasing in absolute value, the absolute value of the difference between the $n$th partial sum $s_n$ and the sum $s$ is less than or equal to the absolute value of the $n+1$st term.
By the way, the series is an alternating series for all $x \geq 0$.
The proof above for $0\leq x \leq 1$ looks like it works for all $x \geq 0$, but I'm going to stick with what I wrote above.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Application of Fenchel–Young Inequality I'm stuck on the weak duality inequality.
For $X,Y$ euclidean spaces: $f: X\rightarrow (-\infty,\infty]$, $g: Y\rightarrow (-\infty,\infty]$ and $A:X\rightarrow Y$ linear bounded mapping.
I want to show that $\inf_{x\in X}\{f(x)+g(Ax)\}\geq \sup_{y \in Y}\{-f^{*}(A^{*}y)-g^{*}(-y)\} $
I use the Fenchel–Young inequality to do so: for $u \in X$ and $x \in \operatorname{dom}(f)$ it holds that $f(x)+f^{*}(u)\geq \langle u,x\rangle$, where $f^{*}$ is the Fenchel conjugate.
Starting now:
$\inf_{x\in X}\{f(x)+g(Ax)\}\geq \inf_{x\in X}\{-f^{*}(x^{*})+\langle x^{*},x\rangle+g(Ax)\}$ for arbitrary $x^{*}\in X$, which is $\geq \inf_{x\in X}\{-f^{*}(x^{*})+\langle x^{*},x\rangle+\langle -y,Ax\rangle-g^{*}(-y)\}$ for arbitrary $x^{*}\in X$, $y\in Y$.
Since they are arbitrary, I would choose them such that $x^{*}=A^{*}y$ holds. But how do I get the supremum, to finish the proof?
|
One approach is to reformulate the primal problem as
\begin{align*}
\operatorname*{minimize}_{x,y} & \quad f(x) + g(y) \\
\text{subject to} & \quad y = Ax.
\end{align*}
Now formulate the dual problem. The Lagrangian is
\begin{equation*}
L(x,y,z) = f(x) + g(y) + \langle z, y - Ax \rangle
\end{equation*}
and the dual function is
\begin{align*}
G(z) &= \inf_{x,y} L(x,y,z) \\
&= -f^*(A^Tz) - g^*(-z).
\end{align*}
The dual problem is
\begin{align*}
\operatorname*{maximize}_z & \quad -f^*(A^Tz) - g^*(-z).
\end{align*}
The standard weak duality result from convex optimization now tells us that $p^\star \geq d^\star$, where $p^\star$ and $d^\star$ are the primal and dual optimal values.
In other words,
\begin{equation*}
\inf_x f(x) + g(Ax) \geq \sup_z -f^*(A^T z) - g^*(-z).
\end{equation*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Polynomial Interpolation and Error I have numerical analysis final coming up in a few weeks and I'm trying to tackle a practice exam.
Assuming $p(x)$ interpolates the function $f(x)$, find the polynomial $p(x)$ that satisfies the following conditions:
$$p(0) = 20, p(1) = 26, p'(1)=9, p(2) = 36, p'(2)=16$$.
I also have to provide an expression for the interpolation error.
I've been going through my book and notes all afternoon, but I'm afraid I just don't understand.
|
As asked by Neurax, I describe a way of getting the polynomial without using a matrix for the 5x5 system.
I keep the notation $p(x)=a+bx+cx^2+dx^3+ex^4$ from my previous answer and use the conditions in the order they appear in the initial post.
Equations then write
p(0) = 20 gives a = 20
p(1) = 26 gives a + b + c + d + e = 26
p'(1) = 9 gives b + 2 c + 3 d + 4 e = 9
p(2) = 36 gives a + 2 b + 4 c + 8 d + 16 e = 36
p'(2) = 16 gives b + 4 c + 12 d + 32 e = 16
Then the first equation gives simply a = 20. Replacing this value in the four next equations lead to
b + c + d + e = 6
b + 2 c + 3 d + 4 e = 9
2 b + 4 c + 8 d + 16 e = 16
b + 4 c + 12 d + 32 e = 16
Now, we eliminate b from what became the new first equation. This gives
b = 6 - c - d - e
and we inject this expression in the next three equations. This leads now to
c + 2 d + 3 e = 3
2 c + 6 d + 14 e = 4
3 c + 11 d + 31 e = 10
Now, we eliminate c from what became the new first equation. This gives
c = 3 - 2 d - 3 e
and we inject this expression in the next two equations. This leads now to
2 d + 8 e = -2
5 d + 22 e = 1
Now, we eliminate d from what became the new first equation. This gives
d = -4 e - 1
Injected in the remaining equation, we then have
2 e = 6
Now we are done and we shall back substitute : so e = 3 , d = -13, c = 20 , b = -4 and a = 20.
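For what it's worth, a quick cross-check of the back-substitution with NumPy:
```python
import numpy as np

# Interpolation conditions for p(x) = a + b x + c x^2 + d x^3 + e x^4
A = np.array([[1, 0, 0, 0, 0],     # p(0)  = 20
              [1, 1, 1, 1, 1],     # p(1)  = 26
              [0, 1, 2, 3, 4],     # p'(1) = 9
              [1, 2, 4, 8, 16],    # p(2)  = 36
              [0, 1, 4, 12, 32]])  # p'(2) = 16
rhs = np.array([20, 26, 9, 36, 16])
print(np.linalg.solve(A, rhs))  # [20. -4. 20. -13. 3.]
```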
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Spring Calculation - find mass
A spring with an $m$-kg mass and a damping constant $9$ can be held stretched $2.5 \text{ meters}$ beyond its natural length by a force of $7.5 \text{ Newtons}$. If the spring is stretched $5 \text{ meters}$ beyond its natural length and then released with zero velocity, find the mass $m$ that would produce critical damping.
My work:
The restoring force is $-kx$. Then
$$7.5 = -k(2.5) \\
-\frac{7.5}{2.5} = k \\
ma = -\frac{7.5x}{2.5} \\
my’’ + 9y’ + -3y = 0,\quad y(0) = 2.5, y(5) = 0 \\
\frac{-9 \pm \sqrt{81 + 4(m)(3)}}{2m} \\
-\frac{9}{2m} \pm \frac{\sqrt{81+12m}}{2m} \\
y = Ae^{-(9/2)x}\cos\left(\frac{\sqrt{81+12m}}{2m}x\right) + Be^{-(9/2)x}\sin\left(\frac{\sqrt{81+12m}}{2m}x\right) \\
2.5 = A + B\cdot 0 \\
0 = (2.5)e^{-45/2}\cos\left(\sqrt{81+12m}\frac{5}{2m}\right) + Be^{-45/2}\sin\left(\sqrt{81+12m}\frac{5}{2m}\right)$$
Any help would be appreciated
|
Critical damping occurs when $(\gamma^2)-4mk=0$
Therefore,
$$(9^2)-4m(7.5/2.5)=0$$
$$81=12m$$
$$m=6.75\ \mathrm{kg}$$
Bob Shannon also got the same answer, but I wanted to present it this way because I thought it might be more clear to some.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Why does Trapezoidal Rule have potential error greater than Midpoint? I can approximate the area beneath a curve using the Midpoint and Trapezoidal methods, with errors such that:
$Error_m \leq \frac{k(b-a)^3}{24n^2}$ and $Error_T \leq \frac{k(b-a)^3}{12n^2}$.
Doesn't this suggest that the Midpoint Method is twice as accurate as the Trapezoidal Method?
|
On an interval where a function is concave-down, the Trapezoidal Rule will consistently underestimate the area under the curve. (And inversely, if the function is concave up, the Trapezoidal Rule will consistently overestimate the area.)
With the Midpoint Rule, each rectangle will sometimes overestimate and sometimes underestimate the function (unless the function has a local minimum/maximum at the midpoint), and so the errors partially cancel out. (They exactly cancel out if the function is a straight line.)
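A quick numerical illustration (my own sketch; $e^x$ on $[0,1]$ is an arbitrary concave-up test case): the trapezoid error comes out roughly twice the midpoint error, and with the opposite sign.
```python
import numpy as np

f = lambda x: np.exp(x)          # smooth, concave-up test function
a, b, n = 0.0, 1.0, 64
exact = np.e - 1

x = np.linspace(a, b, n + 1)
h = (b - a) / n
trap = h * (f(x[:-1]) + f(x[1:])).sum() / 2      # trapezoid rule
mid  = h * f((x[:-1] + x[1:]) / 2).sum()         # midpoint rule

print(abs(trap - exact), abs(mid - exact))  # trapezoid error ~ 2x midpoint error
```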
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 0
}
|
Simultaneous irreducibility of minimal polynomials
Let $F$ be a field. Let $u,v$ be elements in an algebraic extension of $F$ with minimal polynomials $f$ and $g$ respectively. Prove that $g$ is irreducible over $F(u)$ if and only if $f$ is irreducible over $F(v)$.
I have only obtained that $f,g$ are irreducible over $F$.
|
$g$ is irreducible over $F(u)$ if and only if $[F(u,v):F(u)]=\deg g$, and $f$ is irreducible over $F(v)$ if and only if $[F(u,v):F(v)]=\deg f$. If you now consider $[F(u,v):F]$, you are done.
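Spelling out the suggested step: the tower law gives
$$[F(u,v):F]=[F(u,v):F(u)]\cdot\deg f=[F(u,v):F(v)]\cdot\deg g,$$
and since $[F(u,v):F(u)]\le\deg g$ and $[F(u,v):F(v)]\le\deg f$ always hold, each of the equalities $[F(u,v):F(u)]=\deg g$ and $[F(u,v):F(v)]=\deg f$ is equivalent to $[F(u,v):F]=\deg f\cdot\deg g$, hence they are equivalent to each other.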
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
polynomial over a finite field Show that in a finite field $F$ there exists $p(x)\in F[X]$ s.t $p(f)\neq 0\;\;\forall f\in F$
Any ideas how to prove it?
|
Take some element $\alpha_1\in F$.
Then consider $f_1(x)=(x-\alpha_1)+1$. What would $f_1(\alpha_1)$ be?
Soon you will see that $f_1(\alpha_1)$ is nonzero, but possibly for some $\alpha_2$ we have $f_1(\alpha_2)=0$.
Because of this I would now include $(x-\alpha_2)$ in $f_1(x)$ to make it
$f_2(x)=(x-\alpha_1)(x-\alpha_2)+1$. What would $f_2(\alpha_1)$ and $f_2(\alpha_2)$ be?
Keep doing this until you believe the resulting polynomial does not have a root in $F$.
You have two simple questions:
*
*Will the result still be a polynomial if you repeat these steps?
*How do you make sure that no element of the field is a root of the result?
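A sanity check of where this leads (a sketch; over the prime field $\mathbb F_p$ the full product $\prod_{\alpha\in F}(x-\alpha)+1$ is $x^p-x+1$, which by Fermat's little theorem takes the value $1$ everywhere):
```python
# Over F_p: prod_{alpha}(x - alpha) + 1 = x^p - x + 1 has no root,
# since a^p = a (Fermat), so the polynomial evaluates to 1 at every a.
for p in [2, 3, 5, 7, 11, 13]:
    roots = [a for a in range(p) if (pow(a, p, p) - a + 1) % p == 0]
    print(p, roots)  # always []
```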
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/603986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
idea for the completion of a metric space While doing the proof of the existence of completion of a metric space, usually books give an idea that the missing limit points are added into the space for obtaining the completion. But I do not understand from the proof where we are using this idea as we just make equivalence classes of asymptotic Cauchy sequences and accordingly define the metric.
|
For a metric space $\langle T, d\rangle$ to be complete, all Cauchy sequences must have a limit. So we add that limit by defining it to be an "abstract" object, which is defined by "any Cauchy sequence converging to it".
We have two cases:
*
*The Cauchy sequence already had a limit in $T$. In this case there is no need to add new points, and we identify that abstract object to the already existing point.
*The Cauchy sequence did not converge in $T$. Then you add this "object" to your space, and define distance accordingly. You can prove using the triangle inequality that you can choose any "equivalent" Cauchy sequence, and the metric will be the same.
The important point is that the points we add are not pre-existing points; they are just abstract objects with properties that make them behave well under your tools and language.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Alternating sum of binomial coefficients $\sum(-1)^k{n\choose k}\frac{1}{k+1}$ I would appreciate if somebody could help me with the following problem
Q:Calculate the sum:
$$ \sum_{k=1}^n (-1)^k {n\choose k}\frac{1}{k+1} $$
|
$$
\begin{align}
\sum_{k=1}^n(-1)^k\binom{n}{k}\frac1{k+1}
&=\sum_{k=1}^n(-1)^k\binom{n+1}{k+1}\frac1{n+1}\\
&=\frac1{n+1}\sum_{k=2}^{n+1}(-1)^{k-1}\binom{n+1}{k}\\
&=\frac1{n+1}\left(1-(n+1)+\sum_{k=0}^{n+1}(-1)^{k-1}\binom{n+1}{k}\right)\\
&=-\frac n{n+1}
\end{align}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How to prove there exists a solution? Guillemin Pollack Prove there exists a complex number $z$ such that
$$
z^7+\cos(|z^2|)(1+93z^4)=0.
$$
(For heaven's sake don't try to compute it!)
|
Although the answers above are correct, they fail to use $deg_2$ as the book of Guillemin & Pollack suggests. Here's an approach that uses the notion of $deg_2$:
Let $f:\mathbb{C} \to \mathbb{C}$ be defined as
$$
f(z)=z^7+\cos(|z^2|)(1+93z^4).
$$
Consider the homotopy $F(z,t)=tf(z)+(1-t)z^7$ between $f(z)$ and $z^7$. Now if $W=\left\{z: |z|\leq R \right\}$ (where $R$ is taken large enough such that $F(z,t)\neq0$ for all $(z,t) \in \partial W \times [0,1]$), then the maps
$$
\frac{F(\cdot,t)}{|F(\cdot, t)|} : \partial W \to S^1
$$
are well defined for all $t \in [0,1]$ (here $S^1=\left\{z: |z|=1 \right\}$); moreover, since $F(z,1)=f(z)$ and $F(z,0)=z^7$ are homotopic, we have that
$$
deg_2 \left(\frac{f}{|f|}\right)=deg_2 \left( \frac{z^7}{|z^7|}\right)
$$
but clearly $g(z)=z^7/|z^7|$ makes seven turns around any point $y \in S^1$, then #$\left(g^{-1}(y)\right) = 7$, and since $7 \equiv 1 \pmod 2$, we have that $deg_2(g)$ is nonzero, that is
$$
deg_2 \left(\frac{f}{|f|}\right)\neq 0
$$
this means that there exist $z\in W$ such that $f(z)=0$ as required.
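The degree argument only asserts existence, but for what it's worth a root is easy to locate numerically; here is a bisection sketch (the interval $[-2,0]$ is chosen by inspecting signs) that even finds a real one:
```python
import math

def g(x):
    return x**7 + math.cos(x * x) * (1 + 93 * x**4)

lo, hi = -2.0, 0.0
assert g(lo) < 0 < g(hi)   # sign change guarantees a root in between
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
print(lo, g(lo))  # a real zero near x ~ -1.24
```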
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Is it possible that $(x+2)(x+1)x=3(y+2)(y+1)y$ for positive integers $x$ and $y$? Let $x$ and $y$ be positive integers. Is it possible that $(x+2)(x+1)x=3(y+2)(y+1)y$?
I ran a program for $1\le{x,y}\le1\text{ }000\text{ }000$ and found no solution, so I believe there are none.
|
The equation $x(x+1)(x+2) = 3y(y+1)(y+2)$ is equivalent to $\left(\frac{24}{3y-x+2}\right)^2 = \left(\frac{3y-9x-6}{3y-x+2}\right)^3-27\left(\frac{3y-9x-6}{3y-x+2}\right)+90$.
This is an elliptic curve of conductor $3888$. Cremona's table says its group of rational points is of rank $2$, and is generated by the obvious solutions $(x,y) \in \{-2;-1;0\}^2$
I am not sure how one would go about proving an upper bound for the integral solutions of the original equation. There are papers on the subject (for example, Stroeker and de Weger's "Solving elliptic diophantine equations: The general cubic case." seems to be applicable here)
Also, see How to compute rational or integer points on elliptic curves
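For reference, a brute-force scan along the lines the asker describes (the bound is my own choice) finds no positive solutions:
```python
# Search x(x+1)(x+2) = 3 y(y+1)(y+2) in positive integers below a bound.
N = 200_000
products = {x * (x + 1) * (x + 2): x for x in range(1, N)}
hits = [(products[3 * p], y)
        for y in range(1, N)
        for p in [y * (y + 1) * (y + 2)]
        if 3 * p in products]
print(hits)  # [] : no solutions with 1 <= x, y < 200000
```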
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 2,
"answer_id": 1
}
|
Integral involving a confluent hypergeometric function I have the following integral involving a confluent hypergeometric function:
$$\int_{0}^{\infty}x^3e^{-ax^2}{}_1F_1(1+n,1,bx^2)dx$$
where $a>b>0$ are real constants, and $n\geq 0$ is an integer.
Wolfram Mathematica returns the following solution: $\frac{a^{n-1}(a+bn)}{(a-b)^{n+2}}$. However, I can't figure out how it arrived at it (I always try to check the solutions "on paper" that Mathematica gives me -- or at least using Gradshteyn and Ryzhik). Can anyone help?
|
Let's start with the hypergeometric function. We have:
\begin{eqnarray}
{}_1F_1[1+n;1;b x^2] &=&
\sum\limits_{m=0}^\infty \frac{(1+n)^{(m)}}{m!} \cdot \frac{(b x^2)^m}{m!} \\
&=& \sum\limits_{m=0}^\infty \frac{(m+1)^{(n)}}{n!} \cdot \frac{(b x^2)^m}{m!} \\
&=&\left. \frac{1}{n!} \frac{d^n}{d t^n} \left( t^n \cdot e^{b x^2 \cdot t} \right)\right|_{t=1} \\
&=& \frac{1}{n!} \sum\limits_{p=0}^n \binom{n}{p} n_{(p)} (b x^2) ^{n-p} \cdot e^{b x^2}
\end{eqnarray}
Therefore the integral in question reads:
\begin{eqnarray}
\int\limits_0^\infty x^3 e^{-a x^2}\, {}_1F_1[1+n;1;b x^2]\, dx &=&
\frac{1}{2} \sum\limits_{p=0}^n \frac{n_{(p)}}{p! (n-p)!} b^{n-p} \frac{(n-p+1)!}{(a-b)^{n-p+2}} \\
&=&\frac{1}{2} \frac{1}{(a-b)^{n+2}} \sum\limits_{p=0}^n \binom{n}{p} \frac{(n-p+1)^{(p)}}{(n-p+2)^{(p-1)}} \cdot b^{n-p} (a-b)^p \\
&=& \frac{1}{2} \frac{1}{(a-b)^{n+2}} \sum\limits_{p=0}^n \binom{n}{p} (n-p+1) \cdot b^{n-p} (a-b)^p \\
&=& \frac{1}{2} \frac{1}{(a-b)^{n+2}}\, a^{n-1} (a+b n)
\end{eqnarray}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
If $R[s], R[t]$ are finitely generated as $R$-modules, the so is $R[s + t]$.
Let $S \supset R$ as rings with $1 \in R$. Suppose that $s, t \in S$ and that the subrings $R[s], R[t]$ are finitely generated by $\{1, s, \dots, s^k\}$ and $\{1, t, \dots, t^m \}$. Then $R[s + t]$ is also finitely generated.
Let $g$ be a minimal degree monic polynomial for $s$ over $R$.
I'm thinking let $f(X) = X + t. \ $ Then $s + t = f(s). \ $ So $f(X)^n = q(X) g(X) + r(X)$ by the division algo, where $r(X) = 0$ or $\deg{r} \lt \deg{g}$. Then $f(s)^n = (s+t)^n = 0 + r(s), \ $ for some $r(X) \in R[t][X]$.
Now I'm very confused, but this looks like the right direction. Please work within it if you can, but all answers accepted. Thanks.
|
Let $R[s]$ be fin-gen with monic $f$ and $R[t]$ fin-gen with monic $g$.
$(X + t)^k, k \geq 1$, is a polynomial in $R[t][X]$ . Since $s$ is integral over $R$ with $f$ it's also integral over $R[t]$ with $f$ since $R \subset R[t]$. Then $(X + t)^k = q(X) f(X) + r(X)$ for some $r = 0$ or $\deg r \lt \deg f$. This means that if $h(X) = (X + t)^k$, then $h(s) = (s + t)^k$ and $h(s) = r(s)$ for some $r$. So we have that $p \in R[s+t]$ can be expressed as $p = p_0 + p_1 (s+t) + \dots p_n (s+t)^n = p_0 + p_1 r_1(s, t) + \dots + p_n r_n(s,t)$, each $r_i$ having degree $\leq \deg f + \deg g$, and each of which is a polynomial in $\{s^{i}t^{j} : 0 \leq i,j, i+j \leq \deg f + \deg g \} $. This set is finite and clearly generates $R[s + t]$ over $R$ as an $R$-module.
QED
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Of any 52 integers, two can be found whose difference of squares is divisible by 100 Prove that of any 52 integers, two can always be found such that the difference of their squares is divisible by 100.
I was thinking about using recurrence, but it seems like pigeonhole may also work. I don't know where to start.
|
Look at your $52$ integers $\bmod\ 100$. The condition that $100$ divides the difference of two squares is $a^2\equiv b^2\pmod{100}$, and since $a^2-b^2=(a-b)(a+b)$, it is enough that $100$ divides the product of the sum and the difference of the two numbers.
For example: look at the pairs of residues $(1,99)$, $(2,98)$, $\dots$, $(49,51)$, together with the singletons $\{0\}$ and $\{50\}$. That makes $51$ classes, and within each class any two residues $a,b$ satisfy $a\equiv \pm b\pmod{100}$. Since we have $52$ integers, two of them must fall into the same class, and then $a^2-b^2\equiv 0\pmod{100}$, so the difference of their squares is divisible by $100$.
Likewise, if two integers have difference a multiple of $10$ and sum a multiple of $10$ (for example the pairs $(0,10)$, $(10,20)$, $(20,30)$, etc.), or more generally sum a multiple of $20$ and difference a multiple of $5$, then $(a-b)(a+b)$ is again divisible by $100$.
So the assertion is proved.
But the analysis can be further improved by the Chinese remainder theorem. Every square is congruent to $0$ or $1$ modulo $4$, and there are $11$ distinct squares modulo $25$. By the Chinese remainder theorem, there are only $2\cdot 11=22$ distinct squares modulo $100$. So the $52$ in the problem can be improved to $23$: among any $23$ integers, two have squares in the same residue class modulo $100$.
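The residue count is easy to confirm by machine (a one-off sketch):
```python
squares_mod_100 = {(k * k) % 100 for k in range(100)}
print(len(squares_mod_100))  # 22, so 23 integers already force a repeat
```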
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47",
"answer_count": 7,
"answer_id": 3
}
|
Pullback of a vector bundle on Abelian variety via $(-1)$ Let $A$ be an abelian variety over some field $k$ and $(-1) : A \to A$ is the inverse map of $A$ as an algebraic group. If $V$ is a vector bundle over $A$ what is $(-1)^* V$? In other words, is there a way to describe $(-1)^* V$ in more standart functors? If it is not possible in general, what about special cases of $\operatorname{dim} A=1$ or $\operatorname{rk} V=1$?
For example, for elliptic curves, using Atiyah classification, it is easy to see that if $c_1(V)=0$ for some vector bundle $V$ then $(-1)^*(V) \cong V^*$. Is it true for higher dimensions?
Also, it is very interesting to understand $(-1)_*$, in particular, what is $(-1)_*\mathcal{O}$?
|
For any morphism $f:X \to Y$ of varieties (or schemes), one has $f^*\mathcal O_Y = \mathcal O_X$.
A line bundle of degree $d$ on an elliptic curve is of the form $\mathcal O(D)$ where $D = (d-1)O + P$, where $O$ is the origin and $P$ is a point on the curve. Applying $[-1]^*$ takes this to $\mathcal O(D')$, where $D' = (d-1)O + (-P).$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
1000 Doors Homework Problem I am faced with the following problem as homework: a man has 1000 doors. He opens every door, and then he closes every second door. Then he works on every third door: if it's open, he closes it; if it's closed, he opens it. Then he works on every fourth door, every fifth door, and so on, all the way until the 1000th door. I need to find the number of doors left open and their numbers (Door 1, Door 2, etc.).
Can someone please tell me the first step to solving this and what kind of mathematics I need to use.
|
This is an absolute classic: in the first round he opens all of them, then he closes multiples of 2, then he alters multiples of 3. So in round $j$, door $a$ is opened or closed if and only if $j$ divides $a$. How many times is each door altered? Think about its divisors.
full solution:
a door $j$ is altered the same number of times as the number of divisors it has, so the open doors are the ones with an odd number of divisors. Let $d$ divide $j$, so $j=dk$; then every divisor $d$ corresponds to the unique divisor $k$, and these are distinct unless $d=k$, in which case $j=d^2$. So the doors left open are exactly the perfect squares. Also see: light bulb teaser
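A direct simulation confirming this (my own sketch):
```python
doors = [False] * 1001          # doors[1..1000]; False = closed
for step in range(1, 1001):
    for d in range(step, 1001, step):
        doors[d] = not doors[d]
open_doors = [d for d in range(1, 1001) if doors[d]]
print(len(open_doors), open_doors[:8])  # 31 doors: 1, 4, 9, 16, 25, 36, 49, 64, ...
```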
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Sequence of distinct moments of $X_{n}$ converging to $1$ implies $X_{n}$ converges to $1$ Suppose for $0<\alpha<\beta$ and $X_{n}\geq0$ we have $EX_{n}^{\alpha},EX_{n}^{\beta}\to1$ as $n\to\infty$. Show that $X_{n}\to1$ in probability. In special cases this is pretty clear (for instance, assuming $\alpha\geq1$ and $X_{n}\geq1$). But I don't know how to prove this in the general case. It is interesting because the question appears in a section on weak convergence (convergence in distribution), yet the problem appears to have nothing to do with this.
|
Define $Y_n:=X_n^\alpha$ and $p:=\beta/\alpha\gt 1$. The assumptions give that the sequence $\{Y_n,n\geqslant 1\}$ is tight, so it's enough to prove that each subsequence converges in distribution to the constant $1$.
Take $\{Y_{n'}\}$ a subsequence which converges in distribution to $Y$.
Since $\{Y_n,n\geqslant 1\}$ is bounded in $\mathbb L^{p}$ and $(1+p)/2<p$, the family $\{Y_{n'}^{(1+p)/2}\}$ is uniformly integrable, hence
$$\mathbb EY^{(1+p)/2}=\lim_{n'\to\infty}\mathbb E(Y_{n'}^{(1+p)/2})=1.$$
(The limit is $1$ by squeezing: Jensen's inequality gives $\mathbb E(Y_{n'}^{(1+p)/2})\geqslant (\mathbb E Y_{n'})^{(1+p)/2}$, Cauchy–Schwarz gives $\mathbb E(Y_{n'}^{(1+p)/2})\leqslant (\mathbb E Y_{n'})^{1/2}(\mathbb E Y_{n'}^{p})^{1/2}$, and both bounds tend to $1$.)
We also have that $\mathbb E(Y)=1$, hence by the equality case in Hölder's inequality, $Y=1$ almost surely.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/604921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Another limit of summation Please help - how to solve this:
$$\lim_{n \to \infty}\frac1{n}\sum_{k=0}^{n}{\frac{1}{1+\frac{k^6}{n^3}}} $$
|
It seems the following.
Put $$f(k,n)= \frac{1}{1+\frac{k^6}{n^3}}.$$ For $k> n^{2/3}$ we have $\frac{k^6}{n^3}>n$, so $$\sum_{k=0}^{n} f(k,n)\le \sum_{k\le n^{2/3}}f(k,n) + \sum_{k> n^{2/3}} f(k,n)
\le (n^{2/3}+1)+\frac{n}{1+n}.$$ Hence $$0\le \lim_{n \to \infty}\frac1{n}\sum_{k=0}^{n} f(k,n)\le \lim_{n \to \infty} \frac1{n}\left(n^{2/3}+1+\frac{n}{1+n}\right)=0.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the Area of a rectangle Please help: a pool 20 ft by 30 ft is going to have a deck $x$ feet wide added all the way around the pool. Write an expression in simplified form for the area of the deck. I have tried doing this but have failed; please help.
|
You are on the correct path cris.
The pool is still a rectangle, even with the new deck. Thus you are almost correct.
Each dimension will have $x$ feet added at both ends, so pool plus deck measures $(20+2x)$ by $(30+2x)$:
$(20+2x)(30+2x) = 4x^2+100x+600$
Subtracting the pool's $600$ square feet, the deck alone has area $4x^2+100x$.
Try finding the answer to this:
If there is only 120 Feet of bamboo for the new deck,what is the largest the new area (pool + deck) can be.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Derivative Counterexamples - Calculus I need counterexamples for the following (I guess these claims are not correct):
*
*If $\lim_{n\to \infty} n\cdot (f(\frac{1}{n}) - f(0) ) =0$ then $f$ is differentiable at $x=0$ and $f'(0)=0$.
*If $f$ is defined in a neighborhood of $a$ (including $a$) and differentiable in a neighborhood of $a$ (except maybe at $a$ itself), and $\lim_{x\to a^- } f'(x) = \lim_{x\to a^+} f'(x) $, then $f$ is differentiable at $x=a$.
*If $f$ is differentiable for all $x$ and satisfies $\lim_{x\to \infty } f'(x) =0 $, then there exists a number $L<\infty$ for which $ \lim_{x\to \infty} f(x)= L$.
*If $f$ is differentiable for all $x$ and satisfies $\lim _{x\to \infty} f(x)= L $, then $\lim_{x\to\infty} f'(x)=0 $.
*If $f $ is differentiable at $x=0$ and $\lim_{x\to 0 } \frac{f(x)}{x} =3 $, then $f(0)=0$ and $f'(0)=3$.
Thoughts:
5) I think this claim is correct and follows from the uniqueness of the derivative... I have no idea how to prove it, but it sounds reasonable
3) Isn't a counterexample for this $f(x)=\ln x$?
4) I have tried using some trigonometric functions, but still couldn't manage to find a counterexample
2) I guess that an example for this would be a function whose derivative isn't defined at this point, but whose one-sided limits exist
1) have no idea... It sounds incorrect (although I guess that the other direction of the claim is correct)
Help?
Thanks !
|
For 1, if $\lim_{n\to\infty}n(f(1/n)-f(0))=0$ then we have $\displaystyle\lim_{n\to\infty}\frac{f(\frac{1}{n})-f(0)}{\frac{1}{n}}=0$. It is common to assume that $n$ denotes a natural number but this was not indicated in the problem statement. So assuming the stronger statement (i.e. that the limit is taken for $n\in\mathbb{R}$) this is equivalent to $$\lim_{h\to 0^{+}}\frac{f(h)-f(0)}{h}=0.$$ This does not mean that $\lim_{h\to 0}\frac{f(h)-f(0)}{h}$ exists. For a counterexample think up a piecewise function. Something like the following:
$$f(x)=\left\{\begin{array}{ll}
0& :x\geq0\\h(x)&:x<0
\end{array}\right..$$
Notice that this will satisfy the conditions of 1 regardless of what $h(x)$ is.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Index map defines a bijection to $\mathbb{Z}$? In the book "Spin Geometry" by Lawson and Michelsohn, page 201, Proposition 7.1 (Chapter III), it is asserted that the mapping which assigns to a Fredholm operator from one Hilbert space to another its index ($\dim\ker-\dim\operatorname{coker}$) defines a bijection from the set of connected components of the space of Fredholm operators to $\mathbb{Z}$. There the authors use a series of lemmas to prove this assertion.
However, just think of the simplest example: both Hilbert spaces of finite dimension, with dimensions $m$ and $n$ respectively. Then the index is constantly $m-n$, as we all know. From the proof by the authors, we can only say that the map $\operatorname{ind}$ is injective, and nothing more. Thus the authors made a mistake there.
My question is: if we assume both Hilbert spaces are of infinite dimension (and both separable), is this mapping a bijection? Or is there some reasonable way to remedy this defect?
|
If $H$ has infinite dimension, the shift operator with respect to a basis is Fredholm with index $-1$. As $\operatorname{ind}$ is a group homomorphism from $\text{Fredholms}/\text{Compacts}$ to $\mathbb{Z}$, and $-1$ is a generator of $\mathbb{Z}$, this proves that $\operatorname{ind}$ is indeed surjective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Extended ideals If $R\subset S$ is a ring extension where $1\in R$ and $I$ is an ideal of $R$ is it true that $IS$, the subset of $S$ generated by $I$ is an ideal of $S$? Should we assume $R$ is commutative?
|
It would be sensible to look at an example where $S$ is a simple ring to limit the number of ideals possible in $S$.
Take, for example, the $2\times 2$ upper triangular matrices $R=T_2(\Bbb R)\subseteq M_2(\Bbb R)=S$, and consider the ideal $I$ of $R$ of matrices of the form $\begin{bmatrix}0&x\\0&0\end{bmatrix}$.
Clearly $IS=\{\begin{bmatrix}a&b\\0&0\end{bmatrix}\mid a,b\in\Bbb R \}$ is not an ideal (although it is a right ideal) of $S$.
If $S$ is commutative (or even if $I$ is contained in the center of $S$) you can do the footwork to show that then $IS$ is indeed an ideal of $S$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Index of conjugate subgroup equals the index of subgroup Show that if $G$ is a group and $H$ is its subgroup then $[G:H]=[G:gHg^{-1}]$, $g \in G$.
Attempted solution:
Let $f:G\to\hat{G}$ be a group homomorphism such that $\mbox{Ker}f \subseteq H$; we will try to show that $[G:H]=[f(G):f(H)]$.
Define a map $\phi: xH\mapsto f(x)f(H)$.
Function $\phi$ is injective, as $f(x)f(H)=f(y)f(H)\Rightarrow f(y)^{-1}f(x)f(H)=f(H) \Rightarrow f(y^{-1}x)=f(h_1)\Rightarrow f(y^{-1}xh_1^{-1})=1_{\hat{G}}$ for some $h_1 \in H$. Since $\mbox{Ker}f \subseteq H$ we have $y^{-1}xh_1^{-1}=h_2$ for some $h_2 \in H$. Finally, $xh_1^{-1}=yh_2$ and, hence, $xH=yH$.
Also, $\phi$ is well defined since $xH=yH$ implies $f(xH)=f(yH)\Rightarrow f(x)f(H)=f(y)f(H)$. Surjectivity of $\phi$ is obvious. Since $\phi$ is a bijection, $[G:H]=[f(G):f(H)]$.
To complete the proof we take $f=f_g: G\rightarrow G$, with the rule $x\mapsto gxg^{-1}$, $g\in G$.
Is it correct? Perhaps someone knows more elegant proof?
|
Hint: construct directly a set-theoretic bijection (not a homomorphism) between the cosets of $H$ and those of $gHg^{-1}$.
Fix a $g \in G$ and let $\mathcal{H}=\{xH: x \in G\}$ be the set of left cosets of $H$ and $\mathcal{H}'=\{xH^g: x \in G\}$ the set of left cosets of $H^g:=gHg^{-1}$. Define $\phi:\mathcal{H} \rightarrow \mathcal{H}'$ by $\phi(xH)=x^gH^g$, where $x^g:=gxg^{-1}$. We will show that $\phi$ is well-defined, injective and surjective.
Suppose $xH=yH$; then $y^{-1}x \in H$ and hence $gy^{-1}xg^{-1} \in H^g$. But $gy^{-1}xg^{-1}=(gy^{-1}g^{-1})(gxg^{-1})=(y^{-1})^g x^g=(y^g)^{-1}x^g$. This implies $x^gH^g=y^gH^g$, so $\phi$ does not depend on the particular coset representative; that is, $\phi$ is well-defined.
Now assume $\phi(xH)=\phi(yH)$, so $x^gH^g=y^gH^g$. Then $(y^g)^{-1}x^g \in H^g$. But $(y^g)^{-1}x^g=(gyg^{-1})^{-1}gxg^{-1}=gy^{-1}g^{-1}gxg^{-1}=gy^{-1}xg^{-1}=(y^{-1}x)^g$. It follows that $y^{-1}x \in H$, so $xH=yH$ and $\phi$ is injective.
Finally, pick an arbitrary $xH^g \in \mathcal{H}'$. Then $\phi(x^{g^{-1}}H)=xH^g$, which means that $\phi$ is surjective. This concludes the proof: $\#\mathcal{H}=\#\mathcal{H}'$, that is, $[G:H]=[G:H^g]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Unit circle - how to prevent backward rotation Let's assume we have a unit circle, with angles measured in $[0, 2\pi)$.
Basically I have a point on this circle which is supposed to move only forward. This point is controlled by the user's mouse and recalculated 25 times per second.
For the moment I calculate the new angle (based on the user's mouse position) on this unit circle in order to compare it with the old one and make sure that new_angle > old_angle.
In order to do that, I'm currently using the following function (in degrees):
atan2(mousePosY - unitCircleOriginY, mousePosX - unitCircleOriginX) * 180 / $\pi$
This works pretty well until I reach $2\pi$, because at that point the previous angle is 359 while the new one is 0.
I tried several workarounds without success.
This may look trivial but it's driving me crazy.
|
It is not trivial. As long as you update frequently enough that the user cannot move more than $180$ degrees between updates, you can just decide whether to add or subtract $360$ to bring the change as close to $0$ as possible, so
If (new_angle - old_angle > 180) do not update because real rotation is negative
If (-360 < new_angle - old_angle < -180) update because real rotation is positive
If (0< new_angle - old_angle < 180) update because real rotation is positive
You could skip the first if in your program; I put it in because it helps to see what is happening. If you don't sample often enough, so that the user can move 180 degrees between samples, you are sunk. 25 Hz sounds plenty fast enough.
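A compact way to implement this (a sketch in Python; the names are my own) is to normalize the difference into $[-180, 180)$ first:
```python
def signed_delta(new_angle, old_angle):
    """Smallest signed rotation in degrees taking old_angle to new_angle,
    normalized into [-180, 180)."""
    return (new_angle - old_angle + 180) % 360 - 180

def should_update(new_angle, old_angle):
    # Accept the new angle only if the real rotation was forward (positive).
    return signed_delta(new_angle, old_angle) > 0

print(signed_delta(1, 359))   # 2   -> forward across the 360/0 wrap
print(signed_delta(359, 1))   # -2  -> backward across the wrap
```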
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Let $f : X → Y$ be a continuous closed surjection such that $f^{–1}(y)$ is compact for all $y ∈ Y .$ Let $f : X → Y$ be a continuous closed surjection such that $f^{–1}(y)$ is compact for all $y ∈ Y .$ Suppose that $X$ is Hausdorff. Prove that $Y$ is Hausdorff.
I have that $f$ is a quotient map, but I cannot think of anything useful to do with that. Any help is appreciated.
|
Just take distinct $y, y' \in Y$. Their preimages are disjoint compact subsets of the Hausdorff space $X$ and so can be separated by disjoint open sets $U$ and $V$. The sets $Y \setminus f(X \setminus U)$ and $Y \setminus f(X \setminus V)$ are open (as $f$ is closed), disjoint (as $f$ is surjective and $U \cap V = \emptyset$), and they separate $y$ and $y'$ in $Y$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Borel function which is not continuous (in every point) Give example function $f: \mathbb{R} \rightarrow \mathbb{R}$ which $\forall x \in \mathbb{R}$ is not continuous function but is Borel function.
I think that I can take $$f(x) = \begin{cases} 1 & x \in \mathbb{R} \setminus \mathbb{Q} \ \\ -1 & x \in \mathbb{Q} \end{cases} $$
Am I right? How can I quickly prove that it is a Borel function?
|
Yes, you're correct. To show that it's a Borel function, begin by showing that if $\mathcal{O} \subseteq \mathbb{R}$ is an open set, then $f^{-1}(\mathcal{O})$ is one of exactly four possible sets: $\emptyset, \mathbb{Q}$, $\mathbb{R} \setminus \mathbb{Q}$, and $\mathbb{R}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to verify $y_c(x)+y_p(x)$ is a solution to the differential equation? I am given a nonhomogeneous differential equation:
$$y''+4y'+3y=g(x)$$
where $g(x)=3 \sin 2x$.
After working through the problem, I have
$$y_c(x)=C_1e^{-3x}+C_2e^{-x}$$
(I was to find a general solution for which $g(x)=0$)
$$y_p(x)=-(24/65) \cos 2x-(3/65) \sin 2x. $$
(On this part, I was given $y_p(x)=A \cos 2x+ B \sin 2x$)
Now I'm stuck. How do I verify that $y_c(x)+y_p(x)$ is a solution to the differential equation?
Thanks!
|
Well, your solution $y_{c}(x)$ satisfies the problem $y'' + 4y' + 3y = 0$ and $y_{p}(x)$ satisfies the problem $y'' + 4y'+ 3y = g(x)$. So, $(y_{c}+y_{p})'' + 4(y_{c}+y_{p})' + 3(y_{c}+y_{p}) = [y_{c}'' + 4y_{c}' + 3y_{c}] + [y_{p}'' + 4y_{p}' + 3y_{p}] = 0 + g(x) = g(x)$. Hence, $y(x)$ satisfies the ODE. Note that derivatives of any order are additive; that is $(y_{1}+y_{2})^{(n)}(x) = y_{1}^{(n)}(x) + y_{2}^{(n)}(x)$.
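For what it's worth, the same check can be carried out symbolically (a SymPy sketch):
```python
import sympy as sp

x = sp.symbols('x')
C1, C2 = sp.symbols('C1 C2')
y = (C1 * sp.exp(-3 * x) + C2 * sp.exp(-x)
     - sp.Rational(24, 65) * sp.cos(2 * x) - sp.Rational(3, 65) * sp.sin(2 * x))

# Plug y = y_c + y_p into the left-hand side and subtract g(x); should vanish.
residual = sp.simplify(sp.diff(y, x, 2) + 4 * sp.diff(y, x) + 3 * y
                       - 3 * sp.sin(2 * x))
print(residual)  # 0
```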
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/605968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Double harmonic sum $\sum_{n\geq 1}\frac{H^{(p)}_nH_n}{n^q}$ Are there any general formula for the following series
$$\tag{1}\sum_{n\geq 1}\frac{H^{(p)}_nH_n}{n^q}$$
Where we define
$$H^{(p)}_n= \sum_{k=1}^n \frac{1}{k^p}\,\,\,\,\,H^{(1)}_n\equiv H_n =\sum_{k=1}^n\frac{1}{k} $$
For the special case $p=q=2$ in (1) I found a paper stating that
$$\sum_{n\geq 1}\frac{H^{(2)}_nH_n}{n^2}=\zeta(5)+\zeta(2)\zeta(3)$$
See equation (3a) .
Is there any other paper in the literature discussing (1) or any special cases ?
|
Here we provide a generating function for the quantities in question. Let us define:
\begin{equation}
{\bf H}^{(p,r)}_q(t) := \sum\limits_{m=1}^\infty H_m^{(p)} H_m^{(r)} \frac{t^m}{m^q}
\end{equation}
Here we take $q\ge1$. We have:
\begin{eqnarray}
&&{\bf H}^{(p,1)}_q(t) = Li_p(1) \cdot \frac{1}{2} [\log(1-t)]^2 \cdot 1_{q=1}+\\
&&\frac{(-1)^{q}}{2} \sum\limits_{l=(q-2)}^{p+q-3} \left(\binom{l}{q-2} 1_{l < p+q-3} + ({\mathcal A}^{(p)}_{q-2}) 1_{l=p+q-3}\right) \cdot \underbrace{\int\limits_0^1 \frac{[Li_1(t \xi)]^2}{\xi} Li_{l+1}(\xi) \frac{[\log(1/\xi)]^{p+q-3-l}}{(p+q-l-3)!}d\xi}_{I_1}+\\
&& \frac{(-1)^{q-1}}{2} \sum\limits_{j=0}^{q-3} \left({\mathcal A}^{(p)}_{q-2-j}\right) \cdot \zeta(p+q-2-j) \underbrace{\int\limits_0^1 \frac{[Li_1(t \xi)]^2}{\xi} \frac{[\log(\xi)]^j}{j!} d\xi}_{I_2}+\\
&& \sum\limits_{l=1}^p \underbrace{\int\limits_0^1 \frac{Li_q(t \xi)}{\xi} Li_l(\xi) \frac{[\log(1/\xi)]^{p-l}}{(p-l)!} d\xi}_{I_3}
\end{eqnarray}
Here $t\in (-1,1)$ and $p=1,2,\cdots$ and
\begin{equation}
{\mathcal A}^{(p)}_{q} := p+\sum\limits_{j=2}^{q} \binom{p+j-2}{j}
= p \cdot 1_{p=1} + \frac{p+q-1}{p-1} \binom{p+q-2}{q}\cdot 1_{p > 1}
\end{equation}
Note 1: The quantities on the right hand side all contain products of poly-logarithms and a power of a logarithm. Those quantities have, in principle, already been dealt with in "An integral involving product of poly-logarithms and a power of a logarithm", for example.
Note 2: Now that we have the generating functions we will find recurrence relations for the sums in question and hopefully provide some closed form expressions.
Now we have:
\begin{eqnarray}
&&I_1 =\\
&&\sum\limits_{l_1=2}^{l+1} \binom{p+q-2-l_1}{l+1-l_1} (-1)^{l+1-l_1} \zeta(l_1) \left({\bf H}^{(1)}_{p+q-l_1}(t) - Li_{p+q+1-l_1}(t) \right)+\\
&&\sum\limits_{l_1=2}^{p+q-2-l} \binom{p+q-2-l_1}{l} (-1)^{l-1} \zeta(l_1) \left({\bf H}^{(1)}_{p+q-l_1}(t) - Li_{p+q+1-l_1}(t) \right)+\\
&&\sum\limits_{l_1=1}^{p+q-2-l} \binom{p+q-2-l_1}{l} (-1)^{l-0} \left( {\bf H}^{(l_1,1)}_{p+q-l_1}(t) - {\bf H}^{(l_1)}_{p+q+1-l_1}(t) \right)
\end{eqnarray}
and
\begin{eqnarray}
&&I_2=2 (-1)^j \left[ {\bf H}^{(1)}_{j+2}(t) - Li_{j+3}(t)\right]
\end{eqnarray}
and
\begin{eqnarray}
&&I_3=\\
&&\sum\limits_{l_1=2}^l \binom{p-l_1}{p-l}(-1)^{l-l_1} \zeta(l_1) Li_{p+q+1-l_1}(t) +\\
&& \sum\limits_{l_1=2}^{p-l+1} \binom{p-l_1}{l-1}(-1)^{l} \zeta(l_1) Li_{p+q+1-l_1}(t)-
\sum\limits_{l_1=1}^{p-l+1} \binom{p-l_1}{l-1} (-1)^l {\bf H}^{(l_1)}_{p+q+1-l_1}(t)
\end{eqnarray}
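Independently of the formulas above, the special case quoted in the question, $\sum_{n\ge1}H_n^{(2)}H_n/n^2=\zeta(5)+\zeta(2)\zeta(3)$, is easy to support numerically (a rough sketch; the chosen cutoff leaves an error of order $(\log N)/N$):
```python
import math

zeta2 = math.pi**2 / 6
zeta3 = 1.2020569031595943   # Apery's constant
zeta5 = 1.0369277551433699

H1 = H2 = s = 0.0
for n in range(1, 300001):
    H1 += 1 / n              # harmonic number H_n
    H2 += 1 / n**2           # generalized harmonic number H_n^{(2)}
    s += H1 * H2 / n**2
print(s, zeta5 + zeta2 * zeta3)  # both ~ 3.01423, agreeing to ~4 decimals
```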
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
}
|
Epsilon delta proof min http://www.milefoot.com/math/calculus/limits/DeltaEpsilonProofs03.htm
I've been studying these épsilon delta proofs. In the non-linear case, he gets:
$$\delta=\min\left\{5-\sqrt{25-\dfrac{\epsilon}{3}},-5+\sqrt{25+\dfrac{\epsilon}{3}}\right\}$$
Well, I know that these $\delta$ are not negatives of each other, but it has been shown that $x$ must be within the range covered by these two deltas. I have already bounded $x-a$ (in this case, $x-5$) in terms of $\epsilon$, so it should work that for any given $\epsilon$ I could take only $-5+\sqrt{25+\dfrac{\epsilon}{3}}$. Why do I have to take the minimum?
|
It is because $\delta$ has to be acceptable in the worst case. Say we are proving $\lim_{x \to 0} f(x)=L$ and for (the given) $\epsilon$ we are within $\epsilon$ for all $x \in (-1,0.1)$. The definition of limit is symmetric: it says whenever $x$ is within $\delta$ of $0$, then $|f(x)-L|\lt \epsilon$, so we have to shrink the interval to make it symmetric; our answer should be $\delta=0.1$, i.e. $x \in (-0.1,0.1)$. This sounds restrictive, but it is not. One can prove that symmetric limits lead to the same thing as asymmetric limits, and every interval includes a symmetric interval.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
If $f(x)=\chi_{(0,\infty)}\exp(-1/x)$, show that $f\in C^{\infty}$. Define the function $f:\mathbb{R}\to\mathbb{R}$ as follow:
$f(x)=\chi_{(0,\infty)}\exp(-1/x)$
In other words: $f(x)=0$ if $x\le 0$, and $f(x)=\exp(-1/x)$ if $x>0$.
Show that $f\in C^{\infty}$.
So I think I want to show the $n$th derivative $f^{(n)}$ is continuously differentiable for all $n$. I might want to proceed by induction. The only point we need to worry about is $x=0$, since for $x$ outside a neighborhood of it, $f$ clearly satisfies the property. So I want to show $f^{(n)}(0)$ exists for all $n$. That is, $\lim_{d\to0}\frac{f^{(n-1)}(d)-f^{(n-1)}(0)}{d}=0$. Maybe I can find a closed form for the $n$th derivative at $0$?
|
Just note that taking derivatives (by the definition, not by the algorithm) always leaves you with terms of the form $\frac{1}{P(x)} \cdot e^{-1/x}$ for $x>0$. Then use that the exponential grows faster than any polynomial at infinity, noting that as $x\to 0^+$, we have $\frac1{x}\to \infty$, so $e^{-1/x}$ decays faster than any power of $x$. More explicitly, for $f'(0)$,
$$\lim_{x\to 0^+}\frac{e^{-1/x}}{x} = 0.$$ This way you can find a formula for $f'(x)$ quite easily, using the normal technique for $x>0$ and noting it is zero elsewhere. Then repeat for higher derivatives.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the particular solution for $ y''+2y'=2x+5-e^{-2x}$? Why is the particular solution for $y''+2y'=2x+5-e^{-2x}$ the following?
$$y_p = Ax^2 + Bx + Cxe^{-2x}$$
Shouldn't it be $y_p = Ax + B + Cxe^{-2x}$?
Anything of degree one should give a trial solution of the form $Ax + B$, and $2x+5$ has degree one, not two... I just don't get it.
|
Hint:
The particular solution is of the form:
$$y_p = a x + b x^2 + c x e^{-2x}$$
We have to take $a + b x$ and multiply it by $x$, and multiply $e^{-2x}$ by $x$, because the homogeneous solution already contains a constant and also contains $e^{-2x}$.
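To spell out where this comes from (a standard argument, filling in the hint): the characteristic equation of $y''+2y'=0$ is
$$r^2+2r=0 \quad\Longrightarrow\quad r=0,\ -2,$$
so the homogeneous solution is $y_h=c_1+c_2e^{-2x}$. The naive trial $Ax+B$ contains a constant, which duplicates the root $r=0$, and the trial $Ce^{-2x}$ duplicates the root $r=-2$; multiplying each offending part by $x$ yields exactly $y_p=ax+bx^2+cxe^{-2x}$.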
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Reasoning behind the cross products used to find area Alright, so I do not have any issues with calculating the area between two vectors. That part is easy. Everywhere that I looked seemed to explain how to calculate the area, but not why the cross product is used instead of the dot product.
I was hoping math.se could explain this, it has been irking me too much. I understand the role of the cross product in torque and magnetism, but this one escapes me.
|
I think the signed part of area is the most difficult to assign some intuitive meaning.
Consider two vectors in $\mathbb{R}^2$, and let $A : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$ be the signed area. Then $A$ should be linear in each variable separately, since we should have $A(\lambda x,y) = \lambda A(x,y)$, and $A(x_1+x_2,y) = A(x_1,y)+A(x_2,y)$ (sketching the parallelograms with common side $y$ makes this plausible):
The area between a vector and itself should be zero (that is $A(z,z) = 0$ for all $z$), so the linearity requirement gives $A(x+y,x+y) = A(x,x)+A(x,y)+A(y,x) + A(y,y) = A(x,y)+A(y,x) = 0$, and so $A(x,y) = -A(y,x)$.
Then $A(x,y) = A(\sum_i x_i e_i, \sum_j y_j e_j) = \sum_i \sum_j x_i y_j A(e_i,e_j) = (x_1 y_2 -x_2y_1 )A(e_1,e_2)$.
It seems reasonable to assign an area of one to the square spanned by $e_1,e_2$, hence $A(e_1,e_2) = 1$, which gives $A(x,y) = x_1 y_2 -x_2y_1 $ (which, of course, equals $\det \begin{bmatrix} x & y \end{bmatrix}$).
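As a quick numerical illustration (my addition): for $x=(2,1)$ and $y=(1,3)$,
$$A(x,y)=2\cdot 3-1\cdot 1=5,$$
the area of the parallelogram spanned by $x$ and $y$; swapping the arguments gives $-5$, which is where the sign of the signed area shows up.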
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
}
|
Properties of the relation $R=\{(x,y)\in\Bbb R^2|x-y\in \Bbb Z\}$
$A= \Bbb R \\
R=\{(x,y)\in\Bbb R^2|x-y\in \Bbb Z\}$
Determine if the relation is (a) reflexive, (b) symmetric, (c) transitive, (d) anti-reflexive, (e) anti-symmetric, (f) asymmetric, (g) an equivalence relation.
This is what I did but I'm not sure:
It is reflexive: $\forall x:(x,x)\in R:x-x=0\in\Bbb Z$
It is symmetric: $\forall x,y\in\Bbb R:xRy\in\Bbb Z\Rightarrow yRx\in\Bbb Z$
It is transitive: $\forall a,b,c\in \Bbb R:(aRb \ and \ bRc)\in \Bbb Z \Rightarrow aRc $
It isn't anti-reflexive: $(1,1)\in R$
It is anti-symmetric: $aRa\in \Bbb Z \Rightarrow a=a$
It isn't asymmetric because it is symmetric.
It is an equivalence relation.
Is it correct? Thanks.
|
Most of your answers are correct, but the justifications given are a little confusing. In general, you should offer a genuine proof. For example:
It is reflexive.
Proof. Let $x \in \mathbb{R}$ be fixed but arbitrary. Then $x-x=0$. Thus $x-x \in \mathbb{Z}.$ So $xRx.$
Anyway, your answers for "reflexive", "symmetric" and "transitive" are correct.
The claim that $R$ is anti-symmetric is incorrect. Observe that $0R1$ and $1R0$, but it does not follow that $0=1$.
Also, if a relation on a non-empty domain is reflexive, then it's not anti-reflexive (exercise!). So that answer is also correct. Along a similar vein, the only relation that is both symmetric and asymmetric is the always-false relation. But since $0R0$, the given relation $R$ is not always false. So it cannot be asymmetric. Therefore, that answer is also correct.
Edit. By the way, defining $R$ via set-builder notation is imo confusing. I would suggest defining $R$ as the unique subset of $\mathbb{R}^2$ such that:
$$\forall x,y \in \mathbb{R} : xRy \;\leftrightarrow\;x-y \in \mathbb{Z}.$$
From the above form, it is obvious that any time $xRy$ is written down, we may deduce $x-y \in \mathbb{Z}$, and any time $x-y \in \mathbb{Z}$ is written down, we may deduce $xRy$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Real polynomial in two variables I have problems proving the following result:
Every $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ such that for all $a,b \in \mathbb{R}$ the functions $f_a(y) := f(a,y)$ and $f_b(x) := f(x,b)$ are polynomials is itself a polynomial in two variables.
If I consider $f$ as a function of $x$, then its derivative $f'(x) = \frac{\partial f}{\partial b}(x)$. Similarly if we treat $f$ as a function of $y$.
I assume that $f_a(y) = \frac{\partial f}{\partial a} (y) $ and $f_b(x)=\frac{\partial f}{\partial b}(x)$.
But I am not sure if we can assume that, because the degree of the derivative should be smaller than the degree of the original function (and it isn't).
Actually, I'm not even sure if what I'm trying to prove is true, because in the original formulation of the problems there is written $f_a(y) := (a,y), \ f_b(x):=(x,b)$. But that didn't make sense.
Could you help me here?
Thank you.
|
Set $f(\cdot, y):x\mapsto f(x,y)$. Since for each fixed $x$ the function $f(x,\cdot)$ is continuous, when $y_n\to y$ the polynomials $f(\cdot, y_n)$ converge pointwise everywhere to $f(\cdot,y)$. Hence, for fixed $N$, the sets
$\{y \mid \deg f(\cdot, y)\leq N \}$ are closed. By Baire there exist $N$ and an open set $Y\subset \Bbb R$ such that $\deg f(\cdot, y)\leq N$ for all $y\in Y$. Interchanging the roles of $x$ and $y$ we obtain $M$ and an open set $X\subset \Bbb R$ with the analogous property. Fix arbitrarily $x_0<\dots <x_N$ in $X$ and $y_0<\dots <y_M$ in $Y$, and take the polynomial $g$ of bi-degree $(N,M)$ which equals $f$ at the $(N+1)(M+1)$ points $(x_j,y_k)$; such a $g$ exists by tensor-product Lagrange interpolation. Set $h=f-g$; we claim $h\equiv 0$. First, for each $j$ the polynomial $h(x_j,\cdot)$ has degree $\leq M$ and vanishes at the $M+1$ points $y_0,\dots,y_M$, hence vanishes identically. Consequently, for any $y\in Y$, $h(\cdot, y)$ has degree $\leq N$ and vanishes at $x_0,\dots , x_N$, so it vanishes identically. Thus, for every $x\in\Bbb R$, the polynomial $h(x,\cdot)$ vanishes on the open set $Y$ and hence identically. This means $h\equiv 0$, i.e. $f=g$ is a polynomial.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
}
|
Exercise 3.3.25 of Karatzas and Shreve This is Exercise 3.3.25 of Karatzas and Shreve, on page 163.
With $W=\{W_t, \mathcal F_t; 0\leq t<\infty\}$ a standard, one-dimensional Brownian motion and $X$ a measurable, adapted process satisfying
$$E\int_0^T|X_t|^{2m}dt<\infty$$
for some real numbers $T>0$ and $m\geq1$, show that
$$E\left|\int_0^TX_tdW_t\right|^{2m}\leq(m(2m-1))^mT^{m-1}E\int_0^T|X_t|^{2m}dt$$
(Hint: Consider the martingale $\{M_t=\int_0^tX_sdW_s, \mathcal F_t; 0\leq t\leq T\}$, and apply Ito's rule to the submartingale $|M_t|^{2m}$.)
By the hint, using Ito's rule I get
$$E|M_T|^{2m}=m(2m-1)E\int_0^T|M_t|^{2m-2}dt$$
I don't know how to continue. I tried to use Holder's inequality, but failed.
Thanks very much!
|
First of all, your calculation is not correct. Itô's formula gives
$$\begin{align*} M_T^{2m} &= 2m \cdot \int_0^T M_t^{2m-1} \, dM_t + m \cdot (2m-1) \cdot \int_0^T M_t^{2m-2} \, d \langle M \rangle_t \\ \Rightarrow \mathbb{E}(M_T^{2m}) &= m \cdot (2m-1) \cdot \mathbb{E} \left( \int_0^T M_t^{2m-2} \cdot X_t^2 \, dt \right)\tag{1} \end{align*}$$
where we used in the second step that the stochastic integral is a martingale and $d\langle M \rangle_t = X_t^2 \, dt$. Note that $(M_t^{2m-2})_{t \geq 0}$ is a submartingale; therefore we find by the tower property
$$\mathbb{E}(M_t^{2m-2} \cdot X_t^2) \leq \mathbb{E}\big(\mathbb{E}(M_T^{2m-2} \mid \mathcal{F}_t) \cdot X_t^2\big) = \mathbb{E}(M_T^{2m-2} \cdot X_t^2). \tag{2}$$
Hence, by $(2)$ and Fubini's theorem,
$$\mathbb{E} \left( \int_0^T M_t^{2m-2} \cdot X_t^2 \, dt \right) \leq \mathbb{E} \left( M_T^{2m-2} \cdot \int_0^T X_t^2 \, dt \right).$$
By applying Hölder's inequality (for the conjugate coefficients $p=\frac{2m}{2m-2}$, $q=m$), we find
$$\mathbb{E} \left( \int_0^T M_t^{2m-2} \cdot X_t^2 \, dt \right) \leq \left[\mathbb{E} \left( M_T^{2m} \right) \right]^{1-\frac{1}{m}} \cdot \left[ \mathbb{E} \left( \int_0^T X_t^2 \, dt \right)^m \right]^{\frac{1}{m}}. \tag{3}$$
Combining $(1)$ and $(3)$ yields,
$$\bigg(\mathbb{E}(M_T^{2m}) \bigg)^{\frac{1}{m}} \leq m \cdot (2m-1) \cdot \left[ \mathbb{E} \left( \int_0^T X_t^2 \, dt \right)^m \right]^{\frac{1}{m}}.$$
Finally, by Jensen's inequality,
$$\left(\int_0^T X_t^2 \, dt \right)^m \leq T^{m-1} \cdot \int_0^T X_t^{2m} \, dt.$$
Raising the inequality for $\big(\mathbb{E}(M_T^{2m})\big)^{1/m}$ to the $m$-th power and taking expectations in Jensen's bound yields the claimed estimate; this finishes the proof.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Matrices such that $A^2=A$ and $B^2=B$ Let $A,B$ be two matrices of $M(n,\mathbb{R})$ such that $$A^2=A\quad\text{and}\quad B^2=B$$
Then $A$ and $B$ are similar if and only if $\operatorname{rk}A = \operatorname{rk}B$.
The first implication is pretty easy because the rank is an invariant under matrix similarity. But the second one is kind of baffling me. I thought of reasoning about linear mappings instead of matrices. My reasoning was, basically, that if we consider the matrix as a linear mapping with respect to the canonical basis ($T(v)$ for the matrix $A$, $L(v)$ for the matrix $B$) then we have $$T(T(v))=T(v)\quad\text{and}\quad L(L(v))=L(v)$$
for all $v \in V$. Then the mapping must be either the $0$ function or the identity function (if this were the case, then the rest of the demonstration would be easy). But I soon realised that equating the arguments of the function (i.e. concluding $T(v)=v$ from $T(T(v))=T(v)$) doesn't work in general.
Thanks in advance for your help.
|
Your way of thinking is very good.
Hint: If $L:V\to V$ is an idempotent linear transformation ($L^2=L$) then $$V=\ker L\oplus{\rm im\,}L\,.$$
Use the decomposition $v=(v-Lv)+Lv$.
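To see how the hint finishes the argument (a sketch going slightly beyond the hint): on ${\rm im}\,L$ the map acts as the identity, since $L(Lv)=L^2v=Lv$, while on $\ker L$ it is zero. In a basis adapted to the decomposition, $L$ is therefore represented by
$$\begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}, \qquad r=\operatorname{rk} L,$$
so any two idempotents of the same rank are similar to the same block matrix, hence to each other.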
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Median and Mean of Sum of Two Exponentials I have a cumulative distribution function:
$$G(x) = -ae^{-xb} - ce^{-xd}+h$$
The associated probability density function is:
$$g(x) = abe^{-xb} + cde^{-xd}$$
My problem concerns $x\ge 0$, $x \in \mathbb{R}$.
I know that the mean (expected value) of $x$ can be computed by:
\begin{align}
E[x] &= \int_0^{\infty} x g(x)~dx \\
&=\int_0^{\infty}xabe^{-xb}~dx~+~\int_0^{\infty}xcde^{-xd}~dx\\
&=\frac{a}{b} + \frac{c}{d}
\end{align}
The median is when $G(x) = 0.5$. This requires finding the roots of
$$
0 = -ae^{-xb} - ce^{-xd}+h-0.5
$$
Based on the Abel–Ruffini theorem I know that there are no general solutions to this problem.
My question relates to the component exponential decay equations contained in $G(x)$:
\begin{align}
F(x) &= ae^{-xb}\\
J(x) &= ce^{-xd}
\end{align}
The mean of $F(x)$ is $\frac{a}{b}$ and median $\frac{a\ln2}{b}$. The mean of $J(x)$ is $\frac{c}{d}$ and median $\frac{c\ln2}{d}$.
MY QUESTION:
The ratio of the mean of $g(x)$ to the mean of $F(x)$ is
$$\left(\frac{a}{b} + \frac{c}{d}\right) \div \frac{a}{b}$$
But is the ratio of the MEDIAN of $g(x)$ TO THE MEDIAN of $F(x)$ the same? I am assuming not, because there is no generic formula for the median of $g(x)$. But I don't know how to prove this, so I am not sure.
|
Your intuition is correct: the ratio of mean to median of a random variable $X$ with density of the shape $abe^{-bx}+cde^{-dx}$ is not always the same as the ratio of mean to median of an exponentially distributed random variable. (The latter ratio, as your post pointed out, is $\frac{1}{\ln 2}$.)
To show this, it is enough to give an example. Let $X$ have density function $g(x)=\frac{1}{2}\left(e^{-x}+2e^{-2x}\right)$ (for $x\gt 0$). The mean of $X$ is $\frac{3}{4}$. To compute the median, we solve the equation
$$\frac{1}{2}(2-e^{-m}-e^{-2m})=\frac{1}{2}.$$
This is one of the rare cases where we can compute explicitly: $m=\ln\left(\frac{1+\sqrt{5}}{2}\right)$. The golden section strikes again.
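A quick numerical check of this example (my addition; plain Python):

```python
import math

# Density g(x) = (exp(-x) + 2*exp(-2x)) / 2 for x > 0
mean = 3 / 4                                  # computed above
median = math.log((1 + math.sqrt(5)) / 2)     # ~ 0.4812
print(mean / median)                          # ~ 1.5586
print(1 / math.log(2))                        # ~ 1.4427, the exponential's mean/median ratio
```

The two printed ratios differ, confirming that the mean-to-median ratio of the mixture is not the exponential ratio $1/\ln 2$.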
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How many times are the hands of a clock at $90$ degrees.
How many times are the hands of a clock at right angle in a day?
Initially, I worked this out to be $2$ times every hour. The answer came to $48$.
However, in the cases of $3$ o'clock and $9$ o'clock, right angles happen only once.
So the answer came out to be $44$.
Is the approach correct?
|
Yes, but a more “mathematical” approach might be this: In a 12 hour period, the minute hand makes 12 revolutions while the hour hand makes one. If you switch to a rotating coordinate system in which the hour hand stands still, then the minute hand makes only 11 revolutions, and so it is at right angles with the hour hand 22 times. In a 24 hour day you get 2×22=44.
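To double-check the count (my elaboration; the arithmetic follows the rotating-frame idea above): the minute hand gains $360°-30°=330°$ per hour on the hour hand, and the hands are at right angles whenever the relative angle is $90°$ or $270°$ modulo $360°$, i.e. every $\frac{180}{330}=\frac{6}{11}$ hours. In $24$ hours this happens $24\div\frac{6}{11}=44$ times.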
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 9,
"answer_id": 2
}
|
Try to solve the following differential equation: $y''-4y=2\tan2x$ I am trying to solve this equation:
$y''-4y=2\tan2x$
the Homogeneous part is:
$$y_h=c_1e^{2x}+c_2e^{-2x}$$
and according to the formula I get:
$$C_1'e^{2x}+C_2'e^{-2x}=0$$
$$2C_1'e^{2x}-2C_2'e^{-2x}=2\tan2x$$
my questions are:
* is $y_h$ right?
* how can I find $c_1$, $c_2$?
thanks
|
What you have done so far is correct. You should proceed as follows:
Write the last two equations (after dividing the second one by $2$) as a system
$$\begin{pmatrix}
e^{2x} & e^{-2x} \\
e^{2x} & -e^{-2x}
\end{pmatrix}
\begin{pmatrix} C_{1}^{\prime} \\ C_{2}^{\prime} \end{pmatrix}
=
\begin{pmatrix} 0 \\ \tan(2x) \end{pmatrix}$$
Invert the matrix on the LHS to get
$$\begin{pmatrix} C_{1}^{\prime} \\ C_{2}^{\prime} \end{pmatrix}
= -\frac{1}{2}
\begin{pmatrix}
-e^{-2x} & -e^{-2x} \\
-e^{2x} & e^{2x}
\end{pmatrix}
\begin{pmatrix} 0 \\ \tan(2x) \end{pmatrix}
= -\frac{1}{2}
\begin{pmatrix} -e^{-2x}\tan(2x) \\ e^{2x}\tan(2x) \end{pmatrix}$$
This gives you a system of first-order equations which you can solve by integrating. However, it's not a particularly nice integral.
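As a numerical sanity check (my addition, not part of the original answer): the sketch below builds the particular solution by quadrature from the $C_1'$, $C_2'$ computed above and verifies the ODE at one point via a finite-difference second derivative. The evaluation point and tolerances are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

tol = dict(epsabs=1e-12, epsrel=1e-12)
C1 = lambda x: quad(lambda t: 0.5 * np.exp(-2 * t) * np.tan(2 * t), 0, x, **tol)[0]
C2 = lambda x: quad(lambda t: -0.5 * np.exp(2 * t) * np.tan(2 * t), 0, x, **tol)[0]
yp = lambda x: C1(x) * np.exp(2 * x) + C2(x) * np.exp(-2 * x)

x, h = 0.3, 1e-3           # keep x away from the poles of tan(2x)
ypp = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h ** 2
print(ypp - 4 * yp(x))     # should be close to ...
print(2 * np.tan(2 * x))   # ... the right-hand side, about 1.3683
```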
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Negative exponential distance Let $X := \left\{(a_k)_{k \in \mathbb N}, a_k \in \mathbb C\right\}$. Let $d\left( (a_k)_{k \in \mathbb N}, (b_k)_{k \in \mathbb N} \right) := e^{-u}$ with $u$ the smallest integer $k$ such that $a_k \ne b_k$ be a distance on $X$. Is $(X, d)$ compact/complete/connected?
Here's my not very rigorous take: $X$ is not complete since it's not even a normed space (I don't think there exists a norm that induces $d$). It is not compact since $\left((1, \dots ), (2, \dots), (3, \dots), \dots\right)$ is a sequence of elements of $X$ that does not have a convergent subsequence. I can't find an argument for connectedness; any ideas?
|
You are correct that $X$ is not compact, by exactly the example you mention.
On the other hand, $X$ is in fact complete. Observe that if $d((a_k), (b_k)) < e^{-n}$, then $a_i = b_i$ for all $i \leq n$. It follows that if $(\mathbf{a}_n)_{n\in \mathbb{N}} = ((a_{n,k})_{k\in\mathbb{N}})_{n\in \mathbb{N}}$ is a Cauchy sequence, then for each $k$, the sequence $(a_{n,k})_{n\in\mathbb{N}}$ is eventually constant (call its eventual value $b_k$). One can then show that the sequence $(b_k)_{k\in \mathbb{N}}$ is the limit of $(\mathbf{a}_n)_{n\in \mathbb{N}}$.
Finally, $X$ is not connected. Indeed, $d$ is an ultrametric, so two $\varepsilon$-balls are either equal or disjoint; hence for any $\mathbf{a} = (a_k)_{k\in \mathbb{N}}$, the $\varepsilon$-ball $B$ centered at $\mathbf{a}$ is disjoint from the union of the $\varepsilon$-balls centered at every $\mathbf{b} \not \in B$. Both sets are open, so it follows that $X$ is not connected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/606953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove that in an impartial Game, the P-Positions all have Sprague-Grundy Value =0 I'm looking at some work with Combinatorial Game Theory and I have currently got:
(P-Position is previous player win, N-Position is next player win)
Every Terminal Position is a P-Position,
For every P-Position, any move will result in a N-Position,
For every N-Position, there exists a move to result in a P-Position.
These I am okay with; the problem comes when working with the Sprague-Grundy function
$$g(x)=\operatorname{mex}\{g(y):y \in F(x)\},$$
where $F(x)$ is the set of positions reachable from $x$ and $\operatorname{mex}$ is the minimum excluded natural number.
I can see every terminal position has SG value $0$: these have $x=0$, and $F(0)$ is the empty set.
The problem comes in trying to find a way to prove the remaining two conditions for positions; can anyone give me a hand with these?
|
If you can win a given game, you can use the following strategy: always move to a position of Sprague-Grundy value $0$; your opponent is then forced to move to a non-zero position, and you reply with a move to a $0$-position again. (This is possible because, by the definition of mex, a position of value $0$ has no move to another $0$-position, while a position of non-zero value always has a move to a $0$-position.) Eventually you reach a terminal position, because games end in a finite number of turns by definition. Since a terminal position always has value $0$, this means you made the last move, so you won.
If a game is non-zero, the next player wins using that strategy (he begins by moving to a $0$-position, and so on). On the other hand, if the game is $0$, the next player can only move to a non-zero position, and therefore the previous player can win with the strategy stated above. This is exactly the correspondence you want: the positions of value $0$ are the P-positions.
You can also find this, along with other results that might be useful for you, in the first chapter of my Master's Thesis (https://upcommons.upc.edu/bitstream/handle/2117/78323/memoria.pdf)
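To make the mex recursion concrete, here is a small computational sketch (my own illustration, not from the thesis). It computes Grundy values for a hypothetical subtraction game in which a move removes 1, 2 or 3 tokens, and lists the positions of Grundy value 0:

```python
def mex(s):
    """Minimum excluded non-negative integer."""
    g = 0
    while g in s:
        g += 1
    return g

N = 20
grundy = [0] * (N + 1)          # position 0 is terminal, value 0
for x in range(1, N + 1):
    moves = {grundy[x - k] for k in (1, 2, 3) if x - k >= 0}
    grundy[x] = mex(moves)

# P-positions are exactly the x with grundy[x] == 0:
print([x for x in range(N + 1) if grundy[x] == 0])
```

Here the zero-value positions come out as the multiples of $4$, matching the classical analysis of this subtraction game.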
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/607044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove $\sin^2\theta + \cos^2\theta = 1$ How do you prove the following trigonometric identity: $$ \sin^2\theta+\cos^2\theta=1$$
I'm curious to know of the different ways of proving this depending on different characterizations of sine and cosine.
|
$$\large \sin^2\theta + \cos^2\theta
=\sin\theta\sin\theta+\cos\theta\cos\theta
=\cos(\theta-\theta)
=\cos0
=1$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/607103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 16,
"answer_id": 10
}
|
Find $f$ if $f(f(x))=\sqrt{1-x^2}$ Find $f$ such that $f(f(x))=\sqrt{1-x^2}$ and $[-1, 1] \subseteq \operatorname{Dom}(f)$.
Please give both real and complex functions. Can $f$ be continuous or not (if $f$ is real)?
|
I guessed that $f$ is of the form $\sqrt{ax^2+b}$. Then $f(f(x)) = \sqrt{a^2x^2 + (a+1)b}$, using $\frac{a^2-1}{a-1}=a+1$. From here on in, it is algebra:
$$
a^2 =-1 \implies a = i ~~~~\text{and}~~~~(a+1)b = 1 \implies b = \frac{1-i}{2}
$$
So we get $f(x) = \sqrt{ix^2 + \frac{1-i}{2}}$. I checked using Wolfram, and $f\circ f$ appears to be what we want.
DISCLAIMER: This is not the only solution.
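As a numerical sanity check (my addition): since the principal square root satisfies $\sqrt{z}\,^2=z$ exactly, the composition simplifies without branch issues, and one can verify it in a few lines:

```python
import cmath

def f(x):
    return cmath.sqrt(1j * x ** 2 + (1 - 1j) / 2)

for x in [0.0, 0.3, -0.7, 1.0]:
    print(f(f(x)), cmath.sqrt(1 - x ** 2))   # the two columns agree
```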
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/607234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Probability of an Event defined by two continuous random variables I'm having trouble solving this word problem. I have the answer, but do not know how to get there.
An electronic gadget employs two integrated circuit chips: a signal processing chip and a power conditioning chip, which fail independently of each other. The gadget fails to operate only upon the failure of either of the two IC chips (i.e. all other modalities of failure of the gadget can be ignored). The time to failure of a chip is defined as the time interval between its manufacture and its failure, and is random (i.e. varies from chip to chip). The time to failure for the signal processing chip, denoted by X, has an exponential distribution, having the probability density function:
$$
f_x(x) = ae^{-ax}u(x)\\
\text{where } u(x) = \text{unit step function}\\
a = 10^{-4}/\text{hour}\\
f_y(y) = b e^{-by}u(y)\\
\text{where } b = 2 \cdot 10^{-4}/\text{hour}
$$
Question: Find the probability that, when a given gadget fails, the failure is due to the power conditioning chip rather than the signal processing chip.
Answer: 2/3
|
Hint: Let $Y$ be the random time taken for the power conditioning chip (PC) to fail and $X$ be the random time taken for the signal processing (SP) chip to fail. How do you compute the following probability?$$\mathbb{P}[\{\text{Time required for SP chip to fail} > \text{Time required for PC chip to fail}\}]$$ If you figure out the above, you have your answer.
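Carrying the hint through (my addition; this computation for independent exponentials is standard):
$$\mathbb{P}[Y < X] = \int_0^\infty \mathbb{P}[X > y]\, f_Y(y)\, dy = \int_0^\infty e^{-ay}\, b e^{-by}\, dy = \frac{b}{a+b} = \frac{2 \cdot 10^{-4}}{3 \cdot 10^{-4}} = \frac{2}{3},$$
matching the stated answer.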
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/607343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What do polynomials look like in the complex plane? I have a hard time visualizing the fundamental theorem of algebra, which says that any non-constant polynomial has at least one zero. Superficially I know this is true, as every such polynomial must have either a complex or a real zero, but how do I visualize this in the complex plane?
For example, if we have a real polynomial, we know that it is zero where its graph crosses the $x$-axis, because there $y = 0$. However, if $f(z) = 0$, then it must be the case that $f(z) = w = u+iv = 0+i0=0$, so every zero of $f(z)$ is a point that gets mapped to the origin of the $w$-plane? That does not make sense to me; what am I missing here?
|
See these:
* Visual Complex Functions by Wegert.
* Phase Plots of Complex Functions: A Journey in Illustration by Wegert and Semmler.
* The Fundamental Theorem of Algebra: A Visual Approach by Velleman.
Try an interactive demo at http://www.math.osu.edu/~fowler.291/phase/.
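To experiment with phase plots directly, here is a minimal sketch (my addition; it assumes only NumPy and Matplotlib). It colours each grid point by the argument of $p(z)=z^3-1$; the three zeros show up as the points where all colours meet:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 600)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
P = Z ** 3 - 1                     # try any polynomial here
plt.imshow(np.angle(P), extent=[-2, 2, -2, 2], origin="lower", cmap="hsv")
plt.title("Phase of z^3 - 1")
plt.show()
```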
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/607436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 6,
"answer_id": 5
}
|
Simple examples of $3 \times 3$ rotation matrices I'd like to have some numerically simple examples of $3 \times 3$ rotation matrices that are easy to handle in hand calculations (using only your brain and a pencil). Matrices that contain too many zeros and ones are boring, and ones with square roots are undesirable. A good example is something like
$$
M = \frac19 \begin{bmatrix}
1 & -4 & 8 \\
8 & 4 & 1 \\
-4 & 7 & 4
\end{bmatrix}
$$
Does anyone have any other examples, or a process for generating them?
One general formula for a rotation matrix is given here. So one possible approach would be to choose $u_x$, $u_y$, $u_z$ and $\theta$ so that you get something simple. Simple enough for hand calculations, but not trivial. Like the example given above.
|
Some entries of the rotation matrix, whether $2 \times 2$ or $3 \times 3$, are trigonometric functions; to ensure that the entries of the matrix are simple numbers that are cheap to compute with, pick integer multiples of $\pi$, at which the trigonometric functions are either $1$, $-1$ or $0$.
Is that what you meant?
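One general process along the lines the question asks for (my addition, not part of the answer above) is the Cayley transform: for any skew-symmetric $S$ with rational entries, $(I-S)(I+S)^{-1}$ is a rotation matrix with rational entries. A minimal sketch in Python/SymPy, where the parameters $p,q,r$ are arbitrary rationals:

```python
from sympy import Matrix, eye, Rational

def cayley_rotation(p, q, r):
    # Skew-symmetric matrix built from three rational parameters
    S = Matrix([[0, -r, q],
                [r, 0, -p],
                [-q, p, 0]])
    return (eye(3) - S) * (eye(3) + S).inv()

R = cayley_rotation(Rational(1, 2), Rational(1, 2), Rational(1, 2))
print(R)         # a rotation matrix with rational entries
print(R * R.T)   # the identity, confirming orthogonality
print(R.det())   # 1
```

Clearing denominators reproduces matrices of the same flavour as the example $M$ in the question.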
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/607540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 2
}
|
Can a function be both upper and lower quasi-continuous? Can you give me a non-trivial example? Below is the definition I am using:
A function $f: X \rightarrow \mathbb{R}$ is said to be upper (lower) quasi-continuous at $x \in X$ if for each $\epsilon >0$ and for each neighbourhood $U$ of $x$ there is a non-empty open set $G \subset U$ such that $f(y)< f(x) + \epsilon$ ($f(y)> f(x) - \epsilon$), for each $y \in G$.
|
For a slightly non-trivial example, consider
$$f(x)=\begin{cases}\sin\Bigl(\dfrac1x\Bigr)&x\ne0,\\a&x=0.\end{cases}$$
I think you will find that this function is quasi-continuous (i.e. both upper and lower) iff $\lvert a\rvert\le1$; more generally it is upper quasi-continuous iff $a\ge -1$ and lower quasi-continuous iff $a\le1$. (The point is that every neighbourhood of $0$ contains open sets on which $f$ is arbitrarily close to $1$, and others on which it is arbitrarily close to $-1$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/607637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove $\max \cos(x)$ is $1$ and $\min \cos(x)$ is $ -1$ Prove $\max \cos(x)$ is $1$ and $\min \cos(x)$ is $-1$
How can one prove it using only single-variable calculus, without multivariable calculus?
Please notice that this is not a homework question, but a pre-exam question. Thanks a lot.
|
It is $\cos{(\varphi)}=\Re{({e^{i\varphi}})}$ and $|e^{i\varphi}|=1$.
Since $|\Re{(z)}|\leq|z|$, this gives $|\cos\varphi|\le 1$; evaluating at $\varphi=0$ and $\varphi=\pi$ shows that the values $1$ and $-1$ are attained, which proves the claim.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/607729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Checking irreducibility of polynomials over number fields Are there general methods for checking irreducibility of polynomials over number fields? For instance, letting $F = \mathbb{Q}(\sqrt{3})$, I want to know whether $x^3 - 10 + 6\sqrt{3}$ is irreducible over $F$ (I know that it is but it's not trivial). There is the generalized Eisenstein criterion but it's unwieldy when you're not working over the rationals. Are there any methods as useful, or almost as useful, as Eisenstein's criterion, reducing modulo a prime, etc?
|
The "usual" algorithm for factoring polynomials over number fields go back to Kronecker, and uses the idea that it is essentially sufficient to factor the norm. for a bit of a discussion, see my recent preprint (which is really about something else, but gives a description when talking about an algorithm to compute the Galois group), and references therein (e.g. to Susan Landau's original paper).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/607842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|