The Moore Plane's topology In the definition of the Moore plane $X=L_1\cup L_2$, where $L_1$ is the line $y=0$ and $L_2=X\setminus L_1$, I have a problem. In Engelking's book, for each $x\in L_1$ a neighbourhood of $x$ is of the form $U(x,1/i)\cup \{ x \}$, where $U(x,1/i)$ is the set of points of $X$ inside the circle centered at $x$ of radius $1/i$, for $i=1,2,\dots$ So I wonder: can the radius be greater than $1$? If not, how can such small radii cover all of $X$? Thanks
You don’t need to cover $X$ with basic open nbhds of points of $L_1$: you also have the basic open nbhds of points of $L_2$, which are ordinary Euclidean balls small enough to stay within $L_2$. Specifically, the following collection is a base for $X$: $$\left\{\{\langle x,0\rangle\}\cup B\left(\left\langle x,\frac1k\right\rangle,\frac1k\right):k\in\Bbb Z^+\right\}\cup\left\{B(\langle x,y\rangle,\epsilon):y>0\text{ and }0<\epsilon\le y\right\}\;,$$ where $B(p,r)$ is the usual Euclidean open ball of radius $r$ centred at $p$. Note that your description of basic open nbhds at points of $L_1$ isn’t actually correct: the ball $U$ is tangent to $L_1$ at $x$ and therefore does not have its centre at $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/571348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problem related to continuous complex mapping. We are given a map $g:\bar D\to \Bbb C $, which is continuous on $\bar D$ and analytic on $D$, where $D$ is a bounded domain and $\bar D=D\cup\partial D$. 1) I want to show that: $\partial(g(D))\subseteq g(\partial D).$ And further, I need two examples: a) First, to show that the above inclusion can be strict, that is: $\partial(g(D))\not= g(\partial D).$ b) Second, an example to show that the conclusion in (1) is not true if $D$ is not bounded. So basically we have to show that the boundary of the open set $g(D)$ is contained in the image of the boundary of $D$ (and sometimes strictly contained). I think we should use the open mapping theorem, but it is not clear how this theorem will help us here.
It's better to assume the function $g$ to be bounded. If the image curve $g(\partial D)$ forms infinitely many loops everywhere, then $\partial g(D)\subsetneqq g(\partial D)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/571404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How to solve this initial value problem on $(-\infty, \ +\infty)$? I've managed to solve the following initial-value problem on the interval $(0, +\infty)$: $$x y^\prime - 2y = 4x^3 y^{1/2} $$ with $y = 0$ when $x = 1$. The unique solution is $y = (x^3 - x)^2$. How to solve this problem on the interval $(-\infty, \ +\infty)$?
With the scaling $x \to \alpha x$, $y \to \beta y$ we can see that the equation is invariant whenever $\beta^{1/2} = \alpha^{3}$. It means that $y^{1/2}/x^{3}$ is invariant under the above-mentioned scaling. It suggests the variable change $u = y/x^{6}$, i.e. $y = x^{6} u$. It leads to: $$ {1 \over 4\sqrt{u} - 4u}\,{{\rm d}u \over {\rm d}x} = {1 \over x} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/571474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Find the four digit number? Find a four digit number which is an exact square such that the first two digits are the same and also its last two digits are also the same.
The number is of the form $1000A + 100A + 10B + B = 11(100A + B) = 11(99A + A + B)$. Since it is a perfect square, $99A + A + B$ must be divisible by $11$; as $99A$ already is, $(A + B)$ is divisible by $11$ ... (i). Any perfect square has one of the digits $0, 1, 4, 5, 6, 9$ in the units place ... (ii). The only numbers of this form which satisfy properties (i) and (ii) are: $7744$, $2299$, $5566$, $6655$. [Note that $(A + B)$ being divisible by $11$ was the crucial property to note.] Of these, only $7744 = 88^2$ is a perfect square.
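As a sanity check, here is a small brute-force search (a Python sketch) over all four-digit squares:

```python
# Brute force (a sketch): search all four-digit perfect squares for the
# pattern AABB (first two digits equal, last two digits equal).
for n in range(32, 100):          # 32**2 = 1024, 99**2 = 9801
    s = str(n * n)
    if s[0] == s[1] and s[2] == s[3]:
        print(n * n, "=", n, "squared")   # 7744 = 88 squared
```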
{ "language": "en", "url": "https://math.stackexchange.com/questions/571582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 5 }
Determinant of a n x n Matrix - Main Diagonal = 2, Sub- & Super-Diagonal = 1 I'm stuck with this one - any tips? The Problem: Let $n \in \mathbb{N}.$ The following $n \times n$ matrix: $$A = \left( \begin{array}{ccc} 2 & 1 & & & & ...\\ 1 & 2 & 1 & & & ...\\ & 1 & 2 & 1 & & ...\\ & & 1 & 2 & 1 & ...\\ & & & 1 & ... & 1\\ ... & ... & ... & ... & 1 &2 \end{array} \right) $$ i.e. the main diagonal is $2$, the sub- and superdiagonal are $1$. Show by induction that $\det(A) = n + 1$. My solution approach: Laplace expansion starting with the 2 in the bottom right corner $(a_{n+1,n+1})$. But how can I tell whether it is positive or negative? After that I'm stuck with the 1 $(a_{n,n+1})$ (the submatrix becomes ugly and I get a recursive solution). How can I formalize this in a proper manner?
Expanding along the last row gives $A_n=2A_{n-1}-A_{n-2}$; the characteristic root $1$ is a double root, therefore $A_n=a\cdot n\cdot1^n+b$. The coefficients $a=1,b=1$ can be computed from $A_1=2$ and $A_2=3$.
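A quick numerical check of the claimed formula (a sketch using NumPy; the matrix is built from `np.eye` offsets):

```python
# Sketch: verify det(A_n) = n + 1 for the tridiagonal matrix with 2 on the
# main diagonal and 1 on the sub- and superdiagonal.
import numpy as np

for n in range(1, 9):
    A = 2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    print(n, round(np.linalg.det(A)))   # prints n, n + 1
```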
{ "language": "en", "url": "https://math.stackexchange.com/questions/571664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is it possible to subtract a matrix from both side? I have this equation $AX + B = I$ and I want to find Matrix $X$. $$(A^{-1})AX + B = (A^{-1})I$$ $$X + B = (A^{-1})I$$ My question is, is it legal to do $X + B - B = (A^{-1})I - B$?
$AX+B=I$, $A^{-1}AX+A^{-1}B=A^{-1}I$. So: $X=A^{-1}I-A^{-1}B$. It's legal.
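To illustrate numerically (a sketch; the matrices here are arbitrarily chosen invertible examples of mine), note that $X = A^{-1}(I - B)$ can be computed without forming $A^{-1}$ explicitly:

```python
# Sketch: from AX + B = I we get X = A^{-1}(I - B).
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
X = np.linalg.solve(A, I - B)      # solves AX = I - B directly
print(np.allclose(A @ X + B, I))   # True
```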
{ "language": "en", "url": "https://math.stackexchange.com/questions/571789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Limit: $\lim_{n\to \infty} \frac{n^5}{3^n}$ I need help on a homework assignment. How to show that $\lim_{n\to\infty} \left(\dfrac{n^5}{3^n}\right) = 0$? We've been trying some things but we can't seem to find the answer.
Can you complete this? $n>32 \to n^5<2^n \to \frac{n^5}{3^n}<\frac{2^n}{3^n}$ $n>\log_{\frac32}(1/\epsilon) \to (\frac32)^n>\frac1\epsilon \to (\frac{2}{3})^n<\epsilon$
{ "language": "en", "url": "https://math.stackexchange.com/questions/571852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 6 }
Proof of a trigonometric expression Let $f(x) = (\sin \frac{πx}{7})^{-1}$. Prove that $f(3) + f(2) = f(1)$. This is another trig question, and I cannot figure out how to start. Sum-to-product identities also did not work.
Let $7\theta=\pi, 4\theta=\pi-3\theta\implies \sin4\theta=\sin(\pi-3\theta)=\sin3\theta$ $$\frac1{\sin3\theta}+\frac1{\sin2\theta}$$ $$=\frac1{\sin4\theta}+\frac1{\sin2\theta}$$ $$=\frac{\sin4\theta+\sin2\theta}{\sin4\theta\sin2\theta}$$ $$=\frac{2\sin3\theta\cos\theta}{\sin4\theta\sin2\theta}\text{ Using } \sin2C+\sin2D=2\sin(C+D)\cos(C-D)$$ $$=\frac{2\cos\theta}{2\sin\theta\cos\theta}$$ $$=\frac1{\sin\theta}$$ All cancellations are legal as $\sin r\theta\ne0$ for $7\not\mid r$
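For reassurance, a quick numerical check of the identity (a Python sketch):

```python
# Sketch: check numerically that f(3) + f(2) = f(1) for f(x) = 1/sin(pi*x/7).
from math import sin, pi, isclose

f = lambda x: 1.0 / sin(pi * x / 7)
print(f(3) + f(2), f(1))            # both ~ 2.3047648709624865
print(isclose(f(3) + f(2), f(1)))   # True
```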
{ "language": "en", "url": "https://math.stackexchange.com/questions/571954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is every function from $\aleph_0 \to \aleph_2$ bounded? If $f$ is a function $f:\aleph_0 \to \aleph_2$, does it mean that the range of f is bounded in $\aleph_2$? Does this hold for all regular cardinals?
One of the definitions of a cardinal $\kappa$ being regular is that, whenever $\alpha < \kappa$, every function $f : \alpha \to \kappa$ is bounded. In any case, you can prove this directly, using the fact that a countable union of sets of cardinality $\aleph_1$ has cardinality $\aleph_1$: consider $$\bigcup_{n < \omega} \{ \alpha < \omega_2 : \alpha \le f(n) \}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/572120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is my calculation correct about a probability problem? Suppose there are $300$ tickets in the pool, where $7$ of them belong to me. $20$ tickets are randomly taken out of the pool, and are declared as "winning tickets". What is the probability that exactly 4 of the winning tickets are mine? When I tried to solve this I found $$\frac{\binom{20}{4} \left(7 \times 6 \times 5 \times 4 \prod _{j=1}^{16} (-j+293+1)\right)}{\prod _{i=1}^{20} (-i+300+1)} \approx 0.000433665 $$ Is this the correct probability? Thanks.
Your answer is correct, but the way it is notated is not very elegant. I would choose to write it as: $$\frac{\binom{20}{4}\binom{280}{3}}{\binom{300}{7}}$$ Choosing $7$ from $300$ gives $\binom{300}{7}$ possibilities. $4$ of them belonging to your $20$ gives $\binom{20}{4}$ possibilities, and $3$ of them belonging to the $280$ that are not yours gives $\binom{280}{3}$ possibilities.
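A quick check (a sketch using `math.comb` and `math.prod`) that the original product expression and the binomial form agree numerically:

```python
# Sketch: both expressions evaluate to ~ 0.000433665.
from math import comb, prod

original = (comb(20, 4) * 7 * 6 * 5 * 4
            * prod(293 - j + 1 for j in range(1, 17))
            ) / prod(300 - i + 1 for i in range(1, 21))
elegant = comb(20, 4) * comb(280, 3) / comb(300, 7)
print(original, elegant)   # identical up to floating-point rounding
```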
{ "language": "en", "url": "https://math.stackexchange.com/questions/572223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Measurable functions are not polynomials The problem I have says: If $f$ is measurable on $\mathbb R$, prove that there is at most a countable number of polynomials $P$ such that $P\overset{\text{a.e.}}{=}f$. I think I need to show that if $f$ is not a polynomial then it is different almost everywhere from every polynomial. I don't know how to show this though, if it is correct. Now, if $f$ is a polynomial, then if $f\overset{\text{a.e.}}{=}P_1$ and $f\overset{\text{a.e.}}{=} P_2$, then that means that $P_1\overset{\text{a.e.}}{=}P_2$ where $P_1,P_2$ are polynomials, but doesn't that mean that $P_1=P_2$? I have a hard time finding a counter-example to this. That is where I don't understand the at most countable part, it seems to me that if a function is almost everywhere equal to a polynomial, then it can't be almost everywhere equal to another one. Any help would be appreciated!
Since the only open set of measure $0$ is the empty set, the set where two continuous functions differ is either empty or of positive measure. So given a measurable function $f$, there is at most one continuous function $g$ such that $f=g$ almost everywhere. (If $g_1$ and $g_2$ both do the job, then $g_1=g_2$ almost everywhere, hence (by continuity) everywhere.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/572309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof that f is Riemann Integrable Theorem 6.1.8a: If $f$ is continuous on $[a,b]$ then $f$ is Riemann Integrable on $[a,b]$. Theorem 6.1.7: A bounded real-valued function $f$ is Riemann Integrable on $[a,b]$ if and only if for every $e > 0$, there exists a partition $P$ of $[a,b]$ such that $U(\mathcal{P},f) - L(\mathcal{P},f) < e$. Furthermore, if $P$ is a partition of $[a,b]$ for which the above inequality holds, then the inequality also holds for all refinements of $P$. Theorem 6.1.13: A bounded real-valued function $f$ on $[a,b]$ is Riemann Integrable if and only if the set of discontinuities of $f$ has measure zero. So we get any abstract partition of $[a,c-d]$ and one of $[c+d,b]$ such that $U[\mathcal{P}_i,f]-L[\mathcal{P}_i,f] < \frac{e}{3}$ for $i=1,2$? Now I get confused as to how to define $d$. Please correct/help!
Since $f$ is continuous on $[a, c - \delta]$ and $[c + \delta, b]$ therefore it is integrable on both these intervals. Let $P_{1}$ be partition of $[a, c - \delta]$ and $P_{2}$ be partition of $[c + \delta, b]$ such that $U(P_{1}, f) - L(P_{1}, f) < \epsilon / 3$ and $U(P_{2}, f) - L(P_{2}, f) < \epsilon / 3$. Now choose $P_{3} = \{c - \delta, c + \delta\}$ as a partition of $[c - \delta, c + \delta]$. The trick here is to choose $\delta$ small enough such that $U(P_{3}, f) - L(P_{3}, f) < \epsilon / 3$. Clearly we have $$U(P_{3}, f) - L(P_{3}, f) = \{c + \delta - (c - \delta)\}(M_{c} - m_{c}) = 2\delta(M_{c} - m_{c})$$ where $M_{c} = \sup \{f(x)\mid x \in [c - \delta, c + \delta]\}$ and $m_{c} = \inf \{f(x)\mid x \in [c - \delta, c + \delta]\}$. Clearly $M_{c} - m_{c} \leq 2M$ where $M = \sup \{|f(x)|\mid x \in [a, b]\}$ and therefore if we choose $\delta < \epsilon / 12M$ then $$U(P_{3}, f) - L(P_{3}, f) = 2\delta(M_{c} - m_{c}) < 2\cdot\frac{\epsilon}{12M}\cdot 2M = \frac{\epsilon}{3}$$ Let $P = P_{1} \cup P_{2} \cup P_{3}$ then $P$ is a partition of $[a, b]$ such that $\displaystyle \begin{aligned}U(P, f) - L(P, f) &= U(P_{1}, f) - L(P_{1}, f) + U(P_{2}, f) - L(P_{2}, f) + U(P_{3}, f) - L(P_{3}, f)\\ &< \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3}\\ &= \epsilon\end{aligned}$ Hence $f$ is Riemann-Integrable on $[a, b]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/572399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Algorithm for finding a basis of a subgroup of a finitely generated free abelian group Let $G$ be a finitely generated free abelian group. Let $\omega_1,\cdots, \omega_n$ be its basis. Suppose we are given explicitly a finite sequence of elements $\alpha_1,\cdots, \alpha_m$ of $G$ in terms of this basis. Let $\alpha_i = \sum_j a_{ij} \omega_j, i = 1,\cdots,m$. Let $H$ be the subgroup of $G$ generated by $\alpha_1,\cdots, \alpha_m$. It is well-known that $H$ is a free abelian group of rank $\le n$. My question: Is there an algorithm for finding a free basis of $H$ from the data $a_{ij}, 1 \le i \le m, 1\le j \le n$? If yes, what is it? Remark My motivation for the above question is as follows. Let $K$ be an algebraic number field of degree $n$. Let $\mathcal{O}_K$ be the ring of algebraic integers of $K$. Let $\omega_1,\cdots,\omega_n$ be its integral basis. Suppose we are given explicitly a finite sequence of elements $\mu_1,\cdots, \mu_r$ of $\mathcal{O}_K$ in terms of this basis. Suppose not all of these elements are zero. Let $I$ be the ideal of $\mathcal{O}_K$ generated by $\mu_1,\cdots, \mu_r$. It is well-known and easy to see that $I$ is a free $\mathbb{Z}$-submodule of $\mathcal{O}_K$ of rank $n$. I would like to know how to find a free basis of $I$ as a $\mathbb{Z}$-module.
Let $x_1,\cdots, x_m$ be a sequence of elements of $G$. We denote by $[x_1,\cdots,x_m]$ the subgroup of $G$ generated by $x_1,\cdots, x_m$. We use induction on the rank $n$ of $G$. Suppose $n = 1$. Then $\alpha_1 = a_{11}\omega_1,\cdots, \alpha_m = a_{m1}\omega_1$. We may suppose that not all of $\alpha_i$ are zero. Let $d =$ gcd$(a_{11},\cdots,a_{m1})$. There exist integers $c_1,\cdots,c_m$ such that $d = c_1a_{11} + \cdots + c_ma_{m1}$. Then $d\omega_1 = c_1\alpha_1 + \cdots + c_m\alpha_m \in H$. Clearly $d\omega_1$ is a basis of $H$. Suppose $n \gt 1$. If $H \subset [\omega_1,\cdots,\omega_{n-1}]$, we are done by the induction assumption. So we suppose not all of $a_{1n},\cdots,a_{mn}$ are zero. Let $d =$ gcd$(a_{1n},\cdots,a_{mn})$. Let $a_{in} = b_id$ for $i = 1,\cdots,m$. There exist integers $c_1,\cdots,c_m$ such that $d = c_1a_{1n} + \cdots + c_ma_{mn}$. Let $\beta_n = c_1\alpha_1 + \cdots + c_m\alpha_m$. Then $\gamma_i = \alpha_i- b_i\beta_n \in [\omega_1,\cdots,\omega_{n-1}]$ for $i = 1,\cdots, m$. Since $[\gamma_1,\cdots,\gamma_m] \subset [\omega_1,\cdots,\omega_{n-1}]$ and $[\omega_1,\cdots,\omega_{n-1}] \cap [\beta_n] = 0, [\gamma_1,\cdots,\gamma_m]\cap [\beta_n] = 0$. Hence $H = [\gamma_1,\cdots,\gamma_m] + [\beta_n]$ is a direct sum and we are done by applying the induction assumption on $[\gamma_1,\cdots,\gamma_m]$.
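For what it's worth, here is a Python sketch of this inductive procedure, phrased as integer row reduction (all function names are mine; only unimodular row operations are used, so the row lattice never changes and the nonzero rows of the output form a free basis of $H$):

```python
def z_row_basis(gens):
    """Free Z-basis of the subgroup generated by the integer row vectors gens.

    Only row swaps and additions of integer multiples of one row to another
    are performed, so the rows of the echelon-form result generate the same
    subgroup and are linearly independent, hence a free basis.
    """
    M = [list(row) for row in gens]
    m, n = len(M), len(M[0])
    r = 0                                   # index of the next pivot row
    for c in range(n):                      # process columns left to right
        while True:
            nz = [i for i in range(r, m) if M[i][c] != 0]
            if not nz:
                break                       # column already clear below row r
            i0 = min(nz, key=lambda i: abs(M[i][c]))
            M[r], M[i0] = M[i0], M[r]       # smallest entry becomes the pivot
            done = True
            for i in range(r + 1, m):       # Euclidean reduction in column c
                q = M[i][c] // M[r][c]
                M[i] = [a - q * b for a, b in zip(M[i], M[r])]
                if M[i][c] != 0:
                    done = False
            if done:
                r += 1
                break
    return M[:r]

print(z_row_basis([[2, 4, 0], [6, 8, 0], [0, 0, 3]]))
# [[2, 4, 0], [0, -4, 0], [0, 0, 3]]
```

In practice one would use a computer-algebra system's Hermite-normal-form routine, which is this same reduction done with more care about coefficient growth.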
{ "language": "en", "url": "https://math.stackexchange.com/questions/572479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimization of $\log_{a}(bc)+\log_{b}(ac)+\log_{c}(ab)$? I am trying to find the minimal value of the expression: $\log_{a}(bc)+\log_{b}(ac)+\log_{c}(ab)$ I think experience suggests that the variables should be equal; if so, then the minimal value is 6, but this is not true in general. Any hints or help will be greatly appreciated.
Assuming $a > 1$ and $a \le b \le c$. Let $q = b/a$ and $r = c/b$, so $b = qa$ and $c = qra$, with $q, r \ge 1$. $$ f(a,b,c) = \log_a (a^2 q^2 r) + \log_{aq} (a^2 q r) + \log_{aqr} (a^2 q) $$ $$ = \dfrac{2 \ln a + 2 \ln q + \ln r}{\ln a} + \dfrac{2 \ln a + \ln q + \ln r}{\ln a + \ln q} + \dfrac{2 \ln a + \ln q}{\ln a + \ln q + \ln r}$$ $$ = 6 + \dfrac {2 \ln q + \ln r}{\ln a} + \dfrac{-\ln q + \ln r}{\ln a + \ln q} + \dfrac{-\ln q - 2 \ln r}{\ln a + \ln q + \ln r}$$ We know $\ln q$ and $\ln r$ are both $\ge 0$, so $\dfrac {\ln q}{\ln a}$ is at least as large as both $\dfrac {\ln q}{\ln a + \ln q}$ and $\dfrac {\ln q}{\ln a + \ln q + \ln r}$; let $\epsilon_1 \ge 0$ be the sum of these differences, so $$ f(a,q,r) = 6 + \epsilon_1 + \dfrac{\ln r}{\ln a} + \dfrac{\ln r}{\ln a + \ln q} + \dfrac {-2 \ln r}{\ln a + \ln q + \ln r}$$ Similarly, both $\dfrac{\ln r}{\ln a}$ and $\dfrac{\ln r}{\ln a + \ln q}$ are at least $\dfrac {\ln r}{\ln a + \ln q + \ln r}$; let $\epsilon_2 \ge 0$ be the sum of those differences. Then $f(a,q,r) = 6 + \epsilon_1 + \epsilon_2 \ge 6$, and when $q = r = 1$ (that is, $a = b = c$), $f(a,q,r) = 6$. So 6 is the minimum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/572566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Numerical integration of $\int_0^2 \frac{1}{x+4}dx.$ I have a homework problem. Determine the number of intervals required to approximate $$\int_0^2 \frac{1}{x+4}dx$$ to within $10^{-5}$ and compute the approximation using (a) the Trapezoidal rule, (b) Simpson's rule, (c) a Gaussian quadrature rule. I think the phrase "within $10^{-5}$" refers to the error term. I know that the $m$-point Newton-Cotes rule is defined by $$Q_{NC(m)}=\int_a^b p_{m-1}(x)dx,$$ where $p_{m-1}$ interpolates the function on $[a,b].$ So when $m=2$ we call $Q_{NC(2)}$ the trapezoidal rule, and $Q_{NC(3)}$ is Simpson's rule. Can anyone explain what these three rules are and how I can proceed? And what does $m$ represent? I am kind of lost in this class, and the textbook is really, really bad; I have no idea what it is talking about...
It might also be asking you to use the remainder term formula. I happen to remember that the remainder term for Simpson's rule using $n$ intervals (where here $n$ must be an even number) is $$ -\frac{(b-a)^5 f^{(4)}(\xi)}{180n^4}$$ where $a$ and $b$ are the limits of integration, $f$ is the integrand, and $\xi$ is between $a$ and $b$. But this will only get you a lower bound for an $n$ that guarantees the error is less than $10^{-5}$. There is a similar formula for the trapezoidal rule. But I have absolutely no idea what it is for Gaussian quadrature!
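For this particular integral the remainder bound is easy to apply: $f^{(4)}(x) = 24/(x+4)^5$ attains its maximum $24/4^5$ at $x=0$. A sketch of the resulting computation (the loop and names are mine):

```python
# Sketch: smallest even n with the Simpson remainder bound <= 1e-5 for
# f(x) = 1/(x + 4) on [0, 2], using |E_n| <= (b-a)^5 * M4 / (180 n^4).
a, b, tol = 0.0, 2.0, 1e-5
M4 = 24 / 4**5                       # max of |f''''| on [0, 2], at x = 0

n = 2
while (b - a)**5 * M4 / (180 * n**4) > tol:
    n += 2                           # Simpson's rule requires an even n
print(n)                             # 6
```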
{ "language": "en", "url": "https://math.stackexchange.com/questions/572630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Project point onto line in Latitude/Longitude Given line AB made from two Latitude/Longitude co-ordinates, and point C, how can I calculate the position of D, which is C projected onto AB? Diagram:
What to do Express $A,B,C$ using Cartesian Coordinates in $\mathbb R^3$. Then compute $$D=\bigl((A\times B)\times C\bigr)\times(A\times B)$$ Divide that vector by its length to project it onto the sphere (with the center of the sphere as center of projection). Check whether you have the correct signs; the computation might instead result in the point on the opposite side of the earth, in which case you'd simply flip all coordinate signs. The correct point is likely the one closer to e.g. the point $A+B$, so you can simply try both alternatives and choose the correct one. Then turn the resulting Cartesian vector back into latitude and longitude. How this works The description above was obtained by viewing the sphere as the real projective plane. In that view, a point of the real projective plane corresponds to two antipodal points on the sphere, which is the source of the sign ambiguity I mentioned. $P=A\times B$ is a vector orthogonal to both $A$ and $B$. Every great circle which is orthogonal to $AB$ will pass through the projection of $P$ onto the sphere. $Q=P\times C$ is orthogonal to both $P$ and $C$, so it is orthogonal to the great circle which connects $C$ with $P$ (resp. its projection onto the sphere). That great circle is the one which also connects $C$ and your desired $D$. $D=Q\times P$ is orthogonal to both $P$ and $Q$, so it lies both on the great circle $AB$ and the great circle $CD$. Therefore it must point in the direction of the desired point. Project onto the sphere, choose the correct point from the antipodal pair, and you have the solution.
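A rough numerical sketch of the recipe (the conversion helpers and the sign-disambiguation heuristic are my own choices):

```python
import numpy as np

def to_xyz(lat, lon):
    """Unit vector for a latitude/longitude pair given in degrees."""
    la, lo = np.radians(lat), np.radians(lon)
    return np.array([np.cos(la) * np.cos(lo),
                     np.cos(la) * np.sin(lo),
                     np.sin(la)])

def to_latlon(v):
    v = v / np.linalg.norm(v)          # project onto the unit sphere
    return np.degrees(np.arcsin(v[2])), np.degrees(np.arctan2(v[1], v[0]))

def project(A, B, C):
    a, b, c = to_xyz(*A), to_xyz(*B), to_xyz(*C)
    n = np.cross(a, b)                 # normal of the great circle through A, B
    d = np.cross(np.cross(n, c), n)    # ((A x B) x C) x (A x B)
    if np.dot(d, a + b) < 0:           # resolve the antipodal ambiguity
        d = -d
    return to_latlon(d)

print(project((0.0, 0.0), (0.0, 90.0), (30.0, 45.0)))  # ~ (0.0, 45.0)
```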
{ "language": "en", "url": "https://math.stackexchange.com/questions/572746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Techniques for removing removable singularities (without resorting to series expansion)? Suppose $f: \mathbb{C} \supset U \to \mathbb{C}$ is a meromorphic function with a removable singularity at the point $z_0 \in U$. Then $f$ can be extended to a holomorphic function over all of $U$. However, the material I've encountered does not provide much in the way of practical techniques for explicitly computing the holomorphic extension. The only technique I've encountered thus far is to simply write out the power series. For instance, suppose we start with the function $$ f(z) = \frac{e^z-1}{z}, $$ which has a removable singularity at $z=0$. The corresponding power series is $$ \frac{1}{z}\left( -1 + \sum_{n=0}^\infty \frac{z^n}{n!} \right) = \sum_{n=1}^\infty \frac{z^{n-1}}{n!}.$$ This series is certainly convergent (it is termwise smaller than the series for $e^z$), and is well-defined at $z=0$. However, it leaves something to be desired: we started out with a nice, finite algebraic expression, and ended up with a nasty infinite sum. One reason I call this sum "nasty" is that it does not lend itself well to numerical evaluation at an arbitrary point $z$. Away from the origin, the original function $f(z)$ can be evaluated using a small number of standard numerical routines (exponentiation, division, etc.). In contrast, to evaluate the sum to within machine precision, we likely have to evaluate a large number of terms (how many?), and we also have to be careful about things like catastrophic cancellation while accumulating the sum, especially near the origin. What a mess! Question: Are there other techniques for explicitly constructing the holomorphic extension of a removable singularity, that do not result in series expansions? (Note that in general I am interested in functions that are considerably more complicated than the example given above.)
The extension across the removable singularity simply coincides with the original function outside the singularitites (that's the very point!). At the removable singularity, the value of the extended function is just $\lim_{z\to a} f(z)$. In your particular case, we have that $$ \lim_{z\to 0} \frac{e^z-1}{z} = 1, $$ so the extended function is $$ f(z) = \begin{cases} \frac{e^z-1}{z}, & z \neq 0 \\ 1, & z = 0 \end{cases}. $$
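Incidentally, regarding the numerical concern in the question: near $z=0$ the subtraction $e^z-1$ suffers catastrophic cancellation, which standard libraries address with a dedicated `expm1` routine. A sketch for real arguments:

```python
# Sketch: numerically stable evaluation of f(x) = (e^x - 1)/x for real x.
from math import expm1

def f(x):
    return expm1(x) / x if x != 0.0 else 1.0   # value at 0 is the limit

print(f(1e-12))   # ~ 1.0000000000005; naive (exp(x)-1)/x would lose digits
print(f(1.0))     # 1.718281828459045
```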
{ "language": "en", "url": "https://math.stackexchange.com/questions/572837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Poisson Distribution Lambda, Probability, and Looking for Exactly k Automobiles arrive at a vehicle equipment inspection station according to a Poisson process with a rate of $ \lambda $ = 10 per hour. Suppose that with probability 0.5 an arriving vehicle will have no equipment violations. What is the probability that exactly 5 have no violations? I wanted to know if this process was correct? And I'm a bit confused as to how the probability comes into play here. $ p_X(k) = e^{-\lambda} \cdot \frac{\lambda^k}{k!} $ Then, $~ p_X(5) = e^{-10} \cdot \frac{10^5}{5!} = 0.0378 = 3.78\%$
If $X$ has Poisson distribution with parameter $\lambda$, and $Y$ has binomial distribution with the number of trials equal to the random variable $X$, and $p$ any fixed probability $\ne 0$, then the number of "successes" has Poisson distribution with parameter $\lambda p$. This has been proved repeatedly on MSE, at least twice by me. Here is a link to a proof. In our case $\lambda=10$ and $p=0.5$, so the required probability is $e^{-5}\frac{5^5}{5!}$. (It is not the number you obtained.)
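A quick numerical evaluation of this answer (a sketch):

```python
# Sketch: the no-violation count is Poisson with parameter lambda*p = 10*0.5 = 5,
# so P(exactly 5) = e^{-5} 5^5 / 5!.
from math import exp, factorial

print(exp(-5) * 5**5 / factorial(5))   # 0.17546736976785068
```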
{ "language": "en", "url": "https://math.stackexchange.com/questions/572917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Describing where a Kleisli Triple fits into a Monad ontology I'm trying to map a Kleisli triple onto my existing understanding of Monads. I can represent my understanding of Monads like this: (courtesy of Jim Duey's slides at 13) Could you please point to the part on this diagram where Kleisli triples fit in - or even better - draw another diagram that this diagram can fit into that explains it?
Kleisli triples fit in the diagram exactly where you have "monads". Kleisli triples are equivalent to monads (you might say they are one presentation of monads). I infer from the link that you are thinking about this in the context of programming languages? This question/answers might help: What is a monad in FP, in categorical terms?.
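If a concrete rendering helps: here is a sketch in Python of the three pieces of a Kleisli triple $(T,\eta,({-})^*)$ for a Maybe/Option-style monad (the encoding with `None` and all names are my own choices, not a standard library API):

```python
# Kleisli triple (T, eta, star) for a Maybe-like monad, sketched in Python.
# T X ~ the values of X plus None, where None models failure.
def eta(x):
    """Unit: embed a plain value into the monad."""
    return x

def star(f):
    """Extension: lift f : X -> T Y to f* : T X -> T Y."""
    return lambda tx: None if tx is None else f(tx)

def kleisli(f, g):
    """Kleisli composition: g after f, i.e. star(g) . f."""
    return lambda x: star(g)(f(x))

safe_recip = lambda x: None if x == 0 else 1.0 / x
print(star(eta)(7) == 7)                       # unit law: eta* is the identity
print(kleisli(safe_recip, safe_recip)(4.0))    # 4.0  (1/(1/4))
print(kleisli(safe_recip, safe_recip)(0.0))    # None (failure propagates)
```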
{ "language": "en", "url": "https://math.stackexchange.com/questions/573042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is the expected value (mean) of a variable written using square brackets? My question is told in a few words: Why do you write $E[X]$ in square brackets instead of something like $E(X)$? Probably it is not a "function". How would you call it then? This question also applies for $Var[X]$.
I don't. You can write it both ways; it doesn't matter. Some don't even use brackets and might instead write $EX$ or $VX$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/573148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 0 }
Is there a means of analytically proving the following identity? Okay, so before I begin, my background is more in the world of applied, rather than pure, mathematics, so this question is motivated by a physics problem I'm looking at just now. Mathematically, it boils down to looking for a minimum of a real valued function of a single, positive, real-valued variable, $u$. The part of differentiating the function and finding the condition for a stationary point is straightforward. I can, somewhat heuristically, convince myself that the function must be a minimum, but this is speaking from a physical standpoint. The condition for a minimum rests on the truth (or otherwise) of the following inequality: $ \cosh^2(u) \geq \frac{u^2}{8}. $ Now, I can plot this on a graph, and it clearly holds up for the range of values over which the plot is carried out (and would appear to be true in general). I can then say that it's true for all sensible values of the physical parameter $u$, which is simply a combination of a bunch of physical constants and a variable. But obviously I cannot draw an infinite graph, and would rather like a concrete proof to show that this is true for all positive, real-valued $u$. Is there a method that is recommended for dealing with such a problem? I don't expect a full solution, as I realise it's quite elementary-looking, and once pointed in the right direction I could no doubt take care of it myself. I'm just curious to know what the best analytic method would (in your opinion) be to deal with it so the proof looks a wee bit neater and more rigorous anyway. Thanks in advance.
A way to do this is to define the function $f(u)=\cosh^2(u)-u^2/8$ and to show that $f(u)\geq 0$ for all $u$. And how to do this? Take the derivative of $f$, which is $f'(u)=2 \cosh(u)\sinh(u)- u/4$, find the minimum value of $f$ at the point which makes the derivative $0$, and check that this minimum is greater than or equal to $0$; then you're done.
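Alternatively, a sketch of a calculus-free argument: by the Taylor series of $\cosh$ and AM-GM, $$\cosh u = 1 + \frac{u^2}{2} + \frac{u^4}{24} + \cdots \;\ge\; 1 + \frac{u^2}{2} \;\ge\; 2\sqrt{\frac{u^2}{2}} = \sqrt{2}\,|u|,$$ so in fact $\cosh^2 u \ge 2u^2 \ge \frac{u^2}{8}$, which is considerably stronger than what is needed.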
{ "language": "en", "url": "https://math.stackexchange.com/questions/573225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proving all sufficiently large integers can be written in the form $a^2+pq$. This is one of those numerous questions I ask myself, and which I seem unable to answer: Can every integer greater than $657$ be written in the form $a^2+pq$, with $a\in\mathbb Z$ and $p,q$ prime? A quick brute-force check through numbers up to $100000$ told me that the only positive integers that can not be represented in the given form are $1, 2, 3, 12, 17, 28, 32, 72, 108, 117, 297$ and $657$. The above conjecture seems quite likely to me. Since a great many integers are of the form $pq$, it is quite probable that, given a large $n>0$, at least one of $n, n-1, n-2^2, n-3^2, \ldots$ is of the form $pq$. I.e., considering the set $F=\{4,6,9,10,14,15,21,22,25,26,\ldots\}$ of all numbers of the form $pq$, a number $n$ not satisfying the property implies that none of the integers $n, n-1^2,\ldots, n-\lfloor\sqrt n\rfloor^2$ is an element of $F$. It's not much, but that's all I've discovered so far. (Besides, is there any hope of solving this riddle without using complicated analytic number theory?)
This is very hard. (Notation: A semiprime is a product of exactly two distinct primes.) Just consider this problem for perfect squares. We're looking at the equation $b^2=a^2+pq$, which easily translates to $$(b-a)(b+a)=pq.$$ For fixed $b$, this is soluble in $a$ (and $p$ and $q$), precisely if either: 1) $2b-1$ is a semiprime, and we take $a=b-1$, or 2) There are primes $p$ and $q$ such that $p+q=2b$. If we assume that $2b-1$ is not a semiprime (and this is true for most $b$ -- looking at $b<x$, there are $O(x \frac{\log\log x}{\log x})$ values of $b$ for which $2b-1$ is a semiprime), we're therefore reduced to solving the Goldbach problem, and I don't know how to say anything useful about that (though I wish I did!). Thus, proving that every large integer $n$ can be written as $a^2+pq$ is at least as hard as proving Goldbach (or, I guess, proving Goldbach for a large subset of the even integers, those that aren't one more than a semiprime, but that's probably just as hard).
{ "language": "en", "url": "https://math.stackexchange.com/questions/573317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Counting Problems. I'm having trouble with the following. 1. A man has 10 distinct candies and puts them into two distinct bags such that each bag contains 5 candies. In how many ways can he do it? (For this problem I thought it would be $\binom{10}{5}$, since we could place 5 of the 10 candies in one bag, and then the remaining 5 candies go into the second bag, so $\binom{5}{5}$; however, I'm not sure if this is right.) 2. How many ways are there to divide 10 boys into two basketball teams of 5 boys each? (I would think this is similar to problem 1, but am not sure.) 3. A person has 10 distinct candies and puts them in two identical bags, such that no bag is empty; how many ways can he do it? 4. A person has 10 identical candies and puts them in two identical bags such that none are empty; how many different ways can he do it? I'm practicing for an exam and can't figure them out.
I’m going to add a little more explanation of the difference between the first and second problems. In the first problem we’re told that the bags are distinct; given the wording of the last two problems, that almost certainly means that they are individually identifiable, not interchangeable. That’s as if in the second problem we were to split the $10$ boys into a team called the Gryphons and another team called the Hippogryphs. There would be $\binom{10}5$ ways to choose which $5$ boys are to be Gryphons, and of course the other $5$ boys would be the Hippogryphs, so there would be $\binom{10}5$ ways to choose the teams. Similarly, in the first problem there are $\binom{10}5$ ways to split the candies between the two individually identifiable bags. In the actual second problem, however, the teams don’t have names; they’re just two teams of $5$. If the boys are $A,B,C,D,E,F,G,H,I$, and $J$, and we pick $A,B,C,D$, and $E$ to be one team, then of course $F,G,H,I$, and $J$ make up the other team. But we might just as well have selected $F,G,H,I$, and $J$ for one team, leaving $A,B,C,D$, and $E$ to be the other team. Since the teams are not named, choosing $A,B,C,D$, and $E$ gives us exactly the same division into teams as selecting $F,G,H,I$, and $J$: every possible division into teams is counted twice. That’s why in the second problem you don’t get $\binom{10}5$ possibilities, but rather only $\frac12\binom{10}5$. To put it a little differently, picking $A,B,C,D$, and $E$ to be the Gryphons in the modified problem gives us a different division into named teams from picking $F,G,H,I$, and $J$ to be the Gryphons, but it gives us exactly the same two groups of people playing against each other. It’s a different division into named (or otherwise individually identifiable) teams but not a different division into two teams of $5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/573380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Does there exist a semigroup such that every element factorizes in this way, which nonetheless lacks a left identity? If a semigroup $S$ has a left identity-element, then for any $y \in S$ we can write $y = xy$ for some $x \in S$. Just take $x$ to be any of the left identities, of which there is at least one, by hypothesis. Does there exist a semigroup $S$ such that every $y \in S$ factorizes in this way, which nonetheless lacks a left identity-element?
To give a concrete example based on vadim123's answer: Let $X$ denote an arbitrary set (for ease of imagining, assume non-empty). Then $2^X$ can be made into an idempotent monoid by defining composition as binary union. Now delete the empty set from $2^X$, obtaining a semigroup $S$. Since $S$ is idempotent, every $A \in S$ factorizes as $A = A \cup A$. Nonetheless, $S$ has no identity element.
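This example is small enough to verify exhaustively for a two-element $X$ (a sketch; sets are encoded as frozensets):

```python
# Sketch: S = nonempty subsets of {0, 1} under union is a semigroup where
# every A factorizes as A = B ∪ A for some B, yet S has no left identity.
from itertools import combinations

X = {0, 1}
S = [frozenset(c) for r in range(1, 3) for c in combinations(X, r)]

# every A factorizes (idempotence gives B = A)
print(all(any(B | A == A for B in S) for A in S))   # True

# no left identity: no E with E ∪ A == A for every A
print(any(all(E | A == A for A in S) for E in S))   # False
```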
{ "language": "en", "url": "https://math.stackexchange.com/questions/573464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Are these isomorphic: $\mathbb{Z}_{2}\times\mathbb{Z}_{3}$ and $\mathbb{Z}_{9}^{*}$ Is $\mathbb{Z}_{2}\times\mathbb{Z}_{3}$ isomorphic to $\mathbb{Z}_{9}^{*}$? Both have order 6; both have elements of orders 1, 2, 3, 6 (1 element of order 1, 1 element of order 2, 2 elements of order 3, and 2 elements of order 6); both are cyclic, thus Abelian. Can I then conclude that they are isomorphic, or must a specific isomorphism be constructed?
Yes, the two groups are isomorphic. And you are almost there in proving this. Note that $\mathbb Z_2\times \mathbb Z_3 = \mathbb Z_6$, since $\gcd(2, 3) = 1$. And since the order of $\mathbb Z^*_9 = 6$ and is cyclic, we know that $\mathbb Z^*_9 \cong \mathbb Z_6$. There is no need to construct an explicit isomorphism here, though you need to (implicitly or explicitly) invoke the following standard fact about finite cyclic groups: Every finite cyclic group of order $n$ is isomorphic to $\mathbb Z_n$, the group of integers under addition, modulo $n$.
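A quick computational confirmation that $\mathbb Z_9^*$ is cyclic (a sketch; it happens that $2$ is a generator):

```python
# Sketch: powers of 2 modulo 9 exhaust all six units, so Z_9^* is cyclic.
units = [a for a in range(1, 9) if a % 3 != 0]   # the units mod 9
powers = [pow(2, k, 9) for k in range(1, 7)]     # 2, 4, 8, 7, 5, 1
print(sorted(powers) == sorted(units))           # True: 2 generates Z_9^*
```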
{ "language": "en", "url": "https://math.stackexchange.com/questions/573536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Prove that the $x$-axis in $\Bbb R^2$ with the Euclidean metric is closed I want to show that the $x$-axis is closed. Below is my attempt - I would appreciate any tips on how to improve my proof, or corrections: Let $(X,d)$ be a metric space with the usual metric. Want to Show: $\{(x,y) \mid x ∈ \Bbb R, y = 0\}$ is closed Claim: $\{(x,y) \mid x ∈ \Bbb R, y ≠ 0\}$ is open Proof: Let $\{(x,y) \mid x ∈ \Bbb R, y ≠ 0\} = C$. Let $z$ be an arbitrary $(a,b) ∈ C$ and let $$ε = \min\{d(z,(0,y)), d(z,(x,0))\}.$$ Then for any $p \in B_ε(z)$, the ball $B_{ε/2}(p)$ of radius $ε/2$ lies in $\{(x,y) \mid x ∈ \Bbb R, y ≠ 0\}$.
Another fun way you might approach this problem is to let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by $f(x,y) = y^2$. If you know/can show that $f$ is continuous, then it will imply that $f^{-1}(\{0\})$ is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/573619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Need help with permutations and combinations problems A woman has 6 friends; each evening, for 5 days, she invites 3 of them so that the same group is never invited twice. How many ways are there to do this? (Assume that the order in which groups are invited matters.) Attempt: I know that if I computed 6 choose 3, I would get all the groups: among the subsets of all 6 people {a, b, c, d, e, f} there are 20 unique subsets of 3. However, for the second part, where no group is invited twice, should the solution be: 20 x 19 x 18 x 17 x 16? For each of the 5 days, once a subset is used, it leaves one fewer unique subset as a possibility. I'm just not sure.
The number of ways to make groups of 3 people out of 6 is ...? Then note that if a group is invited on the first evening, it can't be invited for the second, so there will be one fewer to choose from, and so on. Can you solve it on your own now?
{ "language": "en", "url": "https://math.stackexchange.com/questions/573686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Enjoyable book to learn Topology. I believe Visual Group Theory - Nathan Carter is the best book for a non-mathematician (with high school math) to learn Group Theory. Could someone please recommend me a similar book (if there is) to learn Topology? Edit: I know many books in Topology, but someone who has read the above book will know what kind of reference I'm asking for. I am not looking for hard exercises, but to learn the concept and use it. Thanks.
Topology - James Munkres I have been using James Munkres's book for self-study. The proofs are well presented, easy to follow, and yet still rigorous. The first few chapters give you the set-theory concepts to prepare you for the rest of the book.
{ "language": "en", "url": "https://math.stackexchange.com/questions/573781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 6 }
Find the prime factor decomposition of $100!$ and determine how many zeros terminate the representation of that number. Find the prime factor decomposition of $100!$ and determine how many zeros terminate the representation of that number. Actually, I know a way to solve this, but it is very large and cumbersome, and I would like to know if you have an easier way, or if I am applying it wrongly. Denoting by $\left[\frac{b}a \right]$ the quotient of $b$ by $a$, and by $E_p(m)$ the largest exponent of a power of $p$ dividing $m$, I found the demonstration of a theorem (which my text says was discovered by Legendre): $$E_p(n!)=\left[\frac{n}p \right]+\left[\frac{n}{p^2} \right]+\left[\frac{n}{p^3} \right]+\;...$$ always remembering that there will be a number $s$ such that $p^s>n$, which tells us that $$\left[\frac{n}{p^s} \right]=0$$ thus making the sum for $E_p(n!)$ finite. So, to address the first question, I really have to take all the primes $(p_1,p_2,...,p_k)$ with $1<p_i<100$, and compute all of $$E_{p_1},\;E_{p_2},\;E_{p_3},\;...,\;E_{p_k}.$$ And to find the zeros we have to see what the exponents of the numbers 5 and 2 are. Example: $$10!=2^8\cdot3^4\cdot5^2\cdot7\\p<10\\E_2(10!)=5+2+1=8\\E_3(10!)=3+1=4\\E_5(10!)=2\\E_7(10!)=1$$
It's possible to demonstrate that if $N$ is a multiple of $100$ with $N \le 500$, then $N!$ ends with $(N/4)-1$ zeroes; in particular, $100!$ ends with $24$ zeroes. (The pattern does not persist forever: $600!$ ends with $148$ zeroes, not $149$.)
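For the concrete question about $100!$, Legendre's formula gives the count of trailing zeros directly (a sketch; the zeros are governed by the exponent of $5$, since factors of $2$ are plentiful):

```python
# Sketch: trailing zeros of n! = E_5(n!) = [n/5] + [n/25] + [n/125] + ...
def trailing_zeros(n):
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

print(trailing_zeros(100))   # 24
```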
{ "language": "en", "url": "https://math.stackexchange.com/questions/573856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability of forming a 3-senator committee If the Senate has 47 Republicans and 53 Democrats, in how many ways can you form a 3-senator committee in which neither party holds all 3 seats? The solution says that: You can choose one Democrat, one Republican, and one more senator from either party. We can make these choices, in that order, in $53\cdot 47\cdot 98$ ways. But then we've counted each possible committee twice, since any given committee can be arranged in the order Democrat-Republican-Third Person in two different ways (depending on which member of the majority party on the committee is chosen as the Third Person). How are there two different ways to arrange the Democrat-Republican-Third committee based on the third person chosen? I only see one possible way.
Hint: find [Number of ways of choosing an arbitrary set of 3 senators] - [number of ways of choosing 3 Democrats] - [number of ways of choosing 3 Republicans]. Finding each of these 3 quantities is a standard counting task (think combinations, permutations, etc.). NOTE: I'm ignoring the given solution, since I think it makes the problem look more difficult than it actually is.
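Carrying out this hint numerically (a sketch):

```python
# Sketch: all committees minus the single-party ones.
from math import comb

print(comb(100, 3) - comb(53, 3) - comb(47, 3))   # 122059
```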
{ "language": "en", "url": "https://math.stackexchange.com/questions/573954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Negative curvature compact manifolds I know there is a theorem about the existence of metrics with constant negative curvature on compact orientable surfaces with genus greater than 1. My intuition of the meaning of genus makes me think that surfaces with genus greater than 1 cannot be simply-connected, but as my knowledge of algebraic topology is zero, I might be wrong. My question is: are there two- and three-dimensional orientable compact manifolds with constant negative curvature that are simply-connected? If yes, what is an example of one? If not, what is the reason? Thanks in advance!
Every closed simply-connected manifold of dimension two (by the classification of surfaces) or three (by Poincare-Thurston-Perelman) is diffeomorphic to a sphere, hence does not admit any metric of negative curvature. If a manifold has constant sectional curvature, its metric lifts to the universal cover, which is isometric to one of the space forms. Since a simply connected manifold is homeomorphic to its universal cover, and hyperbolic space is non-compact, no closed negatively curved manifold is simply connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/574053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
When do two functions differ by a constant throughout an interval (Fundamental Theorem of Calculus) I'm reading the proof of the Fundamental Theorem of Calculus here and I don't understand the following parts (at the bottom of page 2): I don't know how to conclude that $G(x)-F(x)=C$ for all $x \in [a,b]$. How do I prove the above statement, and does it rely on another theorem not mentioned in this proof? I tried to figure this out by looking at the definitions of $G(x)$ and $F(x)$, but only the definition of $G(x)$ is provided.
This is a consequence of the following general fact: If $f'(x) = 0$ for all $x$ in an interval $[a, b]$, then $f$ is constant on $[a, b]$. One way to prove this is by the Mean Value Theorem: If there were to exist $x_1$ and $x_2$ in the interval for which $f(x_1) \ne f(x_2)$, there would exist a $c$ between $x_1$ and $x_2$ for which $$0 \ne \frac{f(x_1) - f(x_2)}{x_1 - x_2} = f'(c)$$ contradicting the fact that $f' \equiv 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/574125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Sum of series $\frac{n}{(n+1)!}$ I'm encountering some difficulty on a question for finding the sum of the series $$\sum_{n=0}^\infty \dfrac{n}{(n+1)!}$$ The method I use to tackle this type of problem is generally to find a similar sum of a power series and algebraically manipulate it to match that of the original. I haven't found anything similar except for the summation of $e^x$ starting from $n=-1$, and subbing in $n^{\frac{1}{n}}$. Though, I'm not sure that will even work. Thanks in advance for any help/advice!
hint: $$ \sum_{n=0}^\infty \dfrac{n}{(n+1)!}= \sum_{n=1}^\infty \dfrac{n-1}{(n)!}$$
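A numerical check of where the hint leads (a sketch):

```python
# Sketch: partial sums of n/(n+1)! settle at 1.
from math import factorial

print(sum(n / factorial(n + 1) for n in range(0, 30)))   # ~ 1.0
```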
{ "language": "en", "url": "https://math.stackexchange.com/questions/574211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Triangle Ratio/Proportions Problem I would like someone to verify that I am solving this problem correctly. I do not remember the theorem that allows me to make the two halves of the triangle proportional. Because (h1/h2 = h1/h2) Triangles are proportional? Here is the problem: My work:
$$\frac{Area(BML)}{Area(BCM)}=\frac{LM}{MC} \Rightarrow \frac{5}{10}=\frac{LM}{MC}$$ $$\frac{Area(MCK)}{Area(BCM)}=\frac{KM}{MB} \Rightarrow \frac{8}{10}=\frac{KM}{MB}$$ Let the area of $AMK$ be $2A$; then from $\frac{Area(ALM)}{Area(AMC)}=\frac{LM}{MC}$, the area of $ALM$ will be $4+A$. Now $$\frac{Area(AMK)}{Area(ABM)}=\frac{KM}{MB} \Rightarrow \frac{2A}{5+4+A}=\frac{4}{5}\\\Rightarrow 10A=36+4A\Rightarrow A=6 \Rightarrow S=4+A+2A=22$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/574268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does this algorithm terminate? Let $x \in \mathbb{R}^p$ denote a $p$-dimensional data point (a vector). I have two sets $A = \{x_1, .., x_n\}$ and $B = \{x_{n+1}, .., x_{n+m}\}$, so $|A| = n$ and $|B| = m$. Given $k \in \mathbb{N^*}$, let $d_x^{(A, k)}$ denote the mean Euclidean distance from $x$ to its $k$ nearest points in $A$, and $d_x^{(B, k)}$ the mean Euclidean distance from $x$ to its $k$ nearest points in $B$. I have the following algorithm: (1) $A' = \{ x_i \in A \mid d_{x_i}^{(A, k)} > d_{x_i}^{(B, k)} \}$; (2) $B' = \{ x_i \in B \mid d_{x_i}^{(A, k)} < d_{x_i}^{(B, k)} \}$; (3) $A = \{ x_i \in A \mid x_i \not\in A' \} \cup B'$; (4) $B = \{ x_i \in B \mid x_i \not\in B' \} \cup A'$; (5) repeat (1), (2), (3) and (4) until no element moves from $A$ to $B$ or from $B$ to $A$ (that is, $A'$ and $B'$ become empty), or $|A| \leq 1$ or $|B| \leq 1$. Does this algorithm terminate, and if so, is it possible to easily prove it? Note: the $k$ nearest points to $x$ in a set $S$ means the $k$ points (other than $x$) in $S$ having the smallest Euclidean distance to $x$.
Take $p=1$ and $k=1$. Consider $A=\{0,3\}$ and $B=\{2,5\}$. * *$d_0^A=3$ and $d_0^B=2$ *$d_2^A=1$ and $d_2^B=3$ *$d_3^A=3$ and $d_3^B=1$ *$d_5^A=2$ and $d_5^B=3$ So $A'=A$ and $B'=B$, so $A$ becomes $B$ and $B$ becomes $A$, and the algorithm never stops.
{ "language": "en", "url": "https://math.stackexchange.com/questions/574344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability space proof PROBLEM Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $(E_n)$ be a sequence in the $\sigma$-algebra $\mathcal{F}$. $a)$ If the sequence $(E_n)$ is increasing (in the sense that $E_n \subset E_{n+1}$) with limit $E = \cup_nE_n$, prove that $P(E_n) \rightarrow P(E)$ as $n \rightarrow \infty$. $b)$ If the sequence $(E_n)$ is decreasing with limit $E$, prove that $P(E_n) \rightarrow P(E)$ as $n \rightarrow \infty$. MY APPROACH $a)$ We know that $E_n \subset E_{n+1}$, so obviously $E_{n+1}$ contains all the sets $E_i$ for $1 \leq i \leq n$. $b)$ We know that the sequence is decreasing, so $E_{n+1} \subset E_n$, and $E_1$ contains all the sets $E_i$ for $2 \leq i \leq n$. I don't know how to formulate these proofs properly. I hope someone could help me.
a) Define $A_1:=E_1$ and $A_n:=E_{n}\setminus E_{n-1}$ for $n\ge2$. Then: (i) if $i\neq j$, we have $A_i\cap A_j=\emptyset$; (ii) for each $N$, $\bigcup_{i=1}^NA_i=\bigcup_{i=1}^NE_i=E_N$; (iii) $\mathbb P\left(\bigcup_{i=1}^NA_i\right)=\sum_{i=1}^N\mathbb P(A_i)$. b) Consider the sequence $(\Omega\setminus E_n)_n$. This forms a non-decreasing sequence, hence using the result of the first part: $$\mathbb P\left(\bigcup_{N=1}^\infty(\Omega\setminus E_N)\right)=\lim_{n\to\infty}\mathbb P\left(\bigcup_{j=1}^n(\Omega\setminus E_j)\right).$$ The LHS is $1-\mathbb P\left(\bigcap_{N=1}^\infty E_N\right)$ while the RHS is $1-\lim_{n\to \infty}\mathbb P\left(\bigcap_{j=1}^nE_j\right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/574441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Probability that exactly k of N people matched their hats [SRoss P63 Ex 2g] The match problem stated in Example 5m of Chapter 2 (of A First Course in Pr, 8th Ed, Ross) showed that the probability of no matches when $N$ people randomly select from among their own $N$ hats $= P[N]= \sum_{0 \le i \le N}(-1)^i/i!$ What is the probability that exactly $k$ of the $N$ people have matches? Solution: Let us fix our attention on a particular set of $k$ people and determine the probability that these $k$ individuals have matches and no one else does. Letting $E$ denote the event that everyone in this set has a match, and letting $G$ be the event that none of the other $N − k$ people have a match, we have $P(E \cap G) = P(E)P(G|E)$ (Rest of solution pretermitted) $P(E) = \dfrac{\text{ 1 choice for C1 } \times ... \times \text{ 1 choice for C(k - 1) } \times \text{ 1 choice for C(k) } \times N - k \text{ choices for C(N - k) }\times \ N - k - 1 \text{ choices for C(N - k - 1)} \times ...}{N!}$ , where $C(k) =$ chap $k$; chaps 1 through k each have one choice due to their success in finding their own hats, and the required probability $= \binom{N}{k}P(E \cap G) = \binom{N}{k}P(E)P[N - k]$. I see that $P(E) \neq P(E \cap G)$, but I don't apprehend the method and still deem $G$ redundant. Since $E$ is the event that exactly these $k$ people, for some $k$, have a match, how and why isn't the required probability just $P(E)$? Since there are only $N$ people, doesn't the occurrence of $E$ (coincidentally, directly, and straightaway) equal the occurrence of $G$?
Since $E$ is the event that exactly these $k$ people, for some $k$, have a match, how and why isn't the required probability just $P(E)$? Because: (1) you are asked for the probability that exactly $k$ people (no more and no less) match their hats, and the event that (say) the first $k$ match is not necessarily a "success", because there can be more matches among the remaining people; (2) furthermore, the event $E$ concerns only one particular set of $k$ people.
{ "language": "en", "url": "https://math.stackexchange.com/questions/574613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How do you rotate a vector by $90^{\circ}$? Consider a vector $\vec{OA}$. How will I rotate this vector by $90^{\circ}$ and represent it algebraically?
Calling the vector $\vec v$, with components $v_x,v_y$, the angle between the vector and the $x$ axis is $\alpha=\arctan\frac{v_y}{v_x}$. So if you add $\frac{\pi}{2}$ to $\alpha$, the rotated vector has components: $$v_x'=\|\vec v\|\cos(\alpha+\tfrac{\pi}{2})$$ $$v_y'=\|\vec v\|\sin(\alpha+\tfrac{\pi}{2})$$ If you want to subtract $90°$, you use $\alpha-\frac{\pi}{2}$ instead.
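For the special case of exactly $90^\circ$ these formulas simplify: since $\cos(\alpha+\frac\pi2)=-\sin\alpha$ and $\sin(\alpha+\frac\pi2)=\cos\alpha$, rotating $(x,y)$ by $+90^\circ$ is just $(x,y)\mapsto(-y,x)$. A sketch:

```python
# Sketch: rotating (x, y) by +90 degrees sends it to (-y, x).
import numpy as np

v = np.array([3.0, 1.0])
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])       # rotation matrix for +90 degrees
print(R @ v)                      # [-1.  3.]
```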
{ "language": "en", "url": "https://math.stackexchange.com/questions/574693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Simplify Sum of Products: $\;A'B'C' + A'B'C + ABC'$ How would you simplify the following sum of products expression using algebraic manipulations in boolean algebra? $$A'B'C' + A'B'C + ABC'$$
Essentially, all that's involved here is using the distributive law (DL), once. Distributive Law, multiplication over addition: $$PQ + PR = P(Q + R)\tag{DL}$$ In your expression, in the first two terms, put $P = A'B'$: We also use the identity $$\;P + P' = 1\tag{+ID}$$ $$\begin{align} A'B'C' + A'B'C + ABC' & = A'B'(C' + C) + ABC' \tag{DL}\\ \\ &= A'B'(1) + ABC' \tag{+ ID}\\ \\ & = A'B' + ABC'\end{align}$$
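An exhaustive truth-table check of the simplification (a sketch):

```python
# Sketch: verify A'B'C' + A'B'C + ABC' == A'B' + ABC' over all 8 inputs.
from itertools import product

ok = all(
    ((not a and not b and not c) or (not a and not b and c)
     or (a and b and not c))
    == ((not a and not b) or (a and b and not c))
    for a, b, c in product([False, True], repeat=3)
)
print(ok)   # True
```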
{ "language": "en", "url": "https://math.stackexchange.com/questions/574749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that there is no natural number $n$ such that $3^7$ is the largest power of $3$ dividing $n!$ Show that there is no natural number $n$ such that $7$ is the largest exponent $a$ for which $3^a$ divides $n!$. After doing some research, I could not understand how to start or what to do to demonstrate this. We have to show $$E_3(n!)\neq7\;\;\forall n\in\mathbb{N},\quad\text{i.e.}\quad\left[\frac{n}{3} \right]+\left[\frac{n}{3^2} \right]+\left[\frac{n}{3^3} \right]+\dots\neq7.$$ I do not know where to begin, or what to do to solve it.
Hint: What is the smallest value $n_1$ such that $3^7\mid (n_1)!$? What is the largest value $n_0$ such that $3^7\nmid (n_0)!$? What is the largest exponent $k$ such that $3^k\mid (n_1)!$?
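A small table of $E_3(n!)$, computed with Legendre's formula (a sketch), makes these hints concrete: the exponent jumps from $6$ straight to $8$.

```python
# Sketch: E_3(n!) = [n/3] + [n/9] + [n/27] + ... near the jump.
def e3(n):
    count, p = 0, 3
    while p <= n:
        count += n // p
        p *= 3
    return count

for n in range(15, 20):
    print(n, e3(n))   # 17 -> 6, 18 -> 8: the value 7 is skipped
```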
{ "language": "en", "url": "https://math.stackexchange.com/questions/574898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Asymptotics of logarithms of functions If I know that $\lim\limits_{x\to \infty} \dfrac{f(x)}{g(x)}=1$, does it follow that $\lim\limits_{x\to\infty} \dfrac{\log f(x)}{\log g(x)}=1$ as well? I see that this definitely doesn't hold for $\dfrac{e^{f(x)}}{e^{g(x)}}$ (take $f(x)=x+1$ and $g(x)=x$), but I'm not sure how to handle the other direction.
It does not follow. Take the example of $f(x)=e^{-x}+1$ and $g(x)=1$. Then $$ \lim_{x \to \infty}\frac{f(x)}{g(x)}= \lim_{x \to \infty} \frac{e^{-x}+1}{1}=1 $$ However, $$ \lim_{x \to \infty} \frac{\log f(x)}{\log g(x)}=\lim_{x \to \infty} \frac{\log(e^{-x}+1)}{\log 1} $$ does not exist, since the denominator is identically $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/575008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Find a generating function for $a_r = r^3$ What is the generating function for $a_r = r^3$? I computed an answer, just wanted to double-check it.
Here is how you advance. Assume $$ F(x) = \sum_{r=0}^{\infty} a_r x^r \implies F(x)=\sum_{r=0}^{\infty} r^3 x^r $$ $$ \implies F(x)= (xD)(xD)(xD)\sum_{r=0}^{\infty} x^r = (xD)^3 \frac{1}{1-x}, $$ where $D=\frac{d}{dx}$. Can you finish it now? Added Here is the final answer $$ F(x)={\frac {x \left( 1+4\,x+{x}^{2} \right) }{ \left( 1-x \right) ^{4}}}. $$
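A verification of the closed form with SymPy (a sketch):

```python
# Sketch: the Taylor coefficients of the closed form are r^3.
import sympy as sp

x = sp.symbols('x')
F = x * (1 + 4 * x + x**2) / (1 - x)**4
print(sp.series(F, x, 0, 7))
# x + 8*x**2 + 27*x**3 + 64*x**4 + 125*x**5 + 216*x**6 + O(x**7)
```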
{ "language": "en", "url": "https://math.stackexchange.com/questions/575090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Continuity of a function defined by an integral Ok, here's my question: Let $f(x,y)$ be defined and continuous on $a \le x \le b, c \le y\le d$, and let $F(x)$ be defined by the integral $$\int_c^d f(x,y)dy.$$ Prove that $F(x)$ is continuous on $[a,b]$. I think I want to show that since $f(x,y)$ is continuous on $[a,b]$, I can use proof by contradiction to get $F'(x)$ continuous on $[a,b]$, which would then imply that $F(x)$ is continuous. But how do I go about setting this up? Any hints would be great. Thank you in advance. Also, this is my first attempt to format everything properly, so I'm sorry if this didn't post properly.
This is quite relevant to Rudin 10.1 from Real Analysis, on page 246. The function $f(x,y)$ is continuous on the compact set $[a,b]\times[c,d]$, and therefore uniformly continuous: given $\epsilon>0$ there is a $\delta>0$ such that $|f(x,y)-f(x_0,y)|<\frac{\epsilon}{d-c}$ whenever $|x-x_0|<\delta$, for every $y$. Then $$|F(x)-F(x_0)|\le\int_c^d|f(x,y)-f(x_0,y)|\,dy<\epsilon,$$ so the integrated difference can indeed be made arbitrarily small. In other words, uniform continuity of $f(x,y)$ implies continuity of $F(x)$. Great question!
{ "language": "en", "url": "https://math.stackexchange.com/questions/575253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Proofs with Induction Imply Proofs Without Induction? Assume we can prove $\forall x P(x)$ in first order Peano Arithmetic (PA) using induction and modus ponens. Does this mean we can prove $\forall x P(x)$ from the other axioms of PA without using induction? Given the induction axiom $(P(0) \land \forall x(P(x) \rightarrow P(Sx))) \rightarrow \forall x P(x)$ we must first prove $P(0) \land \forall x(P(x) \rightarrow P(Sx))$ using the other axioms of PA before we can deduce $\forall x P(x)$. This can be converted to $P(0) \land \forall x( \neg P(x) \lor P(Sx))$. We better not be able to prove $\exists x \neg P(x)$ from the other axioms so this reduces to $P(0) \land \forall x P(Sx)$. It seems reasonable if we can prove $P(0) \land \forall x P(Sx)$ from the other axioms without using induction we can prove $\forall x P(x)$ without induction.
The part: "This can be converted to $P(0) \land \forall x( \neg P(x) \lor P(Sx))$." is correct. The part: "We better not be able to prove $\exists x \neg P(x)$ from the other axioms so this reduces to $P(0) \land \forall x P(Sx)$." is not. In general $\forall x(A(x)\lor B(x))$ is not equivalent to $\forall x A(x) \lor \forall x B(x)$. This shows that your argument is not correct. One can also show that what it argues for is not true: It is easy to construct models of the rest of the axioms of first-order PA in which a sentence easily proved in first-order PA is false. The details depend on the exact formulation of first-order PA. For the version you linked to, note that the non-negative reals (or non-negative rationals, or non-negative numbers of the form $\frac{n}{2}$) with the usual interpretation of successor, addition, and multiplication form a model of the rest of the axioms if we throw the induction scheme away. The sentence $\exists x(x+x=S0)$ is true in all these models, but easily refutable in first-order PA.
{ "language": "en", "url": "https://math.stackexchange.com/questions/575314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Factorial lower bound: $n! \ge {\left(\frac n2\right)}^{\frac n2}$ A professor in class gave the following lower bound for the factorial $$ n! \ge {\left(\frac n2\right)}^{\frac n2} $$ but I don't know how he came up with this formula. The upper bound of $n^n$ was quite easy to understand. It makes sense. Can anyone explain why the formula above is the lower bound? Any help is appreciated.
Suppose first that $n$ is even, say $n=2m$. Then $$n!=\underbrace{(2m)(2m-1)\ldots(m+1)}_{m\text{ factors}}m!\ge(2m)(2m-1)\ldots(m+1)>m^m=\left(\frac{n}2\right)^{n/2}\;.$$ Now suppose that $n=2m+1$. Then $$n!=\underbrace{(2m+1)(2m)\ldots(m+1)}_{m+1\text{ factors}}m!\ge(m+1)^{m+1}>\left(\frac{n}2\right)^{n/2}\;.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/575389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Getting angles for rotating $3$D vector to point in direction of another $3$D vector I've been trying to solve this in Mathematica for $2$ hours, but got the wrong result. I have a vector, in my case $\{0, 0, -1\}$. I want a function that, given a different vector, gives me angles DX and DY, so if I rotate the original vector by an angle of DX around the X axis, and then rotate it by an angle of DY around the Y axis, I'll get a vector with the same direction as the given vector. (I want this so I could input angles to SolidWorks to rotate a part so it will satisfy a constraint I defined in Mathematica.)
So you need a 3×3 rotation matrix $E$ such that $$ E\,\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix} = -\hat{k} $$ This rotation matrix consists of two elementary rotations $$ \begin{aligned} E & = {\rm Rot}(\hat{i},\varphi_x){\rm Rot}(\hat{j},\varphi_y) \\ & = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi_x & -\sin\varphi_x \\ 0 & \sin\varphi_x & \cos\varphi_x \end{pmatrix} \begin{pmatrix} \cos\varphi_y &0 & \sin\varphi_y \\ 0 & 1 & 0 \\ - \sin\varphi_y & 0 & \cos\varphi_y \end{pmatrix} \end{aligned} $$ This is solved with $$ \varphi_x = - \tan^{-1} \left( \frac{y}{\sqrt{x^2+z^2}} \right) \\ \varphi_y = \pi - \tan^{-1} \left( \frac{x}{z} \right) $$ if $\sqrt{x^2+y^2+z^2}=1$ is true. Verification use $(x,y,z) = (\frac{1}{2}, \frac{3}{4}, \frac{\sqrt{3}}{4}) = (0.5, 0.75, 0.4330) $ to get $$ \varphi_x = - \tan^{-1} \left( \frac{\frac{3}{4}}{\sqrt{\left(\frac{1}{2}\right)^2+\left(\frac{\sqrt{3}}{4}\right)^2}} \right) = -0.8481 \\ \varphi_y = \pi - \tan^{-1} \left( \frac{\frac{1}{2}}{\frac{\sqrt{3}}{4}} \right) = 2.2845$$ With a rotation matrix $$ E = \begin{pmatrix} -0.6547 & 0 & 0.7559 \\ -0.5669 & 0.6614 & -0.4910 \\ -0.5 & -0.75 & -0.4330 \end{pmatrix} $$ and $$ \begin{pmatrix} -0.6547 & 0 & 0.7559 \\ -0.5669 & 0.6614 & -0.4910 \\ -0.5 & -0.75 & -0.4330 \end{pmatrix} \begin{pmatrix} 0.5 \\ 0.75 \\ 0.4330 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ -1\end{pmatrix} $$ NOTICE: Some combinations of $(x,y,z)$ will not yield the correct result because it won't be possible with the specific rotation sequence used.
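A quick numerical check of the worked example (a sketch, not part of the original answer; it substitutes `arctan2` for plain $\tan^{-1}$ as an assumption, to handle quadrants):

```python
import numpy as np

def rot_x(a):  # Rot(i, a), same convention as above
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # Rot(j, a)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# the unit vector from the verification above
x, y, z = 0.5, 0.75, np.sqrt(3) / 4
phi_x = -np.arctan2(y, np.hypot(x, z))  # -0.8481
phi_y = np.pi - np.arctan2(x, z)        # 2.2845

E = rot_x(phi_x) @ rot_y(phi_y)
print(np.round(E @ np.array([x, y, z]), 6))  # expect [0, 0, -1]
```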
{ "language": "en", "url": "https://math.stackexchange.com/questions/575472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Let $x$ and $y$ be two vectors in $\mathbb{R}^d$ with $\mid x \mid = \mid y \mid$. Find a unit vector $u$ such that $P_u x = y$ Let $u \in \mathbb{R}^d = V$ be a unit vector and set $W = \text{span}(u) ^{\bot}$ (with respect to the dot product). The reflector across W is $P_u = I_d - 2uu^T$. Let $x$ and $y$ be two vectors in $\mathbb{R}^d$ with $\mid x \mid = \mid y \mid$. Find a unit vector $u$ such that $P_u x = y$. I know that $P_u$ is orthogonal, $v \in V \Rightarrow v = cu + w$ unique $c \in \mathbb{R}$ and $w\in W$, and $Pv = -cu + w$. If $P_u x = -cu + w = y$, then $u = \frac{-1}{c} (y - w)$. I would need to show that this is a unit vector, but this is where I am stuck. I'm not sure if this is the route I want to take or if there is something else that I am missing.
If $x=y$ choose any non-zero vector perpendicular to $x$. Otherwise $u:=(x-y)/\|x-y\|$. In this case $$x\mapsto x-2\frac{\langle x,x-y\rangle}{\|x-y\|^2}(x-y)=\frac{\|x\|^2x-2\langle x,y\rangle x+\|y\|^2x-(2\|x\|^2x-2\|x\|^2y-2\langle x,y\rangle x+2\langle x,y\rangle y)}{\|x\|^2-2\langle x,y\rangle+\|y\|^2}.$$ As $\|x\|=\|y\|$ this dramatically simplifies to $$\frac{2\|x\|^2y-2\langle x,y\rangle y}{2\|x\|^2-2\langle x,y\rangle}=y.$$
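A small numerical illustration of this reflection (a sketch with random vectors, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
y = rng.standard_normal(4)
y *= np.linalg.norm(x) / np.linalg.norm(y)  # enforce |x| = |y|

u = (x - y) / np.linalg.norm(x - y)         # assumes x != y
P = np.eye(4) - 2 * np.outer(u, u)          # reflector P_u = I - 2 u u^T
print(np.allclose(P @ x, y))                # True
```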
{ "language": "en", "url": "https://math.stackexchange.com/questions/575554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding a $3 \times 3$ Matrix that maps points in $\mathbb{R}^3$ onto the a given line Give a $3 \times 3$ matrix that maps all points in $\mathbb{R}^3$ onto the line $[x,y,z] = t[a,b,c]$ and does not move the points that are on that line. Prove your matrix has these properties. Can someone verify if I am doing this correctly? I first find a matrix that takes the standard basis to a basis that has $\begin{bmatrix}a \\ b \\ c \end{bmatrix}$ in it: $\begin{bmatrix} a & 0 & 0 \\ b & 1 & 0 \\ c & 0 & 1 \end{bmatrix} = A$ now I choose a matrix that projects $\mathbb{R^3}$ onto the given line: $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = B$ Now I need to invert $A$ to go back to the standard basis and so $A^{-1} B A$ will project all of $\mathbb{R}^3$ onto the given line. Multiplying that out: $\begin{bmatrix}1/a & 0 & 0\\-b/a & 1 & 0 \\-c/a & 0 & 1\end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} a & 0 & 0 \\ b & 1 & 0 \\ c & 0 & 1 \end{bmatrix}$ $\begin{bmatrix} 1 & 0 & 0\\-b & 0 & 0 \\-c & 0 & 0 \end{bmatrix}$ To show that this maps points in $\mathbb{R}^3$ to $[x, y, z] = t[a, b, c]$ $\begin{bmatrix} 1 & 0 & 0\\-b & 0 & 0 \\-c & 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y\\z\end{bmatrix} = \begin{bmatrix}x \\ -bx \\-cx \end{bmatrix} = x\begin{bmatrix}1\\-b\\-c\end{bmatrix}$ I apologize if some of my explanations don't make sense. I am trying to solve this the way my tutor showed me but I may have misunderstood some of his explanations.
Since the map sends every vector onto the line spanned by a single vector, it must have rank 1; in particular the following ansatz will work: $\left[\begin{array}{ccc} \gamma a & \alpha a & \beta a\\ \gamma b & \alpha b & \beta b\\ \gamma c & \alpha c & \beta c \end{array}\right]\left[\begin{array}{c} x\\ y\\ z \end{array}\right]=(\gamma x+\alpha y+\beta z)\left[\begin{array}{c} a\\ b\\ c \end{array}\right] $ Now you need to choose $\gamma,\alpha,\beta$ such that $\left[\begin{array}{ccc} \gamma a & \alpha a & \beta a\\ \gamma b & \alpha b & \beta b\\ \gamma c & \alpha c & \beta c \end{array}\right]\left[\begin{array}{c} a\\ b\\ c \end{array}\right]=\left[\begin{array}{c} a\\ b\\ c \end{array}\right], $ which should be straightforward to find.
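For the record, one concrete choice (not spelled out in the answer) is the orthogonal projection: the requirement above reads $\gamma a+\alpha b+\beta c=1$, and $$(\gamma,\alpha,\beta)=\frac{(a,b,c)}{a^2+b^2+c^2}$$ satisfies it, giving the familiar projection matrix $\frac{vv^{T}}{v^{T}v}$ with $v=(a,b,c)^{T}$.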
{ "language": "en", "url": "https://math.stackexchange.com/questions/575647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How many permutations of $\{1,2,3,4,5\}$ leave at least two elements fixed? How many permutations $f: \{1,2,3,4,5\} \rightarrow \{1,2,3,4,5\}$ have the property that $f(i)=i$ for at least two values of $i$? I'm just struggling with this inclusion/exclusion question. I figured the best way would be to subtract (the cases where $f(i)=i$ holds for $1$ or $0$ values) from $5!$ ..not sure where to go from there.
There is $1$ permutation that fixes $5$ elements. There are no permutations that fix exactly $4$ elements. There are ${5 \choose 3} = 10$ permutations that fix exactly $3$ elements (the other two are switched around). There are $2 {5 \choose 2} = 20$ permutations that fix exactly $2$ elements, because there are ${5 \choose 2}$ ways to pick the elements that are fixed and $2$ ways to derange the remaining $3$ elements. So the final answer is $31$.
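A brute-force check of the count (a quick script, not part of the original answer):

```python
from itertools import permutations

count = sum(
    1
    for p in permutations(range(5))
    if sum(p[i] == i for i in range(5)) >= 2
)
print(count)  # 31
```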
{ "language": "en", "url": "https://math.stackexchange.com/questions/575731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are there any non-constructive proofs for which an example was never constructed? By non-constructive I mean the following: A mathematical object is proven to exist yet it is not constructed in the proof. Are there any examples of proofs like this where the mathematical object was never constructed? (by which i mean even after the existence of it was proven)
On the same line of thought but, imo, more striking, is the use of Zermelo's Theorem to prove there must exist a well-ordering of the reals (and thus, that golden dream of having a grip on that elusive first positive real number seems to be closer...). Yet no such ordering on $\;\Bbb R\;$, as far as I am aware, is known. Of course, Zermelo's Theorem, Zorn's Lemma and The Axiom of Choice are all logically equivalent in ZFC.
{ "language": "en", "url": "https://math.stackexchange.com/questions/575835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 1 }
Laplace transform of the following function Find the Laplace transform of the function: $$f(t) =\begin{cases} t^2, & 0<t<1 \\ 2\cos t+2, & t>1 \\ \end{cases}$$ My attempt: $$L\{f(t)\}=\int_{0}^{1}e^{-st} \ t^2 \ \text{d}t+\int_{1}^{\infty}e^{-st} \ (2\cos t+2) \ \text{d}t$$ Now, $$\int_{0}^{1}e^{-st} \ t^2 \ \text{d}t=\frac{-1}{s}e^{-s}-\frac{2}{s^2}e^{-s}-\frac{2}{s^3}e^{-s}+\frac{2}{s^3}$$ And $$\int_{1}^{\infty}e^{-st} \ (2\cos t+2) \ \text{d}t$$ But the integration by parts is not terminating.
$$\begin{align} \int_1^{\infty} dt \, e^{-s t} \cos{t} &= \Re{\left [\int_1^{\infty} dt \, e^{-(s-i) t} \right ]}\\ &= \Re{\left [\frac{e^{-(s-i)}}{s-i} \right ]}\\ &= e^{-s} \Re{\left [(\cos{1}+i \sin{1}) \frac{s+i}{s^2+1}\right ]} \\ &= \frac{s \cos{1}-\sin{1}}{s^2+1} e^{-s} \end{align}$$ $$ \int_1^{\infty} dt \, e^{-s t} = \frac{e^{-s}}{s}$$ Multiply by $2$, add, done.
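Putting the pieces together with the $\int_0^1 e^{-st}\,t^2\,dt$ computation from the question (this assembly is not in the original answer; it just combines the displayed integrals): $$\mathcal L\{f\}(s)=\frac{2}{s^3}-e^{-s}\left(\frac{1}{s}+\frac{2}{s^2}+\frac{2}{s^3}\right)+\frac{2e^{-s}\,(s\cos 1-\sin 1)}{s^2+1}+\frac{2e^{-s}}{s}.$$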
{ "language": "en", "url": "https://math.stackexchange.com/questions/575905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Rational translates of the unit circle cover the plane Is it true that the translations of the unit circle by vectors with both coordinates rational cover the plane? This comes to solving $$ x=a+\cos \theta, \ y=b+\sin \theta$$ with unknowns $a,b$ rational and $\theta$ between $0,2\pi$. I couldn't find a positive answer or an immediate contradiction. (I typed the question from a phone so if I made new tags by error, please correct them)
No, the above statement is false, because it would imply that every real number is algebraic over $\mathbb{Q}$: if a point $(x,0)$ is covered, then $(x-a)^2+b^2=1$ for some rationals $a,b$, so $x=a\pm\sqrt{1-b^2}$ is algebraic, which fails when $x$ is transcendental.
{ "language": "en", "url": "https://math.stackexchange.com/questions/575986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$X$ be a normed space and assume that $E \subset X$ such that $\operatorname{int}(E) \neq\varnothing$ Let $X$ be a normed space and assume that $E \subset X$ is such that $\operatorname{int}(E) \neq \varnothing$; then show that $E$ spans $X$. I am trying it in the following way: with the norm $\|\cdot\|:X \rightarrow \mathbb{R}^{+} \cup \{0\}$, I am trying to write $$X = \bigcup_{\alpha \in \mathbb{R}^{+} \cup \{0\}}\|\cdot\|^{-1}(\alpha)$$ and then somehow embed a ball around an interior point of $E$ so that the ball lies entirely inside this union. These are the things coming to mind; please help me find a solution to this problem. I don't know whether this question is easy or tricky...
"$E$ has a non-empty interior" means that there is $x_0\in E$ and $r\gt 0$ such that $B(x_0,r)\subset E$. We thus have $B(0,r)\subset\operatorname{span}(E)$. Since $E$ is a subspace, it's invariant by multiplication by scalars, so...
{ "language": "en", "url": "https://math.stackexchange.com/questions/576074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there any nonconstant function that grows (at infinity) slower than all iterations of the (natural) logarithm? Is there any nonconstant function that grows at infinity slower than all iterations of the (natural) logarithm?
In fact there are functions that go to $\infty$ more slowly than any function you can write down a formula for. For positive integers $n$ let $f(BB(n)) = n$ where $BB$ is the Busy Beaver function. Extend to $[1,\infty)$ by interpolation. EDIT: Stated more technically, a "function you can write down a formula for" is a recursive function: it can be computed by a Turing machine. $BB(n)$ is not recursive, and grows faster than any recursive function. If $g(n)$ is a recursive (integer-valued, for simplicity), nondecreasing function with $\lim_{n \to \infty} g(n) = \infty$, then there is a recursive function $h$ such that for all positive integers $n$, $g(h(n)) > n^2$. Namely, here is an algorithm for calculating $h(n)$ for any positive integer $n$: start at $y = 1$ and increment $y$ until $g(y) > n^2$, then output $y$. Now since $BB$ grows faster than any recursive function, for sufficiently large $n$ (say $n \ge N$) we have $BB(n) > h(n+1)$. For any integer $x \ge h(N)$, there is $n \ge N$ such that $h(n) \le x < h(n+1)$, and then (since $f$ and $g$ are nondecreasing) $$f(x) \le f(h(n+1)) \le f(BB(n)) = n < \sqrt{g(h(n))} \le \sqrt{g(x)}$$ and thus $$\dfrac{f(x)}{g(x)} \le \dfrac{1}{\sqrt{g(x)}} \to 0\ \text{as} \ x \to \infty $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/576130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 4 }
Beautiful Mathematical Images My Maths department is re-branding itself, and we've been asked to find suitable images for the departmental sign. Do you have a favourite mathematical image that could be used for the background of an A1-sized sign?
Penrose tiling, an example: http://en.wikipedia.org/wiki/Penrose_tiling Or any basic first-year theorems (or more advanced ones) like MVT, Taylor, BW, cardinalities, ordinals etc., in nice fonts or even as sculpture. Complex but symmetric 3D objects like the Wolfram Alpha star, for example; see also George Hart.
{ "language": "en", "url": "https://math.stackexchange.com/questions/576306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 0 }
find all integers x such that $4x^2 - 1$ is prime I tried factoring it and got $(2x+1)(2x-1)$; however, I do not know how to proceed for all integers from here.
Well, you're trying to find integers $x$ for which $4x^2-1$ is prime, but you just factored it! If $4x^2-1$ is going to be prime, your factorization has to be a trivial factorization. We can always factor primes as $p=p\cdot 1$, so maybe this will lead you to the correct answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/576356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Limit $\lim_{n\to \infty} n(\frac{1}{(n+1)^2}+\frac{1}{(n+2)^2}+\cdots+\frac{1}{(2n)^2})$ I need some help finding the limit of the following sequence: $$\lim_{n\to \infty} a_n=n\left(\frac{1}{(n+1)^2}+\frac{1}{(n+2)^2}+\cdots+\frac{1}{(2n)^2}\right)$$ I can tell it is bounded by $\frac{1}{4}$ from below and decreasing from some point which tells me it is convergent. I can't get anything else though. So far we did not have integrals and we just started series, so this should be solved without using either. Some hints would be very welcome :) Thanks
$$ \lim_{n\to\infty} n\sum_{k=n+1}^{2n} \frac{1}{k^2}=\lim_{n\to\infty} \frac{1}{n}\sum_{k=n+1}^{2n}\frac{1}{\left(\frac{k}{n}\right)^2}=\int_1^2 \frac{1}{x^2} \mathrm dx=\left[-\frac{1}{x}\right]_1^2=\frac{1}{2} $$
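A quick numerical sanity check of this limit (not part of the original answer):

```python
n = 10**5
print(n * sum(1 / k**2 for k in range(n + 1, 2 * n + 1)))  # ~ 0.5
```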
{ "language": "en", "url": "https://math.stackexchange.com/questions/576458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Solution to the limit of a sequence I'm struggling with the following problem: $$\lim_{n\to \infty}(n(\sqrt{n^2+3}-\sqrt{n^2-1})), n \in \mathbb{N}$$ Wolfram Alpha says the answer is 2, but I don't know how to calculate it. Any help is appreciated.
For the limit: We take advantage of obtaining a difference of squares. We have a factor of the form $a - b$, so we multiply it by $\dfrac{a+b}{a+b}$ to get $\dfrac{a^2 - b^2}{a+b}.$ Here, we multiply by $$\dfrac{\sqrt{n^2+3}+ \sqrt{n^2 - 1}}{\sqrt{n^2+3}+ \sqrt{n^2 - 1}}$$ $$n(\sqrt{n^2+3}-\sqrt{n^2-1})\cdot\dfrac{\sqrt{n^2+3}+ \sqrt{n^2 - 1}}{\sqrt{n^2+3}+ \sqrt{n^2 - 1}} = \dfrac{n[n^2 + 3 - (n^2 - 1)]}{\sqrt{n^2+3}+ \sqrt{n^2 - 1}}$$ Now simplify and evaluate.
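Carrying the hint one step further (the evaluation left to the reader): $$\lim_{n\to \infty}\frac{4n}{\sqrt{n^2+3}+\sqrt{n^2-1}}=\lim_{n\to \infty}\frac{4}{\sqrt{1+3/n^2}+\sqrt{1-1/n^2}}=\frac{4}{2}=2.$$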
{ "language": "en", "url": "https://math.stackexchange.com/questions/576550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Reed Solomon Code Working in GF(32). Polynomial is $x^5+x^2+1$. $\alpha$ is primitive element. $t = 3$. RS code. $n = 31$, $k = 25$. I have obtained generator polynomial $x^6+\alpha^{10}x^5+\alpha^{30}x^4+\dots$ How do I obtain generator matrix? I believe I write down coefficients in increasing order of $x$ over $31$ columns and $25$ rows and keep shifting since it is cyclic. Is this correct?
The answer given by Sudarsan is one possible generator matrix. If $${\bf d} = (d_0, d_1, \ldots , d_{k-1}) \longleftrightarrow d(x) = d_0 + d_1x + \cdots + d_{k-1}x^{k-1}$$ is the data polynomial, then the codeword polynomial corresponding to the codeword ${\bf c} = {\bf d}G$ (where $G$ is the generator matrix in Sudarsan's answer) is $d(x)g(x)$. Thus, using this generator matrix gives us a nonsystematic Reed-Solomon code meaning that $\bf d$ is not a subvector of ${\bf c}$. The generator matrix $\hat{G}$ of a systematic cyclic code, in which the codewords are of the form $${\bf c} = {\bf d}\hat{G} = ({\bf r}, {\bf d}) = (r_0, r_1, \ldots, r_{n-k-1}, d_0, d_1, \ldots, d_{k-1}),$$ is obtained as follows. We use the fact that $g_{n-k} = 1$. * *The first row is the same as in Sudarsan's answer. *The second row is obtained by first shifting the first row to the right by one place (as in Sudarsan's answer) but then subtracting $g_{n-k-1}$ times the first row from the shifted second row. What this does is modify the second row to put a $0$ below the $g_{n-k}$ in the first row. The first two rows thus look like $$\left[ \begin{array}{} g_0 & g_1 & \dots & g_{n-k-1} & 1 &0 & \dots & 0 & 0\\ -g_0g_{n-k-1} & g_0-g_1g_{n-k-1} & \dots & g_{n-k-2}-g_{n-k-1}^2& 0 & 1 &\dots & 0 & 0 \end{array}\right]\\ {\Large \Downarrow}\\ \left[ \begin{array}{} p_{0,0} & p_{0,1} & \dots & p_{0,n-k-1} & 1 &0 & \dots & 0 & 0\\ p_{1,0} & p_{1,1} & \dots & p_{1, n-k-1} & 0 & 1 &\dots & 0 & 0 \end{array}\right]$$ *The third row is obtained first shifting the newly constructed second row to the right by one place but then subtracting $p_{1,n-k-1}$ times the first row from so as to put a $0$ below the $1$ in the first row. Note that this forms a $3\times 3$ identity matrix on the right. *Lather, rinse, repeat, till you get $\hat{G} = [P \mid I]$ where $I$ is the $k\times k$ identity matrix and $P$ is the $k\times (n-k)$ matrix formed on the left as the shifting and subtracting is done. The first row of $\hat{G}$ is, of course, the same as the first row of $G$ in Sudarsan's answer. The rows of $\hat{G}$ are, of course, codewords, and the corresponding codeword polynomials are $$g(x),\\x^{n-k+1} - \left(x^{n-k+1} \mod g(x)\right), \\ x^{n-k+2} - \left(x^{n-k+2} \mod g(x)\right),\\ \vdots \\x^{n-1} - \left(x^{n-1} \mod g(x)\right),$$ The codeword polynomial corresponding to ${\bf d}\hat{G}$ is $x^{n-k}d(x) - \left(x^{n-k}d(x) \mod g(x)\right)$ where the second term on the right is the residue of $x^{n-k}d(x)$, a polynomial of degree $n-1$, modulo the generator polynomial $g(x)$. This residue is of degree $n-k-1$ or less, while $x^{n-k}d(x)$ has no nonzero coefficients of degree smaller than $n-k$, reflecting the fact that the codeword polynomial $x^{n-k}d(x) - \left(x^{n-k}d(x) \mod g(x)\right)$ has all the data symbols "in the clear" in the high-order coefficients followed by the parity symbols; that is, we have a systematic code.
{ "language": "en", "url": "https://math.stackexchange.com/questions/576612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find the eigenvectors and eigenvalues of A geometrically $$A=\begin{bmatrix}1 & 1 \\ 1 & 1\end{bmatrix}=\begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2\end{bmatrix} \begin{bmatrix}1 & 0 \\ 0 & 2\end{bmatrix} \begin{bmatrix}2 & 0 \\ 0 & 1\end{bmatrix}.$$ Scale by 2 in the $x$-direction, then scale by 2 in the $y$-direction, then project onto the line $y = x$. I am confused by this question since it is not a textbook-like question, and I don't know why $A$ equals a product of 3 matrices. You could ignore the word geometrically for the sake of simplicity. Thanks
The geometry is what makes things easier (for me). Without the geometry, it would be a mechanical computation which I would not like doing, and might get wrong. Note that the vector $(1,1)$ gets scaled by our two scalings to $(2,2)$, and projection on $y=x$ leaves it at $(2,2)$. So the vector $(1,1)$ is an eigenvector with eigenvalue $2$. Now consider the vector $(-1,1)$. The two scalings send it to $(-2,2)$. Projection onto $y=x$ gives us $(0,0)$. So $(-1,1)$ is an eigenvector with eigenvalue $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/576663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Showing a numerical series converges How could I show that the following series converges? $$\sum_{n = 1}^{\infty} \frac{\sqrt{n} \log n}{n^2 + 3n + 1}$$ I tried the ratio and nth-root tests and both were inconclusive. I was thinking there might be a way to use the limit comparison test, but I'm not sure. Any hints?
Hint: $$\sum_{n = 1}^{\infty} \frac{\sqrt{n} \log n}{n^2 + 3n + 1} < \sum_{n = 1}^{\infty} \frac{\log n}{n^{3/2}}$$ Then by the integral test, since $\int_{1}^{\infty}\frac{\log x}{x^{3/2}}\,dx=4$ (converges), the given series converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/576743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Proof of Heron's Formula for the area of a triangle Let $a,b,c$ be the lengths of the sides of a triangle. The area is given by Heron's formula: $$A = \sqrt{p(p-a)(p-b)(p-c)},$$ where $p$ is half the perimeter, or $p=\frac{a+b+c}{2}$. Could you please provide the proof of this formula? Thank you in advance.
It is actually quite simple. Especially if you allow using trigonometry, which, judging by the tags, you do. If $\alpha$ is the angle between sides $a$ and $b$, then it is known that $$ \begin{align} A &= \frac{ab\sin \alpha}{2},\\ A^2 &= \frac{a^2b^2\sin^2 \alpha}{4}. \end{align} $$ Now, $\sin^2 \alpha = 1 - \cos^2 \alpha$, and you can find $\cos \alpha$ from the law of cosines: $$ c^2 = a^2 + b^2 - 2ab \cos \alpha. $$ You just find $\cos \alpha$ from this equality, plug it into the formula for $A$ above, and Heron's formula pops up as a result.
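For completeness, here is the algebra the answer alludes to: $$\begin{align} A^2 &= \frac{a^2b^2}{4}\left(1-\left(\frac{a^2+b^2-c^2}{2ab}\right)^2\right) =\frac{4a^2b^2-(a^2+b^2-c^2)^2}{16}\\ &=\frac{\left(2ab+a^2+b^2-c^2\right)\left(2ab-a^2-b^2+c^2\right)}{16} =\frac{\left((a+b)^2-c^2\right)\left(c^2-(a-b)^2\right)}{16}\\ &=\frac{(a+b+c)(a+b-c)(c-a+b)(c+a-b)}{16}=p(p-a)(p-b)(p-c). \end{align}$$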
{ "language": "en", "url": "https://math.stackexchange.com/questions/576831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
Maximum of a sequence $\left({n\choose k} \lambda^k\right)_k$ Is there an expression for the maximum of a sequence $\left({n\choose k} \lambda^k\right)_k$ (i.e. $\max_{k\in\{0,\ldots,n\}}{n\choose k}\lambda^k)$ in terms of elementary functions of $n$ and $\lambda$? This seems like a simple calculus problem but my usual method, finding the zero of the derivative, doesn't work here since $n \choose k$ is not differentiable.
In the discrete case, it is often useful to look at the ratio of successive terms. Here, let $a_k = \binom{n}k \lambda^k$. Then: $$\frac{a_{k+1}}{a_k} = \lambda\frac{n-k}{k+1}$$ As $k$ runs from $0$ to $n-1$, the numerator decreases and the denominator increases, so this ratio decreases steadily from $\lambda n$ towards $0$. Once it drops below $1$ the terms start decreasing, so the maximum is attained at the smallest $k$ with $$\lambda(n-k) \le k+1 \implies k \ge \frac{n\lambda-1}{\lambda+1},$$ that is, at $k=\left\lceil\frac{n\lambda-1}{\lambda+1}\right\rceil$ (clamped to $\{0,\dots,n\}$; at exact equality there is a tie between two consecutive terms).
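A brute-force comparison of this formula against direct maximization (a sketch, not part of the original answer):

```python
from math import ceil, comb

n, lam = 30, 0.7
brute = max(range(n + 1), key=lambda k: comb(n, k) * lam**k)
formula = min(n, max(0, ceil((n * lam - 1) / (lam + 1))))
print(brute, formula)  # 12 12
```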
{ "language": "en", "url": "https://math.stackexchange.com/questions/576900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the group $\Gamma$ such that $\mathbb{H}/\Gamma$ is a genus-n torus We know that the universal cover of genus-n torus is a unit disk ($n\ge2$), which is conformal to upper half plane $\mathbb{H}$, with automorphism group $SL(2,\mathbb{R})$. Thus the genus-n torus can be identified with $\mathbb{H}/\Gamma$. with $\Gamma$ isomorphic to the fundamental group of genus-n torus. I want to know the exact form how $\Gamma$ embedded in $SL(2,\mathbb{R})$.
First, the group of orientation-preserving isometries of the upper half-plane is not $SL(2,R)$ but rather $PSL(2,R)$. Second, this surface is not called a genus-$n$ torus but rather a genus-$n$ surface; the term torus is generally reserved for the genus-$1$ case. So what you are asking for is an explicit Fuchsian group of a closed surface of genus $n$. These are not that easy to exhibit explicitly. One approach is to use congruence subgroups of suitable quaternion algebras, as developed here. Sometimes it is best to "construct" these groups geometrically using fundamental domains in the upper half-plane. This shows the "existence" of the groups without exhibiting them explicitly. I also recommend the book by S. Katok on Fuchsian groups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/576988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Virtues of Presentation of FO Logic in Kleene's Mathematical Logic I refer to Stephen Cole Kleene, Mathematical Logic (1967 - Dover reprint : 2002). What are the "pedagogical benefits" (if any) of the presentation chosen by Kleene, mixing Natural Deduction and Hilbert-style? Propositional Calculus - at pag.33 he refers back to formulas 1a-10b of Th.2 pag.15, which are the usual Intro- and Elimination- rules for propositional connectives "rewritten" as axiom schemata. Predicate Calculus - at pag.107 he adds two axiom schemata: $\forall x A(x) \rightarrow A(r)$ (the A-schema) and $A(r) \rightarrow \exists x A(x)$ (the E-schema), and two rules - the A-rule: from $C \to A(x)$ to $C \to \forall x A(x)$, and the E-rule: from $A(x) \to C$ to $\exists x A(x) \to C$, with $x$ not free in $C$. Then (Th.21 pag.118) he proves as derived rules the four standard Intro- and Elim- rules for quantifiers.
@Peter Smith wrote: So it is worth noting that e.g. John Corcoran can write "Three Logical Theories" as late as 1969 (Philosophy of Science, Vol. 36, No. 2 (Jun., 1969), pp. 153-177), finding it still novel and necessary to stress the distinctions between different types of logical theory. Here it is https://www.academia.edu/9855795/Three_logical_theories
{ "language": "en", "url": "https://math.stackexchange.com/questions/577072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
On idempotent elements that are contained in center of a ring Let $e$ and $f$ be idempotent elements of a ring $R$. Assume that $e,f$ are contained in the center of $R$. Show that $Re=Rf$ if and only if $e=f$. Please give me a hint to prove it. Thanks in advance.
HINT : show that the sum and product of $R$ induce a sum and product on $Re$. What can you say about $e\in Re$ with respect to multiplication?
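Spelling the hint out, in case it helps (a short completion, using only facts from the problem): $e$ acts as a multiplicative identity on $Re$, since $(re)e=re^2=re$ for every $r\in R$, and similarly $f$ on $Rf$. So if $Re=Rf$, then $e\in Rf$ gives $e=ef$, and $f\in Re$ gives $f=fe$; since $e,f$ are central, $ef=fe$, hence $e=f$.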
{ "language": "en", "url": "https://math.stackexchange.com/questions/577160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Fixed point in plane transformation. Could someone give me an idea how to solve this one? It's problem 9.c from section 1.3.5 of Vladimir Zorich's Mathematical Analysis I: A point $p \in X$ is a fixed point of a mapping $f:X \to X$ if $f(p)=p$. Verify that any composition of a shift, a rotation, and a similarity transformation of the plane has a fixed point, provided the coefficient of the similarity transformation is less than one.
Shift and rotation are just special cases of similarity transformations. A generic similarity can be written e.g. in the following form: $$ \begin{pmatrix}x\\y\end{pmatrix}\mapsto \begin{pmatrix}a&-b\\b&a\end{pmatrix}\cdot \begin{pmatrix}x\\y\end{pmatrix}+ \begin{pmatrix}c\\d\end{pmatrix} $$ With this you can solve the fixed point equation: $$ \begin{pmatrix}x\\y\end{pmatrix}= \begin{pmatrix}a&-b\\b&a\end{pmatrix}\cdot \begin{pmatrix}x\\y\end{pmatrix}+ \begin{pmatrix}c\\d\end{pmatrix} \\ \begin{pmatrix}a-1&-b\\b&a-1\end{pmatrix}\cdot \begin{pmatrix}x\\y\end{pmatrix}= \begin{pmatrix}-c\\-d\end{pmatrix} $$ This linear system of equations has a unique solution if and only if the determinant of the matrix is nonzero, i.e. if $$\begin{vmatrix}a-1&-b\\b&a-1\end{vmatrix}= (a-1)^2+b^2\neq0$$ Now $a$ and $b$ are real numbers, and so is $a-1$. The square of a real number is zero only if the number itself is zero, otherwise it is positive. So the sum of two squares is zero only if both the numbers which are squared are zero. So the only situation where there is no single and unique fixed point is $a=1,b=0$. In this case, the linear part of the transformation is the identity, so the whole transformation is either a pure translation or, if $c=d=0$, the identity transformation. A translation has no fixed points, and under the identity, every point is fixed. Both cases are ruled out by your statement about the coefficient of the similarity transformation. By the way, you can read that coefficient as the square root of the determinant of the linear matrix for the original transformation, i.e. $$\begin{vmatrix}a&-b\\b&a\end{vmatrix}=a^2+b^2\in(0,1)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/577240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Almost sure convergence proof Could someone please explain the proof of $ P(X_n \to X)=1 $ iff, for every $\epsilon>0$, $$ \lim_{n \to \infty}P(\sup_{m \ge n} |X_m -X|>\epsilon) = 0? $$ I'm not able to understand the meaning of the various sets they take during the course of the proof.
Intuitively, the result means that to converge almost everywhere is equivalent to "bound the probability of the $\omega$'s for which $|X_n-X|$ is infinitely often larger than a positive number". Here is a more formal argument. Assume that $X_n\to X$ almost surely and fix $\varepsilon\gt 0$. Define $A_m:=\{|X_m-X|\gt \varepsilon\}$ and $B_n:=\bigcup_{m\geqslant n}A_m$. The sequence $(B_n)$ is non-increasing and $\bigcap_{n\geqslant 1}B_n$ is the set of $\omega$'s for which $\omega\in A_m$ for infinitely many $m$'s. Since this set is contained in $\{\omega : X_n(\omega)\mbox{ doesn't converge to }X(\omega)\}$, a set of measure $0$, continuity from above gives $\mathbb P(B_n)\to\mathbb P\left(\bigcap_{n\geqslant 1}B_n\right)=0$, and we are done. Conversely, assume that for each $\varepsilon\gt 0$, $\mathbb P(\sup_{m\geqslant n}|X_m-X|\gt \varepsilon)\to 0$. Take $\varepsilon=2^{-k}$ for a fixed $k$, and $n_k$ such that $\mathbb P(\sup_{m\geqslant n_k}|X_m-X|\gt 2^{-k})\leqslant 2^{-k}$ (we can assume $(n_k)_k$ increasing). Then by Borel-Cantelli's lemma, $\mathbb P(\limsup_k\{\sup_{m\geqslant n_k}|X_m-X|\gt 2^{-k}\})=0$. This means that there exists $\Omega'\subset\Omega$ of probability $1$ such that, given $\omega\in\Omega'$, there is $k(\omega)$ such that $\sup_{m\geqslant n_k}|X_m-X|\leqslant 2^{-k}$ for $k\geqslant k(\omega)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/577334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Can You Construct a Syndetic Set with an Undefined Density? Let $A \subset \mathbb{N}$. Enumerate $A = \{A_1, A_2,...\}$ such that $A_1 \le A_2 \le ...$. We say that $A$ is syndetic if there exists some $M \geq 0$ such that $A_{i+1} - A_i \le M$ for all $i =1,2,..$ (that is, "the gaps of $A$ are uniformly bounded"). The natural density of $A$, if it exists, is defined to be $$d(A) = \lim_{N \to \infty} \frac{|A \cap \{1,2,..., N\}|}{N} = \lim_{N \to \infty} \frac{N}{A_N}.$$ It is possible that the limit does not exist. The examples of this phenomenon that I've seen all use the same idea. You need to have a set which first contains a lot of elements of $\{1,2,..N\}$, then misses a lot of $\{N, N+1, ..., N'\}$ then has a lot of $\{N' + 1, ..., N''\}$, etc. A common example is given by $$A = [2^3, 2^5] \cup [2^7, 2^9] \cup ... \cup [2^{4k-1}, 2^{4k+1}] \cup ...$$ Such examples cannot be syndetic. In the specific given example the problem is that the gaps $[2^{4k+1}, 2^{4k + 5}]$ are not bounded as $k \to \infty$. So my question is: how can one construct a syndetic set with no natural density (if possible)? Even better: can you construct $A \subset \mathbb{N}$ such that $A$ and $\mathbb{N} \setminus A$ are syndetic and such that $A$ has no density? Thank you very much in advance.
Take the union of the even integers and a subset of odd integers whose density fluctuates (say between 1/4 and 1/8 of odd numbers, to meet the other conditions).
{ "language": "en", "url": "https://math.stackexchange.com/questions/577482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Proofs from the Book - need quick explanation I've been recently reading this amazing book, namely the chapter on Bertrand's postulate - that for every $n\geq1$ there is a prime $p$ such that $n<p\leq2n$. As an intermediate result, they prove that $\prod_{p\leq x}p \le 4^{x-1}$ for any real $x\geq2$, where the product is taken over all primes $p\leq x$ . While proving that, they rely on the inequality $$ \prod_{m+1<p\le2m+1}p\leq\binom{2m+1}{m}, $$ where $m$ is some integer, $p$'s are primes. They explain it by observing that all primes we are interested in are contained in $(2m+1)!$, but not in $m!(m+1)!$. The last part is what I don't understand. I can understand how this principle can be applied to the bound $(2m+1)!/(m+1)! = (m + 2)\ldots(2m+1)$, but why can we safely divide this by $m!$? Thank you!
Not sure if you're overanalysing the last part. If we look at the inequality $$ \prod_{m+1<p\le 2m+1}p\leq\binom{2m+1}{m}, $$ we see that for any prime $p \in (m+1,2m+1]$, we have $p\mid(2m+1)!$ but $p\nmid m!$ and $p \nmid (m+1)!$. ["The last part" that you mention holds simply because $p > m+1$.] So $p$ is indeed a factor of the numerator of $\binom{2m+1}{m}$ but not of its denominator, which proves the inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/577546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Equal balls in metric space Let $x$ and $y$ be points in a metric space and let $B(x,r)$ and $B(y,s)$ be usual open balls. Suppose $B(x,r)=B(y,s)$. Must $x=y$? Must $s=r$? What I got so far is that: $$r \neq s \implies x \neq y$$ but that's it.
Think minimally: Let $X=\{x,y\}$ be a set with two points. Define $d(x,y)=d(y,x)=1$, and $d(x,x)=0=d(y,y)$. Then, $B(x,2)=B(y,3)$, yet $x\ne y$ and $2\ne 3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/577606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Why is $\sin{x} = x -\frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$ for all $x$? I'm pretty convinced that the Taylor series (or better: Maclaurin series) $$\sin{x} = x -\frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$$ is exactly equal to the sine function at $x=0$. I'm also pretty sure that this series converges for all $x$. What I'm not sure about is why this series is exactly equal to the sine function for all $x$. I know exactly how to derive this expression, but in the process it's not clear that it will equal the sine function everywhere. Convergence does not mean the two are equal; it just means the series has a well-defined value for all $x$. Also, I would like to know: is this valid for values greater than $\frac{\pi}{2}$? I mean, I don't know how I can prove that this works for values beyond the natural (geometric) definition of sine.
First, let's take the Taylor polynomial $\displaystyle T_n(x) = \sum\limits_{k=0}^{n}\frac{f^{(k)}(a)}{k!}(x-a)^k$ at a given point $a$. We can now say: $\displaystyle R_n(x) = f(x) - T_n(x)$, where $R_n$ can be called the remainder function. If we can prove that $\lim\limits_{n\rightarrow \infty}R_n(x) = 0$, then $f(x) = \lim\limits_{n\rightarrow\infty}T_n(x)$, that is, the Taylor series is exactly equal to the function. Now we can use Taylor's theorem: $\mathbf{Theorem\,\, 1}$ If a function is $n+1$ times differentiable in an interval $I$ that contains the point $x=a$, then for $x \in I$ there exists $z$ that is strictly between $a$ and $x$, such that: $\displaystyle R_n = \frac{f^{(n+1)}(z)}{(n+1)!}(x-a)^{(n+1)}$. (This is known as the Lagrange remainder.)$\hspace{1cm}\blacksquare$ Ok, now for every Taylor series we want, we must prove that $\lim\limits_{n\rightarrow \infty}R_n(x) = 0$ in order for the series to be exactly equal to the function. Example: Prove that for $f(x) = \sin(x)$, the Maclaurin series (Taylor series with $a=0$) is exactly equal to the function for all $x$. First, we know that $\displaystyle\left|\, f^{(n+1)}(z)\right| \leq 1$, because $\left|\sin(x)\right|\leq 1$, and $|\cos(x)|\leq 1$. So we have: $\displaystyle 0\leq \left|R_n\right| = \frac{\left|f^{(n+1)}(z)\right|}{(n+1)!}\left|x\right|^{n+1} \leq \frac{\left|x\right|^{n+1}}{(n+1)!}$ It's easy to prove that $\displaystyle\lim\limits_{n\rightarrow\infty}\frac{\left|x\right|^n}{n!} = 0$ for all $x\in \mathbb{R}$ (for example with d'Alembert's ratio criterion for series convergence). Therefore, according to the squeeze theorem (in my country we call it the Two Policemen and the Burglar theorem :D), it follows that $\left|R_n\right| \rightarrow 0$ when $n\rightarrow\infty$. This in turn is equivalent to: $\displaystyle R_n\rightarrow 0$ as $n\rightarrow\infty$ (because $\left|R_n - 0\right| = \left|\left|R_n\right| - 0\right|$). Therefore, according to Theorem 1, it holds that $f(x) = \lim\limits_{n\rightarrow\infty}T_n$, for all $x$ in the radius of convergence. Now, let's find the radius of convergence: $\mathbf{Theorem\,\, 2}$ For a Maclaurin series of a function $f(x) = \displaystyle\sum\limits_{k=0}^{+\infty}c_k x^k = c_0 + c_1x + c_2x^2 + \dots$, the equality holds for $x$ in the radius of convergence: $\displaystyle\frac{1}{R} = \limsup\limits_{k \rightarrow \infty} \left|c_k\right|^{\large{\frac{1}{k}}}$ (Cauchy-Hadamard formula) where $R$ is the radius of convergence of the given function; more concretely, the power series converges to the function for all $x$ that satisfy $\displaystyle\left|\,x\, \right| < R$, where $R\in [0,+\infty]$. $\hspace{1cm}\blacksquare$ Let's look at our given example for $\sin(x)$. We have: $\displaystyle\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots = \sum\limits_{k=0}^{+\infty}c_k x^{2k+1}$ where $\displaystyle c_k = \frac{(-1)^k}{(2k+1)!}$. Substituting for $R$ we get: $\displaystyle \frac{1}{R} = \limsup\limits_{k\rightarrow\infty}\frac{1}{\sqrt[k]{(2k+1)!}}=0$ (this can be proven with Stirling's approximation), and $\displaystyle\implies \boxed{R = +\infty}$ So, our series converges for all $\displaystyle|x| < +\infty \, \Longleftrightarrow\, \boxed{ -\infty < x < +\infty}$, which we expected. Finally, we have that $f(x) = \lim\limits_{n\rightarrow\infty}T_n$, for all $x\in\mathbb{R}$, which is what we wanted to prove. $\blacksquare$
{ "language": "en", "url": "https://math.stackexchange.com/questions/577676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Proofs without words of some well-known historical values of $\pi$? Two of the earliest known documented approximations of the value of $\pi$ are $\pi_B=\frac{25}{8}=3.125$ and $\pi_E=\left(\frac{16}{9}\right)^2$, from Babylonian and Egyptian sources respectively. I've read that the Egyptian figure at least could be justified through some geometrical diagram which made the approximation a visually obvious statement about the areas of circles and squares. As far as I know, the Babylonian value on the other hand could have simply been obtained empirically through direct measurement of circle diameters and circumferences; I really have no idea. My question is simply can anyone provide simple visual proofs of these approximations? It doesn't matter to me if the proofs happen to be the historically used ones or not, as long as they get the job done. Side-note: The Egyptian value pertains to the area-$\pi$, whereas the Babylonian one is about the circumference-$\pi$. As far as anyone knew back in the day, the two constants were not necessarily equal a priori. Bonus points go to answers that can demonstrate both approximations for both pi's.
Straight-edge and compass construction of a quadrature of a circle is possible only with the Babylonian value of pi. Try this link: https://www.academia.edu/8084209/Ancient_Values_of_Pi. Though the Egyptian value of pi (22/7 or 256/81) is rational, compass and straight-edge construction of a quadrature of a circle is not possible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/577745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Determine whether series is convergent or divergent $\sum_{n=1}^{\infty}\frac{1}{n^2+4}$ I still haven't gotten the hang of how to solve these problems, but when I first saw this one I thought partial fraction or limit. So I went with taking the limit but the solution manual shows them using the integral test. Was I wrong to just take the limit? $$\sum_{n=1}^{\infty}\frac{1}{n^2+4}$$ Next: $$\lim_{n\to\infty}\frac{1}{n^2+4}=0$$ So converges by the test for divergence?
We only have the following statement to be true: $$\text{If $\sum_{n=1}^{\infty} a_n$ converges, then $a_n \to 0$.}$$ The converse of the above statement is not true, i.e., $$\text{if $a_n \to 0$, then $\displaystyle \sum_{n=1}^{\infty} a_n$ converges is an incorrect statement.}$$ For instance, $\displaystyle \sum_{n=1}^{\infty} \dfrac1n$ diverges, even though $\dfrac1n \to 0$. To prove your statement, note that $\dfrac1{n^2+4} < \dfrac1{n^2}$ and make use of the fact that $\displaystyle \sum_{n=1}^{\infty} \dfrac1{n^2}$ converges to conclude that $\displaystyle \sum_{n=1}^{\infty}\dfrac1{n^2+4}$ converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/577831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
$2\times2$ matrices are not big enough Olga Tausky-Todd had once said that "If an assertion about matrices is false, there is usually a 2x2 matrix that reveals this." There are, however, assertions about matrices that are true for $2\times2$ matrices but not for the larger ones. I came across one nice little example yesterday. Actually, every student who has studied first-year linear algebra should know that there are even assertions that are true for $3\times3$ matrices, but false for larger ones --- the rule of Sarrus is one obvious example; a question I answered last year provides another. So, here is my question. What is your favourite assertion that is true for small matrices but not for larger ones? Here, $1\times1$ matrices are ignored because they form special cases too easily (otherwise, Tausky-Todd would have not made the above comment). The assertions are preferrably simple enough to understand, but their disproofs for larger matrices can be advanced or difficult.
Interesting although quite elementary is a property of matrices built from consecutive integers (or more generally from values of an arithmetic progression), where the rows themselves form an arithmetic progression. Only $2 \times 2$ matrices made from consecutive integers are non-singular; matrices of higher dimension are singular, since consecutive rows differ by the same constant vector, so $R_1 - 2R_2 + R_3 = 0$ gives a linear dependence among the first three rows. For example the matrix $\begin{bmatrix} 1 & 2\\ 3 & 4 \\ \end{bmatrix}$ is non-singular, but matrices like $\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$, $\begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{bmatrix}$, $\dots$ are singular. This can be applied for easy generation of singular matrices if needed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/577887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42", "answer_count": 17, "answer_id": 4 }
Prove that $\int_E |f_n-f|\to0 \iff \lim\limits_{n\to\infty}\int_E|f_n|=\int_E|f|.$ I'm reading Real Analysis by Royden 4th Edition. The entire problem statement is: Let $\{f_n\}_{n=1}^\infty$ be a sequence of integrable functions on $E$ for which $f_n\to f$ pointwise a.e. on $E$ and $f$ is integrable over $E$. Show that $\int_E |f_n-f|\to0 \iff \lim\limits_{n\to\infty}\int_E|f_n|=\int_E|f|.$ My attempt at the proof is: $(\Longrightarrow)$ Suppose $\int_E|f_n-f|\to0$ and let $\varepsilon>0$ be given. Then there exists an $N>0$ such that if $n\geq N$ then $|\int_E|f_n-f||<\varepsilon.$ Consider $$|\int_E|f_n|-\int_E|f||=|\int_E(|f_n|-|f|)|\leq|\int_E|f_n-f||<\varepsilon.$$ Thus, $\int_E|f_n|\to\int_E|f|.$ $(\Longleftarrow)$ Suppose now that $\int_E|f_n|\to\int_E|f|.$ Let $h_n=|f_n-f|$ and $g_n=|f_n|+|f|$. Then $h_n\to0$ pointwise a.e. on $E$ and $g_n\to2|f|$ pointwise a.e. on $E$. Moreover, since each $f_n$ and $f$ are integrable $\int_E g_n=\int_E|f_n|+|f|\to2\int_E|f|.$ Thus, by the General Lebesgue Dominated Convergence Theorem, $\int_E|f_n-f|\to\int_E0=0.$ I'm pretty sure I got this one down, but I was wondering if it was okay for $g_n$ to depend on $f$ or $f_n$ or does it need to be independent of them? Thanks for any help or feedback!
Fatou's Lemma is your friend. By Fatou, \begin{align*} \int_{E} 2|f| &= \int_{E} \liminf_{n\to\infty} (|f| + |f_n| - |f-f_n|) \\ &\leq \liminf_{n\to\infty} \int_{E} (|f| + |f_n| - |f-f_n|) \\ &= 2\int_{E} |f| - \limsup_{n\to\infty} \int_{E} |f-f_n|. \end{align*} So it follows that $\limsup_{n\to\infty} \int_{E} |f-f_n| = 0$ and the desired conclusion follows. You may also want to take a look at Scheffé's lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/577946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Can we make a rectangle from these parts? I have the following problem: Can we use all the parts from the picture (each part exactly once) to make a rectangle? I was thinking like this: we have $20$ small squares, so we have three possibilities: $1 \times 20$, $2 \times 10$ and $4 \times 5$. I can see clearly that $1 \times 20$ and $2 \times 10$ are not possible. And my intuition says that we also can't make $4 \times 5$, but I can't prove it rigorously. Any help?
For the 4 by 5, suppose 4 rows and 5 columns, and consider rows 1,3 as blue and columns 1,3,5 as red. Then there are 10 blues and 12 reds. Now except for the T and L shapes, the other three contribute even numbers to either blue or red rows/columns. The L contributes an odd number to either blue or red, and the T contributes, depending on its orientation, either odd number to blue and an even number to red, or else an even number to blue and an odd number to red. So no matter how the T is oriented, we get an odd number for either the blues or the reds, which is impossible. Easier proof: Take the 4 by 5 (or the 2 by 10) board and color it with black and white squares as in a traditional chess board. Then all but the T piece are such that, no matter where they are placed, they cover two black and two white squares. But the T shape must cover either 3 black and 1 white, or else the reverse 1 black and 3 white. So together the tiles cover either 9 black and 11 white, or else 11 black and 9 white. However both the 4 by 5 and the 2 by 10 board have 10 each of black and white.
{ "language": "en", "url": "https://math.stackexchange.com/questions/578034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are all subspaces of equal dimension (of a vector space) the same? I haven't quite gotten my head around dimension, bases, and subspaces. It seems intuitively true, but are all subspaces of equal dimension of the same vector space the same? If so, does it follow from the definitions of dimension, subspace, and vector space, or does it need to be proven? Thanks
Here are some more intuitive "definitions". Dimension - Number of degrees of freedom of movement. One-dimensional implies only one direction of movement: up and down a line. Two-dimensional means two distinct directions of movement, spanning a plane, etc. Basis - The distinct directions of movement. One-dimensional movement only says we are moving along a line, but it does not specify which line. The basis tells us the directions we can move in. Can you see why the dimension of a subspace is not enough information to uniquely identify it? Why do we need to know the basis as well?
{ "language": "en", "url": "https://math.stackexchange.com/questions/578155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Functions and convergence in law Let $X$ be a random variable taking values in some metric space $M$. Let $\{\phi_n\}$ be a sequence of measurable functions from $M$ to another metric space $\tilde M$. Suppose that $\phi_n(X)$ converges in law to a random variable $Y$. Must it be the case that the pairs $(X, \phi_n(X))$ converge in law to a pair $(X, \phi(X))$, where $\phi$ is a measurable function such that $\phi(X)$ has the same law as $Y$?
A more elementary counterexample: Let $X$ have uniform distribution over $[0,1]$ and define $\phi_n(x):=x$ when $n$ is odd, and $\phi_n(x):=1-x$ when $n$ is even. It's clear that every $\phi_n(X)$ has the same distribution (namely, that of $X$), hence we have convergence in law. However, the pair $(X,\phi_n(X))$ doesn't converge in law to anything, since its distribution keeps flipping back and forth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/578248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Show that the following set has the same cardinality as $\mathbb R$ using CSB We have to show that the following set has the same cardinality as $\mathbb R$ using CSB (Cantor–Bernstein–Schroeder theorem). $\{(x,y)\in \Bbb{R^2}\mid x^2+y^2=1 \}$ I think that these are the two functions: $f:(x,y)\to \Bbb{R} \\f(x)=x,\\f(y)=y $ $g:\Bbb{R}\to (x,y)\\ g(x)=\cos(x),\\g(y)=\sin(y)$ Is this correct ? Thanks.
HINT: There is no continuous bijection between the two sets. Find a bijection from the unit circle to $[0,2\pi)$, and an injection from $\Bbb R$ into $[0,2\pi)$. Also, when you define a function $g\colon\Bbb R\to\Bbb R^2$ you don't write $g(x)=\cos x$ and $g(y)=\sin y$. You should write $g(x)=(\cos x,\sin x)$ instead. Similarly when defining $f\colon\Bbb R^2\to\Bbb R$, you should define $f(x,y)=z$ rather than writing $f(x)=x$ and $f(y)=y$. Both functions that you have defined are meaningless expressions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/578311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is the ring of polynomial invariants of a finite perfect group an UFD? Let $G$ be a finite group. $G$ acts on $\mathbb K[x_1,...,x_n]$ by automorphisms fixing $K$. $\mathbb K[x_1,...,x_n]^G=\{ T\in \mathbb K[x_1,...,x_n],\forall \sigma \in G, T^{\sigma}=T\}$ is the ring of invariants. Is it true that $\mathbb K[x_1,...,x_n]^G$ is a unique factorization domain if commutator subgroup of $G$ equals to $G$?
Let $P$ be an irreducible polynomial. Any element $g$ of $G$ maps $P$ to some irreducible polynomial, because action is invertible. This polynomial $gP$ may be either proportional to $P$ or coprime to $P$. Let $Q = P \cdot (g_1 P) \cdot \ldots \cdot (g_k P)$ be a product of polynomials in the "essential orbit" of $P$: all pairwise non-proportional irreducible polynomials in the orbit. The action of $g \in G$ on $Q$ permutes the factors and multiplies them by some field elements, so $gQ = \phi(g) Q$, where $\phi\colon G \to k^*$ is some function on $G$. It is easy to check that $\phi$ is a homomorphism. As $k^*$ is commutative, the commutant of $G$ belongs to the kernel of $\phi$. Thus if $G$ is perfect, $Q$ is necessarily invariant. Now it's straightforward to see that invariant polynomials over such $G$ form an UFD. Indeed, take an invariant polynomial $R\in k[x_1, \ldots, x_n]^G$. Let $P$ be any irreducible divisor of $R$ (not invariant, just a polynomial). As $R$ is invariant, it obviously is divisible by $Q$, but $Q$ is invariant, hence the quotient is invariant too. Any decomposition of $R$ into irreducible invariant polynomials contains a multiple of $P$ (after all, polynomial ring itself is an UFD), hence one of the factors is $Q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/578380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Need help with finding domains, intercepts, max/min of function Here is a picture for clarification of the question: So far, I have gotten this: the square root requires non-negative numbers, so $x$ is greater than or equal to $0$. In order to find the asymptotes I set the denominator to zero, and so the asymptote is zero?? I don't understand.
You can see that the domain is $\{x\in \mathbb{R} : x\geq 0 \}$. To find the max/min (which eventually gets you the range), we work out the first derivative: $f'(x)=\frac{4-x}{2\sqrt{x}\,(x+4)^2}$. Here the only critical point is $x=4$; moreover $f'(x)>0$ for $0<x<4$ and $f'(x)<0$ for $x>4$ (equivalently, $f''(4)=-\frac{1}{256}<0$). Thus the function has a maximum at $x=4$, and the maximum value is $f(4)=\frac{1}{4}$. Obviously the minimum value of $f$ is $0$, attained at $x=0$. If you want to draw its graph, start from the graph of $\sqrt{x}$ and keep in mind the range $0\leq y \leq \frac{1}{4}$. Since $x+4$ is in the denominator, the curve decreases for $x>4$ and moves closer to the $x$-axis as $x$ gets bigger. I hope this will help.
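A symbolic cross-check of these computations (a sketch using SymPy; it assumes, from the picture, that the function is $f(x)=\frac{\sqrt{x}}{x+4}$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.sqrt(x) / (x + 4)

fp = sp.simplify(sp.diff(f, x))
print(fp)                           # equivalent to (4 - x)/(2*sqrt(x)*(x + 4)**2)
print(sp.solve(sp.Eq(fp, 0), x))    # [4] -- the critical point
print(sp.diff(f, x, 2).subs(x, 4))  # -1/256, negative, so x = 4 is a maximum
print(f.subs(x, 4))                 # 1/4
```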
{ "language": "en", "url": "https://math.stackexchange.com/questions/578587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is this curve formed by latticing lines from the $x$ and $y$ axes? Consider the following shape, which is produced by dividing the segment between $0$ and $1$ on the $x$ and $y$ axes into $n=16$ parts. Question 1: What is the curve $f$ as $n\rightarrow \infty$? Update: According to the answers this curve is not part of a circle, but it has very similar properties and behavior. In other words, this fact shows how one can produce a "pseudo-circle" with equation $x^{\frac{1}{2}}+y^{\frac{1}{2}}=1$ from some simple geometric objects (lines) by a limit construction. Question 2: Is there a similar "limit construction by lines" like the drawing above for producing a circle?
The OP's curve is (a portion of) the parabola with $(1/2,1/2)$ for its focus and $x+y=0$ for its directrix. With a slight modification (see below), the lines in the OP's drawing are tangent lines to the parabola, which can be thought of as (origami) crease lines created when the focus is "folded" to lie atop various points along the directrix. The slight modification is this: The tangent lines for the parabola run from $(0,t)$ to $(1-t,0)$, but it appears from the drawing that the OP is connecting $(0,{k\over16})$ to $({17-k\over16},0)$ rather than $(1-{k\over16},0)$. In the limit, this doesn't matter, but it does make a small difference along the way.
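For completeness, here is the envelope computation behind this (a standard exercise, spelled out as a sketch): the chord from $(0,t)$ to $(1-t,0)$ is $$F(x,y,t)=xt+y(1-t)-t(1-t)=0,$$ and the envelope condition $\partial F/\partial t=x-y-1+2t=0$ gives $t=\frac{1+y-x}{2}$. Substituting back yields $(x-y-1)^2=4y$, i.e. $x=(1-\sqrt y)^2$ — exactly the curve $\sqrt x+\sqrt y=1$ from the question. One can also verify the focus–directrix description directly: along this curve $(x-y)^2=2(x+y)-1$, which rearranges to $\left(x-\tfrac12\right)^2+\left(y-\tfrac12\right)^2=\frac{(x+y)^2}{2}$, i.e. equidistance from the point $\left(\tfrac12,\tfrac12\right)$ and the line $x+y=0$.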
{ "language": "en", "url": "https://math.stackexchange.com/questions/578662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 9, "answer_id": 8 }
Differential Equations and Newton's method How can I approach this question? For problem one this is what I did: Given the DE, $$p'(x) = p''(x) + \left(\frac{2\pi f}{c}\right)^2p(x) = 0,$$ and its solution, $p(x) = \sin(kx)$, I substituted the things on the right hand side of the DE to get $$p'(x) = -\sin(kx)\,k^2 + \cos(kx) + \left(\frac{2\pi f}{c}\right)^2 \sin(kx) = 0.$$ Then, I plugged in $x=0$ to get $\cos(kx) = 0$. My answer does not depend on $f$ and $c$ at all, even though the question says it should. What is the right approach to solving for $k$? Also, for the initial condition $p(0) = 0$, shouldn't any value of $k$ work because you will always get $\sin(0) = 0$? Problem 1: Phonetics The shape of the vocal tract tends to promote certain sound frequencies. For example, to produce the first vowel in the word about, the vocal tract opens widely. The cross-sectional area throughout the vocal tract is approximately the same and may be modeled by a cylinder with one end open (the lips) and the other end closed (the glottis/vocal folds). Let $p(x)$ denote the sound pressure at position $x$ within the cylinder starting at the lips, $x=0$, and ending at the glottis, $x=L$, where $L$ is the length of the vocal tract. Then $p(x)$ satisfies the differential equation $$ p''+\left(\frac{2\pi f}{c}\right)^2 p=0\tag{$*$}$$ with conditions at the endpoints $p(0)=0$ and $p'(L)=0$. This is called a boundary value problem. $f$ is the frequency of the produced sound, and $c$ is the speed of sound. Show that $p(x)=\sin(kx)$ solves the differential equation and the first boundary condition ($p(0)=0$) when $k$ is chosen correctly. What value of $k>0$ ensures that this function is a solution? Your answer will depend on $f$ and $c$. Use the second boundary condition $p'(L)=0$ to determine the frequencies $f$ that the vocal tract can produce. Note: your answer should be expressed in terms of an integer $n$ so that there would be infinitely many frequencies produced. Your answer will also depend on $L$ and $c$. Problem 2: Third order differential equations and Newton's method We are trying to solve the third order differential equation $$y'''+3y''-y=0. \tag{$**$}$$ Inspired by earlier results in the course, we guess that the solution to this differential equation might be $y=Ae^{kx}$ where $A$ and $k$ are constants. Show that by plugging this guess into the differential equation we get an equation for $k$: $k^3+3k^2-1=0$. Find the positive root of this cubic by using three iterations of Newton's method and write down a solution to $(**)$. Hint: plot your cubic to come up with a starting point for Newton's method.
You are being asked to find a relationship between $k$ and $f$ and $c$. $\sin{(kx)}$ was given to show you the form of the solution, but now you are asked to determine exactly what $k$ should be in this case, in terms of the other quantities in the problem. To do this, generate the needed derivatives of $\sin{(kx)}$ and substitute them into the differential equation $(*)$ (I don't know where you got the $p'$ part from, as it was not given in the problem below). Now determine how to set the value of $k$ so that the equation is always true (hint: $\sin$ will not always be $0$, so the other factor must be set to $0$). Once you have replaced $k$ with quantities from the original problem, you are actually on the right track regarding the boundary conditions: you will find that a trig function must be zero, so what values of the argument will satisfy that condition? The second problem is the same technique applied again, just to an equation of a different form.
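For what it's worth, here is a sketch of where those hints lead (my own working — check it against your course's conventions). For problem 1, substituting $p=\sin(kx)$ into $(*)$ gives $\left(\left(\frac{2\pi f}{c}\right)^2-k^2\right)\sin(kx)=0$, so $k=\frac{2\pi f}{c}$; then $p'(L)=k\cos(kL)=0$ forces $kL=\frac{(2n+1)\pi}{2}$, i.e. $f=\frac{(2n+1)c}{4L}$ for $n=0,1,2,\dots$ For problem 2, this little Python script carries out the three Newton iterations (the starting guess $k_0=0.5$ is my choice, read off a plot of the cubic, which crosses zero between $0$ and $1$):

def g(k):                  # the cubic from substituting y = A*exp(k*x) into (**)
    return k**3 + 3*k**2 - 1

def dg(k):                 # its derivative, for the Newton step
    return 3*k**2 + 6*k

k = 0.5                    # starting guess from the plot
for step in range(3):
    k = k - g(k) / dg(k)   # Newton's method: k_{n+1} = k_n - g(k_n)/g'(k_n)
    print(step + 1, k)     # prints 0.53333..., 0.532092..., 0.532089...

So three iterations give $k\approx 0.5321$, and $y=Ae^{0.5321x}$ is an (approximate) solution of $(**)$.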
{ "language": "en", "url": "https://math.stackexchange.com/questions/578735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $\int_{-\infty}^{\infty}f(x)\ \mathrm dx=100$ then $\int_{-\infty}^{\infty}f(100x+9)\ \mathrm dx =?$ Given $\displaystyle\int_{-\infty}^{\infty}f(x)\ dx=100$, evaluate $\displaystyle\int_{-\infty}^{\infty}f(100x+9)\ dx.$ Question is as above. I'm not sure how to even start. Is the answer $100$? Seems like if the function is bounded from negative infinity to infinity, any transformation just changes the shape. Not sure how to explain this properly though.
Hint: Consider a particular example. What if $f(x) = 1$ for $0 < x < 100$ and $f(x) = 0$ elsewhere? What is $f(100x+9)$ in this case? What is the value of the second integral?
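To spell out where the hint is heading: with $f$ as in the hint, $f(100x+9)=1$ exactly when $0<100x+9<100$, i.e. on the interval $\left(-\frac{9}{100},\frac{91}{100}\right)$ of length $1$, so the second integral is $1$, not $100$. The same happens in general: the substitution $u=100x+9$, $du=100\,dx$ gives $$\int_{-\infty}^{\infty}f(100x+9)\,dx=\frac{1}{100}\int_{-\infty}^{\infty}f(u)\,du=\frac{100}{100}=1.$$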
{ "language": "en", "url": "https://math.stackexchange.com/questions/578830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
sequence with infinitely many limit points I am looking for a sequence with infinitely many limit points. I don't want to use $\log,\sin,\cos$ etc.! It's easy to find a sequence like the one above, e.g. $1,1,2,1,2,3,1,2,3,4,1,2,3,4,5,1,\dots$ But how can you prove the limit points? The problem I am having is that I can't write down the recursion or definition of the sequence exactly, and for a formal proof I need this. So what's a sequence with infinitely many limit points, without using $\log,\sin,\cos$ or any other special functions?
I realize this question was asked a long time ago, but for posterity, you may formally describe the sequence $\langle 1, 1, 2, 1, 2, 3, 1, 2, 3, 4, \dots\rangle$ as $\langle x_i\rangle$ where $x_i = n$ precisely when $i = \frac {n^2 + (2k-1)n + (k^2 - 3k + 2)}{2}$ for some $k \in \mathbb{N}$. To justify this formula, we start by showing that $n$ first appears in the sequence in the term with index $\frac{n(n+1)}{2}$. By splitting the sequence into $(1),(1,2),(1,2,3),(1,2,3,4),\dots,(1,\dots,n)$, it is pretty clear that the first $n$ has index $1+2+3+4+\dots+n$, which can be shown to equal $\frac{n(n+1)}{2}$ by induction. Then we continue this useful grouping of the sequence to find the indices of the subsequent terms: $(1),\dots,(1,\dots,n),(1,\dots,n,n+1),(1,\dots,n,n+1,n+2),\dots$ The second appearance comes $n$ terms after the first appearance, the third appearance comes $(n+1)$ terms after the second, and so on. In general the $k^{th}$ appearance of $n$ will come $n + (n+1) + \dots + (n+k-2)$ terms after the first appearance. This sum can be easily shown to equal $\frac{(n+k-2)(n+k-1)}{2} - \frac{(n-1)n}{2}$. But then the $k^{th}$ appearance of $n$ in the sequence will have index $\frac{n(n+1)}{2}+\frac{(n+k-2)(n+k-1)}{2} - \frac{(n-1)n}{2}$, which simplifies to $\frac {n^2 + (2k-1)n + (k^2 - 3k + 2)}{2}$. I'll leave it to you to show that this expression defines the rule of assignment for a bijection $f:\mathbb{N}\times\mathbb{N} \to \mathbb{N}$, meaning every index $i \in \mathbb{N}$ is mapped to by exactly one pair $(n,k)\in \mathbb{N} \times \mathbb{N}$, therefore showing that $x_i$ is well defined. Once you have defined this sequence, showing it has infinitely many limit points is easy. We say that $m$ is a limit point of $\langle x_i\rangle$ precisely if there is a subsequence of $\langle x_i\rangle$ converging to $m$. Using $f(n,k) = \frac {n^2 + (2k-1)n + (k^2 - 3k + 2)}{2}$ as our choice function, we choose the constant subsequence $\langle y_i\rangle$ where $y_i = x_{f(m,i)} = m$. It follows that $\langle y_i\rangle$ converges to $m$, so every $m\in\mathbb{N}$ is a limit point.
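If you want a quick empirical check of the index formula (my own snippet, using 1-based indices as in the answer; the numerator is always even, so integer division is exact):

# build the sequence 1, 1,2, 1,2,3, ... by concatenating blocks
seq = []
for block in range(1, 80):
    seq.extend(range(1, block + 1))

def index(n, k):
    # claimed 1-based index of the k-th appearance of n
    return (n*n + (2*k - 1)*n + (k*k - 3*k + 2)) // 2

for n in range(1, 8):
    for k in range(1, 8):
        assert seq[index(n, k) - 1] == n
print("index formula verified for n, k up to 7")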
{ "language": "en", "url": "https://math.stackexchange.com/questions/578899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Can someone give an example? Give an example of a function $f$ such that $\lim_{x\rightarrow \infty } f(x)$ exists, but $\lim_{x\rightarrow \infty } f'(x)$ does not exist.
Consider $$ f(x) = \frac{\sin(x^2)}{x}. $$ Using the Squeeze Theorem, you can show that $$ \lim_{x \to \infty} f(x) = 0. $$ However, its derivative $$ f'(x) = 2\cos(x^2) - \frac{\sin(x^2)}{x^2} $$ never settles down as $x \to \infty$.
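Spelled out (a routine check of both claims): since $|\sin(x^2)|\le 1$, we have $-\frac1x\le f(x)\le\frac1x$ for $x>0$, and the Squeeze Theorem gives $f(x)\to 0$. For the derivative, along $x_n=\sqrt{2\pi n}$ we get $f'(x_n)=2\cos(2\pi n)-\frac{\sin(2\pi n)}{2\pi n}=2$, while along $y_n=\sqrt{\pi+2\pi n}$ we get $f'(y_n)=-2$: two subsequences with different limits, so $\lim_{x\to\infty}f'(x)$ does not exist.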
{ "language": "en", "url": "https://math.stackexchange.com/questions/578974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Normal domain is equivalent to integrally closed domain. Is it true? Normal domain is equivalent to integrally closed domain. Is it true? Can anyone tell me?
Here's a solution to this problem for the sake of completeness. Let $A$ be an integral domain with $K=\operatorname{Quot}(A)$. We'll show it is normal (that is, for every prime ideal $p\subset A$ the localization $A_p$ is integrally closed) if and only if it is integrally closed. * *Let $A$ be normal and consider any $r\in K$ satisfying a monic polynomial equation with coefficients in $A$. Then $r$ is integral over each $A_p$ as well (since $A\subseteq A_p\subseteq K$), and because each $A_p$ is integrally closed we know $r\in A_p$ for every prime ideal $p$. Therefore $$ r\in\bigcap_{p\text{ prime}}A_p=A, $$ where the last equality is shown in this answer or this one. * *The converse follows from the fact that if $A$ is integrally closed then $S^{-1}A$ is integrally closed for any multiplicatively closed set $S$, as shown in this answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/579059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is a certain subset of a regular uncountable cardinal stationary? This is an excerpt from Jech's Set Theory (page 94). For a regular uncountable cardinal $\kappa$ and a regular $\lambda<\kappa$ let $$E^\kappa_\lambda= \{\alpha<\kappa:\mbox{cf}\ \alpha=\lambda \}$$ It is easy to see that each $E^\kappa_\lambda$ is a stationary subset of $\kappa$ Well, I don't see so easily why this should hold. Essentially, given a set which is disjoint from a set of this form, there is no reason for it to be bounded, so I tried proving it cannot be closed. However, I have no idea why this should hold either, as it might not have any ordinal with cofinality $\lambda$ as a limit point. Thanks in advance
Note that if $C$ is a club, then $C$ is unbounded, and therefore has order type $\kappa$. Since $\lambda<\kappa$ we have some initial segment of $C$ of order type $\lambda$. Show that this initial segment cannot have a last element. Its limit is in $C$ and by the regularity of $\lambda$ must have cofinality $\lambda$. Therefore $C\cap E^\kappa_\lambda\neq\varnothing$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/579146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Action on Pairs, On Sets and on Points in GAP I am trying to understand group actions in GAP. I am confused about a few things: what is the difference between the action on pairs, the action on sets (with the domain sometimes given as a list), and the action on blocks? Please help me to clarify these things. Thanks.
The "Group Actions" chapter of the GAP reference manual documents standard actions, and if you scroll until OnTuplesTuples, there will be a common example covering all of them. I will just take from there examples of the three actions in question, and try to shed more light. First, create $A_4$ as g: gap> g:=Group((1,2,3),(2,3,4));; gap> AsList(g); [ (), (2,3,4), (2,4,3), (1,2)(3,4), (1,2,3), (1,2,4), (1,3,2), (1,3,4), (1,3)(2,4), (1,4,2), (1,4,3), (1,4)(2,3) ] The group g acts transitively on the set $1,2,3,4$ so $1$ may be mapped to any of the points $1,2,3,4$: gap> Orbit(g,1,OnPoints); [ 1, 2, 3, 4 ] For example: gap> 1^(1,2)(3,4); 2 gap> 1^(1,3,2); 3 gap> 1^(1,4,2); 4 OnPairs extends OnPoints on pairs of points. A permutation s from g will map the pair [i,j] to [i^s,j^s]: gap> Orbit(g,[1,2],OnPairs); [ [ 1, 2 ], [ 2, 3 ], [ 1, 3 ], [ 3, 1 ], [ 3, 4 ], [ 2, 1 ], [ 1, 4 ], [ 4, 1 ], [ 4, 2 ], [ 3, 2 ], [ 2, 4 ], [ 4, 3 ] ] For example, gap> OnPairs([1,2],(1,2,3)); [ 2, 3 ] gap> OnPairs([1,2],(1,3,4)); [ 3, 2 ] etc. Finally, OnSets acts as OnPairs but additionally sorts the entries of the result: gap> Orbit(g,[1,2],OnSets); [ [ 1, 2 ], [ 2, 3 ], [ 1, 3 ], [ 3, 4 ], [ 1, 4 ], [ 2, 4 ] ] Thus, [ 2, 3 ] and [ 3, 2 ] are not distinguished in the case of OnSets action: gap> OnSets([1,2],(1,2,3)); [ 2, 3 ] gap> OnSets([1,2],(1,3,4)); [ 2, 3 ] Hope this helps to clarify things.
{ "language": "en", "url": "https://math.stackexchange.com/questions/579212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Brownian Motion conditional distribution Let $\{X(u),u\geq0\}$ be a standard Brownian motion. What is the conditional distribution of $X(t)$ given $\{X(t_{1}),\dots,X(t_{n})\}$, where $0<t_{1}<\cdots<t_{n}<t_{n+1}=t$? --So far, I have derived the joint pdf of $X(t_{n+1})$ and $X(t_{1}),\dots,X(t_{n})$ using the fact that each increment $X(t_{i})-X(t_{i-1})$ is independent and normally distributed. The pdf is (I think) given by $$ \begin{align*} f(x_{1},x_{2},\dots,x_{n},x_{n+1})&=f_{t_{1}}(x_{1})f_{t_{2}-t_{1}}(x_{2}-x_{1})\cdots f_{t_{n}-t_{n-1}}(x_{n}-x_{n-1})f_{t_{n+1}-t_{n}}(x_{n+1}-x_{n})\\ &=\frac{\exp\left\{-\frac{1}{2}\left[\frac{x_{1}^{2}}{t_{1}}+\frac{(x_{2}-x_{1})^{2}}{t_{2}-t_{1}}+\cdots+\frac{(x_{n}-x_{n-1})^{2}}{t_{n}-t_{n-1}}+\frac{(x_{n+1}-x_{n})^{2}}{t-t_{n}}\right]\right\}}{(2\pi)^{(n+1)/2}[t_{1}(t_{2}-t_{1})\cdots(t_{n}-t_{n-1})(t-t_{n})]^{1/2}}. \end{align*} $$ So the $X(t_{i})$s are jointly normal. Now to find the conditional distribution, I believe we need to use multivariate distribution theory. I.e., if $\mathbf{X}$ and $\mathbf{Y}$ have a joint normal distribution, then the conditional distribution of $\mathbf{Y}$ given $\mathbf{X}$ is also normally distributed such that $$\mathbf{Y}|\mathbf{X}\sim\operatorname{Multivariate\ Normal}(\boldsymbol\mu^{*},\boldsymbol\Sigma^{*}),$$ where $\boldsymbol\mu^{*}=\boldsymbol\mu_{y}+\boldsymbol\Sigma_{yx}\boldsymbol\Sigma_{xx}^{-1}(\mathbf{x}-\boldsymbol\mu_{x})$ and $\boldsymbol\Sigma^{*}=\boldsymbol\Sigma_{yy}-\boldsymbol\Sigma_{yx}\boldsymbol\Sigma_{xx}^{-1}\boldsymbol\Sigma_{xy}$. How would I go about finding each of these components for $X(t)|X(t_{1}),\dots,X(t_{n})$? Thank you.
One knows that the marginal distributions of Brownian motion are normal and that $X(t)-X(t_n)$ is independent of $\sigma(X(s);s\leqslant t_n)$. Hence, the conditional distribution of $X(t)$ given $\sigma(X(t_k);1\leqslant k\leqslant n)$ (or given $X(t_n)$ only) is normal with mean $X(t_n)$ and variance $t-t_n$.
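In the notation of the question, a sketch of the bookkeeping: write $X(t)=X(t_n)+\bigl(X(t)-X(t_n)\bigr)$. Conditionally on $X(t_1),\dots,X(t_n)$, the first term is the known constant $X(t_n)$ and the second is an independent $N(0,t-t_n)$ increment, so $$X(t)\mid X(t_1),\dots,X(t_n)\;\sim\;N\bigl(X(t_n),\,t-t_n\bigr).$$ The multivariate-normal regression formula $\boldsymbol\mu^{*}=\boldsymbol\mu_{y}+\boldsymbol\Sigma_{yx}\boldsymbol\Sigma_{xx}^{-1}(\mathbf{x}-\boldsymbol\mu_{x})$ gives the same answer: since $\operatorname{Cov}(X(t_i),X(t_j))=\min(t_i,t_j)$, the row vector $\boldsymbol\Sigma_{yx}=(t_1,\dots,t_n)$ coincides with the last row of $\boldsymbol\Sigma_{xx}$, so $\boldsymbol\Sigma_{yx}\boldsymbol\Sigma_{xx}^{-1}=(0,\dots,0,1)$ and only the $X(t_n)$ coordinate survives.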
{ "language": "en", "url": "https://math.stackexchange.com/questions/579331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rationalizing a denominator. The question instructs to rationalize the denominator in the following fraction: My solution is as follows: The book's solution is which is exactly the numerator in my solution. Can someone confirm my solution or point out what I'm doing wrong? Thank you.
Going from the first to the second line of your working, you seem to have said $$(\sqrt{6} -2)(\sqrt{6}+2) = 6+4$$ in the denominator of the fraction. This isn't true: in general $(a-b)(a+b)=a^2-b^2$, not $a^2+b^2$, so the denominator should be $6-4=2$, not $10$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/579412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Does closed imply bounded? Definitions:
1. A set $S$ in $\mathbb{R}^m$ is bounded if there exists a number $B$ such that $\|\mathbf{x}\|\leq B$ for all $\mathbf{x}\in S$, that is, if $S$ is contained in some ball in $\mathbb{R}^m$.
2. A set $S$ in $\mathbb{R}^m$ is closed if, whenever $\{\mathbf{x}_n\}_{n=1}^{\infty}$ is a convergent sequence completely contained in $S$, its limit is also contained in $S$.
3. A set $S$ in $\mathbb{R}^m$ is compact if and only if it is both closed and bounded.
Does closedness not imply boundedness in general? If so, why does a compact set need to be both closed as well as bounded?
$\mathbb{R}^m$ itself is a closed set. Is it bounded? Clearly not, so closed does not imply bounded. But in the case of compact sets, they are closed as well as bounded in $\mathbb{R}^m$.
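Another instructive example, checked against the sequence definition above: the set $\mathbb{Z}\subset\mathbb{R}$ is closed, because a convergent sequence of integers is eventually constant and so its limit is again an integer; yet $\mathbb{Z}$ is clearly unbounded. So closedness never implies boundedness on its own — compactness has to assume both.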
{ "language": "en", "url": "https://math.stackexchange.com/questions/579473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
How do I find the Laplace transform of this integral using the convolution theorem? $$\int_0^{t} e^{-x}\cos x \, dx$$ In the book, the $x$ is written as the Greek letter $\tau$ ("tau"). Anyway, I'm confused about how to deal with this problem because the $f(t)$ is clearly $\cos t$, but $g(t)$ is not clear to me. Please help.
Write the integral as a convolution: $(\cos * e^{\,\cdot})(t)=\int_0^t \cos(\tau)\,e^{t-\tau}\,d\tau=e^{t}\int_0^t e^{-\tau}\cos\tau\,d\tau$, so the given integral equals $e^{-t}\,(\cos t * e^{t})$. By the convolution theorem, the transform of $\cos t * e^{t}$ is $\frac{s}{s^2 + 1}\cdot\frac{1}{s-1}$, and the factor $e^{-t}$ then applies the shift $s\to s+1$, giving $$\mathcal L\left\{\int_0^t e^{-\tau}\cos\tau\,d\tau\right\}=\frac{s+1}{s\left((s+1)^2+1\right)}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/579578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that [0,1] is equivalent to (0,1) and give an explicit description of a 1-1 function from [0,1] onto (0,1) The problem is stated as follows: Show that there is a one-to-one correspondence between the points of the closed interval $[0,1]$ and the points of the open interval $(0,1)$. Give an explicit description of such a correspondence. Now, I think I can prove the first part of the problem by demonstrating the following: Define $f: (0,1) \to \mathbb{R}$ as follows. For $n \in \mathbb{N}$, $n \geq 2$, $f\left(\frac{1}{n}\right) = \frac{1}{n-1}$, and for all other $x \in (0,1)$, $f(x) = x$ * *Prove that $f$ is a $1-1$ function from $(0,1)$ onto $(0,1]$ *Slightly modify the above function to prove that $[0,1)$ is equivalent to $[0,1]$ *Prove that $[0,1)$ is equivalent to $(0,1]$ Since the "equivalent to" relation is both symmetric and transitive, it should follow that $[0,1]$ is equivalent to $(0,1)$. Hence, there does exist a one-to-one correspondence between $[0,1]$ and $(0,1)$. I have no trouble with the above. My problem is in "finding an explicit description of such a correspondence." Can I modify the above function, or will that not suffice?
Steps 2 and 3 are not necessary. The function $g:(0,1] \to [0,1]$ defined by $g(1) = 0$ and $g(x) = f(x)$ if $x \neq 1$ is a bijection. This shows that $(0,1]$ is equivalent to $[0,1]$ and, by transitivity, that $(0,1)$ is equivalent to $[0,1]$. Furthermore, the function $g \circ f$ is a one-to-one correspondence between $(0,1)$ and $[0,1]$ that you can describe explicitly.
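For the record, chasing through the composition gives a completely explicit map (a sketch based on the $f$ defined in the question): $h=g\circ f\colon(0,1)\to[0,1]$ satisfies $$h\left(\tfrac12\right)=0,\qquad h\left(\tfrac1n\right)=\tfrac1{n-2}\ \ (n\ge 3),\qquad h(x)=x\ \text{otherwise},$$ and one checks directly that $h$ is a bijection: $0$ is hit only by $\tfrac12$, each $\tfrac1k$ $(k\ge 1)$ only by $\tfrac1{k+2}$, and every other point of $[0,1]$ only by itself.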
{ "language": "en", "url": "https://math.stackexchange.com/questions/579657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
finding inverse of sin with a variable in the argument $$h(t) = 18\sin\left(\pi\frac{t}{7}\right) + 20$$ where $h(t)$ is height in meters and $t$ is the elapsed time in seconds. If $h$ is restricted to the domain $[3.5,10.5]$, find and interpret the meaning of $h^{-1}(20)$. In the answer key, the answer is $7$. This means that a height of $20$ meters is reached in $7$ seconds.
Let $h(t) = h$ and write $t = h^{-1}(h)$. In this case: $h = 20 + 18\sin(\frac{\pi t}{7})$ $h - 20 = 18\sin(\frac{\pi t}{7})$ $\frac{h - 20}{18} = \sin(\frac{\pi t}{7})$. Now substitute $h = 20$: we need $\sin(\frac{\pi t}{7}) = 0$, i.e. $\frac{\pi t}{7} = k\pi$ for some integer $k$, which gives $t = 7k$. The restriction to the domain $[3.5, 10.5]$ is exactly what makes the answer unique: there $\frac{\pi t}{7}$ ranges over $[\frac{\pi}{2}, \frac{3\pi}{2}]$, where $\sin$ is one-to-one, and the only multiple of $7$ in $[3.5, 10.5]$ is $t = 7$. Hence $h^{-1}(20) = 7$ (check: $h(7) = 18\sin(\pi) + 20 = 20$), meaning a height of $20$ meters is reached after $7$ seconds, as the answer key says. (Blindly applying $\arcsin$ only returns the principal value $\arcsin(0) = 0$ and loses the other branches — that is where it is easy to go astray.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/579743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many transitive relations on a set of $n$ elements? If a set has $n$ elements, how many transitive relations are there on it? For example, if a set $A$ has $2$ elements, then how many transitive relations are there? I know the total number of relations is $16$, but how do I find only the transitive ones? Is there a formula for this, or is it a counting problem? Also, how do I find this for any number of elements $n$?
As noticed by @universalset, there are 13 transitive relations among the total of 16 relations on a set of cardinality 2, and they are easy enough to list by hand (or to count by machine). :)
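Since the picture originally listing the 13 relations is not reproduced here, a brute-force count (my own sketch) confirms the number, and also shows how quickly the counts grow — consistent with the fact that no simple closed formula is known (the sequence 1, 2, 13, 171, 3994, … is OEIS A006905):

from itertools import product

def count_transitive(n):
    pairs = [(i, j) for i in range(n) for j in range(n)]
    total = 0
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, keep in zip(pairs, bits) if keep}
        # R is transitive iff (a,b) in R and (b,c) in R imply (a,c) in R
        if all((a, d) in R for (a, b) in R for (c, d) in R if b == c):
            total += 1
    return total

print([count_transitive(n) for n in range(4)])  # [1, 2, 13, 171]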
{ "language": "en", "url": "https://math.stackexchange.com/questions/579817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
example of a connected matrix Lie group with a nondiscrete normal subgroup H? What is an example of a connected matrix Lie group with a nondiscrete normal subgroup $H$ such that the tangent space of $H$ at the identity contains only the zero matrix?
Take the group $G$ to be $SO(2)\simeq S^1$, a $1$-dimensional circle. Since this group is abelian, any subgroup will be normal. Then take the dense subgroup $H$ isomorphic to $\mathbb{Z}$, generated by the image of $\sqrt{2}$ (or any irrational number) in $S^1$ via the parametrization $\mathbb{R}\to S^1$, $x\mapsto e^{2\pi i x}$. The tangent space to $H$ will be zero, since there are no non-constant smooth curves $\gamma\colon \mathbb{R}\to G$ with image in $H$. I think if you require $H$ to be closed, there can't be such a pathological example.
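In matrix terms, spelling the example out: $H=\{R^n : n\in\mathbb Z\}$, where $$R=\begin{pmatrix}\cos 2\pi\sqrt2 & -\sin 2\pi\sqrt2\\ \sin 2\pi\sqrt2 & \cos 2\pi\sqrt2\end{pmatrix},$$ and $H$ is dense in $SO(2)$ because the fractional parts $\{n\sqrt2\}$ are dense in $[0,1)$ (equidistribution of irrational rotations).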
{ "language": "en", "url": "https://math.stackexchange.com/questions/579900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What's the insight that for a 3x3 matrix with orthonormal columns, the rows are also orthonormal? I know this can be easily proved with simple matrix tricks, but I don't know the insight behind it, and it just feels amazing that if I pick 3 orthonormal vectors in 3d space, their corresponding x, y, z components automatically form an orthonormal basis too! I've been googling a lot with no satisfactory answer; I hope I can find it here. Thanks.
A square real matrix $M$ has orthonormal (ON) columns if and only if $M^{-1}={}^tM$, if and only if $({}^tM)^{-1}=M={}^t({}^tM)$, if and only if the columns of ${}^tM$ are ON, if and only if the rows of $M$ are ON.
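Unpacked a little, the one fact doing all the work is that a one-sided inverse of a square matrix is automatically two-sided: $({}^tM\,M)_{ij}=\langle c_i,c_j\rangle$ is the Gram matrix of the columns, while $(M\,{}^tM)_{ij}=\langle r_i,r_j\rangle$ is the Gram matrix of the rows. ON columns say ${}^tM\,M=I$; for square $M$ this forces $M\,{}^tM=I$ as well, which says exactly that the rows are ON. For non-square matrices the argument breaks down — a $3\times 2$ matrix can have ON columns without its rows being ON.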
{ "language": "en", "url": "https://math.stackexchange.com/questions/580005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Family of Straight line : Consider a family of straight lines $(x+y) +\lambda (2x-y +1) =0$. Find the equation of the straight .... Problem : Consider a family of straight lines $(x+y) +\lambda ( 2x-y +1) =0$. Find the equation of the straight line belonging to this family that is farthest from $(1,-3)$. Solution: Let the point of intersection of the family of lines be P. If we solve $$\left\{\begin{matrix} x+y=0 & \\ 2x-y+1=0 & \end{matrix}\right.$$ we get the point of intersection, which is $P \left(-\dfrac{1}{3}, \dfrac{1}{3} \right)$. Now let us denote the point $(1,-3)$ as $Q$. So, now how do I find $\lambda$ so that the line is farthest from $Q$? The slope of $PQ$ is $m_{PQ} = -\dfrac{5}{2}$, so any line perpendicular to $PQ$ will have slope $\dfrac{2}{5}$. Please suggest how to proceed further. Thanks.
HINT: We can rewrite the equation as $$x(1+2\lambda)+y(1-\lambda)+\lambda=0$$ If $d$ is the perpendicular distance from $(1,-3)$ $$d^2=\frac{\{1(1+2\lambda)+(-3)(1-\lambda)+\lambda\}^2}{(1+2\lambda)^2+(1-\lambda)^2}$$ We need to maximize this which can be done using the pattern described here or here
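To finish along the geometric route the asker set up (a sketch; the calculus route via $d^2$ above gives the same line): the member of the family farthest from $Q$ is the line through $P$ perpendicular to $PQ$, i.e. with slope $\frac{2}{5}$ through $P\left(-\frac13,\frac13\right)$: $$y-\frac13=\frac25\left(x+\frac13\right)\iff 6x-15y+7=0.$$ This is indeed in the family: taking $\lambda=-\frac78$ in $(1+2\lambda)x+(1-\lambda)y+\lambda=0$ gives $-\frac34x+\frac{15}{8}y-\frac78=0$, which is $6x-15y+7=0$ after multiplying through by $-8$.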
{ "language": "en", "url": "https://math.stackexchange.com/questions/580081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }