Trick to proving a group has exactly one idempotent element - Fraleigh p. 48 4.31
If $*$ is a binary operation on a set $S$, an element $x \in S$ is an idempotent for $*$ if $x * x = x$.
Let $\langle G, *\rangle$ be a group and let $x\in G$ such that $x*x = x.$ Then $x*x = x*e$, and by left cancellation, $x = e$, so $e$ is the only idempotent element in a group.
The trick here looks like writing $x$ as $x*e$. How can you prognosticate this? I didn't see it. It also looks like you have to prognosticate that the 'one idempotent element' is the identity element. Is this right? Can someone make this less magical and psychic?
|
This does not look so much like prognostication to me. The hint is trying to make explicitly clear something that is happening. You could instead start by right multiplying both sides by $x^{-1}$. Then you get $xxx^{-1} = xx^{-1} \implies x(e) = e \implies x = e$. By the uniqueness of the identity, there is only one such element.
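As a quick sanity check, here is a brute-force search for idempotents in a concrete group, the multiplicative group $(\mathbb Z/7\mathbb Z)^*$ (my own example, not from the question):

```python
# Brute-force search for idempotents in the multiplicative group (Z/7Z)*.
# By the argument above, the identity 1 should be the only one.
group = [1, 2, 3, 4, 5, 6]
op = lambda a, b: (a * b) % 7
print([x for x in group if op(x, x) == x])  # [1]
```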
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/616696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
component of a vector $\mathbf a$ onto vector $\mathbf b$ Just a bit unsure about the definition. When I look online and at other questions on this site it says that the component of $\mathbf a$ onto $\mathbf b$ is $\dfrac{\mathbf a\cdot\mathbf b}{|\mathbf b|}$ but when I look in my notes and at my lecture slides it says that the component of $\mathbf a$ onto $\mathbf b$ is $\dfrac{\mathbf a\cdot\mathbf b}{\mathbf b\cdot\mathbf b}$. Also is the projection of $\mathbf a$ onto $\mathbf b$:
$$\frac{\mathbf a\cdot\mathbf b}{\mathbf b\cdot\mathbf b}\mathbf b?$$
|
Let $V$ be an $(n+1)$-dimensional inner product space, let ${\bf a} \in V$, and define $A=\text{span}({\bf a})$ and $A^{\perp} = \{ {\bf w} \in V : \langle {\bf w},{\bf a}\rangle=0 \}$.
Notice that $V = A \oplus A^{\perp}$. For any ${\bf b} \in V$ we can uniquely decompose ${\bf b}$ into a component in $A$ and a component in $A^{\perp}$. By asking for the component of ${\bf b}$ onto the vector ${\bf a}$ you are asking for the image of ${\bf b}$ under the canonical projection $\pi : A \oplus A^{\perp} \twoheadrightarrow A$.
Take $\{{\bf a}\}$ as a basis for $A$ and $\{{\bf w}_1,{\bf w}_2,\ldots,{\bf w}_n\}$ as a basis for $A^{\perp}$. We can write ${\bf b}$ as follows:
$${\bf b} = \lambda {\bf a} + \mu_1{\bf w}_1 + \mu_2{\bf w}_2 + \cdots + \mu_n{\bf w}_n$$
Notice that $\pi({\bf b})=\lambda{\bf a}$ and $\lambda$ is the component of ${\bf b}$ on ${\bf a}$. Taking the scalar product:
$$
\begin{eqnarray*}
\langle {\bf a},{\bf b}\rangle &=& \langle {\bf a}, \lambda {\bf a} + \mu_1{\bf w}_1 + \cdots + \mu_n{\bf w}_n\rangle \\ \\
&=& \langle {\bf a},\lambda {\bf a}\rangle + \langle {\bf a},\mu_1 {\bf w}_1\rangle + \cdots + \langle {\bf a},\mu_n {\bf w}_n\rangle \\ \\
&=& \lambda\langle {\bf a},{\bf a}\rangle + \mu_1\langle {\bf a},{\bf w}_1\rangle + \cdots + \mu_n\langle {\bf a},{\bf w}_n\rangle \\ \\
&=& \lambda\langle {\bf a},{\bf a}\rangle + 0 + \cdots + 0 \\ \\
&=& \lambda\langle {\bf a},{\bf a}\rangle \\ \\
\end{eqnarray*}$$
It follows that
$$\lambda = \frac{\langle {\bf a},{\bf b}\rangle}{\langle {\bf a},{\bf a}\rangle}$$
Moreover, we also have
$$\pi({\bf b}) = \frac{\langle {\bf a},{\bf b}\rangle}{\langle {\bf a},{\bf a}\rangle}{\bf a}$$
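A quick numerical illustration of these formulas (numpy, with sample vectors of my own choosing):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 1.0, 4.0])

lam = a.dot(b) / a.dot(a)   # the component of b on a
proj = lam * a              # pi(b), the projection of b onto span(a)

# The residual b - pi(b) should lie in the orthogonal complement of a.
print(lam, proj, a.dot(b - proj))  # the last value is ~0
```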
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/616799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Limits of square root $$\lim_{x\to\infty}\left(\sqrt{x+\sqrt{x+\sqrt{x + \sqrt x} }}-\sqrt x\right) $$
Compute the limit
Can you please help me out with this limit problem?
|
Hint for a simpler one: $$\lim_{x \to \infty} \sqrt{x+\sqrt x}-\sqrt x=\lim_{x \to \infty}\sqrt x\left(\sqrt{1+\frac 1{\sqrt x}}-1\right)$$
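Numerically, both the hint's simpler expression and the original nested one appear to approach $1/2$; here is a quick float check (my own sketch):

```python
import math

def simple(x):
    return math.sqrt(x + math.sqrt(x)) - math.sqrt(x)

def nested(x):
    inner = math.sqrt(x + math.sqrt(x + math.sqrt(x)))
    return math.sqrt(x + inner) - math.sqrt(x)

for x in [1e2, 1e4, 1e8]:
    print(x, simple(x), nested(x))  # both columns tend to 0.5
```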
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/616885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
}
|
Closed form solution to a recurrence relation (from a probability problem) Is there a closed form solution to the following recurrence relation?
$$P(i,j) = \frac{i^{5}}{5i(5i-1)(5i-2)(5i-3)(5i-4)}\sum\limits_{k=0}^{j-5}(1-P(i-1,k))$$
where $P(i,j)=0$ for $j<5$.
The above recurrence is the solution I obtained to a probability problem, and I've been trying to simplify it even further by obtaining a closed formula.
The problem:
Suppose we have $5i$ colored balls, $i$ of each color; call the colors
$1,2,3,4$ and $5$. We pick balls without replacement until we obtain
colors $1,2,3,4$ and $5$ in sequence. Then $P(i,j)$ is the probability that
we will pick colors $1,2,3,4$ and $5$ in sequence in at most $j$ trials.
Note that, using a counting argument, one can show that
$$P(n,5n)= \frac{{n!}^{5}}{(5n)!}\sum\limits_{i=1}^n \frac{{(5n-4i)!}(-1)^{i+1}}{(n-i)!i!}$$
|
Odlyzko handles such problems in "Enumeration of Strings" (pages 205-228 in Combinatorial Algorithms on Words, Apostolico and Galil (eds), Springer, 1985). What you are looking for is strings over an alphabet of 5 symbols such that the string 12345 appears for the first time at the end. The autocorrelation polynomial of 12345 is just $c(z) = 1$. The generating function for the number of strings over an alphabet of $s$ symbols of length $n$ in which the pattern $p$ of length $k$ appears for the first time at the very end is:
$$
B_p(z) = \frac{c_p(z)}{(1 - s z) c_p(z) + z^k}
$$
The waiting time (the average length of strings in which the pattern appears for the first time at the very end) is just:
$$
W_p = B_p(1/s) = s^k c_p(1/s)
$$
In your particular case, $c_p(z) = 1$, so that $W_p = s^k = 5^5 = 3125$.
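A small Monte Carlo check of $W_p = 3125$ in the i.i.d. (with-replacement) setting that the quoted formula addresses; note this is my own sketch, and it does not simulate the finite, without-replacement version in the question:

```python
import random

def waiting_time(pattern=(1, 2, 3, 4, 5)):
    # Draw uniform symbols from {1,...,5} until the pattern ends the stream.
    recent, n = [], 0
    while tuple(recent[-5:]) != pattern:
        recent.append(random.randint(1, 5))
        n += 1
    return n

trials = 2000
print(sum(waiting_time() for _ in range(trials)) / trials)  # ~3125
```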
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/616968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Does this integral have a closed form? I was working with this problem in an exam:
Given $\lambda\in(-1,1)\subset\Bbb R$, find
$$f(\lambda)=\int_{0}^{\pi}\ln\left(1+\lambda \cos x\right)\mathrm{d}x$$
My try: pick $\delta\in (0,1)$ such that $\lambda\in(-\delta,\delta)$. Using the power series of $\ln(1+x)$ and uniform convergence to take $\sum_{n=0}^\infty$ outside $\int_{0}^{\pi}$, I finally got this:
$$f(\lambda)=-\sum_{k=1}^\infty{(2k-1)!!\over (2k)!!}{\lambda^{2k-1}\over 2k}$$
But I still wonder if there is a closed form for $f(\lambda)?$ And is my solution above right or wrong?
|
The answer to your integral is
$$
f(\lambda) = \pi \ln\left(\frac{1+\sqrt{1-\lambda^2}}{2}\right)
$$
Now for the derivation. Given that
$$
\int_0^\pi \cos^n(x) \mathrm{d}x = \frac{1+(-1)^n}{2} \frac{\pi}{2^n} \binom{n}{n/2}
$$
Doing the series expansion of the integrand in $\lambda$ and interchanging the order of summation and integration yields:
$$
f(\lambda) = -\pi \sum_{m=1}^\infty \left(\frac{\lambda}{2}\right)^{2m} \frac{1}{2 m} \binom{2m}{m} = - \pi \int_0^{\lambda} \sum_{m=1}^\infty \frac{x^{2m-1}}{4^m} \binom{2m}{m} \mathrm{d}x
$$
The sum equals
$$
\sum_{m=1}^\infty \frac{x^{2m-1}}{4^m} \binom{2m}{m} = \frac{1}{x} \left(\sum_{m=0}^\infty \frac{x^{2m}}{4^m} \binom{2m}{m} -1 \right) = \frac{1}{x} \left(\frac{1}{\sqrt{1-x^2}} -1 \right)
$$
Integrating this gives the answer, since an antiderivative of $\frac{1}{x}\left(\frac{1}{\sqrt{1-x^2}}-1\right)$ is $-\ln\left(1+\sqrt{1-x^2}\,\right)$.
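A numerical check of the closed form with mpmath, at an arbitrary sample value of $\lambda$ (my own sketch):

```python
from mpmath import mp, quad, log, sqrt, cos, pi, mpf

mp.dps = 25
lam = mpf('0.7')
lhs = quad(lambda x: log(1 + lam * cos(x)), [0, pi])  # the integral
rhs = pi * log((1 + sqrt(1 - lam**2)) / 2)            # the closed form
print(lhs, rhs)  # agree to working precision
```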
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 0
}
|
Any two groups of three elements are isomorphic - Fraleigh p. 47 4.25(b) The answer has no details. Hence maybe the answer is supposed to be quick. But I can't see it.
Hence I took two groups. Call them $G_1 = \{a, b, c\}, G_2 = \{d, e, f\}$.
Then because every group has an identity, I know $G_1, G_2$ have one each.
Hence WLOG pick $c$ as the identity in $G_1$. I want to match letters so pick $e$ as the identity in $G_2$.
Now we have $G_1 = \{a, b, \color{magenta}{c}\}, G_2 = \{d, \color{magenta}{e}, f\}$.
I know every element of a group has an inverse. But how do I apply this to $G_1, G_2$ to simplify them?
And to prove $G_1, G_2$ are isomorphic, how do I envisage and envision what the isomorphism is?
Update Dec. 25, 2013 (1). Answer from B.S. Why does $ab = b$ fail?
$\begin{align} ab & = b \\ & = bc \end{align}$. What now?
(2.) Do I have to do all the algebra work for $G_1$ for $G_2$? Is there some smart answer?
Update Jan. 8, 2014 (1.) I'm confounded by drhab's comment on Dec. 30 2013. Is drhab saying: Even if the domain's identity is the $\color{green}{third}$ letter $\color{magenta}{c}$ but the codomain's identity is the second letter $\color{magenta}{e}$, $d^{\huge{\color{green}{3}}} = e$ anyways? Hence I should've chosen the $\color{green}{third}$ letter in the codomain as the identity too?
What else is drhab saying about this?
(2.) I don't understand drhab's comment on Dec. 28 2013. Why refer to commutativity? It's not a group axiom? And what are the binary operations?
Update: I didn't realize this before, but by dint of Martin Sleziak's comment, this question is just a special case of Fraleigh p. 63 Theorem 6.10 = Pinter p. 109-111 Theorem 11.1.
|
Let $p$ be a prime and $G$ a group of order $p$. If $a\in G$ is not the neutral element, show that the homomorphism $\mathbb Z\to G$ given by $1\mapsto a$ induces an isomorphism $\mathbb Z/p\mathbb Z\to G$. Conclude that any two groups of order $p$ are isomorphic.
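For the original order-$3$ question one can also verify the claim mechanically: fixing the identity as the first element, a brute-force search finds exactly one group table on three elements, so any two groups of order $3$ are isomorphic. This is a sketch of mine, not part of the answer:

```python
from itertools import product

def is_group(t):
    S = range(3)
    assoc = all(t[t[a][b]][c] == t[a][t[b][c]] for a in S for b in S for c in S)
    ident = all(t[0][a] == a and t[a][0] == a for a in S)
    inv = all(any(t[a][b] == 0 for b in S) for a in S)
    return assoc and ident and inv

tables = [[[0, 1, 2], list(r1), list(r2)]
          for r1 in product(range(3), repeat=3)
          for r2 in product(range(3), repeat=3)]
print([t for t in tables if is_group(t)])  # exactly one table: that of Z/3Z
```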
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 1
}
|
Action on G via Automorphism Here is an exercise from Isaacs, Finite Group Theory, $4D.1$:
Let $A$ act on $G$ via automorphism, and assume that $N \trianglelefteq G$ admits $A$ and that $N \geq C_G(N)$. Assume that $(|A|,|N|)=1$. If $A$ acts trivially on $N$, show that its action on $G$ is trivial.
Hint: Show that $[G,A]\leq N$ and consider $C_\Gamma(N)$ where $\Gamma=G\rtimes A$
I can't see how to use the hint.
|
I can only prove that the conclusion is true if the assumption $(|A|,|N|)=1$ is strengthened to $(|A|,|G|)=1$.
Clearly, $[A, N, G]=1$ and $[N,G,A]=1$. By the three subgroups lemma, $[G,A,N]=1$, which implies that $[G,A]\le C_G(N) \le N$. By the condition $(|A|,|G|)=1$, we get $G=C_G(A)[G,A]\le C_G(A)N=C_G(A)$, which implies $[G,A]=1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Divisibility by seven Given a number $n$ whose decimal representation contains only the digits $1, 6, 8, 9$, rearrange the digits of its decimal representation so that the resulting number is divisible by $7$.
If the number has $m$ digits, then after rearrangement it should still have $m$ digits.
If this is not possible, then I need to output "not possible".
EXAMPLE :
$1689$
After rearrangement we can have $1869$, which is divisible by $7$
How should I tackle this problem?
|
If there is no condition that each of these digits must appear at least once, then it is not always possible. Consider $1111$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Solving equations of type $x^{1/n}=\log_{n} x$ First, I'm a new person on this site, so please correct me if I'm asking the question in a wrong way.
I thought I was not a big fan of maths, but recently I stumbled upon an interesting fact which I'm trying to find an explanation for. I've noticed that the graphs of the functions $y = x^{1/n}$ and $y = \log_{n} x$, where $n$ is given and equal for both functions, always have $2$ intersection points. This means the equation $x^{1/n}= \log_{n} x$ must have $2$ solutions, at least that is what I see from the graphs.
I've tried to solve this equation analytically for some given $n$, like $4$, but my skills are very rusty, and I cannot come up with anything. So I'm here for help, and my questions are:
*
*do these $2$ functions always have $2$ intersection points?
*if yes, why? If not, when not?
*how to solve equations like $x^{1/n}= \log_{n} x$ analytically?
|
$$\begin{align}\sqrt[n]x&=\log_nx\\\\x&=t^n\end{align}\ \Bigg\}\iff t=\frac n{\ln n}\cdot\ln t\quad;\quad t=e^u\iff e^u=\frac n{\ln n}\cdot u\iff$$
$$(-u)\cdot e^{-u}=-\frac{\ln n}n\iff u=-W\bigg(-\frac{\ln n}n\bigg)\iff x=t^n=(e^u)^n=e^{nu}$$
$$x=\exp\bigg(-n\cdot W\bigg(-\frac{\ln n}n\bigg)\bigg)$$ where W is the Lambert W function.
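Checking this with mpmath for $n=4$, where the principal branch happens to give $x=16$ (indeed $16^{1/4}=2=\log_4 16$); the second intersection comes from the $-1$ branch of $W$:

```python
from mpmath import mp, lambertw, exp, log, mpf

mp.dps = 25
n = 4
x = exp(-n * lambertw(-log(n) / n).real)  # principal branch W_0
print(x)                                  # ~16.0
print(x**(mpf(1) / n), log(x) / log(n))   # both sides equal ~2
```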
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 1
}
|
Undergraduate-level intro to homotopy I'm looking for an undergraduate-level introduction to homotopy theory.
I'd prefer a brief (<200pp.) book devoted solely/primarily to this topic. IOW, something in the spirit of the AMS Student Mathematical Library series, or the Dolciani Mathematical Expositions series, etc.
Edit: I'm looking for an "easy read", one that aims to give quickly to the reader a feel for the subject.
|
My favorite ~200 page introduction to homotopy theory: http://www.math.uchicago.edu/~may/CONCISE/ConciseRevised.pdf. You can find a physical copy on Amazon for relatively cheap as well.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 0
}
|
Uniqueness of morphism (reasoning in categorial language). This question is related to a previous question of mine.
I figured out that maybe the right thing to ask is for someone to solve one of the problems in Aluffi's book; that way I'll know what the reasoning should look like.
The following question appears in section 2.3 of Aluffi's book:
Let $\varphi: G \to H$ be a morphism in a category with products.
Explain why there is a unique morphism
$$(\varphi \times \varphi): G \times G \to H \times H$$
compatible in the evident way with the natural projections.
Next let $\psi: H \to K$ be a morphism in the same category and
consider morphisms between the products $G \times G, H \times H, K \times K$.
Prove: $$(\psi \varphi) \times (\psi \varphi) = (\psi \times \psi)(\varphi \times \varphi).$$
What should a thorough, rigorous answer to this question look like? Answers with diagrams are preferred (since this is a major part I still don't get).
Please don't leave any holes in reasoning unless they are totally obvious and if you use diagrams please explain how you reason with them.
I'd like to emphasize that I'm not at all asking this question out of laziness. I'm finding myself a bit lost in this new language and in need of guidance.
Big pluses to anyone who would make me understand the big picture and make it all feel like a bad dream.
|
The question isn't right. There isn't a unique morphism $G\times G\to H\times H$, there's a unique morphism... with such and such properties.
I recommend beginning your thinking in the category of sets. If I have a map of sets $G\to H$, is there a "natural" map $G\times G \to H\times H$ that springs to mind? Is there some special property that distinguishes this map from any other?
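In the category of sets the distinguished map is $(\varphi\times\varphi)(g_1,g_2)=(\varphi(g_1),\varphi(g_2))$, the unique map commuting with the two projections. A toy illustration of the composition identity, with maps chosen arbitrarily:

```python
phi = lambda g: 2 * g   # some map G -> H
psi = lambda h: h + 1   # some map H -> K
prod = lambda f: (lambda p: (f(p[0]), f(p[1])))  # f x f acting on pairs

p = (3, 5)
print(prod(lambda g: psi(phi(g)))(p))  # (psi phi) x (psi phi)
print(prod(psi)(prod(phi)(p)))         # (psi x psi)(phi x phi): same pair
```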
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Need Help to convert a grammar into Chomsky Form I have to convert the following grammar into Chomsky Form
$$( \Sigma=\{a,b,c,+\}, \Sigma_Q=\{S,V\},I=S)$$
$$S \to S+S \mid V$$
$$V \to a \mid b \mid c$$
My idea is the following:
$$S_0 \rightarrow S$$
$$S \rightarrow ST|V$$
$$T \rightarrow +S$$
$$V \rightarrow A|B|C$$
$$A \rightarrow a$$
$$B \rightarrow b$$
$$C \rightarrow c$$
Could you tell me if this is right?
|
In a grammar in Chomsky normal form there can only be two types of productions: $A\to BC$ and $A\to a$ where $A,B,C$ are nonterminals (variables, $\Sigma_Q$ in your notation) and $a$ is terminal ($\Sigma$ in your notation).
In particular, "chain productions" which are of the form $A\to B$ are not allowed. You have several of them. In particular, the original productions $V\to a\mid b\mid c$ were OK, while the new $V\to A\mid B\mid C$ isn't.
You have also introduced a special start symbol $S_0$. In the definition of Chomsky normal form as I know it, that's not necessary. Also note that your production $T\to +S$ is of "mixed type" (it has both a terminal and a nonterminal) and is illegal.
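For concreteness, here is one conversion that avoids both problems: first eliminate the chain production $S\to V$ (absorbing $V$'s alternatives into $S$), then binarize $S \to S+S$ using a fresh nonterminal for the terminal $+$. The nonterminal names are my own choice:
$$S \to SP \mid a \mid b \mid c$$
$$P \to QS$$
$$Q \to +$$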
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
irreducibility of $x^{5}-2$ over $\mathbb{F}_{11}$. I am tasked to show that $x^{5}-2$ is irreducible over $\mathbb{F}_{11}$, the finite field of $11$ elements. I've deduced that it has no linear factors by Fermat's little theorem. But showing it has no quadratic factors is proving harder.
My approach so far: assume it did, factor the polynomial, re-multiply, and compare coefficients to obtain a contradiction. I'm having trouble doing that since there are so many cases. I was given the hint
"How many elements are in a quadratic extension of $\mathbb{F}_{11}$?" The answer is $121$, but I don't know how that helps me. Any hints on dealing with the hint would be very nice. Thanks.
|
If $x^5 - 2$ has a quadratic factor, it has a root $\alpha$ in a quadratic extension $K$ of $\mathbb{F}_{11}$. Since the group of units of $K$ has $120$ elements, $\alpha^{120} = 1$. Now $\alpha^5 = 2$, so the order of $2$ must divide ...?
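If a computational check is acceptable, sympy confirms the irreducibility directly, and the order of $2$ modulo $11$ (which drives the contradiction in the hint) is easy to read off:

```python
from sympy import symbols, factor_list

x = symbols('x')
print(factor_list(x**5 - 2, modulus=11))  # a single irreducible quintic factor
print(min(k for k in range(1, 11) if pow(2, k, 11) == 1))  # order of 2 is 10
```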
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
$\ln\left(1+x\right)=x-\dfrac{x^2}{2}+\dfrac{x^3}{3}-\dots$ when $|x|<1$ or $x=1$. Why is the restriction $|x|<1$ or $x=1$? I know from Wikipedia that it is because outside this restriction the function's Taylor series no longer represents it. But how do we derive this restriction?
|
The standard approach is to consider the derivatives of both functions. On the right you obtain $\sum(-x)^n$, which you may recognize as a geometric series, with sum $\frac1{1+x}$, the formula being valid for $|x|<1$. But this is also the derivative of the function on the left. This means the two functions are equal, up to a constant. Evaluate both functions at $x=0$ to see that the constant is zero.
The above leaves a few blanks, of course. Notably, that the power series converges for $|x|<1$ and that we can differentiate it term by term in this interval. This follows from general results on the radius of convergence of power series, and on uniformly convergent sequences of functions.
What the argument does not show is that the equality also holds at $x=1$. This needs a separate proof, and follows from a general theorem due to Abel. Since the series diverges when $x=-1$, the equality does not hold (for $x$ real) beyond the region $-1<x\le 1$.
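A quick float experiment showing both regimes, fast convergence inside $|x|<1$ and the much slower convergence at the Abel boundary point $x=1$:

```python
import math

def partial(x, N):
    return sum((-1) ** (n + 1) * x**n / n for n in range(1, N + 1))

print(partial(0.5, 30), math.log(1.5))    # agree quickly
print(partial(1.0, 10**5), math.log(2))   # agree only slowly at x = 1
```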
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
How to prove that $n^{-2}[x+g(x)+g\circ g(x)+\cdots +g^{\circ n}(x)]$ converges when $n\to\infty$ Let $f:\mathbb R\to\mathbb R$ be a periodic function with period $1$. We assume that $f$ is Lipschitz continuous, and in particular, we assume that there exists an $L\in (0,1)$, such
that
$$
|f(x)-f(y)| \le L|x-y|, \quad \text{for all $x,y\in\mathbb R$.}
$$
Let also $g(x)=x+f(x).$
Show that the limit
$$\lim_{n\to +\infty}\frac1{n^2}[x+g(x)+g(g(x))+\cdots +g^{\circ n}(x)]$$
exists and is independent of $x$, where, for every $n\ge1$, $g^{\circ n}$ is $g$ composed with itself $n$ times, thus $g^{\circ 1}=g$ and, for every $n\ge1$, $g^{\circ n+1}=g\circ g^{\circ n}$.
My attempt: Since
$$f(x+1)=f(x),\forall x\in R$$
then
$$g(g(x))=g(x+f(x))=x+f(x)+f(x+f(x))$$
$$g(g(g(x)))=g(x+f(x)+f(x+f(x)))=x+f(x)+f(x+f(x))+f(x+f(x)+f(x+f(x)))$$
$$\cdots\cdots $$
and I have
$$|g(x)-g(y)|=|x-y+f(x)-f(y)|\le |x-y|+|f(x)-f(y)|<(L+1)|x-y|$$
where $L+1>1$.
So $g$ is also Lipschitz continuous, and that's all I can do.
Thank you for your help.
|
First attempt. One needs to use the Stolz–Cesàro theorem. According to this theorem:
\begin{align}
\lim_{n\to \infty}\dfrac{x+g(x)+g(g(x))+\cdots +\underbrace{g(g(\cdots(g(x))))}_{n-1}}{n^2}&=\lim_{n\to \infty}\frac{\underbrace{g(g(\cdots(g(x))))}_{n}}{2n+1}\\=&\lim_{n\to \infty}\frac{\underbrace{g(g(\cdots(g(x))))}_{n+1}-\underbrace{g(g(\cdots(g(x))))}_{n}}{2},
\end{align}
provided that the last limit exists. Define $x_n=\underbrace{g(g(\cdots(g(x))))}_{n}$. Then
$$
\underbrace{g(g(\cdots(g(x))))}_{n+1}=g(x_n)=f(x_n)+x_n,
$$
and hence
$$
\underbrace{g(g(\cdots(g(x))))}_{n+1}-\underbrace{g(g(\cdots(g(x))))}_{n}=f(x_n)=f\bigg(\underbrace{g(g(\cdots(g(x))))}_{n}\bigg)
$$
We need to show that the limit of $f(x_n)$ exists.
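A numerical experiment supporting the claim, with a concrete admissible $f$ of my own choosing: $f(x)=0.3+0.1\sin(2\pi x)$ is $1$-periodic with Lipschitz constant $0.2\pi<1$. The normalized sums agree for different starting points:

```python
import math

f = lambda x: 0.3 + 0.1 * math.sin(2 * math.pi * x)  # 1-periodic, Lipschitz 0.2*pi < 1
g = lambda x: x + f(x)

def S(x, n):
    # (x + g(x) + ... + g composed n times at x) / n^2
    total, y = 0.0, x
    for _ in range(n + 1):
        total += y
        y = g(y)
    return total / n**2

for x0 in [0.0, 0.3, 0.7]:
    print(x0, S(x0, 2000), S(x0, 20000))  # same limit, independent of x0
```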
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
}
|
Dealing with Generating Functions accurately I'm currently working through Iven Niven's "Mathematics of Choice." In the chapter on Generating Functions, the exercises include problems like:
How many solutions in non-negative integers does the equation $2x+3y+7z+9r=20$ have?
This of course ends up being the same as finding the coefficient of $x^{20}$ in $(1+x^2+x^4+x^6+\cdots+x^{20})(1+x^3+x^6+\cdots+x^{18})(1+x^7+x^{14})(1+x^9+x^{18})$. Maybe I'm just being a big weenie, but dealing with polynomials this large ends up being really tedious and error-prone.
I've realized that you can save the most involved multiplication for last, since that requires the least detail, and I'm of course not bothering to keep track of any powers greater than the one I'm interested in.
Any tips for keeping these giant polynomials manageable?
|
When you can only reasonably evaluate by multiplying out the polynomials, you've described what is pretty much the only approach. Just multiply out, simplest factors first, and keep dropping all coefficients higher than the one you are interested in. In some cases this is all you can do, unfortunately. (Or, use a CAS to do the messy algebra for you.)
However, occasionally you can simplify your generating function in a way that allows you to compute the coefficients by other methods, and this is sometimes (though not always) easier.
Let's take your case as an example. The four polynomials in your product can be written as $\displaystyle \frac{1-x^{22}}{1-x^2}$, $\displaystyle \frac{1-x^{21}}{1-x^3}$, $\displaystyle \frac{1-x^{21}}{1-x^7}$, and $\displaystyle \frac{1-x^{27}}{1-x^9}$. Because we don't care about what happens to the coefficients above $x^{20}$, we can ignore the powers higher than $x^{20}$, and take the product of the resulting functions. That is, your problem is to find the coefficient of $x^{20}$ in $$g(x) = \frac{1}{(1-x^2)(1-x^3)(1-x^7)(1-x^9)}.$$
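Picking up the CAS remark above, extracting the coefficient from $g(x)$ takes one line in sympy; the printed count should be $12$, which matches a hand enumeration over the possible values of $z$ and $r$:

```python
from sympy import symbols, series

x = symbols('x')
g = 1 / ((1 - x**2) * (1 - x**3) * (1 - x**7) * (1 - x**9))
print(series(g, x, 0, 21).removeO().coeff(x, 20))  # 12
```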
One could then use partial fractions to decompose the product into the sum of functions for which one already knows a formula for the $n$-th coefficient; this is the way to get Binet's formula for the Fibonacci numbers from the generating function $\displaystyle f(x) = \frac{x}{1-x-x^2}$. For this problem that would be a lot of work, perhaps more than just multiplying the polynomials, but in general this is quite powerful. Another method is to take many derivatives, observing that the coefficient of $x^n$ is $\frac{1}{n!}$ times the $n$-th derivative at $0$, though in this case I expect this will be at least as messy as doing the original multiplication.
The full power of generating functions is not generally expressed in finding just a single coefficient, but in the way that it allows you to compactly work with all coefficients: e.g., the same generating function $g(x)$ above will give you the solution (via the coefficient of $x^n$) to this problem when $20$ is replaced by any integer $n$.
You may also be interested in this question and its answers. If you are interested in learning a lot more about generating functions, Wilf's Generatingfunctionology is the way to go.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Probability of combinatoric explosion. Someone posted a puzzle on a forum.
Now this is fairly easy. Probability of the reverse: the crate doesn't contain an unusual = the crate contains a weapon, or neither of the two found crates contains an unusual. This produces a quadratic equation; I discard the solution outside the $[0,1]$ range and subtract the other one from $1$ to get the answer.
The interesting part is the 5% of the 55% chance for 2 crates. If you open 1000 crates, you will end
up with about 1100 new crates on average and about no chance of ever reducing this number. It will only grow. You WILL get an infinite number of crates if you start off with enough. Still, starting with just one you're quite likely not to get any specials.
Now what is the chance you will get an infinite number of crates (and as a result, unusuals) if you start off with only one crate?
|
You found the solution to the first question:
Let $p$ be the probability of not having an unusual:
*
*Either you get a weapon
*or two crates without unusual
$$p=0.35+0.55*p^2$$
This quadratic has one root in $[0,1]$, hence
$$p=\frac{10-\sqrt{23}}{11}\approx 0.4731$$
Your question can be answered the same way:
Let $q$ be the probability of having an infinite number of crates:
*
*You open two crates and at least one can lead to an infinite number of crates. This is the complementary of both crates not having infinite number of crates inside.
$$q=0.55*(1-(1-q)^2)=0.55(-q^2+2q)$$
$$q(0.55q-0.1)=0 $$
That leads to two solutions: $q=0$ or $q=\frac{2}{11}$. But as you explained, $q>0$, because the event "having an infinite number of crates" is likely to happen if you have enough crates. Hence
$$q=\frac{2}{11}$$
Note that if you use $0.5$ or $0.45$ instead of $0.55$ the only valid solution is $0$.
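A Monte Carlo sketch of the branching-process reading (each crate independently yields two new crates with probability $0.55$, otherwise its line ends); the survival frequency should be near $2/11\approx 0.18$. The cutoffs for "extinct" versus "exploded" are arbitrary choices of mine:

```python
import random

def survives(pop_cap=1000, max_gen=200):
    crates = 1
    for _ in range(max_gen):
        nxt = sum(2 for _ in range(crates) if random.random() < 0.55)
        if nxt == 0:
            return False  # the line died out
        if nxt > pop_cap:
            return True   # population exploded: effectively infinite
        crates = nxt
    return True

trials = 5000
print(sum(survives() for _ in range(trials)) / trials)  # ~0.18
```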
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/617967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Averages question
There are five sticks. The average length of any four of them is 600. What is the average of all five?
Is it possible to find the average of all with just this information given?
|
Let the lengths of the five sticks be $a_i,1\le i\le 5$
So, $\displaystyle a_1+a_2+a_3+a_4=4\cdot600=2400 \ \ \ \ (1)$
$\displaystyle a_1+a_2+a_3+a_5=4\cdot600=2400 \ \ \ \ (2)$
$\cdots$
Subtracting we get $a_4=a_5$
Similarly, we can show that the lengths of all the sticks are the same.
I should not say anything more:)
Alternatively,
Clearly, there are $\binom 54=5$ combinations.
Observe that each $a_i$ occurs $5-1=4$ times among the combinations.
Adding, we get $(5-1)(a_1+a_2+a_3+a_4+a_5)=5\cdot2400$, so the average is $\dfrac{5\cdot2400}{4\cdot 5}=600$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Smallest non-commutative ring with unity
Find the smallest non-commutative ring with unity. (By smallest, I mean it has the least cardinality.)
I tried rings of size 4 and I found no such ring.
|
When I think of a noncommutative ring with unity, I tend to think of how to create such a ring from $M_n(R)$, the ring of $n \times n$ matrices over the ring $R$. The smallest such ring one can create this way is $R=M_2(\mathbb{F}_2)$; of course, $|R|=16$. Now it is a matter of whether you can find an even smaller ring than this. Indeed, the subring of upper/lower triangular matrices of $R$ is a subring of order $8$ which is a noncommutative ring with unity. This is in fact the smallest such ring.
In fact, there is a noncommutative ring with unity of order $p^3$ for all primes $p$. See this paper for this and many other interesting/useful constructions.
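A direct machine check of the order-$8$ example, the upper triangular $2\times 2$ matrices over $\mathbb F_2$ (a sketch of mine):

```python
from itertools import product

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

R = [((a, b), (0, d)) for a, b, d in product(range(2), repeat=3)]
one = ((1, 0), (0, 1))
print(len(R), one in R)                                    # 8 True
print(any(mul(A, B) != mul(B, A) for A in R for B in R))   # True: noncommutative
```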
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 4,
"answer_id": 0
}
|
Is $f(x)=x+\frac{x}{x+1}$ uniformly continuous on $(0,\infty)$
Is $f(x)=x+\frac{x}{x+1}$ uniformly continuous on $(0,\infty)$
Going from the epsilon delta definition we get:
$$\forall\epsilon>0\ \exists\delta>0 \text{ s.t. } \forall x,y>0:\ |x-y|<\delta\rightarrow \left|x+\frac{x}{x+1}-y-\frac{y}{y+1}\right|<\epsilon$$
But I'm not really sure on how to continue from here.
Alternatively: if $f$ is continuous on $[0,\infty)$ and $\lim\limits_{x\to \infty}f(x)$ exists (finitely), then the function is uniformly continuous. Well, it's easy to see here that $\lim\limits_{x\to \infty}f(x)=\infty$, but is that enough?
|
Take a smaller bite. Note that a sum of uniformly continuous functions is uniformly continuous. We know $x\mapsto x$ is uniformly continuous, so we just deal with $x/(x+1)$.
Also, notice that
$${x\over x + 1 } = 1 - {1\over x + 1}.$$
Now the job is to show that $x\mapsto 1/(x + 1)$ is uniformly continuous. Can you do the rest?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Double Euler sum $ \sum_{k\geq 1} \frac{H_k^{(2)} H_k}{k^3} $ I proved the following result
$$\displaystyle \sum_{k\geq 1} \frac{H_k^{(2)} H_k}{k^3} =- \frac{97}{12} \zeta(6)+\frac{7}{4}\zeta(4)\zeta(2) + \frac{5}{2}\zeta(3)^2+\frac{2}{3}\zeta(2)^3$$
after considering powers of polylogarithms.
You can refer to the following thread.
My question is : are there any papers in the literature which dealt with that result?
Are my evaluations worth publishing ?
|
The following new solution is proposed by Cornel Ioan Valean. Based on a few ideas presented in the book, (Almost) Impossible Integrals, Sums, and Series, like the Cauchy product of $(\operatorname{Li}_2(x))^2$, that is $\displaystyle (\operatorname{Li}_2(x))^2=4\sum_{n=1}^{\infty}x^n\frac{H_n}{n^3}+2\sum_{n=1}^{\infty}x^n\frac{H_n^{(2)}}{n^2}-6\sum_{n=1}^{\infty}\frac{x^n}{n^4}$, where if we multiply both sides by $\displaystyle \frac{\log(1-x)}{x}$ and then integrate from $x=0$ to $x=1$, using that $\displaystyle \int_{0}^{1}x^{n-1}\log(1-x)\textrm{d}x=-\frac{H_{n}}{n}$, we get
\begin{equation*}
\int_0^1 \frac{\log(1-x)}{x}(\operatorname{Li}_2(x))^2 \textrm{d}x=-\frac{1}{3}(\operatorname{Li}_2(x))^3\biggr|_{x=0}^{x=1}=-\frac{35}{24}\zeta(6)
\end{equation*}
\begin{equation*}
=6\sum_{n=1}^{\infty} \frac{H_n}{n^5}-4\sum_{n=1}^{\infty} \frac{H_n^2}{n^4}-2\sum_{n=1}^{\infty} \frac{H_nH_n^{(2)}}{n^3}
\end{equation*}
\begin{equation*}
=5\zeta^2(3)-\frac{17}{3}\zeta(6)-2\sum_{n=1}^{\infty} \frac{H_nH_n^{(2)}}{n^3},
\end{equation*}
where the first sum comes from the classical generalization, $
\displaystyle 2\sum_{k=1}^\infty \frac{H_k}{k^n}=(n+2)\zeta(n+1)-\sum_{k=1}^{n-2} \zeta(n-k) \zeta(k+1), \ n\in \mathbb{N},\ n\ge2$, and the second sum, $\displaystyle \sum_{n=1}^{\infty} \frac{H_n^2}{n^4}=\frac{97}{24}\zeta(6)-2\zeta^2(3)$, is calculated in the mentioned book or in this article.
To conclude, we have
\begin{equation*}
\sum_{n=1}^{\infty}\frac{H_n H_n^{(2)}}{n^3}=\frac{1}{2}\left(5\zeta^2(3)-\frac{101}{24}\zeta(6)\right).
\end{equation*}
Note the present solution circumvents the necessity of using the value of the series $\displaystyle \sum_{n=1}^{\infty} \left(\frac{H_n}{n}\right)^3$.
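The closed form also checks out numerically against a straightforward partial sum (float precision; the tail is of order $\log N/N^2$):

```python
N = 200_000
zeta3 = sum(1 / k**3 for k in range(1, N))
zeta6 = sum(1 / k**6 for k in range(1, 2000))
s = H = H2 = 0.0
for n in range(1, N):
    H += 1 / n        # H_n
    H2 += 1 / n**2    # H_n^(2)
    s += H * H2 / n**3
print(s, (5 * zeta3**2 - 101 / 24 * zeta6) / 2)  # both ~1.4717
```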
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
}
|
Find the limit of $S_n=\sum_{i=1}^n \cosh\left(\frac{1}{\sqrt{n+i}}\right) -n$, as $n\to\infty$?
I stumbled on this question while reading about Riemann sums, as in
$$
\int_a^b f(x)\,dx =\lim_{n\to \infty}\frac{b-a}{n}\sum_{k=1}^n\, f\Big(a+k\frac{b-a}{n}\Big).
$$
With numerical approximations, it seems as though $$\lambda=\lim_{n\to \infty}S_n =\log(\sqrt2).$$ But can we apply the Riemann-sum definition to this sum (by finding a suitable function)? On the other hand, I tried the squeeze theorem on the sum to yield $0.5<\lambda<1$, but still no closed form.
|
We have
$$S_n=\sum_{i=1}^n \cosh\left(\frac{1}{\sqrt{n+i}}\right) -n=\sum_{i=1}^n \left(\cosh\left(\frac{1}{\sqrt{n+i}}\right) -1\right)$$
and by the Taylor-Lagrange inequality we have
$$\left|\cosh\left(\frac{1}{\sqrt{n+i}}\right) -1-\frac{1}{2(n+i)}\right|\le\frac{C}{n^{3/2}}$$
hence
$$\left|S_n-\sum_{i=1}^n\frac{1}{2(n+i)}\right|\le \frac{C}{n^{1/2}}\to0$$
so by Riemann series we have
$$\lim_{n\to\infty}S_n=\frac 1 2\int_0^1\frac{dx}{1+x}=\frac{\log 2}2$$
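A quick numerical check that $S_n \to \frac{\log 2}{2}\approx 0.34657$, consistent with the question's guess $\log\sqrt 2$:

```python
import math

def S(n):
    # sum of (cosh(1/sqrt(n+i)) - 1), which equals the original S_n
    return sum(math.cosh(1 / math.sqrt(n + i)) - 1.0 for i in range(1, n + 1))

for n in [10**3, 10**4, 10**6]:
    print(n, S(n), math.log(2) / 2)
```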
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
$f$ is measurable in a complete measure space; is there then a $g$ measurable in the uncompleted space such that $f = g$ almost everywhere? I am trying to understand the proof of a theorem in Folland. The theorem says that if $(X,M,\mu)$ is a measure space with completion $(X,\bar{M},\bar{\mu})$, and $f$ is $\bar{\mu}$-measurable, then there is an $M$-measurable $g$ such that $f=g$, $\bar{\mu}$-almost everywhere.
He says this is obvious if $f = \chi_E$, the characteristic/indicator function of set $E$. I am not really sure I understood why the theorem is true in this case.
Attempt: If $E$ is not a null set, then we can let $g = \chi_E$. If $E$ is a null set, then $E$ is either in $M$ or $E$ is a subset of a null set in $M$ by definition of completion. If $E$ is in $M$, then let $g = \chi_E$. If not, it is a subset of some null set $S$. Let $g = \chi_S$, so $f = g$ except on $S - E$, which is a null set. Also, we know from earlier in the book that characteristic functions are measurable.
Question: Is my attempt correct? Did I miss out or misunderstood anything?
|
If $E\in\overline{\mathcal M}$, then there exist $E_1,E_2\in {\mathcal M}$, such that
$$
E_1\subset E\subset E_2,\,\,\mu(E_2\smallsetminus E_1)=0 \quad \text{and}\quad \overline{\mu}(E)=\mu(E_1)=\mu(E_2).
$$
Thus
$$
\chi_{E_1}=\chi_{E_2}=\chi_E\quad \text{$\overline{\mu}$-a.e.}
$$
For a general $f$, we assume that $f\ge 0$; if not, we can write $f$ as $f=f_+-f_-$, where $f_+,f_-$ are its positive and negative parts. Then $f$ can be written as
$$
f=\sum_{n\in\mathbb N} a_n\chi_{A_n},
$$
where $a_n>0$ and the $A_n$'s are $\overline{\mu}$-measurable. Choose $B_n\in{\mathcal M}$, s. t. $\chi_{A_n}=\chi_{B_n}$, $\,\,\overline{\mu}$-a.e., and a desired $\mu$-measurable function, $\overline{\mu}$-a.e. equal to $f$, would be
$$
g=\sum_{n\in\mathbb N} a_n\chi_{B_n}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Covering a Riemannian manifold with geodesic balls without too much overlap I'm looking for a proof of the following fact:
Let $M$ be a compact Riemannian manifold. There is a natural number $h$, such that for any sufficiently small number $r>0$, there exists a cover of $M$ by geodesic balls of radius $r$ such that any $h$ of the balls have empty intersection.
|
There is a neighborhood about each point where the exponential map is bilipschitz with constants arbitrarily close to 1. This implies that the images of balls of radius $R$ under the exponential map are almost balls of radius $R$, i.e. they contain balls of radius, say, $3R/4$ and lie in balls of radius $5R/4$. Cover your space with finitely many such neighborhoods.
Take a uniform cubical lattice in $\mathbb R^n$ such that the $r$-neighborhood of the lattice points covers $\mathbb R^n$, and send it to each of the chosen neighborhoods via the exponential map. The balls of radius $4r/3$ about the images of the lattice points cover the manifold, and each intersects only a bounded number of the balls coming from the same exponential map (note that the bound does not depend on $r$). But each point is only in finitely many images of the exponential map, so there is a fixed bound on how many balls of radius $4r/3$ can contain any point, which gives a bound on the number that can intersect.
This is rough, but I feel confident in it, let me know what you think.
TL;DR: the number of neighborhoods in the cover is finite by compactness, and each point only intersects a bounded number of balls in each cover because a Riemannian metric is bilipschitz equivalent to the Euclidean metric.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
Is Linear transformations $T_1,T_2 : \mathbb{R}^n\rightarrow \mathbb{R}^n$ Invertible? Let $T_1$ and $T_2$ be two Linear transformations from $\mathbb{R}^n$ to $\mathbb{R}^n$.
Let $\{x_1,x_2,\cdots,x_n\}$ be a basis of $\mathbb{R}^n$. Suppose that $T_1(x_i)\neq 0$ for every $1\leq i\leq n$ and that $x_i\perp Ker (T_2)$ for every $1\leq i\leq n$.
Which of the following is true?
*
*$T_1$ is invertible
*$T_2$ is invertible
*Both $T_1$ and $T_2$ are invertible
*Neither $T_1$ nor $T_2$ is invertible.
As $T_1(x_i)\neq 0$ for each $1\leq i\leq n$ we do not have $T_1(a_1x_1+a_2x_2+\dots+a_nx_n)=0 $ unless each $a_i=0$ i.e.,$a_1x_1+a_2x_2+\dots+a_nx_n=0$ i.e., $T_1$ is one one thus invertible.
I am not sure if $T_2$ is invertible or not.
we have $x_i\perp Ker (T_2)$ for all $x_i$. Would that be a good idea say something like
$\langle x_i :1\leq i\leq n\rangle \perp Ker(T_2)$ and as they span whole space we would have
$\mathbb{R}^n\perp Ker(T_2)$ and thus $Ker(T_2)=0$ so it is injective so it is invertible...
Thus both $T_1$ and $T_2$ are invertible?
I would be very thankful if someone could confirm whether what has been done here is sufficient/clear.
Thank you.
|
You are wrong that $T_1$ is invertible; it need not be. Indeed for $n=2$, $\{x_1,\ldots,x_n\}$ the standard basis, and $T_1$ given by the matrix $$\begin{pmatrix}1&-1\\-1&1\end{pmatrix}$$ neither of $T_1(x_1)=x_1-x_2$ nor $T_1(x_2)=x_2-x_1$ is zero, but $T_1(x_1+x_2)=0$. Moreover one can take $T_2=I$ to satisfy $x_i\perp\ker(T_2)=\{0\}$ for $i=1,2$.
Your argument that $\ker(T_2)=\{0\}$ always, hence $T_2$ is invertible, is correct.
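The counterexample is easy to inspect numerically (numpy, standard basis):

```python
import numpy as np

T1 = np.array([[1, -1], [-1, 1]])
x1, x2 = np.array([1, 0]), np.array([0, 1])
print(T1 @ x1, T1 @ x2)   # both nonzero, so T1(x_i) != 0
print(T1 @ (x1 + x2))     # the zero vector: T1 is not injective
print(np.linalg.det(T1))  # 0.0: not invertible
```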
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Associated primes of a quotient module.
Let $R$ be a Noetherian ring, $M$ a finitely generated $R$-module and $p\in \operatorname{Ass}(M)$. Suppose $x$ is an $M$-regular element and $q$ is a minimal prime over $I=(p,x)$. How can we show that $q\in \operatorname{Ass}(M/xM)$?
Note: Since $q$ is a minimal prime over $(p,x)$, we know that in $R_q$, $I_q$ is $qR_q$-primary.
|
I don't feel comfortable answering my own question, but I am sketching a solution as I was asked to do so above. As seen in the solution by Youngsu and by YACP, we may assume that $(R,m)$ is a Noetherian local ring and $m$ is minimal over $(p,x)$. As $p\in \operatorname{Ass}M$, $p=(0:y)$ for some $0\neq y\in M$. By Krull's intersection theorem there exists an $r$ such that $y\in x^rM\setminus x^{r+1}M$. Thus $y=x^ra$ for some $a\in M\setminus xM$. Since $x$ is $M$-regular, it follows that $p=(0:a)$. Since $p\in \operatorname{Ass}M$, we have the following natural composite map, say $f$: $Ra\cong R/p\subseteq M\to M/xM$. Note that $\operatorname{Im}(f)\neq 0$. Observe that $\operatorname{Im}(f)\cong R/(p,x)$, which is of finite length. Thus $m\in \operatorname{Ass}(\operatorname{Im}(f))\subseteq \operatorname{Ass}(M/xM)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Does uniform integrability plus convergence in measure imply convergence in $L^1$?
Does uniform integrability plus convergence in measure imply convergence in $L^1$?
I know this holds on a probability space. Does it hold on a general measure space? I have tried googling; it returned very few results on UI on measure spaces, and none of them mentioned a result like the one in the title. This comes from a discussion about another question.
The proof I have seen for a probability space breaks for a general measure space.
By UI, I mean that
$\sup_{f}\int_{\{|f|>h\}} |f|\,d\mu$ goes to $0$ as $h$ goes to infinity.
|
On $\Bbb R$ with Lebesgue measure, the sequence $(f_n)$ defined by $f_n={1\over n}\cdot \chi_{[n,2n]}$ for each $n$ would furnish a counterexample. Here, $\chi_A$ is the indicator function on the set $A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Finding powers of ideals. Can anyone assist me in finding the product of ideals?
Let $R=K[x_1,x_2,\dots,x_n]$, and $I=x_1R+x_2R+\dots+x_nR$. Then what is $I^m$?
Note that although $x_1^2+x_2^2$ belongs to $I^2$, it cannot be written as the product of two elements of $I$.
|
In general, if $I_1 = \sum_{i=1}^n x_i R$ and $I_2=\sum_{j=1}^m y_jR$, then $$I_1I_2 = \sum_{i=1}^n \sum_{j=1}^m x_iy_jR$$ is generated by the products of the generators of $I_1$ with the generators of $I_2$.
In your case it is $$I^m = \sum_{1\le i_1\le i_2\le \cdots\le i_m\le n} x_{i_1}x_{i_2}\cdots x_{i_m}R,$$ i.e. $I^m$ is generated by all monomials of total degree $m$. So if $R$ is the ring of polynomials in $x_1,\dots,x_n$, then $I^m$ is the ideal of polynomials whose monomials all have total degree $\geq m$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Elementary number theory problem Let $X = \{n \in \mathbb{N}: \text{the decimal representation of } 6n \text{ contains none of the digits } 0,1,2,3,4\}.$ For example, $93 \in X$ because $6 \times 93=558.$
Could anyone advise me how to prove that there exist $2$ digits such that every $n \in X$ contains at least one of them? Thank you.
|
$1\cdot6=6$, $98\cdot6=588$ and $96\cdot6=576$, so $1, 98, 96\in X$; we therefore wish to prove that every number in $X$ contains either a $1$ or a $9$.
Suppose a number $n\in X$ contains neither a $1$ nor a $9$.
Then $6n$ has more digits than $n$ (because $1$ is not a digit, so the leading digit of $n$ is at least $2$). Moreover, the left-most digit of $6n$ must be $5$, since there is no $9$: think of doing the multiplication by hand. The leading digit of $n$ is at most $8$, and $48=8\cdot6$ plus a carry, which is at most five, is at most $53$; so the first digit of $6n$ is $5$ and the second is at most $3$. Clearly such a second digit is not valid.
I.e., assume a number $n=(a_1a_2a_3a_4\ldots a_k)_{10}\in X$ with $a_i\ne 1,9$. Since $a_1\ne 1$, the representation of $6n=(b_1b_2b_3\ldots b_{k+1})_{10}$ must have one more digit ($6\cdot 2=12>10$). But the supposition states $b_i\ne 0,1,2,3,4$, so $b_1$ and $b_2$ are at least $5$. Note that $a_1a_2a_3$, however, is at most $888$, since no digit can be $9$. Then we have $6\cdot(a_1a_2a_3)\le 5328$, and so either $b_1\le 4$ or $b_2\le 3$, which is a contradiction.
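The claim also survives an exhaustive machine check for small $n$ (a sketch of mine):

```python
# For every n < 10^6 such that 6n uses only the digits 5..9 (i.e. n is in X),
# verify that n itself contains a 1 or a 9.
bad = [n for n in range(1, 10**6)
       if set(str(6 * n)) <= set('56789')
       and '1' not in str(n) and '9' not in str(n)]
print(bad)  # [] -- no counterexample
```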
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to prove that, for each $n\in \mathbb N$,
$$
\frac{x^{n+1}-1}{x-1} = 1 + x + x^2 + \dots + x^n
$$
where $x\neq 1$, $x\in \mathbb R$.
I have been struggling with proofs like this and cannot work any of them out. Please help me: how does one prove this for each $n\in \mathbb N$, using mathematical induction?
My try: how do I complete these steps?
|
This is equivalent to $(x-1)(1+x+\cdots+x^n) = x^{n+1}-1$.
Base case, $n=0$,
$$(x-1)(x^0)=x^{0+1}-1$$
Now assume it is true up to $n-1$, so
$$(x-1)(1+x+\cdots x^{n-1}+x^n) = [(x-1)(1+x+\cdots x^{n-1})] + (x-1)(x^n) $$
$$= [x^n - 1] + [x^{n+1}-x^n] =x^{n+1}-1 $$
Now divide both sides by $(x-1)$ to get
$$(1+x+\cdots x^{n-1}+x^n) = \frac{x^{n+1}-1}{x-1}$$
which we can do so long as $x\neq 1$.
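A spot check of the identity for a few values (floating point, $x\neq 1$):

```python
for x in [2.0, -0.5, 3.0]:
    for n in [0, 1, 5]:
        lhs = (x ** (n + 1) - 1) / (x - 1)
        rhs = sum(x**k for k in range(n + 1))
        print(x, n, abs(lhs - rhs) < 1e-9)  # True in every case
```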
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/618959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Open Sets Definition I understand the definition of an open set in a metric space, and this carries over if you're dealing with a metrizable topological space. However, I'm not sure if there is a standard definition of an open set in a general topological space. I have read that it may vary in different settings.
I found one definition on Wikipedia:
If ($X$, $D$) is a Topological Space, then a subset $O$ of $X$ is open if and only if $O$ is a neighbourhood of each of its points.
This is how I expressed it mathematically:
for all $x \in O$ there exists an open set $U$ containing $x$ such that $U \subseteq O$.
|
The definition is equivalent to the usual definition in a topological space. When you have a metric space $(X,\rho)$, you can consider the topological space $(X,\tau_\rho)$ where the topology $\tau_\rho$ consists of all sets in $X$ that are unions of open balls $B(x,\varepsilon)$. Observe that
$(1)$ $X$ is a union of open balls, namely $X=\bigcup_{x\in X} B(x,1)$, so $X\in \tau_\rho$.
$(2)$ $\varnothing$ is the empty union of balls, so $\varnothing \in \tau_\rho$.
$(3)$ The arbitrary union of sets that are union of balls is clearly a union of balls.
$(4)$ If $B(x,\varepsilon_1)$ and $B(y,\varepsilon_2)$ have nonempty intersection, we can always find a point $z$ and a ball $B(z,\varepsilon_3)$ contained in $B(x,\varepsilon_1)\cap B(y,\varepsilon_2)$. This can be used to prove the intersection of two sets that are union of balls is again a union of balls, hence is in $\tau_\rho$ again.
Thus the tentative topology $\tau_\rho$ that $\rho$ induces is indeed a topology.
When dealing with metric spaces, we say that $O$ is a nbhd of $x$ if it contains an open ball centered at $x$. It can be shown that open balls are nbhds of each of their points. Hence, open balls are open using the definition you supply.
Now, we'd like to prove
PROP A set in a metric space $(X,\rho)$ is open (i.e. it is a nbhd of each of its points) iff it is the union of open balls, that is, iff it is in $\tau_\rho$.
PROOF Suppose the set $S$ is a union of open balls. Pick $x$ in your set. Since $S$ is a union of balls, $x$ must be in some ball in the union, call it $B_1$. But $B_1$ is a nbhd of each of its points, so it contains some open ball containing $x$. This ball will be contained in $S$, so $S$ is open. Conversely, suppose $S$ is a nbhd of each of its points. Then for each $x$ we can find a ball $B(x,\varepsilon_x)$ contained in $S$. Then $S=\bigcup_{x\in S} B(x,\varepsilon_x)$ will be a union of balls $\blacktriangleleft$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Distance between points in hyperbolic disk models I was puzzling over the distance between points in hyperbolic geometry and found that the same formula is used for calculating distance in the Poincaré disk model as in the Beltrami–Klein model, namely
$$ d(PQ)=\frac{1}{2} \left| \log \left(\frac{|QA||BP|}{|PA||BQ|}\right)\right| $$
where $A$ and $B$ are the ideal points (extremities) of the line (in the Beltrami–Klein model) or of the circle or diameter (in the Poincaré disk model) that contains $P$ and $Q$, while $|PA|, |PB|, |QA|, |QB|$ are the Euclidean distances between them (but see below for an extra question).
But let $P$ and $Q$, for simplicity, be points on a diameter. Then, by going from the Beltrami–Klein model to the Poincaré disk model, the points $P$ and $Q$ get closer to the centre while the ideal endpoints stay at the same points, so the Euclidean distances change and the formula could give a different value.
Therefore (I think) the formula cannot be correct for both models, and so my question is: for which model is this equation valid, and what is the formula for the other model?
ADDED LATER:
A more worked-out example (the Schweikart constant: the altitude of the largest right-angled isosceles triangle):
Let r be the radius of the disk
Then $ A = ( - \frac{1}{2} r \sqrt{2} , - \frac{1}{2} r \sqrt{2} ) $ ,
$ B = ( \frac{1}{2} r \sqrt{2} , \frac{1}{2} r \sqrt{2} ) $ ,
P = (0,0) and Q is on the line x=y
and the hypothenuse is the hyperbolic line between (r,0) and (0,r)
The euclidean lengths for PQ are:
For the Poincare Disk model: $ PQ = r ( \sqrt2 - 1 ) $
For the Beltrami-Klein model: $ PQ= \frac{1}{2} r \sqrt{2} $
What gives for the altitude:
For the Poincare Disk model:
$ d(PQ)= \frac{1}{2} \left| \log ( 1 + \sqrt{2}\,) \right| $
And for the Beltrami-Klein model:
$ d(PQ)= \frac{1}{2} | \log ( 3 + 2 \sqrt{2} ) | = \log ( 1 + \sqrt{2}) $
What is right way to calculate the Schweikart Constant?
The Schweikart Constant is $ \log ( 1 + \sqrt{2}) $ , so it looks like the value in the Beltrami-Klein model is correct, but what is the correct formula for the Poincare Disk model?
Additional question:
For lengths in the Poincaré disk model:
if the hyperbolic line is a Euclidean circle, are the Euclidean lengths measured as chord lengths or as arc lengths (along the circle)?
|
There are two common versions of the Poincaré disk model:
*
*Constant curvature $-1$; metric element $\frac{4(dx^2+dy^2)}{(1-x^2-y^2)^2}$
*Constant curvature $-4$; metric element $\frac{dx^2+dy^2}{(1-x^2-y^2)^2}$
The difference is where you stick that annoying factor of $4$. Wikipedia uses version 1, and its description of the relation with Beltrami-Klein model is also based on version 1.
Let's compute the distance from $(0,0)$ to $(x,0)$, $x>0$, in each version:
*
*$\int_0^x \frac{2\,dt}{ 1-t^2 } = \log\frac{1+x}{1-x}$
*$\int_0^x \frac{ dt}{ 1-t^2 } = \frac12\log\frac{1+x}{1-x}$
As you can see, it's the second version that has $\frac12$ in front of the logarithm.
The transformation $s=\frac{2u}{1+u^2}$, stated in Wikipedia under "relation to the Poincaré model", exactly doubles the Poincaré model distance from the origin, because
$$\log\frac{1+s}{1-s}=2\log\frac{1+u}{1-u}$$
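To tie this back to the Schweikart example: with $r=1$ and the distance measured from the centre along a diameter with ideal points $\pm 1$, the Klein point $q=\frac{\sqrt2}{2}$ with the $\frac12\log$ cross-ratio formula and the Poincaré (curvature $-1$) point $q=\sqrt2-1$ with a plain $\log$ both give $\log(1+\sqrt2)$. This numerical sketch (mine, not the answerer's) illustrates that the factor $\frac12$ belongs to the Klein model:

```python
import math

def cross_ratio_dist(q, half):
    # distance from the centre to q along a diameter with ideal points -1 and 1
    val = abs(math.log((1 + q) / (1 - q)))
    return val / 2 if half else val

print(cross_ratio_dist(math.sqrt(2) / 2, half=True))   # Klein:    0.8813...
print(cross_ratio_dist(math.sqrt(2) - 1, half=False))  # Poincare: 0.8813...
print(math.log(1 + math.sqrt(2)))                      # 0.8813...
```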
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
}
|
Finding a Uniformizer of a Discrete Valuation Ring Suppose I have a discrete valuation ring. Then what are some techniques for explicitly finding a uniformizer? I'm especially interested in situations where the ring is given similarly to the following example.
Suppose we take the projective variety defined by the equation $C: X^2+Y^2-Z^2 = 0$. If $P \in C$ is a point on this curve, then the local ring $\mathbb{C}[C]_P$ is a discrete valuation ring. Thus, we should be able to exhibit a uniformizer.
If we scale $Z$ to $1$ for the moment, we can think of this as a circle of radius $1$. A line through the point $(0, 1)$ will intersect the circle at exactly one other point (except when it is horizontal and intersects $(0, 1)$ with multiplicity $2$). Thus, if we let $t$ be the slope of such a line, we can parametrize any point on the curve as $(\frac{2t}{1+t^2}, \frac{1-t^2}{1+t^2}, 1)$ and, scaling back, $(2t, 1-t^2, 1+t^2)$. We see that this satisfies $C$ for all complex $t$. Now, I would like to say that for a given point, something like $(1+t^2)X-(2t)Z$ should be a uniformizer, but I don't know of a good way to tell if this is true or how to prove it.
|
As the local ring at a point $P$ is defined as $\mathcal{O}_{C,P} = \varinjlim_{P \in U} \mathcal{O}_C(U)$, given any point $P \in C$ we can first pass to whichever affine open that contains $P$, and then calculate the direct limit over all opens in that affine $U$. As passing to an affine open corresponds to dehomogenizing, this is a very powerful idea which I will now illustrate.
Say without loss of generality that $P = [a_0 : b_0: 1] \in U_2$, the affine open set where $z\neq 0$. Then
$$\mathcal{O}_{C,P} = \left(\frac{k[X,Y]}{X^2 + Y^2 - 1}\right)_{(X-a_0, Y-b_0)}$$
with $a_0,b_0$ satisfying $a_0^2 + b_0^2 - 1 = 0$.
The ultimate point
If $b_0 \neq 0$, then a uniformizing parameter for $\mathcal{O}_{C,P}$ is given by the line $X-a_0$. If $b_0 = 0$, then $a_0 \neq 0$ since $[0:0:1] \notin C$, and a uniformizing parameter is then given by $Y-b_0$, i.e. by $Y$.
Proof
Let us assume we are in the case $b_0 \neq 0$. The case $b_0 = 0$, $a_0 \neq 0$ follows along similar lines. We will now show $(X-a_0,Y -b_0) = (X-a_0)$. Note the inclusion $\supseteq$ is clear. For the other inclusion, as $X^2 + Y^2 - 1 = 0$ on $C$, we have
$$\begin{eqnarray*} X^2 + Y^2 - 1 &=& X^2 + (Y+b_0)(Y-b_0) +b_0^2 - 1 \\
&=& X^2 + (Y+b_0)(Y-b_0) - a_0^2 \end{eqnarray*}$$
which implies that $Y-b_0 = (a_0^2 - X^2)/(Y +b_0)$. Note we can invert $Y+b_0$ using the hypothesis $b_0 \neq 0$, as $Y+b_0 \notin (X-a_0,Y-b_0)$. We have found a generator for the maximal ideal of the local ring at $P$, which then must be a uniformizing parameter for the DVR.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Analytical Geometry problem with complex numbers - alternate solutions. The question is to show that the equation of the lines making angles $45^\circ$ with the line:
$$ \bar{a}z + a\bar{z} + b = 0; \;\;\;\;\; a,z \in \mathbb{C}, b \in \mathbb{R} $$
and passing through a point $c \in \mathbb{C}$ is:
$$ \dfrac{z-c}{a} \pm i\dfrac{\bar{z} - \bar{c} } { \bar{a} } = 0 $$
Now, I know one method to solve this problem. Taking $z = x + iy$ and finding the slope of the given line, and then finding the slopes of the required lines...
But that is way too long! Are there better, more elegant methods?
|
Drawing figures will help you understand my answer.
Let $L$ be the given line, and let $D(d)$ be the foot of the perpendicular from $C(c)$ to $L$. Noting that the normal vector to $L$ is $a$, there exists $k\in\mathbb R$ such that
$$d+ka=c\iff d=c-ka.$$
In the following, we suppose that $k\not=0$. The case $k=0$ means that $c$ is on $L$; this case is easy, so I'll treat it at the end.
Now, let $L_1,L_2$ be the lines we want, and let $E(e), F(f)$ be the intersection points of $L$ with $L_1$ and of $L$ with $L_2$, respectively.
Hence, we have
$$e=d+iak, f=d-ika.$$
(Do you see why? Again, drawing figures will help you.)
Noting that the line which passes through $\alpha,\beta$ is represented by
$$(\bar{\beta}-\bar{\alpha})z-(\beta-\alpha)\bar{z}=\bar{\beta}\alpha-\beta\bar{\alpha},$$
we know $L_1$ is represented as
$$(\bar{e}-\bar{c})z-(e-c)\bar{z}=\bar ec-e\bar c$$
$$\iff (\bar d-\bar aki-\bar{c})z-(d+iak-c)\bar{z}=(\bar d-\bar aki)c-(d+iak)\bar c$$
Setting $\bar d=\bar c-k\bar a$ in this and dividing both sides by $k\ (\not=0)$ gives us
$$\bar a(1+i)(z-c)+a(i-1)(\bar z-\bar c)=0$$
Dividing both sides by $a\bar a(1+i)$ gives us
$$\frac{z-c}{a}+\frac{\bar z-\bar c}{\bar a}\times\frac{i-1}{1+i}=0.$$
Noting $(i-1)/(1+i)=i$, this is what we want.
We can repeat the same argument as above for $L_2$, so we can prove it for the $k\not=0$ case.
Finally, I'll treat the $k=0$ case. This means that $c$ is on $L$. Hence, $L_1$ is the line which passes through $c$ and $c+a-ia$. So we can repeat the same argument as above. Q.E.D.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
which of the following subsets of $\mathbb R^2$ are uncountable.
*
*$\{\,(a,b)\in \mathbb R^2\mid a\leq b\,\}$
*$\{\,(a,b)\in\mathbb R^2\mid a+b\in\mathbb Q\,\}$
*$\{\,(a,b)\in \mathbb R^2\mid ab\in \mathbb Z\,\}$
*$\{\,(a,b)\in\mathbb R^2\mid a,b\in \mathbb Q\,\}$.
I know option 1 is uncountable and option 4 is countable.
I think options 2 and 3 are uncountable, but I am not sure. Can someone help me?
|
$\mathbb{Q}$ being countable, $\mathbb{Q}^2$ is also countable, and consequently $4$ is countable. Choosing $a=y$, $b=n/y$, where $n$ is a natural number and $y$ is a positive irrational less than $1$, we can satisfy the requirement of $3$ in uncountably many ways, the irrationals in $(0,1)$ being uncountable.
$a=x-y$, $b=x+y$ (where $x$ is rational and $y$ irrational) gives $a+b=2x\in \mathbb{Q}$, i.e. there are uncountably many choices for $2$. Choosing $a$ and $b$ both irrational with $a\le b$, we can satisfy the requirement of $1$ in uncountably many ways. So $1,2,3$ are uncountable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Find the following limit: $\lim_{n\to \infty}\frac{n!e^n}{n^n}$. How can I find the limit below:
$$\lim_{n\to \infty}\frac{n!e^n}{n^n}$$
I tried to use Stirling's approximation and got
$$
\lim_{n\to\infty}\frac{n!e^n}{n^n}=\lim_{n\to\infty}\frac{{\sqrt{2n\pi}(n/e)^n}}{(n/e)^n}=\lim_{n\to\infty}\sqrt{2n\pi}=+\infty
$$
Is this right?
|
Let $$a_n =\frac{n! e^n}{n^n}.$$ Then $$\frac{a_{n+1}}{a_n} = e \left(1+\frac{1}{n}\right)^{-n} \geq 1,$$ since $\left(1+\frac{1}{n}\right)^{n} \leq e$; so the sequence is increasing, but this alone does not prove divergence. To get divergence, use $\ln(1+x) \leq x - \frac{x^2}{2} + \frac{x^3}{3}$ for $0<x\le1$, which gives $n\ln\left(1+\frac{1}{n}\right) \leq 1 - \frac{1}{2n} + \frac{1}{3n^2}$ and hence $$\frac{a_{n+1}}{a_n} = e^{\,1 - n\ln(1+1/n)} \geq \exp\left(\frac{1}{2n} - \frac{1}{3n^2}\right).$$ Telescoping from $a_1=e$, $$\ln a_n \geq 1 + \sum_{k=1}^{n-1}\left(\frac{1}{2k}-\frac{1}{3k^2}\right) \to \infty$$ because the harmonic series diverges, which means $\lim a_n = +\infty$, in agreement with your Stirling computation $a_n \sim \sqrt{2\pi n}$.
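For what it's worth, a quick numeric check of the Stirling rate (a sketch; `lgamma` avoids overflow of $n!$):

```python
import math

for n in (10, 100, 1000):
    log_a = math.lgamma(n + 1) + n - n * math.log(n)        # log(n! e^n / n^n)
    print(n, math.exp(log_a) / math.sqrt(2 * math.pi * n))  # ratio tends to 1
```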
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How can I calculate this series? How can I calculate the series
$\displaystyle{%
\sum_{n=0}^{\infty }{\left(-1\right)^{n}
\over \left(2n + 1\right)\left(2n + 4\right)}\,\left(1 \over 3\right)^{n + 2}\
{\large ?}}$
|
$\newcommand{\+}{^{\dagger}}%
\newcommand{\angles}[1]{\left\langle #1 \right\rangle}%
\newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}%
\newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}%
\newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}%
\newcommand{\dd}{{\rm d}}%
\newcommand{\ds}[1]{\displaystyle{#1}}%
\newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}%
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}%
\newcommand{\fermi}{\,{\rm f}}%
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}%
\newcommand{\half}{{1 \over 2}}%
\newcommand{\ic}{{\rm i}}%
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}%
\newcommand{\isdiv}{\,\left.\right\vert\,}%
\newcommand{\ket}[1]{\left\vert #1\right\rangle}%
\newcommand{\ol}[1]{\overline{#1}}%
\newcommand{\pars}[1]{\left( #1 \right)}%
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}%
\newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}%
\newcommand{\sech}{\,{\rm sech}}%
\newcommand{\sgn}{\,{\rm sgn}}%
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}%
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
$\ds{%
{\cal I} \equiv \sum_{n=0}^{\infty }{\left(-1\right)^{n}
\over \left(2n + 1\right)\left(2n + 4\right)}\,\left(1 \over 3\right)^{n + 2}:\ {\large ?}}$
Let's
$\ds{%
\fermi\pars{x} \equiv \sum_{n=0}^{\infty}
{\pars{-1}^{n}x^{n + 2}\over \pars{2n + 1}\pars{2n + 4}}}$ such that
$\ds{{\cal I} = \fermi\pars{1 \over 3}}$.
\begin{align}
\fermi'\pars{x}&=\half\sum_{n = 0}^{\infty}{\pars{-1}^{n}x^{n + 1}\over 2n + 1}
\\[3mm]
\fermi''\pars{x}&=\half\sum_{n = 0}^{\infty}
{\pars{-1}^{n}\pars{n + 1}x^{n}\over 2n + 1}
={1 \over 4}\sum_{n = 0}^{\infty}\pars{-1}^{n}x^{n}
+
{1 \over 4}\sum_{n = 0}^{\infty}{\pars{-1}^{n}x^{n}\over 2n + 1}
\\[3mm]&={1 \over 4}\,{1 \over 1 + x} + {1 \over 2x}\,\fermi'\pars{x}
\quad\imp\quad
\fermi''\pars{x} - {1 \over 2x}\,\fermi'\pars{x} = {1 \over 4}\,{1 \over 1 + x}
\\
{1 \over x^{1/2}}\,\fermi''\pars{x} - {\fermi'\pars{x} \over 2x^{3/2}}
&={1 \over 4}\,{1 \over x^{1/2}\pars{x + 1}}
\quad\imp\quad
\totald{\bracks{x^{-1/2}\fermi'\pars{x}}}{x} = {1 \over 4}\,{1 \over x^{1/2}\pars{x + 1}}
\end{align}
$$
\fermi'\pars{x} = \half\,x^{1/2}\arctan\pars{x^{1/2}} + Cx^{1/2}
$$
where $C$ is a constant. However, $\fermi'\pars{x} \sim x/2$ when $x \sim 0$ which leads to $C = 0$.
Since $\fermi\pars{0} = 0$
\begin{align}
{\cal I} &= \fermi\pars{1 \over 3} = \half\int_{0}^{1/3}x^{1/2}\arctan\pars{x^{1/2}}\,\dd x
\\[3mm]&=
\left.{1 \over 3}\,x^{3/2}\arctan\pars{x^{1/2}}\right\vert_{0}^{1/3}
-
{1 \over 3}\int_{0}^{1/3}
x^{3/2}\,{1 \over \pars{x^{1/2}}^{2} + 1}\,\half\,x^{-1/2}\,\dd x
\\[3mm]&=
\underbrace{{1 \over 3}\,\pars{1 \over 3}^{3/2}\
\overbrace{\arctan\pars{\root{3} \over 3}}^{\ds{\pi/6}}}
_{\pi\root{\vphantom{\large A}3}/162}
-
{1 \over 6}\int_{0}^{1/3}\pars{1 - {1 \over x + 1}}\,\dd x
\\[3mm]&=
\color{#0000ff}{\large{\pi\root{3} \over 162} - {1 \over 18} + {1 \over 6}\ln\pars{4 \over 3}} \approx 0.0260
\end{align}
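A short numeric cross-check of the closed form (a sketch in Python):

```python
from math import pi, sqrt, log

partial = sum((-1)**n / ((2*n + 1) * (2*n + 4)) * (1/3)**(n + 2) for n in range(60))
closed = sqrt(3) * pi / 162 - 1/18 + log(4/3) / 6
print(partial, closed)   # both ≈ 0.02598
```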
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
The difference between $\Delta x$, $\delta x$ and $dx$ $\Delta x$, $\delta x$ and $dx$ are used when talking about slopes and derivatives. But I don't know what the exact difference is between them.
|
Let $y=f(x)$.
$\Delta x$ and $\delta x$ both denote the change in $x$ (or increment of $x$). Some books prefer to use capital delta $\Delta$ and some lowercase delta $\delta$. The change can be small or large, but often we talk about the case that $\Delta x$ is small and especially $\Delta x\to 0$.
In mathematics, $dx$ is another independent variable which can assume any number.
$$-\infty<dx<\infty.$$
$df$ is a function of two variables $x$ and $dx$. Its value is denoted by $dy$ and
$$dy=f'(x)dx$$
Therefore, $dx$ does not need to be small. Often in calculus it is said that $dx=\Delta x$, but there is a difference between them: while $\Delta x$ must be small enough that $x+\Delta x$ lies within the domain of $f$, there is no restriction on $dx$.
In an old-fashioned approach, which is still widely used in physics, $dx$ is called an infinitesimal (an infinitely small change in $x$).
In calculus, the use of infinitesimals is sometimes beneficial, especially in integration applications (for example, to derive the formula for arc length or the areas of surfaces of revolution), as we do not need to go through the limit-of-sums process.
Unfortunately there are many answers to this question which are completely off-base, and my original answer was deleted, so I have to add the following reference from Thomas's calculus, 3rd edition.
*
*The notation for partial differentiation is $\partial x$ (and not $\delta x$).
*$dx$ is not the limit of $\Delta x$. The limit of $\Delta x$ is zero when $\Delta x\to 0$.
*I did not talk about non-standard analysis.
Here is the link for those who want to study more (George B Thomas, Calculus and Analytic Geometry, 3rd edition, p. 82):
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 5,
"answer_id": 4
}
|
Help with finding tangent to curve at a point
Find an equation for the tangent to the curve at $P\left( \dfrac{\pi}{2},3 \right )$ and the horizontal tangent to the curve at $Q.$
$$y=5+\cot x-2\csc x$$
$y\prime=-\csc ^2 x -2(-\csc x \cot x)$
$y\prime= 2\csc x \cot x - \csc ^2 x\implies$ This expression gives the slope of the curve at any $x$.
Now I find the slope at $x=\dfrac{\pi}{2}:$
$$y^{\prime}=2\csc \left( \frac{\pi}{2} \right) \cot \left( \frac{\pi}{2} \right)- \csc ^2 \left( \frac{\pi}{2} \right)$$ $$y^{\prime}= -1$$
Equation for the tangent to the curve at $P$:
$$y-3=-1\left( x-\frac{\pi}{2} \right)$$ $$y=-x+\frac{6+\pi}{2}$$
Then I found the horizontal tangent at $Q$, which is where the slope is $0.$ So I set the slope equation equal to $0:$
$$y^{\prime}= 2\csc x \cot x - \csc ^2 x=0$$ $$-\csc ^2 x = -2\csc x \cot x$$
$$-\frac{1}{\sin^2 x}=-2\left( \frac{1}{\sin x} \frac{\cos x}{\sin x} \right )$$
$$\cos x = \frac{1}{2}$$
I don't know how to proceed from here. Since $\cos x = \dfrac{1}{2}$, $\cos^{-1}\left (\dfrac{1}{2}\right)=\dfrac{\pi}{3}$.
Plugging this into the equation would give the $y$ value, and I'd be able to find the equation of the tangent line, but I'm confused because there is not just one $x$ to consider, since $x=2n\pi \pm \dfrac{\pi}{3}$.
The answer for equation of tangent at $Q$ is $y=5-\sqrt{3}$, and I don't know how to get this.
Can you please show how to work this last part in details? Thank you.
|
The first part looks good. For the second part as you see the solution to the equation gives
$$2 \cot x \csc x - \csc^2 x = 0 \Rightarrow x=2n \pi \pm\frac{\pi}{3}.$$
Choose $x=\frac{\pi}{3}$ as it is one such solution. We find the point on $y$ to be
$$5+\cot\left( \frac{\pi}{3} \right)-2\csc\left( \frac{\pi}{3} \right)=5-\sqrt{3},$$
And so $y=5-\sqrt{3}$ is indeed a horizontal tangent line, in particular the horizontal tangent line at $x=\frac{\pi}{3}$. You can verify this in Wolfram Alpha.
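A quick symbolic check with SymPy (a sketch) confirms the slope at $P$ and the horizontal tangent at $x=\pi/3$:

```python
import sympy as sp

x = sp.symbols('x')
y = 5 + sp.cot(x) - 2*sp.csc(x)
yp = sp.diff(y, x)
print(sp.simplify(yp.subs(x, sp.pi/2)))   # -> -1, the slope at P
print(sp.simplify(yp.subs(x, sp.pi/3)))   # -> 0, horizontal tangent
print(sp.simplify(y.subs(x, sp.pi/3)))    # -> 5 - sqrt(3)
```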
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Information Entropy Applied to Complexity Theory I was just wondering whether or not information entropy has significant applications to complexity theory.
I ask because of a simple example I thought of. It comes from a riddle. Suppose you had 8 balls, one of which is slightly heavier than the rest but all of them look identical. You are tasked with determining the odd one out by using a scale, where you place any number of balls on each side. What is the least number of times you can use the scale?
At first you might think 3, by splitting the balls in half each time, but you can actually do it in 2, by splitting the balls into (approximately) thirds each time, weighing two of them (of equal size) and if they balance then you know that the ball remains in the unweighed group. So the question then becomes what is the minimum number of weighings required for $n$ balls instead?
If you have $n$ balls then the probability that the heavy ball is any one of them is equal, so your current uncertainty is $H(1/n,\cdots,1/n) = \log n$. Since the scale can only do three things (fall left, fall right, or stay balanced), the maximum amount of entropy for a weighing is $H(1/3,1/3,1/3) = \log 3$. If you know which ball it is, the entropy of the situation is $0$. Each weighing decreases the total entropy by at most the entropy of that weighing. So, no matter what, you'll require at least $(\log n) / (\log 3) = \log_3 n$ weighings, i.e. at least $\lceil\log_3 n\rceil$, to discover the heavy ball.
I'm just curious whether or not methods like this are ever used to discover a lower bound to more complicated problems.
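To make the bound concrete, here is a small Python sketch (my own illustration, not part of the original riddle) comparing the entropy lower bound $\log_3 n$ with the number of weighings the ternary-split strategy needs, namely the smallest $k$ with $3^k \geq n$:

```python
import math

def min_weighings(n):       # ternary split: smallest k with 3**k >= n
    k = 0
    while 3**k < n:
        k += 1
    return k

for n in (2, 3, 8, 9, 27, 28, 100):
    print(n, round(math.log(n, 3), 3), min_weighings(n))
```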
|
Information theory has been used to show lower bounds in communication complexity. This workshop at STOC '13 has talks going over the basics of the theory and some applications.
The description of Amit Chakrabarti's talk "Applications of information complexity I" mentions the use of information complexity in answering lower bound questions:
Information complexity was invented as a technique to prove a very
specific direct sum result in communication complexity. Over the next
decade, the notion of information complexity has been generalized,
extended, and refined, leading to the rich theory we see today. I
shall survey the key stages of this development, focusing on concrete
lower bound questions that spurred it: both inside communication
complexity and from applications in data streams and data structures.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Plane Geometry problem I came across this problem in a mathematics-related Facebook group. Could anyone advise on the solution to it (i.e. hints only)? Thank you.
|
Since triangle ABC is equilateral, we know that angle BAC is $60^\circ$. Therefore, angle PAQ is also $60^\circ$; we have a circle theorem that tells us that angle POQ is thus twice that measure, or $120^\circ$. The diameter AD bisects that angle, so angle POD is $60^\circ$.
Diameter AD is also an altitude of equilateral triangle ABC: thus, if $ \ s \ $ is the length of one side of that triangle, the diameter of the circle is $ \ \frac{\sqrt{3}}{2}s \ $ . The radius OD therefore has length $ \ \frac{\sqrt{3}}{4}s \ $ . Since the altitude AD bisects side BC, then the segment BD has length $ \ \frac{1}{2}s \ $ . Hence, $ \ \tan(\angle BOD ) \ = \ \frac{2}{\sqrt{3}} \ $ .
Now, the measure of angle POB, designated as $ \ \phi \ , $ is the difference given by the measure of angle POD (call it $ \ \alpha \ $ ) minus the measure of angle BOD (call that $ \ \beta \ $ ) . We can then use the formula for the tangent of the difference of two angles, with $ \ \tan(\angle POD) \ = \ \tan(\alpha) \ = \ \tan 60^\circ \ = \ \sqrt{3} \ $ , to obtain
$$ \tan \phi \ = \ \tan( \alpha - \beta ) \ = \ \frac{\tan \alpha \ - \ \tan \beta}{1 \ + \ \tan \alpha \cdot \tan \beta} \ = \ \frac{\sqrt{3} \ - \ \frac{2}{\sqrt{3}}}{1 \ + \ \sqrt{3} \cdot \frac{2}{\sqrt{3}}} \ = \ \frac{ \frac{3 \ - \ 2}{\sqrt{3}}}{1 \ + \ 2 } \ = \ \frac{1}{3 \ \sqrt{3}} \ . $$
We thereby conclude that $ \ \cot \phi \ = \ 3 \ \sqrt{3} \ . $
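A numeric check of the final value (a sketch):

```python
import math

alpha = math.radians(60)             # angle POD
beta = math.atan(2 / math.sqrt(3))   # angle BOD
phi = alpha - beta
print(1 / math.tan(phi), 3 * math.sqrt(3))   # both ≈ 5.196
```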
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Looking for a more elegant / generic proof of the reducibility of a polynomial in $K[[X,Y]]$ The polynomial $P(X,Y)=XY-(X+Y)(X^2+Y^2)$ is irreducible in $K[X,Y]$, as a sum of two homogeneous forms of degree 2 and 3 ($K$ is supposed to be algebraically closed). To look at the irreducibility in $K[[X,Y]]$, I first worked in $K[[X]][Z]$, where $Y=X^2Z$.
The polynomial becomes $Q(X,Z)=X^3(X^3Z^3+X^2Z^2+(X-1)Z+1)$. Using Hensel's Lemma in $K[[X]]$, I found that the polynomial $F(Z)=X^3Z^3+X^2Z^2+(X-1)Z+1$ has $1$ as a root modulo (X), and, with $F'(Z)=3X^3Z^2+2X^2Z+X-1$, $F'(1)=3X^3+2X^2+X-1$ is a unit in $K[[X]]$. Therefore, there is a root to $F$ looking like $Z=1+XH(X)$ where $H\in K[[X]]$. Then we can write $F(Z)=(Z-1-XH(X))(X-1+X^2(XZ^2+(1+XG(X))Z+T(X)))$ where $G, T\in K[[X]]$.
Going back to the initial polynomial $P$, we find that $P(X,Y)=(Y-X^2+X^2H(X))(X^2-X+Y^2+XY+X^2G(X)Y+X^3T(X))$ where $H,G,T \in K[[X]]$. Since the two factors of $P$ are not invertible, and $K[[X,Y]]$ is a UFD, we can deduce that $P$ is reducible.
After two unsuccessful approaches, this is the easiest one I found (and brute force looks too difficult). Is there a less pedestrian/ad hoc method for this?
|
I got two interesting answers on MO here https://mathoverflow.net/a/153102/3333. One elegantly generalizes the use of Hensel's lemma to the case where the lowest-degree term of the polynomial is a product of distinct factors. The other is based on a republished article and is more systematic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Orthonormal matrices Is there a more direct reason for the following:
If the columns of $n\times n$ square matrix are orthonormal, then its rows are also orthonormal.
The standard proof involves showing that the left inverse of a matrix is the same as the right inverse, and thereby concluding that if $Q^TQ = I$, then $QQ^T = I$. This seems to be more of an algebraic manipulation. Can someone offer me a geometric insight?
Thanks
|
If $Q$ is orthonormal, its transpose is its inverse, and vice versa. Understanding this is key to your question. You are asking for a geometric understanding of this, rather than a symbolic algebra understanding. So let's give that a try.
Because columns are orthonormal, then what does $Q^T$ do to the $i$th column of $Q$? If you are in the habit of seeing matrix multiplication with a vector as a sequence of dot products of rows with that vector (which captures the degree to which the vector is parallel to each of the rows), then you see that $Q^T$ takes the $i$th column to the $i$th standard basis vector.
But then what does $Q$ do to the $i$th standard basis vector? Again if you see matrix multiplication with a vector as a sequence of dot products, then you see that $Q$ applied to the $i$th standard basis vector captures the projection of each of $Q$'s rows onto the $i$th standard basis vector, yielding $Q$'s $i$th column.
We've just established that $Q$ and $Q^T$, as actions on vectors, exchange the set of columns of $Q$ with the set of standard basis vectors. So they are inverse actions.
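A tiny numeric illustration of this (a sketch with NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # Q has orthonormal columns
print(np.allclose(Q.T @ Q, np.eye(4)))             # True: columns orthonormal
print(np.allclose(Q @ Q.T, np.eye(4)))             # True: rows orthonormal too
```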
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/619998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Taylor expansion of $e^{\cos x}$ I have to find the 5th order Taylor expansion of $e ^{\cos x}$. I know how to do it by computing the derivatives of the function, but the 5th derivative is about a mile long, so I was wondering if there is an easier way to do it.
I'd appreciate any help.
|
Use
$$e^{\cos x}=e\cdot e^{\cos x-1}.$$
Then substitute the power series expansion of $\cos x-1$ for $t$ in the power series expansion of $e^t$. What makes this work is that the series for $\cos x-1$ has $0$ constant term. For terms in powers of $x$ up to $x^5$, all we need is the part $1+t+\frac{t^2}{2!}$ of the power series expansion of $e^t$, and only the part $-\frac{x^2}{2!}+\frac{x^4}{4!}$ of the series expansion of $\cos x -1$.
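SymPy will carry out exactly this bookkeeping (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.exp(sp.cos(x)), x, 0, 6))
# -> E - E*x**2/2 + E*x**4/6 + O(x**6), i.e. e*(1 - x^2/2 + x^4/6) + O(x^6)
```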
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 2
}
|
evaluation of $\lim_{x\rightarrow \infty}\frac{\ln x^n-\lfloor x \rfloor }{\lfloor x \rfloor} = $ (1) $\displaystyle \lim_{x\rightarrow \infty}\frac{\ln x^n-\lfloor x \rfloor }{\lfloor x \rfloor} = $, where $n\in \mathbb{N}$ and $\lfloor x \rfloor = $ floor function of $x$
(2)$\displaystyle \lim_{x\rightarrow \infty}\left({\sqrt{\lfloor x^2+x \rfloor }-x}\right) = , $where $\lfloor x \rfloor = $ floor function of $x$
$\bf{My\; Try}::$ for (1) one :: We can write as $\displaystyle \lim_{x\rightarrow \infty}\frac{n\cdot \ln x-\lfloor x \rfloor }{\lfloor x \rfloor}$
and we can say that when $x\rightarrow \infty$, Then $\lfloor x\rfloor \rightarrow x$
So $\displaystyle \lim_{x\rightarrow \infty}\frac{n\cdot \ln(x)-x}{x} = n\lim_{x\rightarrow \infty}\frac{\ln (x)}{x}-1$
Now Let $\displaystyle L = \lim_{x\rightarrow \infty}\frac{\ln(x)}{x}{\Rightarrow}_{L.H.R} =\lim_{x\rightarrow \infty}\frac{1}{x} = 0$
So $\displaystyle \lim_{x\rightarrow \infty}\frac{n\cdot \ln x-\lfloor x \rfloor }{\lfloor x \rfloor} = n\cdot 0-1 =-1$
$\bf{My\; Try}::$ for (2)nd one::we can say that when $x\rightarrow \infty$, Then $\lfloor x^2+x\rfloor\rightarrow (x^2+x)$
So $\displaystyle \lim_{x\rightarrow \infty}\left({\sqrt{x^2+x}-x}\right) = \lim_{x\rightarrow \infty}\frac{\left({\sqrt{x^2+x}-x}\right)\cdot \left({\sqrt{x^2+x}+x}\right)}{\left({\sqrt{x^2+x}+x}\right)}$
$\displaystyle \lim_{x\rightarrow \infty}\frac{x}{\left(\sqrt{x^2+x}+x\right)} = \frac{1}{2}$
Now my doubt is can we write when $x\rightarrow \infty$, Then $\lfloor x\rfloor \rightarrow x$
and when $x\rightarrow \infty$, Then $\lfloor x^2+x\rfloor\rightarrow (x^2+x)$
please clear me
Thanks
|
No. You cannot write so. In this case, going back to the definition is the best. So, in order to solve your questions, use the following:
$$x-1\lt \lfloor x\rfloor \le x\Rightarrow \frac{x-1}{x}\lt \frac{\lfloor x\rfloor}{x}\le 1\Rightarrow \lim_{x\to\infty}\frac{\lfloor x\rfloor}{x}=1$$
where $x\gt0.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Integral of the product of an bounded and any continous functions If $u:[0,1]\rightarrow \mathbb{R}$ is a bounded measurable function such that for all $v\in C[0,1]$
$$\int_0^1uvdx=0$$then show that u is zero almost everywhere on $[0,1].$
Thanks in advance for any help!
|
Since $u$ is bounded and measurable, we have $u \in L^2[0,1]$. Since $L^2[0,1]$ is the completion of $C[0,1]$ with respect to the 'inner product' norm, we can find $u_n \in C[0,1]$ such that $u_n \to u$. Since $u_n$ is continuous, we have $\int u_n u = 0$, and since $| \int (u-u_n)u | \le \|u-u_n\| \|u\|$, we see that
$\lim_n \int u_n u = \int u^2 = 0$. Hence $u(t) = 0$ for a.e. $t$.
If $u \in L^1[0,1]$ instead, we can apply a different technique. The function $\sigma(t) = 1_{[a,b]}(t)$ can be approximated (in the $\|\cdot\|_1$ norm) by uniformly bounded, continuous $\sigma_n$ so that $\sigma_n(t) \to \sigma(t)$ for a.e. $t$.
The dominated convergence theorem shows that $\lim_n \int u \sigma_n = \int u \sigma = \int_a^b u$. Since the $\sigma_n$ are continuous, each $\int u\sigma_n = 0$, hence $\int_a^b u = 0$ for all $a,b \in [0,1]$. By considering the function $g(x) = \int_0^x u$, the Lebesgue differentiation theorem gives $g'(x) = u(x)$ for a.e. $x$, from which it follows that $u(x) = 0$ for a.e. $x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
prove that the Galois group $Gal(L:K)$ is cyclic Let $L$ be a field extension of a field $K$. Suppose $L=K(a)$, and $a^n\in K$ for some integer $n$. If $L$ is Galois extension of $K$ (i.e., $L$ is the splitting field of $f(x)\in K[x]$, and $f$ is separable over $L$), then prove that the Galois group $Gal(L:K)$ is cyclic.
|
The statement is true if we impose the extra condition that $K$ contains a primitive $n$-th root of unity.
Proof: Letting $G_n$ be the group of all $n$-th roots of unity, we can show there is an injection $Gal(L/K) \to G_n$ given by $\sigma \mapsto \frac{\sigma(a)}{a}$. Since $G_n$ is cyclic and every subgroup of a cyclic group is cyclic, we are done.
If we further impose the condition that $a^{i}\notin K$ for all $0<i<n$, the above map is an isomorphism, so the Galois group has order $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What is the interpretation of the eigenvectors of the jacobian matrix? I'm trying to think about the jacobian matrix as a abstract linear map.
What is the interpretation of the eigenvalues and eigenvectors of the jacobian?
|
What Frank Science has said in the question comment above is right. I'm simply expanding on his comment here:
Since the Jacobian has eigenvectors, it is square i.e. the input and output space have same dimensions. If there is some kind of natural interpretation to the input and output basis and they can be mapped to each other, the eigenvectors of the Jacobian represent those directions in the input space, where if you move locally (small amount), you move in the same direction in the output space. This interpretation is no different from that of the eigenvectors of any matrix. The only addition is that of local (small) movement, because the Jacobian approximates the original function locally.
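A concrete numeric illustration (a sketch; the map and the point are my own choices): moving slightly along an eigenvector of the Jacobian moves the output along the same direction, scaled by the eigenvalue.

```python
import numpy as np

f = lambda z: np.array([z[0] + 2*z[1], 3*z[1]])   # Jacobian is [[1, 2], [0, 3]] everywhere
J = np.array([[1.0, 2.0], [0.0, 3.0]])
vals, vecs = np.linalg.eig(J)
i = int(np.argmax(vals.real))                     # eigenvalue 3
v = vecs[:, i].real
p, h = np.array([0.5, 0.5]), 1e-6
step = (f(p + h*v) - f(p)) / h                    # local displacement in the output
print(step, vals[i].real * v)                     # same vector: direction kept, scaled by 3
```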
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
}
|
Weierstrass $\wp$-function: $(\partial_z \wp(z,\omega))^2$ Let $\vartheta(z,\omega)$ be the Riemann theta function. For $j \in \mathbb{Z}$ let $c_j$ be the coefficient of $z^{j}$ in the Laurent expansion of $\partial_z \log \vartheta \left(z + \frac{1 + \omega}{2}, \omega \right)$ at $z$ = 0. The Weierstrass $\wp$ function is
$$
\wp(z, \omega) = - \partial_z^{2} \log \vartheta \left(z + \frac{1 + \omega}{2}, \omega \right) + c_1.
$$
I have successfully shown that $\wp$ is $\omega$- and $1$-periodic and a few other properties. I'm stuck at the last property I have to show.
Let $ e_1 = \wp \left(\frac{1}{2}, \omega \right), \; e_2 = \wp \left(\frac{\omega}{2}, \omega \right)$ and $e_3 = \wp \left(\frac{1+\omega}{2}, \omega \right)$. Then
$$
(\partial_z \wp(z,\omega))^{2} = 4(\wp(z,\omega) -e_1)(\wp(z,\omega) - e_2)(\wp(z,\omega) -e_3)
$$
Any hints on how I could start the proof are very much appreciated!
|
I have successfully shown, that $\wp$ is $\omega$- and $1$-periodical and a few other properties.
If among these other properties are (considering $\wp$ as a function of $z$ only here)
*
*$\wp$ is even,
*$\wp$ has order $2$ (that is, takes each value in $\widehat{\mathbb{C}}$ exactly twice in a fundamental parallelogram, counted with multiplicity; equivalently, it has only one pole, of order $2$, or two simple poles),
you are more or less done.
Consider
*
*the zeros of $\partial_z\wp$ (there are $3$),
*the order of the pole in $0$, and
*the leading coefficient in the Laurent expansion of both sides.
Let $f(z) = \wp'(z)^2 - 4(\wp(z)-e_1)(\wp(z)-e_2)(\wp(z)-e_3)$. Then $f$ is an elliptic function that can have poles only in the lattice points. As the Laurent expansion of $\wp(z)$ starts with $z^{-2} + \dotsc$, then $\wp'(z) = -2z^{-3} + \dotsc$, so the Laurent expansion of $f$ begins $$(-2z^{-3})^2 - 4(z^{-2})^3 + \dotsc = 0z^{-6}+\dotsc,$$ hence the order of the pole of $f$ is $< 6$, therefore $f$ attains the value $\infty$ with multiplicity less than $6$ in a fundamental parallelogram.
On the other hand $f$ has three zeros of order $\geqslant 2$ in the fundamental parallelogram, so it attains the value $0$ with multiplicity at least $6$.
But a non-constant elliptic function has equally many poles and zeros (counted with multiplicity) in a fundamental parallelogram (see below), hence the above shows $f$ is constant (it has more zeros than poles), and since $f$ attains the value $0$, we have $f \equiv 0$.
Let $\Omega = \langle \omega_1, \omega_2\rangle$, where $\operatorname{Im} \dfrac{\omega_2}{\omega_1} > 0$, be a lattice, and $g$ a non-constant elliptic function for the lattice $\Omega$. Let $a \in \mathbb{C}$ be such that $g$ has neither zeros nor poles on the boundary of the parallelogram $P_a = \{ a + \alpha \omega_1 + \beta \omega_2 : \alpha,\beta \in [0,1]\}$. The logarithmic derivative of $g$ is also an elliptic function for the lattice $\Omega$, and has neither zeros nor poles on the boundary of $P_a$. Let $\zeta_1,\, \dotsc,\, \zeta_k$ be the distinct zeros of $g$ in $P_a$, with multiplicities $\alpha_1,\, \dotsc,\, \alpha_k$, and let $\pi_1,\, \dotsc,\, \pi_m$ be the poles of $g$ in $P_a$, with respective multiplicities $\beta_1,\, \dotsc,\, \beta_m$. Then $g'/g$ has simple poles in the $\zeta_\kappa$ and $\pi_\mu$, and is holomorphic everywhere else in a neighbourhood of $\overline{P}_a$. In the $\zeta_\kappa$ resp. $\pi_\mu$, writing $g(z) = (z-w)^r\cdot h(z)$ with $h$ holomorphic and nonzero in a neighbourhood of $w$ reveals that the residue of $g'/g$ in $\zeta_\kappa$ is $\alpha_\kappa$, and the residue in $\pi_\mu$ is $-\beta_\mu$. Hence, by the residue theorem,
$$\frac{1}{2\pi i} \int_{\partial P_a} \frac{g'(z)}{g(z)}\,dz = \sum_{\kappa = 1}^k \alpha_\kappa - \sum_{\mu = 1}^m \beta_\mu.$$
On the other hand, grouping the pairs of parallel sides, we compute
$$\begin{align}
\int_{\partial P_a} \frac{g'(z)}{g(z)}\, dz
&= \int_a^{a+\omega_1} \frac{g'(z)}{g(z)} - \frac{g'(z+\omega_2)}{g(z+\omega_2)}\,dz + \int_{a}^{a+\omega_2} \frac{g'(z+\omega_1)}{g(z+\omega_1)} - \frac{g'(z)}{g(z)}\,dz\\
&= 0 + 0\\
&= 0
\end{align}$$
by the periodicity. Hence
$$\sum_{\kappa = 1}^k \alpha_\kappa = \sum_{\mu = 1}^m \beta_\mu$$
that is, a non-constant elliptic function has equally many zeros as poles, counting multiplicities (considering $g - c$ shows that all values in the Riemann sphere are attained the same number of times).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
kernel of a monic morphism Problem
Suppose $\mathscr{C}$ is an arbitrary category with zero object. $A$ and $B$ are two objects of $\mathscr{C}$. Let $f\in Mor_\mathscr{C}(A,B)$. It's given that $f$ is monic.
I need to show that $f$ has a kernel.
My claim Object $A$ with morphism $\iota$ (defined below) is the kernel of $f$. I did the following:
1- Suppose $Z$ is the zero object. I defined the morphisms:
$ \phi_{a} : Z \rightarrow A $ ,
$ \phi_{b} : Z \rightarrow B $
$ \psi_{a} : A \rightarrow Z $ ,
$ \psi_{b} : B \rightarrow Z $
Since Z is both initial and terminal these morphisms are in some sense "unique".
2- I defined $\iota=\phi_{a} \circ \psi_{a}$. I showed that $f \circ \iota = 0_{AB}$, i.e. the zero morphism from $A$ to $B$.
3- Suppose there is an object $D$ with morphism $g: D \rightarrow A$ such that $f \circ g = 0_{DB}$. I showed that for every morphism $h: D \rightarrow A$ we have $g = \iota \circ h$.
4- I failed to prove uniqueness of h.
My question Could you please tell me if I am on the right path or not? If I am, how can I fix the 4th step? Thanks
|
Well, you are basically headed in the right direction, but not exactly on the right track. You should start over and simplify.
You miss the observation that the domain of the kernel is going to be the zero object itself, and not $A$.
Claim: The kernel of a monic $f:A\to B$ will be the unique arrow $Z\to A$.
Proof: (your turn)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Absolute value equality involving complex analysis I'm preparing for a complex analysis prelim which I'll take next summer by consulting Ahlfor's Complex Analysis: An Intro to the Theory of Analytic Functions of One Variable (3rd edition). My question pertains to Exercise 4 of Chapter 1, Section 1.5, and it is as follows:
Show that there are complex numbers $z$ satisfying $$|z-a| + |z+a|= 2|c|$$ if and only if $|a| \leq |c|$. If this condition is fulfilled, what are the smallest and largest values of |z|?
Well, for the first part of the question I was able to show only one direction, i.e. if there are complex numbers $z$ such that $|z-a| + |z+a|= 2|c|$, then $|a| \leq |c|$ by applying the triangle inequality. Specifically, I did the following: $$ \begin{array} {lcl} 2|a| &=& |2a| \\ &=& |a-z+z+a| \\&\leq& |a-z| + |z+a| \\&=& |z-a| + |z+a| \\&=& 2|c|. \end{array}$$ Dividing both sides of the inequality by $2$ yields the result in one direction. The other direction is giving me trouble, and so is the second part of the exercise. Some helpful hints and advice would be much appreciated.
|
You can rotate the complex plane so that $a$ is real and non-negative.
Let $z= x+ i \,y$. Then the left hand side,
$$
\sqrt{(x+a)^2+y^2} + \sqrt{(x-a)^2+y^2},$$
attains its minimum value $2|a|$ exactly when $y=0$ and $x\in[-a,a]$, i.e. on the segment joining the foci $\pm a$. Since the left hand side is continuous and tends to $\infty$ as $|z|\to\infty$, it takes the value $2|c|$ somewhere if and only if $|a|\le |c|$, which settles the remaining direction.
For $|a|\le|c|$ the solution set is the ellipse with foci $\pm a$ and semi-major axis $|c|$, so the largest value of $|z|$ is $|c|$ (at the ends of the major axis) and the smallest is $\sqrt{|c|^2-|a|^2}$ (at the ends of the minor axis).
Note: Added as an afterthought.
If you do not want to make the initial rotation, then just consider $z$ of the form
$$
z = \lambda \, (-a) + (1-\lambda) a = a \,(1 - 2 \lambda),~~~ 0 \le \lambda \le 1$$
and check that for all these $z$ the left hand side equals $2|a|$, its minimum value; the same continuity argument then applies.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Differences between $L^p$ and $\ell^p$ spaces Could someone explain some differences between the $L^p$ and $\ell^p$ spaces? Thanks a lot.
|
Both $L$ and $\ell$ are the first letter of Lebesgue. According to the definitions, there are interpretations for both the $L^p$ spaces and the $\ell^p$ spaces. Frigyes Riesz introduced the $L^p$ spaces, and he proved that $L^2$ and $\ell^2$ are isomorphic. Of course, $\ell^2$ is a Hilbert space.
1. $L^p$ (function) spaces
$L^p$ spaces are function spaces defined using a natural generalization of the $p$-norm for finite-dimensional vector spaces. $L^p$ spaces are called Lebesgue spaces, named after Henri Lebesgue.
2. $\ell^p$ (sequence) space
The above-mentioned $p$-norm can be extended to vectors that have an infinite number of components (sequences), which yields $\ell^p$ space: the sequence space consisting of the $p$-power summable sequences with the $p$-norm (and, for $p=\infty$, the bounded sequences).
However, some mathematicians use $L^p$ and $\ell^p$ interchangeably when all that matters is having a Banach space, and likewise $L^2$ and $\ell^2$ for the Hilbert space, because the spaces are isomorphic; the usage is not strict in the literature. Glossing over the subtle difference between the two concepts sometimes makes people confused.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 3
}
|
What information can one get from $f(x,y)\geq -3x+4y$ provided that $f$ is continuously differentiable near $(0,0)$?
Let $V$ be a neighborhood of the origin in ${\Bbb R}^2$ and $f:V\to{\Bbb R}$ be continuously differentiable. Assume that $f(0,0)=0$ and $f(x,y)\geq -3x+4y$ for $(x,y)\in V$. Prove that there is a neighborhood $U$ of the origin in ${\Bbb R}^2$ and a positive number $\epsilon$ such that, if $(x_1,y_1),(x_2,y_2)\in U$ and $f(x_1,y_1)=f(x_2,y_2)=0$, then
$$
|y_1-y_2|\geq\epsilon|x_1-x_2|.
$$
Using the assumption, we have
$$
f(x)=f'(0)x+o(\|x\|)
$$
which gives the local behavior of $f$ near the origin. But how the inequality $f(x,y)\geq -3x+4y$ would be used here?
|
Note that $f_y(0,0) \geq 4,$ and hence $Df (0,0) \neq 0.$ Hence the implicit function theorem tells you that that there is a neighborhood $(-\epsilon,\epsilon)$ of $0 \in \mathbb{R}$ and a $C^1$ diffeomorphism $g: (-\epsilon, \epsilon) \to g (-\epsilon, \epsilon)$ such that $f(x,y) = 0 \Rightarrow y = g(x),$ for all appropriate $x,y.$
Hence, given $(x_1,y_1)$ and $(x_2,y_2)$ with $x_1 \neq x_2$, both points sufficiently close to $0$ and satisfying $f(x_j,y_j) = 0$, $j = 1,2$, we see $y_1 \neq y_2$ (by the local injectivity part of the implicit function theorem).
Hence $y_1 - y_2 = g(x_1) - g(x_2) = g'(\xi) (x_1 - x_2),$ for some point $\xi \in (x_1,x_2).$
Then note that $g' \neq 0$ on $(-\epsilon,\epsilon)$, after shrinking $\epsilon$ if necessary: the same one-sided difference-quotient argument applied to $f(x,0)\geq -3x$ gives $f_x(0,0) = -3$, so $g'(0) = -f_x(0,0)/f_y(0,0) \neq 0$, and $g'$ is continuous. The conclusion follows since $g$ is $C^1$. ($|g'|$ has a positive minimum on a compact subset of $(-\epsilon,\epsilon)$ containing $x_1,x_2$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/620968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Understanding the Handshake Problem I need help with this problem. The problem goes like this: In some countries it is customary to shake hands with everybody in the meeting. If there are two people there is 1 handshake, if there are three people there are three handshakes and so on. I know that the formula is $ \dfrac{n(n-1)}{2} $ but how do I get to this solution using a thinking process, specifically how would you solve this? Thanks.
|
If there are $n$ people, then each person can shake hands with $n-1$ others. Each handshake gets counted twice. So ....
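A one-minute brute-force check of $n(n-1)/2$ (a sketch):

```python
from itertools import combinations

for n in range(2, 7):
    print(n, len(list(combinations(range(n), 2))), n * (n - 1) // 2)  # counts agree
```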
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 0
}
|
How to prove this inequality, stronger than Weitzenböck's: $(ab+bc+ac)(a+b+c)^2\ge 12\sqrt{3}\cdot S\cdot(a^2+b^2+c^2)$ In $\Delta ABC$,$$AB=c,BC=a,AC=b,S_{ABC}=S$$
show that
$$(ab+bc+ac)(a+b+c)^2\ge 12\sqrt{3}\cdot S\cdot(a^2+b^2+c^2)$$
I know Weitzenböck's inequality
$$a^2+b^2+c^2\ge 4\sqrt{3}S$$
But my inequality is stronger than Weitzenböck's inequality.
my try:
let the semiperimeter, inradius, and circumradius be $s,r,R$ respectively
$$a+b+c=2s,ab+bc+ac=s^2+4Rr+r^2,S=rs$$
$$\Longleftrightarrow (s^2+4Rr+r^2)4s^2\ge 12\sqrt{3}\cdot rs[4s^2-2(s^2+4Rr+r^2)]$$
$$\Longleftrightarrow s^3+4Rrs+r^2s\ge 6\sqrt{3}rs^2-24\sqrt{3}Rr^2-6\sqrt{3}r^3$$
and use this Gerretsen inequality:
$$r(16R-5r)\le s^2\le 4R^2+4Rr+3r^2$$
and Euler inequality
$$R\ge 2r$$
But it seems this is not useful.
Thank you
|
Square both sides and substitute $S^2=\dfrac{(a+b+c)(a+b-c)(a+c-b)(b+c-a)}{16}$; we have:
$(ab+bc+ac)^2(a+b+c)^4\ge 27\cdot (a+b+c)(a+b-c)(a+c-b)(b+c-a)(a^2+b^2+c^2)^2$
With the brute force (BW) method: WLOG let $a=\min\{a,b,c\}$, $b=a+u$, $c=a+v$ with $u\ge0$, $v\ge0$; substituting into the inequality and rearranging, we have
$k_6a^6+k_5a^5+k_4a^4+k_3a^3+k_2a^2+k_1a+k_0 \ge0$, where
$k_6=+486v^2-486uv+486u^2 \\k_5=1404v^3-648uv^2-648u^2v+1404u^3\\k_4=1935v^4-360uv^3-945u^2v^2-360u^3v+1935u^4\\k_3=1572v^5-60uv^4-588u^2v^3-588u^3v^2-60u^4v+1572u^5\\k_2=760v^6+78uv^5-210u^2v^4-352u^3v^3-210u^4v^2+78u^5v+760u^6\\k_1=216v^7+4uv^6+32u^2v^5-140u^3v^4-140u^4v^3+32u^5v^2+4u^6v+216u^7\\k_0=27v^8+u^2v^6+4u^3v^5-48u^4v^4+4u^5v^3+u^6v^2+27u^8$
With $u^n+v^n\ge u^{n-1}v+uv^{n-1} \ge u^{n-2}v^2+u^2v^{n-2} \ge \cdots$, all the $k_i \ge 0$ above, and they all vanish exactly when $u=v=0$.
QED.
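A random numeric check of the original inequality (a sketch, of course not a proof):

```python
import math, random

random.seed(1)
for _ in range(100_000):
    a, b, c = sorted(random.uniform(0.1, 10) for _ in range(3))
    if a + b <= c:                                   # skip degenerate triples
        continue
    s = (a + b + c) / 2
    S = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    lhs = (a*b + b*c + c*a) * (a + b + c)**2
    rhs = 12 * math.sqrt(3) * S * (a*a + b*b + c*c)
    assert lhs >= rhs - 1e-9
print("no counterexample found")
```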
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
proving the inequality $\triangle\leq \frac{1}{4}\sqrt{(a+b+c)\cdot abc}$ If $\triangle$ be the area of $\triangle ABC$ with side lengths $a,b,c$. Then show that $\displaystyle \triangle\leq \frac{1}{4}\sqrt{(a+b+c)\cdot abc}$
and also show that equality hold if $a=b=c$.
$\bf{My\; Try}::$ Here we have to prove $4\triangle\leq \sqrt{(a+b+c)\cdot abc}$
Using the formula $$\triangle = \sqrt{s(s-a)(s-b)(s-c)},$$ where $$2s=(a+b+c)$$
So $$4\triangle = \sqrt{2s(2s-2a)(2s-2b)(2s-2c)}=\sqrt{(a+b+c)\cdot(b+c-a)\cdot(c+a-b)\cdot(a+b-c)}$$
Now using $\bf{A.M\geq G.M}$ for $(b+c-a)\;,(c+a-b)\;,(a+b-c)>0$
$$\displaystyle \frac{(b+c-a)+(c+a-b)+(a+b-c)}{3}\geq \sqrt[3]{(b+c-a)\cdot(c+a-b)\cdot(a+b-c)}$$
So we get $\displaystyle (a+b+c)\geq 3\sqrt[3]{(b+c-a)\cdot(c+a-b)\cdot(a+b-c)}$
But I do not understand how to prove the above inequality from here.
Help required.
Thanks
|
For a triangle
$\Delta = \frac{abc}{4R} = rs$
Now substitute these values into your inequality $16\Delta^2 \le (a+b+c)\,abc$: the right side equals $2s\cdot 4R\Delta = 8Rs\Delta$ and the left side equals $16rs\Delta$, so the inequality reduces to
$R \ge 2r$
This is known to be true since the distance between incentre and circumcentre $d^2 = R(R-2r)$
Thus your inequality is proved
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
}
|
Uniform continuity of continuous functions on compact sets Assume that $f: \mathbb R \rightarrow \mathbb R$ is continuous function on the compact set $A$.
Does for any $\varepsilon >0$ exist a $\delta >0$, such that
$$
\lvert\, f(x)-f(y)\rvert<\varepsilon \,\,\,\,\,\,\textrm{for every}\,\,\,\, x,y\in A,\,\,
\text{with}\,\,\,
\lvert x-y\rvert<\delta?
$$
|
First notice that we can assume that $f$ is identically zero on $A$ by subtracting off a continuous function $g$ extending the restriction of $f$ to $A$. Such a $g$ can be constructed using the distance function $d$ from $x\in\mathbb{R}$ to $A\subset \mathbb{R}$.
The question becomes to show that for every $\epsilon>0$ there exists a $\delta>0$ such that if $d(t,A)<\delta$ then $|f(t)|<\epsilon$. Suppose there were no such $\delta$. Then one can construct a sequence $(t_n)$ with $d(t_n,A)\to 0$ as $n\to\infty$ while $|f(t_n)|\geq\epsilon$. The sequence is obviously bounded and therefore has a convergent subsequence $t_{n_k}\to x_0$. Then $x_0\in A$ by compactness. It follows that $f$ is not continuous at $x_0$, contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Why is the additive group of rational numbers indecomposable? A group $G$ is indecomposable if $G \neq \langle e \rangle$ and $G$ cannot be written as the direct product of two of its proper subgroups. Why is the additive group of rational numbers $(\mathbb{Q},+)$ indecomposable?
|
I suspect what you want to ask is why $(\mathbb{Q},+)$ is indecomposable, i.e. why it cannot be written as the direct sum of two non-trivial subgroups.
The answer is that two non-trivial subgroups must intersect non-trivially. If $\{0\}\neq H, K < \mathbb{Q}$, then choose non-zero $p/q \in H, a/b\in K$, then
$$
qa\frac{p}{q} = ap = pb\frac{a}{b} \in H\cap K\setminus \{0\}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
}
|
Given the scalar equation, 8x + 9y = -45, write a vector equation? scalar equation:
8x + 9y = -45
Attempt:
I took the y-intercept and the x-intercept of the scalar equation and got
(-5.625, 0) and (0,-5)
By subtracting the points I got [5.625, -5]
so my vector equation was
[x,y] = [0,-5] + t[5.625, -5]
the correct answer is
[0,-5] + t[-9,8]
How did they get this??? Am I doing something wrong?
|
You did nothing wrong, but the answer uses $[-9,8]$ instead of $[5.625,-5]$.
This is because $$5.625\times (-1.6)=-9,$$$$-5\times (-1.6)=8.$$ We can do this because we have a parameter $t$ in front of this direction vector, so any non-zero scalar multiple gives the same line.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
if range of $f(x) = \frac{x^2+ax+b}{x^2+2x+3}$ is $[-5,4]$. Then $a$ and $b$ are If Range of $\displaystyle f(x) = \frac{x^2+ax+b}{x^2+2x+3}$ is $\left[-5,4\; \right]$ for all $\bf{x\in \mathbb{R}}$. Then values of $a$ and $b$.
$\bf{My\; Try}::$ Let $\displaystyle y=f(x) = \frac{x^2+ax+b}{x^2+2x+3} = k$,where $k\in \mathbb{R}$.Then $\displaystyle kx^2+2kx+3k=x^2+ax+b$
$\Rightarrow (k-1)x^2+(2k-a)x+(3k-b) = 0$
Now we will form $2$ cases::
$\bf{\bullet}$ If $(k-1)=0\Rightarrow k=1$, Then equation is $(2-a)x+(3-b)=0$
$\bf{\bullet}$ If $(k-1)\neq 0\Rightarrow k\neq 1$ means either $k>1$ or $k<1$
How can I proceed after that?
Help Required
Thanks
|
To find the places where $f(x)$ is minimal and maximal, differentiate $f$ with respect to $x$, then solve $f'(x)=0$. Call the solutions $x_0$ and $x_1$ (and so on if there are more). Now you know for which $x$ the function $f(x)$ is minimal/maximal. Calculate $f(x_0)$ and $f(x_1)$. These should be equal to $-5$ and $4$. You only have to know which one is a minimum and which one a maximum, but you should be able to figure that out.
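Alternatively, following the discriminant set-up in the question: the discriminant of $(k-1)x^2+(2k-a)x+(3k-b)=0$ is a downward parabola in $k$, and it must be non-negative exactly on $[-5,4]$, so $k=-5$ and $k=4$ are its roots. A SymPy sketch of this:

```python
import sympy as sp

a, b, k = sp.symbols('a b k', real=True)
disc = (2*k - a)**2 - 4*(k - 1)*(3*k - b)   # discriminant of the quadratic in x
sols = sp.solve([disc.subs(k, -5), disc.subs(k, 4)], [a, b])
print(sols)   # -> [(-10, -15), (14, 9)] (up to ordering)
```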
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
A probabilistic problem in graphs Let $G$ be a (simple) graph. Each edge will be deleted or will be reminded with probability $\frac 12$ (independent from the other edges). Let $P_{AB}$ be the probability that (after this process) the vertices $A$ and $B$ are connected. On the other hand starting from $G$ we give each edge a direction with probability $\frac 12$ (and independent). Let $P_{A\longrightarrow B}$ be the probability that there is a directed path from $A$ to $B$. Show that $P_{AB} = P_{A\longrightarrow B}$.
|
It is possible to prove this by induction using contraction of a neighbour set of $A$:
It is enough to count all admissible configurations, because any configuration has the same probability $(\frac 12)^{|E(G)|}$.
The number of admissible configurations in the non-oriented case where $A$ is adjacent to exactly the subset $S$ of its neighbours is given by the admissible configurations of the graph $G$ with the point $A$ removed and the set $S$ contracted to a single vertex, which is the new vertex $A$.
The number of admissible configurations in the oriented case where $A$ has out-going arcs exactly to the set $S$ is again given by the admissible configurations of the graph $G$ with the point $A$ removed and the set $S$ contracted to a single new vertex $A$.
If $S$ is empty, both numbers are 0. If $A=B$, then both numbers are simply all configurations, regardless of $S$. If $A$ and $B$ are different points and $S$ is not empty, the removal of $A$ and the contraction give a smaller version of the same problem (for example with respect to the distance between $A$ and $B$ in the original graph) that has the same number of configurations by induction.
The edges that have been removed during contraction have two possible states in both cases, so their contribution to the total number of configurations is the same.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Formal power series with all derivatives zero I have the following question.
Suppose I have a formal power series $f(x)=\sum\limits_{i=0}^\infty c_ix^i$ with real coefficients. Suppose that all the derivatives $f'(1),f''(1),\dots,f^{(n)}(1),\dots$ of $f(x)$ at the point $x=1$ are zero.
What can I say about the coefficients $c_i$? I want to say that they all must be zero, but when I try to write down the system of equations, I get infinite systems, and each equation involves infinitely many variables. Is there a nice way to solve this problem?
Thank you!
|
You cannot plug in $1$ in a formal power series. You have to regard it as an analytic function to do so (therefore, check for convergence and so on). However, any calculation from the analytical viewpoint will give you a correct identity for formal power series if both sides admit an interpretation as formal power series, and any calculation from the formal viewpoint will give you a correct identity for analytic series if both sides are indeed convergent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How to Evaluate $\int^\infty_0\int^\infty_0e^{-(x+y)^2} dx\ dy$ How do you get from$$\int^\infty_0\int^\infty_0e^{-(x+y)^2} dx\ dy$$to
$$\frac{1}{2}\int^\infty_0\int^u_{-u}e^{-u^2} dv\ du?$$ I have tried using a change of variables formula but to no avail.
Edit: Ok as suggested I set $u=x+y$ and $v=x-y$, so I can see this gives $dx dy=\frac{1}{2}dudv$ but I still can't see how to get the new integration limits. Sorry if I'm being slow.
|
$\newcommand{\+}{^{\dagger}}%
\newcommand{\angles}[1]{\left\langle #1 \right\rangle}%
\newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}%
\newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}%
\newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}%
\newcommand{\dd}{{\rm d}}%
\newcommand{\ds}[1]{\displaystyle{#1}}%
\newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}%
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}%
\newcommand{\fermi}{\,{\rm f}}%
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}%
\newcommand{\half}{{1 \over 2}}%
\newcommand{\ic}{{\rm i}}%
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}%
\newcommand{\isdiv}{\,\left.\right\vert\,}%
\newcommand{\ket}[1]{\left\vert #1\right\rangle}%
\newcommand{\ol}[1]{\overline{#1}}%
\newcommand{\pars}[1]{\left( #1 \right)}%
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}%
\newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}%
\newcommand{\sech}{\,{\rm sech}}%
\newcommand{\sgn}{\,{\rm sgn}}%
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}%
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
With $x \equiv \rho\cos\pars{\theta}$, $y \equiv \rho\sin\pars{\theta}$ where $\rho \geq 0$ and $0 \leq \theta < 2\pi$ we'll get
$\ds{{\partial\pars{x,y} \over \partial\pars{\rho,\theta}} = \rho}$ such that
\begin{align}
&\color{#0000ff}{\large%
\int_{0}^{\infty}\int_{0}^{\infty}\expo{-\pars{x + y}^{2}}\,\dd x\,\dd y}
=
\int_{0}^{\pi/2}\dd\theta\int_{0}^{\infty}\expo{-\rho^{2}\bracks{1 + \sin\pars{2\theta}}}\rho\,\dd\rho
\\[3mm]&=
\int_{0}^{\pi/2}\dd\theta\,\left.%
{-\expo{-\rho^{2}\bracks{1 + \sin\pars{2\theta}}} \over 2\bracks{1 + \sin\pars{2\theta}}}
\right\vert_{\rho = 0}^{\rho \to \infty}
=
\half\int_{0}^{\pi/2}
{\dd\theta \over 1 + \sin\pars{2\theta}}
=
{1 \over 4}\int_{0}^{\pi}
{\dd\theta \over 1 + \sin\pars{\theta}} = \color{#0000ff}{\large\half}
\end{align}
since
\begin{align}
&\color{#0000ff}{\large{1 \over 4}\int_{0}^{\pi}{\dd\theta \over 1 + \sin\pars{\theta}}}
=\half\int_{0}^{\pi/2}{\dd\theta \over 1 + \sin\pars{\theta}}
=\half\int_{0}^{\pi/2}{1 - \sin\pars{\theta} \over \cos^{2}\pars{\theta}}\,\dd\theta
\\[3mm]&=
\half\,\lim_{\theta \to \pars{\pi/2}^{-}}\bracks{%
{\sin\pars{\theta} - 1 \over \cos\pars{\theta}}} + \half = \color{#0000ff}{\large\half}
\end{align}
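Regarding the limits the OP asked about: with $u=x+y$, $v=x-y$ we have $x=(u+v)/2$ and $y=(u-v)/2$, so the quadrant $x,y>0$ becomes $u>0$, $-u<v<u$, which is exactly the stated form. The inner integral then leaves $\int_0^\infty u\,e^{-u^2}\,du$, and a one-line SymPy check (a sketch) confirms the common value:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
print(sp.integrate(u * sp.exp(-u**2), (u, 0, sp.oo)))   # -> 1/2
```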
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Projective geometry. Interpretation of a cross product between a line coincident with a point Let $p \in \mathcal{P}^2$ be a point in projective 2-space coincident with a line $l\in\mathcal{P}^2$ such that $l^\top p = 0$. What does $l \times p$ mean?
For example, $p = \left(x,y,1\right)^\top$ and $l=\left(-1, 0, x\right)^\top$, the line is coincident with the point, i.e. $(l^\top p = 0)$. The cross product is $v = l \times p = \left(-xy, 1+x^2, -y\right)^\top$. Wondering, what is the physical meaning of $v$?
|
This is not a natural operation between lines and points. The cross product of two different lines is a point (intersection) and the cross product of two different points is a line (connecting the points). In this case you have to take the dual of either the point or the line. In the first case the cross product is the point on $l$ at maximum distance from $p$. In the second case it is the line through $p$ orthogonal to $l$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Continuity in multivariable calculus I want to find out the points, where the function
$f(x,y)=\dfrac{xy}{x-y}$ if $x\neq y$ and $f(x,y)=0$ otherwise, is continuous.
I have shown that at all the points $(x,y)$, where $x\neq y$, $f$ is continuous. Also at all those points $(x,y)\in \mathbb R^2\setminus \{(0,0)\}$ such that $x=y$, $f$ is not continuous. But what would happen at $(0,0)$? I couldn't figure it out. Please give a hint.
|
Hint: Let $(x,y)$ approach $(0,0)$ along the curve $x=t+t^2$, $y=t-t^2$. We can make the behaviour even worse by approaching along $x=t+t^3$, $y=t-t^3$.
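Numerically (a sketch), along the first curve the quotient approaches $1/2$ rather than $f(0,0)=0$:

```python
t = 1e-4
x, y = t + t**2, t - t**2
print(x*y / (x - y))   # (t^2 - t^4)/(2 t^2) ≈ 0.5, so f is discontinuous at (0, 0)
```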
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
General integral of an PDE Consider the PDE
$$
\frac{\partial u}{\partial t}+y\frac{\partial u}{\partial x}-a^2x\frac{\partial u}{\partial y}=0
$$
To find the general integral by the method of characteristics, I construct the system
$$
\frac{dt}{1}=\frac{dx}{y}=\frac{dy}{-a^2x}
$$
It is expected to find two constant relations among the variables $C_1=\varphi(t,x,y)$ , $C_2=\psi(t,x,y)$ , so the general solution is $u=f(C_1,C_2)$ .
From the last two equations, I get $-a^2x~dx=y~dy$ , which leads to the relation
$C_1=\varphi(t,x,y)=a^2x^2+y^2$ . But I'm stuck at finding the other one. Any idea? (I know that I can solve explicitly the system in terms of a parameter $s$ and get $t(s)=t+t_0$ , $x(s)=\dfrac{y_0}{a}\sin as+x_0\cos as$ , $y(s)=y_0\cos as-ax_0\sin as$ . I have tried to isolate $s$ from the linear system given by the $(x,y)$ variables and the initial conditions $(x_0,y_0)$ , but with no luck).
|
Follow the method in http://en.wikipedia.org/wiki/Method_of_characteristics#Example:
$\dfrac{dt}{ds}=1$ , letting $t(0)=0$ , we have $t=s$
$\begin{cases}\dfrac{dx}{ds}=y\\\dfrac{dy}{ds}=-a^2x\end{cases}$
$\therefore\dfrac{d^2x}{ds^2}=\dfrac{dy}{ds}=-a^2x$
$x=C_1\sin as+C_2\cos as$
$\therefore y=C_1a\cos as-C_2a\sin as$
$x(0)=x_0$ , $y(0)=y_0$ :
$\begin{cases}C_2=x_0\\C_1=\dfrac{y_0}{a}\end{cases}$
$\therefore\begin{cases}x=\dfrac{y_0}{a}\sin as+x_0\cos as\\y=y_0\cos as-ax_0\sin as\end{cases}$
$\begin{cases}x_0=\dfrac{ax\cos as-y\sin as}{a}=\dfrac{ax\cos at-y\sin at}{a}\\y_0=ax\sin as+y\cos as=ax\sin at+y\cos at\end{cases}$
$\dfrac{du}{ds}=0$ , letting $u(0)=f(x_0,y_0)$ , we have $u(x,y,t)=f(x_0,y_0)=f\left(\dfrac{ax\cos at-y\sin at}{a},ax\sin at+y\cos at\right)$
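One can verify the general solution with SymPy for a concrete choice of $f$ (a sketch; any smooth $f$ works the same way):

```python
import sympy as sp

x, y, t, a = sp.symbols('x y t a')
x0 = (a*x*sp.cos(a*t) - y*sp.sin(a*t)) / a
y0 = a*x*sp.sin(a*t) + y*sp.cos(a*t)
u = x0 * y0**2 + sp.sin(x0)          # a concrete f(x0, y0)
pde = sp.diff(u, t) + y*sp.diff(u, x) - a**2 * x * sp.diff(u, y)
print(sp.simplify(pde))              # -> 0
```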
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/621981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
noncomputable functions I know that there exist functions such that no computer program can, given arbitrary input, produce the correct function value. There is nothing, however which would prohibit us from knowing the function value for certain specific inputs.
Suppose we have an uncomputable function $f$ defined on $\mathbb{N}$ and an infinite sequence of programs $p_1,p_2,p_3,\ldots$ such that $p_n$ computes $f(n)$ no matter what it is given as input.
Since we could use this infinite sequence of programs to compute the function value for an arbitrary input, I am led to believe this sequence of programs cannot exist.
Thus, for any uncomputable function there must exist a particular element in the domain of the function such that its function value cannot ever be computed.
Is my reasoning valid?
|
Yes, you're right!
The main problem with an uncomputable function is that at some point (for a specific $n$) we can't compute it.
Obviously, if we could, for each $n$, find the right program $p_n$ that computes $f(n)$, then $f$ would be computable. As Robert Israel mentioned, it does not prevent $p_n$ from existing. We just don't know how to find it.
Note that the "we can't compute it for that particular $n$" is related to the underlying mathematical theory you use to do your computation. It means that there is no proof that $p_n$ is the right program for $f(n)$. But uncomputability is still uncomputabillity, so if you manage to prove that $p_n$ is indeed the program that computes $f(n)$ for that particular $n$ in a particular (recursive) theory $T$, be sure there is some other $m>n$ such that you won't be able to find $p_m$ using $T$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Obtaining a binary operation on $X \rightarrow Y$ from a binary operation on $Y$. What, if anything, to make of this observation? Let $X$ and $Y$ denote sets. Then if $+$ is a binary operation on $Y$, then we can obtain a new binary operation $+'$ on $Y^X$ in a canonical way as follows.
$$(f+' g)(x) = f(x)+g(x)$$
Question. The other day, I noticed a suggestive-looking variant of the above definition. However, I'm not sure what to make of this observation. Does it have any particular significance?
The variant.
Let us firstly assign to each $x \in X$ an "evaluation" function $\tilde{x} : Y^X \rightarrow Y$ with defining property $\tilde{x}(f) = f(x).$ This allows $+'$ to be defined as follows, where it is understood that $f$ and $g$ range over all functions $X \rightarrow Y$ and $x$ ranges over every element of $X$.
$$\tilde{x}(f+' g) = \tilde{x}(f)+\tilde{x}(g)$$
In other words, we're defining that $+'$ is the unique binary operation such that for all $x \in X,$ we have that $\tilde{x}$ is a magma homomorphism with source $(Y^X, +')$ and target $(Y,+)$.
Does this final characterization of $+'$ have any particular significance and/or does it "go anywhere"?
|
In fact, this is part of a larger story, initiated by Freyd in his paper "Algebra valued functors in general and tensor products in particular".
If $\mathcal{A}$ is a category of algebraic structures with forgetful functor $U : \mathcal{A} \to \mathsf{Set}$, then one can define $\mathcal{A}$-objects in an arbitrary category $\mathcal{C}$ as follows: These are objects $X \in \mathcal{C}$ equipped with a factorization of $\hom(-,X) : \mathcal{C}^{op} \to \mathsf{Set}$ over $U$. In other words, $\hom(Y,X)$ acquires the structure of an object of $\mathcal{A}$, naturally in $Y$. If $\mathcal{C}$ has products, these objects can also be described internally to $\mathcal{C}$ - this is an application of the Yoneda Lemma. For example, a group object is an object $X$ equipped with morphisms $X \times X \to X$ (multiplication), $1 \to X$ (unit), $X \to X$ (inversion) such that certain diagrams commute, which correspond to the usual group axioms. This is a very important notion: it includes groups, topological groups, Lie groups, and under a suitable generalization even Hopf algebras.
More simply, a magma object is an object $X$ equipped with a morphism $m : X \times X \to X$. If $Y$ is an arbitrary object, then $\hom(Y,X)$ becomes a magma, as already mentioned above. The operation is just
$$\hom(Y,X) \times \hom(Y,X) \cong \hom(Y,X \times X) \xrightarrow{m_*} \hom(Y,X).$$
This is what you have observed for the category of sets.
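For the category of sets the lifting is easy to make concrete. Here is a minimal Python sketch (the names `lift` and `add_prime` are only illustrative): given a binary operation on $Y$, it returns the pointwise operation on functions $X \to Y$.
```python
# Lift a binary operation on Y to a pointwise operation on Y^X.
def lift(op):
    """Given op : Y x Y -> Y, return op' : Y^X x Y^X -> Y^X."""
    return lambda f, g: (lambda x: op(f(x), g(x)))

add_prime = lift(lambda a, b: a + b)             # Y = integers with +
h = add_prime(lambda x: x * x, lambda x: 2 * x + 1)
print(h(3))                                      # 9 + 7 = 16
```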
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Improper Integral of $\int_{-1}^0 \frac{e^\frac{1}{x}}{x^3}dx$ I have a question about the improper integral $\int_{-1}^0 \frac{e^\frac{1}{x}}{x^3}dx$.
This is what I have done:
$u=\frac1x$
$du=\frac{-1}{x^2}\,dx$
After integrating by parts I had
$e^{\frac1x}(1-\frac1x)$
So the integral is $\lim_{t\rightarrow 0^-} \left[e^\frac{1}{t}\left(1-\frac1t\right) -e^{-1}(1+1)\right]$.
How can I find $\lim_{t\to 0^-}\frac{e^\frac1t}{t}$?
Please help
|
$$\int_{-1}^{0^{^-}}\frac{e^\frac1x}{x^3}dx=-\int_{-1}^{-\infty}e^tt^3\frac{dt}{t^2}=\int_{-\infty}^{-1}te^tdt=-\int_\infty^1(-u)e^{-u}du=-\int_1^\infty ue^{-u}du=$$
$$=\left[\frac{u+1}{e^u}\right]_1^\infty=0-\frac2e=-\frac2e$$
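As for the limit asked about in the question: substituting $s=\frac1t$ (so $s\to-\infty$ as $t\to0^-$) gives
$$\lim_{t\to0^-}\frac{e^{1/t}}{t}=\lim_{s\to-\infty}s\,e^{s}=0,$$
so $e^{1/t}\left(1-\frac1t\right)\to0$ as $t\to0^-$, and the antiderivative found in the question yields the same value $0-\frac2e=-\frac2e$.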
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Calculation of $\int^{\pi/2}_{0}\cos^{n}(x)\cos (nx)\,dx$, where $n\in \mathbb{N}$
Compute the definite integral
$$
\int^{\pi/2}_{0}\cos^{n}(x)\cos (nx)\,dx
$$
where $n\in \mathbb{N}$.
My Attempt:
Using $\cos (x) = \frac{e^{ix}+e^{-ix}}{2}$, we get
$$
\begin{align}
\int^{\pi/2}_{0}\cos^{n}(x)\cos (nx)\,dx&=\int_{0}^{\pi/2} \left(\frac{e^{ix}+e^{-ix}}{2}\right)^n\left(\frac{e^{inx}+e^{-inx}}{2}\right)\,dx\\
&= \frac{1}{2^n}\mathrm{Re}\left\{\int_{0}^{\pi/2} \left(e^{ix}+e^{-ix}\right)^n\cdot e^{inx}\,dx\right\}\\
&=\frac{1}{2^n}\mathrm{Re}\left\{\int_{0}^{\pi/2}\left(e^{2ix}+1\right)^n\,dx\right\}
\end{align}
$$
Letting $z=e^{4ix}$ gives us
$$
\begin{align}
e^{4ix}dx &= \frac{1}{4i}dz\\
dx &= \frac{dz}{4iz}
\end{align}
$$
So the integral becomes
$$\frac{1}{4\cdot 2^n}\mathrm{Re}\left\{\int_{C}\left(\sqrt{z}+1\right)^n\cdot \frac{dz}{iz}\right\}$$
How can I complete the solution from here?
|
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
$\ds{\int_{0}^{\pi/2}\cos^{n}\pars{x}\cos\pars{nx}\,\dd x:\ {\large ?}.\quad
n \in {\mathbb N}}$.
With $\ds{0 < \epsilon < 1}$
$\ds{\pars{~\mbox{we'll take at the end the limit}\ \epsilon \to 0^{+}~}}$:
\begin{align}&\color{#66f}{\large\int_{0}^{\pi/2}\cos^{n}\pars{x}\cos\pars{nx}
\,\dd x}=\Re
\int_{\verts{z}\ =\ 1\atop{\vphantom{\Huge A}0\ <\ {\rm Arg}\pars{z}\ <\ \pi/2}}
\pars{z^{2} + 1 \over 2z}^{n}z^{n}\,{\dd z \over \ic z}
\\[3mm]&={1 \over 2^{n}}\,\Im
\int_{\verts{z}\ =\ 1\atop{\vphantom{\Huge A}0\ <\ {\rm Arg}\pars{z}\ <\ \pi/2}}
{\pars{z^{2} + 1}^{n} \over z}\,\dd z
=-\,{1 \over 2^{n}}\,\Im\int_{1}^{\epsilon}
{\pars{-y^{2} + 1}^{n} \over \ic y}\,\ic\,\dd y
\\[3mm]&\phantom{=}\left.\mbox{}-{1 \over 2^{n}}\,\Im\int_{\pi/2}^{0}
{\pars{z^{2} + 1}^{n} \over z}\,\dd z\,
\right\vert_{\,z\ =\ \epsilon\expo{\ic\theta}}
-{1 \over 2^{n}}\,\Im\int_{\epsilon}^{1}
{\pars{x^{2} + 1}^{n} \over x}\,\dd x
\\[3mm]&=-\,{1 \over 2^{n}}\Im\int_{\pi/2}^{0}\ic\,\dd\theta
=\color{#66f}{\Large{\pi \over 2^{n + 1}}}
\end{align}
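A quick numerical sanity check of the closed form $\pi/2^{n+1}$ (a sketch, assuming SciPy is available for the quadrature):
```python
from math import cos, pi
from scipy.integrate import quad

for n in range(1, 6):
    val, _ = quad(lambda x, n=n: cos(x)**n * cos(n*x), 0, pi/2)
    print(n, val, pi / 2**(n + 1))   # the two columns should agree
```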
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 2
}
|
More precise way of solving inequality I need to solve this inequality:
$$
\lvert x^2-1\rvert\ge 2x-2
$$
I solved the corresponding equation:
For $x<0$ there is no solution (I got a negative root when I tried to solve the quadratic), and for $x\ge 0$ I got the points $x_1=-1$ and $x_2=3$.
My question is:
How do I determine the solution set of the inequality? Is there any procedure with which I can decide whether it holds on $[-1,3]$ or on $(-\infty, -1]\cup [3, +\infty)$?
I know that I can just plug in numbers and see the result, but I want to know whether there is any other way to do this.
Thanks.
|
As $\displaystyle |x|=\begin{cases} x &\mbox{if } x\ge0 \\-x & \mbox{if } x<0 \end{cases} $
If $x^2-1\ge0$, i.e. $x\ge1$ or $x\le-1$,
we get $$x^2-1\ge2x-2\iff x^2-2x+1\ge0\iff (x-1)^2\ge0,$$ which is always true.
If $x^2-1<0$, i.e. $-1<x<1\ \ \ \ (1)$,
we get $$-(x^2-1)\ge2x-2\iff x^2+2x-3\le0$$
$$\iff (x+3)(x-1)\le0\iff -3\le x\le1\ \ \ \ (2)$$
Now using $(1)$ and $(2)$: $-1<x<1$.
Combining the two cases, $x$ can assume any real value.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
Convergence radius of $\sqrt{\cos(z)}$ Compute the first 3 non zero terms of the Taylor expansion of $\sqrt{\cos(z)}$ at $z=0$ and determine its convergence radius, considering only the principal branch of the square root.
I've computed the first 3 non zero terms which the problem asks for:
$$f(z)=1-\frac{1}{4}z^2-\frac{1}{96}z^4$$
*
*But, how do I determine the radius of convergence if I don't have the general expression for the Taylor series?
*And, what is meant by the principal branch of the square root? I understand what the principal branch of the logarithm is, but I didn't know it existed for the square root.
|
The answers were obtained in comments, with help from Daniel Fischer:
*
*radius of convergence is $\pi/2$, because $\sqrt{\cos z}$ is holomorphic for $|z|<\pi/2$, but is not holomorphic in any neighborhood of $\pi/2$, where cosine is zero. (One way to show this is to notice that the derivative of $\sqrt{\cos z}$ is unbounded there.)
*the principal branch of the square root (or of any other fractional power) is defined via the principal branch of the logarithm, $\sqrt{z}=e^{\frac12\operatorname{Log} z}$, so that in particular $\sqrt{1}=1$. This convention removes the ambiguity present in the choice between the two possibilities for the square root.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Show that two sums are equal Show that
$$\sum_{k=0}^4\left(1+x\right)^k = \sum_{k=1}^5 \binom{5}{k}x^{k-1}$$
I assume that this has something to do with the binomial theorem and a proof of that. But I can't take the first steps...
|
The LHS is a finite geometric series with first term $=1$, common ratio $=(1+x)$, and number of terms $=5$.
So, the sum is $$1\cdot\frac{(1+x)^5-1}{1+x-1}$$
Please expand using the binomial expansion and cancel out $x$ to find it to be the same as the RHS.
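Explicitly, for $x\neq0$ (at $x=0$ both sides are equal to $5$ directly):
$$\frac{(1+x)^5-1}{x}=\frac{\sum_{k=0}^5\binom{5}{k}x^k-1}{x}=\frac{\sum_{k=1}^5\binom{5}{k}x^k}{x}=\sum_{k=1}^5\binom{5}{k}x^{k-1}.$$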
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Set of non-decreasing function in bijection with R I've learnt and understood the proof that "the set of all non-decreasing functions is uncountable" via diagonalization, but how could I demonstrate that it is in bijection with $\mathbb R$ (the set of real numbers), i.e. that the two sets are equipotent?
|
To identify a non-decreasing function it is enough to know its values on the rational numbers and at its discontinuities: the value at a continuity point $x$ is determined as the limit of the values at rational points approaching $x$. A non-decreasing function has only a countable number of discontinuities, hence it is enough to know its values on a countable set of points. This gives an injection of our set into $(\mathbb{R}\times\mathbb{R})^{\mathbb{N}}$ (enumerate the pairs $(x,f(x))$ with $x$ rational or a discontinuity of $f$), a set of cardinality $|\mathbb{R}|$; in the other direction, $c\mapsto(\text{the constant function }c)$ injects $\mathbb{R}$ into our set, so the Cantor–Schröder–Bernstein theorem yields a bijection with $\mathbb{R}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Conjugate of the k-th power of a complex number The $k$-th power of a complex number $z$ can be expressed as follows:
$$
z^k=(x+iy)^k=\sum^k_{n=0}\binom{k}{n}x^{k-n}(iy)^n=\sum^k_{n=0}\binom{k}{n}x^{k-n}i^ny^n
$$
Suppose I want to express $\bar{z^k}$, the conjugate of $z^k$, in a similar manner. In other words, for all terms containing $i$, I would like to flip the sign. Is the following expression reasonable and self-evident?
$$
\bar{z^k}=\sum^k_{n=0}\binom{k}{n}x^{k-n}(-i)^ny^n
$$
The motivation behind this is in proving that $\bar{z}^k=\bar{z^k}$ without the use of polar form.
|
That seems right to me. The only thing you could remark is that $\overline{i^n}=\left(\overline{i}\,\right)^n=(-i)^n$, which follows from the multiplicativity of conjugation ($\overline{zw}=\bar z\,\bar w$), although it is trivial.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Optimization Problem - a rod inside a hallway
The question:
I'm given the figure shown above, and need to calculate the length of the longest rod that can fit inside this figure and be carried around the corner.
My thoughts:
I have tried doing the following : put $(0,0)$ at the bottom left corner. This way, the place where the rod touches the upper block is $(2,1) $ , and if we denote by $(2+t,0)$ the place where the rod touches the lower block, we get that the place where it touches the lower block is $y=t+2 $ , and then, the length is $d=\sqrt{2}(t+2)$ which doesn't have a maximum.
What am I doing wrong ?
The final answer should be $ \sqrt{(1+\sqrt[3]{4} ) ^2 + (2+\sqrt[3]{2})^2 } $ .
Thanks !
|
Your mistake is in the assertion that "the place where it touches the lower block is $y=t+2$." If you draw a picture and label the lengths of the sides of the appropriate right triangles, you'll see that in fact it touches the lower block at $y=1+{2\over t}$. This gives $(2+t)^2+(1+{2\over t})^2$ as the expression for the square of the length of the pipe. Differentiated appropriately, it has a minimum at $t=\sqrt[3]2$, and the rest should follow.
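One way to see the corrected value, using the question's setup with the inner corner at $(2,1)$ and the rod meeting the bottom wall at $(2+t,0)$: the rod lies on the line through these two points, whose slope is $\frac{1-0}{2-(2+t)}=-\frac1t$, so it meets the left wall $x=0$ at
$$y=-\frac1t\bigl(0-(2+t)\bigr)=\frac{2+t}{t}=1+\frac2t.$$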
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
$\alpha,\beta,\gamma$ are roots of cubic equation $x^3+4x-1=0$ If $\alpha,\beta,\gamma$ are the roots of the equation $x^3+4x-1=0$ and $\displaystyle \frac{1}{\alpha+1},\frac{1}{\beta+1},\frac{1}{\gamma+1}$ are the roots of the equation
$\displaystyle 6x^3-7x^2+3x-1=0$. Then value of $\displaystyle \frac{(\beta+1)(\gamma+1)}{\alpha^2}+\frac{(\gamma+1)(\alpha+1)}{\beta^2}+\frac{(\alpha+1)(\beta+1)}{\gamma^2}=$
$\bf{My\; Try}::$ Given $x=\alpha,\beta,\gamma$ are the roots of the equation $x^3+4x-1=0$, Then
$x^3+4x-1 = (x-\alpha)\cdot(x-\beta)\cdot (x-\gamma)$
put $x=-1$, we get $(1+\alpha)(1+\beta)(1+\gamma) = 6$
Now $\displaystyle \frac{(\beta+1)(\gamma+1)}{\alpha^2}+\frac{(\gamma+1)(\alpha+1)}{\beta^2}+\frac{(\alpha+1)(\beta+1)}{\gamma^2} = 6\left\{\frac{1}{(\alpha+1)\alpha^2}+\frac{1}{(\beta+1)\beta^2}+\frac{1}{(\gamma+1)\gamma^2}\right\}$
Now how can I proceed from here?
Help required,
thanks
|
Hint: Since $x^{3} + 4x - 1 = (x-\alpha)(x-\beta)(x-\gamma)$, we note that, by the theory of symmetric polynomials,
$$0 = \alpha+\beta+\gamma = s_{1}(\alpha, \beta, \gamma)$$
$$4 = \alpha\beta + \beta\gamma+\alpha\gamma = s_{2}(\alpha, \beta, \gamma)$$
$$1 = \alpha\beta\gamma = s_{3}(\alpha, \beta, \gamma)$$
As suggested by Mariano Suárez-Alvarez, we can observe immediately that the expression
$\displaystyle \frac{(\beta+1)(\gamma+1)}{\alpha^2}+\frac{(\gamma+1)(\alpha+1)}{\beta^2}+\frac{(\alpha+1)(\beta+1)}{\gamma^2}$
is symmetric in $\alpha, \beta, \gamma$, i.e. it is invariant under permuting $\alpha, \beta, \gamma$. Therefore, I would put everything under a common denominator. The numerator then becomes
$$\beta^{2}\gamma^{2}(\beta\gamma+\gamma+\beta+1) + \alpha^{2}\gamma^{2}(\alpha\gamma+\gamma+\alpha+1) + \alpha^{2}\beta^{2}(\alpha\beta + \alpha + \beta+1)$$
It can easily be seen that this is a symmetric polynomial in $\alpha, \beta, \gamma$ as well. It can therefore be written in terms of $s_{1}, s_{2}, s_{3}$. The denominator is $\alpha^{2}\beta^{2}\gamma^{2} = s_{3}^{2} = 1^{2} = 1$. If you would like more information on how to write the numerator in terms of the symmetric polynomials, please comment and I'd be happy to add information.
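For completeness, here is a sketch of how the evaluation finishes with the notation above. Write $p=\alpha\beta$, $q=\beta\gamma$, $r=\gamma\alpha$; then $p+q+r=s_2=4$, $pq+qr+rp=s_1s_3=0$ and $pqr=s_3^2=1$. Splitting the numerator into three symmetric pieces, and using $\beta+\gamma=-\alpha$ and $\beta\gamma=\frac{s_3}{\alpha}=\frac1\alpha$:
$$\sum(\beta\gamma)^3=p^3+q^3+r^3=4^3-3\cdot4\cdot0+3\cdot1=67,$$
$$\sum(\beta\gamma)^2(\beta+\gamma)=\sum\frac{1}{\alpha^2}\cdot(-\alpha)=-\sum\frac1\alpha=-\frac{s_2}{s_3}=-4,$$
$$\sum(\beta\gamma)^2=p^2+q^2+r^2=4^2-2\cdot0=16.$$
Hence the numerator is $67-4+16=79$, and since the denominator is $1$, the requested value is $79$.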
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
How find this minimum of $\sqrt{(x-a)^2+(3-x-\lg{(a)})^2}+\sqrt{(x-b)^2+(3-x-10^{b})^2}$ Let $x,a,b\in \mathbb R$ and $a>0$; find the minimum of the value
$$\sqrt{(x-a)^2+(3-x-\lg{(a)})^2}+\sqrt{(x-b)^2+(3-x-10^{b})^2}$$
I see that these two functions
$$f(x)=10^x,\qquad g(x)=\lg{(x)}$$ are mutually inverse functions;
maybe one can use
$$\sqrt{x^2+y^2}+\sqrt{a^2+b^2}\ge\sqrt{(a+x)^2+(b+y)^2},$$
but I can't make it work. Thank you.
|
HINT:
Let $A(a,\log_{10} (a))$ and $B(b,10^{b})$. Also, let $L$ be the line $y=3-x$.
What you want is the minimum of
$$|AP|+|BP|$$
where $P(x,y)$ is a point on $L$ and $|AP|$ represents the distance between $A$ and $P$.
You can solve your question geometrically. Note that reflection across the line $y=x$ swaps the two curves $y=\lg x$ and $y=10^x$ and maps $L$ to itself, which suggests looking for an optimal configuration symmetric about $y=x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/622997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
There exists an integer with alternating digits $1$ and $2$ which is divisible by $2013$ Could someone give me hints on how to solve the following (rather interesting) problem?
Prove that there exists an integer consisting of an alternating sequence of $1$s
and $2$s, with as many $1$s as $2$s (as in $12$, $1212$, $1212121212$, etc.),
which is divisible by $2013$.
Source: Problem 4 in this document.
I just had to ask it before 2013 ends :)
|
Among the first 2014 numbers of this kind, two must have the same remainder mod $2013$ (pigeonhole). Their difference is $N\cdot 10^{2j}$, where $N$ is again a number of this form; since $\gcd(10,2013)=1$, it follows that $2013\mid N$.
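For the curious, a small Python brute-force sketch confirms this concretely (the loop bound of $3000$ comfortably exceeds the pigeonhole bound of $2013$ blocks):
```python
n = 0
for k in range(1, 3000):
    n = n * 100 + 12          # append the digits "12"
    if n % 2013 == 0:
        print(k, "blocks of '12' give a multiple of 2013")
        break
```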
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Rules for cancelling fractions with exponents I have an expression that I need to simplify; I know the answer (via WolframAlpha), but I'm not sure of the rule that gets me there.
$\dfrac{(\alpha) X_1^{\alpha -1} X_2^{1-\alpha}}{(1-\alpha)X_1^\alpha X_2^{-\alpha}}$
Basically I know that the $X_1$ part of the fraction cancels down to $\frac{1}{X_1}$ and the $X_2$ part cancels to $\frac{X_2}{1}$, leaving me with the alphas still outside and $\frac{X_2}{X_1}$.
But I don't know the exponent rule that allows me to do this cancellation. I just want to be sure I understand it fully.
Thanks.
Edit: Thank you very much to both of you! You've really helped with an upcoming exam. This is my first time posting here, I will fill out my profile and hopefully I can help other people from here on in! Thanks again!
|
I assume that $X_1\neq 0,~X_2\neq 0$ and $\alpha\neq 1$.
$$\frac{\alpha X_1^{\alpha -1} X_2^{1-\alpha}}{(1-\alpha)X_1^{\color{red}{\alpha}} X_2^{\color{blue}{-\alpha}}}\longrightarrow\frac{\alpha X_1^{(\alpha -1)-\color{red}{\alpha}} X_2^{{1-\alpha}-(\color{blue}{-\alpha})}}{(1-\alpha) }=\frac{\alpha}{(1-\alpha)}X_1^{-1}X_2^{1}=\frac{\alpha X_2}{(1-\alpha) X_1} $$
The rule being used is the quotient rule for exponents, $\dfrac{X^a}{X^b}=X^{a-b}$ for $X\neq0$, applied separately to $X_1$ and to $X_2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
how many rectangles in this shape I learned in high school the solution to this kind of riddle:
How many rectangles are there in this shape:
the solution is through combinations:
this shape is a $5\times 6$ grid, so the number of rectangles would be:
$\binom{5}{2} \cdot \binom{6}{2}$
I would like to know if this is possible in the case of triangles or squares. Is there a general rule for this?
|
For a big equilateral triangle with side length $n$ filled with unit triangles, it's
$$
\sum_{i = 1}^{n} \binom{n + 2 - i}{2} = \binom{n + 2}{3} = \frac{n(n+1)(n+2)}{6}
$$
triangles pointing upwards. Triangles pointing downwards are a bit more tricky. The answer is (credit to WolframAlpha for the closed form)
$$
\sum_{i = 1}^{\left\lfloor \frac{n}{2}\right\rfloor}\binom{n + 2-2i}{2} = \frac{1}{6} \bigg\lfloor\frac{n}{2}\bigg\rfloor \left(4 \bigg\lfloor\frac{n}{2}\bigg\rfloor^2-3 (2 n+1) \bigg\lfloor\frac{n}{2}\bigg\rfloor+3 n^2+3 n-1\right)
$$
If you want some help finding these formulas yourself, the sums are built around the question "How many triangles of side length $i$ are there?", and it turns out to always be a triangular number, although slightly nicer spaced for the upward-pointing triangles. A figure for the case $n = 4$ below.
For squares, following the same rule of "How many of size $i$ are there?", we get
$$
\sum_{i = 1}^n (n - i + 1)^2 = \frac{2n^3 + 3n^2 + n}{6}
$$
This is basically just a sum of consecutive squares, only summing from the largest to the smallest because that's the way it turned out.
A figure for $n = 4$ is supplied below.
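To gain confidence in the formulas, here is a small Python sketch checking the closed forms against the size-by-size sums for small $n$ (the bound $n\le6$ is arbitrary; `math.comb` requires Python 3.8+):
```python
from math import comb

for n in range(1, 7):
    up = sum(comb(n + 2 - i, 2) for i in range(1, n + 1))
    assert up == comb(n + 2, 3)                       # upward triangles

    down = sum(comb(n + 2 - 2 * i, 2) for i in range(1, n // 2 + 1))

    squares = sum((n - i + 1) ** 2 for i in range(1, n + 1))
    assert squares == (2 * n**3 + 3 * n**2 + n) // 6  # squares in an n x n grid
    print(n, up, down, squares)
```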
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Jacobson Radical $J(R)$ is a proper ideal I found a remark on my notes:
Jacobson Radical $J(R)$ is a proper ideal. Hint: Zorn's Lemma
I know $J(R)$ is the intersection of all the maximal left ideals of the ring $R$. I know the maximal ideals are proper by definition. However, in the remark I guess it must be a two-sided ideal.
But we have shown that $J(R)$ is two-sided by the claim that it is the intersection of the annihilators of all simple $R$-modules, which are two-sided ideals.
Back to my remark, how come Zorn's Lemma is on the table? Could you please enlighten me?
|
This may also help.
If $M$ is an $R$-module, then the Jacobson radical $J(M)=J_{R}(M)$ of the $R$-module $M$ is the intersection of all maximal submodules of $M$. (Maximal submodules mean maximal proper submodules.)
So if $M$ is finitely generated, then every proper submodule $N$ of $M$ is contained in a maximal submodule, by Zorn's Lemma. (Otherwise, if the union of a chain of proper submodules were equal to $M$, then the union would contain all the generators, hence some member of the chain would contain all the generators, a contradiction.) Now take $N=0$, so we get that $J(M)$ is a proper submodule of $M$. Since $R$ is finitely generated as a module over itself (by $1$), $J(R)$ is always a proper left ideal.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Definition of the derivative $f(x) = x^{3}\cos{\left(\frac{1}{x^{2}}\right)}, x \not= 0, f(0) = 0$
Show, by definition of the derivative, that $f$ is differentiable at $x = 0$ and find the derivative of $f$ there.
So we know the derivative is defined as:
$$f'{(x)} = \lim_{h \to 0}{\frac{f{(x + h)} - f{(x)}}{h}}$$
So we have:
$$f'{(x)} = \lim_{h \to 0}{\frac{{(x + h)}^{3}\cos{\left(\frac{1}{(x+h)^{2}}\right)} - x^{3}\cos{\left(\frac{1}{x^{2}}\right)}}{h}}$$
How would I evaluate this limit?
|
Apply the definition directly at the point in question, $x = 0$, using the piecewise value $f(0) = 0$; there is no need to expand the difference quotient at a general $x$. Doing so, we get
$$f'(0) = \lim_{h \to 0} \frac{h^3 \cos (1/h^2)}{h}$$
Can you take it from here? Hint: Use L'Hôpital's rule. EDIT: Forget L'Hôpital, which does not apply in this situation! Just note that $|\cos(1/h^2)| \leq 1$ for all $h \neq 0$, so $\left|\frac{h^3\cos(1/h^2)}{h}\right| = h^2\left|\cos(1/h^2)\right| \le h^2 \to 0$, and therefore $f'(0)=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If real number x and y satisfy $(x+5)^2 +(y-12)^2=14^2$ then find the minimum value of $x^2 +y^2$ Problem :
If real number x and y satisfy $(x+5)^2 +(y-12)^2=14^2$ then find the minimum value of $x^2 +y^2$
Please suggest how to proceed on this question. I got this problem from http://www.mathstudy.in/
|
As mathlove has already identified, the curve is the circle $\displaystyle (x+5)^2+(y-12)^2=14^2$.
But I'm not sure how to finish from where he has left off without calculus.
Here is one of the ways:
Using Parametric equation, any point $P(x,y)$ on the circle can be represented as $\displaystyle (14\cos\phi-5, 14\sin\phi+12)$ where $0\le \phi<2\pi$
So, $\displaystyle x^2+y^2=(14\cos\phi-5)^2+(14\sin\phi+12)^2$
$\displaystyle=14^2+5^2+12^2+28(12\sin\phi-5\cos\phi)$
Now can you derive $$-\sqrt{a^2+b^2}\le a\sin\psi-b\cos\psi\le\sqrt{a^2+b^2}?$$
and complete the answer?
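Carrying the hint through: with $a=12$ and $b=5$ we get $-13\le 12\sin\phi-5\cos\phi\le 13$, hence
$$x^2+y^2\ge 14^2+5^2+12^2-28\cdot13=365-364=1,$$
so the minimum value is $1$. (Geometric check: the distance from the origin to the centre $(-5,12)$ is $13<14$, so the origin lies inside the circle, and the nearest point of the circle is at distance $14-13=1$.)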
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
How to calculate coordinates of a vector in relation to a basis Let's say I have a basis and a vector:
$$ \mathcal B_1=\{M_1,M_2,M_3,M_4 \}\ \text{a basis of}\ M_{2\times2}(\mathbb R),\qquad v=\begin{pmatrix}a & b\\ c & d \end{pmatrix}$$
Suppose I have numerical values in all of the above matrices; how do I calculate the coordinates of $v$ relative to the basis?
Is this the general approach:
$x_1M_1+x_2M_2+x_3M_3+x_4M_4=v$
This will yield a $4\times 5$ augmented matrix; should I reduce it to RREF and find all of the $x_i$? Will these be the coordinates $[v]_{\mathcal B_1}=(x_1,x_2,x_3,x_4)^T$?
|
If you write out $x_1M_1+x_2M_2+x_3M_3+x_4M_4=v$ in terms of the given numbers in the $M_i$ and $v$, and indeterminates $x_1,x_2,x_3,x_4$, you will have a system of four linear equations in those four indeterminates, and you can solve for them however you like. The solution is guaranteed to be unique, and it is the coordinates of that particular $v$.
If you are given a different system of $M_i'$, $i=1\dots 4$, then you could take the same approach. It's just that the coefficients in your linear system will change, and so will the solution for most $v$.
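In practice this is one linear solve: flatten each $M_i$ into a column of a $4\times4$ matrix and solve against the flattened $v$. A minimal numerical sketch (the particular matrices below are hypothetical, chosen only for illustration):
```python
import numpy as np

M1 = np.array([[1, 0], [0, 0]])
M2 = np.array([[1, 1], [0, 0]])
M3 = np.array([[0, 0], [1, 0]])
M4 = np.array([[0, 0], [1, 1]])
v  = np.array([[2, 3], [4, 5]])

# Column j of A is M_j flattened; solving A x = vec(v) gives [v]_B.
A = np.column_stack([M.reshape(-1) for M in (M1, M2, M3, M4)])
x = np.linalg.solve(A, v.reshape(-1))
print(x)   # [-1.  3. -1.  5.], i.e. v = -M1 + 3*M2 - M3 + 5*M4
```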
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
derivative calculation involving floor function $\left\lfloor {{x^2}} \right\rfloor {\sin ^2}(\pi x)$ I was asked to find when the function is differentiable and what is the derivative of:
$$\left\lfloor {{x^2}} \right\rfloor {\sin ^2}(\pi x)$$
Now, I am not sure how to treat the floor function.
I'll be glad for help.
|
Hint: Split into two cases: Either $x^2$ is an integer, or it is not an integer. When $x^2\notin{\mathbb{Z}}$, $\lfloor x^2\rfloor$ will be a constant in some small interval around $x$, and so $$\frac{d}{dx} \lfloor x^2\rfloor\sin^2(\pi x)=\lfloor x^2\rfloor 2\pi \sin (\pi x) \cos(\pi x).$$
Try using the definition of the derivative to determine what happens when $x^2\in\mathbb{Z}$. There will be two cases depending on whether or not $x\in\mathbb{Z}$. In one of the cases, the derivative is not defined, and in the other case it will be zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Last digits number theory. $7^{9999}$? I have looked at/practiced several methods for solving, e.g., $7^{9999}$: (a) modulus/congruence, (b) the binomial theorem, (c) totient/congruence, (d) cyclicity.
My actual desire would be a start-to-finish approach using totient/congruence. I have figured out how to work the individual steps, but not how to combine them and in what order, so as to be able to solve any "end digits" question.
|
Note that $7^2=49\equiv 9 \mod 10$; since $9^2=81 \equiv 1 \mod 10$, we have that
$$
7^4\equiv 1 \mod 10
$$
Now notice that $4 \mid 9996$ because $4$ divides $96$ (a number is divisible by $4$ iff its last two digits are divisible by $4$). That leaves us with a $7^3$ remaining which we know that $7^3\equiv 3 \mod 10$.
So that means that
$$
7^{9999}\equiv 7^3 \equiv 1\cdot 3=3 \mod 10
$$
This means the last digit of $7^{9999}$ is $3$.
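For what it's worth, this is a one-liner to verify with Python's built-in modular exponentiation:
```python
print(pow(7, 9999, 10))   # 3, the last digit of 7**9999
```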
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Number of roots of differentiable function I have two related problems that are causing me big trouble:
Let $\;a_1,...,a_n\in\Bbb R\;,\;\;a_i\neq 0\;\;\forall\;i\;$ , and let $\;b_1,...,b_n\in\Bbb R\;,\;\;b_i\neq b_j\;\;\forall\,i\neq j\;$ .
(1) Prove that the equation
$$a_1x^{b_1}+\ldots+a_nx^{b_n}=0$$
has at most $\;n-1\;$ different zeros in $\;(0,\infty)\;$ (Hint: Use induction)
(2) Prove that the equation
$$a_1e^{b_1x}+\ldots+a_ne^{b_nx}=0$$
has at most $\;n-1\;$ different zeros in $\;\Bbb R\;$ .
Now, the following is what I've done so far: for a differentiable function $\;f\;$ , I can prove that if $\;f'(x)=0\;$ has $\;n-1\;$ different zeros, then $\;f(x)\;$ has at most $\;n\;$ different zeros, using the mean value theorem, say.
But I can't see how to use induction in this case: if I differentiate in the first problem, I get $$0=f'(x)=a_1b_1x^{b_1-1}+\ldots +a_nb_nx^{b_n-1}=\frac1x\sum_{i=1}^na_ib_ix^{b_i}$$
...and now?! How does induction kick in here? I though perhap something like: if we suppose WLOG $\;b_1<b_2<...<b_n\;$ , then perhaps inducting on $\;\lfloor b_n\rfloor\;$, but it gets messy and blurry if $\;b_n<0\;$...
Any help, hints are very much appreciated.
|
Hint: $a_1 x^{b_1} + \ldots + a_n x^{b_n}$ has the same number of roots in $(0,\infty)$ as $a_1 + a_2 x^{b_2-b_1} + \ldots + a_n x^{b_n - b_1}$ (divide by $x^{b_1}$, which never vanishes on $(0,\infty)$). Differentiating the latter kills the constant term $a_1$ and leaves an expression of the same shape with only $n-1$ terms, so the induction step follows from your Rolle-type observation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Derivative of a determinant whose entries are functions I do not understand a remark in Adams' Calculus (page 628 $7^{th}$ edition). This remark is about the derivative of a determinant whose entries are functions as quoted below.
Since every term in the expansion of a determinant of any order is a product involving one element from each row, the general product rule implies that the derivative of an $n\times n$ determinant whose elements are functions will be the sum of $n$ such $n\times n$ determinants, each with the elements of one of the rows differentiated. For the $3\times 3$ case we have
$$\frac{d}{dt}\begin{vmatrix}
a_{11}(t) & a_{12}(t) & a_{13}(t) \\
a_{21}(t) & a_{22}(t) & a_{23}(t) \\
a_{31}(t) & a_{32}(t) & a_{33}(t)
\end{vmatrix}=\begin{vmatrix}
a'_{11}(t) & a'_{12}(t) & a'_{13}(t) \\
a_{21}(t) & a_{22}(t) & a_{23}(t) \\
a_{31}(t) & a_{32}(t) & a_{33}(t)
\end{vmatrix}+\begin{vmatrix}
a_{11}(t) & a_{12}(t) & a_{13}(t) \\
a'_{21}(t) & a'_{22}(t) & a'_{23}(t) \\
a_{31}(t) & a_{32}(t) & a_{33}(t)
\end{vmatrix}+\begin{vmatrix}
a_{11}(t) & a_{12}(t) & a_{13}(t) \\
a_{21}(t) & a_{22}(t) & a_{23}(t) \\
a'_{31}(t) & a'_{32}(t) & a'_{33}(t)
\end{vmatrix}.$$
It is not difficult to check this equality by simply expanding both sides. However, the remark sounds like it uses some clever trick to get this result. Can anyone explain it to me, please? Thank you!
|
That remark says most of what needs to be explained. However, I think a more precise explanation of the example is helpful, so I'll give one.
i) $a_{11}(t)\cdot a_{23}(t)\cdot a_{32}(t)$ is an arbitrary term in the expansion of the determinant on the left.
ii) $(a_{11}(t)\cdot a_{23}(t)\cdot a_{32}(t))' = a'_{11}(t)a_{23}(t)a_{32}(t)+a_{11}(t)a'_{23}(t)a_{32}(t)+a_{11}(t)a_{23}(t)a'_{32}(t)$
iii) $a'_{11}(t)a_{23}(t)a_{32}(t)$ is a term in the expansion of the first determinant on the right of the equation; this is the determinant in which the first row is differentiated.
$a_{11}(t)a'_{23}(t)a_{32}(t)$ and $a_{11}(t)a_{23}(t)a'_{32}(t)$ are similar.
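One can also verify the $3\times3$ identity symbolically (a sketch, assuming SymPy is available; the entries $a_{ij}(t)$ are arbitrary symbolic functions of $t$):
```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[sp.Function(f'a{i}{j}')(t) for j in range(1, 4)]
               for i in range(1, 4)])

lhs = A.det().diff(t)

rhs = sp.S.Zero
for r in range(3):
    B = A.copy()
    B[r, :] = B[r, :].diff(t)   # differentiate row r only
    rhs += B.det()

print(sp.expand(lhs - rhs) == 0)   # True
```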
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 1
}
|
Function formal definition The relation $R:= \{(x,y) \mid y= \vert x\vert \} \subseteq \mathbb{Z} \times \mathbb{N}$ is a function,
but the relation $R:= \{(y,x) \mid y= \vert x\vert \} \subseteq \mathbb{N} \times \mathbb{Z}$ is not a function...
It seems to me that the second relation also has the two properties: 1. left-total and 2. right-unique. According to my book I'm wrong :) Maybe a hint would help. Thanks.
|
For the second $R$ notice that you have $(1,-1)\in R$, $(1,1)\in R$ which is against "right-unique".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/623902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Modular arithmetic I've been reading about some examples concerning DSS/DSA signature security, and there is one part of an example where I do not understand the maths. Namely, how do you calculate this:
$w = (s^{-1}$ $mod$ $q)$
In this example let's say $q = 13$ and $s = 10$.
So we have
$w = (10^{-1}$ $mod$ $13) = 4$
How do we get 4 as a result?
|
As $\displaystyle 10\cdot4=40\equiv1\pmod{13}$,
$\displaystyle 10^{-1}\equiv4\pmod{13}$, dividing both sides by $10$, which is legitimate as $(10,13)=1$.
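As a quick check: in Python 3.8+, the built-in `pow` computes modular inverses directly:
```python
print(pow(10, -1, 13))   # 4
print(10 * 4 % 13)       # 1, confirming 4 is the inverse of 10 mod 13
```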
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/624084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Questions about continuity and differentiability $\newcommand{\sgn}{\operatorname{sgn}}$
So I have a couple of questions about differentiability and continuity. For example, let's consider $f(x)=\sin(\frac1x)$. It is defined and continuous for $x\neq0$. Its derivative is $\frac{-\cos(\frac1x)}{x^2}$. It looks to me like the derivative is also continuous for $x\neq0$. Is that correct? If it is, does this mean that $f(x)$ is differentiable on $(-\infty, 0) \cup (0, + \infty)$? I know that $f(x)$ is not differentiable at $x=0$ because it is not defined there.
Let's look now at function
$$
f(x) = \begin{cases}
|x|^p\sin(\frac1x) & \text{for }x\ne0, \\
0 & \text{for }x=0.
\end{cases}
$$
Here are three questions:
For which $p$ is $f(x)$ continuous? For which $p$ is $f(x)$ differentiable? For which $p$ is $f'(x)$ continuous?
Since $|x|^p\sin(\frac1x)$ for $x\neq0$ is a product and composition of continuous functions that are all defined for $x\neq0$, it follows that $f(x)$ is also continuous for $x\neq0$.
Let $p=0$. In that case for $x\neq0$ I get $f(x)=\sin(\frac1x)$. Then $f(x)$ is discontinuous at $0$, because $\sin(\frac1x)$ oscillates between $-1$ and $1$ as $x\to0$ while $f(0) = 0$. In other words $\lim_{x\to 0}f(x)$ doesn't exist and this can't be fixed.
Let $p<0$. In that case for $x\neq0$ I get $f(x)=\frac{\sin(\frac1x)}{|x|^{-p}}$. Then $f(x)$ is discontinuous at $0$, because $\frac{\sin(\frac1x)}{|x|^{-p}}$ oscillates unboundedly as $x\to0$ while $f(0) = 0$. In other words $\lim_{x\to 0}f(x)$ doesn't exist and this can't be fixed.
Let $p>0$. Since $f(x) = |x|^p\sin(\frac1x)$ for $x\neq0$ is a product and composition of continuous functions for $x\neq0$, $f(x)$ is continuous there. Since $\lim_{x\to 0}|x|^p = 0$ and $\sin(\frac1x)$ stays bounded in $[-1, 1]$, we get $\lim_{x\to 0}f(x) = 0 = f(0)$, so $f(x)$ is continuous on all of $\mathbb R$.
Thus $f(x)$ is continuous everywhere exactly when $p>0$.
Next question: for which $p$ is $f(x)$ differentiable?
This is the question I am stuck on. I am assuming that it means differentiable on all of $\mathbb R$; if that's the case then I believe the answer is $p>0$.
$$f'(x) = \begin{cases} p|x|^{p-1}\sin(\frac1x)\sgn(x) - \frac{|x|^{p}\cos(\frac1x)}{x^2}& \text{for }x\neq0, \\ 0& \text{for }x=0 \end{cases}$$
(note $\frac{|x|^{p}}{x^2}=|x|^{p-2}$).
Third question: for which $p$ is $f'(x)$ continuous? I tried at home, and it seems to me that for $p\leq1$ the limit $\lim_{x\to 0}\big(p|x|^{p-1}\sin(\frac1x)\sgn(x) - |x|^{p-2}\cos(\frac1x)\big)$ doesn't exist. For $p>1$ I don't know how to compute the limit. For, say, $p=3/2$ I get $f'(x) = \frac32|x|^{\frac12}\sin(\frac1x)\sgn(x) - |x|^{-\frac12}\cos(\frac1x)$. While $\frac32|x|^{\frac12}\sin(\frac1x)\sgn(x)$ goes to $0$, I don't know how to handle $|x|^{-\frac12}\cos(\frac1x)$. Any help would be appreciated.
|
Your discussion about the continuity properties looks acceptable (although I did not check every single statement you make).
I suggest that you compute directly
$$
\lim_{x \to 0} \frac{f(x)-f(0)}{x}
$$
to investigate the differentiability of $f$ at $x=0$. Your approach is not equivalent to the differentiability of the function: if $\lim_{x \to 0}f'(x)$ exists then $f'(0)$ exists too. But the opposite implication is, in general, wrong.
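Concretely, since $f(0)=0$, the difference quotient at $0$ is
$$\frac{f(h)-f(0)}{h}=\frac{|h|^{p}\sin(1/h)}{h}=|h|^{p-1}\operatorname{sgn}(h)\sin\frac1h.$$
For $p>1$ this tends to $0$ by the squeeze $\left||h|^{p-1}\operatorname{sgn}(h)\sin\frac1h\right|\le|h|^{p-1}\to0$, so $f'(0)=0$. For $p\le1$ it has no limit: along $h_k=\frac{1}{2k\pi}$ the quotient vanishes, while along $h_k=\frac{1}{\pi/2+2k\pi}$ it equals $h_k^{p-1}\ge1$. So $f$ is differentiable at $0$ exactly when $p>1$.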
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/624153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Taking seats on a plane: probability that the last two persons take their proper seats 100 men are getting on a plane (containing 100 chairs) one by one. Each one has a seat number, but the first one forgot his number, so he randomly chooses a chair and sits on it. The others do know their own numbers; therefore, if their seat is not occupied, they sit on it, and otherwise they randomly choose a chair and sit. What is the probability that the last two persons sit on their own chairs?
Edit: As @ByronSchmuland mentioned, a similar but different problem is HERE
|
Using the idea in the rephrased solution Byron linked in the comments: assume the first guy keeps getting evicted from his seat. Then there will come a time when the first guy is sent to one of the $3$ remaining positions. Of these, only $1$ leaves the last two passengers' seats available, so the probability of this happening is $\frac{1}{3}$; and if he does get the correct one, then the last two passengers will naturally take their seats correctly. So the probability is $\frac{1}{3}$.
In fact you can generalize this to see the probability the last $n$ persons get in their places is $\frac{1}{n+1}$ when $n<m$ where $m$ is the total number of seats. For $n=m$, it is trivially $\frac{1}{n}=\frac{1}{m}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/624233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
}
|
How to prove this inequality (given two inequalities)? How to prove that:
$$\frac{au+bv}{a+b} < y$$
given that:
$$u<y, v < y$$
Here, a, b are positive integers and u, v and y are real numbers between 0 and 1 (inclusive).
|
Well, the other posters have great answers, but you can also see that if $v < y$ then $vb < yb$ since $b > 0$, and likewise $ua < ya$ since $a > 0$. Then we can see that:
\begin{eqnarray}
ua +vb < ya + yb
\end{eqnarray}
From there it's a simple algebraic manipulation to get your inequality: divide both sides by $a+b>0$. (Remember you can do a lot of this because $a$ and $b$ are positive; otherwise we would have to reverse the inequality sign!)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/624314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 3
}
|
Can I switch the order of integration and "Real(z)" operation? Let $f(z,\eta)$ be an entire function.
I need to calculate (numerically) the integral: $$\int\limits _{0}^{\pi}\mbox{Re}\left(\int\limits _{0}^{\pi}f\left(z,\eta\right)d\eta\right)dz$$
Can I switch the inner order and calculate the next integral instead? $$\mbox{Re}\left[\int\limits _{0}^{\pi}\int\limits _{0}^{\pi}f\left(z,\eta\right)d\eta dz\right]$$
|
Yes, due to the linearity of integration.
The justification is as follows: Define $F(z) = \int_0^\pi f(z, \eta) \, d\eta$. Then $F(z) = F_1(z) + i F_2(z)$ where $F_1(z) = \Re F(z)$ and $F_2(z) = \Im F(z)$. On one hand we have
\begin{align}
\int_0^\pi \Re \left( \int_0^\pi f(z, \eta) \, d\eta \right) \, dz &= \int_0^\pi \Re F(z) \, dz \\
&= \int_0^\pi F_1(z) \, dz.
\end{align}
On the other hand we have
\begin{align}
\Re \left( \int_0^\pi \int_0^\pi f(z, \eta) \, d\eta \, dz \right) &= \Re \left( \int_0^\pi F(z) \, dz \right) \\
&= \Re \left( \int_0^\pi (F_1(z) + i F_2(z)) \, dz \right) \\
&= \Re \left( \int_0^\pi F_1(z) \, dz + i \int_0^\pi F_2(z) \, dz \right).
\end{align}
Since $F_1(z)$ and $F_2(z)$ are real, and since they are integrated on the interval $[0, \pi] \subset \mathbb{R}$, the two definite integrals are real. Therefore
$$ \Re \left( \int_0^\pi F_1(z) \, dz + i \int_0^\pi F_2(z) \, dz \right) = \int_0^\pi F_1(z) \, dz $$
Therefore the original two expressions are equal, as claimed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/624419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
ampleness of invertible sheaves Let $f: X\rightarrow Y$ be a morphism of schemes over a field $k$. Let $\mathcal{L}$ be an invertible sheaf on $Y$. My question is
*
*If $\mathcal{L}$ is ample, is $f^*\mathcal{L}$ ample?
*If 1. is not true, is there a condition under which $f^*\mathcal{L}$ is ample?
|
The conditions for the pullback of an ample line bundle to be ample (at least the ones I am familiar with) are that $f : X \to Y$ be a finite morphism, and that $X,Y$ be Noetherian schemes.
I will now prove this without cohomology. Suppose we have a coherent sheaf $\mathscr{F}$ on $X$. Then $f_\ast \mathscr{F}$ is coherent because $Y$ is Noetherian and $f$ is finite. As $\mathcal{L}$ is ample, there is $n_0 \in \Bbb{N}$ such that for all $n \geq n_0$, $f_\ast \mathscr{F} \otimes \mathcal{L}^{\otimes n}$ is globally generated. Now fix an $n \geq n_0$; we have a surjection
$$\mathcal{O}_Y^I \to f_\ast\mathscr{F}\otimes \mathcal{L}^{\otimes n}\to0$$
which pulls back to give a surjection
$$\mathcal{O}_X^I \to f^\ast f_\ast \mathscr{F}\otimes f^\ast\mathcal{L}^{\otimes n} \to 0.$$
As $f$ is finite (and thus affine), the natural counit map $f^\ast f_\ast \mathscr{F} \to \mathscr{F}$ is surjective, so we get a surjection $\mathcal{O}_X^I \to \mathscr{F} \otimes f^\ast\mathcal{L}^{\otimes n} \to 0$, showing that $f^\ast \mathcal{L}$ is ample.
For a converse of the result above, we need that $f : X \to Y$ be finite and surjective. The hypotheses cannot be dropped as the following example will show: Take $f : \Bbb{A}^1 \to \Bbb{P}^1$ that sends $x \mapsto [1:x]$, note $f$ is not surjective. For any $ n < 0$, $\mathcal{O}(n)$ is not ample but its pullback is $\mathcal{O}_{\Bbb{A}^1}$ which is ample (the structure sheaf of a scheme is ample iff the scheme is affine).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/624508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Non-uniqueness of group structure for affine algebraic groups We know that every abelian variety has a unique group structure, but in the affine case, is it the case that every affine algebraic group has more than one (up to isomorphism) group structure?
|
Since the affine variety underlying a non-trivial affine group always has non-identity automorphisms with no fixed points, you can always conjugate its group structure by one of them to get a different group structure.
Later. As for the more precise question, the answer is also no. Take an algebraically closed field. The only non-complete connected algebraic groups of dimension one are the multiplicative and the additive groups, and they are non-isomorphic as varieties. It follows that each of them is an algebraic group in exactly one way up to isomorphism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/624601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Region of convergence for the series of functions Find out the region of convergence for the following series of function.
$$\sum_{m=1}^\infty x^{\log (m)}$$
Here $x \in \mathbb{R}$. This is a series of functions. I was trying to find the radius of convergence by the root test and the ratio test, but I am not getting a suitable answer.
What will be the region of convergence if we consider $x \in \mathbb{C}$?
Thank you for your help.
|
$x \in \mathbb{R}$?? How do we define $(-2)^{\log(2)}$? And in my opinion, this is not really a power series.
As far as I can tell, this series converges if and only if $x \in [0, \frac{1}{e})$.
Hint:
Rewrite the series as:
$ \sum_{k=0}^{\infty} \sum_{m=[e^k]}^{[e^{k+1}]-1} x^{\log(m)} $ (where $[\,\cdot\,]$ is the floor function)
And we see that, since $\log(m)\in[k,k+1)$ for every $m$ in the inner sum and there are $[e^{k+1}]-[e^k]$ such terms,
$([e^{k+1}]-[e^k])\cdot\min(x^{k},x^{k+1}) \le \sum_{m=[e^k]}^{[e^{k+1}]-1} x^{\log(m)} \le ([e^{k+1}]-[e^k])\cdot\max(x^{k},x^{k+1})\quad(*)$
Which leads to the interval of convergence.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/624699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Solution verification: solving $\sqrt{x-4}-\sqrt{x-5}+1=0$ I solved the following equation, and I just want to be sure I did it right.
This is the procedure:
$$
\sqrt{x-4}-\sqrt{x-5}+1=0\\
\sqrt{x-4}=\sqrt{x-5}-1\\
\text{squaring both sides gives me:}\\
(\sqrt{x-4})^2=(\sqrt{x-5}-1)^2\\
x-4=(\sqrt{x-5})^2-2\sqrt{x-5}+1\\
x-4=x-5-2\sqrt{x-5}+1\\
x-4=x-4-2\sqrt{x-5}\\
\text{subtracting } x\text{, and adding 4 to both sides}\\
0=-2\sqrt{x-5}\\
\text{switching both sides}\\
\sqrt{x-5}=0\\
\text{squaring both sides}\\
x-5=0\\
x=5\\
\text{When I place 5 in the equation, I get:}\\
\sqrt{5-4}-\sqrt{5-5}+1=0\\
\sqrt{1}-\sqrt{0}+1=0\\
1-0+1=0\\
2=0\text{, which is false}\\
\text{this means that the equation doesn't have any solution, right??}\\
$$
Any advice and suggestion is helpful.
Thanks!!!
|
Easiest way to see it is to take both square roots to the other side, $\sqrt{x-5}-\sqrt{x-4}=1$, and multiply by the conjugate:
$$
\begin{split}
\sqrt{x-5}+\sqrt{x-4} &= \left(\sqrt{x-5}-\sqrt{x-4}\right)\left(\sqrt{x-5}+\sqrt{x-4}\right)\\
&= (x-5)-(x-4) = -1,
\end{split}
$$
but $\sqrt{\ldots} \geq 0$ so this is impossible...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/624974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 0
}
|
What's the difference between $f \cdot g$ and $f(g(x))$? For example if
$f(x) = x + 2$
and
$g(x) = 4x - 1$
Then what would be the difference in $f \cdot g$ and $f(g(x))$?
|
The notation $f \cdot g$ means that for every $x$ the function is
$$ (f \cdot g)(x) = f(x) \cdot g(x) $$
which is pointwise multiplication.
On the other hand $f \circ g$ is the composition of functions,
$$ (f \circ g)(x) = f(g(x)) \ . $$
For your examples:
$$ f(x) \cdot g(x) = (x+2) \cdot (4x-1) = 4x^2 + 8x - x - 2 = 4x^2 + 7x -2 $$
while
$$ (f \circ g)(x) = f(g(x)) = f(4x-1) = (4x-1)+2 = 4x + 1 \ . $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/625041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|