Is there a function that gives the same result for a number and its reciprocal? Is there a (non-piecewise, non-trivial) function where $f(x) = f(\frac{1}{x})$?
Why?
It would be nice to compare ratios without worrying about the ordering of numerator and denominator. For example, I might want to know whether the "magnitude" of the ratio (maybe the "absolute ratio") of the widths of two objects is greater than $2$, but not care which is larger.
It occurred to me that there's a common solution for this problem when comparing the difference of two numbers: the square of a number is the same as the square of its opposite - $(a-b)^2=(b-a)^2$. This is really useful with Euclidean distances, because you don't have to worry about the order of subtraction or use absolute values. Can we get the same elegance for ratios?
Difference: $g(a-b)=g(b-a) \rightarrow g(x)=x^2$
Ratio: $f(\frac{a}{b})=f(\frac{b}{a}) \rightarrow f(x)=\ ?$
|
For lack of anything better: $f(x)=\left(x - \frac{1}{x}\right)^2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 1
}
|
Solve algebraically: $0.5=\dfrac{365!}{365^{n}(365-n)!}$ How does one go about solving this equation? Not sure how to approach this, as no factorials will cancel out. I'm sorry, I meant $\dfrac{365!}{365^{n}(365-n)!}=0.5$.
|
We usually solve this equation numerically:
$$a_n=\frac{365!}{365^n(365-n)!}$$
Hence $a_1=1$ and $$a_{n+1}=a_n\cdot\frac{365-n}{365}$$
If you want to solve $a_n=p$, just write a short program that computes $a_n$ from $a_1$, multiplying at each step by $\frac{365-n}{365}$, until you reach $p$.
Here $$a_{23}=0.4927027656$$
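A minimal sketch of that numeric search in Python, using exact fractions (the function name and the threshold $p=1/2$ are just for illustration):

```python
from fractions import Fraction

def first_n_below(p=Fraction(1, 2)):
    """Iterate a_{n+1} = a_n * (365 - n)/365 from a_1 = 1 until a_n < p."""
    a, n = Fraction(1), 1
    while a >= p:
        a *= Fraction(365 - n, 365)
        n += 1
    return n, a

n, a = first_n_below()
print(n, float(a))  # n = 23, a_23 ≈ 0.4927
```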
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
How to solve this sum limit? $\lim_{n \to \infty } \left( \frac{1}{\sqrt{n^2+1}}+\cdots+\frac{1}{\sqrt{n^2+n}} \right)$ How do I solve this limit?
$$\lim_{n \to \infty } \left( \frac{1}{\sqrt{n^2+1}}+\cdots+\frac{1}{\sqrt{n^2+n}} \right)$$
Thanks for the help!
|
Extract $n$ from the radical. Then you have a sum of $n$ terms, each close to $1$, divided by $n$. Are you able to continue from here?
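Not a proof, but a quick numeric check (in Python) that the sum behaves as the hint suggests:

```python
import math

def partial(n):
    """The sum 1/sqrt(n^2+1) + ... + 1/sqrt(n^2+n)."""
    return sum(1.0 / math.sqrt(n * n + k) for k in range(1, n + 1))

for n in (10, 100, 10_000):
    print(n, partial(n))  # approaches 1 as n grows
```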
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Number of $2n$-letter words using double $n$-letter alphabet, without consecutive identical letters How many words with $2n$ letters can be created if I have an alphabet with $n$ letters and each of the letters have to occur exactly twice in the word, but no two consecutive letters are equal?
Thanks!
|
There is no simple closed formula for this (I think no known closed formula at all), but one can give a formula as a sum by using inclusion-exclusion.
First consider such orderings where the pairs of identical letters are distinguishable. Then $\displaystyle \sum_{k=0}^n (-1)^k2^k(2n-k)!\binom{n}{k}$ gives the number of such orderings by inclusion-exclusion: there are $(2n-k)!$ ways for $k$ given pairs to be together (merge those pairs into single elements and order everything arbitrarily), $2^k$ ways to order within those given pairs, and $\binom{n}{k}$ ways to pick those $k$ pairs. Dividing by $2^n$ to eliminate the ordering within the pairs of identical elements, we obtain the formula $\displaystyle \frac{1}{2^n}\sum_{k=0}^n (-1)^k2^k(2n-k)!\binom{n}{k}$.
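A brute-force check of the formula for small $n$ (a sketch; `formula` and `brute` are hypothetical helper names):

```python
from itertools import permutations
from math import comb, factorial

def formula(n):
    # (1/2^n) * sum_k (-1)^k 2^k (2n-k)! C(n,k), as derived above
    total = sum((-1) ** k * 2 ** k * factorial(2 * n - k) * comb(n, k)
                for k in range(n + 1))
    return total // 2 ** n

def brute(n):
    # Count distinct words over n letters, each used twice, no equal neighbours.
    letters = [c for c in range(n) for _ in range(2)]
    words = set(permutations(letters))
    return sum(all(w[i] != w[i + 1] for i in range(2 * n - 1)) for w in words)

for n in (1, 2, 3):
    print(n, formula(n), brute(n))  # the two counts agree
```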
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 0
}
|
Basis of Partial Fractions I need some guidance for the following:
For the polynomial q(x) = (x − 1)(x − 2)(x − 3)(x − 4), with degree 4,
describe the basis for P3 that partial fractions asserts we have, and demonstrate that the
collection of polynomials are linearly independent; that is, that
a1p1(x) + a2p2(x) + a3p3(x) + a4p4(x) = 0 implies that a1 = a2 = a3 = a4 = 0.
Explain why this implies that your collection of polynomials is therefore a basis for P3.
I know that $P_n$ has the "standard" basis consisting of the monomials (one-term polynomials)
$\{1, x, x^2, \ldots, x^n\}$
because they span $P_n$: the very definition of a polynomial as a function
$p(x) = a_0x^n + a_1x^{n-1} + \cdots + a_{n-1}x + a_n$
expresses it as a linear combination of monomials.
The monomials are also linearly independent:
If $p(x) = a_0x^n + a_1x^{n-1} + \cdots + a_{n-1}x + a_n = 0$, then plugging in $x = 0$ gives $p(0) = a_n = 0$, so $p(x) = a_0x^n + a_1x^{n-1} + \cdots + a_{n-1}x = x[a_0x^{n-1} + a_1x^{n-2} + \cdots + a_{n-1}] = 0$, hence $a_0x^{n-1} + a_1x^{n-2} + \cdots + a_{n-1} = 0$ as well. Repeating this argument (plugging in $x = 0$) shows one coefficient after another equals zero, resulting in $a_n = a_{n-1} = \cdots = a_1 = a_0 = 0$, establishing linear independence.
|
Start with $p(x)=a_0+a_1x+a_2x^2+a_3x^3$ and use the partial fractions setup
$$\frac{p(x)}{q(x)}=\frac{b_1}{x-1}+\frac{b_2}{x-2}+\frac{b_3}{x-3}+\frac{b_4}{x-4}.$$
Now as usual multiply both sides by $q(x)=(x-1)(x-2)(x-3)(x-4)$ and you'll get each $b_k$ multiplied by the product of three of the four factors $x-i$ on the right, and on the left you have the polynomial $p(x)$ with its four coefficients (up to third power). By equating coefficients you can get the values of the $b_i$ in terms of the $a_i.$ It would seem the basis referred to is just that obtained by multiplying (expanding out) each product of three of the four terms $x-i$, because then by use of the multipliers $b_i$ you get any cubic $p(x)$ as a linear combination of those four cubics.
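The independence can also be seen concretely by evaluating at the roots of $q$; a small Python sketch of that check:

```python
def p(i, x):
    """p_i(x) = q(x)/(x - i): the product of (x - j) for j in {1,2,3,4}, j != i."""
    out = 1
    for j in (1, 2, 3, 4):
        if j != i:
            out *= x - j
    return out

# Each p_i vanishes at the other three roots but not at x = i, so evaluating
# a1*p1 + a2*p2 + a3*p3 + a4*p4 = 0 at x = i forces a_i = 0: independence.
for i in (1, 2, 3, 4):
    assert all(p(i, j) == 0 for j in (1, 2, 3, 4) if j != i)
    assert p(i, i) != 0
print("linearly independent")
```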
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Axiom of separation The Axiom of Separation states that, if $A$ is a set, then $\{a \in A : \Phi(a)\}$ is a set.
Given a set $B \subseteq A$, suppose I define $B=\{ a \in A : a\notin B \}$.
This, of course, leads to a contradiction, because we define $B$ by the elements not in $B$. My question is: which part of the axioms says that this kind of definition is not possible?
Thank you!
|
There is nothing at all to stop you defining a set $\Sigma$ such that $x \in \Sigma$ iff $x \in A \land x \notin B$, so $\Sigma = \{x \in A \mid x \notin B\}$.
But what you've shown is that $\Sigma \neq B$!
No problem so far.
What you can't do is then go on (having a knock-down argument to show that $\Sigma \neq B$) to assert, as you do, $\Sigma = B$. What could possibly legitimate that???
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
I do not know how to start this problem-help needed There are 6 people who are holding hands, such that each person is holding hands with exactly 2 other people. How many ways are there for them to do that?
My friend challenged me to this problem and i dont know where to start...
Thanks for any help...=)
|
Assuming we don't care which hand is being used to do the holding, then with three people there is only one way. Add a fourth person and he/she can go into 1 of 3 positions, making 3.
Another (fifth) person has 4 positions to choose from, so $4 \times 3 = 12$.
Another (sixth) person has 5 positions to choose from, so $5 \times 12 = 60$.
This assumes they are all in one ring. If not, the only alternative is two groups of three. Within each group of three there is only one way they can hold hands, and there are $6 \times 5 \times 4 = 120$ ordered ways to pick one group of three. We divide by $3! = 6$ because, by the way we picked the groups, ABC and ACB are counted as separate, and by a further $2$ because choosing a group is the same as choosing its complement (if Ann, Bob and Carol are holding hands, we don't care which of the two tables they are at). That gives $120/(6 \times 2) = 10$.
So the answer is $60 + 10 = 70$.
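The count can also be verified by brute force: an arrangement is a graph on 6 labelled vertices in which every vertex has degree exactly 2. A sketch in Python:

```python
from itertools import combinations

people = range(6)
edges = list(combinations(people, 2))        # 15 possible hand-holding pairs

count = 0
for chosen in combinations(edges, 6):        # such a graph has exactly 6 edges
    deg = [0] * 6
    for u, v in chosen:
        deg[u] += 1
        deg[v] += 1
    if all(d == 2 for d in deg):             # everyone holds exactly two hands
        count += 1
print(count)  # 70
```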
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
$A = \left\{ (1,x) \in \mathbb{R}^2 : x \in [2,4] \right\} \subseteq \mathbb{R}^2$ is bounded and closed but not compact? Is it true that the set $A = \left\{ (1,x) \in \mathbb{R}^2 : x \in [2,4] \right\} \subseteq \mathbb{R}^2$ is bounded and closed but not compact? We consider the space $(\mathbb{R}^2, d_C)$ where $$d_C(x,y) = \begin{cases} d_E(x,y) & \text{if } x, y, 0 \text{ lie on one line} \\ d_E(x,0)+d_E(0,y) & \text{otherwise} \end{cases}$$
where of course $d_E$ is the Euclidean metric.
|
Yes, that is right. Note that each point $a\in A$ is isolated, because if you choose $\epsilon<d_E(a,0)$, then this ball does not contain any other point $b\in A$, as the distance $d_C(a,b)$ would be larger than $d_E(a,0)$. That means that $A$ is discrete.
$A$ is bounded since the distance from each point in $A$ to $0$ is just the Euclidean distance.
Finally, $A$ is closed:
*
*$0$ has its Euclidean distance from $A$ which is clearly positive.
*Each point $y=(y_1,y_2)\ne0$ not in $A$ has a $d_E(y,0)$-ball not containing any point on a different line through the origin.
*Such a point also has a positive distance from $\left(1,\frac{y_2}{y_1}\right)$, the only possible point of $A$ which can be on the same line through $0$, namely $|1-y_1|\sqrt{1+\left(\frac{y_2}{y_1}\right)^2}$
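For intuition, here is a small Python sketch of this metric (sometimes called a hub or "French railway" metric; the sampling of points of $A$ is illustrative):

```python
import math

def d_E(x, y):
    return math.hypot(x[0] - y[0], x[1] - y[1])

def d_C(x, y):
    # Euclidean if x, y and the origin are collinear, otherwise route through 0.
    cross = x[0] * y[1] - x[1] * y[0]
    if abs(cross) < 1e-12:
        return d_E(x, y)
    return d_E(x, (0.0, 0.0)) + d_E((0.0, 0.0), y)

# Distinct points of A = {(1, t) : t in [2, 4]} are never collinear with 0,
# so their d_C distance exceeds d_E(a, 0): each point of A is isolated.
pts = [(1.0, 2.0 + 0.01 * k) for k in range(201)]
a = pts[0]
assert all(d_C(a, b) > d_E(a, (0.0, 0.0)) for b in pts[1:])
print("every point of A is isolated")
```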
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Good book for algebra after Herstein? What is a good book to read after Herstein's Topics in Algebra?
I've read in reviews somewhere that it's a bit shallow...
My main interests are algebraic and differential geometry. I prefer books with challenging exercises.
Something that crossed my mind: perhaps it's preferable that I learn different topics from different books?
|
One of my favourite texts for mid-level algebra is Dummit and Foote's $\textit{Abstract Algebra}$. Another good text is Eisenbud's $\textit{Commutative Algebra: with a View Toward Algebraic Geometry}$, which is more of a graduate-level textbook.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
On the definition of the direct sum in vector spaces We say that if $V_1 , V_2, \ldots, V_n$ are vector subspaces, the sum is direct if and only if the morphism $u$ from $V_1 \times \cdots \times V_n$ to $V_1 + \cdots + V_n$ which maps $(x_1, \ldots, x_n)$ to $x_1 + \cdots + x_n$ is an isomorphism.
Looking at the definition of a direct sum in categories in Wikipedia, it is clear that $V_1 \times \cdots \times V_n$ can be given canonical injections so that it is a categorical sum. Thus, we see that if the $u$ is an isomorphism, $V_1 + \cdots + V_n$ with $f_i(v_i)=v_i$ with i between 1 and n as canonical injections is also a sum.
But what if there is another isomorphism between $V_1\times\cdots\times V_n$ and $V_1 +\cdots+V_n$, is $u$ always an isomorphism (true if all $V_i$ are of finite dimension) ? Is $(V_1 + \cdots +V_n,(f_i)_{i=1 \ldots n})$ still a sum (again true if all $V_i$ are of finite dimension) ?
|
No. It is possible that $V_1$ and $V_2$ are subspaces of some vector space $W$ such that the subspace $V_1 + V_2$ of $W$ is isomorphic to $V_1 \times V_2$, but the canonical map $V_1 \times V_2 \to V_1 + V_2$ is not an isomorphism.
Example. Take $W$ to be the vector space of infinite sequences of real numbers (with or without finite support; it doesn't really matter) and take $V_1$ and $V_2$ equal to $W$. Then $V_1 \times V_2 \cong W = V_1 + V_2$ (by interleaving the coordinates of $V_1$ and $V_2$), but the canonical map $V_1 \times V_2 \to V_1 + V_2 = W$ is not injective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/580991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Integrate: $ \int_0^\infty \frac{\log(x)}{(1+x^2)^2} \, dx $ without using complex analysis methods Can this integral be solved without using any complex analysis methods: $$ \int_0^\infty \frac{\log(x)}{(1+x^2)^2} \, dx $$
Thanks.
|
A really easy method: use the substitution $x=\tan t$ offered by Jack D'Aurizio, then finish with elementary integration by parts, without using Fourier series. (For the first equality below, write $\cos^2 x=\frac{1+\cos 2x}{2}$ and note that $\int_0^{\pi/2}\ln\tan x\,dx=0$ by the symmetry $x\mapsto\frac{\pi}{2}-x$.)
$$\begin{align}
&\int_0^\frac{\pi}{2}\cos^2x\ln\tan x dx=\frac12 \int_0^\frac\pi2 \cos2x\ln\tan xdx\\
=&\frac14\sin2x\ln\tan x\bigg|^\frac{\pi}{2}_0-\frac14\int_0^\frac{\pi}2\frac{\sin 2x}{\sin x \cos x}dx=-\frac14\int_0^\frac\pi22dx=\color{blue}{-\frac\pi 4}
\end{align}$$
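A numerical sanity check of the final value (midpoint rule in Python; the node count is arbitrary):

```python
import math

def integrand(t):
    return math.cos(t) ** 2 * math.log(math.tan(t))

# Midpoint rule on (0, pi/2); midpoints avoid the log-singularities at the ends.
N = 200_000
h = (math.pi / 2) / N
approx = h * sum(integrand((k + 0.5) * h) for k in range(N))
print(approx, -math.pi / 4)  # both ≈ -0.7854
```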
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/581155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 12,
"answer_id": 6
}
|
Concrete Mathematics - Towers of Hanoi Recurrence Relation I've decided to dive into Concrete Mathematics despite only having done a couple of years of undergraduate maths many years ago. I'm looking to work through all the material while plugging gaps in my knowledge, no matter how large they are.
However, solving recurrence relation 1.1, the Towers of Hanoi, is causing me issues.
The recurrence relation is as follows:
$T_0 = 0$
$T_n=2T_{n-1}+1$ for $n>0$
To find the closed form the book adds 1 to both sides of the equations to get:
$T_0+1=1$
$T_n+1=2T_{n-1}+2$
Then we let $U_n=T_n+1$. Now I get lost: how do I get from the line above to the two lines below after substituting $U_n$? It seems like I'm missing something simple.
$U_0=1$
$U_n=2U_{n-1}$
Then the book goes on to say:
It doesn't take genius to discover that the solution to this recurrence is just $U_n=2^n$; hence $T_n=2^n-1$.
Again, I'm lost, how do I get to $U_n=2^n$?
A bit disheartening considering this is meant to be so easy.
Any help is appreciated. Thanks.
|
You’re just missing a little algebra. You have $U_n=T_n+1$ for all $n\ge 0$, so $U_{n-1}=T_{n-1}+1$, and therefore $2T_{n-1}+2=2(T_{n-1}+1)=2U_{n-1}$. Combine this with $T_n+1=2T_{n-1}+2$, and $U_n=T_n+1$, and you get $U_n=2U_{n-1}$, with $U_0=1$.
Now notice that $U_n$ is just doubling each time $n$ is increased by $1$:
$$\begin{align*}
U_1&=2U_0\\
U_2&=2U_1=2^2U_0\\
U_3&=2U_2=2^3U_0\\
U_4&=2U_3=2^4U_0
\end{align*}$$
The pattern is clearly going to persist, so we conjecture that $U_n=2^nU_0$ for each $n\ge 0$. This is certainly true for $n=0$. Suppose that it’s true for some $n\ge 0$; then $$U_{n+1}=2U_n=2\cdot2^nU_0=2^{n+1}U_0\;,$$ and the conjecture follows by mathematical induction.
Now we go back and use the fact that $U_0=1$ to say that $U_n=2^n$ for each $n\ge 0$, and hence $T_n=U_n-1=2^n-1$.
In my answer to this question I solved another problem using this technique; you might find the explanation there helpful.
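The whole derivation can also be checked in a few lines of Python (a throwaway sketch):

```python
def hanoi(n):
    """T_n via the recurrence T_0 = 0, T_n = 2*T_{n-1} + 1."""
    t = 0
    for _ in range(n):
        t = 2 * t + 1
    return t

# Closed form T_n = 2^n - 1, i.e. U_n = T_n + 1 = 2^n:
for n in range(20):
    assert hanoi(n) == 2 ** n - 1
print(hanoi(10))  # 1023
```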
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/581236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
The Probability of Rolling the Same Number with 10 Dice That Have 19 Sides I like dice, and I want to know the probability of rolling the same number with 10 dice that each have 19 sides. Also, do such dice exist?
|
You are rolling $10$ dice each with $19$ sides.
The first die can land on anything, but then all of the rest have to land on that same number.
The chance that each of the remaining dice lands on this number is $\frac{1}{19}$, since they have 19 sides. And since the rolls are independent, this means we have a probability of
$$\frac{1}{19^{9}}$$
to roll the same number on every die.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/581335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
When proving a statement by induction, how do we know which case is the valid 'base'? For example, when proving $2^n < n!$, $4$ is the base that works for this exercise; starting from there we prove the statement for $p + 1$, assuming $p$ is at least $4$, and we have our result. However, I find it odd to determine the first valid value without a prior proof.
The question is: how?
I don't want to hand in my homework with assumptions justified like "it's obvious, just try $0,1,2,3$ and see that they don't work".
|
The base case is usually so trivial that it is either obvious or can be immediately calculated without difficulty. However, in a case such as yours where the first values do not work, one would usually proceed to show that there are a finite number of cases where the relation is not true, and then proceed to perform induction on the first (few) cases that do work.
For instance, in number theory the primes 2 and 3 are often exceptions to a relationship (because one is the only even prime, and they are the only consecutive primes), so $p \ge 5$ or $p \gt 5$ are common caveats.
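In practice one can locate the base case with a short search; a sketch for the $2^n < n!$ example (the code only checks finitely many $n$, of course; the induction supplies the rest):

```python
from math import factorial

# First n for which 2^n < n! holds:
first = next(n for n in range(100) if 2 ** n < factorial(n))
print(first)  # 4

# The finitely many earlier cases all fail, and the claim holds well beyond:
assert all(2 ** n >= factorial(n) for n in range(first))
assert all(2 ** n < factorial(n) for n in range(first, 60))
```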
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/581409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Why, while checking consistency in $3\times3$ matrix with unknowns, I check only last row? I would like to know whether my thinking is right. So, having 3 linear equations,
$$
\begin{align}
x_1 + x_2 + 2x_3 & = b_1 \\
x_1 + x_3 & = b_2 \\
2x_1 + x_2 + 3x_3 & = b_3
\end{align}
$$
I build $3\times 3$ matrix
\begin{bmatrix}1&1&2&b_1\\1&0&1&b_{2}\\2&1&3&b_3\end{bmatrix}
That reduces to
\begin{bmatrix}1&1&2&b_1\\0&1&1&b_1-b_2\\0&0&0&b_3-b_2-b_1 \end{bmatrix}
Now the thing is that if $b_3 = b_2 + b_1$ (which the last row makes obvious), then the system is consistent.
My doubt was: why don't I need to check the same for the second row? Is it because there is no way for a system of two equations (the first and second rows), which still contains unknowns ($x_2$ and $x_3$), to be inconsistent?
|
The second row corresponds to the equation $1x_2+1x_3=b_1-b_2$. This can be solved for any values of $b_1, b_2$. For example, we can take $x_2=b_1-b_2$, and $x_3=0$.
The third row corresponds to the equation $0x_1+0x_2+0x_3=b_3-b_2-b_1$. This cannot be solved if $b_3-b_2-b_1\neq 0$. No matter what $x_1,x_2,x_3$ are, you can't make 0 equal to 1.
However if $b_3-b_2-b_1=0$ then any values of $x_1,x_2,x_3$ will work.
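Concretely, the third equation is the sum of the first two, which is exactly where the condition $b_3 = b_1 + b_2$ comes from; a small Python check (the sample right-hand side is arbitrary):

```python
rows = [(1, 1, 2), (1, 0, 1), (2, 1, 3)]   # coefficient rows of the system

# The third row is the sum of the first two:
assert tuple(a + b for a, b in zip(rows[0], rows[1])) == rows[2]

# So the system can only be consistent when b3 = b1 + b2; in that case a
# solution always exists, e.g. with b = (5, 3, 8) take x = (3, 2, 0):
x = (3, 2, 0)
b = (5, 3, 8)
assert all(sum(r * xi for r, xi in zip(row, x)) == bi for row, bi in zip(rows, b))
print("consistent when b3 = b1 + b2")
```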
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/581519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is statistical dependence transitive? Take any three random variables $X_1$, $X_2$, and $X_3$.
Is it possible for $X_1$ and $X_2$ to be dependent, $X_2$ and $X_3$ to be dependent, but $X_1$ and $X_3$ to be independent?
Is it possible for $X_1$ and $X_2$ to be independent, $X_2$ and $X_3$ to be independent, but $X_1$ and $X_3$ to be dependent?
|
*
*Let $X,Y$ be independent real-valued variables each with the standard normal distribution $\mathcal N(0,1)$
We have then: $$EX = EY = 0, EX^2 = EY^2 = 1.$$
Consider the random variables $U = X + Y$, $V = X - Y$. We have $E(UV) = 0$.
Since $U$ and $V$ are jointly normal and uncorrelated, they are statistically independent. Meanwhile, each of the pairs $(X, U)$, $(X, V)$, $(Y, U)$ and $(Y, V)$ is statistically dependent.
*Consider a regular triangular pyramid whose 4 faces are colored as follows:
$$(1,R), (2,G), (3, B), (4, RGB).$$ When rolling the pyramid, each face lands down with equal probability $1/4$.
The probabilities of observing a given color on the landed face are:
$$P(R) = P(\{(1,R),(4,RGB)\}) = 1/2.$$ Similarly, $$P(G) = 1/2, \quad P(B)= 1/2.$$
Then we have $P(RGB)=1/4$, which is not equal to $P(R)P(G)P(B) = 1/8.$
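The pyramid computation, done exactly (a sketch with `fractions`; the face encoding is just one way to write it):

```python
from fractions import Fraction

faces = [{"R"}, {"G"}, {"B"}, {"R", "G", "B"}]  # colors on the four faces
p = Fraction(1, 4)                              # each face equally likely

def prob(colors):
    """Probability that all the given colors appear on the landed face."""
    return sum(p for f in faces if colors <= f)

assert prob({"R"}) == prob({"G"}) == prob({"B"}) == Fraction(1, 2)
# Any two colors are independent...
assert prob({"R", "G"}) == prob({"R"}) * prob({"G"})
# ...but all three together are not:
assert prob({"R", "G", "B"}) == Fraction(1, 4) != Fraction(1, 8)
print("pairwise independent, not mutually independent")
```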
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/581649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 1
}
|
Is the number 100k+11 ever a square? Is the number 100k+11 ever a square?
Obviously all such numbers end in 11. Can a square end with an 11?
|
Ending in $11$ means $n^2 \bmod 100 = 11$, which in particular implies ending in $1$: $n^2 \bmod 10 = 1$.
Now $n^2 \bmod 100 = (n \bmod 100)^2 \bmod 100$, and $n^2 \bmod 10 = 1$ holds iff $n \bmod 10 \in \{1, 9\}$.
Ergo you just have to check the squares of $1, 9, 11, 19, 21, 29, \ldots, 91, 99$ and see that none of them ends with $11$.
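The check is two lines in Python, since $n^2 \bmod 100$ depends only on $n \bmod 100$:

```python
# Squares modulo 100: checking n = 0..99 covers every integer.
square_endings = {n * n % 100 for n in range(100)}
print(11 in square_endings)                              # False
print(sorted(e for e in square_endings if e % 10 == 1))  # endings in digit 1
```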
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/581726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 7,
"answer_id": 6
}
|
How to determine the dual space of se(2)? In an article there are the following sentences:
The Euclidean group $SE(2)=\left\{\left[\begin{array}{cc}1 & 0\\v & R\end{array}\right]:v\in \mathbf{R}^{2\times1}\text{ and }R\in SO(2)\right\}$ is a real three-dimensional connected matrix Lie group, and its associated Lie algebra is given by
$$se(2)=\left\{\left[\begin{array}{ccc}0 & 0 & 0\\x_1 & 0 & -x_3\\x_2 & x_3 & 0\end{array}\right]:x_1,x_2,x_3\in \mathbf{R}\right\}.$$
The author identifies the dual space $se(2)^*$ with matrices of the form
$$\left[\begin{array}{ccc}0 & p_1 & p_2\\0 & 0 & \frac{1}{2}p_3\\0 & -\frac{1}{2}p_3 & 0\end{array}\right].$$
My question is: how did he "identify" this?
|
On Mathoverflow.net, I found a description of the duality pairing between a Lie Algebra and its dual, namely $\langle X, \alpha \rangle = trace(X\alpha)$ where $X$ is in the algebra and $\alpha$ is in the dual.
But before going into that, let's talk about dual spaces a little. The dual of a vector space $V$ is the set of all linear maps from $V$ to $\mathbb R$, with addition being "addition of functions" and scalar multiplication being "multiply the function by a constant." Page 16 of http://www.tufts.edu/~fgonzale/lie.algebras.book.pdf gives a rather nice and concise description of the dual of a map, which amounts to this: if $T: V \rightarrow W$ is a linear transformation whose matrix with respect to some bases of $V$ and $W$ is $A$, then you can build $T^{*} : W^{*} \rightarrow V^{*}$, and build the matrix $B$ for that with respect to the dual bases of $V^{*}$ and $W^{*}$. Turns out that $B$ is just the transpose of $A$. (If $v_1, \ldots , v_n$ is a basis for $V$, the "dual basis" is $\phi_1, \phi_2, \ldots \phi_n$, where $\phi_i(v_j)$ is zero unless $i = j$, in which case it's $1$.) End of dual space discussion.
So you've got $se(2)$, consisting of a bunch of matrices, which can be regarded as representing linear maps $\mathbb R^3 \rightarrow \mathbb R^3$ with respect to the standard basis. The duals of these maps, expressed with respect to the standard dual basis, consist of the transposes of these matrices. These transposes are matrices of the form
$$
\left[\begin{array}{ccc}0 & a_1 & a_2\\0 & 0 & a_3\\0 & -a_3 & 0\end{array}\right]
$$
That's almost what your author wrote, except that there's the factor of $1/2$. Let's see if we can make sense of that. That's where the "trace pairing" above comes in.
If you look at matrices in $se(2)$ as you've described them above, and put in $x_1 = 1, x_2 = x_3 = 0$ to get a matrix $X_1$, you find that putting in $p_1 = 1, p_2 = p_3 = 0$ gives you a matrix $\alpha_1$ with $tr(X_1 \alpha_1) = 1$. You can similarly build $X_2, X_3$ and $\alpha_2$ and $\alpha_3$, with the property that $tr(X_i \alpha_j) = 0$ unless $i = j$, and $tr(X_i \alpha_i) = 1$, i.e., the $\alpha_j$s form a dual basis to the $X_i$s.
Once you're given the $X_i$s (as the author provided) and you know that the pairing is given by "trace-of-the-product", finding the dual basis amounts to finding some vectors (the columns of the $\alpha_j$s) that are perpendicular to certain rows of certain $X_i$s so that the traces come out right, and making sure that the matrices you find are actually elements of the dual algebra (whose definition I don't have).
Let me go ahead and make this explicit. A basis for the Lie Algebra is
$$
X_1 = \left[\begin{array}{ccc}0 & 0 & 0\\1 & 0 & 0\\0 & 0 & 0\end{array}\right]\\
X_2 = \left[\begin{array}{ccc}0 & 0 & 0\\0 & 0 & 0\\1 & 0 & 0\end{array}\right]\\
X_3 = \left[\begin{array}{ccc}0 & 0 & 0\\0 & 0 & -1\\0 & 1 & 0\end{array}\right]
$$
To find $\alpha_1$, you're looking for a matrix, of the form I described above, whose pairings with $X_2$ and $X_3$ both yield $0$. So let's write
$$
\alpha_1 = \left[\begin{array}{ccc}0 & b & c\\ 0 & 0& f\\0 &-f &0\end{array}\right]\\
$$
and compute the entries of $X_2 \alpha_1$:
$$
X_2\alpha_1 = \left[\begin{array}{ccc}0& 0 & 0\\0&0& 0\\0 & b &c\end{array}\right]\\
$$
The trace of this is just $c$, so $c = 0$. Similarly, we find that $X_3 \alpha_1$ is
$$
X_3\alpha_1 = \left[\begin{array}{ccc}0& 0 & 0\\0&f& 0\\0 & 0 &f\end{array}\right]\\
$$
whose trace is $2f$, from which we conclude that $f = 0$.
Finally, look at $X_1 \alpha_1$. That's
$$
X_1\alpha_1 = \left[\begin{array}{ccc}0& 0 & 0\\0&b& c\\0 & 0 &0\end{array}\right]\\
$$
whose trace is supposed to be $1$, so $b = 1$. So $\alpha_1$ must be
$$
\alpha_1 = \left[\begin{array}{ccc}0& 1 & 0\\0&0& 0\\0 &0 &0\end{array}\right]\\
$$
Essentially the same analysis shows that $\alpha_2$ must be
$$
\alpha_2 = \left[\begin{array}{ccc}0& 0 & 1\\0&0& 0\\0 &0 &0\end{array}\right]\\
$$
and $\alpha_3$ must be (because this time, the trace, $2f$, must equal 1)
$$
\alpha_3 = \left[\begin{array}{ccc}0& 0 & 0\\0&0& \frac{1}{2}\\0 &-\frac{1}{2} &0\end{array}\right]\\
$$
Taking the linear combination $p_1 \alpha_1 + p_2 \alpha_2 + p_3 \alpha_3$ gives the expression that your author wrote.
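One can verify the dual-basis property $\operatorname{tr}(X_i\alpha_j)=\delta_{ij}$ directly; a plain-Python sketch over the matrices written above:

```python
def matmul(A, B):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tr(A):
    return sum(A[i][i] for i in range(3))

X = [  # basis X_1, X_2, X_3 of se(2) from the answer
    [[0, 0, 0], [1, 0, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 0, 0], [1, 0, 0]],
    [[0, 0, 0], [0, 0, -1], [0, 1, 0]],
]
alpha = [  # dual basis alpha_1, alpha_2, alpha_3
    [[0, 1, 0], [0, 0, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 0, 0.5], [0, -0.5, 0]],
]
for i in range(3):
    for j in range(3):
        assert tr(matmul(X[i], alpha[j])) == (1 if i == j else 0)
print("dual basis verified")
```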
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/581929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Distribution of function of two random variables Let $X$ be the number on a die roll, between 1 and 6. Let $Y$ be a random number which is uniformly distributed on $[0,1]$, independent of $X$. Let $Z = 10X + 10Y$.
What is the distribution of $Z$?
|
Hint:
$$
F_Z(x) = P(Z < x) = P(X + Y < x/10)
$$
Work it out by cases from here based on the potential values of $x$. For instance, if $x < 10$ then $x/10 < 1$, so $F_Z(x) = 0$ as $X \geq 1$. Another sample case: if $10 \leq x < 20$, then the value of $X$ in the right hand expression must be $1$ (in order for $X + Y < x/10$ to hold), and if $20 \leq x < 30$ then the value of $X$ must be either $1$ or $2$, each of which it takes with equal probability.
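A Monte Carlo sanity check of that case analysis (a sketch; the test points $x=15$ and $x=25$ are arbitrary):

```python
import random

random.seed(0)
N = 200_000
# Z = 10X + 10Y with X a die roll and Y uniform on [0, 1]:
samples = [10 * random.randint(1, 6) + 10 * random.random() for _ in range(N)]

def F_emp(x):
    """Empirical estimate of P(Z < x)."""
    return sum(s < x for s in samples) / N

# From the cases: F_Z(15) = P(X=1)P(Y<1/2) = 1/12, F_Z(25) = 1/6 + 1/12 = 1/4.
print(F_emp(15), F_emp(25))
```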
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
CDF of a sum of continuous and discrete dependent random variables Let $\psi_1$ be a Normal random variable with mean $\mu_1$ and standard deviation $\sigma_1$. Let $\xi$ be defined as
$$
\xi=c\,\mathbb{1}_{\left\{\psi_2+\psi_1\leq 0\right\}},
$$
where $\mathbb{1}$ is the indicator function, and $\psi_2$ a Normal random variable with mean $\mu_2$ and standard deviation $\sigma_2$. Thus $\xi$ is a discrete random variable that can be either $c$ or $0$. The problem is to compute the following CDF:
$$
F\left(\alpha\right)=\mathbb{P}\left[\psi_2+\xi\leq \alpha\right].
$$
Since the variable $\xi$ is discrete I cannot use
$$
\mathbb{P}\left[X+Y\leq \alpha\right] = \int_{-\infty}^{\infty}\int_{-\infty}^{v=\alpha-u}f_{X,Y}\left(u,v\right)\,du\,dv = \int_{-\infty}^{\infty}\int_{-\infty}^{v=\alpha-u} f_{Y\mid X}\left(v\mid u\right)\,f_{X}\left(u\right)\,dv\,du
$$
|
Recall that $\{ \psi_2 + \xi \leqslant \alpha \} = \{\psi_2 + c [ \psi_2+\psi_1 \leqslant 0] \leqslant \alpha \} = \{\psi_1 \leqslant -\psi_2, \psi_2 + c \leqslant \alpha \} \lor \{\psi_1 > -\psi_2, \psi_2 \leqslant \alpha \} $
and the latter two events are disjoint. Hence
$$ \begin{eqnarray}
F(\alpha) &=& \Pr\left(\psi_2 + \xi \leqslant \alpha\right) = \Pr\left(\psi_1 \leqslant -\psi_2, \psi_2 + c \leqslant \alpha\right) + \Pr\left(\psi_1 > -\psi_2, \psi_2 \leqslant \alpha\right) \\
&=& \Pr\left( \psi_2 \leqslant \min(\alpha-c, -\psi_1) \right) + \Pr\left(-\psi_1 < \psi_2 \leqslant \alpha\right) \\
&=& \mathbb{E}\left( \Phi\left(\frac{\min(\alpha-c, -\psi_1)-\mu_2}{\sigma_2}\right) \right) + \mathbb{E}\left( \Phi\left( \frac{\alpha-\mu_2}{\sigma_2}\right) - \Phi\left( \frac{-\psi_1-\mu_2}{\sigma_2} \right)\right) \\
&=& \Phi\left( \frac{\alpha-\mu_2}{\sigma_2}\right) + \mathbb{E}\left( \Phi\left(\frac{\min(\alpha-c, -\psi_1)-\mu_2}{\sigma_2}\right) - \Phi\left( \frac{-\psi_1-\mu_2}{\sigma_2} \right) \right) \\
&=& \Phi\left( \frac{\alpha-\mu_2}{\sigma_2}\right) + \mathbb{E}\left( \Phi\left(\frac{\min(\alpha-c, -\psi_1)-\mu_2}{\sigma_2}\right) - \Phi\left( \frac{-\psi_1-\mu_2}{\sigma_2} \right) \mid \alpha-c < -\psi_1 \right) \\
&=& \Phi\left( \frac{\alpha-\mu_2}{\sigma_2}\right) + \mathbb{E}\left( \Phi\left(\frac{\alpha-c-\mu_2}{\sigma_2}\right) - \Phi\left( \frac{-\psi_1-\mu_2}{\sigma_2} \right) \mid \alpha-c < -\psi_1 \right) \\
&=& \Phi\left( \frac{\alpha-\mu_2}{\sigma_2}\right) + \Phi\left(\frac{\alpha-c-\mu_2}{\sigma_2}\right) \Pr\left(\psi_1 < c-\alpha\right) - \mathbb{E}\left(\Phi\left( \frac{-\psi_1-\mu_2}{\sigma_2} \right) \mid \alpha-c < -\psi_1 \right) \\
&=& \Phi\left( \frac{\alpha-\mu_2}{\sigma_2}\right) + \Phi\left(\frac{\alpha-c-\mu_2}{\sigma_2}\right) \Phi\left(\frac{ c-\alpha - \mu_1}{\sigma_1}\right) - \mathbb{E}\left(\Phi\left( \frac{-\psi_1-\mu_2}{\sigma_2} \right) \mid \alpha-c < -\psi_1 \right)
\end{eqnarray}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
For fractional ideals, why does $AB=R$ imply $B=A^{-1}$? Let $A,B$ be two fractional ideals of $R$ (an integral domain). Could anyone tell me why $AB=R$ implies $B=A^{-1}$?
|
The inverse of an ideal $I\subseteq R$ is defined as $$I^{-1}:=\{x\in k(R) : xI\subseteq R\}$$ Then if $R$ is a Dedekind domain we have $II^{-1}=R$. Now in a Dedekind domain the set of fractional ideals forms a group under the following operation: $$(r^{-1}I)\cdot (s^{-1} J):=(rs)^{-1}IJ$$ where $r,s\in R$ and $I,J$ are ideals (note that any fractional ideal can be written as $r^{-1}I$). The inverse in this group is given by $$(r^{-1}I)^{-1}=rI^{-1}$$.
So you cannot define the inverse of a fractional ideal in an arbitrary domain; it only makes sense in a Dedekind domain, and your claim then follows from the definition of "inverse".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
open and closed and bounded interval Hi people,
the interval $[0,1]$ is a closed interval.
Can one say that the interval $[0,1]$ is bounded or not?
If it is, how does one show that this interval is bounded (a proof)?
Or is knowing it is an interval not enough to decide whether it is bounded; does boundedness only apply to a function $f$?
Thank you.
|
$[0,1]$ is bounded in $\mathbb R$ because there is a point $0\in\mathbb R$ whose distance to any point of this interval is bounded by $1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Irreducibility of a particular polynomial I've got this problem for my homework: find out whether the polynomial
$$f(x)=x(x-1)(x-2)(x-3)(x-4) - a$$
is irreducible over the rationals, where $a$ is integer which is congruent to $3$ modulo $5$.
It is easy to verify that $f(x)$ has no integer zeros (and no rational zeros either) and, because $f(x)$ is primitive, it is irreducible over $\mathbb{Z}$ iff it is irreducible over $\mathbb{Q}$. It is also clear to me that, since $f(x)$ has no integer zeros, the only way to factorize $f(x)$ over $\mathbb{Z}$ is $(x^3 + bx^2 + cx +d)(x^2 + ex +f)$, i.e. as a product of one irreducible cubic polynomial and one irreducible quadratic polynomial, both primitive. I've got a pretty ugly system of $5$ equations with $5$ variables. So I decided to put the whole story in $(\mathbb{Z}/5\mathbb{Z})[x]$. Then I've got the polynomial $g(x) = x^5 + 4x +2$. Since I firmly believe that $f(x)$ is irreducible over $\mathbb{Z}$, it suffices to check whether $g(x)$ is irreducible over $\mathbb{Z}/5\mathbb{Z}$. Is there any other way than brute force?
|
Observation: In ${\mathbb F}_5$ we have $x(x-1)(x-2)(x-3)(x-4) = x^5 - x$.
This follows from the fact that over a finite field $F$ we have $x^{|F|} - x = \prod_{g\in F} (x-g)$ which you can easily check by noticing that all elements are roots and that a polynomial of degree $|F|$ can't have any more roots.
Claim: More generally, we will prove that given a prime $p$ and $t\in{\mathbb F}_p\setminus \{0\}$ we have that $f(x)= \prod_{k\in{\mathbb F}_p} (x-k) - t = x^p - x - t \in{\mathbb F_p}[x]$ is irreducible over ${\mathbb F}_p$.
Proof. Let $\alpha$ be a root of $f$ over some extension field.
Note that more generally over a finite field ${\mathbb F}_q$ we have that, the monic irreducible polynomial $m_\alpha (x)$ for $\alpha$ over ${\mathbb F}_q$, satisfies
$$ m_\alpha (x) = \prod_{k=0}^{n-1} (x-\alpha^{q^k}) $$
where $n$ is the smallest positive integer such that $\alpha^{q^n} = \alpha$.
This is a well-known result in finite fields. Without getting into too many details, this follows from the fact that $\text{Gal}\left({\mathbb F}_{q^n}/{\mathbb F}_q\right)$ is cyclic, generated by the Frobenius map $x\mapsto x^q$. Thus the roots above are actually the Galois conjugates of $\alpha$, and the result follows from a well-known theorem in Galois Theory.
We need only check now that in our case, with $p=q$, and $f(\alpha)=0$, our $n$ must be $p$. Indeed, since $f(\alpha) = 0$, we have
$$ \alpha^{p^n} = (\alpha^p)^{p^{n-1}} = \left( \alpha + t \right)^{p^{n-1}} = \alpha^{p^{n-1}} + t^{p^{n-1}} =\alpha^{p^{n-1}} + t = \ldots = \alpha + n \times t $$
Since $t\neq 0$, the smallest such $n$ is clearly $p$, since otherwise $n\times t \neq 0$. Thus the number of linear factors (Galois conjugates) in $m_\alpha(x)$ is $p$ and we get that our polynomial was irreducible. $\square$
Conclusion: This proves that, given a prime $p$, the polynomial $g_p(x) = x\cdot (x-1)\ldots (x-p+1) - a \in {\mathbb Z}[x]$, with $p\not | a$, must be irreducible over ${\mathbb Q}$.
Indeed in ${\mathbb F}_p$ we have that $g_p(x)$ reduces to $f(x) = x^p - x - \bar{a}$ and $\bar{a}=t\neq 0$.
We may also substitute $k$ in the factors $(x-k)$ in the product by any element $\equiv k \pmod p$.
A link: There has recently been a related post, which is more general, and incredibly interesting.
Final remark: When it comes to Finite Fields, the above result on irreducible polynomials is pretty often a good way to prove that a given polynomial is irreducible.
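For the concrete polynomial in the question, the conclusion can also be spot-checked by brute force (helper names below are mine): a quintic over $\mathbb F_5$ is irreducible iff it has no monic factor of degree $1$ or $2$.

```python
# Brute-force irreducibility check of g(x) = x^5 + 4x + 2 over F_5.
# A degree-5 polynomial is irreducible iff it has no factor of degree 1 or 2.
P = 5
g = [2, 4, 0, 0, 0, 1]  # coefficients, lowest degree first

def polymod(a, b, p):
    """Remainder of a modulo monic b; coefficient lists, lowest degree first."""
    a = [c % p for c in a]
    while len(a) >= len(b):
        coef = a[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[shift + i] = (a[shift + i] - coef * c) % p
        a.pop()                      # leading term is now zero
        while a and a[-1] == 0:
            a.pop()
    return a                         # [] means b divides a

candidates = ([[c0, 1] for c0 in range(P)]
              + [[c0, c1, 1] for c0 in range(P) for c1 in range(P)])
irreducible = all(polymod(g, b, P) != [] for b in candidates)
print(irreducible)  # True, as the Galois-theoretic argument predicts
```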
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
$\varepsilon$-$\delta$ proof of $\sqrt{x+1}$ need to prove that $ \lim_{x\rightarrow 0 } \sqrt{1+x} = 1 $
proof of that is:
need to find a delta such that $ 0 < |x-1| < \delta \Rightarrow 1-\epsilon < \sqrt{x+1} < \epsilon + 1 $ if we choose $ \delta = (\epsilon + 1)^2 -2 $ and consider $ |x-1| < \delta = (\epsilon + 1)^2 - 2 $
$ 4 - (\epsilon + 1)^2 < x +1 < (\epsilon + 1)^2 $
$ \sqrt{4-(\epsilon + 1)^2} -1 < \sqrt{x+1} -1 < \epsilon$ but I need to show that $\sqrt{4-(\epsilon + 1)^2} -1 > -\epsilon $ before this proof is complete...
any help on how to finish the proof?
|
So it remains to show that $\sqrt{4-(\epsilon + 1)^2} -1 > -\epsilon $.
Note that for the $\epsilon-\delta$ proof is enough to show that $|x-1|\lt\delta\Rightarrow\ldots$ only for small epsilons. In your case we may assume therefore that $\epsilon<\frac12$. In that case we have that indeed $\sqrt{4-(\epsilon + 1)^2} -1 > -\epsilon $!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Of Face and Circuit Rank The circuit rank of a graph $G$ is given by
$$r = m - n + c,$$
where $m$ is the number of edges in $G$, $n$ is the number of vertices, and $c$ is the number of connected components.
Doesn't Euler's formula say the same?
$$
\#\text{faces} = \#\text{edges}-\#\text{vertices} +\#\text{components} \;\;\;(+\chi),
$$
where $\chi$ is Euler's characteristic of the surface where the graph lives on.
So the circuit rank is just the number of faces, right?
I just wonder since I can't find any face on the Wiki page...
|
We have
$$
\#\text{circuit rank} = \#\text{edges} - \#\text{vertices} + \#\text{components}
$$
and, for a planar graph (or a graph on the surface of a sphere),
$$
\#\text{vertices} - \#\text{edges} + \#\text{faces} = \#\text{components} + 1 \text{.}
$$
This leads to
$$
\#\text{circuit rank} = \#\text{faces} - 1 \text{.}
$$
The number of faces includes the outer region. If you do not include it then the circuit rank does equal the number of faces.
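A concrete sanity check (the example graph is my own choice): the $3$-cube graph has $8$ vertices, $12$ edges, one component and $6$ faces when drawn on a sphere, so its circuit rank should be $6-1=5$.

```python
# Circuit rank r = m - n + c for the 3-cube graph, using a small
# union-find to count connected components.
from itertools import combinations

vertices = list(range(8))                # vertices labelled by 3-bit strings
edges = [(u, v) for u, v in combinations(vertices, 2)
         if bin(u ^ v).count("1") == 1]  # adjacent iff labels differ in 1 bit

parent = list(vertices)
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]    # path halving
        x = parent[x]
    return x

for u, v in edges:
    parent[find(u)] = find(v)

c = len({find(v) for v in vertices})
rank = len(edges) - len(vertices) + c
print(len(edges), len(vertices), c, rank)  # 12 8 1 5 = #faces - 1
```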
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove $13|19^n-6^n$ by congruences I am trying to prove $13|19^n-6^n$. With induction it's not so bad, but by congruences it's quite difficult to know how to get started.
Any hints?
|
Because $x^n - y^n$ is divisible by $x-y$ as
$$x^n - y^n = (x-y)\sum_{i=0}^{n-1}x^iy^{n-1-i}$$
Substitute $x=19$ and $y = 6$.
$$\begin{align*}
x^n-y^n =& x^n\left[1-\left(\frac yx\right)^n\right]\\
=& x^n \left[1+\left(\frac yx\right)+\left(\frac yx\right)^2+\cdots+\left(\frac yx\right)^{n-1}\right]\left[1-\left(\frac yx\right)\right]\\
=& \left(x^{n-1}y^0 + x^{n-2}y + x^{n-3}y^2 + \cdots + x^0y^{n-1}\right) (x-y)
\end{align*}$$
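In fact, working directly with congruences is even shorter: since $19\equiv 6\pmod{13}$, we get $19^n\equiv 6^n\pmod{13}$ for every $n\ge1$. A quick computational spot-check of the divisibility:

```python
# Verify 13 | 19^n - 6^n for a range of n; this works because 19 ≡ 6 (mod 13).
remainders = [(19**n - 6**n) % 13 for n in range(1, 40)]
print(all(r == 0 for r in remainders))  # True
```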
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
}
|
Negative Base to non-integer power I'm looking to consistently solve the m^n case, including conditions where m is negative and n is non-integer. I'd like to, additionally, catch the error when it isn't possible.
Some examples to think about.
(-.5)^(.2) which is, effectively, the fifth root of (-1/2) which has a solution. If I were to simplify the numbers as something like (-1/32)^(1/5) then the answer is clearly -1/2. However, pow(m,n) will not solve that correctly.
Let's be explicit and compare that to the condition like (-.5)^.125, which is the eighth root, since the power is 1/8, and therefore the code would need to throw/catch that as isComplexNumber error.
The thing is, I know I can make an attempt to resolve the fractional equivalency of the number, check the base, and if it's an odd then negative the result of pow(abs(m),n), but due to irregularities in doubles, it's hard sometimes to resolve that decimal exponent into a fraction. What's more that's just inelegant coding.
The other potential solution path I see is to create a for-loop which brute-force solves for the root using guess and check (with some marginal inherent inaccuracy), which is even less attractive.
Thoughts? This has been solved by TI, Mathematica, etc. Am I missing something basic?
|
The underlying issue here is that (assuming you want to stay within the real numbers) when $c<0$, the function $c^x$ is undefined for most values of $x$. Specifically, it's undefined unless $x$ is a rational number whose denominator is odd. There is no continuous/differentiable function underlying the places where it is defined.
Therefore, there is no possible guess-and-check algorithm that gradually becomes more accurate. First, guess-and-check algorithms require an underlying continuous function. Second, the value you're seeking might simply not be defined.
So the need to determine whether the exponent is a fraction with odd denominator, which in other contexts might be considered inelegant, here is simply a necessary step in the problem you're trying to solve. (And really, people shouldn't be inputting $c^x$ when $c<0$ and $x$ is a decimal ... they're just asking for trouble, for all the reasons mentioned above.)
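A possible implementation sketch (the function name and the `limit_denominator` tolerance are my own choices, not a standard API): treat the float exponent as a nearby fraction and allow a negative base only when that fraction, in lowest terms, has an odd denominator.

```python
from fractions import Fraction
import math

def real_pow(m, n, max_den=10**6):
    """Real-valued m**n; for m < 0, defined only when n ~ p/q with q odd."""
    if m >= 0:
        return math.pow(m, n)
    frac = Fraction(n).limit_denominator(max_den)
    if frac.denominator % 2 == 0:
        raise ValueError("even root of a negative number is complex")
    sign = -1 if frac.numerator % 2 else 1   # odd numerator keeps the sign
    return sign * math.pow(-m, n)

print(real_pow(-1 / 32, 0.2))   # close to -0.5, the real fifth root
```

Here `real_pow(-0.5, 0.125)` raises `ValueError` (the exponent resolves to $1/8$, an even denominator), matching the eighth-root case above. The `max_den` bound controls how aggressively a decimal is read as a fraction — some such choice is unavoidable, which is exactly the inelegance the question mentions.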
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
An inequality related to the number of binary strings with no fixed substring Let $f \in \{0,1\}^k$ and let $S_n(f)$ be the number of strings from $\{0,1\}^n$ that do not contain $f$ as a substring. As an interesting example $S_n(11) = f_{n+2}$ where $f_n$ is the $n$'th Fibonacci number.
I would like to show that if $|f| > |f'|$ then $$S_n(f) > S_n(f')$$ where of course $n \geq \rm{max}(|f|,|f'|).$
I am trying to prove the claim by induction and get stuck in the inductive step. Let us suppose that for all $\mathrm{max}(|f|,|f'|) \leq n < k$ we have $S_n(f) > S_n(f')$ and let $n = k.$ Without loss of generality we may assume that both $f$ and $f'$ start with $0.$
Now let $S^{0}_n(f), S^{1}_n(f)$ be the number of binary strings of length $n$ that start with 0 or 1 respectively and do not contain $f.$ Then $S_n(f) = S^{1}_n(f) + S^{0}_n(f)$ and since $f$ starts with zero $S^{1}_n(f) = S_{n-1}(f).$ Hence we can apply induction hypothesis on $S_{n-1}(f)$ and $S_{n-1}(f')$ yet it remains to show that $S^{0}_n(f) \geq S^{0}_n(f')$ which does not appear to be any easier than the original inequality.
Hence I would like to ask:
1. Is there a way to finish this inductive argument properly?
2. Is there any other way to show this claim, perhaps by using more advanced tools?
|
Let $Q_n(f)$ be the number of bit strings of length $n$ that contain $f$ as a substring. Also let $B_n=2^n$ be the number of bit strings at all of length $n$. Then clearly
$$
B_n=Q_n(f)+S_n(f)
$$
which just states that bit strings of length $n$ either does or does not contain $f$ as a substring. Furthermore, it is easy to see that
$$
S_n(f)>S_n(f')\iff Q_n(f)<Q_n(f')
$$
Now assume $|f|=k$ and let us see what happens to $Q_n(f)$ for $n\geq k$. Quite obviously
$$
Q_k(f)=1
$$
So what about $Q_{k+1}(f)$? We have four possible candidates for different bit strings of length $k+1$ containing $f$ as a substring, namely $0f,1f,f0,f1$. If $f$ consists of repeating digits, say $f=000...0$ then $0f=f0$ whereas the other two $1f,f1$ are different, so then $Q_{k+1}(f)=3$. If $f$ consists of mixed digits then we are guaranteed that all four are different. Then $Q_{k+1}(f)=4$.
Next case to consider is $Q_{k+2}(f)$: We have the twelve candidates
$$
\begin{align}
00f,&&01f,&&10f,&&11f\\
0f0,&&0f1,&&1f0,&&1f1\\
f00,&&f01,&&f10,&&f11
\end{align}
$$
If again $f=000...0$ then $f$ commutes with zeros so that $00f=0f0=f00$ and $10f=1f0$ and $f01=0f1$. So in this case we only have eight distinct strings so $Q_{k+2}(f)=8$. Next, if $f=0101...01$ (even length) then $f$ commutes with $01$ so $f01=01f$ hence $Q_{k+2}(f)=11$. On the other hand if $f$ is symmetrical and $f=1010...0101$ (odd length) then $10f=f01$ and again $Q_{k+2}(f)=11$. In a way you can say the above examples are $f$'s consisting of repeated length $2$ substrings. So if $f$ cannot be constructed by repeating length-$2$-strings, all $12$ candidates above are different. Then $Q_{k+2}(f)=12$.
So far we have found that for $|f|=k$ some values of $Q_n(f)$ are:
$$
\begin{align}
Q_{k}(f)&=1\\
Q_{k+1}(f)&\in\{3,4\}\\
Q_{k+2}(f)&\in\{8,11,12\}
\end{align}
$$
In general for $|f|=k$ and $Q_{k+r}(f)$, I suspect we will have $2^r(r+1)$ candidates since $f$ can be placed in $r+1$ places in each of the $2^r$ possible bit strings of length $r$, and the case where the fewest of these produce different strings is when $f=00...0$ or $f=11...1$. So suppose $f=00...0$. Then
$$
00...0f=00...f0=0...f00=...=f00...0
$$
are $r+1$ identical candidates thus removing $r$ candidates from the list. Similarly
$$
100..0f=100...f0=10...f00=...=1f00...0\\
\mbox{and}\\
00...0f1=00...f01=0...f001=...=f00...01
$$
are $r$ identical candidates and another $r$ identical candidates thus removing $2(r-1)$ candidates from the list.
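The Fibonacci example from the question, and the monotonicity claim itself, are easy to spot-check by brute force (a finite check, not a proof):

```python
# Brute-force S_n(f): count length-n bit strings avoiding f as a substring.
from itertools import product

def S(n, f):
    return sum(1 for bits in product("01", repeat=n)
               if f not in "".join(bits))

fib = [0, 1]
while len(fib) < 16:
    fib.append(fib[-1] + fib[-2])

check_fib = ([S(n, "11") for n in range(1, 11)]
             == [fib[n + 2] for n in range(1, 11)])
print(check_fib)                 # True: S_n(11) is the (n+2)nd Fibonacci number
print(S(6, "101") > S(6, "11"))  # True: longer forbidden word, more strings
```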
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Show that the multiplicative group $\mathbb{Z}_{10}^{\times}$ is isomorphic to the additive group $\mathbb{Z}_4$. Show that the multiplicative group $\mathbb{Z}_{10}^{\times}$ is isomorphic to the additive group $\mathbb{Z}_4$.
I'm completely lost with this one.
|
Write down all the elements of $\mathbb{Z}^{\times}_{10}$ explicitly, and find a generator from its Cayley table or by explicit calculation. Now map this generator to $1$ in $\mathbb{Z}_4$.
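Concretely, one possible sketch of this recipe (taking $3$ as the generator — $7$ would also work):

```python
# The units mod 10 form {1, 3, 7, 9}; 3 generates them, so 3^k -> k mod 4
# is an isomorphism onto (Z_4, +).
from math import gcd

units = sorted(a for a in range(10) if gcd(a, 10) == 1)
powers = sorted(pow(3, k, 10) for k in range(4))
print(units, powers)  # [1, 3, 7, 9] [1, 3, 7, 9]
```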
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Integral equations that can be solved elementary
Solve the following integral equations:
$$
\int_0^xu(y)\, dy=\frac{1}{3}xu(x) \tag 1 \label 1
$$
and
$$
\int_0^xe^{-x}u(y)\, dy=e^{-x}+x-1. \tag 2 \label 2
$$
Concerning $\eqref 1$, I read that it can be solved by differentiation. Differentiation on both sides gives
$$
u(x)=\frac{1}{3}(u(x)+xu'(x))\Leftrightarrow u'(x)=\frac{2}{x}u(x)
$$
and then by separation of this ODE the solution is $u(x)=Cx^2$.
How can I solve $\eqref 2$?
|
Hint:
$$\int_0^xe^{-x}u(y)\, dy=e^{-x}+x-1$$ Since $e^{-x}$ is constant with respect to the integration variable $y$, it can be pulled out of the integral: $$ e^{-x}\int_0^xu(y)\, dy=e^{-x}+x-1$$ and therefore $$\int_0^xu(y)\, dy=1+(x-1)e^{x}$$
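Following the hint to its end (my own continuation): differentiating both sides gives $u(x)=\frac{d}{dx}\left(1+(x-1)e^{x}\right)=xe^{x}$. A numeric check that this candidate satisfies the original equation:

```python
# Verify numerically (Simpson's rule) that u(y) = y e^y satisfies
# e^{-x} * integral_0^x u(y) dy = e^{-x} + x - 1.
import math

def lhs(x, steps=2000):                     # steps must be even
    h = x / steps
    s = x * math.exp(x)                     # endpoint term f(x); f(0) = 0
    s += sum((4 if i % 2 else 2) * (i * h) * math.exp(i * h)
             for i in range(1, steps))
    return math.exp(-x) * s * h / 3

for x in (0.5, 1.0, 2.0):
    print(round(lhs(x), 8), round(math.exp(-x) + x - 1, 8))  # pairs agree
```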
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/582977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $x^3+\frac{1}{x^3}=18\sqrt{3}$ then to prove $x=\sqrt{3}+\sqrt{2}$
If $x^3+\frac{1}{x^3}=18\sqrt{3}$ then we have to prove $x=\sqrt{3}+\sqrt{2}$
The question would have been simple if it asked us to prove the other way round.
We can multiply by $x^3$ and solve the quadratic to get $x^3$ but that would be unnecessarily complicated.Also, as $x^3$ has 2 solutions,I can't see how x can have only 1 value. But the problem seems to claim that x can take 1 value only.Nevertheless,is there any way to get the values of x without resorting to unnecessarily complicated means?
NOTE: This problem is from a textbook of mine.
|
$$t+\frac1t=18\sqrt3\iff t^2-(2\cdot9\sqrt3)t+1=0\iff t_{1,2}=\frac{9\sqrt3\pm\sqrt{(9\sqrt3)^2-1\cdot1}}1=$$
$$=9\sqrt3\pm\sqrt{81\cdot3-1}\quad=\quad9\sqrt3\pm\sqrt{243-1}\quad=9\sqrt3\pm\sqrt{242}\quad=\quad9\sqrt3\pm\sqrt{2\cdot121}=$$
$$=9\sqrt3\pm\sqrt{2\cdot11^2}\quad=\quad9\sqrt3\pm11\sqrt2\quad\iff\quad x_{1,2}^3=9\sqrt3\pm11\sqrt2=(a\sqrt3+b\sqrt2)^3=$$
$$=(a\sqrt3)^3+(b\sqrt2)^3+3(a\sqrt3)^2b\sqrt2+3a\sqrt3(b\sqrt2)^2\ =\ 3a^3\sqrt3+2b^3\sqrt2+9a^2b\sqrt2+6ab^2\sqrt3$$
$$\iff3a^3+6ab^2=9=3+6\quad,\quad2b^3+9a^2b=\pm11=\pm2\pm9\iff a=1,\quad b=\pm1.$$
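A quick numeric check of the conclusion, which also confirms the OP's observation that $x$ is not unique: both $x=\sqrt3+\sqrt2$ and $x=\sqrt3-\sqrt2$ satisfy the equation.

```python
import math

target = 18 * math.sqrt(3)
ok = [abs(x**3 + 1 / x**3 - target) < 1e-9
      for x in (math.sqrt(3) + math.sqrt(2), math.sqrt(3) - math.sqrt(2))]
print(ok)  # [True, True]: both values of x work
```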
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
}
|
The milk sharing problem I found a book with math quizzes. It was my father's when he was young. I encountered a problem with the following quiz. I solved it, but I wonder, is there a faster way to do it? If so, how can I compute the time (polynomial time) that is needed to solve it? Can we build an algorithm?
The problem is this one:
Two brothers have a cow that produces 10 kilos of milk per day. Because they are fair, they share the milk and each one gets 5 kilos.
They only have available three bottles.
$A$ that fits 10 kilos.
$B$ that fits 7 kilos.
$C$ that fits 3 kilos.
How do they share it?
What I did is these steps:
Bottle Sizes
A B C
10 7 3
Moves Bottles What we do:
1st 10 0 0 Fill bottle A.
2nd 7 0 3 Fill bottle C from A.
3rd 7 3 0 Empty C into B.
4th 4 3 3 Refill C from A.
5th 4 6 0 Empty C into B.
6th 1 6 3 Refill C from A.
7th 1 7 2 Top up B from C.
8th 8 0 2 Empty B into A.
9th 8 2 0 Empty C into B.
10th 5 2 3 Refill C from A.
11th 5 5 0 Empty C into B.
|
I've found a better way, in 10 steps. I tried to improve it but it seems to me that it's the option with less steps. Here it goes:
A B C
(10, 0, 0)-(3, 7, 0)-(3, 4, 3)-(6, 4, 0)-(6, 1, 3)-(9, 1, 0)-(9, 0, 1)-(2, 7, 1)-(2, 5, 3)-(5, 5, 0)
In any case, I'm going to draw a tree (this may sound a bit childish) in order to prove it.
;)
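The tree search alluded to above can be automated. Below is a hedged breadth-first-search sketch (state encoding and names are mine) over pouring states $(A,B,C)$ with capacities $(10,7,3)$; since BFS explores states in order of depth, the sequence it reconstructs is a shortest one.

```python
from collections import deque

CAP = (10, 7, 3)

def pours(state):
    """All states reachable by one pour (until source empty or target full)."""
    for i in range(3):
        for j in range(3):
            if i != j and state[i] > 0 and state[j] < CAP[j]:
                amt = min(state[i], CAP[j] - state[j])
                nxt = list(state)
                nxt[i] -= amt
                nxt[j] += amt
                yield tuple(nxt)

start = (10, 0, 0)
prev = {start: None}
queue = deque([start])
goal = None
while queue:
    s = queue.popleft()
    if sorted(s) == [0, 5, 5]:      # milk split 5 + 5
        goal = s
        break
    for t in pours(s):
        if t not in prev:
            prev[t] = s
            queue.append(t)

path = []
while goal is not None:
    path.append(goal)
    goal = prev[goal]
path.reverse()
print(len(path) - 1, "pours:", path)
```

The 9-pour sequence exhibited above guarantees the BFS optimum is at most 9 pours.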
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 6,
"answer_id": 4
}
|
How to evaluate $\lim_{n\to\infty}\{(n+1)^{1/3}-n^{1/3}\}?$ How to calculate $$\lim_{n\to\infty}\{(n+1)^{1/3}-n^{1/3}\}?$$
Of course it's a sequence of positive reals but I can't proceed any further.
|
Hint: multiply by $\frac{(n+1)^{2/3}+(n+1)^{1/3}n^{1/3}+n^{2/3}}{(n+1)^{2/3}+(n+1)^{1/3}n^{1/3}+n^{2/3}}$ to get
$$
(n+1)^{1/3}-n^{1/3}=\frac{(n+1)-n}{(n+1)^{2/3}+(n+1)^{1/3}n^{1/3}+n^{2/3}}
$$
Alternatively, use the definition of the derivative:
$$
\begin{align}
\lim_{n\to\infty}\frac{(n+1)^{1/3}-n^{1/3}}{(n+1)-n}
&=\lim_{n\to\infty}n^{-2/3}\frac{(1+1/n)^{1/3}-1}{(1+1/n)-1}\\
&=\lim_{n\to\infty}n^{-2/3}\lim_{n\to\infty}\frac{(1+1/n)^{1/3}-1}{(1+1/n)-1}\\
&=\lim_{n\to\infty}n^{-2/3}\left.\frac{\mathrm{d}}{\mathrm{d}x}x^{1/3}\right|_{x=1}\\
&=0\cdot\frac13\\[8pt]
&=0
\end{align}
$$
I think the simplest method is to use the Mean Value Theorem
$$
(n+1)^{1/3}-n^{1/3}=\frac{(n+1)^{1/3}-n^{1/3}}{(n+1)-n}=\frac13\eta^{-2/3}
$$
for some $\eta\in(n,n+1)$.
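As a numeric sanity check (a finite sample, not a proof), the difference indeed tends to $0$ at the rate $\frac13\eta^{-2/3}$ predicted by the Mean Value Theorem:

```python
# Compare (n+1)^{1/3} - n^{1/3} with the MVT prediction (1/3) n^{-2/3}.
ratios = []
for n in (10, 1000, 100000):
    diff = (n + 1) ** (1 / 3) - n ** (1 / 3)
    ratios.append(diff * 3 * n ** (2 / 3))
    print(n, diff)
print(ratios)  # increasing toward 1
```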
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
${1\over{n+1}} {2n \choose n} = {2n \choose n} - {2n \choose n-1}$ I need to prove that $${1\over{n+1}} {2n \choose n} = {2n \choose n} - {2n \choose n-1}$$
I started by writing out all the terms using the formula ${n!\over{k!(n-k)!}}$ but I can't make the two sides equal.
Thanks for any help.
|
That should work. Note that $$ \begin {align*} \dbinom {2n}{n-1} &= \dfrac {(2n)!}{(n-1)! (n+1)!} \\&= \dfrac {n}{n+1} \cdot \dfrac {(2n)!}{n!n!} \\&= \dfrac {n}{n+1} \cdot \dbinom {2n}{n}, \end {align*} $$ so we have: $ \dbinom {2n}{n-1} = \left( 1 - \dfrac {1}{n+1} \right) \cdot \dbinom {2n}{n}, $ which is equivalent to what we wanted to show.
It seems I've been beaten by 2 answers . . .
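A quick numeric spot-check of the identity with `math.comb` (a finite check only):

```python
from math import comb

checks = [comb(2 * n, n) // (n + 1) == comb(2 * n, n) - comb(2 * n, n - 1)
          for n in range(1, 25)]
print(all(checks))  # True: both sides are the nth Catalan number
```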
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Finding distance between two parallel 3D lines I can handle non-parallel lines and the minimum distance between them (by using the projection of the line and the normal vector to both direction vectors in the line), however, in parallel lines, I'm not sure on how to start. I was thinking of finding a normal vector to one of the direction vectors (which will be as well normal to the other line because they are parallel), then set up a line by a point in the direction of the normal vector, and then find the points of intersection. After finding the line between the two parallel lines, then we can calculate the distance.
Is this reasoning correct? If it is, is there a way to find normal vectors to a line or any vector instead of guessing which terms give a scalar product of 0? I have encountered this problem as well in directional derivatives and the like.
|
Let $P$ be a variable point of $L_1$ and $P_0$ a fixed point of $L_2$. Consider $$\left|\frac{\mathbf{a}\times\vec{PP_0}}{|\mathbf{a}|}\right|$$ where $\mathbf{a}$ is a direction vector for $L_1$. Since the lines are parallel, writing $\vec{PP_0}=\vec{P_1P_0}+t\mathbf{a}$ shows the cross product kills the $t\mathbf{a}$ part, so this quantity is the same for every choice of $P$: it is the distance directly, with no minimization needed.
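A small numeric sketch of this formula (the lines below are hypothetical examples); it also evaluates the expression at two different choices of $P$ on $L_1$, illustrating that for parallel lines the value does not depend on $P$:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def norm(u):
    return math.sqrt(sum(c * c for c in u))

a = (1, 2, 2)      # common direction of the two parallel lines
p1 = (0, 0, 0)     # a point on L1
p0 = (0, 3, 0)     # a fixed point on L2

def dist_at(p):    # |a x PP0| / |a| evaluated at the point p on L1
    pp0 = tuple(q - c for c, q in zip(p, p0))
    return norm(cross(a, pp0)) / norm(a)

d1 = dist_at(p1)
d2 = dist_at(tuple(c + t for c, t in zip(p1, a)))  # another point on L1
print(d1, d2)      # both sqrt(5): independent of the chosen P
```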
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Square root of nilpotent matrix How could I show that $\forall n \ge 2$ if $A^n=0$ and $A^{n-1} \ne 0$ then $A$ has no square root? That is there is no $B$ such that $B^2=A$. Both matrices are $n \times n$.
Thank you.
|
I found this post while trying to write solutions for my class's homework assignment. I had asked them precisely this question and couldn't figure it out myself. One of my students actually found a very simple solution, so I wanted to share it here:
Suppose that $N = A^2$. Since $N^n = 0$, we have $A^{2n} = 0$. Thus $A$ is nilpotent, so in fact $A^n = 0$. [This is because of the standard fact that if $P$ is any $n \times n$ nilpotent matrix, then $P^n = 0$.] Now observe that $N^{n-1} = (A^2)^{n-1} = A^{2n-2}$. But if $n \geq 2$, then $2n - 2 = n + (n - 2) \geq n$, so $N^{n-1} = 0$. This is a contradiction, so no such $A$ can exist.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
}
|
Proof that $\sum_{k=2}^{\infty} \frac{H_k}{k(k-1)} $ where $H_n$ is the sequence of harmonic numbers converges? How to prove that $$\displaystyle \sum_{k=2}^{\infty} \dfrac{H_k}{k(k-1)} $$ where $H_n$ is the sequence of harmonic numbers converges and that $\dfrac{H_n}{n(n-1)}\to 0 \ $
I have already proven by induction that this equals $\left(2-\dfrac{1}{(n+1)}-\dfrac{H_{n+1}}{n} \right)$ for every $n\ge1$ but am not sure how to use this in solving my problem. Could anyone give me some tips?
|
$$\sum_{k=2}^{\infty} \frac{H_k}{k(k-1)}=\sum_{k=1}^{\infty} \frac{H_{k+1}}{k(k+1)}=-\int_0^1 \ln(1-x)\sum_{k=1}^\infty \frac{x^k}{k}dx$$
$$=\int_0^1\ln^2(1-x)dx=\int_0^1 \ln^2xdx=2$$
Note that we use $\int_0^1 x^k \ln(1-x)dx=-\frac{H_{k+1}}{k+1}$
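A numeric check (assuming nothing beyond the formulas above): the partial sums approach $2$ and agree with the closed form $2-\frac1{n+1}-\frac{H_{n+1}}{n}$ stated in the question.

```python
# Partial sums of the series approach 2 and match the closed form.
N = 5000
H = 1.0                                   # H_1
partial = 0.0
for k in range(2, N + 1):
    H += 1.0 / k                          # H now equals H_k
    partial += H / (k * (k - 1))
closed = 2 - 1 / (N + 1) - (H + 1 / (N + 1)) / N   # H + 1/(N+1) = H_{N+1}
print(abs(partial - closed) < 1e-9, round(partial, 3))  # True, close to 2
```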
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Understanding induction proof with inequalities I'm having a hard time proving inequalities with induction proofs. Is there a pattern involved in proving inequalities when it comes to induction? For example:
Prove ( for any integer $n>4$ ):
$$2^n > n^2 \\ $$
Well, the skipping ahead to the iduction portion, here's what I've got so far:
Assume the statement holds for $n=k$; then for $n=k+1$ we must show $2^{k+1} > (k+1)^2 $.
Starting with the LHS
$$2^{k+1}=2\cdot2^k > 2\cdot k^2$$
And that's where it ends for me. What would be the next step, and what suggestions might you give for future proofs like this? Do we always need to prove that the left-hand side is equal to the other?
|
Working off Eric's answer/approach:
Let P(n) be the statement that $2^n \gt n^2$.
Basis step: (n=5) $2^5\gt 5^2 $ which is true.
Inductive step: We assume the inductive hypothesis that $P(k)$ is true for an arbitrary integer $k\ge5$.
$$ 2^{k} \gt k^2 \text{ (IH)}$$
Our goal is to show that P(k+1) is true. Let's multiply each side by two:
$$ 2*2^{k}>2(k)^2 $$
$$ 2^{k+1}>2k^2 $$
To conclude that $2^{k+1}\gt(k+1)^2$, it suffices to show that $2k^2 \ge (k+1)^2$ when $k\ge5$:
$$ 2k^{2}\ge(k+1)^2 $$
$$ 2k^{2} \ge k^2+2k+1$$
$$k^2\ge2k+1$$
And it is. Therefore $\forall{_{k\ge5}}P(k) \implies P(k+1)$ and $\forall{_{n\ge5}}P(n)$ follows by induction.
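A finite spot-check of the statement and its boundary case (not a substitute for the induction):

```python
ok = all(2 ** n > n ** 2 for n in range(5, 100))
print(ok, 2 ** 4 > 4 ** 2)  # True False: n = 4 is the boundary (16 = 16)
```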
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Ratio of balls in a box A box contains some identical tennis balls. The ratio of the total volume of the tennis
balls to the volume of empty space surrounding them in the box is $1:k$, where $k$ is an
integer greater than one.
A prime number of balls is removed from the box. The ratio of the total volume of the
remaining tennis balls to the volume of empty space surrounding them in the box is 1:$k^2$.
Find the number of tennis balls that were originally in the box.
A few questions regarding this problem:
Does the shape of the box matter? I just let the volume of the box be a constant $V$. Also I noted that the ratio $\frac{1}{k^2} = \left( \frac{1}{k} \right)^2$, i.e. the new ratio is the old ratio squared. I also let the number of balls in the box be $n$ and the number of balls taken out be $p$, where $p$ is a prime, so the new number of balls in the box is $n-p$.
This is about all I could do in this problem but I would like to be guided towards solving the problem (and I'm also interested in your thought processes and what ideas you initially think of so I can get a better idea of what to think of when doing problem solving) than just being given the solution.
Your help would be much appreciated.
|
The ratio $1:k$ means in particular that if $\alpha$ is the total volume of the balls originally in the box, then $k\alpha$ is the total volume of the empty space surrounding them in the box. In general, if $V_t$ is the total volume of the tennis balls and $V_b$ is the total volume of the box, then the total volume of empty space surrounding the tennis balls in the box is $V_b-V_t.$ (This has nothing to do with the shape of the box. We simply need the box to be large enough that it can hold all the tennis balls originally in it, and still be closed.) What this means is that the total volume of the box (which will not be changing) is $\alpha+k\alpha=(1+k)\alpha.$
Now, let's suppose that there are $n$ balls in the box originally, and since they are all tennis balls, then we can assume that they all have the same volume. (In fact, if they had different volumes, there would be no way to do the problem, so we need to assume it.) Say that the volume of a single tennis ball is $\beta.$ Then $\alpha=n\beta,$ so the volume of the box is $(1+k)n\beta.$
Next, we remove a prime number of balls from the box, say $p,$ so that there are now $n-p$ balls in the box, so that the total volume of the tennis balls remaining is $(n-p)\beta.$ The total volume of the box is $(1+k)n\beta,$ so the total volume of the empty space around the remaining tennis balls in the box is $$(1+k)n\beta-(n-p)\beta=n\beta+kn\beta-n\beta+p\beta=(kn+p)\beta.$$ By assumption, then, the ratio $1:k^2$ is the same as the ratio $(n-p)\beta:(kn+p)\beta,$ which is clearly the same as $n-p:kn+p.$ This means that $$kn+p=k^2(n-p)\\kn+p=k^2n-k^2p\\k^2p+p=k^2n-kn\\(k^2+1)p=(k^2-k)n.$$
Now, since $k$ is an integer, then $k^2+1$ and $k^2-k$ are integers. In particular, since $$(k^2+1)p=k(k-1)n,$$ then $k\mid(k^2+1)p,$ so since $k$ and $k^2+1$ are relatively prime, then we must have that $k\mid p.$ Since $p$ is prime and $k$ is an integer greater than $1,$ it then follows that $k=p,$ so we have $$(k^2+1)p=(k-1)np\\k^2+1=(k-1)n\\k^2+1=kn-n\\k^2-kn+n+1=0.$$ This gives us a quadratic in $k,$ whose solutions are $$\begin{align}k &= \frac{n\pm\sqrt{n^2-4(n+1)}}2\\ &= \frac{n\pm\sqrt{n^2-4n-4}}2.\end{align}$$ Since $k$ is an integer, then we require $\sqrt{n^2-4n-4}$ to be rational, which means it must be a positive integer, say $m.$ Hence, $$n^2-4n-4=m^2\\n^2-4n+4-8=m^2\\(n-2)^2-8=m^2\\(n-2)^2-m^2=8\\\bigl((n-2)+m\bigr)\bigl((n-2)-m\bigr)=8\\(n-2+m)(n-2-m)=8.$$ Since $m$ is a positive integer, then we can conclude that $m=1,$ whence $n=5.$ (I leave it to you to show this.)
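As a sanity check on the whole derivation, a brute-force search (my own encoding of the volume condition as $k^2(n-p)=kn+p$) over small parameters finds exactly the solutions predicted:

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

solutions = [(n, p, k)
             for n in range(2, 101)
             for p in range(2, n) if is_prime(p)
             for k in range(2, 51)
             if k * k * (n - p) == k * n + p]
print(solutions)  # [(5, 2, 2), (5, 3, 3)] -- the box held 5 balls
```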
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
}
|
De Moivre's formula to solve $\cos$ equation Use De Moivre’s formula to show that
$$
\cos\left(x\right) + \cos\left(3x\right) = 2\cos\left(x\right)\cos\left(2x\right)
$$
$$
\mbox{Show also that}\quad
\cos^{5}\left(x\right) = \frac{1}{16}\,\cos\left(5x\right) + \frac{5}{16}\,\cos\left(3x\right) + \frac{5}{8}\,\cos\left(x\right)
$$
Hence solve the equation $\cos\left(5x\right) = 16\cos^{5}\left(x\right)$ completely.
Express your answers as rational multiples of $\pi$.
|
HINT:
$$ (\cos x + i \sin x)^n = \cos nx + i \sin nx $$
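Both identities to be shown can be spot-checked numerically at arbitrary sample angles:

```python
import math

max_err = 0.0
for x in (0.3, 1.1, 2.7, 5.0):
    e1 = abs(math.cos(x) + math.cos(3 * x) - 2 * math.cos(x) * math.cos(2 * x))
    e2 = abs(math.cos(x) ** 5
             - (math.cos(5 * x) + 5 * math.cos(3 * x) + 10 * math.cos(x)) / 16)
    max_err = max(max_err, e1, e2)
print(max_err < 1e-12)  # True: both identities hold to machine precision
```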
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Is a divisibility language context free? I am working to see if this language would be context free:
L = { 0^n 1^k : n/k is an integer }
After playing with it for a little while, I believe that the language is not context free. Now I am looking to use the pumping lemma to prove that it is not, but am struggling a bit to apply it to this language, which is making me question whether it might be a CFL.
Is this language context free? If not, how should I approach using the pumping lemma to prove that it is not.
|
Let $p$ be the pumping length and pick $k>p$.
As $0^k1^k\in L$, we can write $0^k1^k=xyz$ with $|xy|\le p$, $|y|\ge 1$, and $xy^iz\in L$ for all $i\ge 0$. We conclude that $x=0^r, y=0^s$ with $r\ge0$, $s\ge 1$. Then $xy^0z=0^{k-s}1^k\notin L$, since $0<k-s<k$ means $(k-s)/k$ lies strictly between $0$ and $1$ and so is not an integer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove $1<\frac1{e^2(e-1)}\int_e^{e^2}\frac{x}{\ln x}dx<e/2$ Prove the following inequalities:
a) $1.43 < \int_0^1e^{x^2}dx < \frac{1+e}2$
b) $2e <\int_0^1 e^{x^2}dx+\int_0^1e^{2-x^2}dx<1+e^2$
c) $1<\frac1{e^2(e-1)}\int_e^{e^2}\frac{x}{\ln x}dx<e/2$
Source: http://www.sms.edu.pk/downloads/prceedingsfordae1/1_intinequal.pdf
We know $\sqrt{\pi}/2=\int_0^\infty e^{x^2}dx>\int_0^1 e^{x^2}dx$, and according to Google calculator (https://www.google.com/search?q=sqrt(pi)%2F2&oq=sqrt(pi)%2F2&aqs=chrome..69i57j0.3176j0j7&sourceid=chrome&espv=210&es_sm=93&ie=UTF-8 ) $\sqrt{\pi}/2\approx 0.88622692545$, while $\frac{1+e}2>1$. But $1.43$ doesn't look right. Maybe there's a typo in the pdf file.
|
For (c), use the mean value theorem for integral calculus, i.e. take $f(x)=\frac{x}{\ln x}$ on the interval $[e,e^2]$.
By the usual first-derivative test, the minimum value of $f$ on this interval is $e$ (attained at $x=e$) and the maximum is $\frac{e^2}{2}$ (attained at $x=e^2$). Hence, applying the mean value theorem, we get
$$ e<\frac{\int_e^{e^2} \frac{x}{\ln x}\,dx}{e^2-e}<\frac{e^2}{2}$$
Rearranging, we get $$1<\frac{\int_e^{e^2} \frac{x}{\ln x}\,dx}{e^2(e-1)}<\frac{e}{2}$$
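A numeric check of the resulting bounds (simple trapezoidal quadrature; the step count is an arbitrary choice):

```python
import math

a, b, n = math.e, math.e ** 2, 50_000
h = (b - a) / n

def f(x):
    return x / math.log(x)

I = h * ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n)))
ratio = I / (math.e ** 2 * (math.e - 1))
print(round(ratio, 4), 1 < ratio < math.e / 2)  # roughly 1.156, True
```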
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/583984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Solve $z^4 + 4 = 0$ I'm trying to solve it by using its polar form, but then I get
$$
\begin{align*}
z^4 &= (\rho e^{i\phi})^4 = \rho^4 e^{4i\phi}\\
&= -4 = -4 e^{0i}\\
\end{align*}
$$
From the definition of equality of complex numbers, $\rho^4 = -4$ and $4\phi = 0 + 2\pi k$ for some $k \in \mathbb{Z}$.
This would mean $\rho = \sqrt{-2}$ and $\phi = \pi k / 2$. I have no idea how to interpret that imaginary radius, and Wolfram Alpha says the angle is $\pi k / 4$. Should this equation be solved using this method? Am I missing some step, or are my calculations incorrect?
I've already read Solve $z^4+1=0$ algebraically, but I want to know whether this equation is solvable using this method or another method should be used.
|
Something that we all never learned in high school, but should have, is the amazing factorization $X^4+4=(X^2+2X+2)(X^2-2X+2)$. With this, the total factorization is easy.
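A quick numeric check (not part of the original answer): the roots have modulus $\sqrt2$ and arguments $\frac{\pi+2\pi k}{4}$, i.e. they are $\pm1\pm i$, and each root annihilates one factor of the factorization above.

```python
import cmath

# Build the four roots in polar form and verify both the equation and
# the factorization (z^2+2z+2)(z^2-2z+2) numerically.
roots = [cmath.rect(2 ** 0.5, (cmath.pi + 2 * cmath.pi * k) / 4) for k in range(4)]
for z in roots:
    assert abs(z ** 4 + 4) < 1e-9
    assert abs((z * z + 2 * z + 2) * (z * z - 2 * z + 2)) < 1e-9
print(sorted((round(z.real), round(z.imag)) for z in roots))
# [(-1, -1), (-1, 1), (1, -1), (1, 1)]
```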
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/584059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
}
|
How to find the limit of $\frac{\ln(n+1)}{\sqrt{n}}$ as $n\to\infty$? I'm working on finding whether sequences converge or diverge. If it converges, I need to find where it converges to.
From my understanding, to find whether a sequence converges, I simply have to find the limit of the function.
I'm having trouble getting started on this one (as well as one more, but I'll stick to one at a time).
I would appreciate if someone could explain how I should start this one.
|
You need to use L'Hopital's rule, twice. What this rule states is that, basically, if you are trying to take the limit of something that looks like $\frac{f(x)}{g(x)}$, you can keep on taking the derivative of the numerator and denominator until you get a simple form where the limit is obvious. Note: L'Hopital's rule doesn't always work.
We have:
$$\lim_{n\to \infty} \frac{ln(n+1)}{\sqrt{n}} $$
We can keep on taking the derivative of the numerator and denominator to simplify this into a form where the limit is obvious:
$$= \lim_{n\to \infty} \frac{\frac{1}{n+1}}{\frac{1}{2\sqrt{n}}} $$
$$= \lim_{n\to \infty} \frac{(n+1)^{-1}}{(2\sqrt{n})^{-1}} $$
$$= \lim_{n\to \infty} \frac{2\sqrt{n}}{n+1} $$
Hrm - still not clear. Let's apply L'Hopitals rule one more time:
$$= \lim_{n\to \infty} \frac{\frac{1}{\sqrt{n}}}{1} $$
$$= \lim_{n\to \infty} \frac{1}{\sqrt{n}} $$
The limit should now be obvious. It is now of the form $\frac{1}{\infty}$, which equals zero.
$$\lim_{n\to \infty} \frac{ln(n+1)}{\sqrt{n}} = 0 $$
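A quick numeric illustration (mine, not part of the answer; the sample values of $n$ are arbitrary):

```python
import math

# The ratio ln(n+1)/sqrt(n) decreases and shrinks toward 0 as n grows.
vals = [math.log(n + 1) / math.sqrt(n) for n in (10, 10**3, 10**6, 10**9)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # decreasing on these samples
assert vals[-1] < 1e-3                             # already tiny at n = 10^9
print([round(v, 4) for v in vals])
```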
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/584152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
}
|
Exact probability of random graph being connected The problem: I'm trying to find the probability of a random undirected graph being connected. I'm using the model $G(n,p)$, where there are at most $n(n-1) \over 2$ edges (no self-loops or duplicate edges) and each edge has a probability $p$ of existing. I found a simple formula online where $f(n)$ is the probability of $G(n,p)$ being connected. But apparently it's too trivial for the writer to explain the formula (it was just stated briefly).
The desired formula: $f(n) = 1-\sum\limits_{i=1}^{n-1}f(i){n-1 \choose i-1}(1-p)^{i(n-i)}$
My method is: Consider any vertex $v$. Then there's a probability ${n-1 \choose i}p^i(1-p)^{n-1-i}$ that it will have $i$ neighbours. After connecting $v$ to these $i$ neighbours, we contract them ($v$ and its neighbours) into a single connected component, so we are left with the problem of $n-i$ vertices (the connected component plus $n-i-1$ other "normal" vertices).
Except that now the probability of the vertex representing connected component being connected to any other vertex is $1-(1-p)^i$. So I introduced another parameter $s$ into the formula, giving us:
$g(n,s)=\sum\limits_{i=1}^{n-1}g(n-i,i){n-1 \choose i}q^i(1-q)^{n-1-i}$,
where $q=1-(1-p)^s$. Then $f(n)=g(n,1)$. But this is nowhere as simple as the mentioned formula, as it has an additional parameter...
Can someone explain how the formula $f(n)$ is obtained? Thanks!
|
Here is one way to view the formula. First, note that it suffices to show that
$$
\sum_{i=1}^n {n-1 \choose i-1} f(i) (1-p)^{i(n-i)} = 1,
$$
since ${n-1 \choose n-1}f(n)(1-p)^{n(n-n)}=f(n).$ Let $v_1, v_2, \ldots, v_n$ be $n$ vertices. Trivially,
$$
1= \sum_{i=1}^n P(v_1 \text{ is in a component of size }i).
$$
Now let's consider why $$P(v_1 \text{ is in a component of size }i)={n-1\choose i-1}P(G(i,p) \text{ is connected}) (1-p)^{i(n-i)}.$$
If $\mathcal{A}$ is the set of all $i-1$ subsets of $\{v_2, v_3, \ldots, v_n\}$, then
\begin{align*}
P(v_1 \text{ in comp of size }i)&=\sum_{A \in \mathcal{A}}P(\text{the comp of }v_1\text{ is }\{v_1\}\cup A)\\&=\sum_{A \in \mathcal{A}}P(\{v_1\}\cup A \text{ is conn'd})P(\text{no edge btwn }A\cup\{v_1\} \& \ (A\cup \{v_1\})^c).
\end{align*}
This last equality is due to the fact that the edges within $\{v_1\}\cup A$ are independent of the edges from $A$ to $A^c$. There are precisely ${n-1 \choose i-1}$ elements in $\mathcal{A}$. The probability that $\{v_1\}\cup A$ is connected is equal to the probability that $G(1+|A|,p)$ is connected, which is $f(i)$. There are $i(n-i)$ missing edges from $\{v_1\}\cup A$ to $(\{v_1\}\cup A)^c$.
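The recursion in the question's "desired formula" can also be sketched directly in code (my own illustration; the values of $n$ and $p$ are arbitrary test inputs), with memoization since each $f(i)$ recurs:

```python
from functools import lru_cache
from math import comb

# f(n) = 1 - sum_{i=1}^{n-1} f(i) C(n-1, i-1) (1-p)^{i(n-i)}
@lru_cache(maxsize=None)
def connected_prob(n, p):
    if n == 1:
        return 1.0
    q = 1.0 - p
    return 1.0 - sum(connected_prob(i, p) * comb(n - 1, i - 1) * q ** (i * (n - i))
                     for i in range(1, n))

# Sanity checks: p = 1 forces connectivity, p = 0 disconnects any n >= 2,
# and for n = 3, p = 1/2 the probability is 1/2 (at least 2 of 3 edges).
assert connected_prob(4, 1.0) == 1.0
assert connected_prob(4, 0.0) == 0.0
print(round(connected_prob(3, 0.5), 4))  # 0.5
```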
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/584228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 2,
"answer_id": 1
}
|
$1-{1\over 2}+{1\over 3}-{1\over 4}+{1\over 5}-{1\over 6}+\dots-{1\over 2012}+{1\over 2013}$ The sum $1-{1\over 2}+{1\over 3}-{1\over 4}+{1\over 5}-{1\over 6}+\dots-{1\over 2012}+{1\over 2013}$ is equal,
a) ${1\over 1006}+{1\over 1007}+{1\over 1008}+\dots+{1\over 2013}$
b) ${1\over 1007}+{1\over 1008}+{1\over 1009}+\dots+{1\over 2013}$
c)${1\over 1006}+{1\over 1007}+{1\over 1008}+\dots +{1\over 2012}$
d) ${1\over 1007}+{1\over 1008}+{1\over 1009}+\dots+{1\over 2012}$
I tried by separating negatives and positives terms but did not get anything nice and simpler. Thank you for helping.
|
$$\sum_{1\le r\le 2n+1}(-1)^{r-1}\frac1r$$
$$=\sum_{1\le r\le 2n+1}\frac1r-2\left(\sum_{1\le r\le n}\frac1{2r}\right)$$
$$=\sum_{1\le r\le 2n+1}\frac1r-\left(\sum_{1\le r\le n}\frac1{r}\right)$$
$$=\sum_{n+1\le r\le 2n+1}\frac1r$$
Here $2n+1=2013\implies n=?$
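An exact computational check (my addition, not part of the original answer) confirms that the alternating sum equals $\frac1{1007}+\dots+\frac1{2013}$, i.e. option b):

```python
from fractions import Fraction

# Compare the alternating harmonic partial sum with the tail sum, exactly.
alt = sum(Fraction((-1) ** (r - 1), r) for r in range(1, 2014))
tail = sum(Fraction(1, r) for r in range(1007, 2014))
print(alt == tail)  # True
```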
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/584323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Let $L = \mathbb F_2[X]/\langle X^4 + X + 1 \rangle$; this is a field. Show $L^* = L \setminus \{0\} = \langle X \rangle$ is cyclic. Let $L = \mathbb F_2[X]/\langle X^4 + X + 1 \rangle$; this is a field. Show $L^* = L \setminus \{0\} = \langle X \rangle$ is cyclic.
I've proven that $X^4 + X + 1$ is irreducible, so $L$ is a field. I also know that $X^5 + X + 1$ is not irreducible. Also I've proven that $|L| = 16$.
Could someone help me out proving that $L^*$ is cyclic ?
|
Since the group is of order 15, any element must have order 1, 3, 5 or 15.
In any field the equation $x^k = 1$ can have at most $k$ roots, so the number of elements with order less than $15$ is at most $1 + 3 + 5 = 9$.
Hence there must be an element of order 15.
I will add, after the query from OP, that a slightly more sophisticated form of this argument will show that the multiplicative group of any finite field is cyclic.
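A small computational sanity check (my own illustration, not part of the post; the 4-bit integer encoding of polynomials over $\mathbb F_2$ is an assumption of the sketch): arithmetic in $\mathbb F_2[X]/\langle X^4+X+1\rangle$, confirming that the class of $X$ has order 15 and hence generates $L^*$.

```python
# Elements of L are polynomials of degree < 4 over F_2, encoded as 4-bit
# ints; multiplication is carry-less, reduced modulo X^4 + X + 1.
MOD = 0b10011  # X^4 + X + 1

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:   # reduce whenever degree reaches 4
            a ^= MOD
        b >>= 1
    return r

# Successive powers of X (encoded as 0b0010) should run through all of L*.
powers, e = [], 1
for _ in range(15):
    e = gf_mul(e, 0b0010)
    powers.append(e)
assert e == 1                                # X^15 = 1
print(sorted(powers) == list(range(1, 16)))  # True: X generates L*
```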
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/584431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Is this function injective / surjective? A question regarding set theory.
Let $g\colon P(\mathbb R)\to P(\mathbb R)$, $g(X)=(X \cap \mathbb N^c)\cup(\mathbb N \cap X^c)$
that is, the symmetric difference between $X$ and the natural numbers.
We are asked to show if this function is injective, surjective, or both.
I tried using different values of $X$ to show that it is injective, and indeed it would seem that it is; I can't find $X$ and $Y$ such that $X \neq Y$ and $g(X)=g(Y)$, but how do I know for sure? How do I formalize a proof for it?
Regarding surjective: I think that it is.
We can take $X=\mathbb R - \mathbb N$ and we get that $g(X)=\mathbb R$
What do I do about the injective part?
|
We can view subsets of $X$ as functions $X \to \{0,1\}$: $0$ if the point is not in the set, $1$ if it is. It is commonly very useful to rewrite questions about subsets as questions about functions.
What form does $g$ take if we do this rewrite?
Terminology wise, if $S \subseteq X$, then $\chi_S$ is the function described above:
$$\chi_S(x) = \begin{cases} 0 & x \notin S \\ 1 & x \in S \end{cases} $$
It is called the characteristic function of $S$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/584601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Are convex function from a convex, bounded and closed set in $\mathbb{R}^n$ continuous? If I have a convex function $f:A\to \mathbb{R}$, where $A$ is a convex, bounded and closed set in $\mathbb{R}^n$, for example $A:=\{x\in\mathbb{R}^n:\|x\|\le 1\}$ the unit ball. Does this imply that $f$ is continuous? I've searched the web and didn't found a theorem for this setting (or which is applicable in this case). If the statement is true, a reference would be appreciated.
|
The continuity of convex functions defined on topological vector spaces is rather well understood. For functions defined on a finite dimensional Banach space, i.e., $\mathbb{R}^n$, the classical monograph Convex Analysis by R.T. Rockafellar is a good place to check. Let me first point out a simple trick used in convex analysis. $\newcommand{\bR}{\mathbb{R}}$
If $C\subset \bR^n$ is a convex set and $f: C\to \bR$ is a convex function, then we can define an extension
$$ \hat{f}:\bR^n\to (-\infty,\infty],\;\; \hat{f}(x)=\begin{cases} f(x), \;\; x\in C\\
\infty, &x\in\bR^n\setminus C.\end{cases} $$
The above extension is obviously convex. Hence we may as well work from the very beginning with convex functions $f:\bR^n\to (-\infty,\infty]$. The set where $f<\infty$ is called the domain of $f$ and it is denoted by $D(f)$. The domain is a convex subset of $\bR^n$. It carries an induced topology, and the interior of $C$ with respect to the induced topology is called the relative interior.
Theorem 10.1 in the above mentioned book of Rockafellar shows that the restriction of a convex function to the relative interior of its domain is a continuous function.
For example, any convex function defined on the closed unit ball in $\bR^n$, must be continuous in the interior of the ball. Daniel Fisher's example shows that's the best one could hope for.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/584680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Commutation of exponentials of matrices Given two $n \times n$ real matrices $A$ and $B$, prove that the following are equivalent:
(i) $\left[A,B\right]=0$
(ii) $\left[A,{\rm e}^{tB}\right] = 0,\quad$ $\forall\ t\ \in\ \mathbb{R}$
(iii) $\left[{\rm e}^{sA},{\rm e}^{tB}\right] = 0;\quad$ $\forall\ s,t\ \in\ \mathbb{R}$
where $\left[A,B\right] = AB - BA$ is the commutator.
First of all, this is homework, so no need for a complete answer. It is pretty easy to show that $\left(i\right)\Rightarrow\left(ii\right)$ and $\left(ii\right)\Rightarrow(iii)$. However, I have no idea for $\left(iii\right)\Rightarrow\left(i\right)$ other than explicitly writing the exponentials, and that doesn't seem to lead anywhere:
$\displaystyle{\sum_{i=0}^{\infty}\sum_{j = 0}^{\infty}
{s^{i}t^{\,j}\over i!\,j!}\,\left[A^{i},B^{\,j}\right]}.\qquad$
(I think...) Any tips?
|
The easiest way that I see is to do it in two steps, proving $(iii) \Rightarrow (ii) \Rightarrow (i)$.
To prove $(iii) \Rightarrow (ii)$, differentiate $f(s) = [e^{sA},e^{tB}] \equiv 0$. The proof of $(ii) \Rightarrow (i)$ is quite similar.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/584754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Entropy of a distribution over strings Suppose for some parameter $d$, we choose a string from the Hamming cube ($\{0,1\}^d$) by setting each bit to be $0$ with probability $p$ and $1$ with probability $1-p$. What is the entropy of this distribution on the Hamming cube? Clearly, if $p=\frac{1}{2}$, then the entropy would just be $-\log_2 \left(\frac{1}{2^d} \right) = d$. If $p = 0$, then the entropy would be $0$. What would be the entropy for the general case in terms of $p$? It would clearly be less than $d$....
|
In the Hamming $d$-cube there are $\binom{d}k$ points with $k$ zeroes and $d-k$ ones, so the entropy is
$$-\sum_{k=0}^d\binom{d}kp^k(1-p)^{d-k}\lg\left(p^k(1-p)^{d-k}\right)\;,$$
or
$$-\sum_{k=0}^d\binom{d}kp^k(1-p)^{d-k}\Big(k\lg p+(d-k)\lg(1-p)\Big)\;.$$
Now
$$\begin{align*}
\sum_{k=0}^d\binom{d}kkp^k(1-p)^{d-k}&=d\sum_{k=0}^d\binom{d-1}{k-1}p^k(1-p)^{d-k}\\\\
&=dp\sum_{k=0}^{d-1}\binom{d-1}kp^k(1-p)^{d-k-1}\\\\
&=dp\;,
\end{align*}$$
and
$$\sum_{k=0}^d\binom{d}kdp^k(1-p)^{d-k}=d\;,$$
so
$$-\sum_{k=0}^d\binom{d}kp^k(1-p)^{d-k}\lg\left(p^k(1-p)^{d-k}\right)=-d\Big(p\lg p+(1-p)\lg(1-p)\Big)\;.$$
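For small $d$ the closed form $d\cdot H(p)$ can be checked by brute-force enumeration of the cube (my own sketch; $d$ and $p$ are arbitrary test values, logs base 2):

```python
import math
from itertools import product

# Sum -P(s) lg P(s) over all 2^d strings and compare with d*H(p).
d, p = 5, 0.3
H = 0.0
for bits in product((0, 1), repeat=d):
    k = bits.count(0)                      # zeros occur with probability p
    prob = p ** k * (1 - p) ** (d - k)
    H -= prob * math.log2(prob)
closed = -d * (p * math.log2(p) + (1 - p) * math.log2(1 - p))
assert abs(H - closed) < 1e-12
print(abs(H - closed) < 1e-12)  # True
```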
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/584821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Show that if $n$ is not divisible by $2$ or by $3$, then $n^2-1$ is divisible by $24$.
Show that if $n$ is not divisible by $2$ or by $3$, then $n^2-1$ is divisible by $24$.
I thought I would do the following ... As $n$ is not divisible by $2$ and $3$ then $$n=2k+1\;\text{or}\\n=3k+1\;\text{or}\\n=3k+2\;\;\;\;$$for some $k\in\mathbb{N}$.
And then make induction over $k$ in each case.$$24\mid (2k+1)^2-1\;\text{or}\\24\mid (3k+1)^2-1\;\text{or}\\24\mid (3k+2)^2-1\;\;\;\;$$This question is in a text of Euclidean Division I'm reviewing, and I wonder if there is a simpler, faster, or direct this way.
|
$n^2-1=(n-1)(n+1)$
$n$ is not even so $n-1$ and $n+1$ are even.
Also $n=4t+1$ or $4t+3$, this means at least one of $n-1$ or $n+1$ is divisible by 4.
$n$ is not $3k$ so at least one of $n-1$ or $n+1$ must be divisible by 3.
So $n^2-1$ has factors of 4, 2(distinct from the 4) and 3 so $24|n^2-1$
Edit: I updated my post after arbautjc's correction in his comment.
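An exhaustive check by residues (my addition, not part of the answer): since $n^2-1 \bmod 24$ depends only on $n \bmod 24$, it suffices to test the residues coprime to $2$ and $3$.

```python
# Collect any residue n mod 24 (coprime to 2 and 3) for which 24 fails
# to divide n^2 - 1; the list should come out empty.
bad = [n for n in range(24) if n % 2 != 0 and n % 3 != 0 and (n * n - 1) % 24 != 0]
print(bad)  # [] — every admissible residue passes
```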
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 3
}
|
How to notice that $3^2 + (6t)^2 + (6t^2)^2$ is a binomial expansion. The other day during a seminar, in a calculation, a fellow student encountered this expression:
$$\sqrt{3^2 + (6t)^2 + (6t^2)^2}$$
He, without much thinking, immediately wrote down:
$$(6t^2+3)$$
What bothers me, is that I didn't see that. Although I suspected there might be some binomial expansion, I was too confused by the middle term to see it. Even after actually seeing the answer, it took me a good deal to realize what's happening.
My initial idea, by the way, was to substitute for $t^2$ and solve a quadratic equation. Horribly slow.
My questions is perhaps tackling a greater problem. I now see what happened, I can "fully" understand binomial expansions, yet I feel like I might end up encountering a similar expression and yet again not be able to see the result. My questions is thus:
How can I properly practice so I don't repeat this mistake?
This is perhaps too much of a soft-question, but I feel like many others have been in a position of understanding a problem, yet not feeling like they would be able to replicate it, and might actually have an answer that goes well beyond the obvious "just practice more".
Thanks for any input.
|
$\bigcirc^2+\triangle+\square^2$ is a complete square if $\left|\triangle\right|=2\cdot\bigcirc\cdot\square$.
For example, $49x^2-42x+9$ is a complete square since the only candidate for $\triangle$ is $-42x$, and indeed $42x=2\cdot7x\cdot3$. In our special case your fellow student had three candidates for the roles of $\bigcirc$, $\triangle$ and $\square$. "Without much thinking" -- a bit of thinking was implied, since among the three choices he verified that $(6t)^2=2\cdot3\cdot6t^2$.
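The identification can be confirmed mechanically (my own sketch) by comparing coefficients of $3^2+(6t)^2+(6t^2)^2$ and $(6t^2+3)^2$ as polynomials in $t$:

```python
# Coefficient lists in powers of t: lhs is 9 + 36 t^2 + 36 t^4,
# rhs is (3 + 6 t^2)^2 computed by polynomial self-convolution.
lhs = [9, 0, 36, 0, 36]
a = [3, 0, 6]                      # 3 + 6 t^2
rhs = [0] * (2 * len(a) - 1)
for i, ci in enumerate(a):
    for j, cj in enumerate(a):
        rhs[i + j] += ci * cj
print(lhs == rhs)  # True
```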
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
}
|
Proving $\lim_{n \to\infty} \frac{1}{n^p}=0$ for $p > 0$? I'm trying to prove 3.20a) from baby Rudin. We are dealing with sequences of real numbers.
Theorem.
$$\lim_{n \to {\infty}} \frac{1}{n^p} = 0; \hspace{30 pt}\mbox {$p > 0$}$$
Proof. Let $\epsilon > 0$. Because of the Archimedan property of real numbers, there exists an $N \in \Bbb{N}$ such that $n_0 \geq N$ implies $\frac{1}{n_0} < \epsilon$ and thus implies $n_0 > \frac{1}{\epsilon}$. Thus $\exists n \geq n_0 : n > (\frac{1}{\epsilon})^k$ where $k$ is any number. For the interesting case, pick $k = \frac{1}{p}$ where $p > 0$. Thus $n > (\frac{1}{\epsilon})^{\frac{1}{p}}$ implies $$\frac{1}{n} < \frac{1}{\left(\frac{1}{\epsilon}\right)^{\frac{1}{p}}} \Longrightarrow \frac{1}{n^p} < \epsilon$$
which further implies $d(n, N) < \epsilon$. QED.
|
Another approach: you can show $x^p \rightarrow \infty$ as $x\rightarrow \infty$. Rewrite $x^p$ as $$\frac {x^{p+1}}{x},$$ which is an indeterminate $\infty/\infty$, and use L'Hopital to get $$\frac {(p+1)x^{p}}{1}.$$ Since $p$ is fixed and $p+1>1$, you can show this goes to $\infty$, and then $1/x^p\rightarrow 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Showing that $E[X|X<x] \leq E[X]$ I would like to show that:
$\hspace{2mm} E[X|X<x] \hspace{2mm} \leq \hspace{2mm} E[X] \hspace{2mm} $ for any $x$
X is a continuous R.V. and admits a pdf. I'm guessing this isn't too hard but I can't come up with a rigorous proof. Thanks so much.
|
Hint: $$E[X] = E[X|X<x]P(X<x) + E[X|X\geq x]P(X\geq x)$$
Also:
$$E(X|X<x)< x \leq E(X|X\geq x)$$
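A Monte Carlo illustration of the inequality (my own sketch, not part of the hint; the normal distribution, sample size and cutoff are arbitrary choices):

```python
import random

# Empirically, the mean of the sampled values below a cutoff never
# exceeds the overall sample mean.
random.seed(0)
xs = [random.gauss(0, 1) for _ in range(100_000)]
cutoff = 0.7
below = [v for v in xs if v < cutoff]
mean_below = sum(below) / len(below)
mean_all = sum(xs) / len(xs)
print(mean_below <= mean_all)  # True
```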
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to prove $\frac{y^2-x^2}{x+y+1}=\pm1$ is a hyperbola? How to prove $\frac{y^2-x^2}{x+y+1}=\pm1$ is a hyperbola, knowing the canonical form is $\frac{y^2}{a^2}-\frac{x^2}{b^2}=\pm1$ where $a$ and $b$ are constants? Thanks !
|
Let
$$
\frac{y^2-x^2}{x+y+1}=1\\
\Rightarrow y^2-x^2=x+y+1\\
\Rightarrow y^2-x^2-x-y=1
$$
Complete the squares for $x$ and $y$; you will get a rectangular hyperbola. The case is similar if
$$
\frac{y^2-x^2}{x+y+1}=-1$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Fourier series formula with finite sums
Let $f\in C(\mathbb{R}/2\pi\mathbb{Z})$, meaning that $f$ is continuous with period $2\pi$. Let $x_N(j)=2\pi j/N$. Define $$c_N(n)=\dfrac1N\sum_{j=1}^Nf(x_N(j))e^{-ix_N(j)n}.$$ Show that for any integer $M$, $$f(x_N(j))=\sum_{n=-M}^{N-M-1}c_N(n)e^{ix_N(j)n}.$$
This looks like the Fourier series formula, but the Fourier series comes with the integral from $-\pi$ to $\pi$. Here there are only finite sums. How do we prove it?
|
You are asked to prove the formula for the Discrete Fourier Transform. The continuous periodic function is largely irrelevant, since we only deal with its values on the uniform grid $x_j$. Presumably there is some way to get DFT from continuous Fourier transform, but I would not bother: the DFT is simpler, since we work in a finite dimensional space: the space of $N$-periodic functions on the grid $\frac{2\pi}{N}\mathbb Z$. This space is $N$-dimensional, with natural inner product $\langle f,g\rangle =\sum_{j=0}^{N-1} f(j)\overline{g(j)}$. The vectors $f_k$, $k=0,\dots,N-1$, defined by $f_k(j) = \exp(ijk/N)$, are orthogonal. Each has norm $\sqrt{N}$. Hence, every function $f$ is expanded as $\sum_{k=0}^{N-1} c_k f_k$ where $c_k=\frac{1}{N}\langle f,f_k\rangle$. This was the formula to be proved. Lastly, the interval $j=0,\dots,N-1$ could be replaced by any interval of the same length, due to periodicity.
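The discrete identity can be checked numerically (my own sketch; the $2\pi$-periodic test function and the values of $N$ and $M$ are arbitrary choices):

```python
import cmath, math

# c(n) as in the question; the reconstruction over any window of N
# consecutive frequencies recovers f exactly at the grid points.
N, M = 8, 3
f = lambda t: math.cos(t) + 0.5 * math.sin(2 * t) + 1.0
xs = [2 * math.pi * j / N for j in range(1, N + 1)]
c = lambda n: sum(f(xj) * cmath.exp(-1j * xj * n) for xj in xs) / N
for xj in xs:
    recon = sum(c(n) * cmath.exp(1j * xj * n) for n in range(-M, N - M))
    assert abs(recon - f(xj)) < 1e-9
print("identity holds at every grid point")
```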
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Proof that the limit of a sequence is equal to the limit of its partial sums divided by n Let $\{ x_n \}_n$ be a sequence of real numbers. Suppose $ \lim_{n \to \infty}x_n=a.$
Show that
$$\lim_{n \to \infty} \frac{x_1+x_2+...+x_n}{n}=a$$
As it is my first proof I'm not really sure whether I am allowed to do the following steps:
$$S_n = \sum_{i=1}^{n}{x_n} = S_{n-1}+x_n$$
Therferore we get $x_n=S_n-S_{n-1}$
Taking $\lim_{n \to \infty} x_n=\lim_{n \to \infty}(S_n-S_{n-1})=\lim_{n \to \infty}S_n-\lim_{n \to \infty}S_{n-1}$
We know that: $\lim_{n \to \infty} x_n=a$ and we also know that $S_{n-1}=S_{n-2}+x_{n-1}$
Therefore we get: $a=\lim_{n \to \infty}S_n-\lim_{n \to \infty}S_{n-2}-\lim_{n \to \infty}x_{n-1}$
Now $\lim_{n \to \infty}x_{n-1}=\lim_{n \to \infty}x_{n}=a$ and $a=\lim_{n \to \infty}S_n-\lim_{n \to \infty}S_{n-2}-a$
Repeating these steps we get: $a=\lim_{n \to \infty}S_n-\lim_{n \to \infty}S_{n-n}-(n-1)a=\lim_{n \to \infty}S_n-\lim_{n \to \infty}S_{0}-(n-1)a$
As $\lim_{n \to \infty}S_{0}=0$ we get $a=\lim_{n \to \infty}S_n-(n-1)a$ or if we rearrange $n*a=\lim_{n \to \infty}S_n$
Using the product rule for limits we get: $a=\frac{\lim_{n \to \infty}S_n}{n}=\lim_{n \to \infty}\frac{S_n}{n}$ q.e.d.
Is this proof consistent and complete?
|
The proof is quite wrong and almost nonsensical (Sorry!).
1) You cannot assume $\lim_{n \to \infty} S_n$ exists.
2) Since $n$ is the variable which tends to infinity, you cannot repeat $n$ times like that and get it out of the limit.
3) The statement $na = \lim_{n \to \infty} S_n$ is nonsensical.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Integration of parabola I have this homework question I am working on:
The base of a sand pile covers the region in the xy-plane that is bounded by the parabola $x^2 + y = 6$ and the line $y = x$. The height of the sand above the point $(x,y)$ is $x^2$. Express the volume of sand as (i) a double integral, (ii) a triple integral. Then (iii) find the volume.
I have drawn the $x^2 + y = 6$ and $y=x$ plane and found the intersection between the functions to be $(-3,-3)$ and $(2,2)$. So I now know what the base looks like. Now I am REALLY confused what the question means about the $x^2$ being the height. What point are they talking about?
Also, if it is a volume then doesn't it HAVE to be a triple integral? How can I possibly express it as a double integral?
|
Now I am REALLY confused what the question means about the x2 being the height. What point are they talking about?
Means exactly that, the height of the surface on the $z$ axis is given by $z=x^2$, you can also look at it like a function on the $xy$ plane given by $z=f(x,y) = x^2$.
Also, if it is a volume then doesn't it HAVE to be a triple integral? How can I possibly express it as a double integral?
No. Take single integrals as an example: you can determine the length of a path (1 dimension), areas (2 dimensions) or even some volumes (solids of revolution = 3 dimensions).
Generally, you can find the volume of a closed region with $\int\int\int dV$ or an integral of the form $\int\int f(x,y)\;dS$ -- assuming that the region is bounded by the surface given by $z=f(x,y)$.
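For part (iii), here is a worked evaluation (my own computation, not part of the answer), using the intersection points $(-3,-3)$ and $(2,2)$ found in the question: $V = \int_{-3}^{2} x^2\,(6 - x^2 - x)\,dx$, evaluated exactly via the antiderivative.

```python
from fractions import Fraction

# Antiderivative of 6x^2 - x^4 - x^3, evaluated exactly with rationals.
def F(x):
    x = Fraction(x)
    return 2 * x ** 3 - x ** 5 / 5 - x ** 4 / 4

V = F(2) - F(-3)
print(V)  # 125/4
```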
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Positivity of the Coulomb energy in 2d Let $$D(f,g):=\int_{\mathbb{R}^3\times\mathbb{R}^3}\frac{1}{|x-y|}\overline{f(x)}g(y)~dxdy$$
with $f,g$ real valued and sufficiently integrable be the usual Coulomb energy. Under the assumption $D(|f|,|f|)<\infty$ it can be seen that $D(f,f)\geq 0$ (see for example Lieb-Loss, Analysis 9.8). The reason for that is basically that the Fourier transform of $\frac{1}{|\cdot|}$ is non-negative.
In two dimensions the Newton kernel is given by $-\log|\cdot|$ which does not have a positive Fourier transform anymore. I saw the claim that under further assumptions the positivity of the Coulomb energy however does still hold true. Precisely:
If $f\in L^1(\mathbb{R}^2)\cap L^{1+\epsilon}(\mathbb{R}^2)$ for some $\epsilon>0$ such that $$\int_{\mathbb{R}^2}\log(2+|x|)|f(x)|~dx<\infty$$
and $$\int f=0,$$ then $$D(f,f):=-\int_{\mathbb{R}^2\times\mathbb{R}^2}\log{|x-y|}\overline{f(x)}f(y)~dxdy\geq0.$$
The reference given is Carlen, Loss: Competing symmetries, the logarithmic HLS inequality and Onofri's inequality on $S^n$. I don't see how the claim follows from that paper.
Do you have any other reference for the claim above or see a reason why it should be true?
|
This is true when the support of $f$ is contained in the unit disc. If the support is contained in a disc $|z|<R$, then $(f,f)$ is bounded from below by a constant that depends
on $R$. This minor nuisance makes the logarithmic potential somewhat different from the Newtonian
potential, however most statements of potential theory are similar for these two cases,
or can be easily modified.
For the details, the standard reference is
MR0350027 Landkof, N. S. Foundations of modern potential theory. Springer-Verlag,
New York-Heidelberg, 1972.
(This is copied from my ans on math.overflow).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
If a function is smooth is 1 over the function also smooth If $f(x):\mathbb{R}\rightarrow\mathbb{C}$ is $C^\infty$-smooth. Is $1/f(x)$ also $C^\infty$-smooth? $f(x)\neq0$
|
If $f$ is differentiable and non-zero at some point $a$, then $1/f$ is differentiable at $a$, and $(1/f)'=-f'/f^2$. This is a "base case" of an induction argument for the following statement:
$1/f$ is $n$ times differentiable, and $(1/f)^{(n)}$ equals a polynomial in the functions $f,f',f'',\ldots,f^{(n-1)}$ divided by a power of $f$.
The induction step is just applying the quotient rule.
Have fun!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
If $B-A$ and $A$ are positive semidefinite, then $\sqrt{B}-\sqrt{A}$ is positive semidefinite Question:
Let $A,B$ be matrices such that $B-A$ and $A$ are positive semidefinite.
Show that:
$\sqrt{B}-\sqrt{A}$ is positive semidefinite.
Maybe the general statement is also true?
Question 2:
(2) $\sqrt[k]{B}-\sqrt[k]{A}$ is positive semidefinite.
This problem is very nice, because we all know that
if $$x\ge y\ge 0,$$ then we have
$$\sqrt{x}\ge \sqrt{y}.$$
But I can't prove that the analogous statement holds for matrices. Thank you.
|
Another proof (short and simple) from "Linear Algebra and Linear Models" by R. B. Bapat.
Lemma Let $A$ and $B$ be $n\times n$ symmetric matrices such that $A$ is positive definite and $AB+BA$ is positive semidefinite; then $B$ is positive semidefinite.
Proof of $B\geq A \implies B^{\frac{1}{2}}\geq A^{\frac{1}{2}}$
First consider the case, when $A$ and $B$ are positive definite.
Let $X=(B^{\frac{1}{2}}+ A^{\frac{1}{2}})$ and $ Y=(B^{\frac{1}{2}}- A^{\frac{1}{2}})$,
then $XY+YX=2(B-A)$
Now, $(B-A)$ is positive semidefinite implies (given) $\implies 2(B-A)$ is positive semidefinite. Also $X=(B^{\frac{1}{2}}+ A^{\frac{1}{2}})$ is positive definite as positive linear combination of positive definite matrices is positive definite.
Hence by the lemma, $Y=(B^{\frac{1}{2}}- A^{\frac{1}{2}})$ is positive semidefinite. Therefore, $B^{\frac{1}{2}}\geq A^{\frac{1}{2}}$
The case when $A$ and $B$ are positive semidefinite matrices can be dealt with as in the other answer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Fourier transform supported on compact set Let $f\in L^2(\mathbb{R})$ be such that $\hat{f}$ is supported on $[-\pi,\pi]$. Show that $$\hat{f}(y)=1_{[-\pi,\pi]}(y)\sum_{n=-\infty}^\infty f(n)e^{-iny}$$ in the sense of $L^2(\mathbb{R})$-norm convergence.
I know that $f$ must be continuous and going to $0$ at $\pm\infty$. The Fourier transform on $L^2$ is defined in a rather complicated way as a limit of Fourier transforms of functions in the Schwartz class. The right-hand side is an infinite sum (rather than the integral). How can we relate the two sides?
|
1. Expand $\hat f$ into a Fourier series on $[-\pi,\pi]$, that is $\hat f(y)=\sum_{n=-\infty}^\infty c_n e^{-iny}$. (I put $-$ in the exponential to get closer to the desired form; this does not change anything since $n$ runs over all integers anyway.)
2. Write $c_n$ as an integral of $\hat f(y)e^{iny}$ over $[-\pi,\pi]$.
3. Observe that the integral in 2, considered as an integral over $\mathbb R$, is the inverse Fourier transform. Recognize $f(n)$ in it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
derivative of an integral from 0 to x when x is negative? Given a function $$F(x) = \int_0^x \frac{t + 8}{t^3 - 9}dt,$$ is $F'(x)$ different when $x<0$, when $x=0$ and when $x>0$?
When $x<0$, is $$F'(x) = - \frac{x + 8}{x^3 - 9}$$ ... since the integral then runs from a bigger number to a smaller one? That's what I initially thought, but when I graphed the antiderivative the intervals of increase and decrease were different from the ones in my calculation.
EDIT: Yes, I was talking about that identity.
So is $$F'(x) = \frac{x + 8}{x^3 - 9}$$ or $$F'(x) = -\frac{x + 8}{x^3 - 9}$$ ??
EDIT: thanks for the answers! But what happens when $x$ is bigger than $\sqrt[3]{9}$? Is the derivative also the same? Btw, when $x = 0$ shouldn't $F(x) = 0$ and hence $F'(x) = 0$?
|
No, we have
$$F'(x)=\frac{x+8}{x^3-9}$$
for all $x<\sqrt[3]{9}$. The limitation is due to the fact that the integral is meaningful only when the interval doesn't contain $\sqrt[3]{9}$ and so we must consider only the interval $(-\infty,\sqrt[3]{9})$ that contains $0$.
If $b<a$, one sets, by definition,
$$
\int_a^b f(t)\,dt=-\int_b^a f(t)\,dt
$$
so the equality
$$
\int_a^c f(t)\,dt=\int_a^b f(t)\,dt+\int_b^c f(t)\,dt
$$
holds without restrictions on the limits of integration, provided we don't jump over points where $f$ is not defined, so that all the integrals make sense.
This relation is what the fundamental theorem of calculus relies on. Remember that, for continuous $f$, there exists $\xi$ such that
$$
\frac{1}{b-a}\int_a^b f(t)\,dt=f(\xi)
$$
where $\xi\in[a,b]$ if $a<b$. Therefore, for $h>0$,
$$
\int_0^{x+h}f(t)\,dt-\int_0^x f(t)\,dt=
\int_x^0 f(t)\,dt+\int_0^{x+h}f(t)\,dt=
\int_{x}^{x+h}f(t)\,dt=hf(\xi)
$$
for some $\xi\in[x,x+h]$, so that
$$
\lim_{h\to0+}\frac{F(x+h)-F(x)}{h}=f(x).
$$
Similarly for the limit from the left.
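As a numerical sanity check (separate from the argument above; the quadrature scheme, step count, and test point are ad hoc choices), one can approximate $F$ with a midpoint rule and compare a central difference against $(x+8)/(x^3-9)$ at a negative $x$:

```python
def integrand(t):
    return (t + 8.0) / (t**3 - 9.0)

def F(x, steps=200_000):
    # Midpoint rule on [0, x]; the signed convention int_0^x = -int_x^0
    # is automatic, because the step h = x/steps is negative when x < 0.
    h = x / steps
    return h * sum(integrand((k + 0.5) * h) for k in range(steps))

x = -2.0
h = 1e-4
numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference for F'(x)
exact = (x + 8.0) / (x**3 - 9.0)
print(numeric, exact)  # both close to -0.3529, same sign
```

The signed convention $\int_0^x = -\int_x^0$ is what makes the same formula for $F'$ work on both sides of $0$.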
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/585924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Is a decimal with a predictable pattern a rational number? I'm starting as a private Math tutor for a high school kid; in one of his Math Laboratories (that came with an answer sheet) I was stumped by an answer I encountered in the True or False section (I'm certain it should've been a False):
The number 4.212112111211112... is a rational number.
I've been searching through several threads and search results, but I haven't found anything that confirms or denies this.
My reasoning for answering 'False' is that, since the decimal expansion is non-terminating and never repeats, the number must be irrational; granted, there is a predictable pattern, but it is not a repeating one.
Am I wrong? I just want to make sure I give this kid the correct answer.
|
A real number is rational if and only if its decimal expansion terminates or eventually repeats.
Lemma: Every prime $p \neq 2, 5$ divides a repunit.
Proof of Lemma:
Fix a prime $p \neq 2,5$. Let $\textbf{A}$ be the set of repunits, so
$$\textbf{A} = \left\{\displaystyle\sum\limits_{k=1}^{n} 10^{k-1} \, \mid \, n \in \mathbb{N} \right\} = \left\{\frac{10^n -1}{9} \, \mid \, n \in \mathbb{N} \right\}$$
Consider the repunits, modulo $p$. Since $\mathbb{N}$ is not a finite set, neither is $\textbf{A}$. There are a finite number of remainders modulo $p$ (specifically, $p$ possible remainders).
There are (infinitely) more repunits than remainders modulo $p$. Thus, there must exist two distinct repunits with the same residue modulo $p$. So $$ \exists \, a, b \in \textbf{A} \,\, \text{s.t.} \,\,\,\,\,\, a \equiv b \pmod{p}, \,\, a \neq b$$
Without loss of generality, assume $a > b$.
Since $a, b \in \textbf{A}$, $\exists \, x, y \in \mathbb{N}$ with $x > y$ such that
$$a = \frac{10^x - 1}{9}$$
$$b = \frac{10^y - 1}{9}$$
We can substitute in to $a \equiv b \pmod{p}$ to get:
$$\frac{10^x - 1}{9} \equiv \frac{10^y - 1}{9} \pmod{p}$$
$$\frac{\left(10^x - 1\right)-\left( 10^y - 1\right)}{9}\equiv 0 \pmod{p}$$
$$\frac{10^x-10^y}{9} \equiv 0 \pmod{p}$$
$$\frac{\left(10^y\right)\left(10^{x-y}-1 \right)}{9}\equiv 0 \pmod{p}$$
We know that $p \nmid 10^y$, because $p$ is not $2$ or $5$. Since $\mathbb{Z}/p\,\mathbb{Z}$, the ring of integers modulo $p$, has no zero divisors (because $p$ is prime),
$$\frac{10^{x-y}-1}{9}\equiv 0 \pmod{p}$$
This is a repunit.
Since our choice of $p \neq 2, 5$ was arbitrary, we have proved that every prime that is not $2$ or $5$ divides a repunit. It follows that every prime that is not $2$ or $5$ divides nine times a repunit (a positive integer whose digits are all nines).
Note that this proof applies to any value of $p$ (not necessarily prime) such that $p$ is not divisible by $2$ or $5$. The step involving the absence of zero divisors in $\mathbb{Z}/p\,\mathbb{Z}$ can be modified to state that $\gcd\left(10^y, p\right) = 1$ when $2 \nmid p$ and $5 \nmid p$.
Every rational number has a decimal representation that either terminates or eventually repeats.
Proof:
Consider a positive rational number $N = r/s$ for $r, s \in \mathbb{N}$ with $\gcd(r,s) = 1$.
If $s=1$, $N$ trivially has a terminating decimal expansion. Suppose $s \neq 1$.
Let $m_i$ be positive integers and $q_i \in \mathbb{N}$ be $n$ primes with $q_k < q_{k+1}$ so that
$$s = q_{1}^{m_1} \cdot q_{2}^{m_2} \cdots q_{n}^{m_n} = \displaystyle\prod\limits_{k=1}^{n} q_{k}^{m_k}$$
We'll do casework on the prime factorization of $s$, the denominator of $N$.
*
*Case $1$: The $q_i$ consist only of a $2$ and/or a $5$.
In this case, the decimal expansion of $r/s$ terminates because $N$ can be written as $M/\left(10^z\right)$ for some $M, z \in \mathbb{N}$.
*
*Case $2$: $q_i \neq 2, 5$ for all $i \in \mathbb{N}, \, 1 \leq i \leq n$
As noted above (below the proof of the lemma), every natural number that is not divisible by $2$ or $5$ divides nine times a repunit. Thus, in this case, $s$ divides nine times a repunit. There exist $x_0, y_0 \in \mathbb{N}$ such that $$x_0 \cdot s = 10^{y_0}-1$$
$$ s = \frac{10^{y_0}-1}{x_0}$$
Now we can rewrite $N$:
$$N = \frac{r}{s} = \frac{r \cdot x_0}{10^{y_0}-1}$$
Since $r \cdot x_0 \in \mathbb{N}$, this is a positive integer divided by nine times a repunit. We know that this gives a repeating decimal, with a period that divides $y_0$.
*
*Case $3$: The $q_i$ consist of a mix of primes equal to $2$ or $5$, and other primes.
In this case $N$ can be written as the product of two rational numbers, call them $N_1$ and $N_2$, that fit cases $1$ and $2$, respectively. Then there exist $M, z, r, x_0, y_0 \in \mathbb{N}$ such that $$ N = N_1 \cdot N_2 = \frac{M}{10^z} \cdot \frac{x_0 \cdot r}{10^{y_0}-1}$$
$$ N = \frac{1}{10^z} \cdot \frac{M \cdot x_0 \cdot r}{10^{y_0} -1}$$
The factor of $1/\left(10^z\right)$ only shifts the decimal representation by $z$ places. The other factor must be a repeating decimal with a period that divides $y_0$. Thus, the decimal expansion of $N$ eventually repeats.
Thus, every rational number has a decimal representation that either terminates or eventually repeats.
The contrapositive of the statement we just proved shows that the number you encountered is irrational. If a real number does not have a terminating or eventually repeating decimal expansion, then it is not rational.
Note that the converse is also true: every decimal number that either terminates or eventually repeats is a rational number. This is easier to prove.
The number you encountered was not rational, not a terminating decimal, nor an eventually repeating decimal.
Well-known examples of other real numbers that have predictable patterns but are not rational include Champernowne's number and Liouville's constant.
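The Lemma also lends itself to a quick computational spot check (an illustration, not part of the proof): starting from $1$ and repeatedly appending a digit $1$ modulo $p$ must eventually reach $0$ for any prime $p \neq 2, 5$.

```python
def smallest_repunit_length(p):
    """Number of 1's in the smallest repunit divisible by p (p != 2, 5)."""
    r, n = 1 % p, 1
    while r != 0:
        r = (10 * r + 1) % p  # append another digit 1, reduced mod p
        n += 1
    return n

for p in [3, 7, 11, 13, 17, 19, 23, 29, 31, 37]:
    print(p, smallest_repunit_length(p))
# e.g. 111 = 3 * 37, 111111 = 7 * 15873, 11 = 11 * 1
```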
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 1
}
|
What is the difference between an indefinite integral and an antiderivative? I thought these were different words for the same thing, but it seems I am wrong. Help.
|
An anti-derivative of a function $f$ is a function $F$ such that $F'=f$.
The indefinite integral $\int f(x)\,\mathrm dx$ of $f$ (that is, a function $F$ such that $\int_a^bf(x)\,\mathrm dx=F(b)-F(a)$ for all $a<b$) is an antiderivative if $f$ is continuous, but need not be an antiderivative in the general case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47",
"answer_count": 8,
"answer_id": 3
}
|
Zagier's proof of the prime number theorem. In Zagier's paper, "Newman's Short Proof of the Prime Number Theorem", (link below) his theorem ${\bf (V) }$ states that,
$$ \int_{1}^{\infty} \frac{\vartheta(x) - x}{x^2} dx \text{ is a convergent integral.} $$
Note: $\vartheta(x) = \sum_{p \le x} \log(p)$, where $p$ is a prime.
Zagier proceeds to say that, for $\Re(s) > 1$ we have
$$\sum_{p} \frac{\log p}{p^s} = \int_{1}^{\infty} \frac{ d \vartheta(x)}{x^s} = s \int_{1}^{\infty} \frac{ \vartheta(x)}{x^{s+1}} dx = s \int_{0}^{\infty} e^{-st} \vartheta(e^{t})dt. $$
My question is how the 2nd equality holds. Using integration by parts, it's easily verified that
$$ \int_{1}^{\infty} \frac{d \vartheta(x)}{x^s} = x^{-s} \vartheta(x) |_{x=1}^{x=\infty} + s \int_{1}^{\infty} \frac{\vartheta(x)}{x^{s+1}}dx. $$
As this theorem is used to show that $\vartheta(x) \sim x$, I do not understand how we can claim that for $\Re(s) > 1,$
$$\lim_{x \rightarrow \infty} x^{-s} \vartheta(x) = 0.$$
http://people.mpim-bonn.mpg.de/zagier/files/doi/10.2307/2975232/fulltext.pdf
|
In step $\mathbf{III}$, it was shown that $\vartheta(x) \leqslant C\cdot x$ for some constant $C$. That is enough to ensure
$$\lim_{x\to\infty} x^{-s}\vartheta(x) = 0$$
for $\Re s > 1$.
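For a concrete feel for the bound from step $\mathbf{III}$ (Zagier gets $\vartheta(x) \leqslant x\log 4$), one can sieve primes up to a modest limit and watch $\vartheta(x)/x$ stay bounded; the cutoff below is an arbitrary choice:

```python
from math import log

N = 100_000
is_p = bytearray([1]) * (N + 1)
is_p[0:2] = b"\x00\x00"
for i in range(2, int(N**0.5) + 1):
    if is_p[i]:
        is_p[i * i :: i] = bytearray(len(range(i * i, N + 1, i)))

theta, ratios = 0.0, []
for n in range(2, N + 1):
    if is_p[n]:
        theta += log(n)   # theta(n) = sum of log p over primes p <= n
    ratios.append(theta / n)
print(max(ratios))        # stays well below log(4) ~ 1.386
```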
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
RSA encryption/decryption scheme I've been having trouble with RSA encryption and decryption schemes (and mods as well) so I would appreciate some help on this question: Find an $e$ and $d$ pair with $e < 6$ for the integer $n = 91$ so that $n,e,d$ are the ingredients of an RSA encryption/decryption scheme. Use it to encrypt the number $9$. Then decrypt the result back to $9$. In the encryption and decryption it will be helpful to use the fact that $81\equiv -10 \pmod {91}$.
Thank you ever so much to whomever lends a hand. I really, really appreciate it.
|
We are given $N$ and that will give us the prime factors $p$ and $q$ as:
$$N = 91 = p \times q = 7 \times 13$$
We need the Euler Totient Function of the modulus, hence we get:
$$\varphi(N) = \varphi(91) = (p-1)(q-1) = 6 \times 12 = 72$$
Now, we choose an encryption exponent $1 \lt e \lt \varphi(N) = 72$, which must be coprime with $\varphi(N)$. We were told to pick an $e \lt 6$, so let's choose $e = 5$ and check that it works.
$$(5, 72) = 1 \rightarrow e = 5$$
To find the decryption exponent, we just find the modular inverse of the encryption exponent modulo the totient, hence:
$$d = e^{-1} \pmod {\varphi(N)} = 5^{-1} \pmod {72} = 29$$
Encryption:
$$\displaystyle c = m^{\large e} \pmod N \rightarrow 9^5 \pmod {91} = 81$$
Decryption:
$$\displaystyle m = c^{\large d} \pmod N \rightarrow 81^{29} \pmod {91} = 9$$
Where:
*
*$m$ = message to encrypt or plaintext
*$c$ = encrypted message or ciphertext
*$e$ = encryption exponent
*$d$ = decryption exponent
*$N$ = modulus which was formed from the two primes $p$ and $q$
*$\varphi(N)$ = Euler Totient function
Lastly, you might want to read the Wiki RSA.
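For what it's worth, the entire computation can be replayed in a few lines of Python (the three-argument `pow` with exponent $-1$ for the modular inverse needs Python 3.8+):

```python
p, q = 7, 13
N = p * q                # 91
phi = (p - 1) * (q - 1)  # 72
e = 5
d = pow(e, -1, phi)      # 29, since 5 * 29 = 145 = 2 * 72 + 1

m = 9
c = pow(m, e, N)         # encrypt: 9^5 mod 91
m_back = pow(c, d, N)    # decrypt: 81^29 mod 91
print(e, d, c, m_back)   # 5 29 81 9
```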
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find all primes $p$ such that $14$ is a quadratic residue modulo $p$. I want to find all primes $p$ for which $14$ is a quadratic residue modulo $p$. I referred to an example that was already posted for finding all odd primes $p$ for which $15$ is a quadratic residue modulo $p$, but I am getting stuck.
This is what I have done:
$$\left(\frac{14}{p}\right)=\left(\frac{7}{p}\right)\left(\frac{2}{p}\right)=(-1)^{(p-1)/2}\left(\frac{p}{7}\right)\left(\frac{p}{2}\right).
$$
There are now two cases. If $p\equiv 1\pmod{13}$, you have $(14|p)=(p|7)(p|2)$, want $(p|7)$ and $(p|2)$ to have the same sign. The squares modulo $2$ and $7$ are $p\equiv 1\pmod{2}$ and $p\equiv 1,2,4\pmod{7}$, and the nonsquares are $p\equiv 3,5,6 \pmod{7}$.
This is where I am stuck do I just check the possibilities $\pmod {182}$? I think I need to use the condition that $p \equiv 1\pmod{13}$. I am just very confused and would really appreciate some clarification.
|
Quadratic reciprocity modulo $2$ works slightly differently. In fact, it holds that
$$\left(\frac{2}{p}\right) = (-1)^{(p^2-1)/8}.$$
Thus, you have:
$$\left(\frac{14}{p}\right) = (-1)^{(p^2-1)/8} \cdot (-1)^{(p-1)/2} \cdot \left(\frac{p}{7}\right).$$
This means that you need to look at the form of $p$ modulo $8$ (first two terms) and modulo $7$ (last term). For $p \equiv 1, 3, 5, 7 \pmod{8}$, the product of the initial two terms is $+1, +1, -1, -1$ respectively. Thus, you need either of the two options to hold:
*
*$p \equiv 1,3 \pmod{8}$ and $p$ is a quadratic residue modulo $7$, i.e. $p \equiv 1,2,4 \pmod{7}$,
*$p \equiv 5,7 \pmod{8}$ and $p$ is a not quadratic residue modulo $7$ i.e. $p \equiv 3,5,6 \pmod{7}$.
In each situation, you have $6$ possible cases, each corresponding to a "good" residue of $p$ modulo $56$ (by Chinese remainder theorem). It is a little mundane to work these out, but each pair of congruences $p \equiv a \pmod{8}, p \equiv b \pmod{7}$ is equivalent to the single congruence $p \equiv c \pmod{56}$, where $c$ happens to be given by $c = 8 b - 7 a$.
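A brute-force cross-check (just an illustration): by Euler's criterion, $14$ is a quadratic residue mod an odd prime $p \nmid 14$ iff $14^{(p-1)/2} \equiv 1 \pmod p$, so we can compare the residues mod $56$ observed over many primes with the ones predicted by the two cases above:

```python
def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))

# Euler's criterion for each prime p > 7 up to an arbitrary bound:
good, bad = set(), set()
for p in range(11, 20000):
    if is_prime(p):
        (good if pow(14, (p - 1) // 2, p) == 1 else bad).add(p % 56)

# Residues predicted by the two cases, combined via c = 8b - 7a mod 56:
case1 = {(8 * b - 7 * a) % 56 for a in (1, 3) for b in (1, 2, 4)}
case2 = {(8 * b - 7 * a) % 56 for a in (5, 7) for b in (3, 5, 6)}
predicted = case1 | case2
print(good == predicted, good.isdisjoint(bad))  # True True
```

Both printed values being True confirms that whether $14$ is a quadratic residue depends only on $p \bmod 56$, exactly as the case analysis predicts.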
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Prove that Baire space $\omega^\omega$ is completely metrizable? When I tried to prove that Baire space $\omega^\omega$ is completely metrizable, I defined a metric $d$ on $\omega^\omega$ as follows: if $g,h \in \omega^\omega$, let $d(g,h)=1/(n+1)$ where $n$ is the smallest element of $\omega$ such that $g(n) \ne h(n)$ if such an $n$ exists, and $d(g,h)=0$ otherwise.
I am stuck trying to prove this metric is complete. Can you help me please?
Thanks in Advance.
|
The point here is that two functions are close iff they agree on an initial segment, that is, $d(f,g)\le 1/(n+1)$ iff $f(0)=g(0),f(1)=g(1),\dots,f(n-1)=g(n-1)$. Now, if $(f_n)_n$ is a Cauchy sequence, then, for each $n$, there is $N_n$ such that for all $m,k>N_n$ we have $d(f_m,f_k)\le1/(n+1)$. That is, all functions $f_m$ with $m>N_n$ agree on their first $n$ values.
This suggests naturally what the limit of the sequence $(f_n)_n$ should be: Define $f:\omega\to\omega$ simply by setting $f(k)$ to be the common value $f_m(k)$ for all $m$ large enough (say, for all $m>N_{k+1}$). To verify that $f$ is indeed the limit of the $f_n$, note that, by construction, for any $k$, $d(f_m,f)<1/(k+1)$ as long as $m>N_{k+1}$. But this is precisely what $\lim_m f_m=f$ means.
One of the many things that make $\omega^\omega$ with this metric interesting is that, as a topological space, this is just the irrationals. (Of course, the metric is not the Euclidean metric restricted to the irrationals, since the irrationals are clearly not a complete metric space under the standard metric.) A nice proof of this is at the very beginning of Arnie Miller's monograph on descriptive set theory and forcing.
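As a toy illustration of the "agreement on an initial segment" picture (a finite-horizon sketch only: functions are truncated to their first $K$ values, and both $K$ and the target function are arbitrary choices):

```python
K = 12

def d(g, h):
    """The metric, on functions truncated to their first K values."""
    for n in range(K):
        if g[n] != h[n]:
            return 1.0 / (n + 1)
    return 0.0

f = tuple(k * k % 7 for k in range(K))   # an arbitrary target function
# f_m agrees with f on the first m coordinates, then deviates:
fs = [tuple(f[k] if k < m else f[k] + 1 for k in range(K)) for m in range(K)]
print([round(d(fm, f), 3) for fm in fs])  # 1, 1/2, 1/3, ... -> 0
```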
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
linear algebra foundation of Riemann integrals Let $V$ be the vector space of real functions $f\colon [a,b]\to \mathbb R$ and let $X$ be the set of characteristic (indicatrix) functions of subintervals: $X=\{\mathbb 1_I\colon I\subset [a,b] $ interval $\}$. We define $T\colon X \to \mathbb R$ as $T(\mathbb 1_I) = |I|$ where $|I|$ is the length of the interval $I$. Notice that $X$ is not a set of independent vectors because the sum of adjacent intervals is again an interval, but in that case $T$ is defined to be additive.
So it is clear that $T$ can be extended as a linear map on the vector space generated by $X$ (which is the space of so called simple functions).
What are the abstract properties of $X$ and $T$ (in the setting of linear algebra) which can be applied to the above example to prove that $T$ is linear on $X$?
For example $X$ and $T$ have the following property: $x,y,x+y\in X \implies T(x+y) = T(x)+T(y)$ and $x,\lambda x \in X \implies \lambda=1$. Is this enough to prove that $T$ is linear i.e. that $x,y,\lambda x +\mu y \in X \implies T(\lambda x + \mu y) = \lambda T(x) + \mu T(y)$? And is this enough to prove that $T$ has a linear extension to the span of $X$?
|
I think the only thing that needs to be proved for $T$ to be extendable to a linear map on the vector space generated by $X$ is the following. Assume $f=\mathbb{1}_{[c,d)}$ and $g = \mathbb{1}_{[d,e)}$ with $f,g \in X$. Then $f+g$ is also in $X$ (I am assuming that half-open intervals are used here). What is then needed is that $T(f+g) = T(f) + T(g)$ for all such $f$ and $g$ in $X.$
This is because if the above is true (which it is!), then you can show that any function that can be written as a linear combination of elements of $X$ (i.e. a simple function) maps to a unique value by the extension of $T$, regardless of how it is written as a linear combination.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Categorization of PBE refinements into forward/backward looking? I have recently come across the term forward / backward looking refinement of a Perfect Bayesian Equilibrium. I am, however, unsure about the meaning of this term, and unable to find any information about this. Does anyone know the difference between the two? For instance, is a PBE refined with Intuitive Criterion a PBE with forward looking refinement (as the IC makes a statement about how beliefs should be influenced by off-equilibrium-path actions, and thus influences plays in further periods)?
If so, what would be an example for a backward looking refinement?
|
The usual examples are backward induction and forward induction. Somewhat surprisingly, backward induction is forward looking and forward induction is backwards looking.
In backward induction you start at the end of the game, which lies in the future, so it is forward looking. Forward induction, which has many different definitions, uses reasoning based on a player having arrived at a node because another player did something in the past for some reason, so it looks backward.
I would not think too much about forward and backward looking behavior; it is often not that illuminating. As a matter of fact, the intuitive criterion was initially derived by Cho and Kreps from (Kohlberg-Mertens) strategic stability, which is a refinement in terms of the normal form.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Why do we need a $\sigma$-finite measure to define an $L^{p}$ space? I am curious why we need a $\sigma$-finite measure to define an $L^{p}$ space. More generally, why do we need $\sigma$-finite measures instead of just finite measures?
|
You can define $L^p$ on any measure space you like. It consists of just those measurable functions for which $|f|^p$ is integrable, and then you quotient by the functions that are zero a.e.
But if you don't impose some hypothesis like $\sigma$-finiteness, many results about $L^p(\mathbb R)$ don't generalize very well to arbitrary measure spaces.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Holomorphic problem I have a function $f(z)$ holomorphic in $\mathbb{C}\setminus\mathbb{R}^-$. I have these information:
*
*$f(x+i\epsilon) = f(x-i\epsilon)$ on $\mathbb{R}^+$ (the $\epsilon$ is intended as a shorthand for a limit);
*$f(x+i\epsilon) = - f(x-i\epsilon)$ on $\mathbb{R}^-$;
*$f(z)=\sqrt{z} + O\left(\frac{1}{\sqrt{z}}\right)$ for $|z|\rightarrow\infty$, $z\in\mathbb{C}\setminus\mathbb{R}^-$.
I'm asked to show that $f(z)=\sqrt{z}$.
I tried to write $f(z)=\sqrt{z}+\frac{h(z)}{\sqrt{z}}$. In such a way $h(z)$ is continuous on the whole $\mathbb{C}$ and holomorphic on $\mathbb{C}\setminus\mathbb{R}^-$... If it were holomorphic on the whole $\mathbb{C}$, it would be constant and so $h(z)=\lim_{z\rightarrow 0}h(z)=0$... but I cannot see a way to prove that $f(z)$ is indeed holomorphic even on $\mathbb{R}^-$. Can you help me?
|
Let $\sqrt{z}$ denote the square root with branch cut on the negative real axis. Is there a reason $f(z)=\sqrt{z}(1+\frac{1}{z})$ does not satisfy your conditions?
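A quick numerical check of this counterexample (Python's `cmath.sqrt` uses exactly this principal branch, with the cut along the negative reals):

```python
import cmath

def f(z):
    return cmath.sqrt(z) * (1 + 1 / z)

eps = 1e-9
for x in (0.5, 2.0, 7.0):      # on R^+ : f(x + i eps) =  f(x - i eps)
    assert abs(f(x + 1j * eps) - f(x - 1j * eps)) < 1e-6
for x in (-0.5, -2.0, -7.0):   # on R^- : f(x + i eps) = -f(x - i eps)
    assert abs(f(x + 1j * eps) + f(x - 1j * eps)) < 1e-6
print("boundary conditions check out")
```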
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/586980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How do you prove set with modulo? Given any prime $p$. Prove that $(p-1)! \equiv -1 \pmod p$.
How to prove this?
|
This is known as (one direction of) Wilson's theorem: the full theorem is an "if and only if", while this is only the "if" direction.
https://en.wikipedia.org/wiki/Wilson%27s_theorem
The idea is that $(p-1)!$ is the product of one element from each nonzero residue class $\bmod p$; moreover, since all positive integers less than $p$ are relatively prime to $p$, each of them has a unique inverse $\bmod p$.
Except for $p-1$ and $1$, each of them has an inverse different from itself, so those factors cancel out in pairs, and what remains is $1\cdot(p-1)\equiv -1 \bmod p$, as desired.
Note: to prove that only $1$ and $-1$ are inverses of themselves, observe that
$a\equiv a^{-1}\rightarrow a^2\equiv 1 \bmod p \rightarrow p\mid a^2-1\rightarrow p\mid(a+1)(a-1)\rightarrow p\mid a+1 $ or $ p\mid a-1\rightarrow a\equiv1$ or $a\equiv -1 \bmod p$
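Both the congruence and the pairing argument are easy to spot-check for small primes:

```python
from math import factorial

for p in [2, 3, 5, 7, 11, 13, 17, 19, 23]:
    # Wilson: (p-1)! leaves remainder p-1, i.e. it is -1 mod p
    assert factorial(p - 1) % p == p - 1
    # only 1 and p-1 are their own inverses mod p
    self_inverse = {a for a in range(1, p) if (a * a) % p == 1}
    assert self_inverse == {1, p - 1}
print("checks pass for all listed primes")
```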
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
does this series converge? $\sum_{n=1}^\infty{\left( \sqrt[3]{n+1} - \sqrt[3]{n-1} \right)^\alpha} $
Show that the following series converges or diverges:
$\sum_{n=1}^\infty{\left( \sqrt[3]{n+1} - \sqrt[3]{n-1} \right)^\alpha} $
All the tests I tried failed (root test, ratio test, direct comparison).
Please don't use integrals, as they are outside my scope right now.
|
The ratio test is inconclusive, but Raabe's test shows that the series converges when $\alpha>\frac{3}{2}$. Equivalently, since $\sqrt[3]{n+1}-\sqrt[3]{n-1}=\frac{2}{3}n^{-2/3}\bigl(1+o(1)\bigr)$, limit comparison with $\sum n^{-2\alpha/3}$ shows the series converges if and only if $2\alpha/3>1$, that is, if and only if $\alpha>\frac{3}{2}$.
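Numerically, the driving asymptotic is easy to see: $\sqrt[3]{n+1}-\sqrt[3]{n-1}\approx \frac{2}{3}\,n^{-2/3}$, so the terms behave like those of a $p$-series with $p = 2\alpha/3$:

```python
# n**(2/3) * (cbrt(n+1) - cbrt(n-1)) should approach 2/3 as n grows
for n in [10**3, 10**4, 10**5, 10**6]:
    diff = (n + 1) ** (1 / 3) - (n - 1) ** (1 / 3)
    print(n, diff * n ** (2 / 3))  # tends to 2/3 ~ 0.6667
```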
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Proving a limit of a sequence I have to prove that for a > 1, $$\lim_{ n\rightarrow \infty }{ { \left (\frac { 2^{n^a} }{ n! } \right ) } } = \infty$$
I've tried to apply L'Hôpital's rule and d'Alembert's ratio test, but without any success...
Any help would be greatly appreciated!
|
Using that $(n-k)/n<1$ for all $k=1,\ldots,n-1$, we have (for $n$ big enough such that $n^{a-1}\log 2\geq \log n$)
$$
\frac{2^{n^a}}{n!}\geq\frac{n^n}{n!}=\frac1{1 \;\frac{n-1}n\;\cdots\;\frac1n}\geq\frac1{\frac1n}=n\to\infty
$$
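A numerical illustration for, say, $a = 1.5$ (comparing logarithms to avoid overflow; the range of $n$ is an arbitrary choice):

```python
from math import lgamma, log

a = 1.5
# log of 2**(n**a) / n!  =  n**a * log 2 - log(n!)
vals = [n**a * log(2) - lgamma(n + 1) for n in range(10, 200)]
print(vals[0], vals[-1])  # increases from about 6.8 to past 1000
assert all(y > x for x, y in zip(vals, vals[1:]))
```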
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Is this formula for $\zeta(2n+1)$ correct or am I making a mistake somewhere? I am calculating $\zeta(3)$ from this formula:
$$\zeta(2n+1)=\frac{1}{(2n)!}\int_0^{\infty} \frac{t^{2n}}{e^t -1}dt$$
From Grapher.app, I get $\int_0^{\infty} \frac{x^{2}}{e^x -1}dx = .4318$ approximately which, when multiplied by $\frac{1}{2}$, does not give me the known value of $\zeta(3)$.
In Grapher.app I entered the integrand with variable $x$ instead of $t$. Does that make a difference here?
Is the formula correct or am I making a mistake somewhere?
|
Computing the integral with Wolfram Alpha on a truncated interval (I chose $[0, 10000]$) gives $2.40411$, which is just about the correct value of $2\,\zeta(3)$; multiplying by $\frac12$ gives $\zeta(3)\approx 1.20206$, as expected. It would appear to be an issue in the application you're using; naming the variable $x$ instead of $t$ makes no difference.
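The check can also be done without external tools, with a plain midpoint rule (the cutoff $T = 60$ is safe because the integrand decays like $t^2 e^{-t}$):

```python
from math import exp

h, T = 0.001, 60.0
grid = ((k + 0.5) * h for k in range(int(T / h)))
integral = h * sum(t**2 / (exp(t) - 1) for t in grid)
print(integral, integral / 2)  # about 2.40411 and 1.20206
```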
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How did they get this answer for the critical value? I know how to find a critical value, but I am not sure how to do it now that they have added "when testing the claim that p = 1/2"....
How did they get the answer for the critical value?
|
It would be good if we could see the entire question, but if it is the sign test, then it makes sense to use $p = 1/2$ because each response is either a "yes" (assigned a plus) or a "no" (assigned a minus, hence the name of the test). But again, more information about the question would be useful.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What is the purpose of the first test in an inductive proof? Learning about proof by induction, I take it that the first step is always something like "test if the proposition holds for $n = \textrm{[the minimum value]}$"
Like this:
Prove that $1+2+3+...+n = \frac{n(n + 1)}{2}$ for all $n \ge 1$.
Test it for $n = 1$:
$$n = \frac{n(n + 1)}{2} \implies 1 = \frac{2}{2}\textrm{, which holds.}$$
* The rest of the proof goes here *
So, I do it all the time (like a standard). But I never really thought about why. Why do I do such a test? I can see that if the test does not hold, it cannot be proven by induction, I guess. But is there another reason we do this?
|
Imagine a pond with an infinite linear progression of lily pads. You have a frog who, if he hops on one pad, he is guaranteed to hop on the next one. If he hops on the first pad, he'll visit them all. But if he never makes the first lilypad, all bets are off.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 11,
"answer_id": 1
}
|
Can an algorithm be part of a proof? I am an undergraduate student. I have read several papers in graph theory and found something that seems strange to me: an algorithm is part of a proof. In the paper, except for the last two sentences, all the sentences describe a labeling algorithm.
Can an algorithm be part of a proof? I do not understand why it can be part of a proof. I asked my supervisor but he did not explain it.
LEMMA$\,\,\,\bf 1.$ If $B(G)=b$, then $B(G+e)\le 2b$. Proof:
Let $f$ be an optimal numbering of $G$, and let $V(G)=\{v_1,\ldots,v_n\}$, numbered so that $f(v_i)=i$. Let $v_lv_m$ be the added edge. We define a new numbering $f'$ such that $|f'(x)-f'(y)|\le 2|f(x)-f(y)|$, and also $|f'(v_l)-f'(v_m)|=1$. Let $r=\lfloor(l+m)/2\rfloor$, and set $f'(v_r)=1$ and $f'(v_{r+1})=2$. For every other $v_i$ such that $|i-r|\le\min\{r-1,n-r\}$, let $f'(v_i)=f'(v_{i+1})+2$ if $i<r$ and $f'(v_i)=f'(v_{i-1})+2$ if $i>r$. This defines $f'$ for all vertices except a set of the form $v_1,\ldots,v_k$ or $v_{n+1-k},\ldots,v_n$, depending on the sign of $r-\lfloor(n+1)/2\rfloor$. In the first case, we assign $f'(v_i)=n+1-i$ for $i\le k$; in the second, we assign $f'(v_i)=i$ for $i>n-k$. The renumbering $f'$ numbers the vertices outward from $v_r$ to achieve $|f'(x)-f'(y)|\le 2|f(x)-f(y)|$. Since we begin midway between $v_l$ and $v_m$, we also have $|f'(v_l)-f'(v_m)|=1$.
|
If you ask me, I would say that everything can be part of a proof as long as you bring a convincing, reasonable argument. When I listen to my topologist colleagues, they make a few drawings (generally sketches of knots) on the blackboard and they claim it's a proof. Thus, a drawing or an algorithm: there is no big difference. What matters most at the end of the day is that you are sure of what you assert, and leave nothing in the dark.
Have a nice day :-)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 6,
"answer_id": 5
}
|
Equality of discriminants of integral bases (statement in Ireland and Rosen, A Classical Introduction to Modern Number Theory) I'm doing independent study and need assistance.
This is taken from Ireland and Rosen's A Classical Introduction to Modern Number Theory,
Chapter 12.
Let $F/\mathbb{Q}$ be an algebraic number field, $D$ the ring of integers in $F$, and $A$ an ideal of $D$. In a discussion after Proposition 12.2.2 (if the discriminant of an ideal is minimal then that ideal is spanned by an integral basis), the authors state that it follows from Proposition 12.1.2 (if you have two bases, the discriminant of one is equal to a determinant times the other) that the discriminant of any integral basis of an ideal of $D$ is constant. I'm trying to prove that this follows.
thanks,
|
Just to put this in an answer.
Let $\{\omega_1,\ldots,\omega_n\}$ and $\{\omega_1',\ldots,\omega_n'\}$ be two integral bases for $D$. By definition, this means that $D$ is a free $\mathbb{Z}$-module, and that these are two bases for $D$. Thus, by definition there must exist some $M\in\text{GL}_n(\mathbb{Z})$ such that $M(\omega_i)=\omega'_i$. Note then that
$$M[\sigma_i(\omega_j)]=[\sigma_i(\omega_j')]$$
and so
$$\begin{aligned}\text{disc}(\omega_1',\ldots,\omega_n') &=\det([\sigma_i(\omega_j')])^2\\ &=\det(M[\sigma_i(\omega_j)])^2\\ &=\det(M)^2\det([\sigma_i(\omega_j)])^2\\ &= \det(M)^2\text{disc}(\omega_1,\ldots,\omega_n)\end{aligned}$$
But note that, since $M\in\text{GL}_n(\mathbb{Z})$, we have $\det(M)\in\mathbb{Z}^\times=\{\pm 1\}$. Thus, $\det(M)^2=1$, and so $\text{disc}(\omega_1,\ldots,\omega_n)=\text{disc}(\omega_1',\ldots,\omega_n')$ as desired.
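A tiny concrete instance in $\mathbb{Q}(i)$, where the two embeddings are the identity and complex conjugation (the particular bases below are my own illustration, not from the book): the change of basis from $\{1, i\}$ to $\{1, 1+i\}$ is unimodular, and both discriminants come out to $-4$.

```python
def disc(w1, w2):
    """det([[w1, w2], [conj(w1), conj(w2)]])**2 for the field Q(i)."""
    det = w1 * w2.conjugate() - w2 * w1.conjugate()
    return (det * det).real

print(disc(1 + 0j, 1j), disc(1 + 0j, 1 + 1j))  # -4.0 -4.0
```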
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Checking on some convergent series I need some verification on the following 2 problems I attempted:
I have to show that the following series
is convergent: $$1-\frac{1}{3 \cdot 4}+\frac{1}{ 5 \cdot 4^2 }-\frac{1}{7 \cdot 4^3}+ \ldots$$ .
My Attempt: I notice that the general term is given by $$\,\,a_n=(-1)^{n}{1 \over {(2n+1)4^n}} \,\,\text{by ignoring the first term of the given series.}$$ Using the fact that An absolutely convergent series is convergent, $$\sum_{1}^{\infty}|a_n|=\sum_{1}^{\infty} {1 \over {(2n+1)4^n}}\le \sum_{1}^{\infty} {1 \over 4^n}=\sum_{1}^{\infty}{1 \over {2^{2n}}}$$ which is clearly convergent by comparing it with the p-series with $p >1$.
I have to show that the following series
is convergent:$$1-\frac{1}{2!}+\frac{1}{4!}-\frac{1}{6!}+ \ldots $$
My Attempt: $$1-\frac{1}{2!}+\frac{1}{4!}-\frac{1}{6!}+ \ldots \le 1+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+ \frac{1}{4!}+\frac{1}{5!}+\frac{1}{6!}+\ldots$$ Now, using the fact that $n! >n^2$ for $n \ge 4$ and the fact that omitting the first few terms of a series does not affect its convergence, we see that $$\frac{1}{4!}-\frac{1}{6!}+\frac{1}{8!}- \ldots \le \frac{1}{4!}+\frac{1}{5!}+\frac{1}{6!}+\frac{1}{7!}+\frac{1}{8!}+\ldots =\sum_{4}^{\infty}{1 \over n!} <\sum_{4}^{\infty}{1 \over n^2}$$ and it is clearly convergent by comparing it with the p-series with $p >1$.
Now,I am stuck on the third one.
I have to show that the following series
is convergent:$$\frac{\log 2}{2^2}-\frac{\log 3}{3^2}+\frac{\log 4}{4^2}- \ldots $$
I see that $$\frac{\log 2}{2^2}-\frac{\log 3}{3^2}+\frac{\log 4}{4^2}- \ldots \le \sum_{2}^{\infty} {{\log n} \over {n^2}}= ?? $$
Thanks and regards to all.
|
Hint: For all sufficiently large $n$ (in fact, $n \ge 1$ suffices for this), we have $\ln{n} \le \sqrt{n}$; thus
$$\sum\limits_{n = 2}^{\infty} \frac{\ln n}{n^2} \le \sum\limits_{n = 2}^{\infty} \frac{1}{n^{3/2}}$$
which is a $p$-series.
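Both the termwise inequality and the resulting comparison are easy to confirm numerically:

```python
from math import log, sqrt

# termwise: log(n) <= sqrt(n), so log(n)/n**2 <= n**(-3/2)
assert all(log(n) <= sqrt(n) for n in range(1, 100_000))

s = sum(log(n) / n**2 for n in range(2, 100_000))
bound = sum(n ** -1.5 for n in range(2, 100_000))
print(s, bound)  # roughly 0.937 < 1.61
```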
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Integral through Fourier Transform and Parseval's Identity $$
\int_{-\infty}^{\infty}{\rm sinc}^{4}\left(\pi t\right)\,{\rm d}t\,.
$$
Can you help me evaluate this integral with the help of the Fourier transform and Parseval's identity? I could not see how to apply them. Thank you.
|
There are two correct answers to this question, depending on how you understand sinc. My guess is that your convention is $\operatorname{sinc}x = \frac{\sin x}{x}$.
I don't know your FT convention, but I will use $\hat f(\xi)=\int f(x)e^{-2\pi i \xi x}\,dx$. Then
$$\hat \chi_{[-a,a]}(\xi)= \int_{-a}^a e^{-2\pi i \xi x}\,dx =
\frac{e^{2\pi i a\xi}-e^{-2 \pi i a\xi}}{2\pi i \xi} = 2a\operatorname{sinc}(2 \pi a \xi)$$
To square the righthand side, convolve $\chi_{[-a,a]}$ with itself. This convolution is $f(x) = (2a-|x|)^+$. Thus, $\hat f(\xi) = 4a^2 \operatorname{sinc}^2(2 \pi a \xi)$. Since $$
\int_{\mathbb R} f^2
= 2\int_0^{2a} (2a-x)^2\,dx = \frac{16 a^3}{3}
$$
Parseval's identity implies
$$
\int_{\mathbb R} \operatorname{sinc}^4(2 \pi a \xi)\,d\xi
= \frac{1}{16a^4}\cdot \frac{16a^3}{3} = \frac{1}{3a}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/587866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Boundedness of functional In the setting of $2\pi$-periodic $C^1$ functions (whose Fourier series converge to themselves), and given a linear functional $D:C^1_{\text{per}}\to\mathbb R$ satisfying $\sup_{n}|D(e^{inx})|<\infty$ I would like to show that $D$ is continuous (or equivalently, bounded).
Attempt
The supremum condition seems like it should imply boundedness, but I'm not managing to formalise that and keep running into a circular argument. For example, arguing by contradiction, assume a sequence $\{f_n\}$ with $\|f_n\|=1$ such that $D(f_n) > n^2$; then $\frac {f_n}n$ has norm $\frac 1n$ and $D(\frac {f_n}n) > n$, which is a contradiction if $D$ is continuous, but that's what we're trying to prove...
Cheers.
Edit
I think in $L^2$ the hypothesis is wrong since we can construct a sequence
\begin{align}
f_1 &= e_1 \\
f_2 &= \frac 1 {\sqrt 2}e_1 + \frac 1{\sqrt 2}e_2 \\
&\ \vdots \\
f_n &= \sum_{k=1}^n \frac 1{\sqrt n}e_k
\end{align}
So $\|f_n\|_2^2 = 1$ but if $D(e_k)=M$ (which satisfies our bound condition) we have $D(f_n) = \frac {nM}{\sqrt n}\to \infty$
In light of the failure of the $L^2$ norm, I suppose the $C^1$ norm $\|f\|_\infty+\|f'\|_\infty$ is the focus here.
|
This is not true. The functions $\{e^{inx}\}$ do not span $C^1_{\text{per}}$; indeed, no countable set can. So using the axiom of choice, one can show the existence of a linear functional that vanishes on all the $e^{inx}$ but is not continuous.
You can't tell whether a linear functional is continuous by looking at a proper subspace, even if that subspace is dense.
You can, however, show that there exists a unique continuous linear functional $D_1$ such that $D_1(e^{inx}) = D(e^{inx})$ for every $n$. Is that what you want?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$T(v_1), \ldots,T(v_k)$ are independent if and only if $\operatorname{span}(v_1,\ldots,v_k)\cap \ker(T)=\{0\}$ I need help with this:
If $T:V\rightarrow W$ is a linear transformation and $\{v_1,v_2,\ldots,v_k\}$ is a linearly independent set in $V$, prove that $T(v_1), T(v_2),\ldots,T(v_k)$ are independent in $W$ if and only if
$$\operatorname{span}(v_1,\ldots,v_k)\cap \ker(T)=\{0\}.$$
I know how to prove that a independent set transforms to a independent set, but how do I show that this intersection equals the zero vector?
|
Let $v=c_1v_1+\cdots+c_nv_n\in\mathrm{ker}~T$, noting that also $v\in\mathrm{span}~\{v_1,\ldots,v_n\}$, then $$T(v)=T(c_1v_1+\cdots+c_nv_n)=c_1T(v_1)+\cdots+c_nT(v_n)=0$$
implies $c_1=\cdots=c_n=0$ since $T(v_1),\ldots,T(v_n)$ are linearly independent. It follows that $\mathrm{span}~\{v_1,\ldots,v_n\}\bigcap\mathrm{ker}~T=\{0\}$. The converse is read off from the same equation: if the span meets the kernel only in $0$, then $c_1T(v_1)+\cdots+c_nT(v_n)=0$ gives $T(v)=0$, so $v\in\mathrm{ker}~T$ and hence $v=0$, and linear independence of $v_1,\ldots,v_n$ forces $c_1=\cdots=c_n=0$.
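A small numerical illustration of the statement (the matrix and vectors below are my own hypothetical example, chosen so that $T$ has a nontrivial kernel):

```python
import numpy as np

# T : R^3 -> R^2, projection onto the first two coordinates; ker T = span{(0,0,1)}.
T = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# span{v1, v2} meets ker T only in 0, so the images are independent (rank 2).
v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
assert np.linalg.matrix_rank(np.column_stack([T @ v1, T @ v2])) == 2

# w1, w2 are independent, but w2 - w1 = (0,0,1) lies in ker T, so span{w1, w2}
# meets the kernel nontrivially and the images become dependent (rank 1).
w1, w2 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 1.0])
assert np.linalg.matrix_rank(np.column_stack([T @ w1, T @ w2])) == 1
```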
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to plot circle with Maxima? I cannot get the equation plotted
Expression:
f1: (x^2) + (y^2) = 9;
I try this command:
wxplot2d(f1, [x, -5, 5], [y, -5, 5]);
And it gives:
plot2d: expression evaluates to non-numeric value everywhere in plotting range.
plot2d: nothing to plot.
What is the correct way to plot such expressions?
|
There are 4 methods to draw circle with radius = 3 and centered at the origin :
1. load(draw); draw2d(polar(3,theta,0,2*%pi));
2. load(draw); draw2d(ellipse(0, 0, 3, 3, 0, 360));
3. plot2d ([parametric, 3*cos(t), 3*sin(t), [t,-%pi,%pi],[nticks,80]], [x, -4, 4])$
4. load(implicit_plot);
   z:x+y*%i;
   implicit_plot (abs(z) = 3, [x, -4, 4], [y, -4, 4]);
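For comparison (my addition, not part of the original answer), the parametric recipe in method 3 ports directly to Python; the check below confirms every sampled point lies on the circle:

```python
import numpy as np

# Sample the parametric curve (3 cos t, 3 sin t), the same curve plot2d draws.
t = np.linspace(-np.pi, np.pi, 400)
x, y = 3 * np.cos(t), 3 * np.sin(t)
assert np.max(np.abs(x ** 2 + y ** 2 - 9)) < 1e-12  # all samples satisfy x^2 + y^2 = 9
```

Passing `x` and `y` to matplotlib's `plot` (with equal aspect ratio) would render the same circle.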
Your function is implicit and multivalued
HTH
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Study convergence of the series $\sum_{n=1}^{\infty}(-1)^{n-1}\frac{\sin{(\ln{n})}}{n^a}$ Question:
Study convergence of the series
$$\sum_{n=1}^{\infty}(-1)^{n-1}\dfrac{\sin{(\ln{n})}}{n^a},a\in R$$
My try (thanks to DonAntonio's help):
(1):$a>1$,
since
$$|\sin{(\ln{n})}|\le1$$
so
$$\left|\dfrac{\sin{(\ln{n})}}{n^a}\right|\le \dfrac{1}{n^a}$$
so by comparison with the convergent $p$-series $\sum_{n=1}^{\infty}\dfrac{1}{n^a}$, $a>1$, the series converges absolutely.
(2): $a\le 0$
In this case the series diverges, because its general term does not tend to $0$: $n^{-a}$ does not tend to $0$ when $a\le 0$, while $|\sin(\ln n)|\ge \tfrac12$ for infinitely many $n$.
(3): $0< a\le 1$
I can't solve this case.
Thank you for your help.
|
Use absolute value
$$\left|(-1)^{n-1}\frac{\sin\log n}{n^a}\right|\le\frac1{n^a}$$
and thus your series converges absolutely for $\;a>1\;$ .
It's clear that for $\;a\le 0\;$ the series diverges (why?), so we're left only with the case $\;0<a\le1\;$...and perhaps later I'll think of something.
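For the open case $0<a\le1$, a numerical experiment is at least suggestive (it is not a proof, and the cutoffs below are my own choices): the partial sums appear to settle down.

```python
import numpy as np

def partial_sum(a, N):
    n = np.arange(1, N + 1)
    signs = np.where(n % 2 == 1, 1.0, -1.0)   # (-1)^(n-1)
    return float(np.sum(signs * np.sin(np.log(n)) / n ** a))

for a in (0.25, 0.5, 1.0):
    print(a, partial_sum(a, 100_000), partial_sum(a, 200_000))
```

For $a=1$ the two partial sums agree to several decimals, consistent with (but not proving) convergence.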
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 1
}
|
Find Laurent series Let $$f(z):=\frac{e^{\frac{1}{z}}}{z^2+1}$$ and let $$\sum_{k}a_kz^k$$ with $k \in \mathbb Z$ be the Laurent series of $f(z)$ for $0<|z|<1$. I have to find a formula for $a_k$. I've tried a lot, but I'm stuck. Can somebody help me? I want to do it by rearranging absolutely convergent series.
|
We have $$e^\frac{1}{z}=1+\frac{1}{z}+\frac{1}{2!} \frac{1}{z^2}+ \dots$$ as well as the geometric expansion$$\frac{1}{z^2+1}=1-z^2+z^4 -+ \dots $$
valid in the annulus $0<|z|<1$. Multiply the two and collect coefficients from terms of the same degree.
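Collecting coefficients gives, for every integer $k$, $a_k=\sum_{j\ge 0,\ 2j\ge k}\frac{(-1)^j}{(2j-k)!}$. As a sanity check (my addition), this formula can be compared numerically against the contour-integral definition of the Laurent coefficients, $a_k=\frac{1}{2\pi i}\oint_{|z|=1/2} f(z)z^{-k-1}\,dz$:

```python
import numpy as np
from math import factorial

# a_k from multiplying the two series (truncated; the terms die off factorially).
def a(k, jmax=40):
    return sum((-1) ** j / factorial(2 * j - k)
               for j in range(jmax) if 2 * j - k >= 0)

# a_k from the contour integral over |z| = 1/2, trapezoid rule on the circle.
N, r = 4096, 0.5
z = r * np.exp(2j * np.pi * np.arange(N) / N)
f = np.exp(1 / z) / (z ** 2 + 1)
for k in range(-6, 7):
    assert abs(np.mean(f * z ** (-k)) - a(k)) < 1e-8
```

For instance $a_0=\sum_{j\ge0}(-1)^j/(2j)!=\cos 1$.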
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
NormalDistribution: Problem Question: Cans of regular Coke are labeled as containing $12 \mbox{ oz}$.
Statistics students weighed the contents of 6 randomly chosen cans, and found the mean weight to be $12.11$.
Assume that cans of Coke are filled so that the actual amounts are normally distributed with a mean of $12.00 \mbox{ oz}$ and a standard deviation of $0.13 \mbox{ oz}$. Find the probability that a sample of 6 cans will have a mean amount of at least $12.11 \mbox{ oz}$.
I think the answer should be: $z=(\bar x-\mu)/\sigma = (12.11-12)/0.13 = 0.846$, which gives a probability of $0.7995$ from the table. What did I do wrong? Can someone help me? Thank you!
|
You need to find $z$ that corresponds to the sample mean from this exercise. So first a comment on your notation - it should be:
$$z=\frac{\bar{x}-\mu_{\bar{x}}}{\sigma_{\bar{x}}}$$
Of course $\mu_{\bar{x}}=\mu$. But $\sigma_{\bar{x}}\neq\sigma$, as you have computed with. Look up the standard deviation of the sample mean. (Since this is homework I won't give everything away.)
The $z$ that you have computed corresponds to choosing a single can that would weigh $12.11$. While that has a certain likelihood associated to it, it would be more rare to find the mean of 6 cans to be that far from $12.00$.
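The point about the standard deviation of the sample mean can be illustrated by simulation without giving the final answer away (the simulation is my addition):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 12.00, 0.13, 6

# Draw 200,000 samples of 6 cans each and look at the spread of the sample means.
means = rng.normal(mu, sigma, size=(200_000, n)).mean(axis=1)
print(means.std())  # close to 0.13/sqrt(6), about 0.0531, much smaller than 0.13
```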
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Law of large numbers for Brownian Motion Let $\{B_t: 0 \leq t < \infty\}$ be standard Brownian motion and let $T_n$ be an increasing sequence of finite stopping times converging to infinity a.s. Does the following property hold?
$$\lim_{n \to \infty}\frac{B_{T_n}}{T_n} = 0$$ a.s.
|
This is an almost sure property hence the result you are asking to check is equivalent to the following.
Let $f:\mathbb R_+\to\mathbb R$ denote a function such that $\lim\limits_{t\to+\infty}f(t)=0$ and $(t_n)$ a sequence of nonnegative real numbers such that $\lim\limits_{n\to\infty}t_n=+\infty$. Then $\lim\limits_{n\to\infty}f(t_n)=0$.
Surely you can prove this. Then fix some $\omega$ in $\Omega$, consider the function $f$ defined by $f(t)=B_t(\omega)/t$ and the real numbers $t_n=T_n(\omega)$, and apply the deterministic result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $\{u,v\}$ is an orthonormal set in an inner product space, then find $\lVert 6u-8v\rVert$ If $\{u,v\}$ is an orthonormal set in an inner product space, then find $\|6u-8v\|$.
That's pretty much it. I'm trying to study for a quiz and can't figure it out. There are no examples in my book to help me out, just a question. I know that if $\{u,v\}$ are orthonormal then $\|u\|=1$ and $\|v\|=1$, but how does that translate into $\|6u-8v\|$? Thanks in advance to anyone that can help!
|
Compute the square of the norm and use the definition of the inner product as being bilinear:
$\|6u-8v\|^2 = \langle 6u-8v, 6u-8v \rangle = 36\langle u,u \rangle - 48\langle u, v \rangle - 48 \langle v,u \rangle + 64 \langle v,v \rangle$.
Now use that $u$ and $v$ are orthonormal and the fact that $\langle w,w \rangle$ is the squared length of the vector $w$. Take the square root to obtain the answer.
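A concrete check (my addition) with the simplest orthonormal pair in $\mathbb R^2$; any orthonormal $u,v$ gives the same value, since only the inner products enter:

```python
import numpy as np

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # an orthonormal pair
print(np.linalg.norm(6 * u - 8 * v))               # sqrt(36 + 64) = 10.0
```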
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Relationship between primitive roots and quadratic residues I understand that if $g$ is a primitive root modulo an odd prime $p$, then Euler's Criterion tells us that $g$ cannot be a quadratic residue.
My question is, does this result generalize to prime powers? That is, if $g$ is a primitive root modulo $p^m$ for an odd prime $p$, must $g$ be a quadratic non-residue?
I spent a little while trying to come up with a proof but was unable to achieve anything useful so any proofs (or counterexamples) are welcome.
|
The order of a quadratic residue modulo $n$ divides $\varphi(n)/2$. A primitive root has order $\varphi(n)$. Hence a primitive root is always a quadratic nonresidue.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Show two groups are isomorphic I need to show two groups are isomorphic. Since I know they are of the same order, would finding an element that generates the other elements, in both groups, suffice to show that they are isomorphic?
|
Yes. If you can find an element $x$ which generates the finite group $G$ then it is cyclic. If $G=\langle x\rangle$ and $G'=\langle x'\rangle$ are both cyclic, and $|G|=|G'|$ then $G$ is isomorphic to $G'$ by an isomorphism $\phi\colon G\to G'$ which is defined by $\phi(x^k)=x'^k$. Show that this map is an isomorphism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $a=a'$ mod $n$ and $b=b'$ mod $n$ does $ab=a'b'$ mod $n$? So far this is my working out:
$n|a-a'$ and $n|b-b'$
So $n|(a-a')(b-b')$
Expanding: $n|ab-a'b-ab'+a'b'$
I'm not sure what to do next. I need to show that $n|ab-a'b'$
|
We can write $a' = a + n k$ for some $k$, and likewise $b' = b + n m$; then
$$a'b' = (a + nk)(b + nm) = ab + n(\text{stuff}) \equiv ab \pmod{n}$$
upon distributing the multiplication.
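The congruence is also easy to check exhaustively for small cases (a quick verification, my addition):

```python
# a' = a + n*k and b' = b + n*m represent the same residue classes as a and b;
# check that the products agree mod n for arbitrary representatives.
for n in range(2, 15):
    for a in range(n):
        for b in range(n):
            ap, bp = a + 3 * n, b - 2 * n   # representatives with k = 3, m = -2
            assert (ap * bp) % n == (a * b) % n
```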
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/588925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Showing there is some point interior to the unit disk, where V=(0,0) using covering maps Let $B$ be the closed unit disk in $\mathbb{R}^2$ and suppose that $V = (p(x, y), q(x, y))$ is a vector field ($p,q$ are continuous functions) defined on $B$. The boundary of $B$ is the unit circle $S^1$. Show that if at every point of $S^1$, $V$ is tangent to $S^1$ and not equal to $(0,0)$, then there is some point in the interior of $B$ where $V = (0,0)$.
I tried to prove this by contradiction. Suppose that the hypothesis is not true. Then at every $(x,y) \in B$, we can divide $V$ by its length. Taking at each point a vector perpendicular to this unit vector, we can define a continuous map from $B$ to $S^1$. This is a covering map. I'm not quite sure where to go from here...
|
The map $V/|V|$ had better not be a covering map because the disk is two-dimensional and the circle is one-dimensional.
Here's a guide to a solution using a little bit of homotopy theory. First, observe that the map $V/|V|$ restricted to the boundary circle of $B$ is a degree-one self-map of the circle. Now answer this question: If a map $f:S^1\to S^1$ extends to an entire map $B\to S^1$, what does that imply about the degree of $f$? Finish the argument by showing that if $V$ were everywhere nonzero, you would reach a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/589009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
The contraction mapping theorem Let $f : \mathbb{R} \to \mathbb{R} : f(x) = 1 + x + e^{-x}$ and let $U = [1,\infty)$. Firstly I need to show that $f$ maps $U$ into itself, but I can only see how $f$ maps $U$ to $[2 + 1/e,\infty)$, since if we set $x=1$ then $f(1) = 2 + e^{-1}$.
As $x \to \infty$ we get that $f(x) \to \infty$ but i can't justify how it maps $U$ to itself.
From there I have to show that it is contractive, i.e. $|f(x) - f(y)| < |x - y|$. Is it true that this would require the mean value theorem?
|
When someone says that $f$ maps $U$ into $U$, they don't mean that every point in $U$ must be $f(x)$ for some $x \in U$. They merely mean that $f(x) \in U$ for every $x \in U$. And since $[2+1/e,\infty) \subset U$, you have solved that part of the problem.
And yes, the mean value theorem would be an excellent way to do the second part.
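A numerical illustration of both claims (not a proof; the sample points below are arbitrary):

```python
import math

f = lambda x: 1 + x + math.exp(-x)
xs = [1.0, 1.5, 2.0, 5.0, 10.0, 50.0]

# f maps U = [1, oo) into itself: in fact f(x) >= 2 for x >= 1.
assert all(f(x) >= 1 for x in xs)

# f shrinks distances, since f'(x) = 1 - exp(-x) lies strictly between 0 and 1 on U.
for x in xs:
    for y in xs:
        if x != y:
            assert abs(f(x) - f(y)) < abs(x - y)
```

As a side remark, $f(x)-x = 1+e^{-x} > 1$, so $f$ has no fixed point: the map shrinks distances but is not a contraction with a uniform constant $k<1$.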
I think you know how to do this problem - it is just that you don't trust yourself to believe in your answer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/589103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Cyclic Groups - Existence of the identity
How is it claimed that the identity of a cyclic group $G$ with generator 'a' can be written in the form of $e = a^m$ where $m$ is a positive integer.
A proof that a subgroup of $G$ is itself cyclic claims that any element in the subgroup can be written in the form $a^n$, and assumes that $n > 0$. Furthermore, the Well Ordering Principle is used to claim the existence of the $m$ above.
The definition of a cyclic group only requires that every member be expressible as some (possibly negative) integer power of the generator.
Suppose $g$ is an element of $G$ with $g = a^i$ where $i < 0$. How can I proceed to prove that $g$ can be written as a positive power of $a$?
|
First I assume that we're talking about a finite cyclic group. The group $(\mathbb Z, +)$ is an example of what some people would call an infinite cyclic group and it is certainly not true that $n\cdot 1 = 0$ for some $n > 0$ (note everything is translated into additive notation in that example, but I switch back to multiplicative notation below).
Now assume the group is finite cyclic and generated by $a$. Look at the sequence $(a, a^2, a^3, a^4, \ldots)$. As the group is finite this sequence must repeat at some point, say $a^n = a^m$ for some $n > m$. Now multiply both sides by $a^{-m}$ to get $a^{n - m} = 1$ and $n - m > 0$.
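A tiny example of the argument (my choice of group): in $(\mathbb Z/7\mathbb Z)^\times$, generated by $a=3$, the sequence of powers repeats and hits $1$:

```python
a, p = 3, 7
powers = [pow(a, m, p) for m in range(1, 10)]
print(powers)                   # [3, 2, 6, 4, 5, 1, 3, 2, 6]
order = powers.index(1) + 1     # smallest m > 0 with a^m = 1
assert order == 6 and pow(a, order, p) == 1
```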
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/589175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Computing the area and length of a curve
Using the Riemannian hyperbolic metric $$g = \frac{4}{(1-(u^2+v^2))^2}\pmatrix{ 1 & 0 \\ 0 & 1 \\}$$ on the disk $D_p = \{(u,v)\ | \ u^2 +v^2 \le p^2\}$ compute the area of $D_p$ and the length of the curve $\partial D_p$.
|
This is just the hyperbolic metric
$$ds={2|dw|\over 1-|w|^2}, \qquad w:=u+iv,\tag{1}$$
on the unit disk in the $(u+iv)$-plane. Therefore the length of $\partial D_p$ computes to
$$L(\partial D_p)=\int\nolimits_{\partial D_p} ds={2\over 1-p^2}\int\nolimits_{\partial D_p} |dw|={4\pi p\over1-p^2}\ .$$
For any conformal metric, i.e., a metric of the form $ds=g(w)|dw|$, one has
$$d{\rm area}_g(w)=g^2(w)\>d{\rm area}(w)\ ,$$
whereby $d{\rm area}$ denotes the euclidean area in the $w$-plane. Therefore the $g$-area of $D_p$ for the metric $(1)$ computes to
$${\rm area}_g(D_p)=4\int\nolimits_{D_p}{1\over(1-|w|^2)^2}\ d{\rm area}(w)=
8\pi\int_0^p{r\over(1-r^2)^2}\ dr={4\pi p^2\over 1-p^2}\ .$$
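Both closed forms can be sanity-checked numerically, e.g. at $p=\tfrac12$, where they predict $L=\frac{8\pi}{3}$ and area $\frac{4\pi}{3}$ (a check I've added, not part of the computation):

```python
import numpy as np

p = 0.5
L_formula = 4 * np.pi * p / (1 - p ** 2)        # 8*pi/3
A_formula = 4 * np.pi * p ** 2 / (1 - p ** 2)   # 4*pi/3

# Length: the conformal factor 2/(1 - |w|^2) is constant on the circle |w| = p.
L_num = 2 / (1 - p ** 2) * (2 * np.pi * p)
assert abs(L_num - L_formula) < 1e-12

# Area: midpoint rule for 8*pi * int_0^p r/(1 - r^2)^2 dr.
n = 1_000_000
r = (np.arange(n) + 0.5) * (p / n)
A_num = 8 * np.pi * np.sum(r / (1 - r ** 2) ** 2) * (p / n)
assert abs(A_num - A_formula) < 1e-6
```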
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/589295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Residues at singularities I have the following question: Show that the integral
$$\int_{-\infty}^{+\infty}\frac{\cos\pi x}{2x-1}dx = -\frac\pi2$$
Clearly there is a singularity at $z=1/2$ but I think this is a removable singularity so it has $0$ residue. Is this right or have I missed another singularity? If I am right, could someone help me to proceed with this question please because I'm not sure how to.
Thanks
|
You are right, the function
$$f(z) = \frac{\cos \pi z}{2z-1}$$
is entire. However, to evaluate the integral, one considers a different function,
$$g(z) = \frac{e^{i\pi z}}{2z-1},$$
which has a pole in $z = \frac12$. We then have
$$\int_{-\infty}^\infty f(x)\,dx = \operatorname{Re} \int_{-\infty}^\infty g(x)\,dx.$$
The reason to use $g$ instead of $f$ is that directly, the Cauchy integral theorem and residue theorem only allow us to evaluate integrals over closed contours, and to evaluate an integral over the real line, we must know the limit behaviour of the integral over the auxiliary path closing the contour. If the integrand decays fast enough, we know that the integral over the auxiliary part tends to $0$. But $f(z)$ doesn't decay, since $\cos \pi z$ grows exponentially for $\lvert \operatorname{Im} z\rvert \to \infty$. So we replace it with a closely related function that decays fast, $e^{i\pi z} \to 0$ for $\operatorname{Im} z \to +\infty$, and that guarantees that the integral over the auxiliary part of the contour (in the upper half plane) tends to $0$, so we can use the residue theorem to evaluate that integral.
Since the pole lies on the real line, we have
$$\int_{-\infty}^\infty g(x)\,dx = \pi i \operatorname{Res}\left(g(z); \frac12\right),$$
only half of the residue counts.
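The residue itself is $\operatorname{Res}_{z=1/2} g = \frac{e^{i\pi/2}}{2}=\frac i2$, so the integral is $\pi i\cdot\frac i2=-\frac\pi2$, as required. A numerical confirmation (added for illustration) recovers the residue from a small contour around $z=\tfrac12$:

```python
import numpy as np

# Res = (1/2*pi*i) * contour integral of g(z) dz over a small circle around 1/2;
# with z = c + rho*e^{i*theta} this becomes the mean of g(z)*(z - c) over the circle.
N, rho, c = 4096, 0.1, 0.5
z = c + rho * np.exp(2j * np.pi * np.arange(N) / N)
g = np.exp(1j * np.pi * z) / (2 * z - 1)
res = np.mean(g * (z - c))
assert abs(res - 0.5j) < 1e-10                      # Res = i/2
assert abs((np.pi * 1j * res).real + np.pi / 2) < 1e-9
```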
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/589409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|