Q | A | meta
---|---|---
No group with the following property. Is this true? Let $p$ be a prime greater than $3$ and let $G$ be a group of order $p^5$. Is it true that there is no such group $G$ whose Frattini subgroup has order $p^3$ and whose center has order $p^2$?
If the answer is yes, how can one prove it?
|
The class $3$ quotient of the Burnside group $B(2,p)$ has these properties. It has the presentation
$\langle a,b,c,d,e \mid a^p=b^p=c^p=d^p=e^p=1, [b,a]=c, [c,a]=d, [c,b]=e, d,e\ {\rm central}\ \rangle$
The Frattini subgroup is $\langle c,d,e \rangle$ and the centre is $\langle d,e\rangle$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
A second order recurrence relation problem A friend asked me this problem, but I can't solve it. Can anyone help?
We have a sequence $\left\{a_n\right\}$ that satisfies $a_1=1$, $a_2=2$,
$$a_n+\frac{1}{a_n} =\frac{a_{n+1}^2+1}{a_{n+2}}$$
where $n$ is a positive integer.
Prove that
*
*$a_{n+1}=a_n+\frac{1}{a_n}$
*$2n-1\le a_n^2\le3n-2$
*Let $S_n$ be the sum of the first $n$ terms of the sequence $\left\{\frac1{a_n}\right\}$; prove $62<S_{2014}<77$.
|
(a)
By induction, assume $a_{k+1} = a_k + \frac1{a_k}$. Then to get the result for $n=k+1$, consider
$$\begin{align*}
a_k+\frac1{a_k} =& \frac{a_{k+1}^2+1}{a_{k+2}}\\
a_{k+1}=&\frac{a_{k+1}^2+1}{a_{k+2}}\\
a_{k+2} =& a_{k+1}+\frac1{a_{k+1}}
\end{align*}$$
Also prove the base case for $n=1$ holds.
(b)
Using $a_{n+1}^2 = a_{n}^2+2+\dfrac1{a_n^2}$ and by induction, assume $2k-1\le a_k^2 \le 3k-2$. Then for $n=k+1$, the first inequality is
$$a_{k+1}^2 = a_k^2 + 2 +\frac1{a_k^2}\ge2k-1+2 = 2(k+1)-1$$
and the second inequality is similar. Use the fact that $0\le\dfrac1{a_k^2}\le1$.
(c)
$$\begin{array}{rcl}
2n+1\le&a_{n+1}^2&\le3n+1\\
\sqrt{2n+1}\le& a_{n+1} &\le \sqrt{3n+1}\\
\sqrt{2n+1}\le& a_1+\frac1{a_1}+\frac1{a_2}+\cdots+\frac1{a_n}&\le \sqrt{3n+1}\\
\sqrt{2\times2014+1}-a_1 \le& \sum_{i=1}^{2014}\frac1{a_i} &\le \sqrt{3\times2014+1}-a_1\\
62.4\cdots \le& \sum_{i=1}^{2014}\frac1{a_i} &\le 76.7\ldots\\
\end{array}$$
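A quick numerical sanity check of parts (a)-(c) (a Python sketch, not part of the original argument): iterate $a_{n+1}=a_n+\frac1{a_n}$ and verify the bounds along the way.

```python
# Iterate a_{n+1} = a_n + 1/a_n from a_1 = 1, check 2n-1 <= a_n^2 <= 3n-2,
# and print S_2014 = sum of 1/a_k.
a, S = 1.0, 1.0                 # a_1 = 1 and S_1 = 1/a_1 = 1
for n in range(1, 2014):
    assert 2*n - 1 - 1e-9 <= a*a <= 3*n - 2 + 1e-9
    a += 1.0 / a                # a_{n+1} = a_n + 1/a_n
    S += 1.0 / a                # add 1/a_{n+1}
print(S)                        # roughly 62.5, so indeed 62 < S_2014 < 77
```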
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Graphs with both Eulerian circuits and Hamiltonian paths Which graphs have both Eulerian circuits and Hamiltonian paths, simultaneously?
Honestly, I don't know the level of the question. One of my friends asked me to post the question on math.stackexchange.
Thanks.
|
Actually there is no specific class of such graphs: you can always find examples that are both Eulerian and Hamiltonian but do not fit any particular specification, and the class is certainly not just the cycle graphs.
For any $G$ with an even number $n$ of vertices, a regular graph in which every vertex has even degree $d$ with $$ \frac{n}{2} \le d \le n-2, \qquad \deg(v)=d \; \; \text{for all} \; \; v\in V(G), $$ is both Eulerian and Hamiltonian: Dirac's theorem gives a Hamiltonian cycle (in particular the graph is connected), and a connected graph all of whose degrees are even has an Eulerian circuit.
Similarly, given $G$ with an odd number $n$ of vertices, any regular graph with $$ \frac{n+1}{2} \le d \le n-1, \qquad \deg(v)=d \; \; \text{for all} \; \; v\in V(G), $$ (the degree of a regular graph on an odd number of vertices is automatically even)
is both Eulerian and Hamiltonian. Again, these families do not contain all graphs which are both Hamiltonian and Eulerian. The main reason we cannot classify all such graphs is the absence of a necessary and sufficient condition for a graph to be Hamiltonian.
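A concrete instance of the even-$n$ family above (a sketch assuming the networkx library, which the answer itself does not mention): the $4$-regular circulant graph on $8$ vertices has even degree $4=n/2$, so it has an Eulerian circuit, and Dirac's theorem guarantees a Hamiltonian cycle, which a brute-force search confirms.

```python
# An 8-vertex, 4-regular graph (degree 4 = n/2, even) is both Eulerian and Hamiltonian.
from itertools import permutations
import networkx as nx

G = nx.circulant_graph(8, [1, 2])        # each vertex joined to its +-1 and +-2 neighbours
print(nx.is_eulerian(G))                 # True: connected and every degree is even

def has_hamiltonian_cycle(G):
    """Brute-force search over vertex orderings; fine for 8 vertices."""
    nodes = list(G.nodes)
    for perm in permutations(nodes[1:]):
        cycle = [nodes[0], *perm, nodes[0]]
        if all(G.has_edge(u, v) for u, v in zip(cycle, cycle[1:])):
            return True
    return False

print(has_hamiltonian_cycle(G))          # True, as Dirac's theorem predicts
```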
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
munkres analysis integration question Let $[0,1]^2 = [0,1] \times [0,1]$. Let $f: [0,1]^2 \to \mathbb{R}$ be defined by setting $f(x,y)=0$ if $y \neq x$, and $f(x,y) = 1$ if $y=x$. Show that $f$ is integrable over $[0,1]^2$.
|
Consider a partition $P$ of $[0,1]^2$ into $n \times n$ congruent subsquares of side $\frac1n$, as in the picture above. Exactly the $3n-2$ closed subsquares meeting the diagonal $y=x$ have $\sup f = 1$, and every subsquare has $\inf f = 0$, so
$U(f, P) - L(f, P) = \frac{3 n - 2}{n^2}$, and
$\frac{3 n - 2}{n^2} \to 0$ as $n \to \infty$.
So, for any $\epsilon > 0$, there exists a partition $P$ such that $U(f, P) - L(f, P) < \epsilon$.
By Theorem 10.3 (p.86), $f$ is integrable over $[0, 1] \times [0, 1]$.
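A small counting sketch (Python; it assumes the picture shows the uniform partition of $[0,1]^2$ into $n\times n$ equal squares) reproducing $U(f,P)-L(f,P)=\frac{3n-2}{n^2}$:

```python
# Count the closed grid cells that meet the diagonal y = x; on each such cell
# sup f = 1 and inf f = 0, so U(f,P) - L(f,P) = (number of such cells) / n^2.
from fractions import Fraction

def upper_minus_lower(n):
    cells = sum(1 for i in range(n) for j in range(n)
                if max(i, j) <= min(i, j) + 1)      # the cells with |i - j| <= 1
    return Fraction(cells, n * n)

for n in (2, 5, 10, 100):
    assert upper_minus_lower(n) == Fraction(3 * n - 2, n * n)
print("U(f,P) - L(f,P) = (3n-2)/n^2, which tends to 0")
```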
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Why does this assumption change the formula this way I am working through some notes and I cannot understand why the following assumption changes the formula as such.
The formula basically refers to a right-angled triangle of base $ L $ and height $ \frac{D}{2} $; the difference between the hypotenuse and the base is $ \Delta L $.
The formula is as follows
$$ \Delta\theta = \frac{2\pi}{\lambda}[\Delta L] $$
$$ \Delta\theta = \frac{2\pi}{\lambda}[\sqrt{L^2+\frac{D^2}{4}}-L] $$
It then states that, assuming $ L \gg \frac{D}{2} $,
$$ \Delta\theta \approx \frac{\pi D^2}{4\lambda L} $$
But, why!?
|
Let us rewrite the second formula this way:
$$
\Delta\theta = \frac{2\pi}{\lambda} \cdot L \cdot \left[ \left(1 + \frac{D^2}{4L^2}\right)^{1/2} - 1\right] = \frac{2\pi}{\lambda} \cdot L \cdot \left[ \left(1 + x \right)^{1/2} - 1\right],
$$
where $x = \frac{D^2}{4L^2}$.
Then, since $D \ll L$, we can approximate $\left(1 + x \right)^{1/2}$ around $x \approx 0$ by Taylor series:
$$
\left(1 + x \right)^{1/2} = 1 + \frac{1}{2} x +o(x).
$$
So the expression of $\Delta\theta$ can be rewritten like this:
$$
\Delta \theta = \frac{2\pi}{\lambda} \cdot L \cdot (1 + \frac{1}{2} x + o(x)- 1) \approx \frac{2\pi L}{\lambda} \cdot \frac{D^2}{8L^2} = \frac{\pi D^2}{4 \lambda L}
$$
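A quick numerical comparison of the exact expression with the approximation (Python sketch; the values of $D$, $L$ and $\lambda$ below are invented purely for illustration):

```python
# Compare the exact Delta-theta with the approximation pi*D^2/(4*lambda*L) as L grows.
import math

D, lam = 0.01, 500e-9                     # illustrative values only
for L in (0.1, 1.0, 10.0):
    exact  = 2 * math.pi / lam * (math.sqrt(L**2 + D**2 / 4) - L)
    approx = math.pi * D**2 / (4 * lam * L)
    print(L, exact, approx, abs(exact - approx) / exact)
# the relative error shrinks roughly like (D/(2L))^2 as L becomes large compared to D/2
```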
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
An example of a $P$-primary ideal $I$ satisfying $I^2 = IP$ Give some examples of a $P$-primary ideal $I \not=P $ in a noetherian domain $R$ such that $I^2=PI $.
|
Let $R = k [[t^3, t^4, t^5]]$, $P = (t^3, t^4, t^5)$, and $I = (t^3, t^4)$. Then $$IP = I^2 = (t^6, t^7, t^8). $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $\,\,\frac{x^2}{cy+bz}=\frac{y^2}{az+cx}=\frac{z^2}{bx+ay}=1,$ then show that .... I am stuck on the following problem that one of my friends gave me:
If $\,\,\frac{x^2}{cy+bz}=\frac{y^2}{az+cx}=\frac{z^2}{bx+ay}=1,$ then show that $$\frac{a}{a+x}+\frac{b}{b+y}+\frac{c}{c+z}=1$$. I did a problem which was similar to this one but I could not tackle this particular one. Can someone help?
|
HINT:
We have $$x^2=cy+bz\iff ax+cy+bz=x^2+ax=x(x+a)$$
$$\implies\frac1{a+x}=\frac x{ax+cy+bz}\implies \frac a{a+x}=\frac{ax}{ax+cy+bz}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/634909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Mathematical and Theoretical Physics Books Which are the good introductory books on modern mathematical physics? Which are the good advanced books?
I read Whittaker's Analytical Dynamics, and I am reading Arnold's Mathematical Methods of Classical Mechanics. However, I am not very interested in books on classical mechanics, nor books for engineers. The related questions offer mostly books on classical mechanics, or books for engineers or the public. I am aware of Spivak's and of Landau’s books. I appreciate rigor and I am not interested in popular books. For instance, A Road to Reality is not appropriate but Lectures on Quantum Mechanics for Mathematics Students is.
I am asking in particular about books on quantum theories, gravity, and cosmology. I am also asking about unfalsifiable theories; I mean anything from string theory through conformal cyclic cosmology to loop quantum gravity.
To state my background I am a master student of mathematics. I took courses in classical mechanics, continuum mechanics, mathematical models of physics, quantum mechanics, field theory, etc. I audited courses on calculating conformal Feynman amplitudes in $\phi^4$ and in string theory, both of which assumed knowledge in conformal field theories, that I lack.
I have taken or partially audited diverse courses in mathematics. I will make these explicit if need be. I have some knowledge of group theory, representation theory, Lie groups, operator algebras, symplectic geometry, analysis on manifolds, complex manifolds, differential topology, etc.
So, what are good books for a young mathematician who wants to dabble in physics?
|
If Lectures on Quantum Mechanics for Mathematics Students works for you, then you should check Quantum Mechanics for Mathematicians, which was written to provide a somewhat more modern and thorough exposition of the same material. (The author is a student of one of the authors of the former book.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 4,
"answer_id": 0
}
|
One-to-one correspondence between these two problems?
How many 3-digit positive integers are there whose middle digit is
equal to the sum of the first and last digits?
I noticed that the solution to this problem, $45$, is the same as the solution to the problem
How many 3-digit positive integers are there whose middle digit is the average
of the first and last digits?
Is this purely a coincidence or is there some sort of a bijection/one-to-one correspondence that links these two?
|
Note that in both problems, the middle digit (if valid) is determined by the first and last digits.
There are 90 possible pairs of first and last digits $(f,\ell)$, since $1\le f\le 9$ and $0\le\ell\le9$. These can be grouped into 45 pairs $\{ (f,\ell), (10-f,9-\ell) \}$.
Note that exactly one member of each such pair leads to an answer to the first problem: exactly one of $f+\ell$ and $(10-f)+(9-\ell)$ is between 0 and 9, since the two sums add to 19.
Note also that exactly one member of each such pair leads to an answer to the second problem: exactly one of $\frac12(f+\ell)$ and $\frac12((10-f)+(9-\ell))$ is an integer, because the two sums $f+\ell$ and $(10-f)+(9-\ell)$ add to $19$ and therefore have opposite parity.
Therefore the trivial bijection between the sets of 45 pairs induces a bijection between the solution sets of the two problems. For example, $110, 121, 132, 143$ are bijectively mapped to $999, 111, 987, 123$ respectively, while the preimages of $333, 345, 357, 369$ are $363, 385, 792, 770$ respectively.
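A brute-force check of the two counts (a small Python sketch, not part of the original answer):

```python
# Both problems have exactly 45 solutions, as the pairing argument predicts.
sum_type = [(f, f + l, l) for f in range(1, 10) for l in range(10) if f + l <= 9]
avg_type = [(f, (f + l) // 2, l) for f in range(1, 10) for l in range(10) if (f + l) % 2 == 0]
print(len(sum_type), len(avg_type))   # 45 45
```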
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
limit of $\frac{2xy^3}{7x^2+4y^6}$, different answers. Good evening, everyone. I've tried my best to evaluate the limit as $(x,y) \to (0,0)$. Using Sage the answer is $0$ either way, but one textbook says the limit doesn't exist. Could both answers be correct?
|
My guess is that Sage cannot test every approach curve, so it probably tests a small grid of points about the origin. Since the place where the limit is bad is shaped like a cubic curve, the grid misses the curve and gives the wrong answer.
Note that Wolfram Alpha has the same problem: http://m.wolframalpha.com/input/?i=limit+as+x+goes+to++0%2C+y+goes+to+0+of+%5Cfrac%7B2xy%5E3%7D%7B7x%5E2%2B4y%5E6%7D&incCompTime=true
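Concretely, the limit is $0$ along straight lines through the origin but equals $\frac2{11}$ along the cubic $x=y^3$, which is exactly the kind of curve a grid of sample points misses (a sketch using sympy):

```python
# The two-variable limit fails to exist: different approach curves give different values.
import sympy as sp

t = sp.symbols('t')
f = lambda x, y: 2*x*y**3 / (7*x**2 + 4*y**6)
print(sp.limit(f(t, t), t, 0))       # 0     along the line y = x
print(sp.limit(f(t**3, t), t, 0))    # 2/11  along the curve x = y^3
```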
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
}
|
What is the value of $\lim_{x\to0}\frac{x^2\sin\left(\frac{1}{x}\right)}{\sin x}$? So this is the question in my text book $$\lim_{x\to0}\frac{x^2\sin\left(\frac{1}{x}\right)}{\sin x}$$
here what i have done
It can be rearranged as $$\lim_{x\to0}\left(\frac{x}{\sin x}\right)x\sin\left(\frac{1}{x}\right)$$$$\Rightarrow\lim_{x\to0}(1)x\sin\left(\frac{1}{x}\right)$$$$\Rightarrow\lim_{x\to0}\frac{\sin\left(\frac{1}{x}\right)}{\frac{1}{x}}$$$$\Rightarrow1$$
This is what I got. Have I done something wrong? In my textbook the answer is $0$.
Here the solution in my text book $$\lim_{x\to0}\left(\frac{x}{\sin x}\right)x\sin\left(\frac{1}{x}\right)$$$$\Rightarrow\lim_{x\to0}(1)x\sin\left(\frac{1}{x}\right)$$$$\Rightarrow0$$
as$$\left|x\sin\frac{1}{x}\right|\le|x|$$
Can anyone help me see what is wrong?
Thanks,
Akash
|
$$\lim_{x\to0}\frac{x^2\sin\frac1x}{\sin x}=\lim_{x\to0}\frac{x\sin\frac1x}{\frac{\sin x}x}=\lim_{x\to0}\frac{x\sin\frac1x}1=0.$$
The error in your work is the step $\lim_{x\to0}\frac{\sin\frac1x}{\frac1x}=1$: the standard limit $\frac{\sin t}t\to1$ applies only when the argument $t$ tends to $0$, but here $t=\frac1x\to\infty$. Instead, $x\sin\frac1x\to0$ because $\left|x\sin\frac1x\right|\le|x|$, as your textbook says.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Proof that the derivative of the prime counting function is the probability of prime? The derivative of the estimate of the prime counting function, $\frac{x}{\ln x}$, is $\frac{\ln x-1}{(\ln x)^2}$, which is approximately $\frac{1}{\ln x}$ for large values of $x$. According to the prime number theorem, $\frac{1}{\ln x}$ is approximately the probability that a randomly chosen integer between 2 and $x$ is a prime number.
Why is the derivative of the prime counting function the probability of getting a prime number?
Edit: I've only studied up to basic integrals, so try to keep it simple!
|
You are describing Cramér's model of the primes, which is pretty good. However, Maier showed around 1985 that it gives incorrect estimates for short intervals. Let me see if I can find it; it's a famous episode.
Maier's theorem
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
small generalization of a linear algebra exercise I came across the following exercise and was wondering if the conditions are actually necessary or if we could generalize this. Here Hom$(\mathbb R, \mathbb R)$ is the vector space of linear functions from $\mathbb R$ to $\mathbb R$.
let $F$ be a function Hom$(\mathbb R,\mathbb R) \rightarrow \mathbb R$
with $F$: $u \mapsto u(1)$ show that $F$ is linear.
The point is that I think this is valid also for $F$ defined on any function space. For instance
we know that $(f+g)(x)=f(x)+g(x)$ and $(\alpha f)(x)=\alpha \cdot f(x)$ from how functions are generally defined.
so $F(\alpha u)=(\alpha u)(1)=\alpha \cdot u(1)=\alpha F(u)$
and $F(\alpha u+\beta v)=(\alpha u+ \beta v)(1)=(\alpha u)(1)+(\beta v)(1)=\alpha u(1)+\beta v(1)=\alpha F(u)+ \beta F(v)$ and hence we have linearity of $F$.
It appears a bit strange to me that the condition Hom$(\mathbb R, \mathbb R)$ is added, so maybe I misunderstood the exercise or the definition of a function space in general. Any clarification would be really great.
For me a function space is a vector space which has functions as vectors.
|
Yes, it works much more generally. If $V$ is a $K$-vector space ($K$ any field), and $X$ any set, for every $x\in X$, the evaluation map
$$\operatorname{ev}_x \colon \mathscr{F}(X,V) \to V;\quad \operatorname{ev}_x (f) = f(x)$$
is $K$-linear. That indeed follows directly from the definition of addition and scalar multiplication of functions as the pointwise operations.
($\mathscr{F}(A,B)$ denotes the set of all maps $A\to B$)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Simple limit of function How do I show that $\lim_{x \rightarrow \infty } \frac {\log(x^{2}+1)}{x}=0$? I was able to do that using L'Hôpital's rule, but is there any other way?
|
$$\lim_{x\to\infty}\frac{\ln(x^2+1)}x\sim\lim_{x\to\infty}\frac{\ln(x^2)}x=\lim_{x\to\infty}\frac{2\cdot\ln x}x=2\cdot\lim_{t\to\infty}\frac t{e^t}=2\cdot0=0.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Other than $\models$, is there standardized notation for semantic consequence? It is common practice to use $\models$ both for the satisfaction relation between models and sentences, and for the corresponding semantic consequence relation.
Question. Suppose I don't want to use $\models$ for semantic consequence (personally, I think this particular convention causes more confusion than it's worth); what should I use instead? In particular, is there a standardized alternative?
|
At Ben-Gurion University, where I did my B.Sc. and M.Sc., we used $T\implies\varphi$ to denote logical implication, which was really a semantic property:
$T\implies\varphi$ if and only if for every interpretation $M$ of the language and every assignment $s$ for $M$ such that every formula in $T$ is true under $s$, the formula $\varphi$ is true under $s$ as well.
This is all the more reason to pay attention to what is $\implies$ and what is $\rightarrow$ (a statement about propositions vs. a connective in the language).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$\mathbb{Q}$ is not isomorphic to $\mathbb{Q}^+$ How can we show that $\mathbb{Q}$ as an additive group is not isomorphic to $\mathbb{Q}^+$ as a multiplicative group?
Both have a countable number of elements, neither is cyclic, neither has an element $x\neq e$ such that $x^2=e$, both are abelian ... I don't know what to use.
|
If there were an isomorphism $f:(\mathbb Q^+,\times)\to (\mathbb Q,+)$, let $a=f(2)/2$, so that $f(2)=a+a$. Since $f$ is onto, $a=f(b)$ for some $b\in\mathbb Q^+$, and then $f(b^2)=a+a=f(2)$, so $b^2=2$. But no positive rational has square $2$, a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Is $f=0$ if the integral is zero If $f: [0,1]\to \mathbb R$ is continuous and for all $s\in [0,1]$
$$ \int_0^s f(t)\,dt = 0,$$
does it then follow that $f=0$? I can show it for $f \ge 0$, but I am wondering whether it is also true when $f$ is not assumed nonnegative.
|
Define $F(x) = \int_0^x f(s)\,ds$. By the Fundamental Theorem of Calculus, $F'=f$. But by your assumption, $F(x) = 0$ for all $x \in [0,1]$. So $f \equiv 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Is this set convex? (2) Is this set convex for every arbitrary $\alpha\in \mathbb R$?
$$\Big\{(x_1,x_2)\in \mathbb R^2_{++} \,\Big|\, x_1x_2\geq \alpha\Big\}$$
Where $\mathbb R^2_{++}=[0,+\infty)\times [0,+\infty)$.
|
Yes, it is. You should separate the cases $\alpha\le 0$ (when the set is the whole quadrant) and $\alpha>0$ (when it is the part of the quadrant bounded by a branch of the hyperbola $x_1x_2=\alpha$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/635941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Sum of $\sum\limits_{i=1}^\infty (-1)^i\frac{x^{2i+1}}{2i+1}$ Can someone help me with this series? It was on my exam and I don't know how to do it.
For $|x| < 1$ determine the sum of
$$\sum\limits_{i=1}^\infty (-1)^i\frac{x^{2i+1}}{2i+1}$$
|
The derivative of the given sum is the geometric sum
$$\sum_{n=1}^\infty (-x^2)^n=-\frac{x^2}{1+x^2}=-1+\frac{1}{1+x^2}$$
so the given sum, which vanishes at $0$, is
$$\int_0^x\left(-1+\frac{1}{1+t^2}\right)dt=-x+\arctan x$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find DNF and CNF of an expression I want to find the DNF and CNF of the following expression
$$ x \oplus y \oplus z $$
I tried by using
$$x \oplus y = (\neg x\wedge y) \vee (x\wedge \neg y)$$
but it got all messy.
I also plotted it in Wolfram Alpha, and of course it showed them, but not the steps you need to make to get there.
Any ideas to how this could be done?
|
For DNF:
*
*look at each row of the truth table where $p = 1$
*encode a proposition $p_i$ for each such row $i$ from the atoms: it contains $a_i$ if that atom is 1 in the truth table and $\neg a_i$ if it is 0, with the atoms combined by AND so that this term is 1 only on that row. You can think of this conjunction as a product.
*take the OR of all such propositions corresponding to the rows where $p$ is 1
*since this proposition is a disjunction (think of it as an addition) of terms that are each 1 only on their own row, the whole thing is 1 exactly when you need it to be 1.
For CNF:
*
*Look at the rows where $p=0$
*encode a proposition $p_i$ from the atoms for each such row $i$, again with $a_i$ if that atom is 1 in the truth table and $\neg a_i$ if it is 0, and conjoin them. This is not the form you actually want, so negate $p_i$ to get $\neg p_i$; by De Morgan's laws the conjunction becomes a disjunction.
*now take the AND of all such disjunctive clauses.
*This is correct because the clause built from a row returns 1 iff you are not in that row. Requiring all clauses to hold simultaneously says you are not in any of the rows that give a zero, so the whole thing is 1 exactly where it should be.
If you need more help check this video:
https://www.youtube.com/watch?v=tpdDlsg4Cws
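If you want to check the outcome of this truth-table construction, sympy can produce both normal forms (a sketch; using sympy here is my own suggestion, not something from the answer or the video):

```python
# DNF and CNF of x XOR y XOR z via sympy's boolean algebra module.
from sympy import symbols
from sympy.logic.boolalg import Xor, to_dnf, to_cnf

x, y, z = symbols('x y z')
expr = Xor(x, y, z)
print(to_dnf(expr, simplify=True))   # one conjunction per row where the XOR is true
print(to_cnf(expr, simplify=True))   # one clause per row where the XOR is false
```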
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 2
}
|
Probability that the second roll comes up yellow given the first roll was purple. A bag contains $20$ dice. $5$ of the dice have entirely purple sides, $7$ of the dice have $2$ purple and $4$ yellow sides, and $8$ of the dice have $3$ purple and $3$ yellow sides. If you randomly pick a die, roll it, and observe that the roll comes up purple, what is the probability that if you roll the same die again, the roll comes up yellow?
Update I have tried the following: The probability that you pick die 1 and roll a purple is $\frac5{20}\cdot\frac66=\frac5{20}$; the probability that you pick die 2 and roll a purple is $\frac7{20}\cdot\frac26=\frac7{60}$; and the probability that you pick die 3 and roll a purple is $\frac8{20}\cdot\frac36=\frac15$. The sum of these probabilities is $\frac5{20}+\frac7{60}+\frac15=\frac{17}{30}$. Now the probability that the second roll is yellow given the first is purple is: $\frac{5/20}{17/30}\cdot0+\frac{7/60}{17/30}\cdot\frac46+\frac{1/5}{17/30}\cdot\frac36\approx .31$. This is what I think is right; can someone verify it or point out where it is wrong if it is?
|
Let $P$ be the event the first roll gave purple, and $Y$ the event the second roll gave yellow. We want $\Pr(Y|P)$. By the definition of conditional probability, we have
$$\Pr(Y|P)=\frac{\Pr(P\cap Y)}{\Pr(P)}.$$
You calculated $\Pr(P)$ using the correct approach. I have not checked the arithmetic. We need $\Pr(P\cap Y)$.
The event $P\cap Y$ can happen in two ways: (i) we pick a die of type 2, and roll purple then yellow or (ii) we pick a die of type 3, and roll purple then yellow.
The probability of (i) is $\frac{7}{20}\cdot \frac{2}{6}\cdot \frac{4}{6}$. The probability of (ii) is $\frac{8}{20}\cdot \frac{3}{6}\cdot \frac{3}{6}$. Now you have all of the ingredients.
Remark: You have obtained the same number, by essentially similar reasoning.
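For reference, the exact value is $\frac{16}{51}\approx 0.3137$, consistent with the $.31$ above; a short computation with exact fractions (Python sketch):

```python
# P(Y | P) computed exactly: P(P) = 17/30 and P(P and Y)/P(P) = 16/51.
from fractions import Fraction as F

# (number of dice, P(purple on a roll), P(yellow on a roll)) for the three die types
dice = [(5, F(6, 6), F(0, 6)), (7, F(2, 6), F(4, 6)), (8, F(3, 6), F(3, 6))]
P_P  = sum(F(n, 20) * p     for n, p, y in dice)
P_PY = sum(F(n, 20) * p * y for n, p, y in dice)
print(P_P, P_PY / P_P)   # 17/30 and 16/51
```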
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
An element does not belong to an ideal
How can I prove that the element $x-5$ does not belong to the ideal $(x^2-25,-4x+20)$ in $\mathbb Z[x]$?
I tried to show that by proving $x-5\neq(x^2-25)f(x)+(-4x+20)g(x)$ for all $f,g$. Any help?
|
${\rm mod}\ \color{#c00}2\!:\ $ every element $(x^2\!-25)f+(-4x+20)g$ of the ideal is $\,\equiv (x\!+\!1)^2 f,\,$ which is either $\,0\,$ or of degree $\ge 2,\,$ so it is $\,\not\equiv\, x-5\,\equiv\, x+1$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
}
|
How do I make pi = 3? This question emerges from a discussion on Quora which concluded that if a circle were drawn on the surface of a sphere, the ratio of the radius (from the circle's centre as projected onto the sphere's surface, measured over the surface of the sphere) to the circumference could be made to equal exactly 1:3. So there is a "world" in which pi is actually a rational integer.
Q. What is the required ratio of the diameter of the sphere to the diameter of the circle for this to happen?
|
If you take the unit sphere $r=1$, denoted by $S^2$, and take the circle's centre to be the north pole $n=(0,0,1)^T$, you want the diameter of the circle to be such that $\pi \cdot d = 3$, so $d = \frac3\pi$. From that you can compute backwards the height of the hyperplane $H:= \{x\in\mathbb R^3 : x_3 = h\}$ such that $H\cap S^2$ yields this circle of diameter $\frac3\pi$.
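A tiny numerical follow-through of that recipe (a sketch; like the answer, it takes the circle's "diameter" to be the planar diameter $d=3/\pi$ on the unit sphere):

```python
# Planar radius 3/(2*pi) forces the circle into the plane x3 = h on the unit sphere.
import math

r_circle = 3 / (2 * math.pi)            # radius so that circumference = pi * d = 3
h = math.sqrt(1 - r_circle**2)          # height of the cutting plane
print(r_circle, h)                      # about 0.477 and 0.879
```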
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Find slope of a curve without calculus Is it possible to find the slope of a curve at a point without using calculus?
|
Slope of a curve at a specific point MUST be a limit, which I am not sure whether you classify as calculus or not. Slope is by definition a function of two distinct points, and the only interpretation of "slope of a curve at a point" is that of two points approaching each other along the curve.
If you are not allowed to use limits, then no, because both points have the same $x$ and $y$ coordinates, causing a $0/0$ evaluation, which is meaningless.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
evaluation of series Evaluate :
a.$$\frac{1}{2\cdot3}+\frac{1}{4\cdot5}+\frac{1}{6\cdot7}+\cdots$$
For this I looked at
$$x+x^3+x^5+\cdots=\frac{x}{1-x^2} \text{ for }|x|<1$$
Integrating the above series from $0$ to $t$ yields
$$\frac{t^2}{2}+\frac{t^4}{4}+\frac{t^6}{6}+\cdots=\int_{0}^{t}\frac{x}{1-x^2} \, dx$$
Again integrating the series from $0$ to $1$ will give
$$\frac{1}{2\cdot3}+\frac{1}{4\cdot5}+\frac{1}{6\cdot7}+\cdots=\int_{0}^{1}\left(\int_{0}^{t}\frac{x}{1-x^2} \, dx\right) \, dt$$
But upon integration a $\log(1-t^2)$ term appears, which is not defined at $t=1$.
|
Using $\displaystyle \frac1{n(n+1)}=\frac1n-\frac1{n+1}$
$$\frac1{2\cdot3}+\frac1{4\cdot5}+\frac1{6\cdot7}+\cdots=\frac12-\frac13+\frac14-\frac15+\frac16-\frac17+\cdots$$
Now use the convergence of the alternating series for $\log 2$, or the Taylor series for $\log(1+x)$ and its convergence at $x=1$.
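The resulting value is $1-\log 2\approx 0.3069$; a quick numerical check of the partial sums (Python sketch):

```python
# Partial sums of 1/(2*3) + 1/(4*5) + ... approach 1 - ln 2.
import math

s = sum(1 / (2*k * (2*k + 1)) for k in range(1, 200001))
print(s, 1 - math.log(2))   # both are about 0.30685
```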
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How are the values $3\sqrt{2}$ and $\sqrt{2}$ determined? Regarding those values $\sqrt{2}$ and $3\sqrt{2}$:
How are they supposed to fit with $MB=BN=2$?
|
Draw the bottom square $ABCD$!
I've added the point $T$: the centre of the square $ABCD$.
The computation is essentially not different from the one given by mathlove in the earlier answer: the square $MBNT$ has sides of length $2$, its diagonal $TB$ has length $2 \sqrt{2}$, and $S$ is in the middle, so $SB = \sqrt{2}$.
And, of course, analogously, $ABCD$ has sides of length $4$, its diagonal $DB$ has length $4 \sqrt{2}$, so $DS = 3\sqrt{2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
The Limit of $x\left(\sqrt[x]{a}-1\right)$ as $x\to\infty$. How to find the limit of:
$$ \lim_{x\to \infty }x\left(\sqrt[x]{a}-1\right)$$
Without using L'Hôpital's rule.
I've tried to bound the term and use the squeeze theorem, but I couldn't find the right upper bound. I've also tried to convert $a^\frac{1}{x}$ to $e^{\frac{1}{x}\ln{a}}$, but it didn't help me.
What's the right way to evaluate that limit?
|
Setting $\displaystyle \frac1x=h$
$$\lim_{x\to\infty}x(\sqrt[x]a-1)=\lim_{h\to0}\frac{a^h-1}h=\ln a\lim_{h\to0}\frac{e^{h\ln a}-1}{h\ln a}=\cdots$$
Use the fact that $\lim_{x\to 0}\frac{e^x-1}{x}=1$; see, for instance, the question "Proof of $(e^x-1)/x \to 1$ as $x\to 0$ using the epsilon-delta definition of a limit".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Calc expected value of 5 random numbers with uniform distribution Assume we have a random number $\sim U(0,100)$.
Then the expected value of that number will be $\int_{0}^{100} \frac{x}{100}\,dx = 50$.
Now assume we have 5 independent random numbers $\sim U(0,100)$.
How can I calculate the expected value of the maximal number?
Thanks.
|
You need to learn about order statistics:
https://en.wikipedia.org/wiki/Order_statistics
The maximum of five independent observations is the fifth order statistic of that sample. In your case it has a (scaled) beta distribution: it is $100$ multiplied by a Beta$(5,1)$ variable, with expectation $100 \cdot \frac{5}{6}$. You can find the details in the Wikipedia article above.
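A Monte Carlo check of that expectation (a Python sketch, not part of the original answer):

```python
# The expected maximum of five independent U(0,100) draws is 100*5/6 = 83.33...
import random

N = 200_000
est = sum(max(random.uniform(0, 100) for _ in range(5)) for _ in range(N)) / N
print(est)   # close to 83.33
```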
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Why - not how - do you solve Differential Equations? I know HOW to mechanically solve basic differential equations. To recap, you start out with the derivative $\frac{dy}{dx}=...$ and you aim to find out $y=...$. To do this, you separate the variables and then integrate.
But can someone give me some context? A simple example or a general sense of WHY you solve a differential equation: a common situation they are used to model, and the purpose of then finding the original function from whence the derivative came. When do you initially know only the derivative, that is, only the rate of change?
Thanks!
|
In the introduction of Arnold's book, talking about differential equations:
[...]
Newton considered this invention of his so important that he encoded it as an anagram whose meaning in modern terms can be freely translated as follows:
“The laws of Nature are expressed by differential equations.”
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/636928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
}
|
Solve $12x\equiv9\pmod{15}$ Question:
Solve $12x\equiv9\pmod{15}$
My try:
$\gcd(12,15)=3$, and $3$ divides $9$, so the congruence has exactly $3$ solutions modulo $15$.
Now
$15=12\times1+3\\
3=15-12\times 1\\
3=15+12\times(-1)\\
\implies9=15\times3+12\times(-3)\\
\implies12\times(-3)\equiv9\pmod{15}$
So $x\equiv-3\pmod{15}$
Am I correct?
|
Hint: $12x\equiv 9 \pmod{15}$ if and only if $4x\equiv 3\pmod{5}$, this is easy to verify by definition. Now, everything is co-prime to the modulus, so the problem is trivial.
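A brute-force check of the solution set (Python sketch); note that the asker's $x\equiv-3\equiv12\pmod{15}$ is one of the three residues, and each residue satisfies $4x\equiv3\pmod 5$:

```python
# All solutions of 12x = 9 (mod 15) among the residues 0..14.
print([x for x in range(15) if (12 * x - 9) % 15 == 0])   # [2, 7, 12]
```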
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Gramian matrix test Is there some test to know whether a matrix $M$ is Gramian? $M$ is Gramian if there exists a matrix $W$ such that $M=W^HW$. Also, is it possible to determine $W$?
Thanks
|
A Gramian matrix is Hermitian and positive semi-definite, and conversely every positive semi-definite matrix $M$ can be written as $M=W^HW$, for instance with $W$ the (Hermitian) square root of $M$. So the test is positive semi-definiteness, e.g. checking that all eigenvalues are $\ge 0$. The solution of $M=W^HW$ is not unique: if $M=W^HW$, then also $M=(UW)^H(UW)$ for every unitary $U$. In particular, starting from a nonsymmetric real matrix $A$ one can form $B=AA^T$, and then both $A^T$ and the symmetric square root of $B$ solve $B=W^TW$.
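A numerical sketch of the test and of one possible factorization (it assumes numpy; the eigenvalue-based square root below is only one of many valid choices of $W$):

```python
# Test positive semi-definiteness via eigenvalues and recover one W with M = W^H W.
import numpy as np

A = np.random.randn(4, 3) + 1j * np.random.randn(4, 3)
M = A.conj().T @ A                           # a Gramian matrix by construction

eigvals, eigvecs = np.linalg.eigh(M)         # M is Hermitian
print(np.all(eigvals >= -1e-12))             # True: positive semi-definite

W = np.diag(np.sqrt(np.clip(eigvals, 0, None))) @ eigvecs.conj().T
print(np.allclose(W.conj().T @ W, M))        # True: W^H W recovers M (W is not unique)
```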
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that the limit definition of the exponential function implies its infinite series definition. Here's the problem: Let $x$ be any real number. Show that
$$
\lim_{m \to \infty} \left( 1 + \frac{x}{m} \right)^m = \sum_{n=0}^ \infty \frac{x^n}{n!}
$$
I'm sure there are many ways of pulling this off, but there are 3 very important hints to complete the exercise in the desired manner:
*
*Expand the left side as a finite sum using the Binomial Theorem. Call the summation variable $n$.
*Now add into the finite sum extra terms which are $0$ for $n>m$, in order to make it look like an infinite series.
*What happens to the limit on $m$ outside the series?
So far I was able to use Hint 1 to expand the left side:
$$
\lim_{m \to \infty} \left( 1 + \frac{x}{m} \right)^m = \lim_{m \to \infty} \sum_{n=0}^m \binom {m}{n} \left( \frac{x}{m} \right)^n
$$
No matter what I do with the binomial coefficients and factorials, I can't figure out what extra terms to add per Hint 2. Any suggestions?
|
Recall that
$$\binom{m}{n}=\frac{m!}{n! (m-n)!}$$
By Stirling's formula you get for large $m$
$$m!\approx \sqrt{2\pi m}\left(\frac{m}{e}\right)^{m}\\
(m-n)!\approx \sqrt{2\pi(m-n)}\left(\frac{m-n}{e}\right)^{m-n} $$
Then note
$$ \left(\frac{m-n}{e}\right)^{m-n}=e^{(m-n)\ln(m-n)-m+n}=e^{(m-n)\ln(m)+(m-n)\ln(1-\frac{n}{m})-m+n} $$
By Taylor series
$$f(x+\epsilon)=f(x)+f'(x)\epsilon+O(\epsilon^2)~~~\text{get}~~~\ln(1-\frac{n}{m})=\underbrace{\ln(1)}_{=0}-\frac{n}{m}+O(m^{-2})$$
and we proceed to
$$ \left(\frac{m-n}{e}\right)^{m-n}\approx e^{(m-n)\ln(m)-(m-n)\frac{n}{m}-m+n}\approx e^{(m-n)\ln(m)-m}= \frac{m^{m-n}}{e^m}$$
where we neglected terms that certainly vanish in the limit $m\to\infty$.
Therefore, we conclude that
$$\frac{m!}{(m-n)!}= m^n+\text{subleading}$$
And all together
$$\sum_{n=0}^\infty \binom{m}{n} \left(\frac{x}{m}\right)^n= \sum_{n=0}^\infty \frac{m^n}{n!} \left(\frac{x}{m}\right)^n+\text{subleading}\to e^x$$
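A quick numerical comparison of the two sides for a fixed $x$ (Python sketch, purely illustrative):

```python
# (1 + x/m)^m approaches the series value e^x as m grows; here x = 2.
import math

x = 2.0
series = sum(x**n / math.factorial(n) for n in range(60))   # essentially e^2
for m in (10, 100, 10_000, 1_000_000):
    print(m, (1 + x/m)**m, series)
# the first column's values climb towards e^2 = 7.389056...
```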
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
}
|
If $\lim_{x\rightarrow\infty}(f(x+1)-f(x))=L$ prove that $f=O(x)$. Let $f:\mathbb{R}\rightarrow\mathbb{R}$ such that
$$\lim_{x\rightarrow\infty}(f(x+1)-f(x))=L$$
Prove that $$\lim_{x\rightarrow \infty}\dfrac{f(x)}{x}=L$$
This was an exam question that I was given and got nowhere on it. Going back now, I don't think I'm any closer.
This is my idea so far. We know that
$$\lim_{x\rightarrow\infty}\dfrac{f(x+1)-f(x)}{x}=0$$
I think I'm supposed to add the appropriate $0$ to
$$\left\vert \dfrac{f(x+1)-f(x)}{x}\right\vert$$ but I just keep getting a lower bound.
A hint would be much appreciated. Thanks
|
You need to impose that $f$ maps bounded intervals onto bounded intervals:
By the way, a proof by contraposition seems to be the most appropriate. But first, we recall the basic identity
$$ (f(x+1)-f(x)-L)^2=\left([f(x+1)-L(x+1)]+[Lx-f(x)]\right)^2= \\ \ \\
=(f(x+1)-L(x+1))^2+(f(x)-Lx)^2- \\ \ \\ -2(f(x)-Lx)(f(x+1)-L(x+1)).
$$
So, if $\displaystyle \lim_{x \rightarrow \infty} \frac{f(x)}{x} \neq L$, one can find $\varepsilon_0,\varepsilon_1>0$ s.t. for every $\delta>0$, one can find $x>\delta$ satisfying the set of inequalities
$$ |f(x)-Lx|\geq \varepsilon_0 |x| ~~~\&~~~ |f(x+1)-L(x+1)|\geq \varepsilon_1 |x+1| .$$
Now pick $\varepsilon=\min \{ \varepsilon_0,\varepsilon_1\}$. Then one gets
$$(f(x)-Lx)^2\geq \varepsilon^2 x^2~~~\& ~~~~ (f(x)-Lx)^2\geq \varepsilon^2 (x+1)^2 $$
so that
$$
(f(x+1)-f(x)-L)^2\geq \varepsilon^2(x+1)^2+\varepsilon^2 x^2-2|x||x+1|\varepsilon^2.
$$
Remark: Although it is not stated, the inequality $-2(f(x)-Lx)(f(x+1)-L(x+1))\geq -2\varepsilon^2 |x|~|x+1|$ holds
only in the case that $f$ maps a bounded interval onto a bounded
interval. That is the case when one imposes the condition $x>\delta$
[the set $f(]\delta,\infty[)$ is always bounded]. That means that the inequalities $f(x)-Lx\leq -\varepsilon |x|$ and $f(x+1)-L(x+1)\leq -\varepsilon |x+1|$ are always satisfied.
Finally, a short computation based on the binomial identity $(a-b)^2=a^2+b^2-2ab$ [$a=\varepsilon |x|$ & $b=\varepsilon |x+1|$] allows us to conclude that
$$
(f(x+1)-f(x)-L)^2\geq \varepsilon^2.
$$
Since $\delta>0$ is arbitrary, this is equivalent to saying that
$\displaystyle \lim_{x \rightarrow \infty} (f(x+1)-f(x)) \neq L$.
In conclusion, we have proved that $\displaystyle \lim_{x \rightarrow \infty} \frac{f(x)}{x} \neq L$ implies that $\displaystyle \lim_{x \rightarrow \infty} (f(x+1)-f(x)) \neq L$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Hardy-Littlewood maximal function weak type estimate Show that if $f\in L^1(\mathbb{R}^d)$ and $E\subset \mathbb{R}^d$ has finite measure, then for any $0<q<1$,
$$\int_E |f^{*}(x)|^q dx\leq C_q|E|^{1-q}||f||_{L^1(\mathbb{R}^d)}^{q}$$
where $C_q$ is a positive constant depending only on $q$ and $d$.
Here the function $f^*(x)=\sup_{x\in B}\frac{1}{|B|}\int_B |f(y)|dy$ is the Hardy-Littlewood maximal function.
Notes
It seems to me the weak type estimate $\forall \alpha>0,\enspace |\{x: f^*(x)>\alpha\}|\leq \frac{3^d}{\alpha}||f||_{L^1(\mathbb{R}^d)}$ is of great use but I am having trouble putting this to any use. Any help is appreciated.
|
Indeed, the weak type estimate is useful. Using Fubini's theorem, we have
$$\int_E|f^{*}(x)|^q\mathrm dx=q\int_0^\infty t^{q-1}\lambda\{|f^*(x)|\chi_E\geqslant t\}\mathrm dt.$$
Notice that $$\lambda\{|f^*(x)|\chi_E\geqslant t\}\leqslant \min\left\{|E|;\frac{3^d}t\lVert f\rVert_{\mathbb L^1}\right\},$$
hence cut the integrals and conclude.
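For completeness, one way to cut the integrals (a sketch, assuming $|E|>0$) is to split at $t_0 = 3^d\lVert f\rVert_{L^1(\mathbb R^d)}/|E|$, where the two bounds cross:
$$\int_E|f^{*}|^q\,\mathrm dx\le q\int_0^{t_0} t^{q-1}|E|\,\mathrm dt+q\int_{t_0}^\infty t^{q-2}\,3^d\lVert f\rVert_{L^1}\,\mathrm dt=|E|\,t_0^{q}+\frac{q}{1-q}\,3^d\lVert f\rVert_{L^1}\,t_0^{q-1}=\frac{3^{dq}}{1-q}\,|E|^{1-q}\lVert f\rVert_{L^1}^q,$$
so the claimed bound holds with $C_q=\frac{3^{dq}}{1-q}$.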
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
find the domain of root of a logarithmic function I'm a little confused about this question since the output of a logarithmic function varies from $ -\infty $ to $\infty$. I need to find the domain of this function: $ y=\sqrt{\log_x(10-x^2)} $. How can I find the interval on which $\log_x(10-x^2)$ is greater than or equal to zero?
|
Recall for $a>0, a\neq 1,b>0$:
$$\log_ab=\frac{\ln b}{\ln a}$$
Thus we have
$$f(x)=\log_x(10-x^2)=\frac{\ln(10-x^2)}{\ln x}$$
Then $f(x)\geq 0$ if and only if ($10-x^2\geq 1$ and $x>1$) or ($0<10-x^2\leq 1$ and $0<x<1$). Since the latter case cannot happen, we must have $10-x^2\geq 1$ and $x>1$, which gives $1<x\leq3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
What is a series? This question is rather pedantic, but it is something that has been bothering me for some time.
Summing up infinitely many terms of a sequence is something that is done in pretty much every subfield of mathematics, so series are right at the core of mathematics. But strangely, I have never seen a formal definition of a series of the form "A series is...", whether I look in books on calculus or on Banach space theory.
Also, the use of language seems somewhat inconsistent. Many texts formally define $\sum_{n=1}^\infty x_n$ to be $\lim_{N\to\infty}\sum_{n=1}^N x_n$ but then write something like "The series $\sum_{n=1}^\infty x_n$ converges if...", which would then mean "$\lim_{N\to\infty}\sum_{n=1}^N x_n$ converges if...", which makes no sense for then $\sum_{n=1}^\infty x_n$ is either a number (or a vector) or a meaningless expression such as "the largest natural number".
So what is the definition of a series? Or are series really just a way to speak about sequences and series do not exist as mathematical objects?
|
I think I remember that when I first learned about this, my professor said that this is the first 'abuse of notation' that we would encounter- the symbol $\sum_{n=0}^\infty a_n$ is both used for the sequence and its limit.
One way to answer your original question could be to think of a series as a pair of sequences $(a_n,b_n)$ such that $b_{n+1}-b_n=a_n$, and so make both the underlying sequence and its partial sums part of the data.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
}
|
Find $\int_0^{+\infty}\cos 2x\prod_{n=1}^{\infty}\cos\frac{x}{n}dx$ Evaluate the following integral
$$\int_0^{+\infty}\cos 2x\prod_{n=1}^{\infty}\cos\frac{x}{n}dx$$
I was thinking of a way which do not need to explicitly find the closed form of the infinite product, since I don't have any idea to tackle that. Any hints are welcomed.
|
The integral
$$g(y)={1\over \pi}\int_0^\infty \cos(xy)\prod_{n=1}^\infty\cos{x\over n}\,dx$$
is the density function of a random variable that I call the Random Harmonic Series.
The value $g(2)$ is particularly interesting as it is almost, but not quite, equal to $1/8$.
To fifty decimal places, it is
$$g(2)=.12499999999999999999999999999999999999999976421683.$$
If you read my paper, you will discover why it is so close to $1/8$.
Random harmonic series. American Mathematical Monthly 110, 407-416 (May 2003).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Probability of getting two pair in poker I was looking at this website http://www.cwu.edu/~glasbys/POKER.HTM and I read the explanation for how to calculate the probability of getting a full house. To me, the logic basically looked like you figure out the number of possible ranks and multiply by the number of ways to choose the cards from that given rank.
In other words, for a full house $P=$
$$\frac{{13\choose1}{4\choose3}{12\choose1}{4\choose2}}{52\choose5}$$
Following this logic, I tried to calculate the probability of getting two pair. My (incorrect) logic was that there are 13 possible ranks for the first pair and $4\choose2$ ways to choose two cards from that rank, 12 possible ranks for the second pair and $4\choose2$ ways to choose two cards from that rank, and 11 possible ranks for last card and $4\choose1$ ways to choose a card from that rank.
So I tried $P=$
$$\frac{{13\choose1}{4\choose2}{12\choose1}{4\choose2}{11\choose1}{4\choose1}}{52\choose5}$$
Obviously my solution was incorrect. I read explanation and the correct answer is $P=$
$$\frac{{13\choose2}{4\choose2}{4\choose2}{11\choose1}{4\choose1}}{52\choose5}$$
I'm still a bit fuzzy on where I went wrong though. Can anyone help me understand this problem a little better? Thank you very much for your help.
|
I find permutations more intuitive to follow for this kind of problem. For people like me:
We have five slots to fill: - - - - - . The first slot can take all 52 cards. The second slot can take only three cards, so that it pairs the first. Similarly, the third and fourth slots can take 48 and 3 cards, respectively. The last and final slot can take any of the remaining 44 cards. Therefore:
52 * 3 * 48 * 3 * 44 = 988416. Please note, this is order dependent; in other words, this is the count of x x y y z. However, we should count all the orderings (e.g., z x y x y). Therefore, we multiply 988416 by 5! and divide by 2! (order between the two x's) * 2! (order between the two y's) * 2! (order between the pair of x's and the pair of y's). The total count is 14826240.
This is the numerator. The denominator is 52*51*50*49*48 = 311875200. The probability is 0.0475390156062425.
Note that if you want to count how many different hands can be dealt, then you have to divide 14826240 by 5! to compute the combination.
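A quick check (Python sketch) that this permutation count agrees with the usual binomial-coefficient count:

```python
# Two-pair hands: C(13,2)*C(4,2)^2*11*4 = 123552, probability about 0.047539.
from math import comb

hands = comb(13, 2) * comb(4, 2)**2 * 11 * 4
total = comb(52, 5)
print(hands, hands / total)                    # 123552  0.0475390156...
print(14826240 // 120, 311875200 // 120)       # dividing the ordered counts by 5! gives the same pair
```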
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 6,
"answer_id": 2
}
|
Number of Irreducible Factors of $x^{63} - 1$ I have to find the number of irreducible factors of $x^{63}-1$ over $\mathbb F_2$ using the $2$-cyclotomic cosets modulo $63$.
Is there a way to see how many the cyclotomic cosets are and what is their cardinality which is faster than the direct computation?
Thank you.
|
Note that $x^{p^n}-x\in\mathbb{Z}_p[x]$ equals the product of all monic irreducible polynomials over $\mathbb{Z}_p$ of degree $d$ such that $d|n$. Suppose $w_p(d)$ is the number of monic irreducible polynomials of degree $d$ over $\mathbb{Z}_p$; then we have
$$p^n=\sum_{d|n}dw_p(d)$$
now use Mobius Inversion Formula to obtain
$$w_p(n)=\frac1{n}\sum_{d|n}\mu(\frac{n}{d})p^d.$$
use above identity to obtain
$$w_p(1)=p$$
$$w_p(q)=\frac{p^q-p}{q}$$
$$w_p(rs)=\frac{p^{rs}-p^r-p^s+p}{rs}$$
where $q$ is a prime number and $r,s$ distinct prime numbers.
Now you need to calculate $w_2(1)+w_2(2)+w_2(3)+w_2(6)\color{#ff0000}{-{1}}$ (the $-1$ discards the factor $x$, since $x^{64}-x=x(x^{63}-1)$). By using the above formulas you can see that the final answer is $13$.
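A direct verification (a sketch using sympy, which is my own suggestion):

```python
# x^63 - 1 factors into 13 distinct irreducible polynomials over F_2.
from sympy import symbols, factor_list, degree

x = symbols('x')
factors = factor_list(x**63 - 1, modulus=2)[1]
print(len(factors))                                   # 13
print(sorted({degree(f, x) for f, _ in factors}))     # degrees 1, 2, 3 and 6
```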
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How to prove that $\lim_{(x,y) \to (0,0)} \frac{x^3y}{x^4+y^2} = 0?$
How to prove that $\lim_{(x,y) \to (0,0)} \dfrac{x^3y}{x^4+y^2} = 0?$
First I tried to contradict by using $y = mx$ , but I found that the limit exists.
Secondly I tried to use polar coordinates, $x = r\cos\theta $ and $y = r\sin\theta$,
And failed .. How would you prove this limit equals $0$?
|
Observe that $x^4 + y^2 \geq |x^2y|$ (for instance, because $x^4+y^2+2x^2y = (x^2+y)^2\geq0$ and $x^4+y^2-2x^2y = (x^2-y)^2 \geq0$). Hence $\displaystyle \left|\frac{x^2y}{x^4+y^2}\right| \leq 1$ when $(x,y)\neq (0,0)$ and thus $$\lim_{(x,y)\rightarrow (0,0)} \left|\frac{x^3y}{x^4+y^2}\right| \leq \lim_{(x,y)\rightarrow(0,0)} |x| = 0,$$ so the limit is 0 by the squeeze theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/637987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 2
}
|
How do I calculate the intersection between two cosine functions?
$f(x) = A_1 \cdot \cos\left(B_1 \cdot (x + C_1)\right) + D_1$
$g(x) = A_2 \cdot \cos\left(B_2 \cdot (x + C_2)\right) + D_2$
Is it possible at all to solve this analytically? I can start doing this but I get stuck half way.
$A_1 \cdot \cos\left(B_1 \cdot (x + C_1)\right) + D_1 = A_2 \cdot \cos\left(B_2 \cdot (x + C_2)\right) + D_2$
$\Longleftrightarrow A_1 \cdot \cos\left(B_1 \cdot (x + C_1)\right) - A_2 \cdot \cos\left(B_2 \cdot (x + C_2)\right) = D_2 - D_1$
I'm not sure how to use arccosine on this expression. Therefore I'm asking for help to solve this.
Thanks in advance!
|
Substituting $\xi:=B_1(x+C_1)$ brings it to the fundamental form
$$ \cos (\xi) = p\cos (a\xi+b)+q$$
with $p=\frac{A_2}{A_1}$, $q=\frac{D_2-D_1}{A_1}$, $a=\frac{B_2}{B_1}$ and $b=B_2(C_2-C_1)$. As far as I know, this form cannot be solved analytically in general.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/638060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why these points are isolated points in this exercise This is an exercise from a calculus book I'm reading:
I can do the exercise but I don't understand the $(\dots)$ in $(c)$. The $x$ are in $\mathbb R$ and $\mathbb R$ does not have isolated points (an isolated point of a set is a point that is not an accumulation point of the set). Please can someone explain me why these points are isolated?
|
Isolated in this context means isolated within the set: there are no other points of the set arbitrarily close. If I define the set $\{\frac 1n: n \in \Bbb N\}$, each point is isolated because for each point $x_n$ I can find an $\epsilon$ so that there are no other points of the set within $(x_n-\epsilon,x_n+\epsilon)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/638166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Probability of a path of a given length between two vertices of a random graph Suppose that in random graph $G$ on $n$ vertices any $2$ vertices can be connected by an edge with probability $\dfrac{1}{2}$, independently of all other edges. What is the probability $P_n(k)$ that two arbitrary vertices are connected by a simple path of length $k$, $0<k \leq n-1$?
My attempt. Fix $2$ vertices. To build a path of length $k$ we have to choose $k-1$ vertices from the remaining $n-2$ vertices. Since order is important we can do it in $(k-1)! {n-2 \choose k-1}$ ways. There are $2^k$ configurations of the edges for a path of length $k$. Thus the probability seems to be
$$
P_n(k)=\frac{(k-1)! {n-2 \choose k-1}}{2^k}.
$$
The sum over all path lengths, $\displaystyle \sum_{k=1}^{n-1}P_n(k)$, must equal $1$, but my calculations for small $n$ show that it is not $1$.
Where is my mistake?
|
It's an old question, but I was thinking about the same problem today, and here's a solution for a special case.
Let's say that there are $n+2$ total vertices and the edge-probability is $p$. Edge-probability is the probability that there exists a direct edge between two vertices. Let's say we want to find the probability that ∃ a path of length two between two vertices $x$, and $z$.
This probability equals $1 - (1 - p^2)^n \approx np^2$ when $p$ is small enough.
For example, if there are 100 vertices, and the edges of the graph are in a matching, i.e. the total edges in the graph equals 100, then $p = 100/(100*100)$, and the probability of length 2 path equals $0.009 \approx 0.01$.
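A Monte Carlo check of the length-two formula (a Python sketch; the values of $n$ and $p$ below are arbitrary):

```python
# Probability that two fixed vertices are joined by a path of length 2
# through one of n intermediate vertices; each required edge is present with prob. p.
import random

n, p, trials = 20, 0.1, 200_000
hits = sum(
    any(random.random() < p and random.random() < p for _ in range(n))
    for _ in range(trials)
)
print(hits / trials, 1 - (1 - p**2)**n)   # both are about 0.18
```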
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/638251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
When do parametric equations constitute a line? The given equations specifically are
$x=3t^3 + 7$
$y=2-t^3$
$z=5t^3 + 3$
And
$x=5t^2-1$
$y=2t^2 + 3$
$z=1-t^2$
|
For the first case, $r=(7,2,3)+t^3(3,-1,5)$. As $t$ varies through $\mathbb R$, $t^3$ varies through $\mathbb R$, so we have a line.
For the second case, $r=(-1,3,1)+t^2(5,2,-1)$. As $t$ varies through $\mathbb R$, $t^2$ varies through the non-negative reals, so we have a ray.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/638340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Why doesn't this work in Geogebra I've got a really simple equation that I want GeoGebra to plot:
$\sqrt {2x}-\sqrt {3y} =2$
It says it's an illegal operation so I try:
$3y=2x-4\sqrt{2x}+4$
When this doesn't work, I try changing $\sqrt{2x}$ to $(2x)^{1/2}$ and it informs me that exponents can only be integers. Since when can't GeoGebra handle rational exponents?!
|
The original equation in this thread now plots correctly as well as several others that previously would not plot such as $\ \sin (x+y) = x/y \ $. Even simple ones like $ \ \sin y = x \ $ would not plot a year ago and now work with no problem. Nice work GeoGebra.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/638590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Union of Chain of Ideals I'm writing a project in a "Rings and Modules" course, and I've come across the following proposition, stated without proof:
Proposition 1.2.
In a commutative ring R , the product of ideals is commutative and associative, and distributes sums and unions of chains.
Generally, whenever one of these occurs I try to prove it myself. However in this case I'm struggling to understand what the proposition is even stating. I've already proven that the product distributes sums of ideals, but I can't find a nice definition of what a chain is.
As far as I can gather, the question is saying show that
$A(B\cup C)=A\cup C + B\cup C)$ where A is an ideal of R, and $B,C$ are chains. However without knowing what a chain is I can't really progress. I know that a chain must be an ideal, else we couldn't have ideal multiplication between A and B, for example. So I assume that
$$I_1\subseteq I_2 \subseteq ... = B$$
not that I find that an acceptable definition of a chain at all. Then would a union of chains be $\cup_i I_i=B$ or would it be $B \cup C$?
Well that's left me really confused anyway. If anyone could clarify I'd be greatly appreciative. Thanks for any replies!
|
I would interpret it as saying
$I \sum_{\alpha\in \mathcal{A}} J_\alpha = \sum_\alpha IJ_\alpha$
whenever $\{J_\alpha : \alpha \in \mathcal{A}\}$ is a set of ideals,
(product distributes over sums)
and
$I \cdot \bigcup_\alpha J_\alpha = \bigcup_\alpha IJ_\alpha$
whenever $\{J_\alpha : \alpha \in \mathcal{A}\}$ is a chain of ideals
(product distributes over chains).
A set of ideals $\{J_\alpha : \alpha \in \mathcal{A}\}$ is a chain of ideals if for all $\alpha, \beta \in \mathcal{A}$, either $J_\alpha \subseteq J_\beta$ or $J_\beta \subseteq J_\alpha$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/638676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Orbit space of S3/S1 is S2
I'm having trouble finishing this homework assignment. I did the first part by showing that the orbits are invariant: every element from the same $(S^1(z_1, z_2) \in S^3/S^1)$ is mapped to the same point in $\mathbb{R}$.
For the second part I found the following equations. For $(z_1, z_2), (w_1, w_2) \in S^3$:
$z_1 z_2 = w_1 w_2$
$\bar{z_1} \bar{z_2} = \bar{w_1} \bar{w_2}$
$z_1 \bar{z_1} = w_1 \bar{w_1} = r$ (for some $r$)
$z_2 \bar{z_2} = w_2 \bar{w_2} = 1 - r$
But I don't think that's enough to prove $(z_1, z_2) = (w_1, w_2)$, is there any more information I left out?
For the third part I showed that $(f, g, h)$ maps into $S^2$ since $\|(f,g,h)(z_1, z_2)\|^2 = 1$ for $(z_1, z_2) \in S^3$. I think that because it is well defined and point-separating we certainly know that its image of $X$ is the whole of $S^2$, but I can't grasp why. Don't I still have to prove that every element of $S^2$ has a preimage in $S^3/S^1$?
|
To do part $(2),$ note that if you know the values of the first two functions, you know $\Re z_1 z_2$ and $\Im z_1 z_2,$ so you know $z_1 z_2 = C.$ So, you know that $z_2 = C/z_1.$ You can then solve $|z_1|^2 + |z_2|^2 = 1; |z_1|^2 - |z_2|^2 = D$ (where $D$ is whatever $\tilde{h}$ tells you) for $|z_1|$, so you know $z_1$ up to multiplication by $\exp(i \theta)$ for some $\theta$. Since you know $z_1 z_2,$ to get the same $C$ you have to multiply $z_2$ by $\exp(-i \theta) = \overline{\exp(i \theta)}$, so that tells you that the three functions separate points.
For the third part: Since $X$ is compact, the image of $X$ is a compact subset of $S^2.$ The quotient $X$ is a manifold (you should check that), and the map is a smooth non-singular map (check that too). Therefore, by invariance of domain, the image is open in $S^2.$ The only open and closed subsets of $S^2$ are itself and the empty set, since $S^2$ is connected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/638767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Satisfying the inequality of a bounded derivative Hi, I am having an issue proving this inequality.
The problem:
Suppose: $G(x) = |x|^2 + 1$
Show: $\left|\frac{d}{dx}G(x)\right| \leq cG$ (A bounded derivative)
My initial attempt would use the Lipschitz condition (or Grönwall's lemma?). However, I am unsure how to finish the problem.
|
First we know that
$$ |\frac{d}{dx} G(x)| = |2x| = 2|x|$$
but when $|x| \geq 1$ you know that $|x| \leq |x|^2$, and when $|x| < 1$ you have $|x| < 1 \le |x|^2+1$, so it is always true that $|x| \leq |x|^2 +1$. We also know that $ \frac{1}{c}\left|\frac{d}{dx} G(x)\right| = |x|$ (in our case the constant is $1/c = 1/2$), so from this we get:
$$ \frac{1}{c} |\frac{d}{dx} G(x)| = |x| \leq x^2+1 = G(x)$$
$$ \rightarrow |\frac{d}{dx} G(x)| \leq c G(x) $$
Hope this helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/638848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Quaternion identity proof If $q \in \mathbb{H}$ satisfies $qi = iq$, prove that $q \in \mathbb{C}$
This seems kind of intuitive since the quaternions extend the complex numbers. I am thinking that $q=i$ works because I know that $ij = k$, $ji = -k$, which extends to all combinations of $i,j,k$; I think this means I have to use $ijk = i^2 = -1$.
|
If you put $\;q=a+bi+cj+dk\;$ , then
$$\begin{align*}qi=ai-b-ck+dj\\
iq=ai-b+ck-dj\end{align*}$$
Well, what do you deduce about the coefficients $\;a,b,c,d\in\Bbb R\;$ above?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/638922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Why isn't $\int\sin(ix)~dx$ equal to $i\cos(ix)+C$ ? I was playing around with imaginary numbers, and I tried to solve $$\int\sin(ix)~dx$$ and ended up getting $$i\cos(ix)+C$$
But apparently the answer is $$i\cosh(x)+C$$
I was just wondering, is this correct? And what does the "$h$" stand for/mean? Where did it even come from. Thanks in advance.
|
Here are two useful definitions / relations
$$\cosh(x) = \frac{e^x + e^{-x}}{2}$$
$$\cos(x) = \frac{e^{ix} + e^{-ix}}{2}$$
Using these definitions you can see that
$$\cos(ix) = \frac{e^{i(ix)} + e^{-i(ix)}}{2} = \frac{e^{-x} + e^x}{2} = \cosh (x)$$
So you did get the same answer, but you just had it in a different form.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/639102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Prove that the Lebesgue measure of a particular set is zero. I am doing revision and got extremely stuck with the following exercise, which appeared in an exam from the previous year.
Consider the measure space $(\mathbb R, \mathbb B, \lambda)$ where $\mathbb B$ is the Borel sigma algebra and $\lambda$ the Lebesgue measure. Let $\delta > 0$ and
\begin{align*}
V := \left\{x \in \mathbb R: \exists \text{ infinitely many } p \in \mathbb Z, \, q \in \mathbb N \text{ where } \left|x - \frac{p}{q}\right| \le \frac{1}{q^{2+\delta}}\right\}.
\end{align*}
Show that $\lambda(V) = 0$.
I wish I had an attempt for this, but unfortunately I don't know how to start, I'm afraid.
Please help me.
|
For any finite $K > 0$, consider the set
$$
V(K) := \left\{x \in \mathbb R: (\lvert x\rvert < K)\land \left( \exists \text{ infinitely many } p \in \mathbb Z, \, q \in \mathbb N \text{ where } \left|x - \frac{p}{q}\right| \le \frac{1}{q^{2+\delta}}\right)\right\}.
$$
Proving that each $V(K)$ has measure $0$ suffices, because $V = \bigcup\limits_{k=1}^\infty V(k)$.
For $p \in \mathbb{Z}, q\in \mathbb{N}$, let
$$A(p,q)= \left\lbrace x \in \mathbb{R} : \left\lvert x-\frac{p}{q}\right\rvert \leqslant \frac{1}{q^{2+\delta}} \right\rbrace.$$
Then $\lambda(A(p,q)) = 2q^{-(2+\delta)}$. Further, let
$$B(q,K) = \bigcup_{\lvert p\rvert < 2Kq} A(p,q).$$
Then $\lambda(B(q,K)) \leqslant 5Kq\cdot 2q^{-(2+\delta)} = 10K\cdot q^{-(1+\delta)}$, and
$$\sum_{q=1}^\infty \lambda (B(q,K)) \leqslant 10 K \sum_{q=1}^\infty \frac{1}{q^{1+\delta}} < \infty.$$
By the Borel-Cantelli lemma, the set
$$W(K) = \bigcap_{n=1}^\infty \bigcup_{q=n}^\infty B(q,K)$$
is a null set. But $V(K) \subset W(K)$ for $K \geqslant 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/639198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
The sum of areas of 2 squares is 400 and the difference between their perimeters is 16cm. Find the sides of both squares. The sum of areas of 2 squares is 400 and the difference between their perimeters is 16cm. Find the sides of both squares.
I HAVE TRIED IT AS BELOW BUT ANSWER IS NOT CORRECT.......CHECK - HELP!
Let side of 1st square=x cm.
∴ Area of 1st square=x²cm²
GIVEN,
Sum of areas =400cm²
∴ Area of 2nd square=(400-x²)cm²
AND side of 2nd square=√(400-x²) i.e. 20-x .......(1)
Difference of perimeters=16cm.
THEN-
4x-4(20-x)=16 (ASSUMING THAT 1ST SQUARE HAS LARGER SIDE)
X=12
HENCE - SIDE OF 1ST SQUARE = 12CM ;
SIDE OF 2ND SQUARE=20-12=8CM. [FROM (1)]
WHICH IS NOT THE REQUIRED ANSWER AS SUM OF AREAS OF SQUARES OF FOUNDED SIDES IS NOT 400CM²
|
Well, your mistake came about in the step $$\sqrt{400-x^2}=20-x,\tag{$\star$}$$ which isn't true in general. But why can't we draw this conclusion? Observe that if we let $y=-x,$ then $y^2=x^2,$ so $400-y^2=400-x^2.$ But then we can use the same (erroneous) reasoning to conclude that $$20-y \overset{(\star)}{=} \sqrt{400-y^2} = \sqrt{400-x^2} \overset{(\star)}{=} 20-x,$$ from which we can conclude that $y=x.$ But $y=-x,$ so the only way we can have $y=x$ is if $x=y=0.$ Hence, $(\star)$ is true if and only if $x=0,$ and we certainly can't have $x=0$ in this context.
Instead, note that since the difference in perimeters is $16$ cm, then the smaller of the two squares must have sides that are $4$ cm shorter than those of the larger square's sides. That is, if $x$ is the length of the larger squares sides, we need $$x^2+(x-4)^2=400.$$ Can you expand that and take it from there?
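If you want to check the numbers after expanding, here is a quick SymPy sketch (just an illustration, not part of the method):

```python
# Quick check of x^2 + (x - 4)^2 = 400, where x is the larger side in cm.
from sympy import symbols, solve

x = symbols('x')
print(solve(x**2 + (x - 4)**2 - 400, x))  # [-12, 16] (order may vary) -> sides 16 cm and 12 cm
```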
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/639271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Homomorphism of modules and Tensor Product. Let $\phi: A \rightarrow B$ be a ring homomorphism. Let $M$ be an $A$-module. We can think of $B$ as an $A$-module via the action $A\times B \rightarrow B$, $(a,b)\mapsto\phi(a)\cdot b$.
So we can construct the $A$-module $B \otimes_A M$ (equivalently $M \otimes_A B$). Furthermore, $B \otimes_A M$ can be thought of as a $B$-module via $B\times (B \otimes _AM)\rightarrow (B\otimes _A M)$ defined by $(b',b\otimes m)\mapsto b'b\otimes m$.
Question: Why is $\operatorname{Hom}_A(M,N)\cong\operatorname{Hom}_B(B\otimes_A M,N)$, where $N$ is a $B$-module and on the left-hand side $N$ is considered as an $A$-module as described above?
|
Hint: Given an $A$-linear map $f : M \to N$, define $B \times M \to N$ by $(b,m) \mapsto b \, f(m)$. Check that this is $A$-bilinear, hence lifts to an $A$-linear map $h : B \otimes_A M \to N$ characterized by $h(b \otimes m)= b \, f(m)$. Check that it is actually $B$-linear. Conversely, given a $B$-linear map $h : B \otimes_A M \to N$, check that $f : M \to N$, $m \mapsto h(1 \otimes m)$ is $A$-linear. Finally, prove that these constructions are inverse to each other.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/639347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Give an example of Euclidean space. The question asks what happens if we remove one hypothesis of the theorem.
$\textbf{Theorem}$: Let {$\phi_{n}$} be orthonormal system in a complete Euclidean Space R. Then {$\phi_{n}$} is complete if and only if R contains no nonzero element orthogonal to all elements of { $\phi_{n}$}.
$\textbf{Question}$: Give an example of Euclidean Space $R$ and orthonormal system {$\phi_{n}$} in $R$ such that R contains no nonzero element orthogonal to every $\phi_{n}$, even though {$\phi_{n}$} fails to be complete.
So, if we can give such an example, then by the above theorem $R$ cannot be complete.
$\textbf{Definition 1}$: $\{\phi_{n}\}$ is complete if the linear combinations of elements of $\{\phi_{n}\}$ are everywhere dense in $R$.
$\textbf{Definition 2}$: $R$ is a Euclidean space if $R$ is a linear space with a scalar product.
To be honest, I couldn't find any example myself. I always find this quite challenging: when you drop one condition of a theorem, it obviously doesn't hold in general; otherwise it wouldn't be a theorem. Thanks in advance.
|
Take separable Hilbert space $H$ with basis $e_1, ..., e_n, ...$ Now take the subspace generated (algebraically) by $e_2,e_3, ..., e_n, ...$ and the vector $e_1 + 1/2 e_2 + ... + 1/n e_n + ...$. This is a Euclidean space. The system $e_2, ... , e_n, ...$ is maximal orthonormal, but it is not complete, because the subspace itself is dense in $H$ and $e_2, ..., e_n,...$ is not a complete system in $H$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/639524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Locally path-connected implies that the components are open If $X$ is a locally path-connected space, then its connected components are open.
I am trying to prove this, but for some reason it doesn't seem right to me, knowing that components are always closed. If the statement is true, wouldn't it be the case that the components are the whole space $X$?
|
Hints:
1) If $X$ is locally path-connected, then path components of $X$ are open
2) If $X$ is locally path-connected, then path components and connected components coincide
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/639606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
Solving a differential equation with natural log I am given:
$x\dfrac{dy}{dx}=\dfrac{1}{y^3}$
After separating and integrating, I have:
$y^4/4=\ln x+C$
I am supposed to solve this equation, but I'm stuck here. Should I solve explicitly so I can keep $C$?
EDIT:
A solution I came up with last night was:
$y=(4\ln x+C)^{1/4}$
|
Try differentiating to see if you got the correct solution!
You can compute
$$
\frac{dy}{dx} = \frac{1}{4}\left(4 \ln x + C\right)^{-3/4}\left(\frac{4}{x}\right) = \frac{1}{x}\left(4 \ln x + C\right)^{-3/4}
$$
so
$$
x\frac{dy}{dx} = \left(4 \ln x + C\right)^{-3/4}.
$$
Is this equal to $\dfrac{1}{y^{3}}$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/639668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Infinite direct product of rings free. Let $A$ be a commutative ring (viewed as an $A$-module over itself) that is not a field. Are there some conditions that guarantee that $\prod_{k=0}^\infty A$ is free? What if $A=\mathbf{Z}$ or more generally any pid?
|
Well, if $A$ is a field, then $\prod_\mathbb{N} A$ is certainly free. I claim that the direct product $\prod_\mathbb{N} \mathbb{Z}_4$ is also free, where $\mathbb{Z}_4 = \mathbb{Z}/4\mathbb{Z}$.
To prove this let
$\{c_i\}_{i\in\mathcal{I}}$ be a basis for $\prod_\mathbb{N}\mathbb{Z}_2$, where $\mathcal{I}$ is some indexing set. For each $i$ let $b_i$ be the corresponding element of $\prod_\mathbb{N}\mathbb{Z}_4$. That is, the entries of $b_i$ are all $0$'s and $1$'s, and agree with the entries of $c_i$. I claim that $\{b_i\}_{i\in\mathcal{I}}$ is a basis for $\prod_\mathbb{N}\mathbb{Z}_4$.
To see this, consider, the following commutative diagram of abelian groups
$$
\begin{array}{ccccc}
\textstyle\bigoplus_\mathcal{I} \mathbb{Z}_2 & \xrightarrow{\times2} & \textstyle\bigoplus_\mathcal{I} \mathbb{Z}_4 & \longrightarrow & \textstyle\bigoplus_\mathcal{I} \mathbb{Z}_2 \\
\downarrow & & \downarrow & & \downarrow \\
\textstyle\prod_\mathbb{N} \mathbb{Z}_2 & \xrightarrow{\times2} & \textstyle\prod_\mathbb{N} \mathbb{Z}_4 & \longrightarrow & \textstyle\prod_\mathbb{N} \mathbb{Z}_2
\end{array}
$$
where the first and third vertical arrows are determined by the $c_i$'s, and the middle one is determined by the $b_i$'s. Both rows are short exact sequences and the vertical arrows on the left and right are known to be isomorphisms, so the middle arrow is also an isomorphism by the short five lemma.
The same argument shows that $\prod_{\mathbb{N}} \mathbb{Z}/p^2\mathbb{Z}$ is free for any prime $p$. Moreover, we can iterate the argument to prove that $\prod_{\mathbb{N}} \mathbb{Z}/p^{k}\mathbb{Z}$ is free for any prime $p$, where $k$ is a power of $2$.
I have no idea whether $\prod_{\mathbb{N}}\mathbb{Z}/8\mathbb{Z}$ is free.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/639874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Pre-cal trigonometric equation problem Am I correct to factor out a 2 first?
$$2\sin^2 2x =1$$
$$2(\sin^2 x-1) =1$$
$$2\cos^2 x =1$$
$$\cos^2 x ={1\over2}$$
$$\cos x =\pm{\sqrt{2}\over 2 }$$
i'm only looking for solutions from $$0≤ x ≤ 2\pi $$
$$x = {\frac{\pi}{4}},{\frac{7\pi}{4}},{\frac{3\pi}{4}},{\frac{5\pi}{4}}$$
thanks for any corrections
|
Hint: let $y = \sin 2x$ and solve for $y$ first using algebra. Then figure out what value(s) of $x$ would make the equation true.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Equality case of triangle inequality with functions If $f, g: \mathbb{R} \rightarrow \mathbb{R}$ such that $\left|f(x) + g(x)\right| = \left|f(x)\right| + \left|g(x)\right|$ for all $x \in \mathbb{R}$, then must $f = cg$ for some $c \gt 0$?
|
No. Consider $f(x)=x^2$, $g(x)=x^4$. $f+g$ is always positive, and so are $f$ and $g$. So the equality holds, but your condition is false.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Check this is a Hilbert norm: $ \ell^2 $ with norm $\| \cdot \| := \| \cdot \|_{\ell^2} + \| \cdot \|_{\ell^p}$ Take $p \geq 2$, so that $\ell^2 \subset \ell^p$ and the $\ell^p$-norm makes sense on $\ell^2$.
According to my calculation this norm is equivalent to the $\ell^2$-norm: given a Cauchy sequence w.r.t. $\| \cdot \|$, it is a Cauchy sequence w.r.t. each norm. Using the completeness of $\ell^2$ and the fact that $\|u\|_{\ell^p} \leq \| u \|_{\ell^2}$, I can force the sequence to converge (in the new norm) to its $\ell^2$-limit. So the space is complete, and by the inverse mapping theorem the $\ell^2$-norm is equivalent to my norm (applied to the identity map).
So this norm is Hilbertizable. But I can't prove in an efficient way that it is (or isn't) a Hilbert norm.
|
In order for a norm, of a normed space to come from an inner product, it has to satisfy the parallelogram identity:
$$
\|x+y\|^2+\|x-y\|^2=2\|x\|^2+2\|y\|^2.
$$
Your norm does not satisfy such an identity, i.e.,
$$
x=e_1,\,\,y=e_2,\quad \|x\|=\|x\|_2+\|x\|_p=2=\|y\|,\,\,
$$
$$
\|x-y\|=\|x+y\|=\sqrt{2}+\sqrt[p]{2}
$$
Thus
$$
\|x+y\|^2+\|x-y\|^2=2\left(\sqrt{2}+\sqrt[p]{2}\right)^2,
$$
while
$$
2\|x\|^2+2\|y\|^2=2\cdot 2^2+2\cdot 2^2=16,
$$
and $2\left(\sqrt{2}+\sqrt[p]{2}\right)^2<16$ whenever $p>2$, so the identity fails.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Please explain this simple rule of logarithms to me Right I know this one is simple and I know that I just need a push to make it sink in in my head..
I am studying control systems and in one of the tutorial examples the tutor says
Show that
$$20\log(1/x) = -20\log(x)$$
I know that logarithms turn multiplication into addition and division into subtraction, but for my own understanding I just need someone to slowly show me how this works.
If I take the log of the numerator I have $20\log(1) = 0$ but I don't know where to go from here.. So do I now just take the log of the denominator and as the numerator was zero it is just minus whatever the log of the denominator is... Getting myself a bit muddled.. Thanks
|
You have : $0 = 20\times\log(1) = 20\times\log(x\times\frac1{x})$
But $\log(x\times \frac1{x})=\log(x)+\log(\frac1{x})$.
So $0=20(\log(x)+\log(\frac1{x}))$ and you have your result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 2
}
|
Condition for $N! > A^N$ I am given $A$ , I need to find minimum value of $N$ such that the condition $N! > A^N$ holds.
EXAMPLE : If $A=2$ then minimum $N=4$ and similarly if $A=3$ then minimum $N=7$.
How to solve this problem?
|
Solve
$$\left(\frac{n}{e}\right)^n\sqrt{2\pi n}-A^n = 0$$
with numerical methods, for example the bisection method.
This approximation of $n!$ is good enough even to settle the $A=2$ case.
The rounded-up result is the desired number.
Since
$$n^n > n!$$
for all $n>1$, the desired number must be >A .
So, by induction you can easily prove that the inequality also holds for
all larger numbers.
Since
$$(3A)! > A^{3A}$$
for any $A\ge1$ (because of $n!>(\frac{n}{3})^n$, which follows easily
from Stirling's formula), $3A$ is an upper bound for all $A$.
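If you just want to compute the answer for a given $A$, a direct scan is enough for moderate $A$; here is a small Python sketch (not the bisection described above, and the bound $3A$ guarantees it terminates):

```python
def min_n(A: int) -> int:
    """Smallest N with N! > A**N, found by scanning upward (fine for moderate A)."""
    fact, n = 1, 1
    while True:
        fact *= n
        if fact > A ** n:
            return n
        n += 1

print(min_n(2), min_n(3))  # 4 7
```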
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to determine if $2+x+y$ is a factor of $4-(x+y)^2$? I know it is a factor, but how could I have determined that it was? Feel free to link whatever concept is needed rather than solve it. I'm studying for the CLEP and it's one of the practice problems. When I expand it I get nonsense.
|
The solution's already in the other answers, but in many cases, as in this one, you can try some substitution:
$$t:=x+y\implies\;\text{is}\;\;2+t\;\;\text{a factor of}\;\;4-t^2\;?$$
and now all depends on you remembering the high school algebra's slick formula, namely difference of squares:
$$4-t^2=(2^2)-t^2=(2-t)(2+t)\;\;\text{and etc.}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Unique combination of sets We start with a finite number of $N$ sets, $\boldsymbol{X}_1,\ldots,\boldsymbol{X}_N$, each containing a finite number of integers. The sets do not in general have the same number of elements. The goal is to find all possible unique combinations that you can get by taking unions of some of these sets, where every set must be used (i.e., all ways of grouping the sets and taking the union within each group).
Example for $N = 4$ the possible combinations are:
*
*$\boldsymbol{X}_1,\boldsymbol{X}_2,\boldsymbol{X}_3, \boldsymbol{X}_4$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_2),\boldsymbol{X}_3, \boldsymbol{X}_4$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_2),(\boldsymbol{X}_3 \bigcup\boldsymbol{X}_4)$
*$\boldsymbol{X}_1,\boldsymbol{X}_2,(\boldsymbol{X}_3 \bigcup\boldsymbol{X}_4)$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_3),\boldsymbol{X}_2,\boldsymbol{X}_4$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_3),(\boldsymbol{X}_2\bigcup\boldsymbol{X}_4)$
*$\boldsymbol{X}_1,\boldsymbol{X}_3,(\boldsymbol{X}_2\bigcup\boldsymbol{X}_4)$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_4),\boldsymbol{X}_2,\boldsymbol{X}_3$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_4),(\boldsymbol{X}_2\bigcup\boldsymbol{X}_3)$
*$\boldsymbol{X}_1,\boldsymbol{X}_4,(\boldsymbol{X}_2\bigcup\boldsymbol{X}_3)$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_2\bigcup\boldsymbol{X}_3),\boldsymbol{X}_4$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_2\bigcup\boldsymbol{X}_4),\boldsymbol{X}_3$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_3\bigcup\boldsymbol{X}_4),\boldsymbol{X}_2$
*$\boldsymbol{X}_1,(\boldsymbol{X}_2\bigcup\boldsymbol{X}_3\bigcup\boldsymbol{X}_4)$
*($\boldsymbol{X}_1\bigcup\boldsymbol{X}_2\bigcup\boldsymbol{X}_3\bigcup\boldsymbol{X}_4$)
This could be encoded in a matrix as:
$\begin{bmatrix}0,0,0,0\\1,1,0,0\\1,1,2,2\\0,0,1,1\\1,0,1,0\\1,2,1,2\\0,1,0,1\\1,0,0,1\\1,2,2,1\\0,1,1,0\\1,1,1,0\\1,1,0,1\\1,0,1,1\\0,1,1,1\\1,1,1,1\end{bmatrix}$,
where each column corresponds with a set and each row with a combination; a $0$ indicates that the set is not combined with any other set and $1$, $2$, etc. indicates a union of the sets with the same number.
How do I generate this sequence in general (for my particular application I expect $N<20$). Can anyone point me in the right direction?
|
I would generate the partitions of $N$. For $4$, they are $4, 3+1, 2+2, 2+1+1, 1+1+1+1$; then assign that many sets to each part. The $4$ corresponds to your combination $15$. For $3+1$ there are four ways to choose the $3$, giving your combinations $11,12,13,14$. When you have multiple parts of the same size, you need some way not to repeat, like saying the lowest-numbered set in any of those is in the first part. Then $2+2$ gives $3,6,9$, and so on.
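If it helps, here is a short Python sketch of my own that enumerates the groupings directly as set partitions; for $N=4$ it produces the $15$ rows encoded in your matrix:

```python
def set_partitions(items):
    """Yield every partition of `items` into nonempty blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        # put `first` into each existing block in turn ...
        for i, block in enumerate(partition):
            yield partition[:i] + [[first] + block] + partition[i + 1:]
        # ... or give `first` a block of its own
        yield [[first]] + partition

print(len(list(set_partitions([1, 2, 3, 4]))))  # 15, the Bell number B(4)
```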
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Consequences of Schur's Lemma Schur's Lemma states that given two irreducible representations $(\rho,V)$ and $(\pi,W)$ of a finite group $G$ (V and W are linear vector spaces over the same field), and a homomorphism $\phi\colon V\to W$ such that $\phi(\rho(g)\mathbf{v}) = \pi(g)(\phi(\mathbf{v}))$ for all $g$ in $G$ and all $\mathbf{v}$ in $V$, then either $\phi$ is an isomorphism or $\phi(\mathbf{v}) = \mathbf{0}$ for all $\mathbf{v}$ in $V$. I understand that the first of these would imply that $V$ and $W$ are equivalent irreducible representations (meaning that $\rho$ and $\pi$ are related by a similarity transformation). But what is the consequence for $V$ and $W$ if $\phi(\mathbf{v}) = \mathbf{0}$ for all elements of $V$? A textbook I have read said that this indicates $V$ and $W$ are distinct irreducible representations. What does this mean? Does it mean that $V\bigcap W = \mathbf{0}?$ Or would it only mean that if $\rho$ and $\pi$ are identical maps? If so, how can I derive this consequence?
|
It doesn't imply anything about $V$ and $W$, just that $\phi$ is the zero map; there is always a zero map between any two representations.
The point of Schur's Lemma is that any non-zero map between irreducible representations is an isomorphism. So if you have a non-zero map $V\to W$, then it is an isomorphism, and $V$ and $W$ are equivalent. On the other hand, if $V$ and $W$ are not equivalent, then the only map $V\to W$ is the zero map. (Perhaps what your textbook is saying is that if all maps $V\to W$ are zero, then $V$ and $W$ are distinct; but the "all" is necessary).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Again, improper integrals involving $\ln(1+x^2)$ How can I check for which values of $\alpha $ this integral $$\int_{0}^{\infty} \frac{\ln(1+x^2)}{x^\alpha}\,dx $$ converges?
I managed to do this in $0$ because I know $\ln(1+x)\sim x$ near 0.
I have no idea how to do this in $\infty$ where nothing is known about $\ln(x)$ .
Will someone help me ?
Thanks!
|
HINT
I suggest you develop $\log(1+x^2)$ as an infinite series (Taylor); divide each term by $x^\alpha$, compute the antiderivative, and look at where the problems arise when you compute the integral between zero and infinity.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
}
|
Show that $f(x)=0$ for all $x\geq0$ I have been struggling with this problem..
Q. Let $f(x)$, $x\geq 0$, be a non-negative continuous function, and let $F(x)=\int_0^x f(t) dt$, $x\geq0$. If for some $c>0$, $f(x)\leq cF(x)$ for all $x\geq 0$, then show that $f(x)=0$ for all $x\geq0$ .
I have tried everything in my ability, but in vain. I get a feeling that this can be solved using Mean Value theorem. Any ideas? Please help!!
|
Let $$\phi(t) = e^{-ct} F(t)$$ Then $\phi(0) = 0$, and $\phi(t) \ge 0$ for all $ t \ge 0$.
Furthermore, $$\phi'(t) = e^{-ct}(F'(t) - c F(t)) \le 0$$hence $\phi(t) = \int_0^t \phi'(\tau) d \tau \le 0$, and so $\phi(t) =0 $ for all $t \ge 0$.
If $ϕ(t)=0$ for all $t≥0$ , then $F(t)=0$ for all $t≥0$ . Since $f$ is continuous, $F$ is differentiable and $F ′ =f$ , hence $f=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $4^{2n+1}+3^{n+2} : \forall n\in\mathbb{N}$ is a multiple of $13$ How to prove that $\forall n\in\mathbb{N},\exists k\in\mathbb{Z}:4^{2n+1}+3^{n+2}=13\cdot k$
I've tried to do it by induction. For $n=0$ it's trivial.
Now for the general case, I decided to throw the idea of working with $13\cdot k$ and try to prove it via congruences. So I'd need to prove that $4^{2n+1}+3^{n+2}\equiv0\pmod{13} \longrightarrow 4^{2n+3}+3^{n+3}\equiv0\pmod{13}$, that is, $4^{2n+1}+3^{n+2}\equiv4^{2n+3}+3^{n+3}\pmod{13}$
But I have no clue how to do is. Any help?
|
Since you can use congruence arithmetic, you can exploit it to the hilt to prove it more simply
$\qquad\qquad\qquad \begin{eqnarray}{\rm mod}\ 13\!:\,\ 4^{2n+1}\!+3^{n+2} &=\,& 4\cdot \color{#0a0}{16}^n +\, \color{#c00}9\cdot 3^n \\ &\equiv\,& 4\,\cdot\, \color{#0a0}3^n\, \color{#c00}{-\,4} \cdot 3^n\equiv 0\quad {\bf QED} \\
\ \ {\rm by}\ \ &&\!\!\!\!\!\color{#0a0}{16}\equiv \color{#0a0}3,\,\ \color{#c00}{-4}\equiv \color{#c00}{9}\end{eqnarray}$
That $\color{#0a0}{\,16\equiv 3\,\Rightarrow\,16^n\equiv 3^n}$ follows by the Congruence Power Rule (below), i.e. by inductive application of the Congruence Product Rule. Notice how structuring the proof in this (congruence) arithmetical form makes the induction obvious: $ $ congruent numbers have congruent powers.
This illustrates the great power of congruence arithmetic - it enables reuse of our well-honed knowledge of integer arithmetic to simplify many problems that are innately arithmetical.
Congruence Product Rule $\rm\quad\ A\equiv a,\ \ and \ \ B\equiv b\ \Rightarrow\ \color{#c00}{AB\equiv ab}\ \ \ (mod\ m)$
Proof $\rm\ \ m\: |\: A\!-\!a,\ B\!-\!b\ \Rightarrow\ m\ |\ (A\!-\!a)\ B + a\ (B\!-\!b)\ =\ \color{#c00}{AB - ab} $
Congruence Power Rule $\rm\qquad \color{}{A\equiv a}\ \Rightarrow\ \color{blue}{A^n\equiv a^n}\ \ (mod\ m)$
Proof $\ $ It is true for $\rm\,n=1\,$ and $\rm\,A\equiv a,\ A^n\equiv a^n \Rightarrow\, \color{blue}{A^{n+1}\equiv a^{n+1}},\,$ by the Product Rule, therefore the result follows by induction on $\,n.$
See this answer for further congruence arithmetic rules and their proofs.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 0
}
|
Position and nature of singularities of an algebraic function (Ahlfors) I want to solve the following exercise, from Ahlfors' Complex Analysis text, page 306:
Determine the position and nature of the singularities of the algebraic function defined by $w^3-3wz+2z^3=0.$
Here is my solution attempt. I would appreciate your opinion (is it true?)
The critical points $\{c_k\}$ are the zeros of the leading coefficient as well as the zeros of the discriminant of the polynomial above, which I'll denote by $P(w,z)$ from now on. Since the leading coefficient $a_0(z)=1$ is non-vanishing, the critical points can only occur as zeros of the discriminant of $P(w,z)$ (which is the resultant of $P$ and $P_w$). According to my calculations (i.e. obtaining the resultant by a sequence of polynomial divisions, starting with $P$ divided by $P_w$) the discriminant is $3z^4-3z$
. It has zeros whenever $z_0=0$ or $z_k=\exp(2\pi ik/3)$ for $k \in \{1,2,3\}$. According to the analysis on page 304, all points $\{z_k \}_{k=0}^3$ are ordinary algebraic singularities.
We are left with examining the point $z=\infty$. Unfortunately, the book only finds a bound on the degree of the pole at infinity. Moreover, it is not even guaranteed that it is a pole (apparently, it could also be an ordinary point). I have no idea how to determine the nature of the singular point at $\infty.$
To sum up, I have two questions:
*
*Is the part regarding the zeros of the discriminant true?
*How can I determine the nature at $z=\infty$?
Thanks.
|
Regarding 1, yes.
Regarding 2, we can show the following.
*
*$w$ cannot be bounded as $z$ tends to infinity, by dividing both sides of $w^3-3wz+2z^3=0$ by $z^3$.
*$\frac{w}{z}$ must be bounded as $z$ tends to infinity, by dividing both sides of $w^3-3wz+2z^3=0$ by $w^3$.
Therefore $z=\infty$ is an algebraic pole of order 1.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/640938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Integration Techniques - Adding [arbitrary] values to the numerator. Suppose you wanted to evaluate the following integral (judging from the answer below, $\int \frac{x-2}{x^2+4x+13}\,dx$).
Where did the 4 come from? I understand that it makes the solution work, but how would you make an educated guess to put a 4? And how in the future would I solve similar questions?
|
We want to have a fraction with the form
$$\frac{f'}f$$
so since the derivative of the denominator $x^2+4x+13$ is $2x+4$ so we write the numerator
$$x-2=\frac 1 2 (2x-4)=\frac 1 2 (2x+4)-4$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Comparison of notation for sets Different authors use different notation, no question here...but doesn't this make the study of maths a little more difficult, always chasing different definitions of how a set is represented? I ask this because I've studied linear algebra from two different sources; Axler and Cooperstein.
For example, the set of all polynomials over a field $\mathbb{F}$.
$$\text{ Axler: }\mathcal{P}(\mathbf{F})$$
$$\text{ Cooperstein: }\mathbb{F}[x]$$
I understand that, if one truly understands the material, then the notation of the set makes little to no difference. But some notations are just a difference in font, like the real numbers; $\mathbb{R}$ or $\mathbf{R}$, simple to infer immediately. Does there come a point where agreement is decided upon as notation or, at this level, does it really make a difference?
(I know this is a soft question, but I've never really asked and always wondered, and I suppose this is the forum for such questions, so long as the tag refers to it...)
|
I'm a second year graduate student and I used to wonder the same thing. There are commonly used symbols, but by no means is there agreement on what symbol to use in every case. I think it mainly comes down to style and the fact there are only a finite number of symbols out there to represent the ideas you want to get across. I prefer $\mathbb{R}$ for the reals, but someone might think $\bf{R}$ looks more aesthetically pleasing (or I might already be using $\bf{R}$ to denote a ring, so I can't use it to denote the reals). Sometimes you can't avoid introducing new notation if a symbol is already in use. The higher up in mathematics you go, the more you start to see different notations, but it does become natural. I hope that helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Room for computational geometry in advanced algorithms course I am currently putting together an independent study in advanced algorithms and because of my interest in (computational) geometry, wanted to include as many interesting algorithms from this field as possible. Does anyone have any suggestions for material that might be suitable?
You may assume the math background acquired from an undergraduate CS degree, along with a high motivation to learn more if necessary (i.e. to learn about interesting problems).
|
Given a bunch of ink blobs on a plane. Find the shortest path between any two given points out of the ink and not touching the ink.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
There is a positive integer $y$ such that for a polynomial with integer coefficients we have $f(y)$ as composite Show that if $f(x)=a_nx^n+\cdots+a_1x+a_0$ with $a_i \in \mathbb{Z}$, then there is a positive integer $y$ such that $f(y)$ is composite.
To prove this, we suppose that $f(x)=p$. Then for $f(x+kp)$ we have $f(x)+Kp$ where $K$ is a constant. I'm stuck on a specific part of understanding this proof: why is it that $p \mid f(x) \Rightarrow p \mid (f(x)+Kp)$? I'm also stuck understanding why $f(x+kp)$ is necessarily also prime.
|
Look at $(x+kp)^n$
$$
(x+kp)^n = x^n + {n \choose 1} x^{n-1} k p + {n \choose 2} x^{n-2} k^2 p^2 + \cdots \\ =
x^n + \alpha_n p \text{ where $\alpha_n$ is an integer} $$
Similarly for other powers. This gives
$$ f(x+kp) = a_n (x^n + \alpha_n p) + a_{n-1} (x^{n-1} + \alpha_{n-1} p) + \cdots \\=
a_n x^n + a_{n-1} x^{n-1} + \cdots + (a_n \alpha_n + a_{n-1} \alpha_{n-1}+\cdots) p\\
=f(x) + K p
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Integers divide several solutions to Greatest Common Divisor equation I'm not sure about the topic's correctness, but my problem is the following:
Suppose $u_1,v_1$ and $u_2,v_2$ are two different solutions for $au_i + bv_i = 1$, then $a \mid v_2-v_1$ and $b\mid u_1-u_2$.
Well, I have tried to prove this without success, but here are some of my thoughts so far. I want to show that $(v_2-v_1)=ak$ for some $k\in \mathbb{Z}$, and I also know that $au_1+bv_1 = au_2+bv_2 \implies a(u_1-u_2) = b(v_2-v_1)$. In this last equality I know that $gcd(a,b)=1$ from the initial assumption, but what can I say about $gcd((v_2-v_1),(u_1-u_2))$ ? Does that help me in any way?
Best regards
|
If $u_1,v_1$ and $u_2,v_2$ are solutions to $au_i+bv_i=1$, then $$0=au_1+bv_1-au_2-bv_2=a(u_1-u_2)+b(v_1-v_2).$$
Thus $a \mid b(v_1-v_2)$. Now use $(a,b)=1$ to conclude $a \mid (v_1-v_2)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How can I efficiently derive $x$ and $y$ from $z$ where $z=2^x+3^y$? How can I efficiently derive $x$ and $y$ from $z$ where $z=2^x+3^y$?
Note: $x$, $y$ and $z$ are integer values and $z$ is a $4096$-bit integer or even larger.
This should work for all $z>1$.
And if the equation is $z=2^x\cdot 3^y$, then what is your answer?
|
If $z = 2^x \ 3^y$, then define $a_k = {\log_2 (z) \over{2^k}}$ rounded to the nearest integer.
Let $k = 2$ and $\beta = a_1$
If $ z \equiv_{2^\beta} 0$, $\beta \to \beta + a_k$
Else, $\beta \to \beta - a_k$
Regardless, $k \to k+1$
Repeat until $a_k = 0$, then $x \in \{\beta-1, \beta, \beta+1 \}$ (simple to check which)
After you know $x$, it becomes clear that $y = \log_3 ({z \over{2^x}})$
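For the product case $z = 2^x \cdot 3^y$ there is also a very direct route; here is a Python sketch of my own (not the log-based search above), which is cheap even for 4096-bit $z$ since it only counts trailing zero bits and divides out 3s:

```python
def split_2_3(z: int):
    """Return (x, y) with z == 2**x * 3**y, assuming z has that form."""
    x = (z & -z).bit_length() - 1      # exponent of 2 = number of trailing zero bits
    m = z >> x                          # odd part, which should be 3**y
    y = 0
    while m % 3 == 0:
        m //= 3
        y += 1
    assert m == 1, "z is not of the form 2**x * 3**y"
    return x, y

print(split_2_3(2**1000 * 3**700))      # (1000, 700)
```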
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
(Highschool Pre-calculus) Solving quadratic via completing the square I'm trying to solve the following equation by completing the square:
$x^2 - 6x = 16$
The correct answer is -6,1. This is my attempt:
$x^2 - 6x = 16$
$(x - 3)^2 = 16$
$(x - 3)^2 = 25$
$\sqrt{(x -3)^2} = \sqrt{25}$
$x - 3 = \pm5$
$x =\pm5 - 3$
$x = -8,2$
I did everything according to what I know, but my answer was obviously wrong. Any help is appreciated. Thanks
|
$x^2 - 6x$ isn't the same as $(x-3)^2$, so your equation $(x-3)^2 = 16$ is wrong.
$(x-3)^2$ is actually $x^2 - 6x + 9$, so you should write
$$x^2 - 6x + 9 = 25$$
and then
$$(x-3)^2 = 25$$.
So your third equation is correct, even though your second wasn't.
I don't agree with your equation $\sqrt{(x-3)^2} = \sqrt{25}$. Technically it is correct, but you should know that in general $\sqrt{A^2}$ is not always $A$. In general, $\sqrt{A^2} = |A|$. So taking the square roots of both sides is not a good way to explain this. Instead, write $x-3 = \pm \sqrt{25}$. (It is a fact that if $z^2 = a$ and $a \geq 0$, then $z = \pm \sqrt{a}$.)
Your main mistake, and the only one that leads to an error in the result, is where you add 3 to the left side of your equation, but subtract 3 from the right side, and obtain $x = \pm 5 - 3$. You should instead add 3 to both sides. You need to do the same thing to both sides.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Inequality proof involving series I have a question which starts like this:
Show that for $n>m$ we have $S_m < S_n\leq S_m+\frac{1}{m}\frac{1}{m!}$
Where $S_n=\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+ ... +\frac{1}{n!}$
I have tried using induction on n but that doesn't work for me, can somebody just point me in the right direction on how I can go about proving this.
thanks
|
Hint.
First note that the required bound is independent of $n$. So you won't need to use $n$ in an essential way. In the worst case, $n$ will be very large, so you might as well write down the infinite sum $S = \frac{1}{(m+1)!} + \frac{1}{(m+2)!} + \cdots$. You want to prove that we have $S \leq \frac{1}{m} \frac{1}{m!}$. To do this, compare the series $S$ with a convenient geometric series.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Most efficient way to solve this combination problem I have 4 very large lists (l1, l2, l3, l4) of components (approx 9 million items in each list). Each list item has a cost and a value.
I want to know how I can achieve the maximum combined value for an agreed combined total cost:
So, over all combinations in the 4 lists I want to know:
Max( l1[a].value + l2[b].value + l3[c].value + l4[d].value)
where
l1[a].cost + l2[b].cost + l3[c].cost + l4[d].cost <= X
X in the above is the total amount that I can spend.
I can obviously do this using a brute force method, but that's going to take a long time to compute. I wondered if there was a more efficient way of doing this.
Any suggestions?
|
This is not the knapsack problem... But going through every possibility would take $O(n^4)$ time. This can be reduced to $O(n^3\log(n))$: sort the last list by cost and precompute, for each prefix, the maximum value in that prefix; then go through every possible triple from the first 3 lists and find by binary search the most expensive affordable prefix of the last list, which gives the best object to take from it.
In practice you can probably prune off large parts of the search space by taking the 'best' object (with the highest value/cost ratio) from one list, then the best from another that can fit, until you have one from each list. This takes $O(n\log(n))$ time, and can be repeated for all $4!$ ways you can order the lists in the sequence. This will give a probably reasonably good bound on the optimal solution, and then you can run the above procedure but in each loop skipping all objects that make the total value necessarily less.
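Here is a toy-sized Python sketch of that $O(n^3\log n)$ reduction (an illustration with made-up data and names, not something that will finish on 9-million-item lists, since the triple loop alone is $O(n^3)$):

```python
from bisect import bisect_right

def best_value(l1, l2, l3, l4, budget):
    """Each list holds (cost, value) pairs; maximize total value with total cost <= budget."""
    l4 = sorted(l4)                          # sort the 4th list by cost
    costs4 = [c for c, _ in l4]
    best4, running = [], float('-inf')       # best4[i] = max value among l4[0..i]
    for _, v in l4:
        running = max(running, v)
        best4.append(running)

    best = float('-inf')
    for c1, v1 in l1:
        for c2, v2 in l2:
            for c3, v3 in l3:
                left = budget - c1 - c2 - c3
                i = bisect_right(costs4, left) - 1   # priciest affordable entry of l4
                if i >= 0:
                    best = max(best, v1 + v2 + v3 + best4[i])
    return best

print(best_value([(1, 5), (2, 9)], [(1, 1)], [(1, 2)], [(1, 3), (3, 10)], budget=6))  # 18
```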
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does dividing by zero ever make sense? Good afternoon,
The square root of $-1$, AKA $i$, seemed a crazy number allowing contradictions as $1=-1$ by the usual rules of the real numbers. However, it proved to be useful and non-self-contradicting, after removing identities like $\sqrt{ab}=\sqrt a\sqrt b$.
Could dividing by $0$ be subject to removing stuff like $a\times\frac1a= 1$?
A way I can think of defining this is to have $\varepsilon=\frac10$. Remember it may be that $\varepsilon\neq\varepsilon^2$; we don't know whether $\left(\frac ab\right)^n=\frac{a^n}{b^n}$, and other things alike, so they can't disprove anything.
This seems fun! What about defining a set $\mathbb E$ of numbers of form $a + b\varepsilon$? There would be interesting results as $(a+b\varepsilon)\times0 =b$.
Are these rules self consistent? Can such a number exist? If not, what rules can we change to make it exist? If yes, what can we find about it? How can it help us?
If there is no utter way of dividing by zero without getting a contradiction no matter the rules used, how can that be proven?
|
If $\,0\,$ has an inverse then $\ \color{#c00}1 = 0\cdot 0^{-1} = \color{#c00}0\,\Rightarrow\ a = a\cdot \color{#c00}1 = a\cdot \color{#c00}0 = 0\,$ so every element $= 0,\,$ i.e. the ring is the trivial one element ring.
So you need to drop some ring axiom(s) if you wish to divide by zero with nontrivial consequences. For one way to do so see Jesper Carlström's theory of wheels, which includes, e.g. the Riemann sphere.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Is the set of all invertible $n \times n$ matrices a vector space? I'm studying Algebra and I'm asked to prove or disprove "Is the set of all invertible $n \times n$ matrices a vector space?" I assume this is with respect to the usual matrix sum and scalar multiplication. I found that it is true, but I'm not sure how to prove it.
My problem here is that this statement is too "broad", i.e. I cannot create a matrix with arbitrary values a,b,c,d,(...) considering that I don't know the size of the matrix.
My idea was to first prove that the set of all invertible $2 \times 2$ matrices of real numbers is a vector space and then to show that this property could be extended to bigger matrices. However, I don't know how to do it, and it's precisely here that I need a little help; how can I show that we can extend our statement?
Is it enough to say that the matrix's size doesn't affect in any way properties we need to check?
Is my overall strategy wrong?
Any form of help would be very appreciated on this dubious answer.
|
The set of all invertible $n\times n$ matrices of real numbers is NOT a vector space.
For example, the unit matrix $I$ is invertible and so is $-I$. But their sum $I+(-I)=0$ is definitely not invertible!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
}
|
Prove that $f(p(X))=p(X)-p'(X)$, where p is a polynomial, is bijective Let $V_d$ be the vector space of all polynomials with real coefficients of degree less than or equal to $d$. The linear map $f: V_3 \rightarrow V_3$ is given by: $f(p(X)) := p(X)-p'(X)$. Show that $f$ is bijective.
To show that $f$ is injective I could show that $\dim(\operatorname*{null}(f))=0$. So I guess I would need to show that the only polynomial of degree less than or equal to $3$ that is equal to its own derivative is the zero polynomial. But I don't know how to do that.
If I would simply set up this equation: $$\sum_{j=0}^3(\alpha_j - (j+1)\alpha_{j+1})X^j = 0$$ how could I show that the trivial solution is the only solution?
And to show that $f$ is surjective, would it be sufficient to say that $p(X)-p'(X)$ yields an equation that contains a linear combination of the basis of $V_3$ and with arbitrary coefficients that linear combination spans $V_3$?
|
Your linear map $f:V_3\to V_3$ is equal to $f=1-L$ with $L:p\in V_3\mapsto p'\in V_3$.
Notice that $L$ is nilpotent, that is, that some power of $L$ is zero.
It follows by a calculation that $g=1+L+L^2+\cdots+L^k$, for any $k$ sufficiently large so that $L^{k+1}=0$, is an inverse map to $f$: indeed, you can simply compute $fg$ and $gf$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/641992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Field containing sum of square roots also contains individual square roots
Let $F$ be a field of characteristic $\neq 2$. Let $a\neq b $ be in $F$. Suppose $\sqrt{a}+\sqrt{b}\in F$. Prove that $\sqrt{a}\in F$.
We have $(\sqrt{a}+\sqrt{b})^2=a+b+2\sqrt{ab}\in F$, so $\sqrt{ab}\in F$. Then $\sqrt{ab}(\sqrt{a}+\sqrt{b})=a\sqrt{b}+b\sqrt{a}\in F$. What then?
|
Hint $\ $ If a field $F$ has two $F$-linear independent combinations of $\ \sqrt{a},\ \sqrt{b}\ $ then
you can solve for $\ \sqrt{a},\ \sqrt{b}\ $ in $F.\,$ For example, the Primitive Element Theorem
works that way, obtaining two such independent combinations by
Pigeonholing the infinite set $\ F(\sqrt{a} + r\ \sqrt{b}),\ r \in F,\ |F| = \infty,\,$
into the finitely many fields between F and $\ F(\sqrt{a}, \sqrt{b}),$ e.g. see PlanetMath's proof.
In the OP, note that $F$ contains the independent $\ \sqrt{a} - \sqrt{b}\ $ since
$$ \sqrt{a}\ - \sqrt{b}\ =\ \dfrac{\,\ a\ -\ b}{\sqrt{a}+\sqrt{b}}\ \in\ F $$
To be explicit, notice that $\ u = \sqrt{a}+\sqrt{b},\ \ v = \sqrt{a}-\sqrt{b}\in F\ $ so solving the linear system for the roots yields $\ \sqrt{a}\ =\ (u+v)/\color{#c00}2,\ \ \sqrt{b}\ =\ (v-u)/\color{#c00}2,\ $ both of which are clearly $\,\in F,\,$ since $\,u,\,v\in F\,$ and $\,\color{#c00}2\ne 0\,$ in $\,F,\,$ so $\,1/\color{#c00}2\,\in F.\,$ This works over any field where $\,2\ne 0\,,\,$ i.e. where the determinant (here $2$) of the linear system is invertible, i.e. where the linear combinations $\,u,v\,$ of the square-roots are linearly independent over the base field.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/642187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Why is that an automorphism that preserves $B$ and $H$ an automorphism of $\Phi$ that leaves $\Delta$ invariant? Let $L$ be a semisimple finite dimensional Lie algebra, $H$ its CSA and $\Phi$ its root system with base $\Delta$ and $B = B(\Delta) = H\bigoplus_{\alpha \succ 0}L_\alpha$. If we have an automorphism of $L$ that keeps $B$ and $H$ invariant, why is it an automorphism of $\Phi$? (Why does that keep $\Phi$ invariant?) Further, why does it keep $\Delta$ invariant?
|
$\Phi$ is determined by the pair $(L,H)$, so if the automorphism preserves both $L$ and $H$ it must preserve $\Phi$.
Again, $\Delta$ is determined by the set of all positive roots. But from the Borel subalgebra you can reconstruct the positive roots (they are the eigenvalues of $\operatorname{ad}H$ on $B$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/642248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Galois group command for Magma online calculator? I need to test if a family of 7th deg and 13 deg equations are solvable. I'm new to Magma, so my apologies, but what would I type in,
http://magma.maths.usyd.edu.au/calc/
to determine the Galois group of $x^5+5x-12=0$ (for example)?
|
> P<x>:=PolynomialAlgebra(Rationals());
> f:=x^5+5*x-12;
> G:=GaloisGroup(f);
> print G;
Symmetric group G acting on a set of cardinality 5
Order = 120 = 2^3 * 3 * 5
Although the permutation group on [1..Degree($f$)] is permutationally isomorphic to the Galois group, the bijection with the set of roots of your separable irreducible polynomial $f$ is not determined. For more details see the Magma handbook.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/642329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
}
|
Is there an entire function which is not a polynomial such that $\lim_{z\to\infty}\frac{f\left(z\right)}{z}=\infty $ I'm wondering if there's an entire function $f$ which is not a polynomial such that $\lim_{z\to\infty}\frac{f\left(z\right)}{z}=\infty$?
Thanks in advance!
|
Your hypotheses imply that $f(z)/z$ is everywhere meromorphic on the Riemann sphere, and is thus a rational function.
The only affine singularity is at zero, so the denominator of $f(z) / z$ must be a power of $z$.
But because $f(z)$ is entire, the denominator of $f(z) / z$ cannot have more than one factor of $z$, and thus $f(z)$ is a polynomial.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/642433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $n,k\in\mathbb N$, solve $2^8+2^{11}+2^n=k^2$. If $n,k\in\mathbb N$, solve $$2^8+2^{11}+2^n=k^2$$
It's hard for me to find an idea. Some help would be great. Thanks.
|
HINT:
We have $\displaystyle2^8+2^{11}=2304$
Now, $2^8+2^{11}+2^0=2304+1\ne k^2$
$\displaystyle 2^8+2^{11}+2^1=2304+2\equiv2\pmod8$, but $a^2\equiv0,1,4\pmod8$
So, $n\ge2$ let $n=m+2$ where $m\ge0$
$\displaystyle 2^8+2^{11}+2^{m+2}=4(576+2^m)\implies 576+2^m$ must be perfect square
By the same argument, $m\ge2$; let $m=r+2$ where $r\ge0$
$\displaystyle576+2^{r+2}=4(144+2^r)$
Continue in the same way.
One observation:
$$(2^4)^2+2\cdot2^4\cdot2^6+(2^6)^2=(2^4+2^6)^2$$
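A tiny brute-force check in Python (just to confirm where this is heading) finds only one exponent up to $50$:

```python
from math import isqrt

for n in range(51):
    s = 2**8 + 2**11 + 2**n
    if isqrt(s) ** 2 == s:
        print(n, isqrt(s))   # prints only: 12 80
```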
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/642564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Calculate the pseudo inverse of the matrix The subject is to calculate the pseudo inverse of the matrix $\begin{equation*}
\mathbf{A} = \left(
\begin{array}{ccc}
1 & 0 \\
2 & 1 \\
0 & 1 \\
\end{array}
\right)
\end{equation*}$
My answer is as follows: (SVD decomposition)
First, $\begin{equation*}
\mathbf{A^TA} = \left(
\begin{array}{ccc}
5 & 2 \\
2 & 2 \\
\end{array}
\right)
\end{equation*}$, with eigenvalues $\lambda_1 = 6, \lambda_2 = 1$, and eigenvectors $\begin{equation*}
\mathbf{x_1} = \frac{1}{\sqrt{5}}\left(
\begin{array}{ccc}
2 \\
1 \\
\end{array}
\right)
\end{equation*}$,
$\begin{equation*}
\mathbf{x_2} = \frac{1}{\sqrt{5}}\left(
\begin{array}{ccc}
-1 \\
2 \\
\end{array}
\right)
\end{equation*}$, so the matrix $\begin{equation*}
\mathbf{V} = \frac{1}{\sqrt{5}}\left(
\begin{array}{ccc}
2 & -1 \\
1 & 2 \\
\end{array}
\right)
\end{equation*}$.
Second, $\begin{equation*}
\mathbf{AA^T} = \left(
\begin{array}{ccc}
1 & 2 & 0 \\
2 & 5 & 1 \\
0 & 1 & 1 \\
\end{array}
\right)
\end{equation*}$, with eigenvalues $\lambda_1 = 6, \lambda_2 = 1,\lambda_3 = 0$, and eigenvectors $\begin{equation*}
\mathbf{x_1} = \frac{1}{\sqrt{30}}\left(
\begin{array}{ccc}
2 \\
5 \\
1 \\
\end{array}
\right)
\end{equation*}$,
$\begin{equation*}
\mathbf{x_2} = \frac{1}{\sqrt{5}}\left(
\begin{array}{ccc}
-1 \\
0 \\
2 \\
\end{array}
\right)
\end{equation*}$,
$\begin{equation*}
\mathbf{x_3} = \frac{1}{\sqrt{6}}\left(
\begin{array}{ccc}
2 \\
-1 \\
1 \\
\end{array}
\right)
\end{equation*}$
, so the matrix $\begin{equation*}
\mathbf{U} = \left(
\begin{array}{ccc}
\frac{2}{\sqrt{30}} & - \frac{1}{\sqrt{5}} & \frac{2}{\sqrt{6}} \\
\frac{5}{\sqrt{30}} & 0 & - \frac{1}{\sqrt{6}} \\
\frac{1}{\sqrt{30}} & \frac{2}{\sqrt{5}} & \frac{1}{\sqrt{6}} \\
\end{array}
\right)
\end{equation*}$, and
$\begin{equation*}
\mathbf{\Sigma} = \left(
\begin{array}{ccc}
6 & 0 \\
0 & 1 \\
0 & 0 \\
\end{array}
\right)
\end{equation*}$.
Then, the pseudo inverse becomes: $A^+ = V \Sigma^+U^T$.
The problem comes down to this: when I was checking the SVD decomposition, I found $A\ne U\Sigma V^T$. However, I found nothing odd in the calculation. Please help me to point out the error.
|
Recall, for $\mathbf{\Sigma}$ we take the square roots of the non-zero eigenvalues and populate the diagonal with them, putting the largest in $\mathbf{\Sigma}_{11}$, the next largest in $\mathbf{\Sigma}_{22}$ and so on until the smallest value
ends up in $\mathbf{\Sigma}_{mm}$.
$$\begin{equation*}
\mathbf{\Sigma} = \left(
\begin{array}{ccc}
\sqrt{6} & 0 \\
0 & 1 \\
0 & 0 \\
\end{array}
\right)
\end{equation*}$$
Everything else is correct (great job), although you can simplify some of the items in $\mathbf{U}$. For example, $\dfrac{2}{\sqrt{6}} = \sqrt{\dfrac{2}{3}}$.
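A quick NumPy cross-check (purely as an illustration) confirms that the factorization matches $A$ once $\sqrt{6}$ is used, and that $V\Sigma^+U^T$ agrees with the pseudo-inverse:

```python
import numpy as np

A = np.array([[1., 0.], [2., 1.], [0., 1.]])
U, s, Vt = np.linalg.svd(A)              # s is [sqrt(6), 1]
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vt))                                      # True
print(np.allclose(np.linalg.pinv(A), Vt.T @ np.linalg.pinv(Sigma) @ U.T))  # True
```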
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/642647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Solve $x^3 = 27\pmod {41}$ I don't know how to approach this problem. Can anyone give me a hint? If it matters, the first part of the question was to find the order of $5$ in the field $\mathbb Z_{41}$ (the field mod $41$), which I did, but I'm not sure how it relates to the second part.
Thanks for your help.
|
We shall use the lemma that if $a^r\equiv a^s\pmod{n}$, then $r\equiv s\pmod{\operatorname{ord}_n(a)}$. Note that $6$ is a primitive root $\pmod{41}$, and that $6^5\equiv 27\pmod{41}$. We can write $x^3 = 6^{3k}$ for some $k$, so that $$6^{3k}\equiv 6^5\pmod{41}\implies 3k\equiv 5\pmod{\varphi(41)=40}.$$ It is straightforward to proceed from here.
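Since $\gcd(3,40)=1$, the cube map is a bijection mod $41$, so the solution is unique; a one-line brute-force check in Python (illustration only) confirms it is $x\equiv 3\pmod{41}$, consistent with $k\equiv 15\pmod{40}$ and $6^{15}\equiv 3$:

```python
# Every residue is tested, so this also verifies uniqueness of the cube root mod 41.
print([x for x in range(41) if pow(x, 3, 41) == 27])  # [3]
```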
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/642746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Question regarding specific limits I was going through some past final exams for my "Analysis 1" class and I came across the following problem, which I've been so far unable to solve.
Let $f''$ be continuous in $(-1,1)$; with $f(0)=0$, $f'(0)=3$ and $f''(0)=5$.
$$
\lim_{h\rightarrow 0}\frac{f(h)+f(-h)}{h^{2}}
\\
\lim_{h\rightarrow 0}\frac{1}{h^{2}}\int_{0}^{h}f(x)dx
$$
Regarding the first limit, I came up with the following, although I'm not sure it's correct:
Given that:
$$\lim_{h\rightarrow 0}\frac{f(h)}{h^{2}}=\lim_{h\rightarrow 0}\frac{f(-h)}{h^{2}}$$
This gives:
$$\lim_{h\rightarrow 0}\frac{f(h)+f(-h)}{h^{2}}=2\lim_{h\rightarrow 0}\frac{f(h)}{h^{2}}=2\lim_{h\rightarrow 0}\frac{f(0+h)-f(0)}{h} \frac{1}{h}=
2 \left [\lim_{h\rightarrow 0}\frac{f(0+h)-f(0)}{h} \ \lim_{h\rightarrow 0}\frac{1}{h} \right ]$$
Which tends to $+\infty$
As I said, I'm not completely convinced by my reasoning. Confirmation/another solution would be helpful.
I haven't been able to solve the 2nd limit. Any ideas?
EDIT: I made a typo on the 2nd limit. It's $\frac{1}{h^{2}}$ rather than $h^{2}$. Sorry...
|
For the second limit, note that $f$ is continuous, and hence so is $|f|$. So, there exists $m,M$ such that $m \leq |f| \leq M$. So, by using these bounds, we can show that the second limit is $0$. The first limit does not exist, as $\lim_{h \rightarrow 0}\frac{1}{h}$ does not exist. (For $h \rightarrow 0+$ the limit is $\infty$, while for $h \rightarrow 0-$ the limit is $-\infty$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/642844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Binary concatenation In decimal, I have the numbers 4 and 5 for example. I want to concatenate them into the number 45, but then in binary. In decimal, it's just a matter of sticking the numbers together, but I need to do that in binary. I'm not using a programming language, I'm using logical electronic circuitry. But that aside, I need to know how to do it for the general mathematical case if you can call it like that.
4 in decimal = 100 in binary,
5 in decimal = 101 in binary,
45 in decimal = 101101 in binary
As you can see, with binary it's not just a matter of sticking both sequences together.
What I need is an explanation of how to get to the answer 45, in binary, by performing whatever arithmetic or manipulation of the two binary sequences is necessary.
Thanks for reading, I'd appreciate any help.
|
$(45)_{10}$ (i.e., in decimal) is just
$$(45)_{10}=5_{10}((4)_{10}+(5)_{10})$$
$(4)_{10}+(5)_{10}$ is in binary, $(100)_{2}+(101)_2=(1001)_2$. Multiply this by $(101)_2$ to get
$$(1001)_2\times (101)_2=(1001)_2+(100100)_2$$
This on addition of course gives $(101101)_2=(45)_{10}$.
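More generally, concatenating the decimal strings of $a$ and $b$ amounts to computing $a\cdot 10^{d}+b$, where $d$ is the number of decimal digits of $b$, and that multiplication and addition can be carried out entirely with binary shift-and-add circuitry ($10 = 1010$ in binary). A small Python sketch of the idea, as my own illustration:

```python
def concat_decimal(a: int, b: int) -> int:
    """Decimal concatenation a||b computed arithmetically: a * 10**digits(b) + b."""
    return a * 10 ** len(str(b)) + b

print(bin(concat_decimal(4, 5)))    # 0b101101      == 45
print(bin(concat_decimal(12, 34)))  # 0b10011010010 == 1234
```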
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How do I deal with sines or cosines greater than $1$? When I'm solving trigonometric equations, I occasionally end up with a sine or cosine that's greater than $1$ -- and not on the unit circle. For example, today I had one that was $3 \tan 3x = \sqrt{3}$, which simplified to $\tan 3x = \frac{\sqrt{3}}{3}$, which simplified to $\sin 3x = \sqrt{3}$ and $\cos 3x = 3$. So far as I know, I can't divide this by $3$ (to isolate $x$) until I get angles with a sine of $\sqrt{3}$ and a cosine of $3$, respectively. So, how do I reduce this cosine so that I can find an angle on the unit circle with that cosine? Or, am I doing something very wrong to get this as a cosine in the first place?
Thanks!
evamvid
|
Hint: If you are solving $\tan x = a/b$, you have to realize $a/b$ can be written as $(a/c)/(b/c)$ for any nonzero $c$. You have to pick a value of $c$ before you can try to do that. Look at $c= \pm \sqrt{a^2+b^2}$ and see if either of those values gets you further.
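A numerical illustration of the hint for the equation in the question, where $a=\sqrt3$, $b=3$ and therefore $c=\sqrt{a^2+b^2}=2\sqrt3$ (a sketch using only Python's standard library; the variable names are chosen for illustration):

```python
import math

a, b = math.sqrt(3), 3.0     # tan(3x) = a / b
c = math.hypot(a, b)         # sqrt(a**2 + b**2) = 2 * sqrt(3)

print(a / c, b / c)          # 0.5 and 0.866..., i.e. sin(3x) = 1/2 and cos(3x) = sqrt(3)/2
x = math.atan2(a, b) / 3     # one solution, from 3x = pi/6
print(x, math.pi / 18)       # both approximately 0.1745
```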
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Powers of 10 in binary expansion I noticed an interesting pattern the other day. Let's take a look at the powers of 10 in binary:
*
*$10^0$ = 1 = 1 b
*$10^1$ = 10 = 10 10 b
*$10^2$ = 100 = 1100 100 b
*$10^3$ = 1000 = 111110 1000 b
Basically, it seems that $10^n$ for any non-negative integer $n$ written out in base 2 ends with its base 10 representation.
Does this pattern go on forever, and if so, can anyone provide me with a satisfactory explanation as to why this happens?
|
Multiplying a number by $2^{n}$ appends $n$ zeroes to its binary expansion. Since $10^{n}=2^{n}5^{n}$, the binary expansion of $10^{n}$ is the binary expansion of $5^{n}$ followed by $n$ zeroes; this is directly analogous to the base $10$ case. Moreover, $5^{n}$ is odd, so its binary expansion ends in a $1$. Hence $10^{n}$ in binary ends in a $1$ followed by $n$ zeroes, which is exactly the base $10$ representation of $10^{n}$, so the pattern does go on forever.
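A quick check of the pattern for the first several exponents (a small sketch; the range of $n$ is an arbitrary choice):

```python
for n in range(10):
    binary = bin(10**n)[2:]      # binary expansion of 10**n without the '0b' prefix
    decimal = str(10**n)         # its base-10 representation
    print(n, binary, binary.endswith(decimal))   # prints True for every n
```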
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Solve Bessel equation with order $\frac{1}{2}$ using Frobenius method Consider the Bessel equation of index $n= \frac{1}{2}$
$x^2y''(x) +xy'(x)+(x^2-\frac{1}{4})y(x) = 0$ $, x>0$
$(i)$ Show that $y(x) = u(x)x^{-1/2}$ solves the equation above if and only if $u$ satisfies a familiar differential equation.
$(ii)$ Find the general solution for the Bessel equation valid when $x>0$
$(iii)$ What condition(s) must the arbitrary constants in $(ii)$ satisfy if the corresponding solution of the Bessel equation is to be bounded on $(0,\infty)$
I solved $(ii)$ and found the solutions to be $y_1(x) =x^{-1/2}\sin(x)$ and $y_2(x) = a_0x^{-1/2}\cos(x)+a_1x^{-1/2}\sin(x)$, where $a_0$ and $a_1$ are arbitrary.
But I'm not sure how to approach $(i)$ and $(iii)$, can anyone help me?
Thank you very much!
|
For (i), what do you get when you substitute $y(x) = u(x) x^{-1/2}$ in the differential equation?
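If a computer algebra system is at hand, the substitution can also be verified symbolically; the sketch below (assuming SymPy is available, which is of course not required for the hand computation the hint is after) shows that the Bessel expression reduces to $x^{3/2}(u''+u)$, so the familiar equation is $u''+u=0$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.Function('u')
y = u(x) * x**sp.Rational(-1, 2)           # the substitution y = u(x) * x^(-1/2)

bessel = x**2 * sp.diff(y, x, 2) + x * sp.diff(y, x) + (x**2 - sp.Rational(1, 4)) * y
print(sp.simplify(bessel / x**sp.Rational(3, 2)))   # u(x) + Derivative(u(x), (x, 2))
```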
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What is the probability that no letter is in its proper envelope? Five letters are addressed to five different persons and the corresponding
envelopes are prepared. The letters are put into the envelopes at random.
What is the probability that no letter is in its proper envelope?
|
Let $1,2,3,4,5$ be the letters and let $A,B,C,D,E$ be their proper envelopes, respectively.
1) In case of all five proper envelopes, there is only $1$ pattern.
2) In case of exactly four proper envelopes, the remaining letter is forced into its proper envelope as well, so there are $0$ patterns.
3) In case of only three proper envelopes, you have $\binom{5}{3}\times 1=10$ patterns.
4) In case of only two proper envelopes, you have $\binom{5}{2}\times 2=20$ patterns.
For example, for $A=1, B=2,C,D,E$, you have two patterns as $(C,D,E)=(4,5,3),(5,3,4).$
5) In case of only one proper envelope, you have $\binom{5}{1}\times 9=45$ patterns.
For example, when only $E=5$ is proper, you have nine patterns for $A,B,C,D$:
$$(A,B,C,D)=(2,1,4,3),(2,3,4,1),(2,4,1,3),(3,1,4,2),$$$$(3,4,1,2),(3,4,2,1),(4,1,2,3),(4,3,1,2),(4,3,2,1).$$
Hence, what you want is
$$1-\frac{1+0+10+20+45}{5!}=\frac{5!-76}{5!}=\frac{44}{120}=\frac{11}{30}.$$
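The count of $44$ derangements (and hence the probability $\frac{11}{30}$) can be confirmed by brute force over all $5!=120$ permutations; a short sketch using only the standard library:

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

n = 5
derangements = sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))
print(derangements, Fraction(derangements, factorial(n)))   # 44 and 11/30
```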
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 3
}
|
If $\,A^k=0$ and $AB=BA$, then $\,\det(A+B)=\det B$ Assume that the matrices $A,\: B\in \mathbb{R}^{n\times n}$ satisfy
$$
A^k=0,\,\, \text{for some $\,k\in \mathbb{Z^+}$}\quad\text{and}\quad
AB=BA.
$$
Prove that $$\det(A+B)=\det B.$$
|
Here's an alternative to the (admittedly excellent) answer by Andreas:
Since $A$ and $B$ commute, they are simultaneously triangularizable (over $\mathbb{C}$). However, since $A$ is nilpotent, in any basis in which $A$ is triangular the diagonal entries of $A$ are in fact $0$. Thus in the chosen basis the diagonal entries of $A + B$ and $B$ agree, so the determinants (being the product of the diagonal entries in this basis) are equal.
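A quick numerical sanity check of the statement (the matrices below are a hypothetical example chosen for illustration: $A$ is a nilpotent Jordan block and $B$ is a polynomial in $A$, so the two commute):

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])            # A @ A @ A = 0, so A is nilpotent
B = 2 * np.eye(3) + 3 * A + A @ A       # a polynomial in A, hence A @ B = B @ A

print(np.allclose(A @ B, B @ A))                  # True
print(np.linalg.det(A + B), np.linalg.det(B))     # both equal 8.0
```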
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 3,
"answer_id": 0
}
|
Looking for an elementary solution of this limit I was collecting some exercises for my students, and I found this one in a book: compute, if it exists, the limit
$$
\lim_{x \to +\infty} \int_x^{2x} \sin \left( \frac{1}{t} \right) \, dt.
$$
It seems to me that this limit exists by monotonicity. Moreover, since $\frac{2}{\pi}x \leq \sin x \leq x$ for $0 \leq x \leq \pi/2$, I could easily show that
$$
\frac{2}{\pi} \log 2 \leq \lim_{x \to +\infty} \int_x^{2x} \sin \left( \frac{1}{t} \right) \, dt \leq \log 2.
$$
WolframAlpha suggests a "closed" form for the integral, and by dominated convergence the limit turns out to be $\log 2$. However, passing to a limit in the $\operatorname{Ci}(\cdot)$ function is not really elementary. I wonder if there is a simpler approach that a student can understand at the end of a first course in mathematical analysis.
|
$$
\int_x^{2x}\sin\left(\frac{1}{t} \right)\,dt=\int_{\frac{1}{2x}}^{\frac{1}{x}}\frac{\sin(u)}{u^2}\,du,
$$
$$
\sin(u)=u+o(1)u,
$$
if $u$ is around $0$,
(here we use the elementary limit $\lim_{u\to0}\frac{\sin(u)}{u}=1$)
so
$$
\int_{\frac{1}{2x}}^{\frac{1}{x}}\frac{\sin(u)}{u^2}\,du=\int_{\frac{1}{2x}}^{\frac{1}{x}}\frac{1}{u}(1+o(1))\,du,
$$
where $o(1)\to 0$ as $u\to0$, which is the case because $x\to+\infty$. Since $\int_{\frac{1}{2x}}^{\frac{1}{x}}\frac{du}{u}=\log 2$, the integral equals $(1+o(1))\log 2$, and the limit is $\log 2$.
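A numerical check that the integral indeed approaches $\log 2\approx 0.6931$ (a sketch assuming SciPy is available for the quadrature; the sample values of $x$ are arbitrary):

```python
import math
from scipy.integrate import quad

for x in (10.0, 100.0, 1000.0):
    value, _ = quad(lambda t: math.sin(1.0 / t), x, 2.0 * x)
    print(x, value)                      # tends to log(2) = 0.693147...
```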
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 2
}
|
Non-homotopy equivalent spaces with isomorphic fundamental groups I know that two spaces $X,Y$ having isomorphic fundamental groups, $\pi_1(X,a) \cong \pi_1(Y,b)$, need not be homotopy equivalent.
But I can't find an example.
I was thinking about the Moebius strip and the circle, as both of them have fundamental group $\mathbb Z$. I can't show that they are not homotopy equivalent; the circle is the boundary of the Moebius strip, so it is not a deformation retract of it, but that does not necessarily mean that they are not homotopy equivalent.
Can someone give me an example?
Is there a way to prove that two spaces are not homotopy equivalent, or that one is not a deformation retract of the other?
|
The standard way to prove two spaces are not homotopy equivalent is to find some homotopy invariant that distinguishes them. Since they are going to have the same fundamental group, the obvious candidates are homology groups and higher homotopy groups (either of which will tell you a sphere is not homotopy equivalent to a point, as in the answer above).
If you don't have those tools available, you're going to have to do some hard work.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Function approximating this product Is there any function approximating, for large values of $p$, the quotient between the product of all primes up to $p$ and the product of those same primes each decreased by $1$?
Basically: $2/1 \cdot 3/2 \cdot 5/4 \cdot 7/6 \cdot 11/10 \cdots $
Can that be approximated, for large values of $p$, with some known function?
Thank you very much.
|
To find the behavior of
$$
\prod_{p \leq x} \frac{p}{p-1} = \prod_{p \leq x} \left(1 - \frac{1}{p}\right)^{-1}
$$
We begin by taking its logarithm, which we then rewrite as
$$
\log \prod_{p \leq x} \left(1 - \frac{1}{p}\right)^{-1} = - \sum_{p \leq x} \log\left(1 - \frac{1}{p}\right).
$$
Now $\log(1-1/p) \sim -1/p$ for large $p$, so this should yield a good first approximation. After pulling this out we obtain a sum which converges, so let's rewrite the quantity as
$$
\begin{align}
&- \sum_{p \leq x} \log\left(1 - \frac{1}{p}\right) \\
&\qquad = \sum_{p \leq x} \frac{1}{p} - \sum_{p \leq x} \left[\log\left(1 - \frac{1}{p}\right) + \frac{1}{p}\right] \\
&\qquad = \sum_{p \leq x} \frac{1}{p} - \sum_{p} \left[\log\left(1 - \frac{1}{p}\right) + \frac{1}{p}\right] + \sum_{p > x} \left[\log\left(1 - \frac{1}{p}\right) + \frac{1}{p}\right]. \tag{$*$}
\end{align}
$$
According to Mertens' formula (see the second entry here, where $M$ is the Meissel-Mertens constant),
$$
\sum_{p \leq x} \frac{1}{p} - \sum_{p} \left[\log\left(1 - \frac{1}{p}\right) + \frac{1}{p}\right] = \log\log x + \gamma + o\left(\frac{1}{\log x}\right).
$$
Using Mertens' formula again with the Abel summation formula it's possible to show that the final sum in $(*)$ is also $o\left(\frac{1}{\log x}\right)$, allowing us to conclude that
$$
-\sum_{p \leq x} \log\left(1 - \frac{1}{p}\right) = \log\log x + \gamma + o\left(\frac{1}{\log x}\right).
$$
Exponentiating this we find that
$$
\prod_{p \leq x} \left(1 - \frac{1}{p}\right)^{-1} = e^\gamma \log x + o(1).
$$
If you'd like, you can replace $x$ by $p_n$, the $n^\text{th}$ prime, and use the estimate
$$
p_n \approx n\log n + n\log\log n - n + \cdots
$$
to see that the product of the first $n$ primes is
$$
\begin{align}
\prod_{k=1}^{n} \left(1-\frac{1}{p_k}\right)^{-1} &= e^\gamma \log p_n + o(1) \\
&\approx e^\gamma \log\Bigl(n \log n + n\log\log n - n\Bigr).
\end{align}
$$
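A numerical check of the asymptotic $e^{\gamma}\log x$ (a sketch assuming SymPy is available for its prime generator; the cutoff $x=10^{6}$ is an arbitrary choice):

```python
import math
from sympy import primerange

gamma = 0.5772156649015329          # Euler-Mascheroni constant
x = 10**6

product = 1.0
for p in primerange(2, x + 1):
    product *= p / (p - 1)

print(product, math.exp(gamma) * math.log(x))   # the two values should be close
```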
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find all even natural numbers which can be written as a sum of two odd composite numbers. Find all even natural numbers which can be written as a sum of two odd composite numbers.
Please help me in solving the above problem.
|
Let $n\geq 100$ be an even number. Consider the quantities $n-91$, $n-93$ and $n-95$; one of these is a multiple of $3$, and it is not $3$ itself because $n-95\geq 100-95>3$, so it is an odd composite number.
Observing that $91$, $93$ and $95$ are composite, you conclude that every even $n\geq 100$ works. Now check the remaining even numbers below $100$ directly, and you have the solution.
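The finite check of the even numbers below $100$ can be done mechanically; a short sketch (the helper names are chosen for illustration):

```python
def odd_composite(m):
    return m % 2 == 1 and m > 1 and any(m % d == 0 for d in range(3, int(m**0.5) + 1, 2))

def representable(n):
    return any(odd_composite(a) and odd_composite(n - a) for a in range(9, n - 8, 2))

print([n for n in range(4, 100, 2) if not representable(n)])
# prints the even numbers below 100 that are NOT a sum of two odd composite numbers
```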
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Showing that $\int \frac{ \sinh (az)}{\sinh (\pi z)} \, e^{ibz} \, dz $ vanishes along three sides of a rectangle in the upper half-plane One of several ways to evaluate $$\int_{0}^{\infty} \frac{\sinh (ax)}{\sinh (\pi x)} \, \cos (bx) \, dx \, , \quad \, |a|< \pi,$$ is to sum the residues of $$ f(z) = \frac{\sinh (az)}{\sinh (\pi z)} \,e^{ibz}$$ in the upper half-plane.
But if you restrict $b$ to positive values, how do you show that $\int f(z) \, dz$ vanishes along the right, left, and upper sides of a rectangle with vertices at $\pm N, \pm N + i\left(N+\frac{1}{2} \right)$ as $N \to \infty$ through the positive integers?
I think we can use the M-L inequality (in combination with the triangle and reverse triangle inequalities) to show that that integral vanishes along the vertical sides of the rectangle.
But showing that the integral vanishes along the top of the rectangle seems a bit tricky.
|
So that my question doesn't remain unanswered, I'm going to post an answer using the hints that Marko Riedel provided in the comments.
On the right side of the rectangle (and similarly on the left side of the rectangle), we have
\begin{align} \left| \int_{0}^{N + \frac{1}{2}} \frac{\sinh\big(a(N+it)\big)}{\sinh\big(\pi(N+it)\big)} \, i e^{ib(N+it)} \, dt \right| &\le \int_{0}^{N + \frac{1}{2}} \left| \frac{\sinh\big(a(N+it)\big)}{\sinh\big(\pi(N+it)\big)} \, i e^{ib(N+it)} \right| \, dt \\ &\le \int_{0}^{N+\frac{1}{2}} \frac{e^{aN}+e^{-aN}}{e^{\pi N}-e^{-\pi N}} \, e^{-bt} \, dt \\ &\le \left(N+ \frac{1}{2}\right) \frac{e^{aN}+e^{-aN}}{e^{\pi N}-e^{-\pi N}} \to 0 \ \text{as} \ N \to \infty\end{align} since $|a| < \pi$.
And on the upper side of the rectangle, we have
$$ \begin{align} \left| \int_{-N}^{N} \frac{\sinh \big(a(t+i(N+\frac{1}{2}))\big)}{\sinh \big(\pi (t + i (N+\frac{1}{2}))\big)} \, e^{ib\left(t+i(N+1/2)\right)} \, dt \right| &\le \int_{-N}^{N} \left|\frac{\sinh \big(a(t+i(N+\frac{1}{2}))\big)}{\sinh \big(\pi (t + i (N+\frac{1}{2}))\big)} \, e^{ib\left(t+i(N+1/2)\right)} \right| \, dt \\ &\le \frac{e^{-b (N+1/2)}}{2} \int_{-N}^{N} \frac{e^{at}+e^{-at}}{\cosh \pi t} \, dt \\ &< e^{-b(N+1/2)} \int_{-\infty}^{\infty} \frac{\cosh(at)}{\cosh(\pi t)} \, dt \to 0 \ \text{as} \ N \to \infty \end{align} $$ since the restriction $|a| < \pi$ means that $\int_{-\infty}^{\infty} \frac{\cosh(at)}{\cosh(\pi t)} \, dt$ converges.
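The convergence used in the last step can be sanity-checked numerically: truncations of $\int_{-\infty}^{\infty}\frac{\cosh(at)}{\cosh(\pi t)}\,dt$ settle to a finite value whenever $|a|<\pi$ (a sketch assuming SciPy; the sample values of $a$ and the truncation points $T$ are arbitrary choices):

```python
import math
from scipy.integrate import quad

for a in (1.0, 2.0, 2.5):                  # all satisfy |a| < pi
    for T in (5.0, 10.0, 20.0):
        value, _ = quad(lambda t: math.cosh(a * t) / math.cosh(math.pi * t), -T, T)
        print(a, T, value)                 # the values settle as T grows, for each a
```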
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/643944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|