http://math.stackexchange.com/questions/214000/surface-integral-on-a-sphere-using-spherical-coordinates?answertab=votes
# Surface integral on a sphere using spherical coordinates
I have a question involving a surface integral over the unit sphere. Suppose $s_1$ and $s_2$ are two points on the unit sphere with spherical coordinates $(\theta_1, \psi_1)$ and $(\theta_2, \psi_2)$, respectively. I want to compute
$$\int_{{\bf x}\in \mathbb{S}^2} \exp\{s^T_1 \cdot {\bf x}\} \exp\{s^T_2 \cdot {\bf x}\}d\mathbf{x}$$ where $\mathbb{S}^2$ stands for the unit sphere. $s_1^T \cdot \mathbf{x}$ means the dot product of the vectors $\vec{os_1}$ and $\vec{o\mathbf{x}}$ where $o$ is the center of the sphere.
Here is my idea. I can write $d\mathbf{x}$ as $\sin \theta d\theta d\psi$ where $(\theta, \psi)$ is the spherical coordinates of $\mathbf{x}$. Without loss of generality, I can assume $s_1$ is the top point of the sphere (on the $z$ axis) so that $s_1^T \cdot \mathbf{x}$ becomes $\cos \theta$. Now I think if I can write $s_2^T \cdot \mathbf{x}$ as a function of $\theta, \theta_2, \psi, \psi_2$, I can try to do this double integral. So my question boils down to "What is the form of $s_2^T \cdot \mathbf{x}$ in terms of their spherical coordinates $\theta, \theta_2, \psi, \psi_2$".
Maybe I can do this surface integral in an easier way. Any comments and suggestions are welcome.
-
## 3 Answers
Let \begin{equation} I=\int d\Omega\, e^{({\bf s}_1+{\bf s}_2)\cdot \hat{\bf r}}, \end{equation} where $d\Omega=\sin(\theta) d\phi d\theta$ and $\hat{\bf r}$ is a unit vector in spherical coordinates.
We are free to choose our coordinate system, so we choose $\hat{\bf z}$ to lie along the ${\bf s}_1+{\bf s}_2$ direction. Then using $({\bf s}_1+{\bf s}_2)\cdot \hat{\bf r}=|{\bf s}_1+{\bf s}_2|\cos(\theta)$, where $|{\bf s}_1+{\bf s}_2|$ is the norm of the vector, \begin{equation} I=\int\limits_0^{2\pi}d\phi\int\limits^{\pi}_{0}d\theta\, \sin(\theta)e^{|{\bf s}_1+{\bf s}_2|\cos(\theta)}. \end{equation} The $\phi$ integral is trivial, while the $\theta$ integral can be easily done with the substitution $u=\cos(\theta)$, giving \begin{equation} I=4\pi\frac{\sinh(|{\bf s}_1+{\bf s}_2|)}{|{\bf s}_1+{\bf s}_2|}. \end{equation}
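As a quick numerical sanity check of this closed form, here is a minimal Python sketch (the two unit vectors and the use of SciPy quadrature are illustrative choices, not part of the answer): it compares brute-force quadrature of $\int e^{({\bf s}_1+{\bf s}_2)\cdot\hat{\bf r}}\,d\Omega$ with $4\pi\sinh(|{\bf s}_1+{\bf s}_2|)/|{\bf s}_1+{\bf s}_2|$.

```python
# Sanity check of I = 4*pi*sinh(|s1+s2|)/|s1+s2| (illustrative sketch).
import numpy as np
from scipy.integrate import dblquad

def sphere_integral(s1, s2):
    """Brute-force quadrature of exp((s1+s2).x) over the unit sphere."""
    s = s1 + s2
    def integrand(theta, phi):
        x = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        return np.exp(s @ x) * np.sin(theta)
    # dblquad integrates f(y, x): here theta (inner) over [0, pi], phi over [0, 2*pi]
    val, _ = dblquad(integrand, 0.0, 2.0 * np.pi, lambda phi: 0.0, lambda phi: np.pi)
    return val

def closed_form(s1, s2):
    k = np.linalg.norm(s1 + s2)
    return 4.0 * np.pi * np.sinh(k) / k

s1 = np.array([0.0, 0.0, 1.0])                      # "top point" of the sphere
s2 = np.array([np.sin(1.0), 0.0, np.cos(1.0)])      # arbitrary second unit vector
print(sphere_integral(s1, s2), closed_form(s1, s2))  # the two numbers should agree
```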
-
The value $(=:J)$ of the integral depends solely on the angle $2\alpha$ between the two vectors ${\bf s}_1$ and ${\bf s}_2$. Therefore we may assume $${\bf s}_1=(-\sin\alpha,0,\cos\alpha)\ ,\quad {\bf s}_2=(\sin\alpha,0,\cos\alpha)\ .$$ As ${\bf x}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$ the integrand then becomes $$\exp\bigl(({\bf s}_1+{\bf s}_2)\cdot{\bf x}\bigr)=\exp(2\cos\alpha\cos\theta)\ .$$ Therefore we obtain $$\eqalign{J&=\int_0^{2\pi}\int_0^\pi \exp(2\cos\alpha\cos\theta)\ \sin\theta\ d\theta\ d\phi\cr &=-{2\pi\exp(2\cos\alpha\cos\theta)\over 2\cos\alpha}\biggr|_{\theta=0}^\pi\cr &={2\pi\over\cos\alpha}\ \sinh(2\cos\alpha)\ .\cr}$$
-
Rotational symmetry gives you the tools to reduce a lot of the complexity in this problem. Since you have the freedom to choose your coordinate axes, one trick is to put the two points on the zx-plane, with the $z$-axis bisecting the angle between them. Then, the coordinates of the two points are $(\theta, \varphi) = (\pm \alpha, 0)$ for some fixed angle $\alpha$. The cartesian components would then be $\hat z \cos \alpha \pm \hat x \sin \alpha$ for the two vectors.
This is basically one way to arrive at Christian's insight--that the answer just depends on the angle between the points.
-
http://quant.stackexchange.com/questions/3649/why-do-some-people-claim-the-delta-of-an-atm-call-option-is-0-5?answertab=votes
# Why do some people claim the delta of an ATM call option is 0.5?
I am looking for a mathematical proof: differentiate the BS equation to calculate Delta and then show that the ATM delta is equal to 0.5. I have seen many books state that the delta of an ATM call option is 0.5, with explanations like "the probability of finishing in the money is 0.5", but I am looking for a mathematical proof.
-
Since your original statement to prove is false, I've changed the question title to asking about the claim rather than asking for a proof. – Tal Fishman Jun 21 '12 at 15:02
Can someone explain to me why [(r+sigma^2/2)*t]/[sigma*sqrt(t)] is equal to zero? I see r and sigma as positive numbers, so am I correct in thinking that mathematically the delta of a call option is not exactly equal to 0.5? – ladz Jun 21 '12 at 16:25
Are you asking when [(r+sigma^2/2)*(T-t)]/[sigma * sqrt(T-t)] is equal to zero? If so, please see my answer below – DoubleTrouble Jun 21 '12 at 16:29
Thanks, got it. So basically the delta of an ATM call option is equal to 0.5 only at maturity, and not during the life of the option. – ladz Jun 21 '12 at 16:32
1
I agree with @FKaria: Delta at maturity does not make much sense, especially if the option is ATM. Say I have a call (or a put) with strike $100$, the stock price is $100$ and the time to maturity is zero. This is probably not the set-up that people have in mind when they talk about options with $0.5$ Delta. – Richard Jun 22 '12 at 9:18
## 3 Answers
Your question is not really well formulated since you do not specify at which time the delta is equal to 0.5. What you claim is in fact only true for an ATM call option at the time of maturity.
In the Black-Scholes model the price of a call option on the asset $S$ with strike price $K$ and time of maturity $T$ equals
$$c(t,S(t),K,T) = S(t)\Phi\left(\frac{\ln\frac{S(t)}{K} + \left(r+\frac{\sigma^2}{2} \right)\tau}{\sigma \sqrt{\tau}} \right) - Ke^{-r \tau}\Phi\left(\frac{\ln\frac{S(t)}{K} + \left(r-\frac{\sigma^2}{2} \right)\tau}{\sigma \sqrt{\tau}} \right)$$
where $r$ is the risk-free rate, $\sigma$ the volatility and $\tau = T-t$. The "delta" in the Black-Scholes model is
$$\Delta(t,S(t),K,T) = \frac{\partial c}{\partial S}(t,S(t),K,T) = \Phi\left(\frac{\ln\frac{S(t)}{K} + \left(r+\frac{\sigma^2}{2} \right)\tau}{\sigma \sqrt{\tau}} \right)$$
In the case of an at the money call option we have $K=S(t)$ which means that we get
$$\ln\frac{S(t)}{K} = \ln(1) = 0$$
and we are left with
$$\Delta(t,S(t),S(t),T) = \Phi\left(\frac{\left(r+\frac{\sigma^2}{2} \right)\tau}{\sigma \sqrt{\tau}} \right)$$
The argument simplifies to $\left(r+\frac{\sigma^2}{2}\right)\frac{\sqrt{\tau}}{\sigma}$, which tends to $0$ as $\tau \to 0$, that is, as $t \to T$. Since $\Phi(x)=0.5$ if and only if $x=0$, the delta equals $0.5$ only in that limit, and is strictly greater than $0.5$ for $\tau > 0$.
Hope this helps you understand. Otherwise, do not hesitate to ask again!
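To illustrate the formula numerically, here is a minimal Python sketch (the rate, volatility and spot are illustrative values, not taken from the thread): the spot-ATM delta $\Phi(d_1)$ stays strictly above $0.5$ for $\tau > 0$ and approaches $0.5$ only as $\tau \to 0$.

```python
# Sketch: Black-Scholes call delta for a spot-ATM option (K = S), using the formula above.
from math import log, sqrt
from statistics import NormalDist

def bs_call_delta(S, K, r, sigma, tau):
    """Phi(d1) with d1 = (ln(S/K) + (r + sigma^2/2) * tau) / (sigma * sqrt(tau))."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    return NormalDist().cdf(d1)

S = K = 100.0          # at the money (spot)
r, sigma = 0.05, 0.2   # illustrative parameters
for tau in (1.0, 0.25, 0.01, 1e-6):
    print(tau, bs_call_delta(S, K, r, sigma, tau))   # > 0.5, tending to 0.5 as tau -> 0
```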
-
The Delta of a call is not defined when t=T and S_T=K; there is a kink in the payoff function. We can say that Delta will be 0.5 every time that log(F_T/K) = -0.5v*T (where F_T is the forward at time T conditioned on S_t). So necessarily F_T<K always; otherwise the Delta is greater than 0.5. – FKaria Jun 21 '12 at 19:35
If you look at the BS formula as you find it, e.g., on Wikipedia, straightforward differentiation of the call price gives the call's Delta.
You find the formula for the Delta on the wikipedia page under "The Greeks". $\Delta=\Phi(d_1)$ where $\Phi$ is the standard normal cdf and $d_1$ is given by
$$d_1 = \frac{\ln(\frac{S}K)+(r+\frac{\sigma^2}2)(T-t)}{\sigma \sqrt{T-t}}$$
where I assume that all parameters are clear. You also find it on wikipedia. If $d_1=0$ then $\Phi(d_1)=\frac12$ per the definition of the normal cdf.
When people refer to ATM options having 50 delta they usually mean ATMF, or at the money forward, given by $$S=Ke^{-r(T-t)}$$ (Note that sometimes forward prices are derived from put-call parity; then the forward price can be different.)
Thus, when the stock is ATMF, $\ln(\frac{S}K)+r(T-t) = 0$, but the volatility term remains: $d_1 = \frac{\sigma\sqrt{T-t}}{2}$. For typical volatilities and maturities this is small but not exactly zero, so $\Delta$ is close to, but slightly above, 1/2.
-
Richard, I started writing essentially the same answer when yours came up, so I decided I'd clean up yours instead. – Tal Fishman Jun 21 '12 at 14:55
Thank you Tal, I appreciate your cleaning. I think it is worth mentioning that Delta is not excactly 1/2 ... as we are doing strict mathematics here. – Richard Jun 21 '12 at 15:00
Forwards cannot be deduced from Put-Call parity. The ATM forward of S is F_T = S_0*exp(rT) so I assume that K=S_0. Regardless, the forward price has to be less than the strike to obtain a Delta of 0.5 – FKaria Jun 21 '12 at 19:41
FKaria , of course it makes sense to imply a forward price from put-call-parity. Look at wikipedia and rearrange terms just like e.g. Bloomberg does. – Richard Jun 22 '12 at 7:20
@Richard, You're right, it can be done but only for ATM strikes, which is the case we're talking about. – FKaria Jun 22 '12 at 17:58
Given that the mathematical proofs have already been given above, let me stress the intuitive aspects of it.
If you use a normal model, then you will find that the delta of an ATM option is equal to 50%, and at the same time, the probability of ending ITM (in the money) is also 50%.
Now, with a lognormal model, there is a difference between the probability and the delta. The reason is actually very simple. Imagine you run a Monte Carlo to figure out the delta of an ATM call option. Say you've got around half the paths ending above the strike, and half below. Then clearly, if you were to rerun the Monte Carlo, but starting from a slightly higher spot (say 1% higher, because you want to calculate the delta, so you 'bump' the spot up), then roughly speaking:
• all the paths that finished below the strike in the original MC will probably still finish below in the bumped MC. So for these the payoff of the option is unchanged.
• for those paths which ended in the money, given that your original spot is 1% higher, the simulated spot is also 1% higher; and 1% of that simulated spot is obviously larger than 1% of the original spot, because the simulated spot is ITM. So for these the payoff is increased by more than 1% of spot.
If you combine the 2 points above, the price impact of bumping spot up by 1% is going to be 50% x 0 + 50% x (something > 1%), so the delta is going to be higher than the probability of ending ITM. You can even see that the 'something' is itself closely tied to the actual value of the call option.
Obviously this relationship works irrespective of whether the option is ATM or not.
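A rough Monte Carlo sketch of this bump-and-revalue argument (the parameters and the 1% bump size are illustrative, and discounting is dropped by taking r = 0): under a lognormal model the estimated delta indeed comes out above the simulated probability of finishing ITM.

```python
# Sketch of the bump-and-revalue intuition: lognormal paths, ATM call, 1% spot bump.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.0, 0.2, 1.0   # illustrative parameters
n = 1_000_000
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

payoff = np.maximum(ST - K, 0.0)
prob_itm = np.mean(ST > K)                          # simulated probability of finishing ITM

# Bump the spot by 1%: every simulated terminal price scales by 1.01 (same random draws).
bump = 1.01
payoff_up = np.maximum(bump * ST - K, 0.0)
delta_estimate = np.mean(payoff_up - payoff) / (0.01 * S0)

print(prob_itm, delta_estimate)   # the delta estimate exceeds the ITM probability
```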
-
http://math.stackexchange.com/questions/165551/does-int-mathbb-rn-1-x2-1-dx-converge-in-the-riemann-sense?answertab=votes
Does $\int_{\mathbb R^n} (1 + |x|^2)^{-1} dx$ converge in the Riemann sense?
Recently I was reading an article and came across the following integral:
$$\int_{\mathbb R^n}\dfrac{1}{1+|x|^2}\ dx$$
Is this integral convergent in the Riemann sense?
-
I removed the (reference-request) and (measure-theory) tags, since they were inappropriate, but I'm not sure what to replace them with... – Ben Millwood Jul 2 '12 at 2:26
2 Answers
The integrand is positive and smooth, so Riemann or Lebesgue makes no difference. Change coordinates as follows. $$\int_{\mathbb{R}^n} {dx\over 1 + |x|^2} = \int_0^\infty\int_{S^{n-1}} {r^{n-1}\,d\sigma\,dr\over 1 + r^2} = c_n \int_0^\infty {r^{n-1}\,dr\over 1 + r^2}$$ This converges if $n = 1$. If $n\ge 2$, it diverges, since the radial integrand behaves like $r^{n-3}$ as $r\to\infty$.
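A crude numerical illustration of this dichotomy, as a sketch with arbitrary truncation radii: the truncated radial integral stabilises near $\pi/2$ for $n=1$ but keeps growing for $n \ge 2$.

```python
# Sketch: truncated radial integral of r^(n-1)/(1+r^2) on [0, R] for growing R.
from scipy.integrate import quad

def radial(n, R):
    val, _ = quad(lambda r: r**(n - 1) / (1.0 + r**2), 0.0, R)
    return val

for n in (1, 2, 3):
    print(n, [round(radial(n, R), 3) for R in (10, 100, 1000)])
# n = 1 stabilises near pi/2; n = 2 grows like log(R); n = 3 grows like R.
```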
-
It converges (in the improper Riemann sense) if and only if $n < 2$, i.e. if $n=1$. This follows immediately by comparison from the following theorem.
Suppose $E \subseteq \mathbb{R}^n$ and $f:E\to \mathbb{R}$ is continuous. Let $x_0 \in \mathbb{R}^n\setminus E$ and assume $a, C >0$ are such that $|f|$ is bounded by $$\frac{C}{|x-x_0|^a}$$ for all $x \in E$. Then
(1) If $E$ is Peano-Jordan measurable and $a < n$, then $f$ is Riemann integrable in the improper sense with finite integral.
(2) If $E \subseteq \mathbb{R}^n \setminus B(x_0,r)$ admits an exhausting sequence and $a >n$, then $f$ is Riemann integrable in the improper sense with finite integral.
-
Please can you give some reference for meee ??? – user27456 Jul 2 '12 at 1:49
1
If you wanted a reference specifically, it would be good to say that in your question (edit it, making clear that the reference part of the question is new, and re-adding the reference-request tag that I just removed, sorry) – Ben Millwood Jul 2 '12 at 2:28
http://math.stackexchange.com/questions/275482/is-there-any-trick-to-evaluate-this-integral?answertab=oldest
# Is there any trick to evaluate this integral? [duplicate]
Possible Duplicate:
Please help me to evaluate $\int\frac{dx}{1+x^{2n}}$.
Is there any trick to evaluate
$$\int_{-\infty}^\infty \frac{{\rm d} x}{x^{2n}+1}?$$
-
– Mhenni Benghorbal Jan 11 at 4:28
## marked as duplicate by user17762, amWhy, Belgi, Zev Chonoles♦ Jan 10 at 23:09
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 1 Answer
This is just a try ...
Since the integrand is even, we have $2 \int_{0}^\infty \frac{1}{x^{2n}+1}dx$. Split the integral over $(0,\infty)$ at $x=1$ and expand each piece as a geometric series:
$$\int_0^1 \frac{1}{x^{2n}+1} dx + \int_1^\infty\frac{1}{x^{2n}+1} dx \\ = \int_0^1 \sum_{k=0}^\infty (-1)^k (x)^{2kn} + \int_1^\infty \frac{1}{x^{2n}} \sum_{k=0}^\infty (-1)^kx^{-2nk} \\ = \sum_{k=0}^\infty (-1)^k \left[\frac{(x)^{2kn+1}}{2kn+1}\right]_0^1 + \sum_{k=0}^{\infty}(-1)^k \left[ \frac{x^{-2nk-2n+1}}{-2nk-2n+1}\right]_1^\infty \\$$
So, we have
$$=\sum_{k=0}^\infty(-1)^k \frac{1}{2kn+1} + \sum_{k=0}^\infty (-1)^k \frac{1}{2nk+2n-1} \\ =\sum_{k=0}^\infty (-1)^k \left( \frac{1}{2kn+1} + \frac{1}{2n(k+1) - 1}\right)$$
$$\int_{-\infty}^\infty \frac{{\rm d} x}{x^{2n}+1} = 2 \sum_{k=0}^\infty (-1)^k \left( \frac{1}{2kn+1} + \frac{1}{2n(k+1) - 1}\right)$$
Or, using complex analysis, this is just a special case of this problem.
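As a numerical sanity check of this series (a sketch; the number of terms and the quadrature call are arbitrary choices), its partial sums can be compared with direct quadrature of the integral and with the cosecant-type closed form $\pi/\bigl(n\sin\frac{\pi}{2n}\bigr)$ that the comments below allude to.

```python
# Sketch: compare the alternating series above with direct quadrature of the integral.
from math import pi, sin
from scipy.integrate import quad

def series(n, terms=200_000):
    return 2.0 * sum((-1) ** k * (1.0 / (2 * k * n + 1) + 1.0 / (2 * n * (k + 1) - 1))
                     for k in range(terms))

def integral(n):
    val, _ = quad(lambda x: 1.0 / (x ** (2 * n) + 1.0), -float("inf"), float("inf"))
    return val

for n in (1, 2, 3):
    # series, quadrature, and the cosecant form mentioned in the comments
    print(n, series(n), integral(n), pi / (n * sin(pi / (2 * n))))
```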
-
1
Try shifting some indices. Then you get $$2 \sum_{k=0}^\infty (-1)^k \left( \frac{1}{2kn+1} + \frac{1}{2n(k+1) - 1}\right)$$ $$2 \sum_{k=0}^\infty (-1)^k \frac{1}{2kn+1} - 2\sum_{k=0}^\infty (-1)^{k+1} \frac{1}{2n(k+1) - 1}$$ $$2 \sum_{k=0}^\infty (-1)^k \frac{1}{2kn+1} - 2\sum_{k=1}^\infty (-1)^{k} \frac{1}{2nk - 1}$$ $$2 \sum_{k=0}^\infty (-1)^k \frac{1}{2kn+1} + 2\sum_{k=1}^\infty (-1)^{-k} \frac{1}{2n(-k) + 1}$$ $$2 \sum_{k\in \Bbb Z}^\infty (-1)^k \frac{1}{2kn+1}$$ It seems sums of this form inevitably lead to trigonometric functions. – Peter Tamaroff Jan 10 at 23:26
@PeterTamaroff thanks for the hint!! – experimentX Jan 10 at 23:27
@PeterTamaroff how do you know it's a trigonometric function? could you give me any links? – experimentX Jan 10 at 23:32
See the duplicate of this. It is a "partial fraction decomposition" sum over the "singularities" of the cosecant of $\pi/2n$. – Peter Tamaroff Jan 10 at 23:35
@PeterTamaroff awesome!! thanks ... – experimentX Jan 10 at 23:40
http://math.stackexchange.com/questions/tagged/powers
Tagged Questions
The powers tag has no wiki summary.
2answers
27 views
rounding up to nearest square
Say I have x and want to round it up to the nearest square. How might I do that in a constant time manner? ie. $2^2$ is 4 and $3^2$ is 9. So I want a formula whereby f(x) = 9 when x is 5, 6, 7 or 8. ...
6answers
875 views
10 to the power of 3.5 [closed]
So $10^3 = 10\times 10\times 10 = 1000$, this is really easy to understand. But what about: $10^{3.5}$ My logic would suggest this was $10\times 10 \times 10\times 5 = 5000$ but the calculator ...
1answer
70 views
taking the log of $a^b$ (Project Euler problem 29)
I've been stuck on Project Euler problem 29 and thus asked a friend who solved it how to do it. What he basically did was for each power was: $\left(\frac{\log_{10}(a)}{\log_{10}(2)}\right)\cdot b$ ...
1answer
50 views
Solving an equation with $x$ as powers
How would I go about solving $$2^x -2^{x-2}=3 *2^{13}$$Hints please. Thank you.
16answers
1k views
Why is $\;n^2-\frac{n^2}{2} =\frac{n^2}{2}\;$? [closed]
Could someone please expand on how to get from $\;\displaystyle\left( n^2-\frac{n^2}{2}\right)\;$ to $\;\left(\dfrac{n^2}{2}\right)\;?\;$ I can't seem to wrap my head around that.
3answers
19 views
why if x in 1/n power >(<) y in 1/m power then x in c/n power >(<) y in c/m power?
As you might guess this is one more stupid question from non-matematician, and you are right. I found this exercise in "Algebra and trigonometry book": $7^{1/2}$ or $4^{1/4}$. After some googling I ...
6answers
355 views
How is this proof flawed?
$\sqrt{x}=-1$ $\sqrt{x}^2=(-1)^2$ $x=1$ Now substitute it into the original equation $\sqrt{1}=-1$ $1=-1$
1answer
35 views
Can we write $\sqrt[w]{z}=z^\frac{1}{w}$ when both $w$ and $z$ are complex numbers? [duplicate]
Let $w$ and $z$ be complex numbers defined in terms of real numbers $a$, $b$, $c$ and $d$ as follows: $$w = a+bi \\ z = c+di$$ Can we analogously write $\sqrt[w]{z} = z^\frac{1}{w}$ ...
1answer
210 views
Is it possible to prove the positive root of the equation ${^4}x=2$, $x=1.4466014324…$ is irrational?
(somewhat related to my earlier question) Let ${^n}a$ denote tetration $\underbrace{a^{a^{.^{.^{.^a}}}}}_{n \text{ times}}$ (or, defined recursively, ${^1}a=a$, ${^{n+1}}a=a^{({^n}a)}$). The ...
5answers
412 views
What is the value of $2^{3000}$
What is the value of $2^{3000}$? How to calculate it using a programming language like C#?
1answer
368 views
How do I calculate the 2nd term of continued fraction for the power tower ${^5}e=e^{e^{e^{e^{e}}}}$
I need to find the 2nd term of continued fraction for the power tower ${^5}e=e^{e^{e^{e^{e}}}}$ ( i.e. $\lfloor\{e^{e^{e^{e^{e}}}}\}^{-1}\rfloor$), or even higher towers. The number is too big to ...
1answer
71 views
Definite integral including the ratio and power functions of a single variable
I find trouble in calculating the following integral: $$\int_0^R \frac{m\cdot x}{m+s\cdot x^a} \,dx$$ Mathematica does not provide an output for this function, however, there seems to be an output ...
4answers
70 views
Complex power of a complex number: Find $x$ and $y$ in $x + yi = (a + bi)^{c+di}$
$$x + yi = (a + bi)^{c+di}$$ Find $x$ and $y$ in terms of $a$, $b$, $c$ and $d$. Where, $i$ is defined as $\sqrt{-1}$ and $a$, $b$, $c$, $d$ are real numbers. I defined two new real number ...
4answers
66 views
Can you raise a Matrix to a non integer number? [duplicate]
So I heard you can take a matrix A to the power 2, take it to a -3th power and multiply it by an irrational number. You can also do some other non-intuitive things like taking e to the power of a ...
2answers
65 views
Trace of the matrix power
Say I have matrix $A = \begin{bmatrix} a & 0 & -c\\ 0 & b & 0\\ -c & 0 & a \end{bmatrix}$. What is matrix trace tr(A^200) Thanks much!
1answer
42 views
Does $(g^a \bmod p)^b \equiv (g^a)^b \pmod p$ hold true?
Are these two expressions: $$(g^a \bmod p)^b$$ $$(g^a)^b \pmod p$$ one and the same? If yes, then how? And if no, then how to evaluate the first expression?
2answers
96 views
How to evaluate powers of powers (i.e. 2^3^4) in the absence of parentheses?
If you look at $2^{3^4}$, what is the expected result? Should it be read as $2^{(3^4)}$ or $(2^3)^4$? Normally I would use parentheses to make the meaning clear, but if none are shown, what would you ...
1answer
55 views
$n^0 = 1$ ? Try for this case? [duplicate]
We know that anything to the power $0 = 1$ i.e. $n^0 = 1$ My question is that, is $0^0 = 0$ or $1$ and why?
7answers
654 views
Pattern to last three digits of power of $3$?
I'm wondering if there is a pattern to the last three digits of a a power of $3$? I need to find out the last three digits of $3^{27}$, without a calculator. I've tried to find a pattern but can not ...
4answers
102 views
If $3^x \bmod 7 = 5$, what is $x$ and how?
I am an amateur java programmer who is stuck on this problem: $$3^x \bmod 7 = 5$$ then what is $x$ and how? If you can even explain the method for how to arrive at the solution, then it will be very ...
0answers
62 views
Are there other powers which follow the rule $a^b = b^a$ than $2^4$? [duplicate]
I was trying to find these powers, but to my disappointment I only found $2^4 = 4^2$. Edit: $a$ must be different to $b$ of course. Is that the only possible setting, and why? If we assume the number ...
4answers
72 views
Square and square root and negative numbers [duplicate]
Are they equal? -5 = $\sqrt{(-5)^2}$
5answers
341 views
Why does the power rule work?
If $$f(x)=x^u$$ then the derivative function will always be $$f'(x)=u*x^{u-1}$$ I've been trying to figure out why that makes sense and I can't quite get there. I know it can be proven with limits, ...
0answers
36 views
Name of odd powered polynomial graph (Opposite of parabola(ic))
I am writing an assignment and have to describe the graphs for when the powers are even and when they are odd. I described the even power graphs as being parabolic or parabolas. The only problem is, I ...
1answer
63 views
Why the nth power of a Jordan matrix involves the binomial coefficient?
I've searched a lot for a simple explanation of this. Given a Jordan block $J_k(\lambda)$, its $n$th power is: J_k(\lambda)^n = \begin{bmatrix} \lambda^n & \binom{n}{1}\lambda^{n-1} & ...
5answers
100 views
Why when indices cancelled out leave 1 at top?
This is a really basic question, but I have just got interested in math and learning rules about powers/indices and this confused me a little. $\dfrac{a^3}{a^7}$ after they cancel out we get ...
2answers
65 views
Calculating new vector positions
I'm using the following formula to calculate the new vector positions for each point selected, I loop through each point selected and get the $(X_i,Y_i,Z_i)$ values, I also get the center values of ...
4answers
165 views
1 to the power of infinity, why is it indeterminate? [duplicate]
I've been taught that $1^\infty$ is undetermined case. Why is it so? Isn't $1*1*1...=1$ whatever times you would multiply it? So if you take a limit, say $\lim_{n\to\infty} 1^n$, doesn't it converge ...
5answers
68 views
Proof for power functions
Which is greater? $\sqrt{n}^{\sqrt{n+1}}$ or $\sqrt{n+1}^\sqrt{n}$ I know that $\sqrt{n}^{\sqrt{n+1}}$ is greater but I tried using induction and I couldn't figure it out. Thanks for the help.
1answer
66 views
Units in exponent - e.g. solve: $2^{3 years}$
What happens to units in an exponent? My math textbook just introduced the exponential equation: $$A_t = Pe^{rt}$$ I've always made it a point in solving math problems to include the units in every ...
2answers
72 views
Simplifying $\;x({y^{3}}/{x^{4}})^{1/4}$
I’m a little unsure how to simplify the following expression: $$x\left(\frac{y^{3}}{x^{4}}\right)^{1/4}$$ According to the answer, this should get you $\;\; x y^{3/4} x^{-1} = y^{3/4}$. My ...
5answers
130 views
Solving an equation with fractional powers
I was trying to find the maximum value for a function. I took the first derivative and arrived at this horrible expression: (x^2 + y^2)^\frac{3}{2} - y {\frac{3}{2}}(x^2 + y^2)^{\frac{1}{2}}2y = ...
3answers
66 views
Powers and the logarithm
By example: $4^{\log_2(n)}$ evaluates to $n^2$ $2^{\log_2(n)}$ evaluates to $n$ What is the rule behind this?
2answers
73 views
How to prove that the following language is not regular?
This is the following problem that I've been having difficulty on: For this problem, we will show that there are non-regular languages over the alphabet $\{0\}$. The language that will be used is the ...
4answers
110 views
What is the value of $i^i$? [duplicate]
I understand that when you raise any number $x$ to a power, you multiply $x$ by itself the number of times indicated in the power. However, what happens when $i^i$ is performed? How can a number be ...
1answer
59 views
How to expand a fraction in powers of $z$ or $\dfrac{1}{z}$, and which to do, in determining Laurent series
I have a function $f(z)=\dfrac{12}{z(2-z)(1+z)}$, I'm trying to find the Laurent series for each of the three annuli. The singularities are at $z = 0$, $z = 2$, and $z = -1$, so I'm looking for three ...
3answers
161 views
Comparing $\large 3^{3^{3^3}}$, googol, googolplex
How to show that $\large 3^{3^{3^3}}$ (Third Ackermann number) is larger than a googol ($\large 10^{100}$) but smaller than a googolplex ($\large 10^{10^{100}}$). Thanks much in advance!!!
1answer
62 views
How do I solve this (multiplication of exponents)
I can not find any help to spread the exponent across the brackets $$\left[-3(-2)^2\right]^{-1}$$
4answers
290 views
What is wrong with this problem
We know that: $(a^n)^m=a^{nm}$ From this we have: $-3^3=[(-3)^2]^\frac{3}{2}=(3^2)^\frac{3}{2}=27$ Find what's wrong
1answer
59 views
Realistic Example of Power-Law Distribution?
I'm missing a bit of inbetween-math, and having some trouble understanding this, but: I want to generate a set of data that follows a power law. Let's say I have 10,000,000 people who like a ...
1answer
127 views
Prime decomposition of an integer: methods of determining the prime factors $p_1, p_2, …, p_r$ and powers $k_1,k_2, …, k_r$
Any integer n can be written in the form $n = p_1^{k_1}p_2^{k_2} ... p_r^{k_r}$, where the powers $k_1, k_2, ...,k_r$ are integers and $p_1, p_2, ..., p_r$ are primes. Now I am interested in ...
2answers
93 views
Calculating powers
I was thinking how I could program powers into my application. And I want to do this without using the default math libraries. I want to use only +,-,* and /. So I wondered what is the definition of a ...
5answers
172 views
Solving an equation with a logarithm in the exponent
I try to solve the following equation: $$(N+1)^{\log_N{125}} = 216$$ I know the answer is 5 here but how could I rewrite the equations so I can solve it? I tried to take the log of both sides but ...
2answers
59 views
How to efficiently compute the coefficients in a bi-binomial expansion?
Is there a computationally efficient way of calculating the coefficients of the polynomial expansion of expressions like $(1+x^a)^m(1+x^b)^n$ for arbitrary positive integers $m,n,a,b$ (and especially ...
2answers
103 views
pow$(X,Y)$ $>$ pow$(Y,X)$, if $X<Y$.
How can we proof following? if $X < Y$, then: $X^{Y} > Y^{X}$ , Where X, and Y are integers. Also $X,Y > 1$. Except a special case $2^{3} < 3^{2}$. I think for other ...
0answers
100 views
What is the relationship between these expression?
Moderator Note: This is a Project Euler question If ...
2answers
254 views
Prove that for any nonnegative integer n the number $5^{5^{n+1}} + 5^{5 ^n} + 1$ is not prime
My math teacher gave us problems to work on proofs, but this problem has been driving me crazy. I tried to factor or find patterns in the numbers and all I can come up with is that for $n > 0$, the ...
3answers
147 views
What's the intuition behind non-integer exponents/powers
Consider some $a \in \mathbb{R}$ and $x \in \mathbb{R}\backslash \mathbb{N}$. Is there some intuition to be had for the number $a^x$? For example the intuition of $a^2$ is obvious; it's $a*a$ which ...
3answers
95 views
What is the difference between exponentials and powers?
I am a java programmer. But I have a doubt regarding a mathematics. There was a method called Math.exp(double a) description:Returns Euler's number e raised to the power of a double value. and another ...
1answer
81 views
Solve for variable inside multiple power in terms of the powers.
I'm a programmer working to write test software. Currently estimates the values it needs with by testing with a brute force algorithm. I'm trying to improve the math behind the software so that I can ...
http://math.stackexchange.com/questions/210951/cofactor-matrix-4x4-evaluated-by-hand
# Cofactor matrix 4x4, evaluated by hand
I am currently studying linear algebra. So far I have studied methods for computing the inverse of a matrix $A$. Now I would like to evaluate the inverse of a $4\times 4$ matrix using the following formula:
$$\mathbf{A}^{-1} = \frac{ \mathrm{adj}(\mathbf{A}) }{ \det(\mathbf{A}) }$$
My question is that in order to get the adjoint matrix I will need the cofactor matrix. In a $4\times 4$ matrix this means I could do it with row expansion in two stages. For instance, I take the entry A[a11] and cross out the first row and first column. Next I choose B[a11] in the submatrix to get my minor. Now, do I have to multiply the minor of the submatrix by the original entry A[a11]? Does it matter which minor I choose in the submatrix in the further calculation? For example, when using A[a12] to get the minor of the main matrix, can I use any row or column in the submatrix?
I appreciate any help in this manner. Thank you.
-Daniel
-
1
That was quite incomprehensible... But you know how to compute a 3 by 3 determinant, right? That's what you have to do (16 times!), since each cofactor is a 3 by 3 determinant (namely the determinant that you get by crossing out one row and one column of $A$). – Hans Lundmark Oct 11 '12 at 7:48
## 1 Answer
If I understand your question correctly, when calculating the adjoint matrix, you do not need to multiply the determinant of the submatrix by the entry you're finding the cofactor for; you only attach the checkerboard sign $(-1)^{i+j}$ to that determinant.
One reason this can be confusing is that you do multiply the submatrix determinants by the entries to find the determinant of the matrix.
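To make the distinction concrete, here is a small NumPy sketch with an arbitrary example matrix: each cofactor is $(-1)^{i+j}$ times the determinant of the $3\times 3$ submatrix, with no multiplication by the entry itself, and the adjugate divided by $\det(A)$ reproduces the inverse.

```python
# Sketch: cofactor matrix, adjugate, and inverse of a 4x4 matrix (arbitrary example).
import numpy as np

def cofactor_matrix(A):
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # cross out row i, column j
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)       # note: no multiplication by A[i, j]
    return C

A = np.array([[2., 1., 0., 3.],
              [0., 1., 4., 1.],
              [5., 0., 1., 2.],
              [1., 3., 0., 1.]])

C = cofactor_matrix(A)
adjA = C.T                                   # adjugate = transpose of the cofactor matrix
A_inv = adjA / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))  # True: matches the direct inverse
```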
-
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aop/1176988277
### Existence of Quasi-Stationary Distributions. A Renewal Dynamical Approach
P. A. Ferrari, H. Kesten, S. Martinez, and P. Picco
Source: Ann. Probab. Volume 23, Number 2 (1995), 501-521.
#### Abstract
We consider Markov processes on the positive integers for which the origin is an absorbing state. Quasi-stationary distributions (qsd's) are described as fixed points of a transformation $\Phi$ in the space of probability measures. Under the assumption that the absorption time at the origin, $R,$ of the process starting from state $x$ goes to infinity in probability as $x \rightarrow \infty$, we show that the existence of a $\operatorname{qsd}$ is equivalent to $E_xe^{\lambda R} < \infty$ for some positive $\lambda$ and $x$. We also prove that a subsequence of $\Phi^n\delta_x$ converges to a minimal $\operatorname{qsd}$. For a birth and death process we prove that $\Phi^n\delta_x$ converges along the full sequence to the minimal $\operatorname{qsd}$. The method is based on the study of the renewal process with interarrival times distributed as the absorption time of the Markov process with a given initial measure $\mu$. The key tool is the fact that the residual time in that renewal process has as stationary distribution the distribution of the absorption time of $\Phi\mu$.
Primary Subjects: 60J27
Secondary Subjects: 60J80, 60K05
Full-text: Open access
Permanent link to this document: http://projecteuclid.org/euclid.aop/1176988277
http://rjlipton.wordpress.com/2010/12/07/logarithms-tables-and-babylonians/
## a personal view of the theory of computation
On pre-computation methods for speeding up algorithms
John Napier is the discoverer of logarithms, and was sometimes called the “Marvellous Merchiston,” which is a pretty neat name. His publication, in 1614, of his book entitled “Mirifici Logarithmorum Canonis Descriptio” was a milestone for science. This work changed science at the time—it made calculations possible that were previously impossible.
Today I want to talk about logarithms as an example of a more general principle: when can pre-computed tables be used to speed up computations?
Some credit Michael Stifel with discovering logarithms years before Napier, but Napier’s book included ninety pages of log tables that made them a practical tool. They were used by scientists of all kinds—including apparently astrologists.
Stifel did make important advances in mathematical notation: he was the first to use juxtaposition to denote multiplication, for example. I should have included his work in our recent discussion on notation. Stifel also tried his hand at predictions: he predicted the world would end at 8am on October 19, 1533. This points out a key rule for anyone who attempts to make predictions:
A prediction should be vague in effect or far in the future or both.
I ran across an interesting discussion of how log tables were used at Cambridge during math exams. Students were allowed to bring only basic tools such as a compass, a protractor and writing instruments to an exam. They were checked carefully as they entered the examination room to see if they might have any notes or other forbidden materials with them. They could also bring a special book of logarithms—it was often checked for illegal notes.
I learned how to use logarithms at Massapequa High School, not at Cambridge. In his first lecture on logarithms, our math teacher explained that we would each be supplied with a special printed sheet of paper whenever we were tested. These sheets contained a printed version of the log tables that were exactly the same as the tables in the back of our math books. This method avoided the need to check that we might have written some notes on the tables, since they were given out and collected for each test.
Our teacher also told us a story about the sheets that he claims really happened. In a class, years before, he had two students who were identical twins—very smart identical twins. They missed the first day of his class on logarithms.
After a while he gave his class their first exam on logarithms. As he was handing out the sheets he gave one to the first twin. The twin looked at the sheet and asked “what is this for?” Our teacher said they were the log tables they could use during the exam—they were copies of the tables from the math book. The twin handed the sheet back and said that he did not need it, since he and his brother had assumed they would need to know the tables. So they had memorized them.
The twins were smart, but they also had photographic memories. They assumed that part of learning logarithms was having to know the log tables. They were wrong: the whole point is that you do not have to remember the tables, and that is a main part of today’s discussion. They were impressive: I cannot imagine being able to remember even one page of the logarithm table.
Tables
The point of pre-computed tables is they can be used to speed up a computation. This idea is ancient, powerful, and probably not as widely used today as it could be. Perhaps the tremendous speed of our computers has made it less important to do pre-computation. Certainly logarithm tables are no longer needed—just use your calculator.
However, we do see tables used in a number of significant situations.
• Perhaps the best example is search—especially of the Web. All search engines have precomputed tables that allow the search of the countless pages of the Web in a fraction of a second. Okay there are only a finite number of pages, but the number is huge. It is claimed that the number of unique URL’s is above one trillion—as in 1,000,000,000,000. So fast search would be impossible without tables.
• Another important class of examples come from pervasive computing. Table lookup has been suggested as a way to save energy. The basic idea is to have the smart device pre-compute parts of a protocol, for example, when the device is being re-charged. Then the protocol will run faster when the device is running on batteries.
Napier’s logarithms reduce multiplication to additions and lookups. There are other ways to do this, to replace multiplication by “easier” operations. One dates from ancient times, perhaps thousands of years ago. Two clay tablets found at Senkerah on the Euphrates in 1854 show that the Babylonians seem to have used tables to reduce multiplication to squaring. The key equations are:
$\displaystyle \begin{array}{rcl} ab &=& \frac{(a+b)^2 - (a-b)^2}{4}. \\ ab &=& \frac{(a+b)^2 - a^2 - b^2}{2}. \end{array}$
Both can be used to reduce multiplication to squaring.
What is the advantage of log tables over square tables? Let’s take a careful look at the cost of each of these methods.
The log method to multiply ${a}$ times ${b}$ is:
1. Lookup and get ${\log a}$ and ${\log b}$ from the logarithm table.
2. Add these to form ${c}$.
3. Then look up ${c}$ in another table and get the answer: ${a \cdot b}$.
This takes: three lookups, one add, and two tables.
The square method to multiply ${a}$ times ${b}$ is:
1. Add to get ${c= a+b}$.
2. Look up the squares of ${a,b,c}$ in the table: let the values be respectively ${a',b',c'}$.
3. Form ${c'-a'-b'}$, and then divide by two. This yields ${a \cdot b}$.
This takes: three lookups, three adds, one divide by two, and only one table. I grouped addition and subtraction together, since they have about the same complexity.
Clearly, the logarithm method uses fewer total operations. But it does require two tables. From an asymptotic point of view both of the above methods are linear time plus the three lookups.
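For concreteness, here is a small Python sketch of both methods (the table sizes and the four-figure rounding are illustrative assumptions; the log version is approximate, much like a printed four-figure table, while the squares version is exact for integers):

```python
# Sketch: multiplying with precomputed tables only (lookups plus additions).
import math

N = 2000                                   # illustrative table range
log_table = {i: math.log10(i) for i in range(1, N + 1)}
antilog_table = {}                         # maps a rounded log value back to a number
for i, lg in log_table.items():
    antilog_table[round(lg, 4)] = i
square_table = {i: i * i for i in range(0, 2 * N + 1)}

def mul_by_logs(a, b):
    """Three lookups and one add; approximate, like a 4-figure log table."""
    c = round(log_table[a] + log_table[b], 4)
    # find the nearest tabulated log (a real table user would interpolate here)
    key = min(antilog_table, key=lambda k: abs(k - c))
    return antilog_table[key]

def mul_by_squares(a, b):
    """Three lookups, three adds/subtracts, one halving; exact for integers."""
    return (square_table[a + b] - square_table[a] - square_table[b]) // 2

print(mul_by_logs(123, 7), mul_by_squares(123, 7), 123 * 7)
```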
Issues
Several questions come to mind. Why did the squaring method not get wider practical application? It uses such basic mathematics that it could have been used years before Napier’s work. Preparing a squaring table is not too difficult.
One answer is based on a property that Subrahmanyam Kalyanasundaram pointed out that distinguishes the log method from the squaring method. Here are his comments:
An inherent feature of the log based multiplication is that we take the log over the “normalized” number. Basically ${123 \cdot 0.01}$ is multiplied as
$\displaystyle 123 \cdot 0.01 = 1.23 \cdot 10^2 \cdot 1 \cdot 10^{-2} = 1.23 \cdot 1 \cdot 10^{2 + -2}.$
I feel that this normalizing makes sure that we shift to the most significant digit and we spend our best efforts computing these digits. The log based method does it when we divide the number into characteristic and mantissa. Can the square method also do something like this?
Another question is: what is the best that you can do with tables? I have never thought about this question. Given different costs for lookups and adds, what is the optimal method for reducing multiplication to table lookups? Of course multiplication is a very well studied problem. The current best is the beautiful algorithm of Martin Fürer that uses no tables and multiplies in time
$\displaystyle n \log n \ 2^{O(\log^* n)}.$
Here ${\log^* n}$ is almost a constant, so his algorithm is very close to ${n \log n}$. Yet we still do not know if multiplication requires more than linear time.
Open Problems
A more general question is what is the power of tables in computing a general function. Given a function ${f(x,y,z,\dots)}$ what is the table lookup cost? Probably we should restrict the tables to be unary: each table should only be able to compute ${t(x)}$ where ${x}$ runs over a range of values.
26 Comments
1. Arno Nym
December 7, 2010 9:30 am
Isn’t poly-time computation with pre-computed tables just P/poly?
2. December 7, 2010 10:02 am
A small but (I think) fascinating tidbit on the limits of lookup tables in web search: Google’s Udi Manber has stated that 20-25% of search queries are new queries, which limits Google’s ability to cache the results in a lookup table. Of course they use lookup tables for all kinds of things, but I think this is a surprising bottleneck in the problem of doing search. Ref: http://www.readwriteweb.com/archives/udi_manber_search_is_a_hard_problem.php
3. December 7, 2010 10:18 am
In practical computing, tables can be horribly expensive. If a table lookup misses in the processor cache, as it will for a large table, the time cost of the lookup can be hundreds of CPU cycles lost that could have been used to compute the value. The ratio of CPU speed to memory speed has changed dramatically in the last 15 years. In addition multiplication and addition are now pretty much the same speed (at the cost of many nearly free transistors!)
On an energy basis, the cost of a main memory cycle can also be greater than recomputing a result, due to all the wires that must change state to communicate between the CPU and the memory.
Even internalized truths like “FFT is better than DFT” need to be reevaluated because random access patterns are very much more expensive than sequential patterns of data access.
4. December 7, 2010 10:50 am
Pre-compiled and pre-computed tables may be very useful. But I think common rules are not possible here – very often we need experiments to find the optimal solution.
5. anon
December 7, 2010 11:30 am
distance oracles come to mind when thinking about when pre-computed data structures can be used to speed up computations.
the database community is surely investigating these questions all the time
6. December 7, 2010 12:56 pm
Tables are still taught at least briefly in integral calculus: find antiderivatives, this week you can use these tables. Of course, freshman integral calculus is way over obsessed with antiderivatives and it’s blatantly stupid to teach people tables when we have the internet and widely available free symbolic integration tools. At least with the increasingly exotic “techniques” (trig substitution, partial fractions, …) you could make an argument that “someone’s gotta preserve the knowledge of how the packages work!” But tables…ugh…
7. Anonymous
December 7, 2010 1:42 pm
Another good example is Andrew Goldberg (and others’) work on computing shortest paths in continental-scale road networks: they precompute shortcuts and transit nodes to enable very fast source-sink shortest path queries, trading off memory for time.
8. Koray
December 7, 2010 1:49 pm
I find this definition of pre-computation too narrow. Whenever space is traded for runtime I see it as a pre-computation. For me this starts with CPU caches and goes all the way to any datastructure that intentionally does not lay out its contents as compactly as possible.
9. jianying
December 7, 2010 3:50 pm
The first Babylonian equation is more efficient:
c=a+b
d=a-b
lookup c’=c^2, d’=d^2
ab=(c’-d’)/4
Three add-equivalents, two table lookups, one division by four, one table used.
10. Frédéric Grosshans
December 8, 2010 6:28 am
I heard the following story and I think it is true, even if I cannot check it.
In France, there is an engineering school (les Arts et Métiers) where logarithm tables were traditionally forbidden at exams, but the students were numerous enough to reconstruct the table with minimal individual effort during the exam. Each student knows “his” logarithm, so when someone needs log(534), he quickly shouts “534”, and the “knowing student” shouts the answer. If they shout quickly enough, it is difficult to know who shouted and they cannot be punished! Nowadays, the method is not used anymore, but the tradition for each student to know his logarithm survives.
• rjlipton *
December 8, 2010 7:55 am
Frédéric Grosshans
Great story.
11. Fring
December 8, 2010 7:26 am
Very interesting post.
The computational complexity of using tables seems to be model dependent. Using random access memory is not the same as using Turing machines, or at least it seems so to me.
How is computational complexity counted when one uses large tables? Suppose you form a table of size n log n, in time O(n log n), and then use it repeatedly by computing the address of the information to be retrieved from the table, say by retrieving log n consecutive bits at a time. Does the step of retrieving the value from the table count as number of consecutive bits plus the number of bits needed for address, or the product of the two?
This question seems to be important for studying complexity of problems like multiplication, which is sensitive to log n factors.
I would really appreciate if someone answered this question: Having a table of size O(n log n) which needs O(log n) bits to address, and being interested in chunks of O(log n) consecutive bits, does retrieving m blocks of log n consecutive bits count as O(m log n) or O(m log^2 n) in time cost?
For models like multi-tape Turing machines, retrieving from the table seems to be time consuming (perhaps I do not know how it is dealt with); random access memory seems to count time of retrieval differently. On realistic models for large memory storage, it seems plausible to assume that time for retrieval of B consecutive bits that need A bits to address in the table requires time O(A+B) rather than time O(A*B).
So, my question (once again, sorry) is the following: Having A address bits and retrieving B consecutive bits from the memory, what is the time cost of looking up the table? Is it O(A+B), O(A*B) or something else? How does this time depend on the computation model?
• December 8, 2010 5:08 pm
Yes, it is a fact that a Pentium 4 CPU (or multi-core processor) and Turing machines are different. This is an open problem for computational complexity theory! V.N. Kasianov & V.A. Evstigneev offered a new machine model in their book “Graphs in Programming…”, 2003 (1104 pages, in Russian).
• Fring
December 8, 2010 5:19 pm
Sorry, this does not answer my question. Perhaps there is an answer in this book?
It seems to me that a squaring table would not offer any advantage if the time consumed were O((log n)^2) to look up log(n) bits in a table addressed by log(n) bits, since multiplication would then be more efficient than looking up the table.
So, I assume that the answer is O(A+B); but if one comes up with, say, a new algorithm for multiplication using O(n log n) steps that relies on such counting, would this result be worth anything if in fact looking up tables is O(A*B), i.e. expensive and hence useless (even for the squaring example from the main post)?
• December 8, 2010 5:23 pm
Sorry. Yes, this does not answer your question. I do not know the answer. I only noted this open problem.
• Sasho
December 9, 2010 11:29 am
Actually, despite repeating your question many times, it is still not quite clear to me what you’re asking. Surely, you could come up with arbitrary models in which table lookups have arbitrary time cost. The complexity really is very dependent on the model and your question does not really have a single answer.
1) A Turing machine with a constant number of tapes and sequential access could take time O(N) to find a bit in an array of N bits (imagine an adversary argument).
2) A RAM model will take O(1) time to look up an entry in a table that’s addressed by B bits, as long as B is less than the word size. I am not sure what’s the standard treatment of that case, but I would think that if a table can be addressed by B bits and the word size is W, then the table lookup would cost W/B.
As Larry Stewart pointed out above, it’s difficult to say what actually is the case on a real machine, i.e. what is the most appropriate model. You can safely say that the truth is somewhere betweeen 1) and 2) (due to cache and energy issues, etc.), and there are models studied in CS that stand between 1) and 2). An interesting example is the cache oblivious model which does put the cache into the model but does not “tell” the algorithm the cache parameters. Algorithms that work well in the cache oblivious model seem to use the complicated memory hierarchy of real machines quite well,too. Here is a link: http://courses.csail.mit.edu/6.897/spring03/scribe_notes/L15/lecture15.pdf.
• johne
December 18, 2010 1:03 am
For a synchronous logic (i.e., CMOS VLSI) circuit model, which is what I think you’re asking about, the answer is… it depends.
For this circuit model, there is a global signal, the clock, that every gate within the circuit is relatively synchronized to. As a very rough general rule, a “table look up” circuit will be able to perform a new look up every clock period, and provide a result every clock period. One thing that is pervasive in synchronous logic circuit models but is rarely mentioned in complexity theory models is the concept of “latency”, which is the number of clock cycles it takes for the lookup request that began in t_0 to appear at the output, which is usually ≥ 1.
The trick is to try to have enough work to do where you can have as many things “in-flight” as possible so that this latency is “hidden”. If you can do this, then the operation has an effective cost of O(1). Programming languages and algorithms tend to have a “left to right, top to bottom” model of time. This concept does not exist in synchronous logic- everything is happening all the time. It takes some getting used to.
12. December 9, 2010 11:03 am
Available for free (via Google Books) is the complete text of Nathaniel Bowditch’s 1807 masterwork of applied mathematics, The new American practical navigator: being an epitome of navigation, containing all the tables necessary. This classic text remains in-print today, 203 years later, and is known globally by the simple eponym “Bowditch“.
From a computational point-of-view, Bowditch is the fulfillment in every detail of John Napier’s vision. As the Preface asserts:
“Bowditch vowed while writing this edition to ‘put down in the book nothing I can’t teach the crew,” and it is said that every member of his crew including the cook could take a lunar observation and plot the ship’s position.”
Moreover, in service of this ambitious educational goal, Bowditch corrected many thousands of errors in the tables of logarithms (and other functions) of the eighteenth century.
Furthermore, beginning on page 100, Bowditch asserts a startlingly modern view of differential geometry as being (essentially) the study of integral curves of tangent spaces … thus anticipating Gauss (for example) by about two decades.
Fans of computation … fans of geometry … fans of history … fans of technology … and especially, fans of Patrick O’Brian’s novels, all can hardly go wrong by consulting Bowditch.
13. Sniffnoy
December 9, 2010 11:02 pm
Let’s not forget prosthaphaeresis…
• December 10, 2010 4:41 am
Sniffnoy, thank you for introducing us to the wonderful word “prosthaphaeresis”. Surely, it is only a matter of time until a SteamPunk novel appears having this title!
With the help of Google Books, we find many highly entertaining articles by the prolific nineteenth century mathematician J. W. L. Glaisher on the history and practice of pre-computer computation; both logarithms and prosthaphaeresis feature prominently.
No doubt, in a another century or two, our present mathematical conceptions of computation and complexity will seem similarly quaint … and (hopefully) similarly seminal.
14. December 10, 2010 12:12 pm
The table-of-squares method of multiplication can be improved somewhat from (lookups, adds, shifts) = (3,3,1) to (2,3,0) by using the formula
ab = ((a+b)² div 4) – ((a-b)² div 4)
and precomputing the squares div 4. Here we only need two lookups instead of three, and no shifts are required.
• December 10, 2010 1:04 pm
Iconjack, by your method the triple product abc requires 2*(2,3,0) = (4,6,0) shifts. Is it possible to do better?
In passing, I noticed a 19th century article by the redoubtable J. W. L. Glaisher that treated the triple product (to advantage) by prosthaphaeretic methods (but to my regret, I did not write down the specific reference). In any event, the computation of triple and higher-order products is a natural line of inquiry.
With enormous luck, perhaps these multiple-product algorithms might generalize even to multiple matrix-products … in which event, the Victorian era of prosthaphaeretic computation may not yet be over!
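A tiny check of the two-factor quarter-squares variant above, as a sketch (the table size is arbitrary): because $(a+b)^2$ and $(a-b)^2$ leave the same remainder mod 4, the floor-divided table still returns the exact integer product with just two lookups and no shifts.

```python
# Sketch: the quarter-squares variant -- two lookups, no shifts, exact for integers.
QUARTER_SQ = [n * n // 4 for n in range(4001)]   # precomputed table of floor(n^2 / 4)

def mul_quarter_squares(a, b):
    # (a+b)^2 and (a-b)^2 are congruent mod 4, so the two floors cancel exactly.
    return QUARTER_SQ[a + b] - QUARTER_SQ[abs(a - b)]

assert all(mul_quarter_squares(a, b) == a * b
           for a in range(0, 200) for b in range(0, 200))
```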
15. rjlipton *
December 28, 2010 11:15 am
MR,
Thanks a lot.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492621421813965, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-statistics/67153-poisson-distribution-help.html
|
# Thread:
1. ## Poisson Distribution HELP!!!
Hi, i have an exam on monday and am really confused as i dont know how on earth to answer questions on poisson approximation / distribution . Heres an example question ive found which i cant do. If anyone can give me a hand i would be so grateful!!
" Suppose that a random variable X has a Poisson Distribution with parameter λ. Show that it's Probability Generating Function is given by Gx(S) = e^-λ+λs. Using this or otherwise, obtain the expected value of X."
Another question is
"Suppose that on average, 1% of a certain brand of christmas lightbulbs are defective. Compute the probability that in a box of 25 lightbulbs, there will at most be one defective bulb. Use the Poisson Approximation to compute the same probability and briefly explain whether a close match could be anticipated (Answers to 4 d.p)"
2. Originally Posted by Bexii
Hi, i have an exam on monday and am really confused as i dont know how on earth to answer questions on poisson approximation / distribution . Heres an example question ive found which i cant do. If anyone can give me a hand i would be so grateful!!
" Suppose that a random variable X has a Poisson Distribution with parameter λ. Show that it's Probability Generating Function is given by Gx(S) = e^-λ+λs. Using this or otherwise, obtain the expected value of X."
[snip]
Definition: $G_X(s) = E\left(s^X\right) = \sum_{x=0}^{+\infty} s^x \frac{e^{-\lambda} \lambda^x}{x!} = e^{-\lambda} \sum_{x=0}^{+\infty}\frac{(s \lambda)^x}{x!}$
and the sum is recognised as the Maclaurin series for $e^{s \lambda} \, ....$
Definition: $E(X) = \left. \frac{dG}{ds}\right|_{s = 1} = \, ....$
3. Originally Posted by Bexii
[snip]
"Suppose that on average, 1% of a certain brand of christmas lightbulbs are defective. Compute the probability that in a box of 25 lightbulbs, there will at most be one defective bulb. Use the Poisson Approximation to compute the same probability and briefly explain whether a close match could be anticipated (Answers to 4 d.p)"
Let X be the random variable number of defective bulbs.
X ~ Binomial(n = 25, p = 0.01)
Calculate Pr(X = 0) + Pr(X = 1).
Poisson approximation:
$\lambda = np = 0.25$.
Therefore $\Pr(X = x) = \frac{e^{-0.25} (0.25)^x}{x!}$
Calculate Pr(X = 0) + Pr(X = 1).
What are the conditions for using the Poisson approximation to the binomial distribution? Are they met in this question?
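For reference, both probabilities are one-liners in R (a quick sketch, with the values rounded to 4 d.p. as the question asks):
````# Exact binomial probability of at most one defective bulb in a box of 25
n <- 25; p <- 0.01
pbinom(1, size = n, prob = p)      # 0.9742 to 4 d.p.

# Poisson approximation with lambda = n * p = 0.25
ppois(1, lambda = n * p)           # 0.9735 to 4 d.p.
````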
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8882054090499878, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Gamma_distribution
|
# Gamma distribution
Not to be confused with Gamma function.
| | Shape $k$, scale $\theta$ | Shape $\alpha$, rate $\beta$ |
|---|---|---|
| Parameters | $k > 0$ (shape), $\theta > 0$ (scale) | $\alpha > 0$ (shape), $\beta > 0$ (rate) |
| Support | $x \in (0,\, \infty)$ | $x \in (0,\, \infty)$ |
| PDF | $\frac{1}{\Gamma(k) \theta^k} x^{k-1} e^{-\frac{x}{\theta}}$ | $\frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x}$ |
| CDF | $\frac{1}{\Gamma(k)} \gamma\left(k,\, \frac{x}{\theta}\right)$ | $\frac{1}{\Gamma(\alpha)} \gamma(\alpha,\, \beta x)$ |
| Mean | $\mathbf{E}[X] = k\theta$; $\mathbf{E}[\ln X] = \psi(k) + \ln(\theta)$ (see digamma function) | $\mathbf{E}[X] = \frac{\alpha}{\beta}$; $\mathbf{E}[\ln X] = \psi(\alpha) - \ln(\beta)$ (see digamma function) |
| Median | No simple closed form | No simple closed form |
| Mode | $(k-1)\theta$ for $k > 1$ | $\frac{\alpha - 1}{\beta}$ for $\alpha > 1$ |
| Variance | $\operatorname{Var}[X] = k\theta^2$; $\operatorname{Var}[\ln X] = \psi_1(k)$ (see trigamma function) | $\operatorname{Var}[X] = \frac{\alpha}{\beta^2}$; $\operatorname{Var}[\ln X] = \psi_1(\alpha)$ (see trigamma function) |
| Skewness | $\frac{2}{\sqrt{k}}$ | $\frac{2}{\sqrt{\alpha}}$ |
| Excess kurtosis | $\frac{6}{k}$ | $\frac{6}{\alpha}$ |
| Entropy | $k + \ln\theta + \ln[\Gamma(k)] + (1-k)\psi(k)$ | $\alpha - \ln\beta + \ln[\Gamma(\alpha)] + (1-\alpha)\psi(\alpha)$ |
| MGF | $(1 - \theta t)^{-k}$ for $t < \frac{1}{\theta}$ | $\left(1 - \frac{t}{\beta}\right)^{-\alpha}$ for $t < \beta$ |
| CF | $(1 - \theta i t)^{-k}$ | $\left(1 - \frac{i t}{\beta}\right)^{-\alpha}$ |
In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. There are three different parameterizations in common use:
1. With a shape parameter k and a scale parameter θ.
2. With a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter.
3. With a shape parameter k and a mean parameter μ = k/β.
In each of these three forms, both parameters are positive real numbers.
The parameterization with k and θ appears to be more common in econometrics and certain other applied fields, where e.g. the gamma distribution is frequently used to model waiting times. For instance, in life testing, the waiting time until death is a random variable that is frequently modeled with a gamma distribution.[1]
The parameterization with α and β is more common in Bayesian statistics, where the gamma distribution is used as a conjugate prior distribution for various types of inverse scale (aka rate) parameters, such as the λ of an exponential distribution or a Poisson distribution – or for that matter, the β of the gamma distribution itself. (The closely related inverse gamma distribution is used as a conjugate prior for scale parameters, such as the variance of a normal distribution.)
If k is an integer, then the distribution represents an Erlang distribution; i.e., the sum of k independent exponentially distributed random variables, each of which has a mean of θ (which is equivalent to a rate parameter of 1/θ).
The gamma distribution is the maximum entropy probability distribution for a random variable X for which E[X] = kθ = α/β is fixed and greater than zero, and E[ln(X)] = ψ(k) + ln(θ) = ψ(α) − ln(β) is fixed (ψ is the digamma function).[2]
## Characterization using shape k and scale θ
A random variable X that is gamma-distributed with shape k and scale θ is denoted
$X \sim \Gamma(k, \theta) \equiv \textrm{Gamma}(k, \theta)$
### Probability density function
Illustration of the Gamma PDF for parameter values over k and x, with θ set to 1, 2, 3, 4, 5 and 6.
The probability density function using the shape-scale parametrization is
$f(x;k,\theta) = \frac{1}{\theta^k}\frac{1}{\Gamma(k)}x^{k-1}e^{-\frac{x}{\theta}} \quad \text{ for } x > 0 \text{ and } k, \theta > 0.$
Here Γ(k) is the gamma function evaluated at k.
### Cumulative distribution function
The cumulative distribution function is the regularized gamma function:
$F(x;k,\theta) = \int_0^x f(u;k,\theta)\,du = \frac{\gamma\left(k, \frac{x}{\theta}\right)}{\Gamma(k)}$
where γ(k, x/θ) is the lower incomplete gamma function.
It can also be expressed as follows, if k is a positive integer (i.e., the distribution is an Erlang distribution):[3]
$F(x;k,\theta) = 1-\sum_{i=0}^{k-1} \frac{1}{i!} \left(\frac{x}{\theta}\right)^i e^{-\frac{x}{\theta}} = \sum_{i=k}^{\infty} \frac{1}{i!} \left(\frac{x}{\theta}\right)^i e^{-\frac{x}{\theta}}$
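A quick numerical check of this series in R, with illustrative values k = 3 and θ = 2:
````# The finite-sum (Erlang) form of the CDF should match pgamma exactly
k <- 3; theta <- 2; x <- 4.7
1 - sum(exp(-x / theta) * (x / theta)^(0:(k - 1)) / factorial(0:(k - 1)))
pgamma(x, shape = k, scale = theta)   # same value
````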
## Characterization using shape α and rate β
Alternatively, the gamma distribution can be parameterized in terms of a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter. A random variable X that is gamma-distributed with shape α and rate β is denoted
$X \sim \Gamma(\alpha, \beta) \equiv \textrm{Gamma}(\alpha,\beta)$
### Probability density function
The corresponding density function in the shape-rate parametrization is
$g(x;\alpha,\beta) = \beta^{\alpha}\frac{1}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x} \quad \text{ for } x \geq 0 \text{ and } \alpha, \beta > 0$
Both parametrizations are common because either can be more convenient depending on the situation.
### Cumulative distribution function
The cumulative distribution function is the regularized gamma function:
$F(x;\alpha,\beta) = \int_0^x f(u;\alpha,\beta)\,du= \frac{\gamma(\alpha, \beta x)}{\Gamma(\alpha)}$
where γ(α, βx) is the lower incomplete gamma function.
If α is a positive integer (i.e., the distribution is an Erlang distribution), the cumulative distribution function has the following series expansion:[3]
$F(x;\alpha,\beta) = 1-\sum_{i=0}^{\alpha-1} \frac{1}{i!} (\beta x)^i e^{-\beta x} = \sum_{i=\alpha}^{\infty} \frac{1}{i!} (\beta x)^i e^{-\beta x}$
## Properties
### Skewness
The skewness is equal to $2/\sqrt{k}$; it depends only on the shape parameter (k), and the distribution approaches a normal distribution when k is large (approximately when k > 10).
### Median calculation
Unlike the mode and the mean which have readily calculable formulas based on the parameters, the median does not have an easy closed form equation. The median for this distribution is defined as the constant x0 such that
$\frac{1}{\Gamma(k) \theta^k} \int_0^{ x_0 } x^{ k - 1 } e^{ - \frac{ x }{ \theta } } dx = \tfrac{1}{2}$
How tractable this equation is depends on k; in practice the median is best found numerically.
A method of approximating the median (ν) for any Gamma distribution has been derived based on the ratio μ/(μ − ν), which to a very good approximation is a linear function of the shape parameter α when α ≥ 1.[4] This yields the approximation
$\nu \sim \mu \frac{3 \alpha - 0.8}{3 \alpha + 0.2} ,$
where μ is the mean.
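A short R comparison of this approximation against the exact median from `qgamma` (shape values chosen only for illustration, with rate 1 so that μ = α):
````# Exact median versus the mu * (3*alpha - 0.8)/(3*alpha + 0.2) approximation
alpha  <- c(1, 2, 5, 10)
exact  <- qgamma(0.5, shape = alpha, rate = 1)
approx <- alpha * (3 * alpha - 0.8) / (3 * alpha + 0.2)
cbind(alpha, exact, approx)
````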
### Summation
If Xi has a Gamma(ki, θ) distribution for i = 1, 2, ..., N (i.e., all distributions have the same scale parameter θ), then
$\sum_{i=1}^N X_i \sim\mathrm{Gamma} \left( \sum_{i=1}^N k_i, \theta \right)$
provided all Xi are independent.
For the cases where the Xi are independent but have different scale parameters see Mathai (1982) and Moschopoulos (1984).
The gamma distribution exhibits infinite divisibility.
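A brief Monte Carlo illustration of the summation property in R (the shape and scale values are arbitrary):
````# Sum of independent Gamma(2, theta) and Gamma(5, theta) variables should be
# Gamma(7, theta): mean 7*theta and variance 7*theta^2
set.seed(42)
theta <- 3
x <- rgamma(1e5, shape = 2, scale = theta) + rgamma(1e5, shape = 5, scale = theta)
c(mean(x), var(x))                              # compare with 21 and 63
ks.test(x, "pgamma", shape = 7, scale = theta)
````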
### Scaling
If
$X \sim \mathrm{Gamma}(k, \theta)$
then for any c > 0,
$cX \sim \mathrm{Gamma}( k, c\theta)$
Hence the use of the term "scale parameter" to describe θ.
Equivalently, if
$X \sim \mathrm{Gamma}(\alpha, \beta)$
then for any c > 0,
$cX \sim \mathrm{Gamma}( \alpha, \beta/c)$
Hence the use of the term "inverse scale parameter" to describe β.
### Exponential family
The Gamma distribution is a two-parameter exponential family with natural parameters k − 1 and −1/θ (equivalently, α − 1 and −β), and natural statistics X and ln(X).
If the shape parameter k is held fixed, the resulting one-parameter family of distributions is a natural exponential family.
### Logarithmic expectation
One can show that
$\mathbf{E}[\ln(X)] = \psi(\alpha) - \ln(\beta)$
or equivalently,
$\mathbf{E}[\ln(X)] = \psi(k) + \ln(\theta)$
where ψ is the digamma function.
This can be derived using the exponential family formula for the moment generating function of the sufficient statistic, because one of the sufficient statistics of the gamma distribution is ln(x).
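The identity is easy to verify numerically in R (parameter values are arbitrary):
````# E[ln X] for X ~ Gamma(k, theta) versus digamma(k) + log(theta)
set.seed(1)
k <- 2.5; theta <- 1.7
mean(log(rgamma(1e6, shape = k, scale = theta)))
digamma(k) + log(theta)                     # the two values agree closely
````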
### Information entropy
The information entropy is
$\operatorname{H}(X) = \mathbf{E}[-\ln(p(X))] = \mathbf{E}[-\alpha\ln(\beta) + \ln(\Gamma(\alpha)) - (\alpha-1)\ln(X) + \beta X] = \alpha - \ln(\beta) + \ln(\Gamma(\alpha)) + (1-\alpha)\psi(\alpha).$
In the k, θ parameterization, the information entropy is given by
$\operatorname{H}(X) =k + \ln(\theta) + \ln(\Gamma(k)) + (1-k)\psi(k).$
### Kullback–Leibler divergence
Illustration of the Kullback–Leibler (KL) divergence for two Gamma PDFs. Here β = β0 + 1, with β0 set to 1, 2, 3, 4, 5 and 6. The typical asymmetry of the KL divergence is clearly visible.
The Kullback–Leibler divergence (KL-divergence), like the information entropy and various other theoretical properties, is more commonly seen using the α, β parameterization because of its use in Bayesian and other theoretical statistics frameworks.
The KL-divergence of Gamma(αp, βp) ("true" distribution) from Gamma(αq, βq) ("approximating" distribution) is given by[5]
$D_{\mathrm{KL}}(\alpha_p,\beta_p; \alpha_q, \beta_q) = (\alpha_p-\alpha_q)\psi(\alpha_p) - \log\Gamma(\alpha_p) + \log\Gamma(\alpha_q) + \alpha_q(\log \beta_p - \log \beta_q) + \alpha_p\frac{\beta_q-\beta_p}{\beta_p}$
Written using the k, θ parameterization, the KL-divergence of Gamma(kp, θp) from Gamma(kq, θq) is given by
$D_{\mathrm{KL}}(k_p,\theta_p; k_q, \theta_q) = (k_p-k_q)\psi(k_p) - \log\Gamma(k_p) + \log\Gamma(k_q) + k_q(\log \theta_q - \log \theta_p) + k_p\frac{\theta_p - \theta_q}{\theta_q}$
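A direct R transcription of the α, β form above (the function name is just illustrative):
````# KL divergence of Gamma(alpha_p, beta_p) from Gamma(alpha_q, beta_q)
kl_gamma <- function(alpha_p, beta_p, alpha_q, beta_q) {
  (alpha_p - alpha_q) * digamma(alpha_p) - lgamma(alpha_p) + lgamma(alpha_q) +
    alpha_q * (log(beta_p) - log(beta_q)) + alpha_p * (beta_q - beta_p) / beta_p
}

kl_gamma(3, 2, 3, 2)   # 0: zero divergence of a distribution from itself
kl_gamma(3, 2, 4, 1)   # positive when the parameters differ
````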
### Laplace transform
The Laplace transform of the gamma PDF is
$F(s) = (1 + \theta s)^{-k} = \frac{\beta^\alpha}{(s + \beta)^\alpha} .$
## Parameter estimation
### Maximum likelihood estimation
The likelihood function for N iid observations (x1, ..., xN) is
$L(k, \theta) = \prod_{i=1}^N f(x_i;k,\theta)$
from which we calculate the log-likelihood function
$\ell(k, \theta) = (k - 1) \sum_{i=1}^N \ln{(x_i)} - \sum_{i=1}^N \frac{x_i}{\theta} - Nk\ln(\theta) - N\ln(\Gamma(k))$
Finding the maximum with respect to θ by taking the derivative and setting it equal to zero yields the maximum likelihood estimator of the θ parameter:
$\hat{\theta} = \frac{1}{kN}\sum_{i=1}^N x_i$
Substituting this into the log-likelihood function gives
$\ell = (k-1)\sum_{i=1}^N\ln{(x_i)} - Nk - Nk\ln{\left(\frac{\sum x_i}{kN}\right)} - N\ln(\Gamma(k))$
Finding the maximum with respect to k by taking the derivative and setting it equal to zero yields
$\ln(k) - \psi(k) = \ln\left(\frac{1}{N}\sum_{i=1}^N x_i\right) - \frac{1}{N}\sum_{i=1}^N\ln(x_i)$
There is no closed-form solution for k. The function is numerically very well behaved, so if a numerical solution is desired, it can be found using, for example, Newton's method. An initial value of k can be found either using the method of moments, or using the approximation
$\ln(k) - \psi(k) \approx \frac{1}{2k}\left(1 + \frac{1}{6k + 1}\right)$
If we let
$s = \ln{\left(\frac{1}{N}\sum_{i=1}^N x_i\right)} - \frac{1}{N}\sum_{i=1}^N\ln{(x_i)}$
then k is approximately
$k \approx \frac{3 - s + \sqrt{(s - 3)^2 + 24s}}{12s}$
which is within 1.5% of the correct value.[6] An explicit form for the Newton-Raphson update of this initial guess is:[7]
$k \leftarrow k - \frac{ \ln(k) - \psi(k) - s }{ \frac{1}{k} - \psi^{\prime}(k) }.$
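Putting the pieces together, a minimal R sketch of this estimation procedure (the function name is hypothetical):
````# ML fit of (k, theta): closed-form initial guess for k, then Newton-Raphson
gamma_mle <- function(x, n_iter = 10) {
  s <- log(mean(x)) - mean(log(x))
  k <- (3 - s + sqrt((s - 3)^2 + 24 * s)) / (12 * s)      # initial approximation
  for (i in seq_len(n_iter)) {
    k <- k - (log(k) - digamma(k) - s) / (1 / k - trigamma(k))
  }
  c(shape = k, scale = mean(x) / k)                       # theta_hat = x_bar / k
}

set.seed(3)
gamma_mle(rgamma(5000, shape = 4, scale = 0.5))           # close to (4, 0.5)
````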
### Bayesian minimum mean-squared error
With known k and unknown θ, the posterior density function for theta (using the standard scale-invariant prior for θ) is
$P(\theta | k, x_1, \dots, x_N) \propto \frac{1}{\theta} \prod_{i=1}^N f(x_i; k, \theta)$
Denoting
$y \equiv \sum_{i=1}^Nx_i , \qquad P(\theta | k, x_1, \dots, x_N) = C(x_i) \theta^{-N k-1} e^{-\frac{y}{\theta}}$
Integration over θ can be carried out using a change of variables, revealing that 1/θ is gamma-distributed with parameters α = Nk, β = y.
$\int_0^{\infty} \theta^{-Nk - 1 + m} e^{-\frac{y}{\theta}}\, d\theta = \int_0^{\infty} x^{Nk - 1 - m} e^{-xy} \, dx = y^{-(Nk - m)} \Gamma(Nk - m) \!$
The moments can be computed by taking the ratio of the integral for general m to the integral with m = 0:
$\mathbf{E} [\theta^m] = \frac {\Gamma (Nk - m)} {\Gamma(Nk)} y^m$
which shows that the posterior distribution for θ has mean and variance
$\frac {y} {Nk - 1} \quad\text{and}\quad \frac {y^2} {(Nk - 1)^2 (Nk - 2)}$
## Generating gamma-distributed random variables
Given the scaling property above, it is enough to generate gamma variables with θ = 1 as we can later convert to any value of β with simple division.
Using the fact that a Gamma(1, 1) distribution is the same as an Exp(1) distribution, and noting the method of generating exponential variables, we conclude that if U is uniformly distributed on (0, 1], then −ln(U) is distributed Gamma(1, 1). Now, using the "α-addition" property of the gamma distribution, we expand this result:
$\sum_{k=1}^n {-\ln U_k} \sim \Gamma(n, 1)$
where Uk are all uniformly distributed on (0, 1] and independent. All that is left now is to generate a variable distributed as Gamma(δ, 1) for 0 < δ < 1 and apply the "α-addition" property once more. This is the most difficult part.
Random generation of gamma variates is discussed in detail by Devroye,[8] noting that none are uniformly fast for all shape parameters. For small values of the shape parameter, the algorithms are often not valid.[9] For arbitrary values of the shape parameter, one can apply the Ahrens and Dieter[10] modified acceptance-rejection method Algorithm GD (shape k ≥ 1), or transformation method[11] when 0 < k < 1. Also see Cheng and Feast Algorithm GKM 3[12] or Marsaglia's squeeze method.[13]
The following is a version of the Ahrens-Dieter acceptance-rejection method:[10]
1. Let m be 1.
2. Generate V3m−2, V3m−1 and V3m as independent uniformly distributed on (0, 1] variables.
3. If $\scriptstyle V_{3m - 2} \;\le\; v_0$, where $\scriptstyle v_0 \;=\; \frac{e}{e + \delta}$, then go to step 4, else go to step 5.
4. Let $\scriptstyle \xi_m \;=\; V_{3m - 1}^{1 / \delta}, \ \eta_m \;=\; V_{3m} \xi _m^ {\delta - 1}$. Go to step 6.
5. Let $\scriptstyle \xi_m \;=\; 1 \,-\, \ln {V_{3m - 1}}, \ \eta_m \;=\; V_{3m} e^{-\xi_m}$.
6. If $\scriptstyle \eta_m \;>\; \xi_m^{\delta - 1} e^{-\xi_m}$, then increment m and go to step 2.
7. Assume ξ = ξm to be the realization of Γ(δ, 1).
A summary of this is
$\theta \left( \xi - \sum _{i=1} ^{\lfloor{k}\rfloor} {\ln(U_i)} \right) \sim \Gamma (k, \theta)$
where
• $\scriptstyle \lfloor{k}\rfloor$ is the integral part of k,
• ξ has been generated using the algorithm above with δ = {k} (the fractional part of k),
• Uk and Vl are distributed as explained above and are all independent.
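A minimal R sketch of the whole recipe, combining the acceptance-rejection step above for the fractional part of k with the sum of exponentials for its integer part (the function name is illustrative; in practice one would simply call `rgamma`):
````# Ahrens-Dieter acceptance-rejection for Gamma(delta, 1), 0 < delta < 1,
# plus floor(k) Exp(1) variates, rescaled by theta
rgamma_ad <- function(k, theta = 1) {
  delta <- k - floor(k)
  xi <- 0
  if (delta > 0) {
    v0 <- exp(1) / (exp(1) + delta)
    repeat {
      V <- runif(3)                              # V_{3m-2}, V_{3m-1}, V_{3m}
      if (V[1] <= v0) {
        xi  <- V[2]^(1 / delta)
        eta <- V[3] * xi^(delta - 1)
      } else {
        xi  <- 1 - log(V[2])
        eta <- V[3] * exp(-xi)
      }
      if (eta <= xi^(delta - 1) * exp(-xi)) break   # accept xi ~ Gamma(delta, 1)
    }
  }
  theta * (xi - sum(log(runif(floor(k)))))
}

set.seed(1)
mean(replicate(1e4, rgamma_ad(2.7, theta = 1.5)))  # should be near 2.7 * 1.5 = 4.05
````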
## Related distributions
### Special cases
• If X ~ Gamma(k = 1, θ = λ−1), then X has an exponential distribution with rate parameter λ.
• If X ~ Gamma(k = ν/2, θ = 2), then X is identical to χ2(ν), the chi-squared distribution with ν degrees of freedom. Conversely, if Q ~ χ2(ν) and c is a positive constant, then $\scriptstyle c \cdot Q \;\sim\; {\Gamma}(k \;=\; \nu/2,\, \theta \;=\; 2c)$.
• If k is an integer, the gamma distribution is an Erlang distribution and is the probability distribution of the waiting time until the k-th "arrival" in a one-dimensional Poisson process with intensity 1/θ. If $\scriptstyle X \;\sim\; {\Gamma}(k \;\in\; \mathbf{Z},\, \theta)$ and $\scriptstyle Y \;\sim\; \mathrm{Pois}\left(\frac{x}{\theta}\right)$, then $\scriptstyle P(X \,>\, x) \;=\; P(Y \,<\, k)$.
• If X has a Maxwell-Boltzmann distribution with parameter a, then $\scriptstyle X^2 \;\sim\; {\Gamma}\left(\frac{3}{2},\, 2a^2\right)$.
• If X ~ Gamma(k, θ), then $\scriptstyle \sqrt{X}$ follows a generalized gamma distribution with parameters p = 2, d = 2k, and $\scriptstyle a \;=\; \sqrt{\theta}$.[citation needed]
• If $\scriptstyle X \;\sim\; \mathrm{SkewLogistic}(\theta)$, then $\scriptstyle \log\left(1 \,+\, e^{-X}\right) \;\sim\; \Gamma (1,\, \theta)$; i.e. an exponential distribution: see skew-logistic distribution.
### Conjugate prior
In Bayesian inference, the gamma distribution is the conjugate prior to many likelihood distributions: the Poisson, exponential, normal (with known mean), Pareto, gamma with known shape σ, inverse gamma with known shape parameter, and Gompertz with known scale parameter.
The Gamma distribution's conjugate prior is:[14]
$p(k,\theta | p, q, r, s) = \frac{1}{Z} \frac{p^{k-1} e^{-\theta^{-1} q}}{\Gamma(k)^r \theta^{k s}}$
Where Z is the normalizing constant, which has no closed form solution. The posterior distribution can be found by updating the parameters as follows.
$\begin{align} p' &= p\prod_i x_i\\ q' &= q + \sum_i x_i\\ r' &= r + n\\ s' &= s + n \end{align}$
where n is the number of observations, and $x_i$ is the i-th observation.
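The update itself is mechanical; a minimal R sketch (the function name is hypothetical, and in practice one would track p on the log scale to avoid overflow):
````# Update the conjugate-prior hyperparameters (p, q, r, s) with new data x
update_gamma_prior <- function(p, q, r, s, x) {
  list(p = p * prod(x),
       q = q + sum(x),
       r = r + length(x),
       s = s + length(x))
}

update_gamma_prior(p = 1, q = 1, r = 1, s = 1, x = rgamma(10, shape = 2))
````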
### Compound gamma
If the shape parameter of the gamma distribution is known, but the inverse-scale parameter is unknown, then a gamma distribution for the inverse-scale forms a conjugate prior. The compound distribution, which results from integrating out the inverse-scale has a closed form solution, known as the compound gamma distribution.[15]
### Others
• If X ~ Gamma(k, θ) distribution, then 1/X has an inverse-gamma distribution with shape parameter k and scale parameter θ−1 using the parameterization given by inverse-gamma distribution.
• If X ~ Gamma(α, θ) and Y ~ Gamma(β, θ) are independently distributed, then X/(X + Y) has a beta distribution with parameters α and β.
• If Xi are independently distributed Gamma(αi, 1) respectively, then the vector (X1/S, ..., Xn/S), where S = X1 + ... + Xn, follows a Dirichlet distribution with parameters α1, …, αn.
• For large k the gamma distribution converges to Gaussian distribution with mean μ = kθ and variance σ2 = kθ2.
• The Gamma distribution is the conjugate prior for the precision of the normal distribution with known mean.
• The Wishart distribution is a multivariate generalization of the gamma distribution (samples are positive-definite matrices rather than positive real numbers).
• The Gamma distribution is a special case of the generalized gamma distribution, the generalized integer gamma distribution, and the generalized inverse Gaussian distribution.
• Among the discrete distributions, the negative binomial distribution is sometimes considered the discrete analogue of the Gamma distribution.
• Tweedie distributions – the gamma distribution is a member of the family of Tweedie exponential dispersion models.
## Applications
The gamma distribution has been used to model the size of insurance claims[16] and rainfalls.[17] This means that aggregate insurance claims and the amount of rainfall accumulated in a reservoir are modelled by a gamma process. The gamma distribution is also used to model errors in multi-level Poisson regression models, because the combination of the Poisson distribution and a gamma distribution is a negative binomial distribution.
In neuroscience, the gamma distribution is often used to describe the distribution of inter-spike intervals.[18] Although in practice the gamma distribution often provides a good fit, there is no underlying biophysical motivation for using it.
In bacterial gene expression, the copy number of a constitutively expressed protein often follows the gamma distribution, where the scale and shape parameter are, respectively, the mean number of bursts per cell cycle and the mean number of protein molecules produced by a single mRNA during its lifetime.[19]
The gamma distribution is widely used as a conjugate prior in Bayesian statistics. It is the conjugate prior for the precision (i.e. inverse of the variance) of a normal distribution. It is also the conjugate prior for the exponential distribution.
## Notes
1. See Hogg and Craig (1978, Remark 3.3.1) for an explicit motivation
2. Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics (Elsevier): 219–230. Retrieved 2011-06-02.
3. ^ a b Papoulis, Pillai, Probability, Random Variables, and Stochastic Processes, Fourth Edition
4. Banneheka BMSG, Ekanayake GEMUPD (2009) "A new point estimator for the median of Gamma distribution". Viyodaya J Science, 14:95-103
W. D. Penny, KL-Divergences of Normal, Gamma, Dirichlet, and Wishart densities
6. Choi, S.C.; Wette, R. (1969) "Maximum Likelihood Estimation of the Parameters of the Gamma Distribution and Their Bias", Technometrics, 11(4) 683–690
7. See Chapter 9, Section 3, pages 401–428.
8. Devroye (1986), p. 406.
9. ^ a b Ahrens, J. H. and Dieter, U. (1982). Generating gamma variates by a modified rejection technique. Communications of the ACM, 25, 47–54. Algorithm GD, p. 53.
10. Ahrens, J. H.; Dieter, U. (1974). "Computer methods for sampling from gamma, beta, Poisson and binomial distributions". Computing 12: 223–246. CiteSeerX: .
11. Cheng, R.C.H., and Feast, G.M. Some simple gamma variate generators. Appl. Stat. 28 (1979), 290-295.
12. Marsaglia, G. The squeeze method for generating gamma variates. Comput, Math. Appl. 3 (1977), 321-325.
13. Dubey, Satya D. (December 1970). "Compound gamma, beta and F distributions". Metrika 16: 27–31. doi:10.1007/BF02613934.
14. p. 43, Philip J. Boland, Statistical and Probabilistic Methods in Actuarial Science, Chapman & Hall CRC 2007
15. Aksoy, H. (2000) "Use of Gamma Distribution in Hydrological Analysis", Turk J. Engin Environ Sci, 24, 419 – 428.
16. J. G. Robson and J. B. Troy, "Nature of the maintained discharge of Q, X, and Y retinal ganglion cells of the cat," J. Opt. Soc. Am. A 4, 2301-2307 (1987)
17. N. Friedman, L. Cai and X. S. Xie (2006) "Linking stochastic dynamics to population distribution: An analytical framework of gene expression," Phys. Rev. Lett. 97, 168302.
## References
• R. V. Hogg and A. T. Craig (1978) Introduction to Mathematical Statistics, 4th edition. New York: Macmillan. (See Section 3.3.)
• P. G. Moschopoulos (1985) The distribution of the sum of independent gamma random variables, Annals of the Institute of Statistical Mathematics, 37, 541-544
• A. M. Mathai (1982) Storage capacity of a dam with gamma type inputs, Annals of the Institute of Statistical Mathematics, 34, 591-597
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 81, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7973445653915405, "perplexity_flag": "middle"}
|
http://stats.stackexchange.com/questions/30402/how-to-calculate-mean-and-standard-deviation-in-r-given-confidence-interval-and?answertab=votes
|
# How to calculate mean and standard deviation in R given confidence interval and a normal or gamma distribution?
Suppose you are given a $95 \%$ CI $(1,6)$ based on the normal distribution. Is there any easy way to find $\mu$ and $\sigma$? What if it came from a gamma distribution? Can we do this in R?
-
5
Toby, be careful about how you interpret answers to this question, because "based on the normal distribution" can mean various things. For instance, some people might interpret a CI which uses the Student T distribution to be "based on the normal distribution" (because it is, indirectly). Moreover, there are many kinds of CIs: they are often "symmetric" in some sense, but not always (in particular, a CI related to a gamma distribution might not be). It all comes down to what formula was used to compute the CI. Do you have any information about that? – whuber♦ Jun 13 '12 at 19:03
## 3 Answers
Please read the erratum at the end of the answer.
First note that there is not enough information to solve this problem. In both cases, $n$ the sample size is missing. In the case of the Gaussian distribution, assuming you know $n$, you can easily do it by following @Michael Chernick's instructions. In R that would give something like this (with $n=43$ for the sake of the example).
````n <- 43
ci <- c(1,6)
# Take the middle of the CI to get x_bar (3.5).
x_bar <- mean(ci)
# Use 1 = x_bar - 1.96 * sd/sqrt(n)
s <- sqrt(n) * (x_bar - ci[1]) / 1.96   # sample standard deviation
````
For the case of the Gamma distribution, things are a bit more complicated because it is not symmetric. So the mean is not in the center of the CI.
For example, say you sample from a Gamma population $\Gamma(\alpha,1)$ where $\alpha$ is unknown. The sample mean is the sum of $n$ variables distributed as $\Gamma(\alpha,1)$ divided by $n$, so it is a variable distributed as $\Gamma(n\alpha,1/n)$. Say that we observed a mean of $1.7$ for a sample size of $n=5$. There are several CI that contain this value as we can check.
````> qgamma(.975, shape=1.7*5, scale=1/5)
[1] 3.019101
> qgamma(.975, shape=1.7*5, scale=1/5, lower.tail=F)
[1] 0.7564186
````
A 95% CI for $\alpha$ is $(.756, 3.019)$, the middle of which is $1.89$, not $1.70$. In short, finding the $\alpha$ and $\theta$ that produce a 95% CI is possible because the solution is unique, but it is a hack.
Fortunately, as $n$ increases, the distribution becomes more and more Gaussian and symmetric, so the CI will be symmetric around the mean. The mean and variance of a $\Gamma(n\alpha,\theta/n)$ are $\alpha\theta$ and $\alpha\theta/n$, so you can use the results of the Gaussian case and solve this very simple equation to get $\alpha$ and $\theta$.
Erratum: Following @whuber's comment I realized that the proposed way to get a confidence interval for $\alpha$ is not good.
The example given above was meant to demonstrate that getting CI with Gamma variables is much more tedious than with Gaussian variables. My mistake proves the point even better. At @whuber's prompt I will show that the CI I proposed is incorrect.
````set.seed(123)
# Simulate 100,000 means of 5 Gaussian(0,1) variables (positive control).
means <- rnorm(100000, sd=1/sqrt(5))
upper <- means + qnorm(.975)/sqrt(5)
lower <- means - qnorm(.975)/sqrt(5)
mean((upper > 0) & (lower < 0))
[1] 0.95007 # OK.
# Simulate 100,000 means of 5 Gamma(1,1) variables.
means <- rgamma(100000, shape=5, scale=1/5)
upper <- qgamma(.975, shape=5*means, scale=1/5)
lower <- qgamma(.975, shape=5*means, scale=1/5, lower.tail=FALSE)
mean((upper > 1) & (lower < 1))
[1] 0.94666 # Almost, but not quite.
````
-
2
This reply starts off really well. The second part, though, does not give a correct CI for a gamma distribution. (Simulate it for small values of `shape` and small sample sizes.) – whuber♦ Jun 14 '12 at 14:06
Thanks @whuber. Not much escapes you ;-) – gui11aume Jun 14 '12 at 21:53
+1 For a convincing simulation, try a shape parameter of 0.5 and scale of 2 :-). – whuber♦ Jun 14 '12 at 22:01
Aha, thanks @Macro for the edit :D – gui11aume Jun 15 '12 at 20:27
If you mean the true parameters of course the answer is no. But if you mean that you want to recover the sample estimates from the confidence interval the answer is yes for the normal distribution if the sample size $n$ is also given.
If the confidence interval was $(1,6)$, then $1= \overline{X}-1.96 \cdot S/\sqrt{n}$ and $6=\overline{X}+1.96 \cdot S/\sqrt{n}$. So $\overline{X}= (6+1)/2=3.5$ and then $6=3.5 +1.96 \cdot S/\sqrt{n}$ or $S=\sqrt{n} 2.5/(1.96)$.
For the gamma distribution this paper shows various ways to get the approximate and exact confidence intervals for rates. Getting the parameter estimates from these confidence intervals may be complicated.
-
3
Please consider using TeX in your answers. It makes it much easier to read equations! – MånsT Jun 13 '12 at 18:56
2
@MånsT, or we can earn the 'Copy Editor' badge by editing his posts for him :) – Macro Jun 13 '12 at 19:03
2
@MichaelChernick - that's fine. When I first learned TeX, I found it to be very useful to look at examples. If you look at how I formatted the equations in your answer here, you can see, for example, that you enter "Equation" environment by typing `$` (or `$$` to make in a centered equation on a new line) and, once you're in equation environment that `\overline{X}` will type $\overline{X}$, and `\sqrt{n}` will type $\sqrt{n}$, and so on – Macro Jun 13 '12 at 19:13
2
That's a great start! Have a look at the edits to your answers and I'm sure that you'll be TeXing in no time. – MånsT Jun 13 '12 at 19:13
5
Some tricks, Michael: (1) math.harvard.edu/texman lets you quickly look up some common expressions. (2) Start $\TeX$ expressions with both the enclosing "$\$$" characters, so that as you type you can see the expression previewed below. (3) For using$\TeX$in comments, open a separate window and start to ask a new question on this site. It will give you a preview. Cut and paste it into your comment, then abandon the new question. (4) You can right-click on any$\TeX\$ on a page to see the original markup: learn from it (and use copy-and-paste judiciously). – whuber♦ Jun 13 '12 at 19:16
You can construct a confidence interval around anything that can be estimated, whether it be a mean, standard deviation, even a maximum for any given probability distribution.
Assuming you have a CI around the mean estimated from an experiment in which a finite sample of size $n$ were taken from independent, identically distributed random normal variables, then you know the exact confidence interval is given by the sample mean plus or minus 1.96 times the standard error, which is the sample standard deviation scaled by the square root of the sample size. $\bar{x} \pm \mathcal{Z}_{\alpha/2} \left( s/\sqrt{n} \right)$. Your estimates of these parameters, conventionally labeled as $\bar{x}$ and $s$ are the "best guesses" for the "population mean" $\mu$ and "standard deviation" $\sigma$.
These estimators also estimate the same values regardless of the distribution of your independent samples or the corresponding sampling distribution of the mean. Note however, that the confidence interval is asymptotic and these estimates are not necessarily the best anymore.
-
1
The first paragraph is insightful. Concerning the second, your "exact" CI went out of vogue 104 years ago when "Student" showed that it is not exact and, in finding an exact CI, discovered the t distribution. The third paragraph is difficult to make sense of. Overall, though, what is your answer to the question? – whuber♦ Jun 14 '12 at 14:10
1
That's correct, I haven't accounted for the degrees of freedom in what I called the exact CI. What I'm getting at is this: assuming the CI is an asymptotic, dist-n free estimate based on the CLT, then yes the sample mean and sample standard deviation can be arithmetically derived, given the sample size. Without knowledge of the construction of the CI, this may be perilous. CIs for the odds ratio in logistic regression models are non-symmetric and you couldn't estimate the "mean" and "standard deviation" of the sampling distribution of the sample odds ratio based on such a CI. – AdamO Jun 14 '12 at 16:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9074998497962952, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/7505/calculate-the-expectation-of-a-shift-cdf/7508
|
# Calculate the expectation of a shift CDF
Suppose $X$ is a normal random variable with mean 0, and variance $\sigma^2$. $F(x)$ is the CDF(cumulative distribution function) of a standard normal random variable(mean 0 and variable 1), how to calculate the expectation of $F(X+a)$, where $a>0$.
This was a quant interview question. I know how to calculate the expectation of F(X), i.e when $a=0$, but I have no idea when $a \neq 0$.
My solution for $a=0$:
Method 1: Since F(x) is the CDF of a normal random variable with mean 0 and variance $\sigma^2$, we have F(x)=1-F(-x). Suppose that f(x) is the corresponding pdf, so f(x)=f(-x). Then \begin{align} \mathbb{E}[F(X)]&=\int_{-\infty}^{+\infty} F(s)f(s)ds\\ &=\int_{-\infty}^{+\infty} (1-F(-s))f(s)ds\\ &=1-\int_{-\infty}^{+\infty} F(-s)f(s)ds\\ &=1-\int_{-\infty}^{+\infty} F(m)f(m)dm \end{align} Hence we have $$\int_{-\infty}^{+\infty} F(s)f(s)ds=\frac{1}{2}$$
Method 2 (this method seems not to require that X is a normal random variable): Let's first compute the distribution of $F(X)$: \begin{align} \mathbb{P}\{F(X) \leq y\}&=\mathbb{P}\{X \leq F^{-1}(y)\}\\ &=F(F^{-1}(y))=y \end{align} So $F(X)$ is uniformly distributed, hence the mean is $\frac{1}{2}$.
-
You say that you know how to calculate $E[F(X)]$, where $F$ is the distribution function of $F$, right? What is the trick. If you show us this, then we can work on $E[F(X+a)]$. – Richard Mar 12 at 11:04
1
Wait a moment. Isn't it true that $F(X)$ is uniform if $X$ has a continuous density. Therefore $E[F(X)] = 1/2$? I am not sure whether this helps us for $E[F(X+a)]$ ... – Richard Mar 12 at 11:06
@Richard I was thinking along the same line, but as you say I'm not sure how this will help us calculate the excpecation of F(X+a). However, I think this question should be asked in math.stackexchange? – Good Guy Mike Mar 12 at 11:29
I think the question fits for both as it could really be asked in a quant interview. In my mind it is ok. – Richard Mar 12 at 12:32
Wouldn't this just be positive drift? – jeff m Mar 12 at 16:08
## 3 Answers
This leads to the same result as Alexeys answer. However, my reasoning is different. $$E[F_X(X+a)]=\int_{-\infty}^{\infty} F_X(x+a) f_X(x)dx=\int_{-\infty}^{\infty} \int_{-\infty}^{x+a}f_X(y)dy f_X(x)dx=\\ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} 1_{(-\infty,x+a]}(y) f_X(y) f_X(x)dydx= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} 1_{_{\{y-x\le a\}}}(y) f_X(y) f_X(x)dydx.$$
The product of the two densities in the integral is the density of a bivariate Gaussian vector (X,Y), whose components are independent and follow a normal distribution $N(\mu,\sigma^2)$. Hence this integral is the same as
$$E[1_{\{Y-X\le a\}}]=P[\{Y-X\le a\}],$$
where $Y,X$ are iid $N(\mu,\sigma^2)$. Thus $Y-X$ has a $N(0,2\sigma^2)$ distribution. We get
$$E[F_X(X+a)]=F_{Y-X}(a)=\Phi\left(\frac{a}{\sqrt{2}\sigma}\right).$$
-
+1 Good solution. – Alexey Kalmykov Mar 18 at 14:29
Great @Andreas, thanks for sharing this proof! – Richard Mar 18 at 14:42
As you already mentioned you can get the CDF of this distribution using distribution transform:
$$P(F(X+a)\le y)=P(F^{-1}(F(X+a))\le F^{-1}(y))=P(X+a\le F^{-1}(y))=P(X\le F^{-1}(y)-a)=F(F^{-1}(y)-a)$$
Then you could write down the expectation in integral form, but at first glance it seems not so trivial to get an explicit expression. Probably they didn't expect you to do this.
You can also try to run quick Monte-Carlo in R to see what your distribution looks like:
```` a <- 1
sigma <- 2
m <- pnorm(rnorm(10000,0,sigma)+a,0,sigma)
hist(m)
plot(ecdf(m))
mean(m)
````
You will definitely see that it's not uniform for the values of $a$ that are significantly bigger than 0.
-
All that I have tried ended up here too. Yes, maybe there is no (at least no easy) closed-form solution. The MC simulation persuades me ... – Richard Mar 13 at 13:37
Hopefully this is correct...
So firstly, what is $F(X)$ when $X$ is a random variabel?
This is where I might be completely wrong, but at least it gives the correct answer in the case of a=0, so let's try that one first.
We have two random variables with the same normal distribution, mean 0 and variance $\sigma^2$: $X_1, X_2$
So we are supposed to calculate $F_{X_1}(X_2) = P(X_1 \leq X_2) = P(X_1 - X_2 \leq 0)$
Since $X_1$ and $X_2$ are both normal, we have that $X = X_1 - X_2$ is also normal with mean $0$ and variance $2\sigma^2$
So we have $P(X \leq 0)$ which is 1/2.
For the case of $a \neq 0$ we would get $P(X \leq a)$ which is $F_{X}(a)$. However it seems like it would always end up as a constant, so the expected value seems a bit 'redundant', which makes me think that this might not be the correct solution..
edit: Also, in $P(X \leq a)$ you could divide by the standard deviation (since the mean is 0) to get a standard normal, and you would get $\Phi\left(\frac{a}{\sqrt{2}\sigma}\right)$ or something like that.
edit2:
````sigma = 2;
n = 5000000;
a = 5;
% Monte Carlo estimate of E[F(X+a)] with X ~ N(0, sigma^2)
mean(normcdf(normrnd(0,sigma,1,n)+a,0,sigma))
% closed-form value Phi(a/(sqrt(2)*sigma))
normcdf(a/(sqrt(2)*sigma),0,1)
````
-
It seems that you assume that $X_{1}$ and $X_{2}$ are independent, but we need to assume they are the same random variable. – nkhuyu Mar 12 at 23:17
If they are the same r.v. you would end up with $P(X_1 \leq X_1) =1$? I see that you provided the correct (?) proof, interesting, I'll look into that when I get home – Good Guy Mike Mar 13 at 7:07
@GoodGuyMike Sorry to say, but this looks too complicated. Alexey's approach in the other answer is more direct (I tried something similar). – Richard Mar 13 at 13:38
I tried simulating this in matlab, and it seems to work for $\sigma = 1$, otherwise it fails. Do you know why? – Good Guy Mike Mar 13 at 15:48
1
@nkhuyu are you sure they were asking for $F_X(X+a)$ instead of $F_{X+a}(X+a)$? That would require extra care in specification, instead of just saying $F(X+a)$ which should refer to the latter... One could infer from the level of other questions :-) – Quartz Mar 14 at 10:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9384610056877136, "perplexity_flag": "head"}
|
http://mathhelpforum.com/math-topics/57846-finding-inverse-funtion-print.html
|
# Finding the Inverse of a Function
• November 5th 2008, 02:34 PM
Finding the Inverse of a Function
http://i255.photobucket.com/albums/h...hdude/math.jpg
How would I go about solving this?
• November 5th 2008, 03:14 PM
Luke774
First make it a "Y=" equation.
Y = x² − 5x + 6
Switch the Y with the x's:
X = Y² − 5Y + 6
Solve for Y:
X − 6 = Y² − 5Y
(X − 6)/5 = Y²
√((X − 6)/5) = Y
Kind of confusing, but I hope this helps!
• November 5th 2008, 06:01 PM
euclid2
Quote:
http://i255.photobucket.com/albums/h...hdude/math.jpg
How would I go about solving this?
The easiest way to go about doing this is making a table of values for the function and switching the values of the y and x. What I mean is: make the x values the y values and the y values the x values. This will give you the inverse, $f^{-1}(x)$.
• November 6th 2008, 03:29 AM
HallsofIvy
Quote:
You don't. That is not a "one to one" function and so does not have an inverse. If $f^{-1}(x)$ is the inverse of f(x), then we must have $f^{-1}(f(x))= x$, that is, $f^{-1}$ must 'undo' whatever f does. But f(6)= 0 and f(-1)= 0. $f^{-1}(0)$ can't be both 6 and -1!
Luke774, for some reason, chose to take only the "+" sign on the square root. That's valid but gives the inverse of a slightly different function: $f(x)= x^2- 5x- 6$ with $x\ge 5/2$ and undefined for x< 5/2. Same formula but different domain so different function.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.887603759765625, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/104482/parabolic-immediate-basins-always-simply-connected
|
## parabolic immediate basins always simply connected?
Edit: So, my original question (stated below) was to find an error in my "proof" that immediate parabolic basins for rational maps are always simply connected. Since I have not received any answers as of yet I would ask alternatively if someone could point out an explicit example of a rational map with a parabolic fixed point which has a non-simply connected immediate basin, so that I could hopefully check by examining this example where my argument goes wrong. Kind regards an idiot
Hello! Sorry if this seems stupid. I know there must be an error in my thinking.
Let f be a rational map on the Riemann sphere with a parabolic fixed point $f(z_0)=z_0$, $f'(z_0)=e^{2\pi t}$ with $t\in \mathbb{Q}$.
I will try to demonstrate that each parabolic immediate basin is simply connected. I know this is wrong. But I don't find the mistake in my "proof". So please help me out.
By Leau-Fatou Flower theorem, in each immediate parabolic basin of $z_0$ there is an attractive petal $V$ such that each point in that immediate basin tends to $z_0$ via $V$.
Let $V_0$ be such a petal, and for simplicity's sake let's say that $f(\overline{V_0})\subset V_0\cup{z_0}$ (i.e. no periodic jumping between different petals, can be achieved by simply taking an iterate $f^n$ instead of $f$ for suitable n).
So we have:
- $f(\overline{V_0})\subset V_0\cup{z_0}$
- $V_0\subset A^*(z_0)$ open and simply connected
- For every $z\in A^*(z_0)$ there is some $n\in\mathbb{N}$ with $f^n(z)\in V_0$
We may slightly shrink $V_0$ if necessary such that $\partial V_0$ does not contain any postcritical points and $\overline{V_0}$ is homeomorphic to a closed disk.
Now for $k\in\mathbb{N}$ let $V_k$ be the component of $f^{-1}(V_0)$ that contains $V_0$. It's easy to see that $A^*(z_0)=\cup_{k=0}^{\infty}V_k$.
If $A^*(z_0)$ is not simply connected then there must be a minimal $m\in\mathbb{N}$ such that $V_m$ is not simply connected.
In that case let $B$ be a component of $\hat{\mathbb{C}}-\overline{V_m}$, such that $\partial B$ does not contain $z_0$.
Then $\partial B\subset \partial V_m$ and so $f(\partial B)\subset\partial V_{m-1}$.
Since $\partial V_0$ contains no postcritical points, $\cup_{k=0}^m \partial V_m$ contains no critical points.
Thus $f^m$ is locally injective on $\partial B\subset\partial V_m$ and $f^m(\partial B)$ is a full component of $\partial V_0$ (proper covering), hence $f^m(\partial B)=\partial V_0$, since $\partial V_0$ has only one component.
But then there is $z\in\partial B\subset F(f)$ with $f^n(z)=z_0\in J(f)$. That's a contradiction.
Can someone help me see my mistake? I hope it's a simple one.
-
Just to bring this question to the top once more (hope this is allowed). I just edited the question and asked an alternative question which might help me. Kind regards, an idiot – idiot_1337 Aug 12 at 12:11
## 1 Answer
The mistake is in the statement that $\partial B\subset F(f)$. There can be points on $\partial B$ and $\partial V_m$ which are in $J(f)$, namely preimages of $z_0$ :-)
An example is $f(z)=z+1-1/z$. There is one petal for the neutral point at infinity. Let $A$ be the dmain of attraction of $\infty$. Critical points are $\pm i$. Everything is symmetric with respect to the real line, because the function is real. One critical point is in $A$, so by symmetry the other one is also in $A$. The map $f:A\to A$ is 2-to-1 (because $f$ is of degree $2$), so Riemann and Hurwitz tell us that $A$ is infinitely connected.
-
Thank you. And sorry, I was very sloppy. Where I wrote: "In that case let $B$ be a component of $\hat{\mathbb{C}}-\overline{V_m}$", I now added: "such that $\partial B$ does not contain $z_0$."<br> But I see now that this also does not help.<br> Thank you very much. I truly am an idiot.<br> And thanks for the example. May the force be with you. – idiot_1337 Aug 12 at 14:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9552891850471497, "perplexity_flag": "head"}
|
http://www.conservapedia.com/Differentiability
|
# Differentiable function
### From Conservapedia
(Redirected from Differentiability)
A function f(x) is differentiable at the point a if and only if, as x approaches a (which it is never allowed to reach), the value of the quotient:
$\frac{f(x) - f(a)}{(x - a)}$
approaches a limiting value that we call the derivative of the function f(x) at x=a.
There is also the more rigorous ε − δ definition: a function f is said to be differentiable at the point a, with derivative f'(a), if ∀ε > 0 ∃δ > 0 such that if
$0 < |x - a| < \delta\,$
then
$\left|\frac{f(x) - f(a)}{x-a} - f'(a)\right| < \epsilon \,$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8951658010482788, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/46116-help-exponent-expression.html
|
# Thread:
1. ## Help with Exponent Expression
I am going to post the entire problem & its solution. I am confused with one particular step in working towards the solution.
$(2x^3+4)^{-6}(2x^3+4)^4=(2x^3+4)^{-6+4} =(2x^3+4)^{-2} = \frac{1}{(2x^3+4)^2}$
How are the two " $(2x^3+4)$" combined to form one $(2x^3+4)$? Is it because they both have the same base therefore just add the exponents together? Im thinking that must be correct, otherwise I wouldnt be adding these exponents to begin with.
2. Originally Posted by cmf0106
I am going to post the entire problem & its solution. I am confused with one particular step in working towards the solution.
$(2x^3+4)^{-6}(2x^3+4)^4=(2x^3+4)^{-6+4} =(2x^3+4)^{-2} = \frac{1}{(2x^3+4)^2}$
How are the two " $(2x^3+4)$" combined to form one $(2x^3+4)$? Is it because they both have the same base therefore just add the exponents together? Im thinking that must be correct, otherwise I wouldnt be adding these exponents to begin with.
Remember this rule of exponents:
$x^a \cdot x^b = x^{a+b}$
This explains:
$(2x^3+4)^{-6}(2x^3+4)^4=(2x^3+4)^{-6+4}$
And then, simply add the exponents −6 + 4 to get:
$(2x^3+4)^{-2}$
And finally, use the reciprocal of the above to reach the conclusion with a positive exponent:
$\frac{1}{(2x^3+4)^2}$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202668070793152, "perplexity_flag": "middle"}
|
http://stats.stackexchange.com/questions/6920/efficient-online-linear-regression/6923
|
# Efficient online linear regression
I'm analysing some data where I would like to perform ordinary linear regression; however, this is not possible as I am dealing with an on-line setting with a continuous stream of input data (which will quickly get too large for memory) and need to update parameter estimates while it is being consumed. That is, I cannot just load it all into memory and perform linear regression on the entire data set.
I'm assuming a simple linear multivariate regression model, i.e.
y = Ax + b + e
What's the best algorithm for creating a continuously updating estimate of the linear regression parameters A and b?
Ideally:
• I'd like an algorithm with at most O(N*M) space and time complexity per update, where N is the dimensionality of the independent variable (x) and M is the dimensionality of the dependent variable (y).
• I'd like to be able to specify some parameter to determine how much the parameters are updated by each new sample, e.g. 0.000001 would mean that the next sample would provide one millionth of the parameter estimate. This would give some kind of exponential decay for the effect of samples in the distant past.
-
## 5 Answers
Maindonald describes a sequential method based on Givens rotations. (A Givens rotation is an orthogonal transformation of two vectors that zeros out a given entry in one of the vectors.) At the previous step you have decomposed the design matrix $\mathbf{X}$ into a triangular matrix $\mathbf{T}$ via an orthogonal transformation $\mathbf{Q}$ so that $\mathbf{Q}\mathbf{X} = (\mathbf{T}, \mathbf{0})'$. (It's fast and easy to get the regression results from a triangular matrix.) Upon adjoining a new row $v$ below $\mathbf{X}$, you effectively extend $(\mathbf{T}, \mathbf{0})'$ by a nonzero row, too, say $t$. The task is to zero out this row while keeping the entries in the position of $\mathbf{T}$ diagonal. A sequence of Givens rotations does this: the rotation with the first row of $\mathbf{T}$ zeros the first element of $t$; then the rotation with the second row of $\mathbf{T}$ zeros the second element, and so on. The effect is to premultiply $\mathbf{Q}$ by a series of rotations, which does not change its orthogonality.
When the design matrix has $p+1$ columns (which is the case when regressing on $p$ variables plus a constant), the number of rotations needed does not exceed $p+1$ and each rotation changes two $p+1$-vectors. The storage needed for $\mathbf{T}$ is $O((p+1)^2)$. Thus this algorithm has a computational cost of $O((p+1)^2)$ in both time and space.
A similar approach lets you determine the effect on regression of deleting a row. Maindonald gives formulas; so do Belsley, Kuh, & Welsh. Thus, if you are looking for a moving window for regression, you can retain data for the window within a circular buffer, adjoining the new datum and dropping the old one with each update. This doubles the update time and requires additional $O(k (p+1))$ storage for a window of width $k$. It appears that $1/k$ would be the analog of the influence parameter.
For exponential decay, I think (speculatively) that you could adapt this approach to weighted least squares, giving each new value a weight greater than 1. There shouldn't be any need to maintain a buffer of previous values or delete any old data.
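For concreteness, here is a minimal NumPy sketch of the row-addition step described above (an illustrative sketch only, not code from Maindonald; the function names are invented):

```python
import numpy as np

def givens(a, b):
    """Return (c, s) with [[c, s], [-s, c]] @ [a, b] = [r, 0] and r >= 0."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def add_row(R, z, x_new, y_new):
    """Fold one new observation (x_new, y_new) into the upper-triangular
    factor R and the transformed response z, one Givens rotation per column."""
    p = R.shape[0]
    t = np.asarray(x_new, dtype=float).copy()   # incoming row, zeroed out below
    ty = float(y_new)
    for j in range(p):
        c, s = givens(R[j, j], t[j])
        Rj, tj = R[j, j:].copy(), t[j:].copy()
        R[j, j:] = c * Rj + s * tj      # rotated row j of R
        t[j:] = -s * Rj + c * tj        # t[j] is now exactly zero
        z[j], ty = c * z[j] + s * ty, -s * z[j] + c * ty
    return R, z

# Start from R = np.zeros((p, p)), z = np.zeros(p); at any time the current
# coefficients solve the triangular system R @ beta = z.
```

A weighted version of the same step (scaling each incoming row and response by a weight) would give the exponential-decay behaviour speculated about above.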
### References
J. H. Maindonald, Statistical Computation. J. Wiley & Sons, 1984. Chapter 4.
D. A. Belsley, E. Kuh, R. E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. J. Wiley & Sons, 1980.
-
2
In that case see also the extensions by Alan Miller jstor.org/stable/2347583. An archive of his Fortran software site is now at jblevins.org/mirror/amiller – onestop Feb 5 '11 at 22:44
1
The last link looks like the method I was going to suggest. The matrix identity they use is known in other places as the Sherman--Morrison--Woodbury identity. It is also quite numerically efficient to implement, but may not be as stable as a Givens rotation. – cardinal Feb 6 '11 at 0:28
1
@cardinal Thank you for the reference and for the advice. I was mainly concerned about stability. In a long sequence of updates, there may be circumstances where the data are nearly collinear and there could also be a lengthy accumulation of numerical errors, so stability seems to be of paramount concern. That is the justification for the Givens rotations (instead of Householder reflections, which take less computation). – whuber♦ Feb 6 '11 at 19:16
2
@suncoolsu Hmm... Maindonald's book was newly published when I started using it :-). – whuber♦ Feb 8 '11 at 6:02
I think recasting your linear regression model into a state-space model will give you what you are after. If you use R, you may want to use package dlm and have a look at the companion book by Petris et al.
-
maybe I'm confused but this appears to refer to a time series model? my model is actually simpler in that the samples are not a time series (effectively they are independent (x->y) samples, they are just accumulated in large volumes over time) – mikera Feb 5 '11 at 19:26
1
Yes, in the general case this is used for time series with non-independent observations; but you can always assume no correlation between successive observations, which gives the special case of interest to you. – F. Tusell Feb 5 '11 at 19:46
You can always just perform gradient descent on the sum of squares cost $E$ wrt the parameters of your model $W$. Just take the gradient of it but don't go for the closed form solution but only for the search direction instead.
Let $E(i; W)$ be the cost of the i'th training sample given the parameters $W$. Your update for the j'th parameter is then
$$W_{j} \leftarrow W_j - \alpha \frac{\partial{E(i; W)}}{\partial{W_j}}$$
where $\alpha$ is a step rate, which you should pick via cross validation or good measure.
This is very efficient and is the way neural networks are typically trained. You can even process lots of samples in parallel (say, 100 or so) efficiently.
Of course more sophisticated optimization algorithms (momentum, conjugate gradient, ...) can be applied.
-
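As a concrete illustration of the stochastic gradient update described in the answer above (a minimal sketch under the usual squared-error loss; the names are made up, not from any particular library):

```python
import numpy as np

def sgd_update(W, b, x, y, lr=1e-3):
    """One stochastic gradient step for the model y ≈ W @ x + b
    on a single sample (x, y) with loss 0.5 * ||W @ x + b - y||^2."""
    err = W @ x + b - y            # residual, shape (M,)
    W -= lr * np.outer(err, x)     # gradient of the loss w.r.t. W
    b -= lr * err                  # gradient of the loss w.r.t. b
    return W, b
```

The learning rate `lr` plays the role of the influence parameter asked for in the question, and each update costs O(N·M) time and space.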
The problem is more easily solved when you rewrite things a little bit:
Y = y
X = [x, 1 ]
then
Y = A*X
A one time-solution is found by calculating
V = X' * X
and
C = X' * Y
Note that V should have size N-by-N and C a size of N-by-M. The parameters you're looking for are then given by:
A = inv(V) * C
Since both V and C are calculated by summing over your data, you can calculate A at every new sample. This has a time complexity of O(N^3), however.
Since V is square and positive semi-definite, an LU decomposition exists, which makes inverting V numerically more stable. There are algorithms to perform rank-1 updates to the inverse of a matrix. Find those and you'll have the efficient implementation you're looking for.
The rank-1 update algorithms can be found in "Matrix computations" by Golub and van Loan. It's tough material, but it does have a comprehensive overview of such algorithms.
Note: The method above gives a least-squares estimate at each step. You can easily add weights to the updates to X and Y. When the values of X and Y grow too large, they can be scaled by a single scalar without affecting the result.
-
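The rank-1 update idea alluded to above (Sherman–Morrison) is usually packaged as recursive least squares. Here is a minimal sketch, assuming the model y ≈ A·x and a forgetting factor `lam` for exponential decay (illustrative only, not taken from Golub & van Loan):

```python
import numpy as np

def rls_update(P, A, x, y, lam=1.0):
    """One recursive-least-squares step.
    P approximates (X'X)^{-1} (N x N), A holds the coefficients (M x N);
    lam < 1 down-weights old samples geometrically."""
    Px = P @ x
    k = Px / (lam + x @ Px)           # gain vector, shape (N,)
    err = y - A @ x                   # prediction error, shape (M,)
    A = A + np.outer(err, k)          # coefficient update
    P = (P - np.outer(k, Px)) / lam   # rank-1 update of the inverse, no inversion
    return P, A

# Typical initialisation: A = np.zeros((M, N)), P = 1e6 * np.eye(N).
```

Each step costs O(N² + N·M) operations rather than a full matrix inversion.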
The standard least-square fit gives regression coefficients
$\beta = ( X^T X )^{-1} X^T Y$
where X is a matrix of M values for each of N data points, and is N×M in size. Y is an N×1 matrix of outputs. $\beta$ of course is an M×1 matrix of coefficients. (If you want an intercept, just make one set of x's always equal to 1.)
To make this online, presumably you just need to keep track of $X^T X$ and $X^T Y$, so one M×M matrix and one M×1 matrix. Every time you get a new data point you update those $M^2+M$ elements, and then calculate $\beta$ again, which costs you an M×M matrix inversion and the multiplication of the M×M matrix by the M×1 matrix.
For example, if M=1, then the one coefficient is
$\beta = \frac{\sum_{i=1}^N{x_i y_i}}{\sum_{i=1}^N{x_i^2}}$
so every time you get a new data point you update both sums and calculate the ratio and you get the updated coefficient.
If you want to damp out the earlier estimates geometrically I suppose you could weight $X^T X$ and $X^T Y$ by $(1-\lambda)$ each time before adding the new term, where $\lambda$ is some small number.
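For the one-dimensional case just described, the damped running sums can be written in a few lines (a sketch of the idea, not a reference implementation):

```python
def update_beta(sxy, sxx, x, y, lam=0.999):
    """Update the damped running sums and return the new slope estimate."""
    sxy = lam * sxy + x * y
    sxx = lam * sxx + x * x
    return sxy, sxx, sxy / sxx
```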
-
1
It's nice to see this simple case explained. Did you notice, though, that the question specifically asks about multivariate regression? It's not so easy to update the denominator of $\beta$ in that case! – whuber♦ Apr 19 at 21:07
I think my answer works still: ie you need to keep track of the MxM matrix $X^T X$ and the Mx1 matrix $X^T Y$. Each element of those matrices is a sum like in the M=1 example. Or am I missing something? – Mark Higgins Apr 24 at 13:17
Yes: in addition to computing a matrix product and applying a matrix to a vector, you now need to invert $X'X$ at each step. That's expensive. The whole point to online algorithms is to replace wholesale expensive steps by cheaper updating procedures. – whuber♦ Apr 24 at 13:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9178420901298523, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/65057-factor-theorem.html
|
Thread:
1. factor theorem
Factorise $P(x)=x^3-7x-6$
I know that the constant term for this equation is -6 . Then , if (x-a) were to be a factor of P(x) , then a must be a factor of -6 .
Why is it so ? My book doesn't further explain .
2. Originally Posted by mathaddict
Factorise $P(x)=x^3-7x-6$
I know that the constant term for this equation is -6 . Then , if (x-a) were to be a factor of P(x) , then a must be a factor of -6 .
Why is it so ? My book doesn't further explain .
Because when you put it in the form:
$P(x) = (x-a)(x^2+bx+c)$
Then a and c must multiply to give -6, and hence both must be factors.
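To finish the example (this completion is mine, not part of the original replies): the candidates for $a$ are the divisors of $-6$, namely $\pm1,\pm2,\pm3,\pm6$. Testing $x=-1$ gives $P(-1)=-1+7-6=0$, so $(x+1)$ is a factor, and dividing it out yields $P(x)=(x+1)(x^2-x-6)=(x+1)(x+2)(x-3)$.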
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473841786384583, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/193935-eigenvalue-matrices.html
|
Thread:
1. Eigenvalue and matrices
Hi,
I want to show that a matrix H that has an eigenvalue t can be used to show that the matrix $\textbf{H}^n$ would have an eigenvalue of $t^n$.
The formal way of doing this i would guess to be mathematical induction, though how would i do the inductive step? (As i couldn't take the determinant of the whole thing)
thanks
2. Re: Eigenvalue and matrices
If $x$ is an eigenvector that has eigenvalue $t$, then $Hx=tx$. Therefore, $H^{n+1}x=H(H^nx)=H(t^nx)=t^n(Hx)=t^{n+1}x$.
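A quick numerical illustration of this fact (my own sketch with NumPy, not part of the thread):

```python
import numpy as np

H = np.random.rand(4, 4)
t, V = np.linalg.eig(H)                 # eigenvalues t, eigenvectors as columns of V
n = 3
Hn = np.linalg.matrix_power(H, n)
# each eigenvector of H is an eigenvector of H^n with eigenvalue t**n
print(np.allclose(Hn @ V, V * t**n))    # True (up to round-off)
```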
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9330390691757202, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/22009?sort=newest
|
## Random Walk anecdote.
I'm looking for an anecdote about a mathematician who studied random walks. I'm attempting to write an article and hope to include the story (but only if I can get the details correct). I'll try to do my best describing it in hopes someone else has heard it and knows a name or the full story.
A mathematician was walking through the park and entertaining mathematical whims. He noticed that he kept running into this same couple as he wandered around aimlessly. He wasn't sure if this behaviour was expected by chance or whether perhaps the female of the couple thought that he was cute. He rushed home to analyse the situation in terms of random walks in two dimensions.
-
Perhaps you should also add the assumption that the park is compact, because it seems to me that if it isn't and if the couple is not walking randomly, then the probability that the mathematician will run into the couple over and over is not 1. – Peter Samuelson Apr 21 2010 at 4:38
## 2 Answers
The anecdote is about Polya, and it is in his contribution, Two incidents, to the book, Scientists at Work: Festschrift in Honour of Herman Wold, edited by T Dalenius, G Karlsson, and S Malmquist, published in Sweden in 1970. It was recently quoted on page 229 of David A Levin and Yuval Peres, Polya's Theorem on random walks via Polya's Urn, Amer Math Monthly 117 (March, 2010) 220-231.
-
Thank you. This is exactly what I was looking for. – Ross Snider Apr 21 2010 at 4:55
My favorite is this one attributed to Kakutani: "A drunk man will find his way home, but a drunk bird may get lost forever." referring to the fact that the simple random walk in $Z^2$ is recurrent while it is transient in $Z^d$ for $d>2$.
Here you can find a reference to the anecdote: http://m759.xanga.com/122558830/item/
-
1
Ahh, in fact I do plan to use this in the article (I already knew the quote and its author). The article is about the number three and where it shows up as a transition point to more interesting and complex behavior. You know, like the 3-body problem, 3-dimension random walks, 3-colorings of planar maps, 3-SAT and NP-Completeness, FLT, or 3 bubble conjecture. I would post on MO for more but I have a suspicion the question would not be received well. As for this answer, it isn't quite an anecdote, although fantastic nevertheless. – Ross Snider Apr 21 2010 at 13:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9618418216705322, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-statistics/69683-poisson-distribution-sum.html
|
# Thread:
1. ## Poisson distribution sum
The question:
Seismic earth tremors occur at an active part of the earth's crust at the average rate of one every two days. What is the most likely number of tremors to occur in a week?
My answer: 0.5*7=3.5
Book's answer: 3
What I want to know is the basis upon which the answer was reached.
How do I round off 3.5 in this situation? Normally it rounds to 4, sadly.
2. Originally Posted by ssadi
The question:
Seismic earth tremors occur at an active part of the earth's crust at the average rate of one every two days. What is the most likely number of tremors to occur in a week?
My answer: 0.5*7=3.5
Book's answer: 3
What I want to know is the basis upon which the answer was reached.
How do I round off 3.5 in this situation? Normally it rounds to 4, sadly.
Your answer is the average number of tremors, not the most likely one.
The number of tremors to occur in a week can be approximately considered to be a Poisson random variable (that's in the title of the thread!) with mean $\lambda=3.5$ (using your computation). In other words, the probability of $k$ tremors in a week is about $e^{-\lambda}\frac{\lambda^k}{k!}$. Compute that value for $k=0,1,2,3,\ldots$ and conclude.
3. Originally Posted by Laurent
Your answer is the average number of tremors, not the most likely one.
The number of tremors to occur in a week can be approximately considered to be a Poisson random variable (that's in the title of the thread!) with mean $\lambda=3.5$ (using your computation). In other words, the probability of $k$ tremors in a week is about $e^{-\lambda}\frac{\lambda^k}{k!}$. Compute that value for $k=0,1,2,3,\ldots$ and conclude.
Okay, but still I would have liked a cut out formula better.
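For what it's worth (this check is mine, not from the thread), tabulating the Poisson probabilities for $\lambda=3.5$ shows why the book answers 3:

```python
from math import exp, factorial

lam = 3.5
for k in range(8):
    print(k, round(exp(-lam) * lam**k / factorial(k), 4))
# k = 3 has the largest probability (about 0.216), just ahead of k = 4 (about 0.189),
# so the most likely count is 3 even though the mean is 3.5.
```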
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9578149914741516, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/177554/how-many-equivalence-classes-does-r-have
|
# How many equivalence classes does $R$ have?
Let $A=\{a,b,c,d,e\}$. Suppose $R$ is an equivalence relation on $A$. Suppose also that $aRd$ and $bRc$, $eRa$ and $cRe$. How many equivalence classes does $R$ have?
My thoughts: (Not sure if I have the right idea...)
### UPDATED/EDITED
Since $R$ is an equivalence relation on $A$ and $aRd$, $bRc$, $eRa$, and $cRe$, then
$$R=\{(a,d),(d,a),(a,a),(d,d),(b,c),(c,b),(c,c),\\ (b,b),(e,a),(a,e),(e,e),(c,e),(e,c)\}$$ (Did I miss any?)
So $R$ has $1$ equivalence class:
• $[a]=[b]=[c]=[d]=[e]=\{a,b,c,d,e\}$
-
1
Well, R has to be transitive and $\,bRc\,\,,\,cRe\,$, so... – DonAntonio Aug 1 '12 at 13:20
You only know that $\{(a,d),(b,c),(e,a),(c,e)\}\subset R$. The problem does not state that this is the entire relation. You need to apply the rules for equivalence relations to deduce the remaining pairs. – Thomas Andrews Aug 1 '12 at 13:23
1
You are TOLD that it is an equivalence relation, so you can assume the transitive, symmetric, and reflexive property all hold, and therefore any consequence of these properties and the given ordered pairs being in the relation. – Geoff Robinson Aug 1 '12 at 13:25
You need to work on the transitivity to get the answer right. – Mark Bennet Aug 1 '12 at 13:44
Your conclusion about the number of equivalence classes is correct. Your list of the elements of $R$ is incomplete; $R$ is, in fact, all of $A\times A$. – Gerry Myerson Aug 1 '12 at 23:29
## 6 Answers
Hint: You are told that $R$ is an equivalence relation. So in particular, since it contains $(a, d)$, it must also contain $(d, a)$, since it is symmetric. So it is larger than you thought it was. Similarly, it must also be transitive…
-
It doesn't make sense to say "none of the elements in R are reflexive", as the reflexive property applies to the relation and not to elements.
What you need to do is make deductions like this:
If we know that $aRd$, then we must have $dRa$ since we are told that $R$ is an equivalence relation, and hence is symmetric.
-
The answer to (Right? Wrong?) is Wrong. Those members are elements of $R$ but not every element. You are given that R is an equivalence relation, so for example you know that (a,a) will also be in R.
Use the axioms of an equivalence relation to see more equivalences. For example eRa and cRe, you can conclude aRc. If you keep doing things like that, you'll soon see the answer.
-
Instead of trying to write down all the pairs in $R$ in a list, it is better to draw a diagram:
````
A----D
|
E--.
    \
B----C
````
Each line connects two elements that you explicitly know are related. Since you're told that $R$ is an equivalence relation, two elements must be related if there is any path between them.
However, the graph is easily seen to be connected, so everything is related to everything else, and there is one equivalence class $\{a,b,c,d,e\}$.
-
We never learned to make diagrams of equivalence classes, sadly. Thanks for your explanation though. – laser295 Aug 1 '12 at 13:47
3
Now you've learned it. – Henning Makholm Aug 1 '12 at 13:48
You're told $R$ contains those 4 pairs; you're not meant to conclude that $R$ contains only those 4 pairs. Indeed, you're told $R$ is an equivalence relation, so it must be reflexive, so it must have, for example, $(a,a)$; it must be symmetric, so, for example, since it has $(a,d)$, it must have $(d,a)$; it must be transitive, so, for example, since it has $(b,c)$ and $(c,e)$, it must have $(b,e)$. Figure out what else it has to have, and then we can talk.
-
Some people find that it's easiest to cast this problem in more familiar terms. You're told that $R$ is an equivalence relation. So is $=$ on a set of numbers, so it will have all the properties of $R$ and so we can dispense with $R$ entirely for the moment and think in terms of numbers represented by the variables $a, b,c, d, e$. You're told that
(1) $a=d$
(2) $b=c$
(3) $e=a$
(4) $c=e$
The equivalence class of, say, $a$ will be all the elements equal to $a$ so we can argue
$a=a$, since anything is equal to itself (i.e., by reflexivity).
$a=d$ by (1)
$a=e$ by (3) and symmetry
$a=c$ since $c=e$ by (4), $e=a$ by (3), and transitivity
$a=b$ since $a=c$, and $c=b$ by (2) and transitivity again
So the set of elements equal to (related to) $a$, namely the equivalence class of $a$ is $\{a,b,c,d,e\}$. In other words, in this case there is just one equivalence class, everything.
-
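As a programmatic cross-check of the graph/connectivity view taken in one of the answers above (an illustrative union–find sketch of mine, not part of the thread):

```python
def equivalence_classes(elements, pairs):
    """Return the classes generated by the given related pairs (union-find)."""
    parent = {x: x for x in elements}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)

    classes = {}
    for x in elements:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

print(equivalence_classes("abcde", [("a", "d"), ("b", "c"), ("e", "a"), ("c", "e")]))
# [{'a', 'b', 'c', 'd', 'e'}] -- a single equivalence class
```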
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 65, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9574995040893555, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2009/07/15/nondegenerate-forms-i/?like=1&source=post_flair&_wpnonce=ae4d6bb289
|
The Unapologetic Mathematician
Nondegenerate Forms I
The notion of a positive semidefinite form opens up the possibility that, in a sense, a vector may be “orthogonal to itself”. That is, if we let $H$ be the self-adjoint transformation corresponding to our (conjugate) symmetric form, we might have a nonzero vector $v$ such that $\langle v\rvert H\lvert v\rangle=0$. However, the vector need not be completely trivial as far as the form is concerned. There may be another vector $w$ so that $\langle w\rvert H\lvert v\rangle\neq0$.
Let us work out a very concrete example. For our vector space, we take $\mathbb{R}^2$ with the standard basis, and we’ll write the ket vectors as columns, so:
$\displaystyle\begin{aligned}\lvert1\rangle&=\begin{pmatrix}1\\{0}\end{pmatrix}\\\lvert2\rangle&=\begin{pmatrix}0\\1\end{pmatrix}\end{aligned}$
Then we will write the bra vectors as rows — the transposes of ket vectors:
$\displaystyle\begin{aligned}\langle1\rvert&=\begin{pmatrix}1&0\end{pmatrix}\\\langle2\rvert&=\begin{pmatrix}0&1\end{pmatrix}\end{aligned}$
If we were working over a complex vector space we’d take conjugate transposes instead, of course. Now it will hopefully make the bra-ket and matrix connection clear if we note that the bra-ket pairing now becomes multiplication of the corresponding matrices. For example:
$\displaystyle\begin{aligned}\langle1\vert1\rangle&=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}1\\{0}\end{pmatrix}=\begin{pmatrix}1\end{pmatrix}\\\langle1\vert2\rangle&=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}0\end{pmatrix}\end{aligned}$
The bra-ket pairing is exactly the inner product we get by declaring our basis to be orthonormal.
Now let’s insert a transformation between the bra and ket to make a form. Specifically, we’ll use the one with the matrix $S=\begin{pmatrix}0&1\\1&0\end{pmatrix}$. Then the basis vector $\lvert1\rangle$ is just such a one of these vectors “orthogonal” to itself (with respect to our new bilinear form). Indeed, we can calculate
$\displaystyle\langle1\rvert S\lvert1\rangle=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1\\{0}\end{pmatrix}=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}0\end{pmatrix}$
However, this vector is not totally trivial with respect to the form $S$. For we can calculate
$\displaystyle\langle2\rvert S\lvert1\rangle=\begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1\\{0}\end{pmatrix}=\begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}1\end{pmatrix}$
Now, all this is prologue to a definition. We say that a form $B$ (symmetric or not) is “degenerate” if there is some non-zero ket vector $\lvert v\rangle$ so that for every bra vector $\langle w\rvert$ we find
$\displaystyle\langle w\rvert B\lvert v\rangle=0$
And, conversely, we say that a form is “nondegenerate” if for every ket vector $\lvert v\rangle$ there exists some bra vector $\langle w\rvert$ so that
$\displaystyle\langle w\rvert B\lvert v\rangle\neq0$
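In finite dimensions this can be checked mechanically: $B$ is degenerate exactly when $B\lvert v\rangle=0$ for some nonzero $\lvert v\rangle$, i.e. when the matrix of $B$ is singular. A small NumPy sketch of the example above (my illustration, not part of the post):

```python
import numpy as np

S = np.array([[0., 1.],
              [1., 0.]])
e1, e2 = np.array([1., 0.]), np.array([0., 1.])

print(e1 @ S @ e1)                      # 0.0 -> |1> is "orthogonal to itself" under S
print(e2 @ S @ e1)                      # 1.0 -> yet |1> is not invisible to the form
print(np.linalg.matrix_rank(S) == 2)    # True -> S is nonsingular, hence nondegenerate
```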
Posted by John Armstrong | Algebra, Linear Algebra
3 Comments »
1. “We say that a form (symmetric or not) is “degenerate” if there is some ket vector ”
Do you want to say non-zero ket vector?
Comment by Johan Richter | July 19, 2009 | Reply
2. Yes, sorry. I caught this in the next post, but didn’t here.
Comment by John Armstrong | July 19, 2009 | Reply
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 21, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9150304794311523, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/116446/random-walk-on-n-cycle?answertab=active
|
# Random walk on n-cycle
For a graph G, let W be the (random) vertex occupied at the first time the random walk has visited every vertex. That is, W is the last new vertex to be visited by the random walk. Prove the following remarkable fact: for random walk on an n-cycle, W is uniformly distributed over all vertices different from the starting vertex.
I would really appreciate if you could help me!
-
1
A good start would be to reformulate the claim to be about an ordinary random walk on $\mathbb Z$. The claim is then that at the first time $n-1$ different nodes have been visited, the number $u$ of visited nodes to the right of the starting point is uniformly distributed between $0$ and $n-2$. In this form it looks like it should be amenable to an induction proof, if you strengthen it to say something about the probability that the rightmost node (and not the leftmost one) was the last one visited, as a function of $n$ and $u$. – Henning Makholm Mar 4 '12 at 21:54
1
I'm curious to know why you changed the accepted answer. – joriki Mar 5 '12 at 23:53
I wish I could accept two answers. I am very grateful to both of you and I wanted to express my gratitude by accepting both answers. And Didier Piau's answer made me realize that your elegant solution requires a notable amount of mathematical maturity, which I may lack. On the other hand, your solution is elegant indeed. You made me think; I think I'll change the accepted answer again :) – benny Mar 6 '12 at 5:33
1
In other words, you accept a solution because (people tell you) it is elegant although you do not understand how it works nor why it is true. O well. (To be clear: PLEASE do not change again.) – Did Mar 6 '12 at 6:07
1
Note that while you automatically get notified of comments under your post, others that you respond to (me in this case) don't get notified unless you ping them using the @username idiom. – joriki Mar 6 '12 at 11:27
## 2 Answers
In order to reach $W$ last, the walk has to visit one of $W$'s neigbours for the last time and then go all around the cycle to arrive at $W$ from the other side. Let's call a segment of a random walk on the cycle that starts at some vertex $V$ and reaches one of $V$'s neighbours by going around the cycle without returning to $V$ a final segment. Then the last vertex reached after time $t$ is the final vertex of the first final segment that begins at or after $t$. Consider a random walk on the cycle, and for every final segment that ends at $W$, consider the stretch of times $t$ for which it is the first final segment that begins at or after $t$. If we can show that all vertices except $W$ occur with the same frequency in this stretch, then it will follow that conversely $W$ is reached last with the same probability from all other vertices, and thus all vertices $W$ are reached last with the same probability from a given initial vertex.
But the stretch extends precisely up to the last visit to $W$ before the segment, so the frequency of vertices in it is just that between any two successive visits to $W$, which is just the frequency of occurrence of the vertices other than $W$ in the walk in general, which is the same for all vertices.
P.S.: It's actually not too difficult to determine the probability of each vertex to be the last vertex visited at any stage in the process. The vertices already visited always form an interval. If the current position is at the end of an interval of $k$ visited vertices, every unvisited vertex has the same $1/(n-1)$ probability of becoming the last one, except the first unvisited vertex at the other end of the interval, which has $k/(n-1)$. This is because for all vertices except this one, exactly the same realizations of the walk will make them the last vertex as would be the case if no vertices had been visited yet. Thus, every vertex has a constant probability $1/(n-1)$ of ending up as the last vertex until the walk first visits one of its neighbours.
If the current position is in the interior of the interval of visited vertices, the probability of reaching one end of the interval before the other varies linearly over the interval, and thus so do the probabilities of the two unvisited vertices bordering the interval to become the last vertex – the sum of their probabilities is $(k+1)/(n-1)$, and this shifts by $1/(n-1)$ by each move, in favour of the vertex that the move moves away from.
-
Consider a simple symmetric random walk on the integer line starting from $0$ and, for some integers $-a\leqslant 0\leqslant b$ such that $(a,b)\ne(0,0)$, the event that the walk visits every vertex in $[-a,b]$ before visiting vertex $-a-1$ or vertex $b+1$. This is the disjoint union of two events:
• Event 1: Starting from $0$, the walk visits $b$ before visiting $-a$, then, starting from $b$, it visits $-a$ before visiting $b+1$,
• Event 2: Starting from $0$, the walk visits $-a$ before hitting $b$, then, starting from $-a$, it visits $b$ before hitting $-a-1$.
Recall that the probability that a simple symmetric random walk starting from $i$ visits $i-j\leqslant i$ before visiting $i+k\geqslant i$ is $\frac{k}{k+j}$, for every nonnegative integers $j$ and $k$.
Hence, the probability of Event 1 is $\frac{a}{a+b}\cdot\frac1{a+b+1}$, the probability of Event 2 is $\frac{b}{a+b}\cdot\frac1{a+b+1}$, and the probability of their union is $\frac1{a+b+1}$. Note that this last formula is also valid when $a=b=0$.
If $b=x-1$ and $a=n-x-1$ with $1\leqslant x\leqslant n-1$ and $n\geqslant2$, then $a+b+1=n-1$ hence the computation above shows that the probability that the last visited vertex in the discrete circle $\{0,1,\ldots,n-1\}$ is $x$ is $\frac1{a+b+1}=\frac1{n-1}$. That is, the probability of the event $[W=x]$ is $\frac1{n-1}$ for each $x\ne0$ in the circle, and $W$ is uniformly distributed on the circle minus the starting point of the random walk.
-
1
Usually it tends to be you who finds the more elegant solutions that explain the unexpectedly simple result without undue calculation; this time it's the other way around :-) – joriki Mar 5 '12 at 9:19
1
@joriki: Yes. But I know as an experimental fact that a rigorous justification of each step of the elegant solution requires a notable amount of mathematical maturity. – Did Mar 5 '12 at 17:56
1
Didier, I hope my comment in connection with my curiosity about the change of the accepted answer and the subsequent re-change didn't create a "competitive" impression -- I was just pleased to find a solution involving less calculation than yours because it's more often the other way around :-) By the way, I still owe you an answer to an earlier comment -- yes, I do stay up during the night a lot, but I'm trying to cut down on that :-) – joriki Mar 6 '12 at 11:32
1
@joriki: You certainly do not have to worry about this (and your answer is excellent, naturally). You already know this but let me say it nevertheless: first, I am often puzzled by the acceptation choices on MSE; second, this puzzlement does not concern you as an answerer: you proposed a (mathematically sophisticated) solution, it got accepted hence it can only mean the OP is happy, everything is fine. (Unrelated: in my experience, staying up at night to do maths is a gambit which is difficult to refuse but is often lost, in the long run....) – Did Mar 6 '12 at 13:25
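A quick Monte Carlo check of the statement (my own sketch, not part of either answer):

```python
import random
from collections import Counter

def last_new_vertex(n):
    """Simple random walk on the n-cycle started at 0; return the last new vertex."""
    pos, unvisited = 0, set(range(1, n))
    while unvisited:
        pos = (pos + random.choice((-1, 1))) % n
        unvisited.discard(pos)
    return pos   # the step that emptied the set landed on the last new vertex

counts = Counter(last_new_vertex(8) for _ in range(100_000))
print(sorted(counts.items()))   # each of the vertices 1..7 shows up about 1/7 of the time
```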
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 70, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485428333282471, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/geodesics+general-relativity
|
# Tagged Questions
1answer
76 views
### Killing vector argument gone awry?
What has gone wrong with this argument?! The original question A space-time such that $$ds^2=-dt^2+t^2dx^2$$ has Killing vectors $(0,1),(-\exp(x),\frac{\exp(x)}{t}), ...
2answers
108 views
### Geodesic equations
I am having trouble understanding how the following statement (taken from some old notes) is true: For a 2 dimensional space such that $$ds^2=\frac{1}{u^2}(-du^2+dv^2)$$ the timelike geodesics ...
1answer
42 views
### “WLOG” re Schwarzschild geodesics
Why, when studying geodesics in the Schwarzschild metric, one can WLOG set $$\theta=\frac{\pi}{2}$$ to be equatorial? I assume it is so because when digging around the internet, most references seem ...
1answer
64 views
### Why four velocity under covariant differential is considered to be zero?
In Einstein's general theory of relativity the elements of four velocity $U^{\mu} (\gamma c, \gamma v)$ under covariant differential is considered to be zero, why? $$\mathcal{D} U^{\mu}=0$$ in other ...
2answers
63 views
### What is path of light in the accelerating elevator?
Mathematically (by mathematically I mean by equations), what is the path of light in the accelerating elevator? What is the difference between an ordinary derivative and a covariant derivative (which is ...
1answer
69 views
### The role of the affine connection the geodesic equation
I apologise in advance that my knowledge of differential geometry and GR is very limited. In general relativity the equation of motion for a particle moving only under the influence of gravity is ...
0answers
45 views
### Naked singularity and null coordinates
I'm trying to understand the notion of a naked singularity on a more mathematical level (intuitively, it's a singularity "one can see and poke with a stick", but I'm having troubles on how to actually ...
0answers
48 views
### Naked singularity and extendable geodesics
I'm currently trying to understand the notion of a naked singularity. After consulting books by Wald and Choquet-Bruhat, it seems that for a naked singularity one must have that the causal curves can ...
1answer
52 views
### Can you enter a timelike hypersurface?
As I understand it, a timelike hypersurface is one that has only spacelike normal vectors. But does this not imply that the geodesic of a particle crossing it must be spacelike at that point? But ...
1answer
356 views
### Physical significance of Killing vector field along geodesic
Let us denote by $X^i=(1,\vec 0)$ the Killing vector field and by $u^i(s)$ a tangent vector field of a geodesic, where $s$ is some affine parameter. What physical significance do the scalar quantity ...
0answers
54 views
### Gravitational effects and metric spaces
Could somebody please explain something regarding the Nordstrom metric? In particular, I am referring to the last part of question 3 on this sheet -- about the freely falling massive bodies. My ...
1answer
115 views
### Homogeneous gravitational field and the geodesic deviation
In General Relativity (GR), we have the geodesic deviation equation (GDE) ...
4answers
330 views
### To which extent is general relativity a gauge theory?
In quantum mechanics, we know that a change of frame -- a gauge transform -- leaves the probability of an outcome measurement invariant (well, the square modulus of the wave-function, i.e. the ...
2answers
272 views
### Action for a point particle in a curved spacetime
Is this action for a point particle in a curved spacetime correct? $$\mathcal S =-Mc \int ds = -Mc \int_{\xi_0}^{\xi_1}\sqrt{g_{\mu\nu}(x)\frac{dx^\mu(\xi)}{d\xi} \frac{dx^\nu(\xi)}{d\xi}} \ \ d\xi$$
1answer
229 views
### Potential Energy in General Relativity
I often hear about how general relativity is very complicated because all forms of energy are considered, including gravitation's own gravitational binding energy. I have two questions: In ...
1answer
255 views
### Problem with convergent geodesics at 2D sphere
There is a chapter on general relativity in the book Spacetime Physics Introduction To Special Relativity by Taylor and Wheeler, which qualitatively explains how attractive gravitational force can be ...
0answers
47 views
### Cauchy Problem in Convex Neighborhood
While reading the reference Eric Poisson and Adam Pound and Ian Vega,The Motion of Point Particles in Curved Spacetime, available here, there is something that I don't quite understand. ...
2answers
237 views
### How to think of the harmonic oscillator equation in terms of “acceleration = gradient”
This is related to another question I just asked where I learned that the equation of motion of a harmonic oscillator is expressed as: $$\ddot{x}+kx=0$$ What little physics I grasp centers on ...
1answer
618 views
### Why is light described by a null geodesic?
I'm trying to wrap my head around how geodesics describe trajectories at the moment. I get that for events to be causally connected, they must be connected by a timelike curve, so free objects must ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9177869558334351, "perplexity_flag": "middle"}
|
http://en.wikisource.org/wiki/On_the_Effect_of_the_Motion_of_a_Body_upon_the_Velocity_with_which_it_is_traversed_by_Light
|
# On the Effect of the Motion of a Body upon the Velocity with which it is traversed by Light
From Wikisource
On the Effect of the Motion of a Body upon the Velocity with which it is traversed by Light (1860) by Hippolyte Fizeau
Philosophical Magazine, Series 4, vol. 19, pp. 245-260, Internet Archive
XXXII. On the Effect of the Motion of a Body upon the Velocity with which it is traversed by Light.
By M. H. Fizeau.
Many theories have been proposed with a view of accounting for the phænomenon of the aberration of light according to the undulatory theory. In the first instance Fresnel, and more recently Doppler, Stokes, Challis, and several others have published important researches on this subject; though none of the theories hitherto proposed appear to have received the complete approval of physicists. Of the several hypotheses which have been necessitated by the absence of any definite idea of the properties of luminiferous æther, and of its relations to ponderable matter, not one can be considered as established; they merely possess different degrees of probability.
On the whole these hypotheses may be reduced to the following three, having reference to the state in which the æther ought to be considered as existing in the interior of a transparent body. Either, first, the æther adheres or is fixed to the molecules of the body, and consequently shares all the motions of the body; or secondly, the æther is free and independent, and consequently is not carried with the body in its movements; or, thirdly, only a portion of the æther is free, the rest being fixed to the molecules of the body and, alone, sharing its movements.
The last hypothesis was proposed by Fresnel, in order at once to satisfy the conditions of the aberration of light and of a celebrated experiment of Arago's, which proved that the motion of the earth does not affect the value of the refraction suffered by the light of a star on passing through a prism. Although these two phænomena may be explained with admirable precision by means of this hypothesis, still it is far from being considered at present as an established truth, and the relations between æther and matter are still considered, by most, as unknown. The mechanical conception of Fresnel has been regarded by some as too extraordinary to be admitted without direct proofs; others consider that the observed phænomena may also be satisfied by one of the other hypotheses; and others, again, hold that certain consequences of the hypothesis in question are at variance with experiment.
The following considerations led me to attempt an experiment the result of which promised, I thought, to throw light on the question.
It will be observed that, according to the first hypothesis, the velocity with which light traverses a body must vary with the motion of that body. If the motions of the body and the ray are like-directed, the velocity of light ought to be increased by the whole velocity of the body.
If the æther be perfectly free, the velocity of light ought not to be altered by the motion of the body.
Lastly, if the body when moving only carries with it a portion of the æther, then the velocity of light ought to be increased by a fractional part of the velocity of the body and not by the whole velocity, as in the first case. This consequence is not as evident as the two preceding ones, though Fresnel has shown that it is supported by mechanical considerations of a very probable nature.
The question then resolves itself to that of determining with accuracy the effect of the motion of a body upon the velocity with which light traverses it.
It is true that the velocity with which light is propagated is so immensely superior to any we are able to impart to a body, that any change in the first velocity must in general be inappreciable. Nevertheless, by combining the most favourable circumstances, it appeared to be possible to submit to a decisive test at least two media, air and water, to which, on account of the mobility of their particles, a great velocity may be imparted.
We owe to Arago a method of observation, founded on the phænomena of interference, which is well suited to render evident the smallest variation in the index of refraction of a body, and hence also the least change in the velocity with which the body is traversed by light; for, as is well known, this velocity is inversely proportional to the refracting index. Arago and Fresnel have both shown the extraordinary sensitiveness of this method by several very delicate observations, such as that on the difference of refraction between dry and moist air.
A method of observation founded upon this principle appeared to me to be the only one capable of rendering evident any change of velocity due to motion. It consists in obtaining interference bands by means of two rays of light after their passage through two parallel tubes, through which air or water can be made to flow with great velocity in opposite directions. The especial object before me necessitated several new arrangements, which I proceed to indicate.
With respect to the intensity of light, formidable difficulties had necessarily to be encountered. The tubes, which were of glass and 5.3 millims. in diameter, had to be traversed by light along their centres, and not near their sides; the two slits, therefore, had to be placed much further apart than is ordinarily the case, on which account the light would, in the absence of a special contrivance, have been very feeble at the point where the interference bands are produced.
This inconvenience was made to disappear by placing a convergent lens behind the two slits; the bands were then observed at the point of concourse of the two rays, where the intensity of light was very considerable.
The length of the tubes being tolerably great, 1.487 metre, it was to be feared that some difference of temperature or pressure between the two tubes might give rise to a considerable displacement of the bands, and thus completely mask the displacement due to motion.
This difficulty was avoided by causing the two rays to return towards the tubes by means of a telescope carrying a mirror at its focus. In this manner each ray is obliged to traverse the two tubes successively, so that the two rays having travelled over exactly the same path, but in opposite directions, any effect due to difference of pressure or temperature must necessarily be eliminated by compensation. By means of various tests I assured myself that this compensation was complete, and that whatever change in the temperature or density of the medium might be produced in a single tube, the bands would preserve exactly the same position. According to this arrangement, the bands had to be observed at the point of departure itself of the rays: solar light was admitted laterally, and was directed towards the tubes by means of reflexion from a transparent mirror; after their double journey through the tubes, the rays returned and traversed the mirror before reaching the place of interference, where the bands were observed by means of a graduated eye-piece.
The double journey performed by the rays had also the advantage of increasing the probable effect of motion; for this effect must be the same as if the tubes had double the length and were only traversed once.
This arrangement also permitted the employment of a very simple method for rendering the bands broader than they would otherwise have been in consequence of the great distance (9 millims.) between the slits. This method consisted in placing a very thick plate of glass before one of the slits, and inclining the same in such a manner that, by the effect of refraction, the two slits had the appearance of being very close to each other: in this manner the bands become as broad as they would be if the two slits were, in reality, as near each other as they appear to be; and instead of the intensity of light being sensibly diminished by this expedient, it may, in fact, be greatly augmented by giving greater breadth to the source of light. By causing the inclination of the glass to vary, the breadth of the bands may be varied at pleasure, and thus the magnitude most convenient for precisely observing their displacement may be readily given to them.
I proceed to describe the disposition of the tubes, and the apparatus destined to put the water in motion.
The two tubes, placed side by side, were closed at each extremity by a single glass plate, fixed with gum-lac in a position exactly perpendicular to their common direction. Near each extremity was a branch tube, forming a rounded elbow, which established a communication with a broader tube reaching to the bottom of a flask; there were thus four flasks communicating with the four extremities of the tubes.
Into one flask, which we will suppose to be full of water, compressed air, borrowed from a reservoir furnished with an air-pump, was introduced through a communicating tube. Under the influence of this pressure the water rose from the flask into the tube, which it then traversed in order to enter the flask at the opposite end. The latter could also receive compressed air, and then the liquid returned into the first flask after traversing the tube in an opposite direction. In this manner a current of water was obtained whose velocity exceeded 7 metres per second. A similar current, but in an opposite direction, was produced at the same time in the other tube.
Within the observer's reach were two cocks fixed to the reservoir of air; on opening either, currents, opposite in direction, were established in both tubes; on opening the other cock the currents in each tube were simultaneously reversed.
The capacity of the reservoir, containing air at a pressure of about two atmospheres, amounted to 15 litres (half a cubic foot), that of each flask to about 2 litres; the latter were divided into equal volumes, and the velocity of the water was deduced from the section of the tubes, and from the time of efflux of half a litre.
The apparatus above described was only employed for the experiments with water in motion: with some modifications it might also be used for air; but my experiments on moving air had been previously made with a slightly different apparatus, of which more hereafter, and the results had been found quite conclusive. I had already proved that the motion of air produces no appreciable displacement of the bands. But I shall return to this result and give further details.
For water there is an evident displacement. The bands are displaced towards the right when the water recedes from the observer in the tube at his right, and approaches him in the tube on his left.
The displacement of the bands is towards the left when the direction of the current in each tube is opposite to that just defined.
During the motion of the water the bands remain well defined, and move parallel to themselves, without the least disorder, through a space apparently proportional to the velocity of the water. With a velocity of 2 metres per second even, the displacement is perceptible; for velocities between 4 and 7 metres it is perfectly measureable.
In one experiment, where a band occupied five divisions of the micrometer, the displacement amounted to 1.2 divisions towards the right and 1.2 divisions towards the left, the velocity of the water being 7.059 metres per second. The sum of the two displacements, therefore, was equal to 2.4 divisions, or nearly half the breadth of a band.
In anticipation of a probable objection, I ought to state that the system of the two tubes and four flasks, in which the motion of the water took place, was quite isolated from the other parts of the apparatus: this precaution was taken in order to prevent the pressure and shock of the water from producing any accidental flexion in parts of the apparatus whose motion might influence the position of the bands. I assured myself, however, that no such influence was exerted, by intentionally imparting motions to the system of the two tubes.
After establishing the existence of the phænomenon of displacement, I endeavoured to estimate its magnitude with all possible exactitude. To avoid all possible sources of error, I varied the magnification of the bands, the velocity of the water, and even the nature of the divisions of the micrometer, so as to be unable to predict the magnitude of the displacements before measuring them. For in measuring small quantities, where our own power of estimating has to play a great part, the influence of any preconception is always to be feared; I think, however, that the result I have obtained is altogether free from this cause of error.
For the most part the observations were made with a velocity of 7.059 metres per second; in a certain number the velocity was 5.515 metres, and in others 3.7 metres. The magnitudes observed have been all reduced to the maximum velocity 7.059 metres, and referred to the breadth of a band as unity.
| Displacements of the bands for a mean velocity of water equal to 7.059 metres per second. | Differences between the observed displacements and their mean value. |
|---|---|
| 0.200 | -0.030 |
| 0.220 | -0.010 |
| 0.240 | +0.010 |
| 0.167 | -0.063 |
| 0.171 | -0.059 |
| 0.225 | -0.005 |
| 0.247 | +0.017 |
| 0.225 | -0.005 |
| 0.214 | -0.016 |
| 0.230 | 0.000 |
| 0.224 | -0.006 |
| 0.247 | +0.017 |
| 0.224 | -0.006 |
| 0.307 | +0.077 |
| 0.307 | +0.077 |
| 0.256 | +0.026 |
| 0.240 | +0.010 |
| 0.240 | +0.010 |
| 0.189 | -0.041 |
| Sum: 4.373 | |
| Mean: 0.23016 | |
By doubling the mean value we have 0.46, nearly half the breadth of a band, which represents the magnitude of the displacement produced by reversing the direction of the current in each tube.
To show the deviations on each side, the differences between the several observed displacements and the mean value of all have been inserted in the Table. It will be seen that, in general, they represent a very small fraction of the breadth of a band; the greatest deviation does not exceed one-thirteenth of the breadth of a band.
These differences are due to a difficulty which could not be overcome; the displacement remained at its maximum but for a very short period, so that the observations had to be made very rapidly. Had it been possible to maintain the velocity of the current of water constant for a greater length of time, the measurements would have been more precise; but this did not appear to be possible without considerably altering the apparatus, and such alterations would have retarded the prosecution of my research until the season was no longer favourable for experiments requiring solar light.
I proceed to compare the observed displacement with those which would result from the first and third hypotheses before alluded to. As to the second hypothesis, it may be at once rejected; for the very existence of displacements produced by the motion of water is incompatible with the supposition of an æther perfectly free and independent of the motion of bodies.
In order to calculate the displacement of the bands under the supposition that the æther is united to the molecules of bodies in such a manner as to partake of their movements, let
$v$ be the velocity of light in a vacuum,
$v'$ the velocity of light in water when at rest,
$u$ the velocity of the water supposed to be moving in a direction parallel to that of the light. It follows that
$v'+u$ is the velocity of light when the ray and the water move in the same direction, and
$v'-u$ when they move in opposite directions.
If $\Delta$ be the required retardation and E the length of the column of water traversed by each ray, we have, according to the principles proved in the theory of the interference of light,
$\Delta=E\left(\frac{v}{v'-u}-\frac{v}{v'+u}\right),$
or
$\Delta=2E\frac{u}{v}\frac{v^{2}}{v'^{2}-u^{2}}.$
Since $u$ is only the thirty-three millionth part of $v$, this expression may, without appreciable error, be reduced to
$\Delta=2E\frac{u}{v}\cdot\frac{v^{2}}{v'^{2}}$
If $m=\tfrac{v}{v'}$ be the index of refraction of water, we have the approximate formula
$\Delta=2E\frac{u}{v}m^{2}$
Since each ray traverses the tubes twice, the length E is double the real length of the tubes. Calling the latter L = 1.4875 metre, the preceding formula becomes
$\Delta=4L\frac{u}{v}m^{2}$
and the numerical calculation being performed, we find
$\Delta$ = 0.0002418 millim.
Such is the difference of path which, under the present hypothesis, ought to exist between the two rays.
Strictly speaking, this number has reference to a vacuum, and ought to be divided by the index of refraction for air; but this index differs so little from unity, that, for the sake of simplicity, the correction, which would not alter the last figure by a unit, may be neglected.
The above quantity being divided by the length of an undulation, will give the displacement of the bands in terms of the breadth of one of them. In fact, for a difference of path amounting to 1, 2, . . . $m$ undulations, the system of bands suffers a displacement equal to the breadth of 1, 2, . . . $m$ bands.
For the ray E the length of an undulation is $\lambda$ = 0.000526 millim., and the rays about it appear to preserve the greatest intensity after the light has traversed a rather considerable thickness of water. Selecting this ray, then, we find for the displacement the value
$\frac{\Delta}{\lambda}=0.4597.$
Had, therefore, the æther participated fully in the motion of the water, in accordance with the hypothesis under consideration, a displacement of 0.46 of a band would have been observed in the foregoing experiments. But the mean of our observations gave only 0.23; and on examining the greatest particular values, it will be found that none approached the number 0.46. I may even remark that the latter number ought to be still greater, in consequence of a small error committed in the determination of the velocity of the water; an error whose tendency is known, although, as will soon be seen, it was impossible to correct it perfectly.
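As a quick cross-check of the figures quoted above (my addition, not part of the memoir), the full æther-drag prediction can be reproduced with a few lines of Python. The speed of light and the index of refraction of water below are modern round values, which is why the last digits differ slightly from Fizeau's own numbers.

```python
# Full aether-drag hypothesis: Delta = 4 L (u/v) m^2, displacement = Delta / lambda
L = 1.4875         # length of each tube, metres
u = 7.059          # velocity of the water, metres per second
v = 3.0e8          # speed of light in vacuum, m/s (modern round value)
m = 1.33           # index of refraction of water (assumed round value)
lam = 0.000526e-3  # wavelength of the E ray, metres (0.000526 mm)

delta = 4 * L * (u / v) * m**2   # difference of path, metres
print(delta * 1e3)               # ~0.00025 mm, compare with the quoted 0.0002418 mm
print(delta / lam)               # ~0.47 of a band, compare with the quoted 0.4597
```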
I conclude, then, that this hypothesis does not agree with experiment. We shall next see that, on the contrary, the third, or Fresnel's hypothesis, leads to a value of the displacement which differs very little from the result of observation.
We know that the ordinary phænomena of refraction are due to the fact that light is propagated with less velocity in the interior of a body than in a vacuum. Fresnel supposes that this change of velocity occurs because the density of the æther within a body is greater than that in a vacuum. Now for two media whose elasticity is the same, and which differ only in their densities, the squares of the velocities of propagation are inversely proportional to these densities; that is,
$\frac{D'}{D}=\frac{v^{2}}{v'^{2}}$
D and D' being the densities of the æther in a vacuum and in the body, and $v, v'$ the corresponding velocities. From the above we easily deduce the relations
$D'=D\frac{v^{2}}{v'^{2}},\ D'-D=D\frac{v^{2}-v'^{2}}{v'^{2}},$
the latter of which gives the excess of density of the interior æther.
It is assumed that when the body is put in motion, only a part of the interior æther is carried along with it, and that this part is that which causes the excess in the density of the interior over that of the surrounding æther; so that the density of this moveable part is D'-D. The other part, which remains at rest during the body's motion, has the density D.
The question now arises, With what velocity will the waves be propagated in a medium thus constituted of an immoveable and a moveable part, when for the sake of simplicity we suppose the body to be moving in the direction of the propagation of the waves?
Fresnel considers that the velocity with which the waves are propagated then becomes increased by the velocity of the centre of gravity of the stationary and moving portions of æther. Now $u$ being the velocity of the body,
$\tfrac{D'-D}{D'}u$
will be the velocity of the centre of gravity of the system in question, and according to the last formula this expression is equal to
$\frac{v^{2}-v'^{2}}{v^{2}}u$
Such, then, is the quantity by which the velocity of light will be augmented; and since $v'$ is the velocity when the body is at rest,
$v'+\frac{v^{2}-v'^{2}}{v^{2}}u$ and $v'-\frac{v^{2}-v'^{2}}{v^{2}}u$
will be the respective velocities when the body moves with and against the light.
By means of these expressions the corresponding displacement of the bands in our experiment may be calculated in exactly the same manner as before. For the difference of path we have the value
$\Delta=E\left\{ \frac{v}{v'-\frac{v^{2}-v'^{2}}{v^{2}}u}-\frac{v}{v'+\frac{v^{2}-v'^{2}}{v^{2}}u}\right\}$,
which by reduction and transformation becomes
$\Delta=2E\frac{u}{v}\left\{ \frac{v^{2}-v'^{2}}{v'^{2}-u^{2}\left(\frac{v^{2}-v'^{2}}{v^{2}}\right)^{2}}\right\}$.
Taking into consideration the smallness of $u$ with respect to $v'\left(\tfrac{u}{v'}=\tfrac{1}{33000000}\right)$, and the circumstance that the coefficient of $u^2$ differs little from unity, the term in $u^2$ may, without appreciable error, be neglected, and the above expression considerably simplified. In fact, if $m$ be the index of refraction, and $L=\tfrac{1}{2}E$ the length of each tube, we have approximately
$\Delta=4L\frac{u}{v}\left(m^{2}-1\right),$
whence by numerical calculation we deduce
$\Delta$ = 0.00010634 millim.
On dividing this difference of path by the length $\lambda$ of an undulation, the magnitude of the displacement becomes
$\frac{\Delta}{\lambda}=0.2022,$
the observed value being 0.23.
These values are almost identical; and what is more, the difference between observation and calculation may be accounted for with great probability by the presence of the before-mentioned error in estimating the velocity of the water. I proceed to show that the tendency of this error may be assigned, and that analogy permits us to assume that its effect must be very small.
The velocity of the water in each tube was calculated by dividing the volume of water which issued per second from one of the flasks by the sectional area of the tube. But by this method it is only the mean velocity of the water which is determined; in other words, that which would exist provided the several threads of liquid at the centre and near the sides of the tube moved with equal rapidity. It is evident, however, that this cannot be the case; for the resistance opposed by the sides of the tube, acting in a more immediate manner on the neighbouring threads of liquid, tends to diminish their velocity more than it does that of the threads nearer the centre of the tube. The velocity of the water in the centre of the tubes, therefore, must be greater than that of the water near the sides, and consequently also greater than the mean of both velocities.
Now the slits placed before each tube to admit the rays whose interference was observed, were situated in the middle of the circular ends of the tubes; so that the rays necessarily traversed the central zones, where the velocity of the water exceeded the mean velocity[2].
The law followed by these variations of velocity in the motion of water through tubes not having been determined, it was not possible to introduce the necessary corrections. Nevertheless analogy indicates that the error resulting therefrom cannot be considerable. In fact, this law has been determined in the case of water moving through open canals, where the same cause produces a similar effect; the velocity in the middle of the canal and near the surface of the water is there also greater than the mean velocity. It has been found that, for values of the mean velocity included between 1 and 5 metres per second, the maximum velocity is obtained by multiplying this mean velocity by a certain coefficient which varies from 1.23 to 1.11. Analogy therefore permits us to assume that in our case the correction to be introduced would be of the same order of magnitude.
Now on multiplying $u$ by 1.1, 1.15, and 1.2, and calculating the corresponding values of the displacement of the bands, we find in place of 0.20 the values 0.22, 0.23, 0.24 respectively; whence it will be seen that in all probability the correction would tend to cause still greater agreement between the observed and the calculated results. We may presume, then, that the small difference which exists between the two values depends upon a small error in estimating the real velocity of the water; which error cannot be rectified in a satisfactory manner, in consequence of the absence of sufficiently accurate data.
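The Fresnel prediction and the velocity correction just described can be checked the same way. This is again a sketch with assumed modern constants, not Fizeau's own computation, so the figures come out a few hundredths higher than his 0.20/0.22/0.23/0.24.

```python
# Fresnel's hypothesis: Delta = 4 L (u/v) (m^2 - 1), displacement = Delta / lambda
L, v, m = 1.4875, 3.0e8, 1.33   # tube length (m), speed of light (m/s), index of water
lam = 0.000526e-3               # wavelength of the E ray, metres

def bands(u):
    """Displacement in band-widths for a water velocity u (m/s)."""
    return 4 * L * (u / v) * (m**2 - 1) / lam

print(bands(7.059))                       # ~0.20, compare with the calculated 0.2022
for factor in (1.1, 1.15, 1.2):           # central velocity exceeding the mean velocity
    print(factor, bands(7.059 * factor))  # same upward trend as the quoted 0.22, 0.23, 0.24
```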
Thus the displacement of the bands caused by the motion of water, as well as the magnitude of this displacement, may be explained in a satisfactory manner by means of the theory of Fresnel.
It was before observed that the motion of air causes no perceptible displacement of the bands produced by the interference of two rays which have traversed the moving air in opposite directions. This fact was established by means of an apparatus which I will briefly describe.
A pair of bellows, loaded with weights and worked by a lever, impelled air forcibly through two parallel copper tubes whose extremities were closed by glass plates. The diameter of each tube was 1 centimetre, and its effective length 1.495 metre; the direction of the motion in one tube was opposite to that in the other, and the pressure under which this motion took place was measured by a manometer placed at the entrance of the tubes; it could be raised to 3 centimetres of mercury.
The velocity of the air was deduced from the pressure and from the dimensions of the tubes, according to the known laws of the efflux of gases. The value thus found was checked by means of the known volume of the bellows, and the number of strokes necessary to produce a practically constant pressure at the entrance of the tubes. A velocity of 25 metres per second could easily be imparted to the air; occasionally greater velocities were reached, but their values remained uncertain.
In no experiment could a perceptible displacement of the bands be produced: they always occupied the same positions, no matter whether the air remained at rest, or moved with a velocity equal or even superior to 25 metres per second.
When this experiment was made, the possibility of doubling, by means of a reflecting telescope, the value of the displacement, and at the same time of completely compensating any effects due to accidental differences of temperature or pressure in the two tubes, had not suggested itself; but I employed a sure method of distinguishing between the effects due to motion, and those resulting from accidental circumstances.
This method consisted in making two successive observations, by causing the rays to traverse the apparatus in opposite directions. For this purpose the source of light was placed at the point where the central band had previously been, when the new bands formed themselves where the source of light had previously been placed.
The direction of the motion of the air in the tubes remaining the same in both cases, it is easy to see that the accidental effects would in both observations give rise to a displacement towards the same tube, whilst the displacement due solely to motion would first be on the side of one tube and then on the side of the other. In this manner a displacement due to motion would have been detected with certainty, even if it had been accompanied by an accidental displacement due, for instance, to some defect of symmetry in the diameters or orifices of the tubes, whence would result an unequal resistance to the passage of air, and consequently a difference of density.
But the symmetry given to the apparatus was so perfect that no sensible difference of density existed in the two tubes during the motion of the air. The double observation was consequently unnecessary; nevertheless it was made for the sake of greater security, and in order to be sure that the sought displacement was not accidentally compensated by a difference of density, which, though small, might be sufficient totally to mask such displacement.
Notwithstanding these precautions, however, no displacement of the bands occurred in consequence of the motion of the air; and according to an estimate I have made, a displacement equal to one-tenth of the breadth of a band would have been detected had it occurred.
The calculations with respect to this experiment are as follows. Under the hypothesis that the air, when moving, carries with it all the æther, we have
$\Delta=2L\frac{u}{v}m^{2}=0.0002413$ millim.
$m^2$ being equal to 1.000567 at the temperature 10° C.
This experiment having been made in air, the maximum illumination was due to the yellow rays; and this maximum determined the breadth of the bands. Hence the value of $\lambda$ corresponding to the ray D being taken, we have
$\frac{\Delta}{\lambda}=0.4103.$
Now so great a displacement could certainly not have escaped observation, especially since it might have been doubled by reversing the current.
The following would be the results of the calculation according to the hypothesis of Fresnel: —
$\Delta=2L\frac{u}{v}\left(m^{2}-1\right)=0.0000001367$
$\frac{\Delta}{\lambda}=0.0002325.$
Now a displacement equal to $\tfrac{1}{5000}$th of the breadth of a band could not be observed; it might, in fact, be a hundred times greater and still escape observation. Thus the apparent immobility of the bands in the experiment made with moving air may be explained by the theory of Fresnel, according to which the displacement in question, although not absolutely zero, is so small as to escape observation.
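For the air experiment the same sort of sketch (my addition, with an assumed D-line wavelength of 0.000589 mm, which the text does not state) reproduces the two predictions being compared:

```python
# Moving air: full-drag prediction versus Fresnel's prediction
L = 1.495           # effective length of each tube, metres
u = 25.0            # velocity of the air, metres per second
v = 3.0e8           # speed of light in vacuum, m/s (modern round value)
m2 = 1.000567       # m^2 for air at 10 deg C, as quoted above
lam = 0.000589e-3   # wavelength of the D ray, metres (assumed value)

full_drag = 2 * L * (u / v) * m2 / lam        # ~0.42 of a band, compare with 0.4103
fresnel = 2 * L * (u / v) * (m2 - 1) / lam    # ~0.00024 of a band, compare with 0.0002325
print(full_drag, fresnel)
```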
After having established this negative fact, and seeking, by means of the several hypotheses respecting æther, to explain it as well as the phenomenon of aberration and the experiment of Arago, it appeared to me to be necessary to admit, with Fresnel, that the motion of bodies changes the velocity with which light traverses them, but that this change of velocity varies according to the energy with which the traversed medium refracts light; so that the change is great for highly refracting bodies, but small for feebly refracting ones such as air.
I was thus led to anticipate a sensible displacement of the bands by means of the motion of water, since its index of refraction greatly exceeds that of air.
It is true that an experiment of Babinet's, mentioned in the ninth volume of the Comptes Rendus, appeared to be in contradiction to the hypothesis of a change in the velocity of light in accordance with the law of Fresnel. But on considering the conditions of that experiment, I detected the existence of a cause of compensation whose influence would render the effect due to motion insensible. This cause proceeds from the reflexion which the light suffers in the experiment in question. It may, in fact, be demonstrated that if a certain difference of path exists between two rays, that difference becomes altered when these rays suffer reflexion from a moving mirror. Now on calculating separately the two effects (of reflexion) in the experiment of Babinet, their magnitudes will be found to be equal and opposite in sign.
This explanation rendered the hypothesis of a change of velocity still more probable, and induced me to undertake the experiment with water, as being the most suitable one for deciding the question with certainty.
The success of this experiment must, I think, lead to the adoption of the hypothesis of Fresnel, or at least to that of the law discovered by him, which expresses the relation between the change of velocity and the motion of the body; for although the fact of this law being found to be true constitutes a strong argument in favour of the hypothesis of which it is a mere consequence, yet to many the conception of Fresnel will doubtless still appear both extraordinary and, in some respects, improbable; and before it can be accepted as the expression of the real state of things, additional proofs will be demanded from the physicist, as well as a thorough examination of the subject from the mathematician.
Shortly before the publication of the above interesting memoir in the Annales de Chimie, M. Fizeau presented to the Academy a second memoir, containing the results of his experiments on the effect of the motion of a transparent solid body, such as glass, upon the velocity with which it is traversed by light. The Comptes Rendus of November 14th, 1859, contains a brief extract from this memoir; and from it we gather the principal results of his experiments, and the principles upon which the same were based.
The method of experiment which was employed in the foregoing researches on air and water being no longer applicable, recourse was had to the following property of light established by the researches of Malus, Biot, and Brewster. When a ray of polarized light traverses a plate of glass, inclined towards its direction, the plane of polarization of the transmitted ray is in general inclined towards that of the incident ray. The magnitude of the rotation of the plane of polarization which is thus caused by the two refractions at the two surfaces of the plate of glass depends, first, upon the angle of incidence; secondly, upon the azimuth of the primitive plane of polarization with reference to the plane of incidence; and thirdly, upon the index of refraction of the glass forming the plate.
The angle of incidence and the azimuth of the primitive plane of polarization remaining the same, the rotation of this plane increases with the index of refraction of the glass plate. Now since this index is inversely proportional to the velocity with which waves of light are propagated through the glass, it follows that the magnitude of the rotation of the plane of polarization increases when the velocity with which light traverses the glass plate diminishes. The determination of any change in this velocity is, therefore, reduced to that of the corresponding change in the rotation of the plane of polarization.
In the first place it was deemed necessary to determine the change in the rotation which any given increase or decrease of the index of refraction could produce. By direct and comparative measurements of these indices and rotations, in the cases of flint and ordinary glass, it was found that when the index was increased by a small fraction, the rotation increased by a fraction 4½ times greater than the first.
The question next arises what change, according to the hypothesis of Fresnel, ought to be produced in the velocity of light when it traverses glass in a state of motion? The answer is based upon the following data.
The greatest velocity at our command is unquestionably that of the earth in its orbit. At noon, during the period of the solstices, for instance, the direction of this motion is horizontal and from east to west; from this it follows that when a plate of glass receives a ray of light coming from the west, it ought to be considered as really moving to meet the ray with the immense velocity of 31,000 metres per second. When, on the contrary, the incident ray comes from the east, the glass plate must be considered as moving with this velocity in the same direction as that of the propagation of the waves of light, by which latter it is in reality overtaken.
Now, according to the theory of Fresnel, the difference between the velocities of the light in these two extreme cases would be sufficient to produce a change in the rotation of the plane of polarization equal to $\tfrac{1}{1500}$th part of the magnitude of that rotation.
In order to test this result by experiment, a series of glass plates were interposed in the path of a polarized beam of parallel rays of light. The primitive plane of polarization was determined by a divided circle, and the rotation which this plane underwent by the action of the plates was measured by means of a second graduated circle fixed to a convenient analyser. The instrument could, moreover, be fixed in any direction so as to study the influence of all terrestrial motions upon the phænomena.
In order to make the two necessary observations conveniently and rapidly, two mirrors were previously fixed on the east and on the west of the instrument, and upon each, alternately, a beam of solar light was thrown by means of a heliostat, and thence reflected towards the instrument.
The greatest difficulties were encountered in the annealing of the glass plates of the series; and as perfectly homogeneous plates could not be obtained, it was necessary to employ various compensating expedients, all which will be found described in the memoir itself.
The conclusions to which M. Fizeau was led by means of more than 2000 observations are thus stated: —
1. The rotation of the plane of polarization produced by a series of inclined glass plates is always greater when the light which traverses them comes from the west than when it comes from the east; the observation being made about noon.
2. This excess of rotation is decidedly at a maximum at or about noon during the solstices. Before and after this hour it is less, and at about 4 o'clock is scarcely perceptible.
3. The numerical values deduced from the numerous series of observations present notable differences, the cause of which may be guessed, though it cannot yet be determined with certitude.
4. The influence of the earth's annual motion, as determined by calculation on the hypothesis of Fresnel, leads to values of the above excess of rotation which agree tolerably well with the majority of the values deduced from observation.
5. Theory, as well as experiment, therefore, lead us to conclude that the azimuth of the plane of polarization of a refracted ray is really influenced by the motion of the refracting medium, and that the motion of the earth in space exerts an influence of this kind upon the rotation of the plane of polarization produced by a series of inclined glass plates.
1. Translated from the Annales de Chimie et de Physique for December 1859. The original memoir was presented to the Parisian Academy of Sciences, Sept. 29, 1851; and a translation of the brief abstract published in the Comptes Rendus was given in the Phil. Mag. for December 1851, p. 568.
2. Each slit was a rectangle 3 millims. by 1.5, and its surface was equal to one-fifth that of the tube.
http://physics.stackexchange.com/questions/38984/a-positive-potential-as-x-rightarrow-infty?answertab=oldest
# a positive potential as $x \rightarrow \infty$
Let us suppose I can calculate the asymptotics of any potential $V(x)$ in one dimension, and that I manage to prove that $V(x) \ge 0$ as $x \rightarrow \infty$.
Could I conclude that if for big 'x' the potential is POSITIVE, then for big 'n' the energies will also be POSITIVE, $E_{n} \ge 0$, as $n \rightarrow \infty$?
The idea is that if we define a turning point by $E=V(a)$, then for a big energy $E$ the turning point 'a' is very big, so $V(a)$ will also be positive; but this is just in the WKB approximation, isn't it?
-
## 1 Answer
I) It seems that the question(v1) is essentially asking the following:
If a 1D potential $V$ is a non-negative function on the real line $\mathbb{R}$, except for a compact interval $[a,b]$ where it is allowed to be negative, then would the number of (bounded) negative energy eigenstates for the corresponding 1D Hamiltonian $$H(x,p)~=~\frac{p^2}{2m}+V(x)$$ always be finite?
The answer is in general No.
Counterexample:
$$V(x) ~=~ V_0 - \frac{A}{|x|^p},$$
where $V_0>0$ and $A>0$ are positive constants, and the power $p>2$. It is possible to prove that the spectrum is unbounded from below, with infinitely many negative energy eigenstates, in a way very similar to e.g. the methods used in this Phys.SE answer.
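A quick numerical illustration of this counterexample (my sketch, not from the answer; all parameter values are hypothetical): discretize $H=-\frac{d^2}{dx^2}+V(x)$ on a grid that avoids $x=0$ and count the negative eigenvalues. As the grid is refined and the singularity is resolved, the count keeps growing, consistent with a spectrum unbounded from below.

```python
# Count negative eigenvalues of the discretized 1D Hamiltonian for
# V(x) = V0 - A/|x|^p with p > 2 (units with hbar = 2m = 1).
import numpy as np

V0, A, p, X = 1.0, 1.0, 3.0, 10.0   # hypothetical parameters; box is [-X, X]

def negative_levels(n):
    h = 2 * X / n
    x = -X + (np.arange(n) + 0.5) * h           # grid offset by h/2, so x = 0 is never hit
    V = V0 - A / np.abs(x) ** p
    H = (np.diag(2.0 / h**2 + V)                # standard 3-point finite differences
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return int(np.count_nonzero(np.linalg.eigvalsh(H) < 0))

for n in (200, 400, 800, 1600):
    print(n, negative_levels(n))                # the count grows as the grid is refined
```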
II)
What if one additionally assumes that the potential $V$ is bounded from below?
Then the answer is Yes, because there must exist positive constants $V_0,L>0$ such that $$V(x)~\geq~ -V_0 ~\theta(L\!-\!|x|) .$$ In other words, one can find a finite potential well with lower energy levels, but we know that the spectrum of a finite potential well satisfies the sought-for statement.
III)
What if one instead only additionally assumes that the spectrum (but not necessarily the potential $V$) is bounded from below, i.e. that the system has a stable ground state?
Then the answer is Yes, semiclassically (WKB), and I believe it is Yes non-semiclassically as well, but I don't (yet) have a rigorous proof.
-
thanks Qmechanic what would happen if the potential $V(x)=x^{2}- \frac{A}{|x|^{p}}$ for big 'x' would the energies be the ones of the harmonic oscillator ?? – Jose Javier Garcia Oct 3 '12 at 21:48
http://math.stackexchange.com/questions/107820/translating-to-english-the-logic-statements/107825
# translating to english the logic statements
I'm having difficulty translating it to English. So basically, I will type the corresponding statement in words: Universal x ((x is not 0) -> existential y (xy = 1)).
Then we need to evaluate it as true or false.
Here's my approach. I translated it to English as:
"For some x OR some y, xy = 1 is valid." and my evaluation is "true" (T).
The reason why there is an "or" is because I converted the "p->q" into simpler operations (negate p, or q). Then negating q means negating the universal x, so I made it "some x".
I hope things get sorted out if I'm wrong. Thanks, Stack Exchange.
-
## 2 Answers
It seems to me a bit unclear what "for some $x$ OR some $y$" means. Can you have $xy$ without choosing an $x$, for example?
In this case, translating the implication to conjunction does not change the quantifiers: "$\forall$x.$\exists$y.$\neg[(x\neq0)\wedge(xy\neq1)]$". So now, using De Morgan: $\forall$x.$\exists$y.$[(x=0)\vee(xy=1)]$.
The law of translation is $(p \implies q) \equiv \neg(p \wedge\neg q)$.
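Just to make the translation law concrete, here is a throwaway truth-table check in Python (obviously not needed for the argument, but it shows the two forms agree in every case):

```python
# Verify (p -> q) == not(p and not q) on all four truth assignments
from itertools import product

for p, q in product([False, True], repeat=2):
    implication = (not p) or q          # truth-functional "p implies q"
    translated = not (p and not q)
    print(p, q, implication, translated, implication == translated)
```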
However, why not try translating it directly? I think you will find it much less confusing.
-
For all x, if x is not 0, then there exists y such that xy=1.
For all x, either x is 0, or there exists y s.t. xy=1.
If I get the scope of the universal quantifier right (and I believe I do), this statement is true in all models of division rings and fields (such as the field of reals $\mathbb{R}$).
-
http://math.stackexchange.com/questions/96864/in-a-boolean-algebra-b-and-y-subseteq-b-p-is-an-upper-bound-for-y-but-not
# In a Boolean algebra B and $Y\subseteq B$, $p$ is an upper bound for $Y$ but not the supremum. Is $q<p$ for some other upper bound $q$?
I don't think that this is the case. I am reading over one of my professor's proof, and he seems to use this fact. Here is the proof: Let $B$ be a Boolean algebra, and suppose that $X$ is a dense subset of $B$ in the sense that every nonzero element of $B$ is above a nonzero element of $X$. Let $p$ be an element in $B$. The proof is to show that $p$ is the supremum of the set of all elements in $X$ that are below $p$. Let $Y$ be the set of elements in $X$ that are below $p$. It is to be shown that $p$ is the supremum of $Y$. Clearly, $p$ is an upper bound of $Y$. If $p$ is not the least upper bound of $Y$, then there must be an element $q\in B$ such that $q<p$ and $q$ is an upper bound for $Y$ ...etc.
I do not see how this last sentence follows. I do see that if $p$ is not the least upper bound of $Y$, then there is some upper bound $q$ of $Y$ such that $p$ is NOT less than or equal to $q$. But, since we have only a partial order, and our algebra is not necessarily complete, I do not see how we can show anything else. So, is my professor's proof wrong, or am I just missing something fundamental?
-
## 2 Answers
The set of upper bounds of $Y$ is closed under intersection (meet), so if $q$ is an upper bound with $p \not\leq q$, then $p \wedge q$ is an upper bound that is strictly less than $p$.
-
Let $p$ and $q$ be upper bounds of $Y$. Then $p\wedge q$ is an upper bound of $Y$ and $p\wedge q\leq p$ and $p\wedge q\leq q$. Now if for all upper bounds $q$ of $Y$, $p\leq p\wedge q\leq q$, $p$ must be the least upper bound. Otherwise $p\wedge q<p$ for some $q$ and $p\wedge q$ is a strictly lower upper bound of $Y$.
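For a concrete sanity check of the step the question is about, one can brute-force a small example, say the Boolean algebra of subsets of $\{0,1,2\}$ with meet given by intersection (a Python sketch, not part of the proof):

```python
# In the Boolean algebra of subsets of {0,1,2}, the meet (intersection) of any
# two upper bounds of a set Y is again an upper bound of Y.
from itertools import chain, combinations

ground = {0, 1, 2}
B = [frozenset(s) for s in chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))]

def upper_bounds(Y):
    return [p for p in B if all(y <= p for y in Y)]   # y <= p means y is a subset of p

for Y in combinations(B, 2):                          # all 2-element subsets of B
    ubs = upper_bounds(Y)
    assert all((p & q) in ubs for p in ubs for q in ubs)
print("the meet of two upper bounds is always an upper bound")
```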
-
http://advogato.org/person/vicious/diary.html?start=319
# Older blog entries for vicious (starting at number 319)
Visualizing complex singularities
I needed a way to visualize which t get hit for a polynomial such as $t^2+zt+z=0$ when z ranges in a simple set such as a square or a circle. That is, really this is a generically two-valued function above the z plane. Of course we can’t just graph it since we don’t have 4 real dimensions (I want t and z to of course be complex). For each complex z, there are generically two complex t above it.
So instead of looking for existing solutions (boring, surely there is a much more refined tool out there) I decided it is the perfect time to learn a bit of Python and check out how it does math. Surprisingly well it turns out. Look at the code yourself. You will need numpy, cairo, and pygobject. I think except for numpy everything was installed on fedora. To change the polynomial or drawing parameters you need to change the code. It’s not really documented, but it should not be too hard to find where to change things. It’s less than 150 lines long, and you should take into consideration that I’ve never before written a line of python code, so there might be some things which are ugly. I did have the advantage of knowing GTK, though I never used Cairo before and I only vaguely knew how it works. It’s probably an hour or two’s worth of coding; the rest of yesterday afternoon was spent on playing around with different polynomials.
What it does is randomly pick z points in a rectangle, by default with real and imaginary parts going from -1 to 1. Each z point has a certain color assigned. On the left hand side of the plot you can see the points picked along with their colors. Then it solves the polynomial and plots the two (or more, if the polynomial is of higher degree) solutions on the right with those colors. It uses the alpha channel on the right so that you get an idea of how often a certain point is picked. Anyway, here is the resulting plot for the polynomial given above:
I am glad to report (or not glad, depending on your point of view) that using the code I did find a counterexample to a Lemma I was trying to prove. In fact the counterexample is essentially the polynomial above. That is, I was thinking you’d probably have to have hit every t inside the “outline” of the image if all the roots were 0 at zero. It turns out this is not true. In fact there exist polynomials where t points arbitrarily close to zero are not hit even if the outline is pretty big (actually the hypotheses in the lemma were more complicated, but no point in stating them since it’s not true). For example, $t^2+zt+\frac{z}{n}=0$ doesn’t hit a whole neighbourhood of the point $t=-\frac{1}{n}$. Below is the plot for $n=5$. Note that as n goes to infinity the singularity gets close to $t(t+z) = 0$ which is the union of two complex lines.
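For anyone who wants to poke at the same example without the GTK/cairo machinery, here is a bare-bones numpy version (my quick sketch, not the actual program): sample z in the square, solve the quadratic, and measure how close the roots get to $t=-\frac{1}{n}$.

```python
# Roots of t^2 + z t + z/n = 0 for random z in [-1,1]^2 stay away from t = -1/n
import numpy as np

n = 5
rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, 20000) + 1j * rng.uniform(-1, 1, 20000)

disc = np.sqrt(z**2 - 4 * z / n + 0j)            # quadratic formula, both branches
roots = np.concatenate(((-z + disc) / 2, (-z - disc) / 2))

print(np.min(np.abs(roots + 1.0 / n)))           # stays well away from 0: a whole
                                                 # neighbourhood of -1/n is never hit
```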
By the way, be prepared the program eats up quite a bit of ram, it’s very inefficient in what it does, so don’t run it on a very old machine. It will stop plotting points after a while so that it doesn’t bring your machine to its knees if you happen to forget to hit “Stop”. Also it does points in large “bursts” instead of one by one.
Update: I realized, after writing above that I had never written a line of Python code, that I actually had written a line of Python code before. In my evince/vim/synctex setup I did fiddle with some python code that I stole from gedit, but I didn’t really write any new code there; rather, I just whacked some old code I did not totally understand with a hammer till it fit in the hole that I needed (a round peg will go into a square hole if hit hard enough).
Syndicated 2012-05-30 17:16:11 from The Spectre of Math
Return to linear search
So … apparently searching an unordered list without any structure whatsoever is supposed to be better than having structure. At least that’s the new GNOME shell design that removes categories, removes any ordering and places icons in pages. The arguments are that it’s hard to categorize things and people use spatial memory to find where things are.
The spatial memory was here before with nautilus. It didn’t work out so great. No, people don’t have spatial memory. For example, for me: I use a small number of applications often, and I put their launchers somewhere easy to reach. The rest of the applications I use rarely if ever. No, I do not remember where they are; I do not even remember what they are named. E.g. I don’t remember what the ftp client is called, but I am not a total moron and I correctly guess to look for it in the “Internet” menu, which is manageable. Given I’ve used ftp probably once in a year, I do not remember where it is. Another example is when Maia (6 year old) needs a game to play. I never play games, but I have a few installed for these occasions. Do I want to look through an unordered list of 50-100 icons? Hell no. I want to click on “Games” and pick one. 95% or so of the applications I have installed I use rarely. I will not “remember” where they are. I don’t want to spend hours trying to sort or organize the list of icons. Isn’t that what the computer can do for me? The vast majority of people (non-geeks) never change their default config; they use it as it came. So they will not organize it unless the computer organizes it for them. I have an android tablet, and this paged interface with icons you have to somehow organize yourself is totally annoying. It is one of the reasons why I find the tablet unusable (I don’t think I’ve turned it on for a few months now). That interface might work well when you have 10 apps, but it fails miserably when you have 100.
If I could remember that games are on page 4 (after presumably I’ve made a lot of unneeded effort to put them there) I can remember they are in the “Games” category. Actually there I don’t have to memorize it. Why don’t we just number all the buttons in an application since the user could remember what button number 4 that’s right next to button number 3 on window number 5 does. I mean, the user can use spatial memory right?
Now as for “that’s why there is search” … yeah but that only works when you know what you are searching for. I usually know what I am searching for once I found it. It’s this idea that google is the best interface for everything. Google is useful for the web because there are waaaaay too many pages to categorize. That’s not a problem for applications. Search is a compromise. It is a way to find things when there are too many to organize.
The argument “some apps don’t fit into one category neatly” also fails. The whole idea of the vfolder menus was that you could have arbitrary queries for submenus. You can have an app appear in every category where it makes sense. Now just because people making up the menus didn’t get it just right doesn’t make it a bad idea. Also now this leads to a lot of apps without any categories. The problem I think is with the original terminology. When I was designing this system I used “Keywords” instead of “Categories”. But KDE already had Keywords, so we used Categories, but you should think of them as Keywords on which to query where the icon appears. It describes the application, it doesn’t hardcode where it appears. Unfortunately, there seems to be a lack of understanding of this concept which always led to miscategorization. For example someone changed the original design to say some things were some sort of “core categories” or whatnot and that only one should appear on an icon and that there will be a menu with that name. That defeats the purpose. It’s like beating out the front glass of your car and then complaining about the wind.
Finally, what if I lend my computer to someone to do something quickly. No I am a normal person, so I don’t create a new account. And even if I do create a new account, the default sorting of apps is unlikely to be helpful. If someone just wants to quickly do something that doesn’t involve the icons on the dash, they’re out of luck if I have lots of apps installed. Plus at work I will have a different UI, on my laptop I have a different UI, and any other computer I use will have a different UI. I can’t customize everyone of them just to use them.
As it is, if I had a friend use my computer with gnome-shell they were lost. If it’s made even less usable … thank god for XFCE, though I worry that these moves towards iphonization of the UI will lead to even worse categorization. There are already many .desktop’s with badly filled out Categories field, so there will be less incentive to do it correctly.
Syndicated 2012-05-12 17:02:31 from The Spectre of Math
Determinants
I just feel like ranting about determinant notation. I always get in this mood when preparing a lecture on determinants and I look through various books for ideas on better presentation and the somewhat standard notation makes my skin crawl. Many people think it is a good idea to use
$\left\lvert \begin{matrix} a & b \\ c & d \end{matrix} \right\rvert$
instead of the sane, and hardly any more verbose
$\det \left[ \begin{matrix} a & b \\ c & d \end{matrix} \right]$ or $\det \left( \left[ \begin{matrix} a & b \\ c & d \end{matrix} \right] \right)$.
Now what’s the problem with the first one.
1) Unless you look carefully you might mistake the vertical lines for brackets and simply see a matrix, not its determinant.
2) vertical lines look like something positive while the determinant is negative.
3) What about 1 by 1 matrices? Is $|a|$ the determinant of $[a]$, or is it the absolute value of $a$?
4) What if you want the absolute value of the determinant (something commonly done). Then if you’d write
$\left\lvert\left\lvert \begin{matrix} a & b \\ c & d \end{matrix} \right\rvert\right\rvert$
that looks more like the operator norm of the matrix rather than absolute value of its determinant. So in this case, even those calculus or linear algebra books that use the vertical lines will write:
$\left\lvert \det \left( \left[ \begin{matrix} a & b \\ c & d \end{matrix} \right] \right) \right\rvert$
So now the student might be confused because they don’t expect to see “det” used for determinant (consistency in notation is out the window).
So … if you are teaching linear algebra or writing a book on linear algebra, do the right thing: Don’t use vertical lines.
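For the record, the sane notation is no harder to type; assuming amsmath is loaded, something like this does it:

```latex
% determinant with \det and square brackets, plus absolute value of a determinant
\[
  \det \begin{bmatrix} a & b \\ c & d \end{bmatrix},
  \qquad
  \left\lvert \det \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right\rvert .
\]
```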
Syndicated 2012-05-09 20:51:55 from The Spectre of Math
GNOME UI Fail
So, another GNOME UI fail. Marketa has a new computer: using compositing leads to crashes, so she is using fallback GNOME (I am thinking I should switch her to XFCE as well). But this is really not a problem of the fallback.
Anyway, the UI fail I am talking about is “adding a printer”. Something which she figured out how to do previously. Not with the new UI for the printing. The thing is, the window is almost empty and it is not at all clear what to press to add a printer. So she hasn’t figured it out and I had to help out. I figured out three things
1) The “unlock” thing is totally unintuitive. She did not think of pressing it. She doesn’t want to unlock anything, she wants to add a printer. With it, some parts of the UI are greyed out, but it’s not clear what should happen.
2) There is just a “+” in a lower corner that you have to press. She did not figure out that’s what you have to press to add a printer. A button with “Add printer” would have been a million times better.
3) Not even I figured out how to set default options for the printer such as duplex, resolution, etc… Pressing “Options” is something about forbidden users or whatnot, which is a totally useless option on a laptop.
If a PhD who has used computers for years can’t figure out how to do something like this, there is a problem with the UI.
This is a symptom all over the new GNOME system settings. It’s very hard to set something up if it didn’t set itself up automatically. There’s also a lot of guesswork involved now. The UI may be slightly prettier, but it is a step backwards usage-wise.
Here’s a solution:
1) Get rid of the lock thing, go back to the model that if you do something that requires authentication, ask for authentication. Why should there be extra UI that only confuses the user.
2) Change the “+” and “-” buttons to have the actual text. “Add printer” “Remove printer”.
3) “Add printer” should be very prominent in the UI. I bet 90% of the time when a normal user enters that dialog, they want to add a printer.
4) Put options where they can be accessed. Surely the options are accessible somewhere, but I didn’t find it.
…
Maybe I should file a bug that will get ignored …
Syndicated 2012-04-29 15:21:08 from The Spectre of Math
CS costs too much
Apparently computer science is not too interesting and costs too much: \$1.7 mil at the University of Florida. So obviously we cut it, so that the athletic department (costing \$99 mil) can get an extra \$2 mil a year. It’s obvious where our priorities are as a society. Even if nothing got cut, 1.7 vs 99 is pretty bad.
Syndicated 2012-04-23 13:35:31 from The Spectre of Math
XFCE 1, GNOME Shell 0
After a year of using GNOME-shell, I finally got fed up with it. GNOME shell is unfortunately really annoying to use. There are so many decisions it tries to make that it gets some of them wrong. New window placement, the whole status thing in the corner getting triggered when I don’t want it to, the overview getting triggered all the time by mistake, as well as, for example, custom launcher setup. When I run my script for editing latex it never shows evince and I have to focus it by alt-tab “by hand.” The whole Alt-Tab behaviour is totally nuts. I also really REALLY hate the fact that dialogs are now “attached” to their parents. I often need to look at the original window because I just forgot what I was going to type in, such as “how many pages did the document have again and what page am I on now” when printing; this happens really, really often for me, so gnome shell drives me up the wall. There are just so many little things like that that overall make it a total pain. Some are solved through extensions or a change in behaviour, but I use several computers, so learning different behaviour just for my laptop is annoying.
Consistency be damned is the new motto now. From those new and cool interfaces, they are all quite different, Unity, Cinnamon, GNOME shell, (I haven’t tried KDE, I guess I won’t be able to go there out of GNOME loyalty, which was the only reason why I kept using GNOME shell for so long). Apparently rounded corners are more important than working correctly.
So at first I was happy with GNOME shell. Mostly because it seems to be aimed (despite what anyone says) at people who use the command line. People who mouse around will find GNOME shell annoying. For example, my wife will not be searching for apps using the keyboard to launch them. There is also the fact that it’s impossible to customize GNOME nowadays to a specific purpose easily (using dconf-editor, which has a totally broken UI, is really not an answer; I wasted lots of time trying to get some things to work). Either use GNOME shell for what it’s specifically designed for, or use something else. So flexibility is also out the window.
GNOME shell seems to also think that your mousing is very precise, which it never was for me. I commonly press the wrong button, or the mouse will go somewhere it shouldn’t and the interface punishes you for it. See above about entering the overview by mistake (whenever I wanted to hit a menu or the back button or some such).
I tried LXDE, but it’s buggy as hell (at least in fedora). The window list seems to jump around, launchers don’t always work, the battery status doesn’t work, and workspace switcher is totally broken. OK, so no go there. I tried Cinnamon for a few days, but it’s bad in many of the ways that GNOME shell is. Unity is even worse.
I had some trouble with XFCE in the past (on ubuntu that was upgraded a few times, so it might not have been fair to xfce). Anyway, I installed it on fedora, and quickly set it up, and … it works. It’s not perfect, but I don’t need it to be perfect. I want it to just work, and so far it does. It gets out of my way, unlike GNOME shell which kept trying to get in my way. Plus it’s fast.
So kudos to XFCE. I think I’ll stick with it.
Syndicated 2012-04-15 18:50:49 from The Spectre of Math
Priorities
Two things I saw recently 1) NASA budget for climate research is 1 billion (for all those satellites and all that), 2) Facebook buys instagram for 1 billion.
Now we can see where our priorities (as a society) lie. What I don’t get is, that instagram has software that a reasonably good programmer could have done in a few weekends of binge hacking. It does nothing really new. You could even take fairly off the shelf things. Perhaps the servers and the online setup might be costlier, but still, nothing all that strange. To think that this is worth to us as much as figuring out where the next hurricane will hit, or when will the ice caps melt is “interesting”.
Though it is not totally out of sync with what else is happening. When the entire UC system, which is responsible for several Nobel prizes, innumerable new cures for diseases, and leaps in our understanding of the world, not to mention educating a huge number of students, has a budget hole the size of one CEO’s bonus, and that hole is a huge hit for the university, something is off in priorities. Actually there is a very good likelihood that this CEO will die of some cancer that wasn’t cured because we don’t fund science enough.
Syndicated 2012-04-13 18:26:47 from The Spectre of Math
Devil’s in the details
It seems that not only did some democrats vote in the Michigan republican primary, satan himself also voted. Check the primary results in Mackinac county (e.g. on http://elections.msnbc.msn.com/ns/politics/2012/Michigan/Republican/primary at least this was the data on wednesday)
The final result in that county was very close. It was by one vote for Romney, but you should check how many votes Santorum got:
Romney: 667 (41%)
Santorum: 666 (41%)
Syndicated 2012-02-29 20:40:19 from The Spectre of Math
Exponential growth (and CEO pay)
So CEO salary has increased by approximately 9.7% adjusted for inflation every year between 1990-2005 [1] (that is approx 300% increase over that time, so 4 times what they had in 1990). Anyway, that has a doubling time of $\frac{\log 2}{\log 1.097} \approx 7.5$ years. Now median CEO (among the top 200) made approximately 10 million a year in compensation in 2010 [2]. In 2009 there were about 8.3 trillion dollars in existence [3]. Anyway, approximately a CEO makes a 1 millionth of the money in the world, or in other words, if we had a million CEOs we’d exhaust our money supply. It takes about $\log_2 1000000 \approx 20$ doublings to get a million. Hence in $20\times 7.5 = 150$ years one CEO will make all the money in the world. And this is all inflation adjusted.
But we don’t have to go so far to get into trouble. Now we did talk about the top 200, so when would the top 200 make all the money in the world. Well that requires only $\log_2 \frac{1000000}{200} \approx 12.3$ years so $12.3 \times 7.5 \approx 92$ years. OK, so in less than 100 years, the top 200 CEOs will suck out all the money in the universe.
Anyway, the problem is the following: The companies are not rewarding an individual CEO for good performance. They are rewarding all future CEOs. The thing is, that there is no “starting salary.” A CEO that just started is (statistically) making about the same as the one who’s been around for quite a while. If you would start all CEOs at a base salary, then one particular CEOs salary could rise at 10% a year because he’d be with the company only a fixed number of years, the problem would be managable. Now to whatever extent there is anything like a “starting salary” the increase an individual CEO makes is even higher than 10% a year. Essentially the starting salary is increasing at 10% a year.
Let’s look at an even more realistic example of how quickly we get into trouble. The CEO salary can easily be even 1% of the revenue for the company [4]. In fact some small private colleges are paying 1% of their budgets to their university president, a group where a similar thing has happened. Well, now think about this doubling. If it is 1% now, it will be 2% in 7.5 years, 4% in 15 years, 8% in 22.5 years, 16% in 30 years, 32% in 37.5 years, 64% in 45 years, and we get to 100% in less than 50 years. So in less than 50 years the entire revenue would have to support the CEO. Now you say, well, but the revenue is also growing. Not so fast: the 10% pay increase is overall, and that includes companies that did badly and those that did well. One would think that the growth of revenue on average (including failed companies) is not that much more than inflation. And this is adjusted for inflation. In any case CEO pay is definitely growing a lot faster than the economy (and hence your average revenue), and hence you hit the wall sooner or later. Even if we lob off another 2% to adjust for growth, the doubling time for CEO pay is still 9 years.
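If you want to re-run the arithmetic from the last few paragraphs, here it is as a tiny Python snippet (same assumptions as in the text, nothing more):

```python
# Doubling-time arithmetic for CEO pay growing at 9.7% per year (inflation adjusted)
from math import log

doubling = log(2) / log(1.097)
print(doubling)                            # ~7.5 years per doubling
print(log(10**6, 2) * doubling)            # ~150 years: one CEO vs. all the money
print(log(10**6 / 200, 2) * doubling)      # ~92 years: the top 200 CEOs
print(log(100, 2) * doubling)              # ~50 years: pay goes from 1% to 100% of revenue
```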
Of course a problem would appear a lot earlier than in 50 years. So it’s not only the “rich get richer” and “things are not fair” argument. This state of affairs is actually unsustainable even in relatively short period of time (within our lifetimes). I think people don’t understand that exponential growth is really really fast. That’s why pyramid schemes never work. It’s why ponzi schemes usually fail far quicker than the perpetrator hoped. 10% increase a year does not seem like much (just like 10% return on investment doesn’t seem like that terribly much).
[1] http://www2.ucsc.edu/whorulesamerica/power/wealth.html (better sources exist, but I am too lazy to search further).
[3] http://money.howstuffworks.com/how-much-money-is-in-the-world.htm
[4] http://yourperspective.org/index.php?action=read&value=16a2b99e-4847-102d-bdde-00065b3be33a
Syndicated 2012-02-22 23:17:39 from The Spectre of Math
Diffy Qs notes at 20000 downloads
So the differential equations textbook just reached 20000 downloads from unique addresses. The real analysis textbook is close behind (despite being a year younger) at around 19200. The rate is growing; it started out at around 200 per week for both in the fall and is now pushing 400 a week. As an overwhelming percentage of the hits come from google, I think google might have ranked the pages higher. So if you want to help out with the project of free textbooks: link to the books on your blog, page, whatever. And press those social buttons on the page; I guess that also does it.
It’s also interesting to see how ipv6 is doing. So far, 82 ipv6 addresses looked at the real analysis book and 43 the diffyqs book. As ipv6 was active for about half a year on the server, that is still a very tiny percentage. There were about 6-7 thousand ipv4 addresses looking at the diffyqs book during that time frame and about 8-9 thousand for the real analysis book. But at least someone is using ipv6 (if I could get an internet provider that offered ipv6, I’d use them, but I didn’t find one in Madison).
Syndicated 2012-02-06 00:31:34 from The Spectre of Math
http://mathoverflow.net/questions/82376?sort=newest
|
Twisted de-Rham cohomology and Eilenberg-Mac Lane spaces
One can define twisted cohomology theories via bundles of classifying spaces. In particular, given a cohomology theory $h^{*}$ and a corresponding $\Omega$-spectrum $E_{n}, \varepsilon_{n}$, we can consider on a space $X$ a bundle with fiber $E_{n}$, and define a twisted version of $h^{n}$ as the set of homotopy classes of sections of such a bundle. If the bundle is trivial, we recover the ordinary cohomology theory.
Let us consider a manifold $X$ and its de-Rham cohomology. I suppose there are no problems in defining the Eilenberg-MacLane spaces $K(\mathbb{R}, n)$, even though $\mathbb{R}$ is not countable. For a fixed form $H$ of odd degree, we can define the (even and odd) twisted de-Rham cohomology groups via the twisted coboundary $d + H\wedge$, where $d$ is the de-Rham differential.
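For readers meeting the twisted coboundary for the first time, here is a quick check (under the standard convention that $H$ is closed, which the construction implicitly requires) that $d+H\wedge$ squares to zero on forms of mixed degree: for any form $\omega$,

$$(d+H\wedge)^2\,\omega \;=\; d(H\wedge\omega) + H\wedge d\omega + H\wedge H\wedge\omega \;=\; dH\wedge\omega,$$

since $d(H\wedge\omega)=dH\wedge\omega - H\wedge d\omega$ for $H$ of odd degree and $H\wedge H=0$; this vanishes precisely when $dH=0$.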
Is there a way to relate the two previous approaches? In particular, given an odd-form $H$, is there a suitable bundle of real Eilenberg-MacLane spaces, whose homotopy classes of sections correspond to the twisted cohomology classes defined via $d + H\wedge$?
-
I believe the answer to this question (and probably far more than you ever wanted to know) is contained in the paper "Periodic twisted cohomology and T-duality" by Bunke, Schick and Spitzweck, in Asterisque 337 (2011) - arxiv.org/abs/0805.1459 – Jeffrey Giansiracusa Dec 1 2011 at 16:13
In general Brown's representability theorem expresses which cohomology-esque theories can be represented by Eilenberg Mac-Lane-esque spaces. The answer, roughly, is "all of them". Have you looked into that? – Will Sawin Dec 1 2011 at 21:29
2 Answers
You cannot compare both cohomologies unless $H$ has degree $1$, because otherwise the cohomology of $d+H\wedge$ is not $\mathbb{Z}$-graded.
If $H$ has degree $1$ then the cohomology of $d+H\wedge$ is the cohomology of $X$ with local coefficients corresponding to the flat line bundle with $1$-form $H$.
The only twisted coefficients associated to singular cohomology with real coefficients are the usual local coefficients. This follows from the fact that, for $E_n=K(\mathbb{R},n)$, the bundles over $X$ with fiber $E_n$ are classified by $B(\operatorname{Aut}^h(E_n))=K(\operatorname{Aut}\mathbb{R},1)$. Here $\operatorname{Aut}^h$ is the topological group (or $A$-infinity space) of self-homotopy equivalences, and $\operatorname{Aut}$ is just the automorphism group. Therefore, bundles over $X$ with fiber $E_n$ are classified by homomorphisms $\pi_1(X)\rightarrow \operatorname{Aut}(\mathbb{R})$, which correspond to local systems.
-
Thank you for your answer. If $H$ has not degree 1, of course the theory is not $\mathbb{Z}$-graded, but I thought it was possible to use bundles of cartesian products of Eilenberg-MacLane spaces (of odd or even degrees) in some way. Your answer seems to exclude this. – Fabio Dec 1 2011 at 16:55
Fabio, you are on the right track. Fernando is only talking about Z-graded cohomology. But if you instead ask about 2-periodic cohomology (Z/2-graded) then I believe you can have more interesting twists. – Jeffrey Giansiracusa Dec 1 2011 at 21:33
Sure, what I meant is that, if $H$ doesn't have degree $1$, then the associated twisted cohomology (which is very interesting and studied in the literature) cannot correspond to any twisted generalized cohomology in the sense Fabio first defined, as the former is not $\mathbb{Z}$-graded, but the later is. – Fernando Muro Dec 1 2011 at 22:26
That's true, of course a $\mathbb{Z}$-graded theory cannot be equal to a $\mathbb{Z}_{2}$-graded one, but the doubt I have is the following. Let us consider, for example, the even cohomology: in order to twist it, can we consider a bundle whose fiber is $K(\mathbb{R}, 0) \times K(\mathbb{R}, 2) \times \cdots$? In this way we must consider $Aut(K(\mathbb{R}, 0) \times K(\mathbb{R}, 2) \times \cdots)$, which maybe does not split as the direct sum of the single automorphism groups in each degree. Can we obtain in this way the twisted cohomology as defined via $d + H \wedge$? – Fabio Dec 2 2011 at 0:00
That's true, it doesn't split because real Eilenberg-MacLane spaces do have real cohomology, and it's easy enough (free on one generator at the indicated dimension) so that the homotopy automorphism group can be computed. There's a challenge here! – Fernando Muro Dec 2 2011 at 8:37
You should turn your $d+H$ into a flat superconnection of degree one. Here is the example which works for K-theory. You consider the $\mathbb{Z}$-graded complex $\Omega(M)[b,b^{-1}]$ with $b$ of degree $-2$ and define the superconnection by $d+bH$ for the closed $3$-form $H$. There is an equivalence between the $\infty$-categories of such superconnections and bundles of chain complexes (representations of the singular complex of your underlying manifold), see Block-Smith. If you then apply the Eilenberg-MacLane equivalence between the categories of chain complexes and $H\mathbb{Z}$-modules, then you get a bundle of $H\mathbb{Z}$-modules. This (or the bundle of its $\infty$-loop spaces) is what you are looking for. Actually, all these steps are equivalences, so you can go backwards.
-
http://terrytao.wordpress.com/2009/05/
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Monthly Archive
You are currently browsing the monthly archive for May 2009.
## Google Wave
29 May, 2009 in non-technical, opinion | by Terence Tao | 25 comments
As readers of this blog are no doubt aware, I (in conjunction with Tim Gowers and many others) have been working collaboratively on a mathematical project. To do this, we have been jury-rigging together a wide variety of online tools, including at least two blogs, a wiki, some online spreadsheets, and good old-fashioned email, together with offline tools such as Maple, LaTeX, C, and other programming languages and packages. (To a lesser extent, I also rely on this sort of mish-mash of semi-compatible online and offline software packages in my more normal mathematical collaborations, though polymath1 has been particularly chaotic in this regard.)
While this has been working reasonably well so far, the mix of all the various tools has been somewhat clunky, to put it charitably, and it would be good to have a more integrated online framework to do all of these things seamlessly; currently there seem to be software that achieves various subsets of what one would need for this, but not all. (This point has also recently been made at the Secret Blogging Seminar.)
Yesterday, though, Google Australia unveiled a new collaborative software platform called “Google Wave” which incorporates many of these features already, and looks flexible enough to incorporate them all eventually. (Full disclosure: my brother is one of the software engineers for this project.) It’s nowhere near ready for release yet – it’s still in the development phase – but with the right type of support for things like LaTeX, this could be an extremely useful platform for mathematical collaboration (including the more traditional type of collaboration with just a handful of authors).
There is a demo for the product below. It’s 80 minutes long, and aimed more at software developers than at end users, but I found it quite interesting, and worth watching through to the end:
[Update, May 30: Apparently a LaTeX renderer is already being developed as an API extension to Google Wave; here is a very preliminary screenshot. Also, a shorter explanation of what Google Wave is and does can be found here. ]
[Update, Jun 7: Another review can be found here.]
## Reflections on compressed sensing
25 May, 2009 in math.IT, paper | Tags: compressed sensing, David Donoho, Emmanuel Candes | by Terence Tao | 9 comments
[This post should have appeared several months ago, but I didn't have a link to the newsletter at the time, and I subsequently forgot about it until now. -T.]
Last year, Emmanuel Candès and I were two of the recipients of the 2008 IEEE Information Theory Society Paper Award, for our paper “Near-optimal signal recovery from random projections: universal encoding strategies?” published in the IEEE Transactions on Information Theory. (The other recipient is David Donoho, for the closely related paper “Compressed sensing” in the same journal.) These papers helped initiate the modern subject of compressed sensing, which I have talked about earlier on this blog, although of course they also built upon a number of important precursor results in signal recovery, high-dimensional geometry, Fourier analysis, linear programming, and probability. As part of our response to this award, Emmanuel and I wrote a short piece commenting on these developments, entitled “Reflections on compressed sensing”, which appears in the Dec 2008 issue of the IEEE Information Theory newsletter. In it we place our results in the context of these precursor results, and also mention some of the many active directions (theoretical, numerical, and applied) that compressed sensing is now developing in.
## DHJ: Writing the second paper
22 May, 2009 in math.HO, polymath | Tags: polymath1 | by Terence Tao | 110 comments
Now that the quarter is nearing an end, I’m returning to the half of the polymath1 project hosted here, which focussed on computing density Hales-Jewett numbers and related quantities. The purpose of this thread is to try to organise the task of actually writing up the results that we already have; as this is a metathread, I don’t think we need to number the comments as in the research threads.
To start the ball rolling, I have put up a proposed outline of the paper on the wiki. At present, everything in there is negotiable: title, abstract, introduction, and choice and ordering of sections. I suppose we could start by trying to get some consensus as to what should or should not go into this paper, how to organise it, what notational conventions to use, whether the paper is too big or too small, and so forth. Once there is some reasonable consensus, I will try creating some TeX files for the individual sections (much as is already being done with the first polymath1 paper) and get different contributors working on different sections (presumably we will be able to coordinate all this through this thread). This, like everything else in the polymath1 project, will be an experiment, with the rules made up as we go along; presumably once we get started it will become clearer what kind of collaborative writing frameworks work well, and which ones do not.
## 245C, Notes 5: Hausdorff dimension (optional)
19 May, 2009 in 245C - Real analysis, math.CA, math.MG | Tags: Frostman measure, geometric measure theory, Hausdorff dimension, Minkowski dimension | by Terence Tao | 26 comments
A fundamental characteristic of many mathematical spaces (e.g. vector spaces, metric spaces, topological spaces, etc.) is their dimension, which measures the “complexity” or “degrees of freedom” inherent in the space. There is no single notion of dimension; instead, there are a variety of different versions of this concept, with different versions being suitable for different classes of mathematical spaces. Typically, a single mathematical object may have several subtly different notions of dimension that one can place on it, which will be related to each other, and which will often agree with each other in “non-pathological” cases, but can also deviate from each other in many other situations. For instance:
• One can define the dimension of a space ${X}$ by seeing how it compares to some standard reference spaces, such as ${{\bf R}^n}$ or ${{\bf C}^n}$; one may view a space as having dimension ${n}$ if it can be (locally or globally) identified with a standard ${n}$-dimensional space. The dimension of a vector space or a manifold can be defined in this fashion.
• Another way to define dimension of a space ${X}$ is as the largest number of “independent” objects one can place inside that space; this can be used to give an alternate notion of dimension for a vector space, or of an algebraic variety, as well as the closely related notion of the transcendence degree of a field. The concept of VC dimension in machine learning also broadly falls into this category.
• One can also try to define dimension inductively, for instance declaring a space ${X}$ to be ${n}$-dimensional if it can be “separated” somehow by an ${n-1}$-dimensional object; thus an ${n}$-dimensional object will tend to have “maximal chains” of sub-objects of length ${n}$ (or ${n+1}$, depending on how one initialises the chain and how one defines length). This can give a notion of dimension for a topological space or a commutative ring.
The notions of dimension as defined above tend to necessarily take values in the natural numbers (or the cardinal numbers); there is no such space as ${{\bf R}^{\sqrt{2}}}$, for instance, nor can one talk about a basis consisting of ${\pi}$ linearly independent elements, or a chain of maximal ideals of length ${e}$. There is however a somewhat different approach to the concept of dimension which makes no distinction between integer and non-integer dimensions, and is suitable for studying “rough” sets such as fractals. The starting point is to observe that in the ${d}$-dimensional space ${{\bf R}^d}$, the volume ${V}$ of a ball of radius ${R}$ grows like ${R^d}$, thus giving the following heuristic relationship
$\displaystyle \frac{\log V}{\log R} \approx d \ \ \ \ \ (1)$
between volume, scale, and dimension. Formalising this heuristic leads to a number of useful notions of dimension for subsets of ${{\bf R}^n}$ (or more generally, for metric spaces), including (upper and lower) Minkowski dimension (also known as box-packing dimension or Minkowski-Bouligand dimension), and Hausdorff dimension.
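As a quick illustration of the heuristic (1), the following Python sketch (mine, not part of the notes) estimates the box-counting dimension of the middle-thirds Cantor set by counting boxes of side ${3^{-k}}$ that meet the set; the fitted slope should come out close to ${\log 2/\log 3 \approx 0.63}$.

```python
import itertools
import math

# Left endpoints of the 2**depth intervals making up the middle-thirds Cantor set.
depth = 12
points = [sum(d * 3.0 ** -(i + 1) for i, d in enumerate(digits))
          for digits in itertools.product((0, 2), repeat=depth)]

# Count boxes of side 3**-k that meet the set, at several scales k < depth.
data = []
for k in range(2, 9):
    delta = 3.0 ** -k
    boxes = len({math.floor(p / delta) for p in points})
    data.append((math.log(1 / delta), math.log(boxes)))

# Least-squares slope of log N(delta) against log(1/delta).
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
slope = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x, _ in data)
print(slope, math.log(2) / math.log(3))  # both roughly 0.6309
```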
[In ${K}$-theory, it is also convenient to work with “virtual” vector spaces or vector bundles, which are formal differences of such spaces and may therefore have a negative dimension; but as far as I am aware there is no connection between this notion of dimension and the metric ones given here.]
Minkowski dimension can either be defined externally (relating the external volume of ${\delta}$-neighbourhoods of a set ${E}$ to the scale ${\delta}$) or internally (relating the internal ${\delta}$-entropy of ${E}$ to the scale). Hausdorff dimension is defined internally by first introducing the ${d}$-dimensional Hausdorff measure of a set ${E}$ for any parameter ${0 \leq d < \infty}$, which generalises the familiar notions of length, area, and volume to non-integer dimensions, or to rough sets, and is of interest in its own right. Hausdorff dimension has a lengthier definition than its Minkowski counterpart, but is more robust with respect to operations such as countable unions, and is generally accepted as the “standard” notion of dimension in metric spaces. We will compare these concepts against each other later in these notes.
One use of the notion of dimension is to create finer distinctions between various types of “small” subsets of spaces such as ${{\bf R}^n}$, beyond what can be achieved by the usual Lebesgue measure (or Baire category). For instance, a point, line, and plane in ${{\bf R}^3}$ all have zero measure with respect to three-dimensional Lebesgue measure (and are nowhere dense), but of course have different dimensions (${0}$, ${1}$, and ${2}$ respectively). (The Kakeya set conjecture, discussed recently on this blog, offers another good example.) This can be used to clarify the nature of various singularities, such as that arising from non-smooth solutions to PDE; a function which is non-smooth on a set of large Hausdorff dimension can be considered less smooth than one which is non-smooth on a set of small Hausdorff dimension, even if both are smooth almost everywhere. While many properties of the singular set of such a function are worth studying (e.g. their rectifiability), understanding their dimension is often an important starting point. The interplay between these types of concepts is the subject of geometric measure theory.
## The two-ends reduction for the Kakeya maximal conjecture
15 May, 2009 in math.CA, math.CO, tricks | Tags: Kakeya conjecture, Kakeya maximal function, rescaling, two-ends reduction | by Terence Tao | 10 comments
In this post I would like to make some technical notes on a standard reduction used in the (Euclidean, maximal) Kakeya problem, known as the two ends reduction. This reduction (which takes advantage of the approximate scale-invariance of the Kakeya problem) was introduced by Wolff, and has since been used many times, both for the Kakeya problem and in other similar problems (e.g. by Jim Wright and myself to study curved Radon-like transforms). I was asked about it recently, so I thought I would describe the trick here. As an application I give a proof of the ${d=\frac{n+1}{2}}$ case of the Kakeya maximal conjecture.
## Recent progress on the Kakeya conjecture
11 May, 2009 in math.AG, math.AP, math.AT, math.CO, talk, travel | Tags: additive combinatorics, heat flow, incidence geometry, Kakeya conjecture, polynomial method | by Terence Tao | 22 comments
Below the fold is a version of my talk “Recent progress on the Kakeya conjecture” that I gave at the Fefferman conference.
## Szemeredi’s regularity lemma via the correspondence principle
8 May, 2009 in expository, math.CO, math.DS, math.PR | Tags: correspondence principle, szemeredi regularity lemma | by Terence Tao | 1 comment
In a previous post, we discussed the Szemerédi regularity lemma, and how a given graph could be regularised by partitioning the vertex set into random neighbourhoods. More precisely, we gave a proof of
Lemma 1 (Regularity lemma via random neighbourhoods) Let ${\varepsilon > 0}$. Then there exist integers ${M_1,\ldots,M_m}$ with the following property: whenever ${G = (V,E)}$ is a graph on finitely many vertices, if one selects one of the integers ${M_r}$ at random from ${M_1,\ldots,M_m}$, then selects ${M_r}$ vertices ${v_1,\ldots,v_{M_r} \in V}$ uniformly from ${V}$ at random, then the ${2^{M_r}}$ vertex cells ${V^{M_r}_1,\ldots,V^{M_r}_{2^{M_r}}}$ (some of which can be empty) generated by the vertex neighbourhoods ${A_t := \{ v \in V: (v,v_t) \in E \}}$ for ${1 \leq t \leq M_r}$, will obey the regularity property
$\displaystyle \sum_{(V_i,V_j) \hbox{ not } \varepsilon-\hbox{regular}} |V_i| |V_j| \leq \varepsilon |V|^2 \ \ \ \ \ (1)$
with probability at least ${1-O(\varepsilon)}$, where the sum is over all pairs ${1 \leq i \leq j \leq k}$ for which ${G}$ is not ${\varepsilon}$-regular between ${V_i}$ and ${V_j}$. [Recall that a pair ${(V_i,V_j)}$ is ${\varepsilon}$-regular for ${G}$ if one has
$\displaystyle |d( A, B ) - d( V_i, V_j )| \leq \varepsilon$
for any ${A \subset V_i}$ and ${B \subset V_j}$ with ${|A| \geq \varepsilon |V_i|, |B| \geq \varepsilon |V_j|}$, where ${d(A,B) := |E \cap (A \times B)|/|A| |B|}$ is the density of edges between ${A}$ and ${B}$.]
The proof was a combinatorial one, based on the standard energy increment argument.
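As a concrete (and purely illustrative) look at the ${\varepsilon}$-regularity condition recalled above, the following Python sketch builds a dense random graph, splits the vertices into two cells, and samples a few admissible subsets ${A, B}$ to watch the deviation ${|d(A,B)-d(V_1,V_2)|}$; of course the actual condition quantifies over all such subsets, not a sample.

```python
import random

def density(adj, A, B):
    """Edge density d(A, B) = (number of edges between A and B) / (|A| |B|)."""
    return sum(adj[a][b] for a in A for b in B) / (len(A) * len(B))

random.seed(0)
n, p, eps = 400, 0.3, 0.1
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        adj[i][j] = adj[j][i] = int(random.random() < p)

V1, V2 = list(range(n // 2)), list(range(n // 2, n))
base = density(adj, V1, V2)

worst = 0.0
for _ in range(200):  # subsets of the minimum allowed size |A| = eps|V1|, |B| = eps|V2|
    A = random.sample(V1, max(1, int(eps * len(V1))))
    B = random.sample(V2, max(1, int(eps * len(V2))))
    worst = max(worst, abs(density(adj, A, B) - base))

print(base, worst)  # the sampled deviation typically stays below eps for such a graph
```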
In this post I would like to discuss an alternate approach to the regularity lemma, which is an infinitary approach passing through a graph-theoretic version of the Furstenberg correspondence principle (mentioned briefly in this earlier post of mine). While this approach superficially looks quite different from the combinatorial approach, it in fact uses many of the same ingredients, most notably a reliance on random neighbourhoods to regularise the graph. This approach was introduced by myself back in 2006, and used by Austin and by Austin and myself to establish some property testing results for hypergraphs; more recently, a closely related infinitary hypergraph removal lemma developed in the 2006 paper was also used by Austin to give new proofs of the multidimensional Szemeredi theorem and of the density Hales-Jewett theorem (the latter being a spinoff of the polymath1 project).
For various technical reasons we will not be able to use the correspondence principle to recover Lemma 1 in its full strength; instead, we will establish the following slightly weaker variant.
Lemma 2 (Regularity lemma via random neighbourhoods, weak version) Let ${\varepsilon > 0}$. Then there exists an integer ${M_*}$ with the following property: whenever ${G = (V,E)}$ is a graph on finitely many vertices, there exists ${1 \leq M \leq M_*}$ such that if one selects ${M}$ vertices ${v_1,\ldots,v_{M} \in V}$ uniformly from ${V}$ at random, then the ${2^{M}}$ vertex cells ${V^{M}_1,\ldots,V^{M}_{2^{M}}}$ generated by the vertex neighbourhoods ${A_t := \{ v \in V: (v,v_t) \in E \}}$ for ${1 \leq t \leq M}$, will obey the regularity property (1) with probability at least ${1-\varepsilon}$.
Roughly speaking, Lemma 1 asserts that one can regularise a large graph ${G}$ with high probability by using ${M_r}$ random neighbourhoods, where ${M_r}$ is chosen at random from one of a number of choices ${M_1,\ldots,M_m}$; in contrast, the weaker Lemma 2 asserts that one can regularise a large graph ${G}$ with high probability by using some integer ${M}$ from ${1,\ldots,M_*}$, but the exact choice of ${M}$ depends on ${G}$, and it is not guaranteed that a randomly chosen ${M}$ will be likely to work. While Lemma 2 is strictly weaker than Lemma 1, it still implies the (weighted) Szemerédi regularity lemma (Lemma 2 from the previous post).
## At the Fefferman conference
6 May, 2009 in math.CA, math.PR, travel | Tags: Charles Fefferman, Diego Cordoba, fractional differentiation, mean, median, mode, Soonsik Kwon | by Terence Tao | 3 comments
I am currently at Princeton for the conference “The power of Analysis” honouring Charlie Fefferman‘s 60th birthday. I myself gave a talk at this conference entitled “Recent progress on the Kakeya conjecture”; I plan to post a version of this talk on this blog shortly.
But one nice thing about attending these sorts of conferences is that one can also learn some neat mathematical facts, and I wanted to show two such small gems here; neither is particularly deep, but I found both of them cute. The first one, which I learned from my former student Soonsik Kwon, is a unified way to view the mean, median, and mode of a probability distribution ${\mu}$ on the real line. If one assumes that this is a continuous distribution ${\mu = f(x)\ dx}$ for some smooth, rapidly decreasing function ${f: {\mathbb R} \rightarrow {\mathbb R}^+}$ with ${\int_{\mathbb R} f(x)\ dx = 1}$, then the mean is the value of ${x_0}$ that minimises the second moment
$\displaystyle \int_{\mathbb R} |x-x_0|^2 f(x)\ dx,$
the median is the value of ${x_0}$ that minimises the first moment
$\displaystyle \int_{\mathbb R} |x-x_0| f(x)\ dx,$
and the mode is the value of ${x_0}$ that maximises the “pseudo-negative first moment”
$\displaystyle \int_{\mathbb R} \delta(x-x_0) f(x)\ dx.$
(Note that the Dirac delta function ${\delta(x-x_0)}$ has the same scaling as ${|x-x_0|^{-1}}$, hence my terminology “pseudo-negative first moment”.)
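Here is a quick numerical check of the first two characterisations, using samples in place of the density ${f}$ (my shortcut, not part of the post): the grid point minimising the empirical second moment should land on the sample mean, and the one minimising the empirical first moment on the sample median.

```python
import random
import statistics

random.seed(1)
xs = [random.expovariate(1.0) for _ in range(5000)]  # an asymmetric test distribution

grid = [i / 200 for i in range(600)]  # candidate values of x0 in [0, 3)
second = min(grid, key=lambda x0: sum((x - x0) ** 2 for x in xs))
first = min(grid, key=lambda x0: sum(abs(x - x0) for x in xs))

print(second, statistics.mean(xs))   # both near 1.0, the mean of Exp(1)
print(first, statistics.median(xs))  # both near log 2 = 0.693..., the median of Exp(1)
```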
The other fact, which I learned from my former classmate Diego Córdoba (and used in a joint paper of Diego with Antonio Córdoba), is a pointwise inequality
$\displaystyle |\nabla|^\alpha ( f^2 )(x) \leq 2 f(x) |\nabla|^\alpha f(x)$
for the fractional differentiation operators ${|\nabla|^\alpha}$ applied to a sufficiently nice real-valued function ${f: {\mathbb R}^d \rightarrow {\mathbb R}}$ (e.g. Schwartz class will do), in any dimension ${d}$ and for any ${0 \leq \alpha \leq 1}$; this should be compared with the product rule ${\nabla (f^2 ) = 2 f \nabla f}$.
The proof is as follows. By a limiting argument we may assume that ${0 < \alpha < 1}$. In this case, there is a formula
$\displaystyle |\nabla|^\alpha f(x) = c(\alpha) \int_{{\mathbb R}^d} \frac{f(x)-f(y)}{|x-y|^{d+\alpha}}\ dy$
for some explicit constant ${c(\alpha) > 0}$ (this can be seen by computations similar to those in my recent lecture notes on distributions, or by analytically continuing such computations; see also Stein’s “Singular integrals and differentiability properties of functions”). Using this formula, one soon sees that
$\displaystyle 2 f(x) |\nabla|^\alpha f(x) - |\nabla|^\alpha ( f^2 )(x) = c(\alpha) \int_{{\mathbb R}^d} \frac{|f(x)-f(y)|^2}{|x-y|^{d+\alpha}}\ dy$
and the claim follows.
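As a sanity check on the inequality (my own, and done on the circle rather than on ${{\bf R}^d}$; the periodic version of the inequality holds as well), one can implement ${|\nabla|^\alpha}$ as the Fourier multiplier ${|k|^\alpha}$ and verify numerically that ${2 f |\nabla|^\alpha f - |\nabla|^\alpha(f^2)}$ stays nonnegative:

```python
import numpy as np

def frac_deriv(f, alpha):
    """|nabla|^alpha on the circle, implemented as the Fourier multiplier |k|^alpha."""
    k = np.fft.rfftfreq(f.size, d=1.0 / f.size)  # integer frequencies 0, 1, 2, ...
    return np.fft.irfft(np.abs(k) ** alpha * np.fft.rfft(f), n=f.size)

n = 2048
x = 2 * np.pi * np.arange(n) / n
f = np.exp(np.sin(x)) + 0.5 * np.cos(3 * x)  # a smooth real periodic test function

for alpha in (0.25, 0.5, 0.75, 1.0):
    gap = 2 * f * frac_deriv(f, alpha) - frac_deriv(f * f, alpha)
    print(alpha, gap.min())  # nonnegative, up to round-off error
```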
## The federal budget, rescaled
4 May, 2009 in non-technical | Tags: federal budget, rescaling | by Terence Tao | 44 comments
In a recent Cabinet meeting, President Obama called for a \$100 million spending cut in 90 days from the various federal departments as a sign of budget discipline. While this is nominally quite a large number, it was pointed out correctly by many people that this was in fact a negligible fraction of the total federal budget; for instance, Greg Mankiw noted that the cut was comparable to a family with an annual spending of \$100,000 and a deficit of \$34,000 deciding on a spending cut of \$3. (Of course, this is by no means the only budgetary initiative being proposed by the administration; just today, for instance, a change in the taxation law for offshore income was proposed which could potentially raise about \$210 billion over the next ten years, or about \$630 a year with the above scaling, though it is not clear yet how feasible or effective this change would be.)
I thought that this sort of rescaling (converting \$100 million to \$3) was actually a rather good way of comprehending the vast amounts of money in the federal budget: we are not so adept at distinguishing easily between \$1 million, \$1 billion, and \$1 trillion, but we are fairly good at grasping the distinction between \$0.03, \$30, and \$30,000. So I decided to rescale (selected items in) the federal budget, together with some related numbers for comparison, by this ratio 100 million:3, to put various figures in perspective.
This is certainly not an advanced application of mathematics by any means, but I still found the results to be instructive. The same rescaling puts the entire population of the US at about nine – the size of a (large) family – which is of course consistent with the goal of putting the federal budget roughly on the scale of a family budget (bearing in mind, of course, that the federal government is only about a fifth of the entire US economy, so one might perhaps view the government as being roughly analogous to the “heads of household” of this large family). The implied (horizontal) length rescaling of $\sqrt{100 \hbox{million}:3} \approx 5770$ is roughly comparable to the scaling used in Dubai’s “The World” (which is not a coincidence, if you think about it; the purpose of both rescalings is to map global scales to human scales). Perhaps readers may wish to contribute additional rescaled statistics of interest to those given below.
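The rescaling itself is of course just multiplication by 3/(100 million); for readers who want to rescale their own favourite figures before adding them, here is a two-line helper (the two example inputs are figures already quoted above):

```python
def rescale(dollars):
    """Rescale a real-world dollar figure by the ratio 100 million : 3."""
    return dollars * 3 / 100e6

print(rescale(100e6))       # the $100 million spending cut becomes $3.00
print(rescale(210e9 / 10))  # $210 billion over ten years becomes about $630 a year
```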
One caveat: the rescaling used here does create some noticeable distortions in other dimensional quantities. For example, if one rescales length by the implied factor of $\approx 5770$, but leaves time unrescaled (so that a fiscal year remains a fiscal year), then this will rescale all speeds by a factor of $\approx 5770$ also. For instance, the circumference of the Earth has been rescaled to a manageable-looking 6.9 kilometres (4.3 miles), but the speed of, say, a commercial airliner (typically about 900 km/hr, or 550 mi/hr) is rescaled also, to a mere 150 metres (or 160 yards) or so per hour, i.e. two or three meters or yards per minute – about the speed of a tortoise.
All amounts here are rounded to three significant figures (and in some cases, the precision may be worse than this). I am using here the FY2008 budget instead of the 2009 or 2010 one, as the data is more finalised; as such, the numbers here are slightly different from those of Mankiw. (For instance, the 2010 budget includes the expenditures for Iraq & Afghanistan, whereas in previous budgets these were appropriated separately.) I have tried to locate the most official and accurate statistics whenever possible, but corrections and better links are of course welcome.
FY 2008 budget:
• Total revenue: \$75,700
• Individual income taxes: \$34,400
• Social security & other payroll taxes: \$27,000
• Total spending: \$89,500
• Net mandatory spending: \$48,000
• Medicare, Medicaid, and SCHIP: \$20,500
• Social Security: \$18,400
• Net interest: \$7,470
• Net discretionary spending: \$34,000
• Department of Defense: \$14,300
• DARPA: \$89
• Global War on Terror: \$4,350
• Department of Education: \$1,680
• Department of Energy: \$729
• NASA: \$519
• Net earmarks: \$495
• NSF: \$180
• Maths & Physical Sciences: \$37.50
• Budget deficit: \$13,800
• Additional appropriations (not included in regular budget)
• Iraq & Afghanistan: \$5,640
• Spending cuts within 90 days of Apr 20, 2009: \$3
• Air force NY flyover “photo shoot”, Apr 27, 2009: \$0.01
• Additional spending cuts for FY2010, proposed May 7, 2009: \$510
• Projected annual revenue from proposed offshore tax code change: \$630
Other figures (for comparison)
• National debt held by public 2008: \$174,000
• Held by foreign/international owners 2008: \$85,900
• National debt held by government agencies, 2008: \$126,000
• Held by OASDI (aka (cumulative) “Social Security Surplus”): \$64,500
• National GDP 2008: \$427,000
• National population 2008: 9
• GDP per capita 2008: \$47,000
• Land mass: 0.27 sq km (0.1 sq mi, or 68 acres)
• World GDP 2008: \$1,680,000
• World population 2008: 204
• GDP per capita 2008 (PPP): \$10,400
• Land mass: 4.47 sq km (1.73 sq mi)
• World’s richest (non-rescaled) person: Bill Gates (net worth \$1,200, March 2009)
• 2008/2009 Bailout package (TARP): \$21,000 (maximum)
• Amount spent by Dec 2008: \$7,410
• AIG bailout from TARP: \$1,200
• AIG Federal Reserve credit line: \$4,320
• AIG bonuses in 2009 Q1: \$4.95
• GM & Chrysler loans: \$552
• 2009/2010 Stimulus package (ARRA): \$23,600
• Tax cuts (spread out over 10 years): \$8,640
• State and local fiscal relief: \$4,320
• Education: \$3,000
• “Race to the top” education fund: \$150
• Investments in scientific research: \$645
• NSF allocation: \$90 (was initially \$42)
• ARPA-E: \$12
• Pandemic flu preparedness: \$1.50 (was initially \$27, after being dropped from FY2008 and FY2009 budgets)
• Additional request after A(H1N1) (“swine flu”) outbreak, Apr 28: \$45
• Volcano monitoring: \$0.46 (erroneously reported as \$5.20)
• Salt marsh mouse preservation (aka “Pelosi’s mouse“): \$0.00 (erroneously reported as \$0.90)
• Market capitalization of NYSE
• May 2008 (peak): \$506,000
• March 2009: \$258,000
• Largest company by market cap: Exxon Mobil (approx \$10,000, Apr 2009)
• Value of US housing stock (2007): \$545,760
• Total value of outstanding mortgages (2008): \$330,000
• Total value of sub-prime mortgages outstanding (2007 est): \$39,000
• Total value of mortgage-backed securities (2008): \$267,000
• Credit default swap contracts, total notional value:
• April 2008: \$1,320,000
• Oct 2008: \$1,040,000
• Credit default swaps related to mortgages: less than \$10,000
• US trade balance (2007)
• Exports: \$49,400
• Imports: \$70,400
• Trade deficit: \$21,000
http://nrich.maths.org/5578/index?nomenu=1
'Factor-multiple Chains' printed from http://nrich.maths.org/
Here is an example of a factor-multiple chain of four numbers:
Can you see how it works? Perhaps you could make some statements about some of the numbers in the chain using the words "factor" and "multiple".
In these chains, each blue number can range from $2$ up to $100$ and must be a whole number.
You may like to experiment with this spreadsheet which allows you to enter numbers in each box. Perhaps you can make some more chains for yourself.
What are the smallest blue numbers that will make a complete chain?
What are the largest blue numbers that will make a complete chain?
What numbers cannot appear in any chain?
What is the biggest difference possible between two adjacent blue numbers?
What is the largest and the smallest possible range of a complete chain? (The range is the difference between the largest and smallest values.)
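If you would rather let a computer do some of the experimenting, here is a short brute-force sketch; it assumes the natural reading of a chain, namely four whole numbers between 2 and 100 in which each number is a proper factor of the next, and simply enumerates every complete chain so that the questions above can be explored.

```python
# All chains (a, b, c, d) with 2 <= a < b < c < d <= 100 and a | b, b | c, c | d.
chains = [(a, b, c, d)
          for a in range(2, 101)
          for b in range(2 * a, 101, a)
          for c in range(2 * b, 101, b)
          for d in range(2 * c, 101, c)]

print(len(chains), "complete chains")
print("smallest and largest starting numbers:",
      min(ch[0] for ch in chains), max(ch[0] for ch in chains))
print("largest gap between adjacent numbers:",
      max(max(y - x for x, y in zip(ch, ch[1:])) for ch in chains))
print("largest and smallest range:",
      max(ch[3] - ch[0] for ch in chains), min(ch[3] - ch[0] for ch in chains))
```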
http://math.stackexchange.com/questions/41206/irreducible-representations-of-a-semidirect-product?answertab=votes
# Irreducible representations of a semidirect product
I have two finite groups. The irreducible representations of their direct product are given by tensor products of the irreducible representations of the two groups.
Is there a way to build the irreducible representations of a semidirect product from the irreducible representations of the groups?
Any references are very welcome. I couldn't find this in Serre, so I'm guessing it isn't straightforward like the product case. So, any tips would also be great.
Just in case it is completely known and available in the literature, I am interested in $SL_2(\mathbb{F}_q)\rtimes H$, where $H$ is the Heisenberg group.
-
## 1 Answer
I am not aware of a general procedure that would work for any semi-direct product. Serre treats semi-direct products by abelian groups in Part II, Section 8.2. See also my answer to another question. In your particular case, apart from lifting the representations of $H$ from the quotient, I would also try to induce the irreducible representations of $SL_2(\mathbb{F}_q)$ to $G$ and see which ones remain irreducible or where you can split off summands that you already know about. Mackey's irreducibility criterion (see e.g. Serre, Part II, section 7.4) should be quite useful for this. Generally, taking inner products of two such induced characters is easy using Frobenius reciprocity and Mackey's formula.
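For reference, the Frobenius reciprocity mentioned at the end is the statement that, for a subgroup $H \leq G$, a character $\chi$ of $H$ and a character $\psi$ of $G$,

$$\langle \operatorname{Ind}_H^G \chi,\ \psi\rangle_G \;=\; \langle \chi,\ \operatorname{Res}^G_H \psi\rangle_H,$$

which is what makes the inner products of induced characters computable in practice.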
-
http://mathhelpforum.com/calculus/72209-can-you-how-do-you-integrate-e-2x-2-a.html
# Thread:
1. ## can you/how do you integrate e^2x^2?
Title says it all. This isn't a question I've seen; it's just something I'm curious about.
If you have e raised to the power of 2x, where the x is raised to the 2nd power, can you integrate that? If so, how?
2. You mean $\int e^{2x^2}dx$?
3. Originally Posted by Abu-Khalil
You mean $\int e^{2x^2}dx$?
I'm curious myself (new to calculus).
Could you take the natural log of both sides to bring the $2x^2$ down and then integrate?
4. Originally Posted by ryu991
Title says it all, this isnt a question I've seen just something I'm curious about.
If you have e raised to the power of 2x, where the x is raised to the 2nd power can you integrate that? If so how?
It can't be done in terms of elementary functions.
-------------------------------------------------------------------------
As an aside: If you were to find $\int_{-\infty}^{\infty}e^{-2x^2}\,dx$ or $\int_0^{\infty}e^{-2x^2}\,dx$ or $\int_{-\infty}^{0}e^{-2x^2}\,dx$, then it can be done, but by a conversion to a double polar integral. This way, it can be shown that $\int_0^{\infty}e^{-2x^2}\,dx=\frac{1}{2}\sqrt{\frac{\pi}{2}}$
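For completeness, here is the polar computation alluded to above, written out as a sketch: setting $I=\int_0^{\infty}e^{-2x^2}\,dx$,

$$I^2=\int_0^{\infty}\!\!\int_0^{\infty}e^{-2(x^2+y^2)}\,dx\,dy=\int_0^{\pi/2}\!\!\int_0^{\infty}e^{-2r^2}\,r\,dr\,d\theta=\frac{\pi}{2}\left[-\frac{e^{-2r^2}}{4}\right]_0^{\infty}=\frac{\pi}{8},$$

so $I=\sqrt{\pi/8}=\frac{1}{2}\sqrt{\frac{\pi}{2}}$, in agreement with the value quoted.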
http://physics.stackexchange.com/questions/12122/deriving-newtons-third-law-from-homogeneity-of-space/12132
# Deriving Newton's Third Law from homogeneity of Space
I am following the first volume of the Course of Theoretical Physics by Landau. So whatever I say below mainly concerns the first 2 chapters of Landau and the approach of deriving Newton's laws from the Lagrangian formalism, assuming Hamilton's principle of extremal action. Please keep this view in mind while reading and answering my queries, and kindly neglect the systems to which the action principle is not applicable:
If we use homogeneity of space in the Euler-Lagrange equations, we obtain a remarkable result, i.e. the conservation of momentum for a closed system.
Now this result, using the form of the Lagrangian for a closed system of particles, transforms into $\Sigma F = 0$. How can we conclude from this that the internal forces that particles exert on one another come in equal and opposite pairs?
Is it because for 2 particles this comes out as $F_{1} + F_{2} = 0$, and we take the forces exerted by the particles on one another to be independent of the other particles (i.e. the superposition principle) as an experimental fact?
I doubt it, as the whole of Newtonian mechanics is derivable from Lagrangian mechanics and the assumed symmetries. So, in my view, a fact like Newton's Third Law should be derivable from it without using an additional experimental fact.
I have an idea to prove it rigorously. Consider two particles $i$ and $j$. Let the force on $i$ by $j$ be $F_{ij}$ and on $j$ by $i$ be $k_{ij}F_{ij}$. Now the condition becomes $\Sigma (1+k_{ij})F_{ij}=0$, where it is understood which terms are to be included in the summation. As this must hold for any values of the $F_{ij}$, we get $k_{ij}=-1$. I don't know whether this argument, or a refinement of it, holds. I can see many questions arising in this argument, and it is not very convincing to me.
I would like to hear from you as to whether it is an experimental result that is being used or not. If not, is the method given above right or wrong? If wrong, how can we prove it?
Addendum
My method of proof uses the superposition of forces itself, so it is flawed. I have assumed that the coefficients $k_{ij}$ are constants and do not change under the influence of all the other particles, which is exactly what the superposition principle says.
As the superposition of forces can be derived from the superposition of potential energies at a point in space, and potential energy is the more fundamental quantity in Lagrangian mechanics, I restate my question as follows:
Is the principle of superposition of potential energies by different sources at a point in space derivable from the inside of Lagrangian Mechanics or is it an experimental fact used in Lagrangian Mechanics?
I now doubt that this is derivable, as the fundamental assumption about potential energy is only that it is a function of the coordinates of the particles, and this function may or may not respect superposition.
-
## 4 Answers
The derivation in Landau and Lifschitz is making some additional implicit assumptions. They assume that all forces come from pair-interactions, and that the pair forces are rotationally invariant. With these two assumptions, the potential function in the Lagrangian is
$V(x_1,\ldots,x_n) = \sum_{\langle i,j\rangle} V(|x_i - x_j|)$
And then it is easy to prove Newton's third law, because the derivative of the distance function is equal and opposite for each pair of particles.
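Spelling that last sentence out (a routine computation, added for the reader following along): writing $r=|x_i-x_j|$, the force that the pair term exerts on particle $i$ is

$F_i \;=\; -\nabla_{x_i} V(|x_i-x_j|) \;=\; -V'(r)\,\frac{x_i-x_j}{r} \;=\; -F_j,$

so the two forces are equal, opposite, and directed along the line joining the particles, which is the strong form of the third law.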
This type of derivation is reasonable from a physical point of view for macroscopic objects, but it is not mathematically ok, because it omits important examples.
### No rotational invariance, no third law
Dropping the assumption of rotational invariance, but keeping the assumption of pair-wise interaction, one gets the following counterexample in 2 dimensions, with two particles (A,B) with position vectors $(A_x,A_y)$ $(B_x,B_y)$ respectively:
$V(A_x,A_y,B_x,B_y) = f(A_x-B_x) + f(A_y - B_y)$
where f is any function other than $f(x)=x^2$. This pair potential leads to equal and opposite forces, but not collinear ones. Linear momentum and energy are conserved, but angular momentum is not, except when both particles are on the lines $y=\pm x$ relative to each other. The potential is unphysical of course, in the absence of a medium like a lattice that breaks rotational invariance.
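To see the claim explicitly (a one-line computation, not in the original answer): with $\Delta_x=A_x-B_x$ and $\Delta_y=A_y-B_y$,

$F_A=-\nabla_A V=-(f'(\Delta_x),\,f'(\Delta_y))=-F_B,$

so the forces are always equal and opposite, but they are parallel to the separation $(\Delta_x,\Delta_y)$ for all configurations only if $f'(t)$ is proportional to $t$, i.e. only for the excluded quadratic $f$.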
### Many-body direct interactions, no reflection symmetry, no third law
There is another class of counterexamples which is much more interesting, because they do not break angular momentum or center of mass conservation laws, and so they are physically possible interactions in vacuum, but they do break Newton's third law. This is the chiral three-body interaction.
Consider 3 particles A,B,C in two dimensions whose potential function is equal to the signed area of the triangle formed by the points A,B,C.
$V(A,B,C) = B_x C_y - A_x C_y -B_x A_y - C_x B_y + C_x A_y + A_x B_y$
If all 3 particles are collinear, the forces for this 3-body potential are perpendicular to the common line they lie on. The derivative of the area is maximum by moving the points away from the common line. So you obviously cannot write the force as any sum of pairwise interactions along the line of separation, equal and opposite or not. The forces and torques still add up to zero, since this potential is translationally and rotationally invariant.
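The perpendicularity is easy to check directly (again just a short computation for the reader): from the formula for $V$,

$F_A=-\nabla_A V=(C_y-B_y,\;B_x-C_x),$

which is the vector from $B$ to $C$ rotated by a right angle, hence perpendicular to the side $BC$; the forces on $B$ and $C$ are likewise perpendicular to $CA$ and $AB$. When the three points are collinear, all three sides lie along the common line, so all three forces are perpendicular to it, as claimed.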
### Many body direct interaction, spatial reflection symmetry, crappy third law
If the force on k particles is reflection invariant, it never gets out of the subspace spanned by their mutual separation. This is because if they lie in a lower dimensional subspace, the system is invariant with respect to reflections perpendicular to that subspace, so the forces must be as well.
This means that you can always cook up equal and opposite forces between the particles that add up to the total force, and pretend that these forces are physically meaningful. This allows you to salvage Newton's third law, in a way. But it gives nonsense forces.
To see that this is nonsense, consider the three-particle triangle area potential from before, but this time take the absolute value. The result is reflection invariant, but contains a discontinuity in the derivative when the particles become collinear. Near collinearity, the forces perpendicular have a finite limit. But in order to write these finite forces as a sum of equal and opposite contributions from the three-particles, you need the forces between the particles to diverge at collinearity.
### Three body interactions are natural
There is natural physics that gives such a three-body interaction. You can imagine the three bodies are connected by rigid frictionless struts that are free to expand and contract like collapsible antennas, and a very high-quality massless soap bubble is stretched between the struts. The soap bubble prefers to have less area according to its nonzero surface tension. If the dynamics of the soap bubble and the struts are fast compared to the particles, you can integrate out the soap bubble degrees of freedom and you will get just such a three-body interaction.
Then the reason the bodies snap together near collinearity with a finite transverse force is clear--- the soap bubble wants to collapse to zero area, so it pulls them in. It is then obvious that there is no sense in which they have any diverging pairwise forces, or any pairwise forces at all.
Other cases where you get three body interactions directly is when you have a nonlinear field between the three objects, and the field dynamics are fast. Consider a cubically self-interacting massive scalar field (with cubic coupling $\lambda$) sourced by classical stationary delta-function sources of strength g. The leading nonlinear contribution to the classical potential is a tree-level, classical, three-body interaction of the form
$V(x,y,z) \propto g^3 \lambda \int d^3k_1 d^3k_2 { e^{i(k_1\cdot (x-z) + k_2\cdot(y-z))} \over (k_1^2 + m^2) (k_2^2 + m^2)((k_1+k_2)^2 + m^2)}$
which heuristically goes something like ${e^{-mr_{123}}r_{123}\over r_{12}r_{23}r_{13}}$ where the r's are the side lengths of the triangle and $r_{123}$ is the perimeter (this is just a scaling estimate). For nucleons, many body potentials are significant.
### The forces from the crappy third law are not integrable
If you still insist on a Newton's third law description of three-body interactions like the soap bubble particles, and you give a pairwise force for each pair of particles which adds up to the full many-body interaction, these pairwise forces cannot be thought of as coming from a potential function. They are not integrable.
The example of the soap-bubble force makes it clear--- if A,B,C are nearly collinear with B between A and C, closer to A, you can slide B away from A towards C very very close to collinearity, and bring it back less close to collinear. The A-B force is along the line of separation, and it diverges at collinearity, so the integral of the force along this loop cannot be zero.
The force is still conservative of course, it comes from a three-body potential after all. This means that the two-body A-B force plus the two-body B-C force is integrable. It's just that the A-C two body force is not. So the separation is completely silly.
### Absence of multi-body interactions for macroscopic objects in empty space
The interactions of macroscopic objects are through contact forces, which are necessarily pairwise since all other contacts are far away, and electromagnetic and gravitational fields, which are very close to linear at these scales. The electromagnetic and gravitational forces end up being linearly additive between pairs, and the result is a potential of the form Landau and Lifschitz consider--- pairwise interactions which are individually rotationally invariant.
But for close packed atoms in a crystal, there is no reason to ignore 3-body potentials. It is certainly true that in the nucleus three-body and four-body potentials are necessary, but in both cases you are dealing with quantum systems.
So I don't think the third law is particularly fundamental. As a philosophical thing, that nothing can act without being acted upon, it's as valid as any other general principle. But as a mathematical statement of the nature of interactions between particles, it is completely dated. The fundamental things are the conservation of linear momentum, angular momentum, and center of mass, which are independent laws, derived from translation invariance, rotational invariance, and Galilean invariance respectively. The pair-wise forces acting along the separation direction are just an accident.
-
Within the framework of classical mechanics, the Newton's third law is an independent postulate.
Newton's third law in its strong form says that not only are mutual forces of action and reaction equal and opposite between two bodies at position $\vec{r}_1$ and $\vec{r}_2$, they are also collinear, i.e. parallel to $\vec{r}_2-\vec{r}_1$.
When we derive Lagrange equations from Newton's laws (see e.g. Herbert Goldstein, "Classical Mechanics", chapter 1), it may appear a bit hidden when we actually use Newton's third law.
In the derivation, we assume that forces of constraints do no virtual work$^{1}$. Consider now a rigid body. It's a fact that we rely heavily on Newton's third law in its strong form to argue that the internal forces of constraints (which hold the rigid body together) do no virtual work.
See also D'Alembert's principle and the principle of virtual work for more information.
--
$^{1}$ This does not hold for, e.g., sliding friction forces, which we therefore have to exclude.
-
Hi, actually I am referring to the approach used by Landau in his Course of Theoretical Physics, Volume 1 (Mechanics). In it, he postulates a Lagrangian function and uses symmetry and experimental facts to derive various properties of Lagrangian and Newtonian mechanics. I would request you to view the question in the above light and edit your answer as necessary. Yes, I agree that the Lagrange equations are derivable from Newton's laws, but I am talking about the approach of deriving Newton's laws from Lagrangian mechanics. Mathematically, both are equivalent formulations. – Lakshya Bhardwaj Jul 10 '11 at 15:40
@Lakshya Bhardwaj: Landau and Lifshitz, "Mechanics", starts on page 2 by assuming an action principle. However there are systems that have no action principle, e.g., systems with non-holonomic constraints. For this and other reasons, Newton's laws are more fundamental (within the framework of classical non-relativistic mechanics). – Qmechanic♦ Jul 10 '11 at 20:37
Thank you, I didn't know about the limitations of action principle. So, let's frame the question like this: Given that we talk only about systems which follow action principle and we start using Lagrange formalism to derive Newton's Laws.... read the question as rest. I would be very thankful to you if you would see the question in a mathematical framework which is not general and try to answer it in that system only. I am not interested in general answers as I have just started in Classical Mechanics. Please review the last paragraph of my question. Thank You. – Lakshya Bhardwaj Jul 11 '11 at 11:04
@Lakshya Bhardwaj: Well, if we start with a Lagrangian $L$ that (among its, in general, many terms) contains a potential term of the form $V(|\vec{r}_2-\vec{r}_1|)$, where $\vec{r}_1$ and $\vec{r}_2$ are the positions of two point particles, then it is straightforward to show that the corresponding forces between the two particles obey the strong Newton's third law. – Qmechanic♦ Jul 11 '11 at 16:45
Yes, it is Sir. But, I want to extend the case. My question is that if we take many particles, i.e. more than two. Can we definitely conclude theoretically that the internal forces would be equal and opposite between a pair of particles? Although my question is only about weak form, I would be interested in knowing the proof for strong form also. What you have indicated in your comment is for two particles. But for multi-particles we have to assume principle of superposition to establish Newton's third law in weak form. I want to ask, if that's derivable or is it experimental observation? – Lakshya Bhardwaj Jul 12 '11 at 11:43
It might be a little crude, but doesn't time reversal symmetry help to prove this? I can't quite think of a rigorous proof, but it seems to me that the matrix $F_{ij}$ should be transposed somehow if you change $t \to -t$.
-
Nice idea. But, forces aren't changed in time reversal. Can you please clarify how this would lead us to derive it? – Lakshya Bhardwaj Jul 11 '11 at 11:11
What you are saying is using isotropy of time. So, does that mean we need homogeneity of space and isotropy of time, i.e. two things? That's interesting. :) I am very much looking forward to your reply. – Lakshya Bhardwaj Jul 11 '11 at 13:30
Once you have that the sum of all the forces is zero, why did you assume that the internal forces will lie along the same line (i.e. why would they be opposite along the same line? They can be along any directions as long as the sum is zero). Also, you consider 2 particles; $\vec{F}_1+\vec{F}_2=0$ might give you an idea of how the forces behave. Generalise to 3 and more directions.
-
http://mathoverflow.net/questions/8052/why-are-spectral-sequences-so-ubiquitous
## Why are spectral sequences so ubiquitous?
I sort of understand the definition of a spectral sequence and am aware that it is an indispensable tool in modern algebraic geometry and topology. But why is this the case, and what can one do with it? In other words, if one were to try to do everything without spectral sequences and only using more elementary arguments, why would it make things more difficult?
-
Have you seen www-math.mit.edu/~tchow/spectral.pdf ? – Qiaochu Yuan Dec 7 2009 at 0:45
Only vaguely-I was going to read that whenever I decided to make my next attempt to really understand the subject (i.e., probably soon). But that answers the question of "what is a spectral sequence." – Akhil Mathew Dec 7 2009 at 1:17
I would also recommend the book by Bott and Tu and perhaps compare the discussion there with the one in Frank Warner's book on the equivalence between Cech, deRham, and singular cohomology. Spectral sequences, like other modern mathematical abstractions, are not absolutely necessary but they provide a convenient way to do bookkeeping and also "reuse" the same proof in otherwise apparently different settings. – Deane Yang Dec 7 2009 at 1:42
I have one question (not worth its own question) that perhaps someone here can answer. When I learnt spectral sequences as a grad student (from a number of sources: Bott and Tu, Lang, Hilton-Stammbach if memory serves me) I used to refer, say, to the $E_2$ term of the spectral sequence. In recent times, and in some of the answers below, people refer to the $E_2$ page. Not having used them for a while, I first heard "page" from Mike Hopkins during his talk on the Kervaire problem at the Atiyah 80 earlier this year. I like the imagery, but wonder at the origin. Does anyone know? Thanks! – José Figueroa-O'Farrill Dec 7 2009 at 20:42
## 5 Answers
1. Let's say you have a resolution $0\to A\to J^0\to J^1\to\dots$ (of a module, a sheaf, etc.) If $J^n$ are acyclic (meaning, have trivial higher cohomology, resp. derived functors $R^nF$), you can use this resolution to compute the cohomologies of $A$ (resp. $R^nF(A)$). If $J^n$ are not acyclic, you get a spectral sequence instead, and that's the best you can do.
2. Let us say you have two functors $F:\mathcal A\to\mathcal B$ and $G:\mathcal B\to \mathcal C$. Let us say you know the derived functors for $F$ and $G$ and would like to compute them for the composition $GF$. Answer: Grothendieck's spectral sequence.
1 and 2 account for the vast majority of applications of spectral sequences, and provide plenty of motivation -- I am sure you will agree.
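For concreteness, here is a sketch of the precise statements being alluded to, under the standard hypotheses (enough injectives and, in case 2, that $F$ sends injective objects to $G$-acyclic ones). In case 1 there is a spectral sequence
$$E_1^{p,q} = R^qF(J^p) \ \Longrightarrow\ R^{p+q}F(A),$$
which collapses to the naive computation when the $J^p$ are acyclic; in case 2 the Grothendieck spectral sequence reads
$$E_2^{p,q} = R^pG\bigl(R^qF(A)\bigr) \ \Longrightarrow\ R^{p+q}(GF)(A).$$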
The reason for the spectral sequences in both cases is the same. Intuitively, $A$ in case 1 (resp. $F(A)$ in case 2) is made of parts which are not themselves elementary. Instead, they are made (via an appropriate filtration) from some other elementary, "acyclic" objects.
So there is a 2-step process here. You can do the first step and the second step separately but they are not exactly independent of each other. Instead, they are entangled somehow. The spectral sequence gives you a way to deal with this situation.
-
Thanks! I think this (especially 1) is the motivation I was looking for. I was vaguely aware that there was a spectral sequence for composing functors but it seemed somewhat non-intuitive at first. – Akhil Mathew Dec 7 2009 at 2:30
Qiaochu links to a really nice article by Timothy Chow that says a lot about the mechanics of how to go from a filtered complex to its spectral sequence. Two questions that remain are, (1) why do filtered complexes show up so much, and (2) is there anything that you could do with a filtered complex other than compute its spectral sequence?
VA gives a very general motivation for why filtered complexes show up so much. Probably it is just more general than what I was going to say. (I am told that VA's interesting answer is not a generalization of this paragraph.) To discuss things more concretely, for question (1), here are two general ways that filtered complexes show up: (a) you might get a filtered complex if you are interested in a chain complex that comes from a filtered object. For instance it could be a stratified topological space. A CW complex is stratified, but it has a special structure so that its chain complex doesn't need a spectral sequence. (It is an acyclic resolution, as VA says.) If you have a stratification that you wish were CW but isn't, then you just have a general filtered complex. (b) There are some cases where just a single map or similar gives you a filtered complex. These may all be special cases of what Grothendieck described. For example, suppose that you are interested in the de Rham cohomology of a manifold. (This is then a certain resolution of the sheaf of smooth constant functions on the manifold.) Suppose further that the manifold is fibered over another manifold, or maybe is just foliated. Then differential forms are filtered by how parallel they are to the foliation.
Concerning question (2), there is an interesting result for filtered complexes over a field coming from representation theory. (I learned this some years ago in a discussion with Michael Khovanov.) The theorem in this case is that there is no more information in a filtered complex than in its spectral sequence. For simplicity, let's look at filtrations of length $n$. Then a filtered vector space is a representation of the $A_n$ quiver with all arrows in the same direction. (Each term of the filtration is assigned to a vertex of the quiver, and the arrows correspond to the inclusions.) The quiver also has other representations, but filtered vector spaces are the projective modules. A filtered complex is thus a projective complex over the $A_n$ quiver algebra. It is just like homological algebra over any other ring; you usually look at projective complexes. Since the $A_n$ quiver has projective dimension 1, it is easy to identify the indecomposable chain complexes of filtered vector spaces. There are two types, with two terms and with one term: $$0 \to k^{(i)} \to k^{(j)} \to 0 \qquad\qquad 0 \to k^{(i)} \to 0.$$ In this notation, $k$ is the ground field and $k^{(i)}$ is $k$ in degree $i$. The first type of complex is valid if $i \ge j$. If you compute the spectral sequence of an indecomposable complex, you will see that it detects the first type of term at the $(i-j)$th page, and kills it on the next page. The second type of complex is of course the surviving homology. You can also go backwards and reconstruct the filtered complex from its spectral sequence.
Of course it is simplistic to only discuss filtered complexes and spectral sequences over a field. Nonetheless, roughly speaking spectral sequences are no more than a framework for analyzing filtered complexes.
VA asked for more details about the indecomposable modules, which is a fair request because I was quite cryptic about the relation. In particular, I use non-standard indexing in my own thinking about this. Unfortunately, I'm not sure that I can convert to standard indexing without making a mistake, so I won't convert. But I did change one thing above: I fixed the filtration degrees so that they are correct.
Suppose that $C = (C_n)$ is a complex of filtered vector spaces. Say that the filtration is increasing and indexed by $\mathbb{Z}_{\ge 0}$. The complex has two degrees: the chain degree $n$ and the filtration degree $k$. Suppose that $\partial$ is a differential with chain degree $-1$ and filtration degree $0$. (This is where the numbering begins to be non-standard, although it makes sense from the point of view of quiver representations.) Then the theorem is that over a field, the two kinds of indecomposable complexes are those listed above, where the term $k^{(i)}$ has chain degree $n$, and the term $k^{(j)}$ (if present) has chain degree $n-1$. To be precise about what $k^{(i)}$ means, it is a filtration of the field $k$ in which the degree $j$ subspace is $0$ when $j < i$ and $k$ when $j \ge i$.
The page $E^0$ is the associated graded complex. The page $E^r$ has a differential of degree $(-1,-r)$. When $r = i-j$, the differential $\partial^r$ of the first type of indecomposable complex connects the "tip" of $k^{(i)}$ to the "tip" of $k^{(j)}$ and kills them both on the next page. The other kind of indecomposable complex has a vanishing differential, so the induced differential on every page also vanishes.
-
Thanks! Do you have a reference for the de Rham spectral sequence for a manifold fibered/foliated over another? – Akhil Mathew Dec 7 2009 at 2:33
The book "Singularities" by Peter Orlik has a discussion, and refers to what could be the original paper, "Characteristic invariants of foliated bundles", by Kamber and Tondeur. springerlink.com/content/pk53617932261r7m – Greg Kuperberg Dec 7 2009 at 2:48
A very nice example with a quiver! But what exactly is the statement? Knowing ALL terms of a spectral sequence (including $E_0^{pq}$) one can reconstruct the filtered complex $(A^n,d)$? Or only knowing $E_1^{pq}$? (The term $E_0^{p,q}$ already gives the graded pieces $Gr^pA^n$, and so each $A^n$ if it's a vector space.) Could you clarify? P.S. For the de Rham cohomology, one needs a resolution of the constant sheaf R; the sheaf of smooth functions is fine, and so acyclic. – VA Dec 7 2009 at 4:41
Since the first type of indecomposable survives until the $(j-i)$-th page, you have to know all of the pages. Let me clarify it this way: Every filtered complex is a direct sum of indecomposable ones, and the spectral sequence is a corresponding direct sum. So just compute the spectral sequence of the indecomposables listed above, and you will be enlightened. – Greg Kuperberg Dec 7 2009 at 5:09
Greg, you shouldn't be so down on your answer; yours is much more general, and gives a much more complete view of where spectral sequences come from than VA's (though of course, that is a very important special case). – Ben Webster♦ Dec 7 2009 at 15:36
Let me first try to answer a simpler question:
Why are long exact sequences so ubiquitous?
Almost anything that is written as a capital letter, followed by a subscript $i$ or superscript $-i$, $i$ an integer, and finally some stuff in parentheses, can be interpreted as $\pi_i$ of some spectrum (or sometimes space, as in nonabelian group cohomology, or maybe a sheaf of spectra or spaces...). And almost any long exact sequence which involves three similar terms in a cycle, with $i$ decreasing by 1 every three terms, comes from the long exact sequence of homotopy groups of a fiber sequence of spectra or spaces. For instance, the long exact sequence for the cohomology of a group $G$ with coefficients in a short exact sequence of $G$-modules $A \to B \to C$ corresponds to $(HA)^{hG} \to (HB)^{hG} \to (HC)^{hG}$, since $H^{-i}(G, M) = \pi_i\bigl((HM)^{hG}\bigr)$ (which is nonzero only for nonpositive $i$). This is a fiber sequence because $HA \to HB \to HC$ is one (since $A \to B \to C$ is a SES) and $(-)^{hG}$ is a homotopy limit, so it preserves fiber sequences. Or, I could draw a square with these three terms in it and $0$ in the lower left, which is both a pullback and a pushout square (since spectra form a stable $(\infty,1)$-category).
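To spell out the standard mechanism being invoked here (recorded just for reference): a fiber sequence $F \to E \to B$ of spectra, or of pointed spaces with the usual care in low degrees, induces a long exact sequence of homotopy groups
$$\cdots \longrightarrow \pi_n F \longrightarrow \pi_n E \longrightarrow \pi_n B \longrightarrow \pi_{n-1} F \longrightarrow \pi_{n-1} E \longrightarrow \cdots,$$
in which the index drops by one every three terms, exactly as described above.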
Long exact sequences are actually a special case of spectral sequences: those whose $E_1$ page has only two columns. If you have never done this before, you should check for yourself that the $d_1$ and the extension problems exactly tell you that there is a long exact sequence formed by the two columns and whatever the spectral sequence converges to. This suggests that we could try to generalize the picture with a pushout/pullback square of spectra to find something to which a more general spectral sequence corresponds. Two possibilities are: we could extend the top row of the square to a directed sequence of spectra, and take the homotopy cofiber of each map; or we could extend the right column of the square to an inverse sequence of spectra and take the fiber of each map. These are the homotopy versions of filtered and cofiltered objects, respectively; however, there is no condition on the maps (the notion of "inclusion" does not make much sense in the homotopy-theoretic world). Associated to each is a spectral sequence, though there are convergence issues when the sequence of spectra is infinite. It could be that most spectral sequences encountered in practice can be viewed as arising from an underlying sequence of spectra, though I have not attempted to convince myself of this fact.
Edit: Clark Barwick suggests that one may indeed view all "natural" spectral sequences as arising from filtered spectra. He and I would like to know whether there are any convincing counterexamples, so please let me know if you have any! Note however that 1 and 2 from VA's answer are not counterexamples.
-
Just a typo correction: the long exact sequence is formed by the entries in the $E_1$ page and the limit of the spectral sequence, not $E_2$ (or $E^1$ and not $E^2$ if you prefer homological spectral sequences to the cohomological ones). – VA Dec 8 2009 at 2:28
Oops. There are various ways a spectral sequence can yield a long exact sequence, but you're definitely right about the one I care about. – Reid Barton Dec 8 2009 at 3:12
With respect to your edit, there are several natural spectral sequences abutting to homotopy groups of certain spaces, which are less likely to arise from filtered spectra. For example, the unstable Adams spectral sequence for computing homotopy from cohomology rings. – Tyler Lawson Dec 16 2009 at 13:09
Tyler is of course right. Perhaps one should say instead that all "natural" spectral sequences come from filtered objects in an ∞-category $C$ with a conservative functor $C\to D$ to a stable ∞-category with a t-structure. The terms of the spectral sequence live in the heart of the t-structure, and if $C$ isn't already stable, then you get "fringe effects," which in general can be a lot more disastrous than the gentle term suggests. This perspective doesn't explain the presence of non-abelian groups and pointed sets in, e.g., the spectral sequence of a tower of fibrations, though. – Clark Barwick Jan 7 2010 at 17:08
I still don't feel as though I understand at an intuitive level what a spectral sequence "is" or what it "means." But here is one nice explanation I heard about how one particular spectral sequence (the Serre spectral sequence) arises in topology.
• Homotopy groups behave nicely with respect to products: $\pi_n(X\times Y) \cong \pi_n(X) \times \pi_n(Y)$.
• Homotopy groups behave not quite so nicely with respect to fibrations (= "twisted products"): instead of an isomorphism you get a long exact sequence.
Homotopy groups are great, but they are often hard to compute. (Co)homology is easier to compute, but it also doesn't behave as nicely on products and fibrations.
• Homology doesn't quite preserve products (i.e. take them to tensor products)--it almost does, but in general there's the Künneth theorem that relates them by an SES.
• Thus, when you apply homology to a fibration, you should expect to get something that has both the problems of a long exact sequence and the problems of the Künneth theorem. This "something" is a spectral sequence (the Serre spectral sequence, recalled below).
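For reference, the statement of the Serre spectral sequence in homology, for a fibration $F \to E \to B$ with $B$ simply connected (so that the coefficients are untwisted), is
$$E^2_{p,q} = H_p\bigl(B;\, H_q(F)\bigr) \ \Longrightarrow\ H_{p+q}(E),$$
and comparing it with the Künneth theorem in the case of a product $E = B \times F$ is a good exercise.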
-
Whenever you have a sequence of maps of (perhaps graded) abelian groups
$$\cdots \to A^0 \to A^1 \to A^2 \to \cdots$$
and each pair $A^{p-1} \to A^p$ is involved in a long exact sequence with third term $C^p$, there is a spectral sequence whose terms are the groups $C^p$. If the sequence eventually stabilizes on both ends (or many variants of weaker hypotheses), the spectral sequence detects the difference between the limit and the colimit. For instance, if you have a space filtered by subspaces, or a chain complex filtered by subcomplexes, or some arbitrary sequence of composable maps in a triangulated category, this kind of structure arises.
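As a concrete instance of this pattern (a sketch; the standard convergence hypotheses are assumed, e.g. an exhaustive filtration of a CW complex by subcomplexes): a filtration $X_0 \subseteq X_1 \subseteq \cdots \subseteq X$ of a space gives long exact sequences of the pairs $(X_p, X_{p-1})$, and the resulting spectral sequence has
$$E^1_{p,q} = H_{p+q}(X_p, X_{p-1}) \ \Longrightarrow\ H_{p+q}(X).$$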
The core tool introduced by homological algebra is the long exact sequence, and the spectral sequence has proved to be an extremely useful organizational tool whenever you have two or more long exact sequences that interlock; the alternative is often to simply work with the long exact sequences one by one. It's led to a lot of the methodology in algebraic topology where you can take some difficult computation that looks approximately like a structure you can handle, and handle the "easy" portion first with the difficult issues encoded by the differentials and extensions in the spectral sequence.
-
http://physics.stackexchange.com/questions/tagged/jerk+equations-of-motion
# Tagged Questions
2answers
143 views
### Is there any case in physics where the equations of motion depend on high time derivatives of the position?
For example if the force on a particle is of the form $\mathbf F = \mathbf F(\mathbf r, \dot{\mathbf r}, \ddot{\mathbf r}, \dddot{\mathbf r})$, then the equation of motion would be a third order ...
http://www.msri.org/programs/17
# Mathematical Sciences Research Institute
Home » Commutative algebra
# Program
Commutative Algebra August 12, 2002 to May 16, 2003
Organizers Luchezar Avramov, Mark Green, Craig Huneke, Karen E. Smith and Bernd Sturmfels
Description: Commutative algebra comes from several sources: the 19th century theory of equations, number theory, invariant theory and algebraic geometry. To study the set of solutions of an equation, e.g. the circle $x^2 + y^2 = 1$ in $C^2$, one can form the ring $C[x,y]/(x^2+y^2-1)$ where $C$ is the complex numbers. This ring represents polynomial functions on the circle. In a similar manner, to study the zero set of a system of polynomial equations over the complex or real numbers, $$F_1(x_1,...,x_m) = ... = F_n(x_1,...,x_m) = 0,$$ we can form a ring which represents the polynomial functions on this zero set and study its algebraic properties. This investigation can often be reduced to the study of the polynomial functions $F_1,...,F_n$ themselves and techniques of commutative algebra provide significant insight into their properties. When studying the zero set of the $F_i$, it is convenient to study the set of all polynomials $\sum_iF_iG_i$ where the $G_i$ are other polynomials. All such functions share the common zeroes of the $F_i$. The set of all such sums is an *ideal*. A classic development of Kummer's from the 19th century was the realization that ideals could be factored much like numbers can be factored into products of primes. A modern version of this factorization is the powerful tool of primary decomposition of ideals. More recently, Gröbner bases have been shown to be extremely useful to analyze properties of such ideals.
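As a small computational illustration of the ideal of combinations $\sum_i F_iG_i$ and of Gröbner bases mentioned above (a minimal sketch, not part of the original program description; it assumes the SymPy Python library, which the text does not mention, and the second polynomial is chosen purely for illustration):

```python
# Compute a Groebner basis of the ideal generated by F1 and F2,
# i.e. of the set of all combinations F1*G1 + F2*G2 with polynomial G1, G2.
from sympy import symbols, groebner

x, y = symbols('x y')
F1 = x**2 + y**2 - 1   # the circle from the text
F2 = x - y             # an extra polynomial, chosen here just for illustration

# Groebner basis with respect to the lexicographic order x > y
G = groebner([F1, F2], x, y, order='lex')
print(G)
# The basis contains a polynomial in y alone (2*y**2 - 1, up to scaling),
# so ideal membership and the common zeroes of F1, F2 can be read off
# by back-substitution.
```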
A significant development over the last 20 years is the role that commutative algebra is taking as a tool for solving problems from a rapidly expanding list of disciplines. In oversimplified terms, the process could be described as follows: Mathematicians with various backgrounds discover ways of encoding information of interest into commutative rings and their modules, then use algebraic concepts, methods, and results to analyze that information efficiently. A famous example of such encoding is the translation of an abstract finite simplicial complex into an ideal represented by the zeroes of square-free monomials in a set of variables corresponding to the vertices of the simplicial complex. Of course, the existing body of work in commutative algebra is not tailored to suit all new demands. This is precisely where the subject benefits most from the recent surge of external interest, as it receives an influx of novel questions, points of view, and expertise.
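For example (a standard illustration, not drawn from the program text): the simplicial complex on the vertex set $\{1,2,3\}$ whose facets are the edges $\{1,2\}$ and $\{2,3\}$ has $\{1,3\}$ as its only minimal non-face, so the associated square-free monomial (Stanley-Reisner) ideal in $k[x_1,x_2,x_3]$ is
$$I = (x_1x_3),$$
and algebraic invariants of the quotient $k[x_1,x_2,x_3]/I$ encode combinatorial data of the complex.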
Our year-long program will highlight these recent developments and will include the following areas:
• Tight closure and characteristic p methods
• Toric algebra and geometry
• Homological algebra
• Representation theory
• Singularities and intersection theory
• Combinatorics and Gröbner bases
The program will hold an Introductory Workshop in the early fall (dates TBA). Invited speakers include David Benson (University of Georgia), David Eisenbud (MSRI), Mark Haiman (UC Berkeley), Melvin Hochster (University of Michigan), Rob Lazarsfeld (University of Michigan), and Bernard Teissier.
Logistics
Programmatic Workshops
September 09, 2002 - September 13, 2002 Introductory Workshop in Commutative Algebra
December 02, 2002 - December 06, 2002 Commutative Algebra: Local and Birational Theory
February 03, 2003 - February 07, 2003 Commutative Algebra: Interactions with Homological Algebra and Representation Theory
March 13, 2003 - March 15, 2003 Computational Commutative Algebra
March 29, 2003 - April 03, 2003 Commutative Algebra and Geometry (Banff Int'l Research Station Workshop)
Contact: [email protected]
http://mathoverflow.net/questions/30972/kirby-calculus-and-local-moves/51933
## Kirby calculus and local moves
Every orientable 3-manifold can be obtained from the 3-sphere by doing surgery along a framed link. Kirby's theorem says that the surgery along two framed links gives homeomorphic manifolds if and only if the links can be related by a sequence of Kirby moves and isotopies. This is pretty similar to Reidemeister's theorem, which says that two link diagrams correspond to isotopic links if and only if they can be related by a sequence of plane isotopies and Reidemeister moves.
Note however that Kirby moves, as opposed to the Reidemeister moves, are not local: the second Kirby move involves changing the diagram in the neighborhood of a whole component of the link. In "On Kirby's calculus", Topology 18, 1-15, 1979 Fenn and Rourke gave an alternative version of Kirby's calculus. In their approach there is a countable family of allowed transformations, each of which looks as follows: replace a $\pm 1$ framed circle around $n\geq 0$ parallel strands with the twisted strands (clockwise or counterclockwise, depending on the framing of the circle) and no circle. Note that this time the parts of the diagrams that one is allowed to change look very similar (it's only the number of strands that varies), but still there are countably many of them.
I would like to ask if this is the best one can do. In other words, can there be a finite set of local moves for the Kirby calculus? To be more precise, is there a finite collection $A_1,\ldots A_N,B_1,\ldots B_N$ of framed tangle diagrams in the 2-disk such that any two framed link diagrams that give homeomorphic manifolds are related by a sequence of isotopies and moves of the form "if the intersection of the diagram with a disk is isotopic to $A_i$, then replace it with $B_i$"?
I vaguely remember having heard that the answer to this question is no, but I do not remember the details.
-
This is a great question! You actually learn a lot from reading it. And the level of difficulty looks ideal for an MO question. – Gil Kalai Jul 8 2010 at 18:59
Gil -- thanks. Glad you liked the question. – algori Jul 8 2010 at 20:44
In a sense Kirby moves are local, just not local in the sense of diagrams. They're local for surgery presentations. If you think of surgery presentations as describing handle attachments on a $4$-ball, handle attachments come from Morse functions on the total manifold (after attachments) and moving from one surgery presentation to another amounts to moving from one Morse function to a neighbouring Morse function. I don't know the answer to your actual question although I think several people have thought about this. – Ryan Budney Jul 15 2010 at 10:23
Ryan -- yes, indeed. By the way, did anyone describe an analog of Kirby's calculus for general manifolds in a given oriented cobordism class? – algori Jul 15 2010 at 16:15
A full analogy would be a little rough to pull off -- 3-manifolds have the advantage that the surgery presentations can be made to only have 2-handles. In high dimensions you tend to have combinations of handle dimensions so you'd have more of a "surgery sequence of diagrams" than a "surgery diagram". So there'd be relations between surgery sequences corresponding to handle cancellations, and relations coming from "changing the cobordism". But I imagine you could say something. I don't know what may have been done in this direction. – Ryan Budney Jul 16 2010 at 12:30
## 3 Answers
There is a finite set of local moves. For instance, these:
In the second row, the number of encircled vertical strands is $n\leqslant 3$ on the left and $n\leqslant 2$ on the right move. So they are indeed finite. The bottom-right move is just the Fenn-Rourke move (with $\leqslant 2$ strands). The box is a full counterclockwise twist. We also add all the corresponding moves with $-1$ instead of $+1$.
We can prove that these moves generate the Fenn-Rourke moves as follows. Consider as an example the Fenn-Rourke move with 3 strands:
To generate this move, we first construct a chain of 0-framed unknots as follows:
Then we slide the vertical strands along the chain:
Now it is sufficient to use the Fenn-Rourke move with 2 strands, slide, and use another Fenn-Rourke with one strand:
Finally, iterate this procedure 3 times:
and you are done. The same algorithm works for the general Fenn-Rourke move with $n$ strands.
EDIT I have slightly expanded the proof and posted it in the arXiv as http://arxiv.org/abs/1102.1288
-
I don't know why, I cannot see the pictures with Firefox (but I can with Safari). – Bruno Martelli Jan 13 2011 at 9:04
@Bruno: That is because your images are all .pdf, which are not universally supported as embedded web images. You need to use .jpg, .gif, .png for everyone to see your images. – Joseph O'Rourke Jan 13 2011 at 13:28
Thanks a lot, I converted them into png. I would suggest to write this suggestion also in the FAQs, that could be helpful (but maybe it's already there and I didn't look at the appropriate page) – Bruno Martelli Jan 13 2011 at 13:57
Very nice! I think your set of moves can be shrunk, by allowing only stabilization (Kirby 1) with $+1$ framing, and only a $-1$ Fenn-Rourke move. See Figure 13 of Ning Lu's paper, cited in my answer, where he credits this observation to Lickorish. – Daniel Moskovich Feb 8 2011 at 14:09
I am glad that you like it. Thank you for the reference, I agree that we can eliminate from the list all Fenn-Rourke moves with sign $+1$ except the stabilization. I don't know if we can do more. – Bruno Martelli Feb 9 2011 at 13:40
If you didn't like my first answer, here's a different one; again, slightly changing the question. This answer is better suited to the way the Kirby theorem is used in quantum topology. Consider the space KTG of framed oriented knotted trivalent graphs, modulo four operations:
1. Switching the orientation of an edge
2. Edge deletion.
3. The unzip operation- see here for example.
4. Connect sum.
Dylan Thurston proved that KTG is finitely generated by two elements- the tetrahedron with its two possible vertex-orientations.
You can realize a band-slide (Kirby II) by unzipping a KTG in two different ways. The unzip is an honest local move. One application is that band-slides become a well-defined move, both on the topological level (for links the band-slide isn't well-defined because it depends how you bring together the arc to slide, and the arc it slides over), and indeed even on the level of Jacobi diagrams.
The full story is work-in-progress by Bar-Natan and Dancso. In keeping with Dror's habit of mathematical open-ness, they posted a rough draft version. See Page 13 for how to realize a band-slide with unzips.
Thus, my answer this time is:
Yes, you can realize Kirby moves locally if you extend to KTG. And quantum topologically, extending to KTG is probably conceptually the right thing to do.
I don't know the corresponding 4-dimensional picture, although I have my fantasies.
-
Dear Daniel -- it's not that I didn't like your answer(s). Far from it. Thanks for the information and the references. They are very interesting indeed; it's just that I don't think they completely settle the question as it is stated, which is why I'm keeping it open for the moment. – algori Jan 2 2011 at 17:42
I completely agree. I also want to know the answer to this question! The two answers I gave are close thoughts I hoped could be useful- but they don't answer the question, only various closely related questions. – Daniel Moskovich Jan 2 2011 at 19:44
I think that the answer to your question is yes, EDIT: if you allow "local" to mean "local within a thickened surface" or allow local moves between tangles whose strands may not be part of the surgery link. So my answer is "yes, to a slightly modified version of your question".
One idea is that the Kirby theorem follows from your favourite finite presentation of the mapping class group (say one of Wajnryb's), and that you can translate there and back between surgery presentations and Heegaard splittings of $3$-manifolds.
In one direction, a mapping class on a Heegaard surface $H\subset S^3$ is generated by Dehn twists, each of which can be realized by surgery along the curve along which you are twisting, or rather the inclusion of that curve into $S^3$. Thicken $H$ to $H\times I$, and push each curve off to a different height $H\times {t_i}$. Include the curves in $S^3$, and there's your surgery link. In the other direction, project your surgery link along a surface $H$ so that each component becomes a simple closed curve (the boundary of a thickened Seifert surface of the link provides one good surface), and you have surgery along those curve realized as a bunch of Dehn twists, hence a mapping class of $H$.
By the Reidemeister-Singer theorem, any two Heegaard splittings of a 3-manifold are stably equivalent, so adding a $\pm 1$-framed unknot away from the surgery presentation is your first local move. Then, you have all the moves that are induced by the relations in your favourite finite presentation of the mapping class group. Write the left hand side of the relation as one framed link, the right-hand side as another, set them equal, and voila. It isn't pretty though. EDIT: As Ian Agol commented, these latter moves are local within a thickened surface, but not necessarily in a ball.
You can find pictures of the local moves in Ning Lu's paper or in Matveev and Polyak's paper. In one direction, as proved in both papers, the Kirby moves generate these moves. In the other direction, the fact that they come from a finite presentation of the mapping class group tells you that they generate the Kirby moves.
EDIT: If you want local moves in a ball, Matveev-Polyak gives a tangle presentation of the mapping class group, and Section 5 tells you how to translate there and back between this and a surgery presentation. Roughly, you remove regular neighbourhoods of strands whose endpoints are at the bottom, and the plat closure of what you are left with is the surgery link. A complete set of local moves between such tangles is Figures 12-19 of that paper. Very similar constructions appear for example in papers of Habiro. Anyway, there is a complete set of local moves in a ball between tangles, and a clear easy algorithm to translate there and back from such tangles to surgery presentations.
-
Hmmm, I'm not sure these are local in the sense of the question. The Hatcher-Thurston relations lie in subsurfaces which are a surface of bounded complexity. So the relation occurs in a thickening of this surface inside S^3. However, this is not inside of a ball (so not a tangle of bounded complexity), since there may be other strands of the surgery link coming through. – Agol Sep 3 2010 at 3:53
Fair enough. The moves I just described are local in the thickened surface, just not in a ball. However, you can write them as local moves between tangles inside a ball- see Figures 12-19 of Matveev-Polyak, where they do just that. The tangles are not themselves part of the surgery link (at least, not all strands are), but they seem a good substitute, and there are clear algorithms to translate both ways, as in Matveev-Polyak. I'll edit this into my answer later. So I suppose my answer is "if you slightly modify the question, then yes". – Daniel Moskovich Sep 3 2010 at 14:44
http://physics.stackexchange.com/questions/tagged/boundary-condition
# Tagged Questions
1answer
45 views
### Periodic boundary condition on a Wave Function of a Particle in a Box
Until now solving the Schrodinger Equation for a particle in a box was relatively easy because the boundaries conditions imposed zero value on the wave function at the boundaries. But now I must find ...
1answer
46 views
### Boundary conditions on wave equation
I am having trouble understanding the boundary conditions. From the solutions, the first is that $D_1(0, t) = D_2(0, t)$ because the rope can't break at the junction. The second is that ...
0answers
39 views
### Klein-Gordon equation (boundary value problem) [closed]
Could some help me with this question. One of my friends ask me but I have no idea about it I am pure mathematician This the equation $u_{tt}-u_{xx}+\frac{3}{4}u-\frac{3}{2}u^3=0$, Here the ...
1answer
42 views
### Computational Fluid Dynamics methods
I have read some articles about the finite difference method on a cartesian orthogonal grid. I understand how it works when Dirichlet boundary conditions are used, or when Neumann boundary conditions ...
0answers
41 views
### equilibrium intensity Helmholtz equation
The Helmholtz equation describes the evolution of the bulk electromagnetic field, even when doing scalar optics as an approximation. Beam Propagation Method is a common approximation that assumes certain ...
1answer
29 views
### Why frequency and tension doesn't change in the two medium?
I am reading a book about wave mechanics. There are two different cord (one light and one heavy) connected together, one person waving the lighter one, the wave transverse to the right from the ...
0answers
25 views
### Interface condition for heat exchange
I would like to compute the heat distribution of a piece of metal with some surrounding material. The heat is assumed to propagate by diffusion, so inside the metal piece and also on the outside, the ...
2answers
276 views
### Greens function in EM with boundary conditions confusion
So I thought I was understanding Green's functions, but now I am unsure. I'll start by explaining (briefly) what I think I know then ask the question. Background Greens are a way of solving ...
1answer
140 views
### What's the average position of oscillating particles in a box with periodic boundary conditions?
Imagine an open box repeating itself in a way that a if a particle crossing one of the box boundary is "teleported" on the opposite boundary (typical periodic boundary position in 3D). Now put a ...
2answers
2k views
### Dirichlet and Neumann Boundary condition: physical example
Can anybody tell me some practical/physical example where we use Dirichlet and Neumann Boundary condition. Is it possible to use both conditions together at the same region? If we have a cylindrical ...
1answer
74 views
### Help with the understanding of boundary conditions on $AdS_3$
So I am trying to reproduce results in this article, precisely the 3rd chapter 'Virasoro algebra for AdS$_3$'. I have the metric in this form: ...
1answer
272 views
### Diffeomorphisms and boundary conditions
I am trying to find out how did the authors in this paper (arXiv:0809.4266) found out the general form of the diffeomorphism which preserve the boundary conditions in the same paper. I found this ...
1answer
114 views
### Einstein's equations as a Dirichlet boundary problem
Can Einstein's equations in vacuum $R_{ab} - \frac{1}{2}Rg_{ab} + \Lambda g_{ab}= 0$ be treated as a Dirichlet problem? I am thinking of something along those lines: Consider a compact manifold $M$ ...
0answers
37 views
### Non reflecting boundaries in waveguides
Can someone please explain the Sommerfeld radiation condition and what is the alternative non-reflecting boundary conditions for waveguides of general geometries?
1answer
97 views
### Boundary conditions for fields in Kerr/CFT
I am reading a paper by Guica et al. on Kerr/CFT correspondence (arXiv:0809.4266) and I'm not sure if I got this. They choose the boundary conditions, like a deviation of the full metric from the ...
4answers
384 views
### Is the momentum operator well-defined in the basis of standing waves?
Suppose I want to describe an arbitrary state of a quantum particle in a box of side $L$. The relevant eigenmodes are those of standing waves, namely \left<x|n\right>=\sqrt{\frac{2}{L}}\cdot ...
1answer
95 views
### boundary limit conditions in 3D water surface simulation
As is discussed on this post, taking some assumptions, the water surface can be simulated by a discrete approximation of a grid of heights using this formula, where $H_T$ is the new height grid, $H_{T-1}$ ...
2answers
962 views
### How do you find the magnetic field corresponding to an electric field?
If we are given the electric field $\vec E$ how can I find the corresponding magnetic field? I think I can use Maxwell's equations? In particular, \$\nabla\times \vec E= -{\partial \vec B\over \partial ...
2answers
492 views
### Barrier in an infinite double well
I am stuck on a QM homework problem. The setup is this: (To be clear, the potential in the left and rightmost regions is $0$ while the potential in the center region is $V_0$, and the wavefunction ...
1answer
156 views
### Curvature and edge state
If the boundary of quantum hall fluid has non-constant curvature, how will it affect the edge state which is usually described in chiral Luttinger fluid?
1answer
164 views
### Quantum Field Theory: why fields are equal to zero on the boundary?
One of the first assumptions, when introducing the Lagrangian and Hamiltonian in an undergraduate course on QFT is $$\phi(x)=0\,\text{on the boundary}$$ and this is widely used in many situations ...
4answers
454 views
### Is the principle of least action a boundary value or initial condition problem?
Here is a question that's been bothering me since I was a sophomore in university, and should have probably asked before graduating: In analytic (Lagrangian) mechanics, the derivation of the ...
2answers
249 views
### Maxwell equation boundary conditions on a conducting sheet
I'm having difficulties solving boundary conditions for an infinitely thin conducting layer in a presence of an alternating field. I use the Maxwell equations: $\nabla \cdot \mathbf B = 0$ \$\nabla ...
0answers
138 views
### A square with electric field lines parallel to its sides [closed]
A square ABCD has charges $+q$ and $-q$ on the vertices A and C (diagonally opposite sides). How can I place point charges outside this square so that electric field lines are parallel to all the ...
0answers
42 views
### EM-wave hits a brick-wall, $\pi/2$ -phase-shift? [duplicate]
Possible Duplicate: Phase shift of 180 degrees on reflection from optically denser medium If I have a cord-wave, I get a phase-shift with attached cord but do I get such a phase-shift with ...
2answers
188 views
### Boundary conditions for crystals
As students on solid state physics, we are all taught to use the periodic boundary condition, taking 1D as an example: $\psi(x)=\psi(x+L)$ where $L$ is the length of the 1D crystal. My question is: ...
0answers
99 views
### Quantization and natural boundary conditions
The Euler-Lagrange equations follow from minimizing the action. Usually this is done with fixed (e.g. vanishing) boundary conditions such that we do not have to worry about any boundary terms. ...
2answers
85 views
### Can a mechanical systems on hold be switched off, in another way than just letting it do it's thing?
Can the value of the potential energy, which is responsible for driving the system, diminish in time, while the system itself is stationary during that time? Can there be dissipation in a system, ...
1answer
162 views
### What is discontinuity in Vector Fields
I am reading David J. Griffiths and have a problem understanding the concept of discontinuity for E-field. The E-field has apparently to components. (How does he decompose the vector field into the ...
2answers
77 views
### The appearance of volume $V$ in the Fourier series representation of a periodic cubic system
In the textbook Understanding Molecular Simulation by Frenkel and Smit (Second Edition), the authors represent a function $f(\textbf{r})$ (which depends on the coordinates of a periodic system) as a ...
2answers
153 views
### Effect of boundary conditions on partition functions
While computing partition functions in statistical mechanics models (say) on a 2d lattice one usually makes use of "circular boundary conditions" which thus gives the lattice topology of a torus. It ...
1answer
136 views
### How to deal with conflicting “no-slip” Navier-Stokes boundary value constraints?
The no-slip boundary value constraint for Navier-Stokes solutions was explained in my fluid dynamics class as a requirement to match velocities at the interfaces. Now that my class is done, I've been ...
2answers
253 views
### Can we impose a boundary condition on the derivative of the wavefunction through the physical assumptions?
Consider the Schrödinger equation for a particle in one dimension, where we have at least one boundary in the system (say the boundary is at $x=0$ and we are solving for $x>0$). Sometimes we want ...
0answers
153 views
### boundary conditions
Let the operator be $-i\hbar x\frac{df(x)}{dx}-i\hbar \frac{f(x)}{2}=E_{n}f(x)$. What kind of boundary conditions can I put? I have tried to find a function so that for every integer $n$ I get ...
1answer
214 views
### Open Ended/ Close Ended instruments?
Close ended instruments have twice the wavelength, because the wave must travel twice the distance to repeat itself. Why must a wave reach a lower density medium (air in this case) to repeat? When ...
0answers
78 views
### How do I simulate a constant velocity flow in porous media
I am modelling gas combustion in porous media. Most contemporary models assume that the pressure drop from the porous media is small enough to disregard, but I want to include that in my ...
3answers
2k views
### Phase shift of 180 degrees on reflection from optically denser medium
Can anyone please provide an intuitive explanation of why phase shift of 180 degrees occurs in the Electric Field of a EM wave,when reflected from an optically denser medium? I tried searching for it ...
1answer
128 views
### Can someone explain probability flux in the tunneling boundary condition of Vilenkin?
This is what's leading to the notion of a quantum universe tunneling from nothing into existence, right? The idea is that probability flux flows out of superspace (configuration space) at ...
1answer
215 views
### Boundary conditions of Navier-Cauchy equation
I'm having difficulties with Neumann boundary conditions in Navier-Cauchy equations (a.k.a. the elastostatic equations). The trouble is that if I rotate a body then Neumann boundary condition should ...
2answers
274 views
### When “unphysical” solutions are not actually unphysical
When solving problems in physics, one often finds, and ignores, "unphysical" solutions. For example, when solving for the velocity and time taken to fall a distance h (from rest) under earth gravity: ...
1answer
275 views
### Do spacelike junctions in the Thin-Shell Formalism imply energy nonconservation and counterintuitive wormholes?
The Thin Shell Formalism (MTW 1973 p.551ff) is used to properly paste together different vacuum solutions to the Einstein equations. At the junction of the two solutions is a hypersurface of matter – ...
2answers
453 views
### Boundary conditions for static electric field
Consider a surface that carries surface charge density. In electrostatics, boundary conditions are studied by showing that there is a discontinuity in the normal component of the electric field across ...
1answer
161 views
### Boundary condition for water waves generation
I am trying to model water wave propagation in 2D with CFD. What would be a suitable boundary condition to generate the waves on the left boundary? Currently I vary the fraction of water above the ...
0answers
262 views
### What is a boundary condition for capacitors/dielectrics?
I am extremely confused about what boundary conditions are. One minute ago I was solving easy capacitor questions and the next minute I am being asked boundary condition questions and there is no such ...
2answers
350 views
### How does Bloch's theorem generalize to a finite sized crystal?
I would be fine with a one dimensional lattice for the purpose of answering this question. I am trying to figure out what more general theorem (if any) gives Bloch's theorem as the number of unit ...
3answers
821 views
### What's the difference between “boundary value problems” and “initial value problems”?
Mathematically speaking, is there any essential difference between initial value problems and boundary value problems? The specification of the values of a function $f$ and the "velocities" ...
2answers
440 views
### Image charges, laplace equation and uniqueness theorem
Consider a well-known problem of the electric field generated by a system composed of a point charge in proximity of a large earthed conductor. It is said that the potential due to an image charge ...
1answer
335 views
### Free boundary conditions
I am trying to simulate liquid film evaporation with free boundary conditions (in cartesian coordinates) and my boundary conditions are thus: $$\frac{\partial h}{\partial x} = 0, \qquad (1)$$ ...
1answer
95 views
### Surface tension of N non-mixing fluids
I am a mathematician, not a physicist, so please be gentle with me if I write something wrong. Consider a bounded, regular container $\Omega$, which is filled with the fluids $F_1,...,F_N$ which do ...
1answer
203 views
### Transparent boundary condition. Beam propagation method
I am interested in the finite-difference beam propagation method and its applications. I try to solve the Helmholtz equation. At first, i would like to solve numerically it for the easiest case, ...
http://mathoverflow.net/questions/21743/additive-commutators-and-trace-over-a-pid
## Additive commutators and trace over a PID
I would like to find an example of principal ideal domain $R$, such that there exists a square matrix $A\in \mathfrak{M}_n(R)$ with zero trace that is not a commutator (i.e. for all $B,C \in \mathfrak{M}_n(R)$, $A\neq BC-CB$).
I know that such a PID (if it can be found) cannot be a field, or $\mathbb{Z}$.
-
Why can't you take $R = \mathbb{Z}$, n=2, and $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$? – Mike Skirvin Apr 18 2010 at 16:08
Because that A can be written as BC-CB! – Kevin Buzzard Apr 18 2010 at 16:16
@Mike: $A=[B,C]$ with $B=\begin{pmatrix}1&0\\0&0\end{pmatrix}$, $C=A$. – Vladimir Dotsenko Apr 18 2010 at 16:20
I see, I was thinking that $B,C$ also had to be trace zero. – Mike Skirvin Apr 18 2010 at 18:16
Is there a NATURAL proof for $R$ being a field? I have only seen ones involving a lot of hacking. – darij grinberg Sep 23 2010 at 16:43
## 3 Answers
It is not difficult to see that Rosset & Rosset's result for $2\times2$ matrices is equivalent to the surjectivity of the bilinear map $(X,Y)\mapsto X\times Y$ (called vector product when $A={\mathbb R}^3$) over $A^3$. For this, just search for $B$ and $C$ such that $b_{22}=c_{22}=0$.
To prove it, let $Z=(a,b,c)\in A^3$ be given. One can choose a primitive vector $X=(x,y,z)$ such that $ax+by+cz=0$. By primitive, I mean that $\gcd(x,y,z)=1$. Bézout tells us that there exists a vector $U=(u,v,w)$ such that $ux+vy+wz=1$. Set $Y=Z\times U$. Then $Z=X\times Y$.
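To check the last step explicitly, one can use the triple-product identity $X\times(Z\times U) = (X\cdot U)\,Z - (X\cdot Z)\,U$, a polynomial identity with integer coefficients and hence valid over any commutative ring: since $X\cdot Z = ax+by+cz = 0$ and $X\cdot U = ux+vy+wz = 1$,
$$X \times Y = X\times(Z\times U) = (X\cdot U)\,Z - (X\cdot Z)\,U = Z.$$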
-
Every matrix with trace zero over a PID is a commutator, according to the MR review of
Rosset, Myriam(IL-BILN); Rosset, Shmuel(IL-TLAV) Elements of trace zero that are not commutators. Comm. Algebra 28 (2000), no. 6, 3059--3072.
From the Math Review:
Although Shoda's method fails when $C$ is a PID, the authors do prove the result in this case, and give counterexamples for $C$ of dimension $\ge 2$.
However, I just took a look at the paper, and as far as I can see the authors only claim the result for 2x2 matrices!
Can anyone resolve this conundrum?
-
The paper of Rosset and Rosset contains an explicit calculation for 2x2 matrices over a PID. The math review, and to some extent the paper, claim that it solves this for general square matrices over PIDs, but I do not see that this is the case. – Jack Schmidt Apr 18 2010 at 17:48
@Jack: I just independently spotted this! I thought it was a bit disingenuous to give a MR review for a reference, and when I checked it out I came to the same conclusion as you! I edited accordingly. – Kevin Buzzard Apr 18 2010 at 17:49
@Kevin: this confirms my feeling - this paper does not claim it for matrices bigger than 2x2. Moreover, on the second page of the article, they claim explicitly that they do not know the answer for the 3x3 case! – Vladimir Dotsenko Apr 18 2010 at 18:05
Here (see the very last paragraph) it is stated that every matrix with trace zero over a PID is a commutator. However, I can't come up with a proof right away; the only proof for matrices over a field that I remember (due to Albert?) does not immediately generalize.
-
If it helps anyone, the proof for matrices over a field (Albert and Muckenhoupt) is available via the following link: projecteuclid.org/… – Vladimir Dotsenko Apr 18 2010 at 17:14
Vladimir, that is my .pdf file you linked to and I actually do not have a good citation! I may have just read the MathSciNet review as Jack did without looking at the Rosset--Rosset article itself. So I have now removed that claim from the .pdf file and will see if I can find someone to settle this question. oh dear.... – KConrad Apr 18 2010 at 20:22
I have written to someone who has worked on these kinds of problems and will post a reply when I get it. – KConrad Apr 18 2010 at 20:29
Vladimir, I wrote to Zak Mesyan (who wrote a paper on trace 0 and commutators) and he told me that whether an arbitrary nxn matrix over a PID with trace 0 is a commutator is still open as far as he knows. – KConrad Apr 19 2010 at 14:04
Keith, thanks! Why don't you post it as an answer here? I think it's a bit misleading that the accepted answer to this question states that every nxn traceless matrix over a PID is a commutator! – Vladimir Dotsenko Apr 19 2010 at 21:13
http://math.stackexchange.com/questions/74787/adjointness-of-the-corresponding-simplicial-functors-associated-to-an-adjoint-pa
# Adjointness of the corresponding simplicial functors associated to an adjoint pair between categories
Suppose you have an adjoint pair of functors $F:C\to D$ and $G:D\to C$. Let ${\mathrm{Simp}}C$ and ${\mathrm{Simp}}D$ be the categories of simplicial objects in the categories $C$ and $D$ respectively. We have an associated pair of functors $\tilde{F}:{\mathrm{Simp}}C\to {\mathrm{Simp}}D$ and $\tilde{G}:{\mathrm{Simp}}D\to {\mathrm{Simp}}C$ between these simplicial categories. Are $\tilde{F}$ and $\tilde{G}$ still adjoint functors, at least up to homotopy?
-
## 1 Answer
Yes.
First something more general:
Let $A$ be a small category and let $F: C \leftrightarrow D : G$ be an adjoint pair of functors. Then there is an associated adjoint pair of functors $F^A : C^A \leftrightarrow D^A:G^A$ on the respective categories $C^A$ and $D^A$ of functors $A \to C$ and $A\to D$
To prove this, check:
1. Every functor $H: C \to D$ gives a functor $H^A: C^A \to D^A$ (simply by postcomposition of a functor $A \to C$ with $H:C \to D$).
2. Every natural transformation $\alpha: H \Rightarrow H'$ between functors $H,H': C \to D$ gives a natural transformation $\alpha^A : H^A \Rightarrow H'{}^A$.
3. Unit and counit of the adjunction $F: C \leftrightarrow D: G$ extend to unit and counit of an adjunction $F^A:C^A \leftrightarrow D^A:G^A$: verify the triangle identities (called counit-unit equations on Wikipedia), displayed below.
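For reference, the triangle identities to be checked for the induced unit $\eta^A$ and counit $\varepsilon^A$ are the usual ones,
$$(\varepsilon^A F^A)\circ(F^A\eta^A) = \mathrm{id}_{F^A}, \qquad (G^A\varepsilon^A)\circ(\eta^A G^A) = \mathrm{id}_{G^A},$$
and they hold because they can be verified pointwise, i.e. after evaluating at each object of $A$, where they reduce to the triangle identities of the original adjunction.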
Now observe that a simplicial object in $C$ is the same as a contravariant functor $\Delta \to C$, where $\Delta$ is the simplex category consisting of the finite ordinal numbers and order-preserving maps, so $\operatorname{Simp}{C} = C^{\Delta^{\rm op}}$.
Apply the above with $A = \Delta^{\rm op}$.
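(To make point 3 concrete — a sketch in notation of my own choosing: for $H\in C^A$ and $K\in D^A$ one can take the unit and counit of the lifted adjunction objectwise,
$$(\eta^A_H)_a := \eta_{H(a)} : H(a) \longrightarrow GFH(a), \qquad (\varepsilon^A_K)_a := \varepsilon_{K(a)} : FGK(a) \longrightarrow K(a), \qquad a\in A.$$
Naturality in $a$ and the triangle identities for $(F^A,G^A)$ then follow because they hold at each object $a\in A$.)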
-
Theo, nice answer! I found it interesting in its generality (I don't know anything about simplicial stuff). One question, though. Modulo size issues, you have "applied the internal hom functor $\hom_{\textbf{CAT}}(A,-):\textbf{CAT} \to \textbf{CAT}$ to the adjunction given by $(F,G)$", and you gave a sketch of the proof that it is still an adjunction. Is this still true of any functor $X:\textbf{CAT}\to \textbf{CAT}$? Thanks. – Bruno Stonek Jul 31 '12 at 14:03
@Bruno: Thanks, however, I'm not sure I understand what exactly you're asking: an endo-functor of the category $\mathbf{CAT}$ doesn't necessarily give you anything on the natural transformations (that Hom does give you that is used here in point 3). So I'd expect a similar statement as soon as $X$ is a $2$-functor with the appropriate strictness-properties but I don't feel comfortable enough with higher stuff to say anything definitive. An example of interest might help me see what you want, but I'm not making any guarantees. – t.b. Aug 1 '12 at 19:36
I think you perfectly understood what I was asking. I guess I'll look up on 2-functors. Thank you! – Bruno Stonek Aug 1 '12 at 19:43
http://mathoverflow.net/questions/55454/is-there-a-riemann-roch-for-smooth-projective-curves-over-an-arbitrary-field
|
## Is there a Riemann-Roch for smooth projective curves over an arbitrary field?
Let $X$ be a smooth projective curve over a field $k$. We let $\omega$ be the canonical line bundle of $X$ and we denote by $F$ the field of $k$-valued rational functions on $X$.
(1) When $k$ is algebraically closed then $\omega$ is a dualizing sheaf for $X$. From there it is easy to prove Riemann-Roch for regular (holomorphic) line bundles $L$ over $X$: By this I mean a precise formula which computes the Euler characteristic of $L$ in terms of the degree of $L$ and the genus of $X$ (I think of both as being topological invariants).
(2) When $k$ is a finite field, one may consider the topological ring $\mathbf{A}_F$, the ring of adeles of $F$. Doing Fourier analysis on this self-dual locally compact abelian group, together with a counting argument, one may deduce Riemann-Roch.
Q1: Is it possible to generalize Riemann-Roch to other fields? What about real and $p$-adic numbers?
Q2: Is $\omega$ a dualizing sheaf when $k$ is finite? (If not, I guess that in general one has to replace the notion of dualizing sheaf by some kind of complex in a derived category.)
Q3: Is there a way to prove simultaneously $(1)$ and $(2)$?
Q4: Is there some notion that would encompass simultaneously $\mathbf{A}_F$ and $\omega$?
-
## 2 Answers
Yes. There is a Riemann-Roch for smooth projective curves over arbitrary fields. It was proved by the German school of function fields in the 30's. From (2) I deduce that you've been reading Weil's "Basic Number Theory". Anyway, the proof that Weil gives there is a shortened version of a proof he gave of the full theorem. It's in his collected works ("Algebraische Beweis der Riemann-Roch Satz", or some such title). The proof is reproduced in many books, particularly those with "function field" in the title (e.g. Stichtenoth) or Lang's "Introduction to algebraic and abelian functions" (watch for misprints, as usual). You can also prove it with the modern geometric machinery. A book that bridges the two approaches is Serre's "Groupes algebriques et corps de classes". There is absolutely no restriction on the ground field, except the proof is slightly more tortuous when the field is not perfect.
-
The proof in Serre's book is one of the best examples I know of of beauty as it can be embodied in an argument :) – Mariano Suárez-Alvarez Feb 14 2011 at 22:37
Thanks Felipe for the reference. – Hugo Chapdelaine Feb 14 2011 at 22:43
Van der Waerden's Algebra, vol. 2, 5th edition, presents Weil's proof, taken from J. Reine u. Angew. Math., vol.179, 1938. He mentions it was influenced by the metodo rapido of Severi, for which a later reference is in Acta Pont. Accad. Sci., 1952. The earliest proof cited is by F.K. Schmidt, Math. Z., vol.41, 1936. – roy smith Feb 15 2011 at 15:44
The unabridged name of the journal mentioned by roy is Acta Pontificia Accademia Scientiarum, Città del Vatican. Manin, Hawking and Witten are contemporary members of this Academy: it.wikipedia.org/wiki/… – Georges Elencwajg Feb 15 2011 at 17:47
Dear Hugo, the wonderful formalism of schemes allows us to have a Riemann-Roch theorem for a projective curve $X$ over an arbitrary field $k$, even without any assumption of smoothness. It says, like in the good old times of Riemann surfaces, that for a Cartier divisor $D$ on $X$ we have $$\chi (\mathcal O_X(D))= \deg(D)+ \chi (\mathcal O_X)$$
There is a dualizing sheaf $\omega$ and Serre duality yields the formula $$h^0(X,\mathcal O_X(D))-h^0(X,\omega \otimes\mathcal O_X(-D))=1-p_a(X)+\deg D$$ where $p_a(X)=1-\chi(\mathcal O_X)$ is the so-called arithmetic genus of the curve.
Everything is in our friend Qing Liu's fantastic book Algebraic Geometry and Arithmetic Curves but I bet he's too modest to give you this obvious answer!
Edit The first displayed formula is actually valid in even greater generality: it holds for any projective curve $X$ (smooth or not) over an arbitrary artinian ring $k$. The proof is on page 164 of
Altman, A.; Kleiman, S., Introduction to Grothendieck duality theory. Lecture Notes in Mathematics No. 146, Springer-Verlag, Berlin-New York, 1970
Complement As an answer to Hugo's question in his comment below, let me add that indeed, in the case of a smooth projective curve over a field $k$, the dualizing sheaf $\omega$ is nothing other than the canonical sheaf. More generally, for a smooth projective variety of dimension $r$ over $k$, the dualizing sheaf is just the canonical sheaf $\omega=\Omega^r_{X/k}$. This is (a special case of) Theorem I.4.6, page 14 in Altman-Kleiman's monograph.
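(As a sanity check — a standard example, not part of the answer above: for $X=\mathbb{P}^1_k$ over any field $k$ one has $p_a(X)=0$ and $\omega\cong\mathcal{O}(-2)$, so for $D=n\cdot P$ with $P$ a rational point and $n\ge 0$ the second displayed formula reads
$$h^0(\mathcal{O}_{\mathbb{P}^1}(n)) - h^0(\mathcal{O}_{\mathbb{P}^1}(-2-n)) = (n+1) - 0 = 1 - p_a(X) + \deg D,$$
as expected.)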
-
Of course Felipe is absolutely right about the German school's priority . However I find fascinating the ease and elegance with which schemes integrate all those classical results into their harmonious, coherent and powerful language. – Georges Elencwajg Feb 15 2011 at 1:31
Thanks a lot Georges for Qing Liu's reference. So what is the dualizing sheaf in general for a smooth projective curve defined over an arbitrary field $k$? Is it still the canonical line bundle? – Hugo Chapdelaine Feb 15 2011 at 18:08
@Hugo: yes. (I would have answered your question by saying: "Yes. It's called the Riemann-Roch Theorem." None of the standard proofs use the algebraic closure of the ground field at any point. But the answers you have been given are somewhat more generous than this...) – Pete L. Clark Feb 16 2011 at 7:29
You are probably right, but unfortunately, unless I'm mistaken, I thought that $k$ was assumed to be algebraically closed in Hartshorne's book. Is there some tricky point to address in characteristic $p$ when $k$ is not perfect? – Hugo Chapdelaine Feb 17 2011 at 14:44
http://physics.stackexchange.com/questions/tagged/newtonian-gravity?sort=votes&pagesize=30
|
# Tagged Questions
The Newtonian model of gravity in which the force between two objects is given by GMm/r^2.
7answers
3k views
### Does juggling balls reduce the total weight of the juggler and balls?
A friend offered me a brain teaser to which the solution involves a $195$ pound man juggling two $3$-pound balls to traverse a bridge having a maximum capacity of only $200$ pounds. He explained that ...
9answers
7k views
### Don't heavier objects actually fall faster because they exert their own gravity?
The common understanding is that, setting air resistance aside, all objects dropped to Earth fall at the same rate. This is often demonstrated through the thought experiment of cutting a large object ...
7answers
3k views
### Why is the Earth so fat?
I made a naive calculation of the height of Earth's equatorial bulge and found that it should be about 10km. The true height is about 20km. My question is: why is there this discrepancy? The ...
1answer
704 views
### Why does it take so long to get to the ISS?
I don't understand why, when first launched, Space X's Dragon capsule had to orbit the Earth many times in order to match up with the ISS? Was this purely to match its speed, or to get closer (as in ...
4answers
1k views
### Would you be weightless at the center of the Earth?
If you could travel to the center of the Earth (or any planet), would you be weightless there?
4answers
587 views
### Staying in orbit - but doesn't any perturbation start a positive feedback?
I am not a physicist; I am a software engineer. While trying to fall asleep recently, I started thinking about the following. There are many explanations online of how any object stays in orbit. The ...
8answers
12k views
### Why doesn't the Moon fall upon Earth?
Why doesn't the Moon, or for that matter anything rotating another larger body, ever fall into the larger body?
9answers
2k views
### Why do space crafts take off with rockets instead of just ascending like an aircraft until they reach space?
I guess it's not a very educated question, but I never quite understood why spacecrafts have to shoot up and can't just reach space by simply continuing an upwards ascent like an airplane.
5answers
1k views
### Why does the moon face earth with the same side?
I know this is an astronomy question, but no such stackexchange site exists. So here I am, asking about the physics of the solar system. I know that the rotation period of the moon equals its ...
9answers
3k views
### Why are orbits elliptical?
Almost all of the orbits of planets and other celestial bodies are elliptical, not circular. Is this due to gravitational pull by other nearby massive bodies? If this was the case a two body system ...
4answers
501 views
### Why are so many forces explainable using inverse squares when space is three dimensional?
It seems paradoxical that the strength of so many phenomena (Newtonian gravity, Coulomb force) are calculable by the inverse square of distance. However, since volume is determined by three ...
5answers
2k views
### Why do we always see the same side of the Moon?
I am puzzled why we always see the same side of the Moon even though it is rotating around its own axis apart from revolving around the earth. Shouldn't this only be possible if the Moon is not ...
4answers
495 views
### Anti-gravity in an infinite lattice of point masses
Another interesting infinite lattice problem I found while watching a physics documentary. Imagine an infinite square lattice of point masses, subject to gravity. The masses involved are all $m$ and ...
1answer
466 views
### James Webb Space Telescope's halo orbit at Lagrange point L2
The James Webb Space Telescope (JWST) is expected to be launched in 2018 and operate in the L2 vicinity, about 1.5 million km from Earth. It will be placed in a halo orbit around the unstable L2 ...
3answers
804 views
### Significance of the second focus in elliptical orbits
1.In classical mechanics, using Newton's laws, the ellipticity of orbits is derived. It is also said that the center of mass is at one of the foci. 2.Each body will orbit the center of the mass of ...
3answers
2k views
### Why are Saturn's rings so thin?
Take a look at this picture (from APOD http://apod.nasa.gov/apod/ap110308.html): I presume that rocks within rings smash each other. Below the picture there is a note which says that Saturn's rings ...
7answers
2k views
### Would it help if you jump inside a free falling elevator?
Imagine you're trapped inside a free falling elevator. Would you decrease your impact impulse by jumping during the fall? When?
5answers
937 views
### The square in the Newton's law of universal gravitation is really a square?
When I was in the university (in the late 90s, circa 1995) I was told there had been research investigating the $2$ (the square of distance) in the Newton's law of universal gravitation. ...
4answers
568 views
### Is Feynman's explanation of how the moon stays in orbit wrong?
Yesterday, I understood what it means to say that the moon is constantly falling (from a lecture by Richard Feynman). In the picture below there is the moon in green which is orbiting the earth in ...
4answers
968 views
### Acceleration of two falling objects with identical form and air drag but different masses
I have a theoretical question that has been bugging me and my peers for weeks now - and we have yet to settle on a concrete answer. Imagine two balloons, one is filled with air, one with concrete. ...
3answers
456 views
### Gravity in other dimensions than 3 and stable orbits
I have heard from here that stable orbits (ones that require a large amount of force to push it significantly out of its elliptical path) can only exist in three spatial dimensions because gravity ...
4answers
223 views
### Two planets in same orbit - not planets?
Let us pretend for a moment that there are two identical planets that are exactly opposite their star from each other and are the same distance from said star. (This would make them, at all times, ...
1answer
272 views
### Apollo and orbital mechanics: orbital decay if the Trans Earth Injection (TEI) burn had failed
I'm reading Jim Lovell (Apollo 8 and 13) and Jeffrey Kluger's book Apollo 13, which is a fantastic read about a long past era I only have kindergarten memories of. On page 54 there is a paragraph that ...
3answers
429 views
### How do we explain accelerated motion in Newtonian physics and in modern physics?
Maybe my question will seem stupid, but I am not a physicist so I have some problems understanding a classic Newtonian experiment: in the bucket experiment, why does he have to introduce the absolute ...
3answers
1k views
### Why does the moon drift away from earth?
I once saw on TV that the moon is slowly drifting away from the earth, something like an inch a year. In relation to that the day on earth what also increase in time. I wonder why is that?
6answers
1k views
### Is Newton's Law of Gravity consistent with General Relativity?
By 'Newton's Law of Gravity', I am referring to The magnitude of the force of gravity is proportional to the product of the mass of the two objects and inversely proportional to their distance ...
3answers
293 views
### Is it possible that 5 planets can revolve around a single star in a single orbit?
I'm writing a novel and I'm quite confused if this system could be possible in the real universe. Is it possible that a system exist, where 5 identical planets which could be of same characteristics ...
1answer
1k views
### What is the maximum efficiency of a trebuchet?
Using purely gravitational potential energy, what is the highest efficiency one can achieve with a trebuchet counter-weight type of machine? Efficiency defined here as transformation of potential ...
4answers
467 views
### What causes a soccer ball to follow a curved path?
Soccer players kick the ball in a linear kick, though you find it to turn sideways, not even in one direction. Just mid air it changes that curve's direction. Any physical explanation? Maybe this ...
1answer
124 views
### Gravitationally bound systems in an expanding universe
This isn't yet a complete question; rather, I'm looking for a qual-level question and answer describing a gravitationally bound system in an expanding universe. Since it's qual level, this needs a ...
10answers
956 views
### Is the distance between the sun and the earth increasing?
M = mass of the sun, m = mass of the earth, r = distance between the earth and the sun. The sun is converting mass into energy by nuclear fusion. $F = \frac{GMm}{r^2} = \frac{mv^2}{r} \rightarrow r$ ...
7answers
1k views
### How does Newtonian gravitation conflict with special relativity?
In the Wikipedia article Classical Field Theory (Gravitation), it says After Newtonian gravitation was found to be inconsistent with special relativity, . . . I don't see how Newtonian ...
7answers
1k views
### How does the earth move?
My son who is 5 years old is asking me a question about how the earth moves around the sun. What answer should I give him?
4answers
326 views
### What's the exact gravitational force between spherically symmetric masses?
Consider spherical symmetric$^1$ masses of radii $R_1$ and $R_2$, with spherical symmetric density distributions $\rho_1(r_1)$ and $\rho_2(r_2)$, and with a distance between the centers of the spheres ...
6answers
533 views
### When driving uphill why can't I reach a velocity that I would have been able to maintain if I started with it?
Consider these two situations when driving on a long straight road uphill: Starting at a high velocity $v_h$, which the car is able to maintain. Starting at a lower velocity $v_l$, and then trying ...
4answers
120 views
### Integrating radial free fall in Newtonian gravity
I thought this would be a simple question, but I'm having trouble figuring it out. Not a homework assignment btw. I am a physics student and am just genuinely interested in physics problems involving ...
2answers
186 views
### Is the gravitational potential of a planet in orbit always equal to minus the squared velocity?
Say a planet (mass $m$) is orbiting a star (mass $M$) in a perfect circle, so it is in circular motion. $F=ma$ and the gravitational force between two masses $F=\frac{GMm}{r^2}$ so ...
3answers
199 views
### The feasibility of a satellite orbiting at a fixed time
I was speaking with some friends of mine, one of whom was an aerospace engineer. He posited the infeasibility of a hypothetical "Margaritaville Satellite" that orbited earth in such a way that ...
5answers
290 views
### Intuitive explanation of the inverse square power $\frac{1}{r^2}$ in Newton's law of gravity
Is there an intuitive explanation why it is plausible that the gravitational force which acts between two point masses is proportional to the inverse square of the distance $r$ between the masses (and ...
2answers
150 views
### Is it possible to use a balloon to float so high in the atmosphere that you can be gravitationally pulled towards a satellite?
A recent joke on the comedy panel show 8 out of 10 cats prompted this question. I'm pretty sure the answer's no, but hopefully someone can surprise me. If you put a person in a balloon, such that ...
4answers
612 views
### Can gravity be shielded, like electromagnetism?
If I remember well, they said that it can't, but I do not know why. Yes, I meant if gravity can be shielded using something like a Faraday cage (or something else?). Thank you.
1answer
142 views
### How would a large a mass be stable at the Earth Sun L4 or L5 point?
I've heard about the Trojan asteroids and there is the famous idea of putting a space colony at one of these points, but the explanations I see for how something is stable at those points it they are ...
1answer
421 views
### The gravitational potential of ellipsoid
In the literature (Kirchhoff G. - Mechanic (1897), Lecture 18 or Lamb, H. - Hydrodynamics (1879)) one can find the following analytical closed form expression for the gravitational potential of ...
11answers
1k views
### Why do we say that the earth moves around the sun?
In history we are taught that the Catholic Church was wrong, because the Sun does not move around the Earth, instead the Earth moves around the Sun. But then in physics we learn that movement is ...
2answers
199 views
### Why is there this asymmetry between the two foci of an orbital ellipse?
Why does the Earth revolve with the Sun at one of its foci? Does the other focus do nothing? Why is there this asymmetry in our solar system?
4answers
2k views
### The Time That 2 Masses Will Collide Due To Newtonian Gravity
My friend and I have been wracking our heads with this one for the past 3 hours... We have 2 point masses, $m$ and $M$ in a perfect world separated by radius r. Starting from rest, they both begin to ...
2answers
80 views
### Gravitational potential outside Lagrangian points or Lagrange points
The diagram in Why are L4 and L5 lagrangian points stable? shows that the gravitational potential decreases outside the ring of Lagrange points — this image shows it even more clearly: If I ...
2answers
642 views
### Calculating gravity when taking into account the change of gravitational force
This is a problem that has bothered me for a couple of weeks now, and I can't seem to wrap my head around it and understand it. Let's say we have a planet with a mass of m. We also have an object of ...
3answers
270 views
### the collision of Phobos
Mars has two moons: Phobos and Deimos. Both are irregular and are believed to have been captured from the nearby asteroid belt. Phobos always shows the same face to Mars because of tidal forces ...
3answers
230 views
### Differences between the gravitational constants $G$ and $g$?
There's a formula (described by Sir Isaac Newton) that gives the force acting between two objects: $$F = \frac{Gm_1m_2}{r^2}$$ And then there's a formula for weight of an object $$w = mg$$ My ...
http://mathoverflow.net/questions/106388?sort=oldest
|
## Extension of lipschitz functions along a curve
Given a curve $\gamma$ in a Banach space $X$ and a function f defined along the curve s.t. $$\big\Vert f(\gamma(t))-f(\gamma(s))\big\Vert\leq L\big\Vert\gamma(t)-\gamma(s)\big\Vert$$ is it possible to extend the Lipschitz functions to the whole of $X$?
-
That depends more on the set of values $f$ can take than on the domain metric space, so you failed to provide the most relevant information here: what is the range space of $f$? – fedja Sep 5 at 2:43
## 3 Answers
It is not always possible to extend when $X$ is a Banach space. Take a Banach space $Y_n$ which contains an $n$-dimensional subspace $E_n$ such that every projection from $Y_n$ onto $E_n$ has norm at least $C_n$ with $C_n\to \infty$. ($Y_n$ can e.g. be $L_1$ and $E_n$ the span of $n$ IID Gaussian random variables; then $C_n$ is of order $n^{1/2}$.) Let $X_n = Y_n \oplus_2 E_n$. For the curve in $Y_n$ take any curve in the unit sphere of $E_n \oplus \{0\}$ that contains an $\epsilon_n$-net $A_n$ of the unit sphere of $E_n \oplus \{0\}$. For $f_n$ take the natural isometry from $E_n \oplus \{0\}$ onto $\{0\} \oplus E_n$ restricted to the curve. Let $F_n$ be an extension of $f_n$ to a Lipschitz mapping on $X_n$; WLOG $F_n$ maps into $\{0\} \oplus E_n$ since this is a norm one complemented subspace of $X_n$. Let $G_n$ be the positively homogeneous extension of the restriction of $F_n$ to the unit sphere of $X_n$. Then the Lipschitz constant of $G_n$ is at most three times the Lipschitz constant of $F_n$. Compose $G_n$ with the obvious isometry from $\{0\} \oplus E_n$ onto $E_n \oplus \{0\}$. The restriction of this map to $Y_n$ gives a positively homogeneous mapping from $Y_n$ into $E_n$ that is the identity on $A_n$. By the arguments in Johnson, William B.(1-OHSN); Lindenstrauss, Joram(IL-HEBR) Extensions of Lipschitz mappings into a Hilbert space. Conference in modern analysis and probability (New Haven, Conn., 1982), 189–206, Contemp. Math., 26, Amer. Math. Soc., Providence, RI, 1984
we conclude that if $\epsilon_n$ is sufficiently small, there is a projection from $Y_n$ onto $E_n$ whose norm is no worse than something like ten times the Lipschitz constant of $G_n$.
All of this shows that you cannot get Lipschitz extensions with controlled norms. Take an infinite direct sum to get an example where you cannot get any Lipschitz extension.
-
The basic extension result for Lipschitz functions is the theorem of Kirszbaum. This works for functions with values in $\mathbb{R}^n$ and is expounded in Federer's book on Geometric Measure Theory. I think that it even works for functions with values in Hilbert space but can't trace a reference.
-
Kirszbaum's theorem is for mappings from a subset of a Hilbert space into a Hilbert space. – Bill Johnson Sep 5 at 11:48
And his last name is Kirszbraun. – Mateusz Wasilewski Sep 5 at 15:23
@Bill Johnson. Sorry, you are right, of course. The theorem I should have quoted was the McShane-Whitney result that you can extend any Lipschitz function from a subset of a metric space retaining the Lipschitz constant, but for the real-valued case which I assume, by default, is what the questioner intended. Sorry about misspelling the name. Not sure about the etiquette in this forum. Will this mea culpa suffice or should (can) I edit my response? Thanks again. – jbc Sep 5 at 16:14
You can edit your answer and this is one of the arch-typical reasons for being able to do so. – BSteinhurst Sep 5 at 21:18
If you mean a real-valued function $f$, yes, and keeping the same constant $L$, by a simple construction. Check the last mentioned property listed here.
-
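(Presumably the "simple construction" meant here is the McShane–Whitney extension — my guess, not stated in the answer. For real-valued $f$, in the notation of the question, one can take
$$F(x) \;=\; \inf_{t}\Big( f(\gamma(t)) + L\,\lVert x-\gamma(t)\rVert \Big), \qquad x\in X.$$
The infimum is finite, $F$ agrees with $f$ along the curve, and the triangle inequality shows that $F$ is $L$-Lipschitz on all of $X$.)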
http://www.ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Rost_Markus&arg9=Markus_Rost
|
The Book of Involutions
Max-Albert Knus, Eidgenössische Technische Hochschule, Zürich, Switzerland, Alexander Merkurjev, University of California, Los Angeles, CA, Markus Rost, Universität Regensburg, Germany, and Jean-Pierre Tignol, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
Colloquium Publications
1998; 593 pp; hardcover
Volume: 44
ISBN-10: 0-8218-0904-0
ISBN-13: 978-0-8218-0904-4
List Price: US$84
Member Price: US$67.20
Order Code: COLL/44
This monograph is an exposition of the theory of central simple algebras with involution, in relation to linear algebraic groups. It provides the algebra-theoretic foundations for much of the recent work on linear algebraic groups over arbitrary fields. Involutions are viewed as twisted forms of (hermitian) quadrics, leading to new developments on the model of the algebraic theory of quadratic forms. In addition to classical groups, phenomena related to triality are also discussed, as well as groups of type $F_4$ or $G_2$ arising from exceptional Jordan or composition algebras. Several results and notions appear here for the first time, notably the discriminant algebra of an algebra with unitary involution and the algebra-theoretic counterpart to linear groups of type $D_4$. This volume also contains a Bibliography and Index.
Features:
• original material not in print elsewhere
• a comprehensive discussion of algebra-theoretic and group-theoretic aspects
• extensive notes that give historical perspective and a survey on the literature
• rational methods that allow possible generalization to more general base rings
Readership
Graduate students and research mathematicians interested in central simple algebras, linear algebraic groups, nonabelian Galois cohomology, and composition or Jordan algebras.
Reviews
"It is not only the book of involutions', but also the book of the classical groups' ... a very welcome addition to the literature. The topics treated here have been the objects of very intensive research. The specialists felt the need of a reference book, and the beginners of a good introduction. Both of these needs are fulfilled by the `book of involutions' ... a very useful reference source ... these results are not yet published elsewhere ... very well-written ... In addition to being an excellent exposition of many basic results concerning algebras with involution and the classical groups, the book also contains many new ideas and new results, often due to the authors themselves. The topic is a very beautiful and vital one, object of intensive current research. This research is now made easier thanks to the impressive work of the four authors."
-- Zentralblatt MATH
"This volume is a compendious study of algebras with involution, a subject with many facets which becomes particularly interesting for central simple algebras. The book is excellently written, and the chapters on algebraic groups and Galois cohomology alone would make the book an ideal read for aspiring postgraduate students of an algebraic persuasion. In addition, there is plenty of material to enlighten even those of us who already know something about the subject. All in all, this book recommends itself to anyone who wants a thorough reference source, complete with an ample selection of enlightening exercises and historical notes, which deals with exceptional Jordan algebras, Clifford algebras and modules, Tits' algebras and algebraic groups in a modern manner."
-- Bulletin of the London Mathematical Society
"The book under review is an important work which records many of the significant advances in the theory of algebras with involution that have taken place in recent years. Much of the material has not previously appeared in a book before; indeed, there is a substantial amount that has not appeared anywhere before. There is an interesting selection of exercises for each chapter which cover many ancillary results not included in the body of the text. Additionally, there are carefully prepared and highly informative notes at the end of each chapter which give historical commentary and sources for the topics covered. Overall, this book is an outstanding achievement. It will be an indispensable reference for the specialist, and a challenging but highly rewarding introduction for the novice."
-- Mathematical Reviews
• Involutions and Hermitian forms
• Invariants of involutions
• Similitudes
• Algebras of degree four
• Algebras of degree three
• Algebraic groups
• Galois cohomology
• Composition and triality
• Cubic Jordan algebras
• Trialitarian central simple algebras
• Bibliography
• Index
• Notation
http://unapologetic.wordpress.com/2007/12/18/the-orbit-method/?like=1&_wpnonce=d19428bcb3
|
# The Unapologetic Mathematician
## The Orbit Method
Over at Not Even Wrong, there’s a discussion of David Vogan’s talks at Columbia about the “orbit method” or “orbit philosophy”. This is the view that there is — or at least there should be — a correspondence between unitary irreps of a Lie group $G$ and the orbits of a certain action of $G$. As Woit puts it
This is described as a “method” or “philosophy” rather than a theorem because it doesn’t always work, and remains poorly understood in some cases, while at the same time having shown itself to be a powerful source of inspiration in representation theory.
What he doesn’t say in so many words (but which I’m just rude enough to) is that the same statement applies to a lot of theoretical physics. Path integrals are, as they currently stand, prima facie nonsense. In some cases we’ve figured out how to make sense of them, and to give real meaning to the conceptual framework of what should happen. And this isn’t a bad thing. Path integrals have proven to be a powerful source of inspiration, and a lot of actual, solid mathematics and physics has come out of trying to determine what the hell they’re supposed to mean.
Where this becomes a problem is when people take the conceptual framework as literal truth rather than as the inspirational jumping-off point it properly is.
## 1 Comment »
1. Every “method” or “philosophy” that doesn’t work all the time stands as a challenge: we should figure out precisely how much truth it contains, and how much delusion, until we get something that works all the time. I’m a firm believer that mathematics is perfect: in math, whenever things don’t work “as well as expected”, whenever we’re tempted to say “unfortunately…”, it means we’re suffering some confusion – and we need to find that confusion and root it out. When we understand things correctly, there are no flaws.
Of course, this is a never-ending task.
Comment by | December 18, 2007 | Reply
http://mathhelpforum.com/number-theory/173121-sequences-consecutive-non-squarefree-integers.html
|
# Thread:
1. ## Sequences of Consecutive Non-Squarefree Integers?
Hi, I am new to this forum and unfortunately I'm in a number theory class that is kicking my ass. I need some help anywhere I can get it. My homework problem is this:
Prove that for any positive integer k, there exists a sequence of k consecutive positive non-squarefree (squareful?) integers.
So I found this one algorithm, which is kind of similar to the Chinese Remainder Theorem, that says let a1=1, a2=3, a3=5, and list the primes like that. Pick a k.
Then let n1=(a1)^2
Add (a1)^2 repeatedly until you get a number n2 such that
(n2)+1 is congruent to 0 (mod (a2)^2)
Then to this number, add the product ((a1)^2)((a2)^2) repeatedly until you get n3 such that
(n3)+2 is congruent to 0 (mod(a3)^2)
and so on, until you reach nk such that
(nk)+(k-1) is congruent to 0 (mod (ak)^2)
Now, one thing I don't understand is why the numbers in between nk and (nk)+(k-1) are squareful. I know that they are because I've played around with this on the calculator, but I can't show why. For example, I've gotten up to k=4: 14749, 14750, 14751, and 14752. I can see why 14749 and 14752 are squareful, but I don't know why 14750 and 14751 are squareful.
Also, if there is another solution out there, that would be great too!!! Thank you!
2. You probably know that 1 is not prime, right?
I didn't really pay attention to the algorithm, but you're right in saying it's the Chinese Remainder Theorem. Just solve the system
$x\equiv 0 \mod 2^2$
$x+1\equiv 0 \mod 3^2$
...
$x+n\equiv 0 \mod p_{n+1}^2$

where $p_1=2,\ p_2=3,\ p_3=5,\dots$ are the primes. Any positive solution $x$ (which exists by the CRT, since the moduli are pairwise coprime) satisfies $p_{i+1}^2 \mid x+i$ for every $i=0,\dots,n$, so $x, x+1, \dots, x+n$ are $n+1$ consecutive non-squarefree integers; taking $n=k-1$ gives your run of length $k$.
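For what it's worth, here is a small Python sketch of this construction (function names and structure are my own); it solves the system above with the Chinese Remainder Theorem and prints a run of $k$ consecutive non-squarefree integers. It uses `pow(a, -1, m)` for modular inverses, so it needs Python 3.8+.

```python
def first_primes(k):
    """Return the first k primes (trial division; fine for small k)."""
    primes = []
    n = 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise coprime moduli m_i."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        t = ((r - x) * pow(M, -1, m)) % m   # lift the solution mod M to mod M*m
        x, M = x + M * t, M * m
    return x % M

def consecutive_nonsquarefree(k):
    """A positive x with p_i^2 | x + i for i = 0..k-1, so that
    x, x+1, ..., x+k-1 are all divisible by a square > 1."""
    moduli = [p * p for p in first_primes(k)]
    residues = [(-i) % m for i, m in enumerate(moduli)]
    x = crt(residues, moduli)
    return x if x > 0 else moduli[0]        # avoid the trivial solution x = 0 when k = 1

k = 4
x = consecutive_nonsquarefree(k)
print([x + i for i in range(k)])            # each divisible by 4, 9, 25, 49 in turn
```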
http://mathoverflow.net/questions/18034/can-we-decompose-diffmxn/52984
|
Can we decompose Diff(MxN)?
If you have two manifolds $M^m$ and $N^n$, how does one (or can one) decompose the diffeomorphism group $\text{Diff}(M\times N)$ in terms of $\text{Diff}(M)$ and $\text{Diff}(N)$? Is there anything we can say about the structure of this group? I have looked in some of my textbooks, but I haven't found any actual discussion of the manifolds $\text{Diff}(M)$ of a manifold $M$ other than to say they are "poorly understood."
Can anyone point me to a source that discusses the manifold $\text{Diff}(M)$? My background is in physics, and understanding the structure of these kinds of groups is important for some of the things we do, but I haven't seen any discussion of this in my differential geometry textbooks.
-
As a general comment, there's no good reason to expect that the structure of Aut(A x B) is determined by the structure of Aut(A) and Aut(B) in an arbitrary category, where x denotes the product. The product, by definition, only lets you decompose functions into it; it doesn't say anything about how to decompose functions out of it. (This is essentially the same reason that while the maps A -> 1 are trivial by definition, where 1 is the terminal object, the maps 1 -> A can be interesting.) – Qiaochu Yuan Mar 13 2010 at 6:57
(Even in the special case where x is a biproduct, one still has to consider Hom(A, B) and Hom(B, A).) – Qiaochu Yuan Mar 13 2010 at 7:18
From Ryan's comment you can see that just understanding the homotopy type of such a space is not well understood. – Sean Tilson Mar 29 2010 at 4:46
To expand on Qiaochu's comment: in the category of groups, let $A1=B1$ be trivial groups , and let $A2=B2$ be groups of order $2$ . All four of these groups have trivial automorphism groups, but $\operatorname{Aut}(A_1 \times B_1)$ is trivial whereas $\operatorname{Aut}(A_2 \times B_2) \cong \operatorname{GL}_2(\mathbb{F}_2) \cong S_3$. – Pete L. Clark Aug 3 2010 at 1:50
3 Answers
The homotopy-type of the group of diffeomorphisms of a manifold are fairly well understood in dimensions $1$, $2$ and $3$. For a sketch of what's known see Hatcher's "Linearization in three-dimensional topology," in: Proc. Int. Congress of. Math., Helsinki, Vol. I (1978), pp. 463-468.
Similarly, the finite subgroups of $Diff(M)$ are well understood in dimensions $3$ and lower. Hatcher's paper is a good reference for that as well, when combined with a few semi-recent theorems.
If you're interested in general subgroups of $Diff(M)$, there's still a fair bit of discussion going on just for subgroups of $Diff(S^1)$, as it contains a pretty rich collection of subgroups.
In high dimensions there's not much known. For example, nobody knows if $Diff(S^4)$ has any more than two path-components. See for example this little blurb. Some of the rational homotopy groups of $Diff(S^n)$ are known for $n$ large enough.
I wrote a survey on what's known about the spaces $Diff(S^n)$, and spaces of smooth embeddings of one sphere in another $Emb(S^j,S^n)$ a few years ago, here.
Getting back to your earlier question, groups of diffeomorphisms of connect-sums can be pretty complicated objects. In dimension $2$ it's already interesting. For example, $Diff(S^1 \times S^1)$ has the homotopy-type of $S^1 \times S^1 \times GL_2(\mathbb Z)$. Diff of a connect-sum of $g$ copies of $S^1 \times S^1$ has the homotopy-type of a discrete group provided $g>1$; this discrete group is called the mapping class group of a surface of genus $g$. It's a pretty complicated and heavily-studied object. In the genus $g=2$ case this group is fairly similar to the braid group on $6$ strands.
In dimension $3$, it's an old theorem of Hatcher's that $Diff(S^1 \times S^2)$ doesn't have the homotopy-type of a finite-dimensional CW-complex, as it has the homotopy-type of $O_2 \times O_3 \times \Omega SO_3$. I've been spending a lot of time recently, studying the homotopy-type of $Diff(M)$ when $M$ is the complement of a knot in $S^3$, and knot complements in general. The paper of mine I linked to goes into some detail on this.
From the perspective of differential geometry, the homotopy-type of $Diff(S^n)$ is rather interesting as it's closely related to the homotopy-type of the space of "round Riemann metrics" on $S^n$. This is a classic construction; it is outlined in my paper, but it goes like this: $Diff(S^n)$ has the homotopy type of a product $O_{n+1} \times Diff(D^n)$ where the diffeomorphisms of $D^n$ are required to be the identity on the boundary -- this is a local linearization argument. $Diff(D^n)$ has the homotopy-type of the space of round metrics on $S^n$. The idea is that any two round metrics are related by a diffeomorphism of $S^n$. So $Diff(S^n)$ acts transitively on the space of round metrics (with a fixed volume, say), and the stabilizer of a round metric is $O_{n+1}$ basically by the definition of a round metric. Kind of silly but fundamental.
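(Putting the last paragraph into a formula — my paraphrase, not part of the answer: the orbit map of the transitive action, with stabilizer $O_{n+1}$, gives, up to homotopy, a fibration sequence
$$O_{n+1} \longrightarrow \mathrm{Diff}(S^n) \longrightarrow \mathcal{R}_{\mathrm{round}}(S^n),$$
so the space $\mathcal{R}_{\mathrm{round}}(S^n)$ of round metrics of fixed volume is a model for $\mathrm{Diff}(S^n)/O_{n+1}$, which by the splitting $\mathrm{Diff}(S^n)\simeq O_{n+1}\times\mathrm{Diff}(D^n)$ has the homotopy type of $\mathrm{Diff}(D^n)$.)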
-
When $N$ is the circle there's a sort of answer. In fact there's a whole chapter in the book
Burghelea, Dan; Lashof, Richard; Rothenberg, Melvin: Groups of automorphisms of manifolds. With an appendix ("The topological category'') by E. Pedersen. Lecture Notes in Mathematics, Vol. 473. Springer-Verlag, Berlin-New York, 1975.
dedicated to showing that after looping once, the space $\text{Diff}(M\times S^1)$ splits up to homotopy as (the loops of) $$\text{Diff}(M\times I) \times B\text{Diff}(M\times I) \times \eta(M) ,$$ where the middle term is a non-connective one-fold delooping of $\text{Diff}(M\times I)$ and $\eta(M)$ is the mysterious "nil-term" (when writing $\text{Diff}(W)$ for a manifold $W$ with boundary, the convention is that the diffeomorphisms are to preserve the boundary pointwise). In particular, one gets a decomposition on the level of homotopy groups. (This theorem is an analog of the Bass-Heller-Swan type result which says $K(R[t]) \simeq K(R) \times BK(R) \times \eta(R)$.)
One can say something about the homotopy type the nil-term in the concordance stable range, roughly, $\dim M/3$.
Furthermore, $\text{Diff}(M\times I)$ sits in a fibration sequence $$\text{Diff}(M\times I) \to C(M) \to \text{Diff}(M)$$ where $C(M)$ is the topological group of concordances of $M$. After inverting 2, this sequence is homotopically trivial and $\pi_k(\text{Diff}(M\times I))$ can be identified with the invariant part of the $\Bbb Z_2$-action on $\pi_k(C(M))$ induced by conjugating a concordance with the diffeomorphism which turns $M\times I$ upside down ($(x,t) \mapsto (x,1-t)$). Lastly, $\pi_k(C(M))$ can be studied via algebraic $K$-theory methods when $k$ is within the concordance stable range.
-
Hi John. This result about the fibration being trivial in the stable range after inverting 2, what's the reference for that? Is that Igusa? – Ryan Budney Jan 24 2011 at 2:10
No, It's not Igusa. I do remember Kiyoshi having attributed it to Hatcher. It might be in Hatcher's paper from the 1976 Stanford conference. I realize now that might also have to loop the fibration once to get the correct statement. – John Klein Jan 24 2011 at 4:44
I'm not an expert, but my impression was that you can't reasonably expect anything like a decomposition in general. Here is a big list of references on automorphisms of manifolds, compiled by Andre Henriques. The wikipedia page has a brief discussion of diffeomorphism groups, and it requires somewhat less background.
-
Your 1st link is broken. – Ryan Budney Mar 13 2010 at 6:52
How odd. I just tried it on 3 different browsers and 2 different computers and it worked each time. I'll try to put it in a comment. web.archive.org/web/20070208084859/http%3A//… – S. Carnahan♦ Mar 13 2010 at 7:21
I get a "403 Forbidden" error message. It appears I'm banned from that webpage. Strange. – Ryan Budney Mar 13 2010 at 7:23
Perhaps the administrators of archive.org are hockey fans... – S. Carnahan♦ Mar 13 2010 at 7:53
André has it at his new home: staff.science.uu.nl/~henri105/talbotrefs.html – Ben Wieland May 17 2010 at 21:50
http://mathoverflow.net/questions/50291/degenerate-case-of-linear-programming-duality
|
## Degenerate case of linear programming duality?
Let's say we have a maximization linear program that looks like this: maximize $\vec{c}\vec{x}$, subject to $A\vec{x} \leq 0$, $\vec{x} \geq 0$. If we take the dual, we have "minimize $0\vec{y}$, subject to $\vec{y}A\geq\vec{c}, \vec{y}\geq 0$". I'm particularly bothered by the "minimize $0$" part of the dual program - but does the duality theorem still hold - that is: is it true that if there is a $\vec{y}$ that is feasible for the dual program, then for all $\vec{x}$ that is feasible for the primal program, $\vec{c}\vec{x} \leq 0$?
Thanks!
-
## 2 Answers
Yes, the duality theorem holds. "Minimize 0" makes your life much easier, because you know exactly what the optimal value of the dual program is...
-
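(To spell out the particular claim in the question: it is just weak duality. If $\vec{y}$ is feasible for the dual, i.e. $\vec{y}\ge 0$ and $\vec{y}A \ge \vec{c}$, and $\vec{x}$ is feasible for the primal, i.e. $\vec{x}\ge 0$ and $A\vec{x}\le 0$, then
$$\vec{c}\,\vec{x} \;\le\; (\vec{y}A)\,\vec{x} \;=\; \vec{y}\,(A\vec{x}) \;\le\; 0,$$
where the first inequality uses $\vec{y}A\ge\vec{c}$ together with $\vec{x}\ge 0$, and the second uses $\vec{y}\ge 0$ together with $A\vec{x}\le 0$.)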
Igor is correct, and more can be said about this primal-dual pair of linear programs: because the primal maximization problem is homogeneous (i.e. zero constant terms in the constraints), the common optimal objective value for both problems is either zero or $+\infty$:
1. If there exists a feasible solution $\vec{x}$ with a positive objective value, i.e. with $A \vec{x} \le 0$ and $\vec{c}\vec{x} > 0$, then this solution can be scaled by any positive constant $\lambda > 0$ (because $A (\lambda \vec{x}) \le 0$), which shows that the problem is unbounded ($\vec{c}(\lambda \vec{x})$ tends to $+\infty$ as $\lambda \to +\infty$).
2. If no feasible solution with positive objective value exists, then $\vec{x}=0$ is an optimal solution, and the optimal objective value is zero.
Using duality theory, one can check that the first case corresponds to an infeasible dual problem (no $\vec{y}\ge 0$ such that $\vec{y} A \ge \vec{c}$), while the second situation happens as soon as the dual problem admits a feasible solution.
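A quick numerical illustration of this dichotomy (the matrix and cost vector below are made-up, and `scipy` is assumed to be available):

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data for the homogeneous primal:  max c.x  s.t.  A x <= 0, x >= 0.
A = np.array([[1.0, -2.0],
              [-3.0, 1.0]])
c = np.array([1.0, 1.0])

# scipy's linprog minimizes, so maximize c.x by passing -c.
primal = linprog(-c, A_ub=A, b_ub=np.zeros(2), bounds=[(0, None)] * 2,
                 method="highs")
print("primal status:", primal.status)   # 3 = unbounded (case 1 above)

# Dual feasibility: y >= 0 with y A >= c, i.e. -A^T y <= -c (objective irrelevant).
dual = linprog(np.zeros(2), A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2,
               method="highs")
print("dual status:", dual.status)       # 2 = infeasible, as duality predicts
```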
-
http://en.wikipedia.org/wiki/Convection%e2%80%93diffusion_equation
|
# Convection–diffusion equation
The convection–diffusion equation is a combination of the diffusion and convection (advection) equations, and describes physical phenomena where particles, energy, or other physical quantities are transferred inside a physical system due to two processes: diffusion and convection. Depending on context, the same equation can be called the advection–diffusion equation, drift–diffusion equation, Smoluchowski equation (after Marian Smoluchowski),[1] or (generic) scalar transport equation.[2]
## Equation
### General
The general equation is[3][4]
$\frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c) - \nabla \cdot (\vec{v} c) + R$
where
• c is the variable of interest (species concentration for mass transfer, temperature for heat transfer),
• D is the diffusivity (also called diffusion coefficient), such as mass diffusivity for particle motion or thermal diffusivity for heat transport,
• $\vec{v}$ is the average velocity that the quantity is moving. For example, in advection, c might be the concentration of salt in a river, and then $\vec{v}$ would be the velocity of the water flow. As another example, c might be the concentration of small bubbles in a calm lake, and then $\vec{v}$ would be the average velocity of bubbles rising towards the surface by buoyancy (see below).
• R describes "sources" or "sinks" of the quantity c. For example, for a chemical species, R>0 means that a chemical reaction is creating more of the species, and R<0 means that a chemical reaction is destroying the species. For heat transport, R>0 might occur if thermal energy is being generated by friction.
• $\nabla$ represents gradient and $\nabla\cdot$ represents divergence.
### Common simplifications
In a common situation, the diffusion coefficient is constant, there are no sources or sinks, and the velocity field describes an incompressible flow (i.e., it has zero divergence). Then the formula simplifies to:[5][6][7]
$\frac{\partial c}{\partial t} = D \nabla^2 c - \vec{v} \cdot \nabla c.$
In this form, the convection–diffusion equation combines both parabolic and hyperbolic partial differential equations.
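As a concrete illustration of this simplified equation, here is a minimal 1-D finite-difference sketch (explicit Euler in time, central differences for diffusion, first-order upwind for advection; the grid, parameter values, and periodic boundary condition are assumptions made for the example, not taken from the article):

```python
import numpy as np

# Solve  dc/dt = D d2c/dx2 - v dc/dx  on a periodic 1-D domain.
L, N = 1.0, 200                     # domain length and number of cells (assumed)
dx = L / N
D, v = 1e-3, 0.5                    # constant diffusivity and velocity (assumed)
dt = 0.4 * min(dx * dx / (2 * D), dx / abs(v))   # respect both stability limits

x = np.linspace(0.0, L, N, endpoint=False)
c = np.exp(-((x - 0.3) ** 2) / 0.005)            # initial concentration blob

def step(c):
    # central difference for the diffusion term
    diffusion = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx ** 2
    # first-order upwind difference for the advection term (valid for v > 0)
    advection = -v * (c - np.roll(c, 1)) / dx
    return c + dt * (diffusion + advection)

for _ in range(2000):
    c = step(c)

# With periodic boundaries and no sources, the total "mass" is conserved.
print("total mass:", c.sum() * dx)
```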
### Stationary version
The stationary convection–diffusion equation describes the steady-state behavior of a convective-diffusive system. In steady-state, $\partial c/\partial t = 0$, so the formula is:
$0 = \nabla \cdot (D \nabla c) - \nabla \cdot (\vec{v} c) + R.$
## Derivation
The convection–diffusion equation can be derived in a straightforward way[4] from the continuity equation, which states that the rate of change for a scalar quantity in a differential control volume is given by flow and diffusion into and out of that part of the system along with any generation or consumption inside the control volume:
$\frac{\partial c}{\partial t} + \nabla\cdot\vec{j} = s,$
where $\vec{j}$ is the total flux and s is a net volumetric source for c. There are two sources of flux in this situation. First, diffusive flux arises due to diffusion. This is typically approximated by Fick's first law:
$\vec{j}_{\text{diffusion}} = -D \, \nabla c$
i.e., the flux of the diffusing material (relative to the bulk motion) in any part of the system is proportional to the local concentration gradient. Second, when there is overall convection or flow, there is an associated flux called advective flux:
$\vec{j}_{\text{advective}} = \vec{v} \, c$
The total flux (in a stationary coordinate system) is given by the sum of these two:
$\vec{j} = \vec{j}_{\text{diffusion}} + \vec{j}_{\text{advective}} = -D \, \nabla c + \vec{v} \, c.$
Plugging into the continuity equation:
$\frac{\partial c}{\partial t} + \nabla\cdot \left(-D\,\nabla c + \vec{v}\, c\right) = s.$
## Complex mixing phenomena
In general, D, $\vec{v}$, and s may vary with space and time. In cases in which they depend on concentration as well, the equation becomes nonlinear, giving rise to many distinctive mixing phenomena such as Rayleigh–Bénard convection when $\vec{v}$ depends on temperature in the heat transfer formulation and reaction-diffusion pattern formation when s depends on concentration in the mass transfer formulation.
## Velocity in response to a force
In some cases, the average velocity field $\vec{v}$ exists because of a force; for example, the equation might describe the flow of ions dissolved in a liquid, with an electric field pulling the ions in some direction (as in gel electrophoresis). In this situation, it is usually called the drift-diffusion equation or the Smoluchowski equation,[1] after Marian Smoluchowski who described it in 1915[8] (not to be confused with the Einstein–Smoluchowski relation or Smoluchowski coagulation equation).
Typically, the average velocity is directly proportional to the applied force, giving the equation:[9][10]
$\frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c) - \nabla \cdot (\zeta^{-1} \vec{F} c) + R$
where $\vec{F}$ is the force, and $\zeta$ characterizes the friction or viscous drag. (The inverse $\zeta^{-1}$ is called mobility.)
### Derivation of Einstein relation
Main article: Einstein relation (kinetic theory)
When the force is associated with a potential energy $\vec{F} = -\nabla U$ (see conservative force), a steady-state solution to the above equation (i.e. 0 = R = ∂c/∂t) is:
$c \propto \exp( - D^{-1} \zeta^{-1} U)$
(assuming D and $\zeta$ are constant). In other words, there are more particles where the energy is lower. This concentration profile is expected to agree with the Boltzmann distribution (more precisely, the Gibbs measure). From this assumption, the Einstein relation can be proven: $D \zeta = k_B T$.[10]
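A quick symbolic check of this steady state (a sketch with sympy; constant D and ζ, an arbitrary potential U(x), one dimension) confirms that the total flux vanishes:

```python
import sympy as sp

x = sp.symbols('x')
D, zeta = sp.symbols('D zeta', positive=True)
U = sp.Function('U')(x)

c = sp.exp(-U / (D * zeta))                 # proposed steady-state profile
F = -sp.diff(U, x)                          # conservative force F = -dU/dx
flux = -D * sp.diff(c, x) + (F / zeta) * c  # diffusive plus drift flux
print(sp.simplify(flux))                    # prints 0
```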
## As a stochastic differential equation
The convection–diffusion equation (with no sources or drains, R=0) can be viewed as a stochastic differential equation, describing random motion with diffusivity D and bias $\vec{v}$. For example, the equation can describe the Brownian motion of a single particle, where the variable c describes the probability distribution for the particle to be in a given position at a given time. The reason the equation can be used that way is because there is no mathematical difference between the probability distribution of a single particle, and the concentration profile of a collection of infinitely many particles (as long as the particles do not interact with each other).
The Langevin equation describes advection, diffusion, and other phenomena in an explicitly stochastic way. One of the simplest forms of the Langevin equation is when its "noise term" is Gaussian; in this case, the Langevin equation is exactly equivalent to the convection–diffusion equation.[10] However, the Langevin equation is more general.[10]
## Similar equations in other contexts
The convection–diffusion equation is a relatively simple equation describing flows, or alternatively, describing a stochastically-changing system. Therefore, the same or similar equation arises in many contexts unrelated to flows through space.
• It is formally identical to the Fokker–Planck equation for the velocity of a particle.
• It is closely related to the Black–Scholes equation and other equations in financial mathematics.
• It is closely related to the Navier–Stokes equations, because the flow of momentum in a fluid is mathematically similar to the flow of mass or energy. The correspondence is clearest in the case of an incompressible Newtonian fluid, in which case the Navier–Stokes equation is:
$\frac{\partial \mathbf{M}}{\partial t} = \frac{\mu}{\rho} \nabla^2 \mathbf{M} -\mathbf{v} \cdot \nabla \mathbf{M} + (\mathbf{f}-\nabla \text{P})$
where M is the momentum of the fluid (per unit volume) at each point (equal to the density $\rho$ multiplied by the velocity v), $\mu$ is viscosity, P is fluid pressure, and f is any other body force such as gravity. In this equation, the term on the left-hand side describes the change in momentum at a given point; the first term on the right describes viscosity, which is really the diffusion of momentum; the second term on the right describes the advective flow of momentum; and the last two terms on the right describe the external and internal forces which can act as sources or sinks of momentum.
## In semiconductor physics
In semiconductor physics, this equation is called the drift–diffusion equation. The word "drift" is related to drift current and drift velocity. The equation is normally written:
$\mathbf{J}_n/(-q) = - D_n \nabla n - n \mu_n \mathbf{E}$
$\mathbf{J}_p/q = - D_p \nabla p + p \mu_p \mathbf{E}$
$\frac{\partial n}{\partial t} = -\nabla \cdot \left(\mathbf{J}_n/(-q)\right) + R$
$\frac{\partial p}{\partial t} = -\nabla \cdot \left(\mathbf{J}_p/q\right) + R$
where
• n and p are the concentrations (densities) of electrons and holes, respectively,
• q>0 is the elementary charge,
• Jn and Jp are the electric currents due to electrons and holes respectively,
• Jn/-q and Jp/q are the corresponding "particle currents" of electrons and holes respectively,
• R represents carrier generation and recombination (R>0 for generation of electron-hole pairs, R<0 for recombination.)
• E is the electric field vector
• $\mu_n$ and $\mu_p$ are electron and hole mobility.
The diffusion coefficient and mobility are related by the Einstein relation as above:
$D_n = \mu_n k_B T/q, \quad D_p = \mu_p k_B T/q,$
where kB is Boltzmann constant and T is absolute temperature. The drift current and diffusion current refer separately to the two terms in the expressions for J, i.e.:
$\mathbf{J}_{n,\text{drift}}/(-q) = - n \mu_n \mathbf{E}, \qquad \mathbf{J}_{p,\text{drift}}/q = p \mu_p \mathbf{E}$
$\mathbf{J}_{n,\text{diffusion}}/(-q) = - D_n \nabla n, \qquad \mathbf{J}_{p,\text{diffusion}}/q = - D_p \nabla p.$
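For a rough sense of scale, the Einstein relation can be evaluated numerically; the snippet below is only illustrative (the mobility values are typical silicon-like numbers, not taken from this article).

```python
# Diffusivity from mobility via D = mu * kB * T / q.
kB = 1.380649e-23        # J/K
q = 1.602176634e-19      # C
T = 300.0                # K
Vt = kB * T / q          # thermal voltage, about 0.0259 V at 300 K

mu_n, mu_p = 1400.0, 450.0        # cm^2/(V s), illustrative values
D_n, D_p = mu_n * Vt, mu_p * Vt   # cm^2/s
print(f"Vt = {Vt:.4f} V, D_n = {D_n:.1f} cm^2/s, D_p = {D_p:.1f} cm^2/s")
```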
## References
1. Chandrasekhar (1943). Rev. Mod. Phys. 15: 1. Bibcode:1943RvMP...15....1C. doi:10.1103/RevModPhys.15.1. See equation (312).
2. Computational Fluid Dynamics in Industrial Combustion by Baukal and Gershtein, p. 67, Google Books link.
3. Introduction to Climate Modelling, by Thomas Stocker, p. 57, Google Books link.
4. Advective Diffusion Equation, lecture notes by Scott A. Socolofsky and Gerhard H. Jirka, web link.
5. Bejan A (2004). Convection Heat Transfer.
6. Bird, Stewart, Lightfoot (1960). Transport Phenomena.
7. Probstein R (1994). Physicochemical Hydrodynamics.
8. M. v. Smoluchowski,
9. The Theory of Polymer Dynamics by Doi and Edwards, pp. 46-52, Google Books link.
• Granville Sewell, The Numerical Solution of Ordinary and Partial Differential Equations, Academic Press (1988). ISBN 0-12-637475-9
http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Laplace_transform
# Laplace transform
The Laplace transform is a widely used integral transform with many applications in physics and engineering. Denoted $\displaystyle\mathcal{L} \left\{f(t)\right\}$, it is a linear operator of a function f(t) with a real argument t (t ≥ 0) that transforms it to a function F(s) with a complex argument s. This transformation is essentially bijective for the majority of practical uses; the respective pairs of f(t) and F(s) are matched in tables. The Laplace transform has the useful property that many relationships and operations over the originals f(t) correspond to simpler relationships and operations over the images F(s).1 It is named after Pierre-Simon Laplace, who introduced the transform in his work on probability theory.
The Laplace transform is related to the Fourier transform, but whereas the Fourier transform expresses a function or signal as a series of modes of vibration (frequencies), the Laplace transform resolves a function into its moments. Like the Fourier transform, the Laplace transform is used for solving differential and integral equations. In physics and engineering it is used for analysis of linear time-invariant systems such as electrical circuits, harmonic oscillators, optical devices, and mechanical systems. In such analyses, the Laplace transform is often interpreted as a transformation from the time-domain, in which inputs and outputs are functions of time, to the frequency-domain, where the same inputs and outputs are functions of complex angular frequency, in radians per unit time. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or in synthesizing a new system based on a set of specifications.
## History
The Laplace transform is named after mathematician and astronomer Pierre-Simon Laplace, who used a similar transform (now called z transform) in his work on probability theory. The current widespread use of the transform came about soon after World War II although it had been used in the 19th century by Abel, Lerch, Heaviside, and Bromwich. The older history of similar transforms is as follows. From 1744, Leonhard Euler investigated integrals of the form
$z = \int X(x) e^{ax}\, dx \quad\text{ and }\quad z = \int X(x) x^A \, dx$
as solutions of differential equations but did not pursue the matter very far.2 Joseph Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form
$\int X(x) e^{- a x } a^x\, dx,$
which some modern historians have interpreted within modern Laplace transform theory.[3][4][clarification needed]
These types of integrals seem first to have attracted Laplace's attention in 1782 where he was following in the spirit of Euler in using the integrals themselves as solutions of equations.5 However, in 1785, Laplace took the critical step forward when, rather than just looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. He used an integral of the form:
$\int x^s \phi (x)\, dx,$
akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power.6
Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space as the solutions were periodic. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space.7
## Formal definition
The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:
$F(s) = \mathcal{L} \left\{f(t)\right\}(s)=\int_0^{\infty} e^{-st} f(t) \,dt.$
The parameter s is a complex number:
$s = \sigma + i \omega, \,$ with real numbers σ and ω.
The meaning of the integral depends on types of functions of interest. A necessary condition for existence of the integral is that f must be locally integrable on [0,∞). For locally integrable functions that decay at infinity or are of exponential type, the integral can be understood as a (proper) Lebesgue integral. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at ∞. Still more generally, the integral can be understood in a weak sense, and this is dealt with below.
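When the integral converges, it can also be evaluated numerically; here is a small sketch (using mpmath, not part of the article) that checks the definition against the known transform of f(t) = e^{-t}:

```python
import mpmath as mp

# Numerically evaluate F(s) = int_0^inf e^{-s t} f(t) dt for f(t) = e^{-t}
# and compare with the known transform 1/(s + 1).
f = lambda t: mp.exp(-t)
s = mp.mpc(2, 3)                                     # any s with Re(s) > -1

F_num = mp.quad(lambda t: mp.exp(-s * t) * f(t), [0, mp.inf])
print(F_num, 1 / (s + 1))                            # the two values agree
```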
One can define the Laplace transform of a finite Borel measure μ by the Lebesgue integral[8]
$(\mathcal{L}\mu)(s) = \int_{[0,\infty)} e^{-st}d\mu(t).$
An important special case is where μ is a probability measure or, even more specifically, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case, to avoid potential confusion, one often writes
$(\mathcal{L}f)(s) = \int_{0^-}^\infty e^{-st}f(t)\,dt$
where the lower limit of 0− is shorthand notation for
$\lim_{\varepsilon\downarrow 0}\int_{-\varepsilon}^\infty.$
This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform.
### Probability theory
In pure and applied probability, the Laplace transform is defined as an expected value. If X is a random variable with probability density function f, then the Laplace transform of f is given by the expectation
$(\mathcal{L}f)(s) = E\left[e^{-sX} \right] \,$
By abuse of language, this is referred to as the Laplace transform of the random variable X itself. Replacing s by −t gives the moment generating function of X. The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory.
Of particular use is the ability to recover the probability distribution function of a random variable X by means of the Laplace transform as follows
$F_X(x) = \mathcal{L}^{-1}_s \left\lbrace \frac{E\left[e^{-sX}\right]}{s}\right\rbrace (x) = \mathcal{L}^{-1}_s \left\lbrace \frac{\left(\mathcal{L} f\right)(s)}{s} \right\rbrace (x) \,$
### Bilateral Laplace transform
Main article: Two-sided Laplace transform
When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is normally intended. The Laplace transform can be alternatively defined as the bilateral Laplace transform or two-sided Laplace transform by extending the limits of integration to be the entire real axis. If that is done the common unilateral transform simply becomes a special case of the bilateral transform where the definition of the function being transformed is multiplied by the Heaviside step function.
The bilateral Laplace transform is defined as follows:
$F(s) = \mathcal{L}\left\{f(t)\right\} =\int_{-\infty}^{\infty} e^{-st} f(t)\,dt.$
### Inverse Laplace transform
For more details on this topic, see Inverse Laplace transform.
The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier-Mellin integral, and Mellin's inverse formula):
$f(t) = \mathcal{L}^{-1} \{F(s)\} = \frac{1}{2 \pi i} \lim_{T\to\infty}\int_{ \gamma - i T}^{ \gamma + i T} e^{st} F(s)\,ds,$
where γ is a real number so that the contour path of integration is in the region of convergence of F(s). An alternative formula for the inverse Laplace transform is given by Post's inversion formula.
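For cases where no closed form is at hand, the Bromwich integral can be approximated numerically; the sketch below assumes a recent mpmath (which provides invertlaplace with a Talbot contour) and recovers sin(t) from 1/(s²+1):

```python
import mpmath as mp

# Numerical inverse Laplace transform via a deformed Bromwich (Talbot) contour.
F = lambda s: 1 / (s**2 + 1)          # transform of sin(t)
for t in (0.5, 1.0, 2.0):
    print(t, mp.invertlaplace(F, t, method='talbot'), mp.sin(t))
```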
## Region of convergence
If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit
$\lim_{R\to\infty}\int_0^R f(t)e^{-st}\,dt$
exists. The Laplace transform converges absolutely if the integral
$\int_0^\infty |f(t)e^{-st}|\,dt$
exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former instead of the latter sense.
The set of values for which F(s) converges absolutely is either of the form Re(s) > a or else Re(s) ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t).9 Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, and possibly including the lines Re(s) = a or Re(s) = b.10 The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence.
Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s0, then it automatically converges for all s with Re(s) > Re(s0). Therefore the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a. In the region of convergence Re(s) > Re(s0), the Laplace transform of f can be expressed by integrating by parts as the integral
$F(s) = (s-s_0)\int_0^\infty e^{-(s-s_0)t}\beta(t)\,dt,\quad \beta(u)=\int_0^u e^{-s_0t}f(t)\,dt.$
That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.
A variety of theorems, in the form of Paley–Wiener theorems, exist concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence.
In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region Re(s) ≥ 0. As a result, LTI systems are stable provided the poles of the Laplace transform of the impulse response function have negative real part.
## Properties and theorems
The Laplace transform has a number of properties that make it useful for analyzing linear dynamical systems. The most significant advantage is that differentiation and integration become multiplication and division, respectively, by s (similarly to logarithms changing multiplication of numbers to addition of their logarithms). Because of this property, the Laplace variable s is also known as operator variable in the L domain: either derivative operator or (for s−1) integration operator. The transform turns integral equations and differential equations to polynomial equations, which are much easier to solve. Once solved, use of the inverse Laplace transform reverts to the time domain.
Given the functions f(t) and g(t), and their respective Laplace transforms F(s) and G(s):
$f(t) = \mathcal{L}^{-1} \{ F(s) \}$
$g(t) = \mathcal{L}^{-1} \{ G(s) \}$
the following table is a list of properties of the unilateral Laplace transform:[11]

**Properties of the unilateral Laplace transform**

| Property | Time domain | $s$ domain | Comment |
| --- | --- | --- | --- |
| Linearity | $a f(t) + b g(t)$ | $a F(s) + b G(s)$ | Can be proved using basic rules of integration. |
| Frequency differentiation | $t f(t)$ | $-F'(s)$ | $F'$ is the first derivative of $F$. |
| Frequency differentiation | $t^{n} f(t)$ | $(-1)^{n} F^{(n)}(s)$ | More general form, $n$th derivative of $F(s)$. |
| Differentiation | $f'(t)$ | $s F(s) - f(0)$ | $f$ is assumed to be a differentiable function, and its derivative is assumed to be of exponential type. This can then be obtained by integration by parts. |
| Second Differentiation | $f''(t)$ | $s^2 F(s) - s f(0) - f'(0)$ | $f$ is assumed twice differentiable and the second derivative to be of exponential type. Follows by applying the Differentiation property to $f'(t)$. |
| General Differentiation | $f^{(n)}(t)$ | $s^n F(s) - s^{n-1} f(0) - \cdots - f^{(n-1)}(0)$ | $f$ is assumed to be $n$-times differentiable, with $n$th derivative of exponential type. Follows by mathematical induction. |
| Frequency integration | $\frac{f(t)}{t}$ | $\int_s^\infty F(\sigma)\, d\sigma$ | This is deduced using the nature of frequency differentiation and conditional convergence. |
| Integration | $\int_0^t f(\tau)\, d\tau = (u * f)(t)$ | $\frac{1}{s} F(s)$ | $u(t)$ is the Heaviside step function. Note $(u * f)(t)$ is the convolution of $u(t)$ and $f(t)$. |
| Time scaling | $f(at)$ | $\frac{1}{\lvert a\rvert} F\left(\frac{s}{a}\right)$ | |
| Frequency shifting | $e^{at} f(t)$ | $F(s - a)$ | |
| Time shifting | $f(t - a) u(t - a)$ | $e^{-as} F(s)$ | $u(t)$ is the Heaviside step function. |
| Multiplication | $f(t) g(t)$ | $\frac{1}{2\pi i}\lim_{T\to\infty}\int_{c-iT}^{c+iT}F(\sigma)G(s-\sigma)\,d\sigma$ | The integration is done along the vertical line $\operatorname{Re}(\sigma) = c$ that lies entirely within the region of convergence of $F$.[12] |
| Convolution | $(f * g)(t) = \int_0^t f(\tau)g(t-\tau)\,d\tau$ | $F(s) \cdot G(s)$ | $f(t)$ and $g(t)$ are extended by zero for $t < 0$ in the definition of the convolution. |
| Complex conjugation | $f^*(t)$ | $F^*(s^*)$ | |
| Cross-correlation | $f(t)\star g(t)$ | $F^*(-s^*)\cdot G(s)$ | |
| Periodic Function | $f(t)$ | $\frac{1}{1 - e^{-Ts}} \int_0^T e^{-st} f(t)\,dt$ | $f(t)$ is a periodic function of period $T$ so that $f(t) = f(t + T)$ for all $t \ge 0$. This is the result of the time shifting property and the geometric series. |
Initial value theorem:
$f(0^+)=\lim_{s\to \infty}{sF(s)}.$
Final value theorem:
$f(\infty)=\lim_{s\to 0}{sF(s)}$, if all poles of $sF(s)$ are in the left half-plane.
The final value theorem is useful because it gives the long-term behaviour without having to perform partial fraction decompositions or other difficult algebra. If a function has poles in the right half-plane or on the imaginary axis (e.g. $e^t$ or $\sin(t)$, respectively), the behaviour of this formula is undefined.
### Relation to moments
Main article: Moment generating function
The quantities
$\mu_n = \int_0^\infty t^nf(t)\,dt$
are the moments of the function f. Note by repeated differentiation under the integral, $(-1)^n(\mathcal L f)^{(n)}(0) = \mu_n$. This is of special significance in probability theory, where the moments of a random variable X are given by the expectation values $\mu_n=E[X^n]$. Then the relation holds:
$\mu_n = (-1)^n\frac{d^n}{ds^n}E[e^{-sX}].$
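A small sympy check of this relation, using the density f(t) = e^{-t} on [0, ∞), whose moments are μ_n = n! (a sketch, not from the article):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-t)                                        # example density on [0, oo)

F = sp.integrate(sp.exp(-s * t) * f, (t, 0, sp.oo))   # Laplace transform: 1/(s + 1)
for n in range(4):
    moment = sp.integrate(t**n * f, (t, 0, sp.oo))    # mu_n = n!
    from_F = (-1)**n * sp.diff(F, s, n).subs(s, 0)
    print(n, moment, sp.simplify(from_F))             # the two columns match
```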
### Proof of the Laplace transform of a function's derivative
It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows:
$\begin{align} \mathcal{L} \left\{f(t)\right\} & = \int_{0^-}^{\infty} e^{-st} f(t)\,dt \\[8pt] & = \left[\frac{f(t)e^{-st}}{-s} \right]_{0^-}^{\infty} - \int_{0^-}^\infty \frac{e^{-st}}{-s} f'(t) \, dt\quad \text{(by parts)} \\[8pt] & = \left[-\frac{f(0^-)}{-s}\right] + \frac{1}{s}\mathcal{L}\left\{f'(t)\right\}, \end{align}$
yielding
$\mathcal{L}\left\{ f'(t) \right\} = s\cdot\mathcal{L} \left\{ f(t) \right\}-f(0^-),$
and in the bilateral case,
$\mathcal{L}\left\{ { f'(t) } \right\} = s \int_{-\infty}^\infty e^{-st} f(t)\,dt = s \cdot \mathcal{L} \{ f(t) \}.$
The general result
$\mathcal{L} \left\{ f^{(n)}(t) \right\} = s^n \cdot \mathcal{L} \left\{ f(t) \right\} - s^{n - 1} f(0^-) - \cdots - f^{(n - 1)}(0^-),$
where $f^{(n)}$ is the $n$th derivative of $f$, can then be established with an inductive argument.
### Evaluating improper integrals
Let $\mathcal{L}\left\{f(t)\right\}=F(s)$, then (see the table above)
$\mathcal{L}\left\{\frac{f(t)}{t}\right\}=\int_{s}^{\infty}F(p)\, dp,$
or
$\int_{0}^{\infty}\frac{f(t)}{t}e^{-st}\, dt=\int_{s}^{\infty}F(p)\, dp.$
Letting s → 0, we get the identity
$\int_{0}^{\infty}\frac{f(t)}{t}\, dt=\int_{0}^{\infty}F(p)\, dp.$
For example,
$\int_{0}^{\infty}\frac{\cos at-\cos bt}{t}\, dt=\int_{0}^{\infty}\left(\frac{p}{p^{2}+a^{2}}-\frac{p}{p^{2}+b^{2}}\right)\, dp=\frac{1}{2}\left.\ln\frac{p^{2}+a^{2}}{p^{2}+b^{2}} \right|_{0}^{\infty} =\ln b-\ln a.$
Another example is Dirichlet integral.
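The identity above is easy to check numerically; the sketch below evaluates the (absolutely convergent) right-hand integral for a = 2, b = 5 and compares it with ln(b) − ln(a):

```python
import mpmath as mp

# Check  int_0^oo (p/(p^2+a^2) - p/(p^2+b^2)) dp = ln(b) - ln(a).
a, b = 2.0, 5.0
integrand = lambda p: p / (p**2 + a**2) - p / (p**2 + b**2)
print(mp.quad(integrand, [0, mp.inf]))   # ~0.916
print(mp.log(b) - mp.log(a))             # ln(5/2) ~ 0.916
```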
### Relationship to other transforms
#### Laplace–Stieltjes transform
The (unilateral) Laplace–Stieltjes transform of a function g : R → R is defined by the Lebesgue–Stieltjes integral
$\{\mathcal{L}^*g\}(s) = \int_0^\infty e^{-st}dg(t).$
The function g is assumed to be of bounded variation. If g is the antiderivative of f:
$g(x) = \int_0^x f(t)\,dt$
then the Laplace–Stieltjes transform of g and the Laplace transform of f coincide. In general, the Laplace–Stieltjes transform is the Laplace transform of the Stieltjes measure associated to g. So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on its cumulative distribution function.13
#### Fourier transform
The continuous Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary argument s = iω or s = 2πfi :
$\begin{align} \hat{f}(\omega) & = \mathcal{F}\left\{f(t)\right\} \\[1em] & = \mathcal{L}\left\{f(t)\right\}|_{s = i\omega} = F(s)|_{s = i \omega}\\[1em] & = \int_{-\infty}^{\infty} e^{-i \omega t} f(t)\,\mathrm{d}t.\\ \end{align}$
This definition of the Fourier transform requires a prefactor of 1/2π on the reverse Fourier transform. This relationship between the Laplace and Fourier transforms is often used to determine the frequency spectrum of a signal or dynamical system.
The above relation is valid as stated if and only if the region of convergence (ROC) of F(s) contains the imaginary axis, σ = 0. For example, the function f(t) = cos(ω0t) has a Laplace transform F(s) = s/(s2 + ω02) whose ROC is Re(s) > 0. As s = iω is a pole of F(s), substituting s = iω in F(s) does not yield the Fourier transform of f(t)u(t), which is proportional to the Dirac delta-function δ(ω-ω0).
However, a relation of the form
$\lim_{\sigma\to 0^+} F(\sigma+i\omega) = \hat{f}(\omega)$
holds under much weaker conditions. For instance, this holds for the above example provided that the limit is understood as a weak limit of measures (see vague topology). General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form of Paley-Wiener theorems.
#### Mellin transform
The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables. If in the Mellin transform
$G(s) = \mathcal{M}\left\{g(\theta)\right\} = \int_0^\infty \theta^s g(\theta) \frac{d\theta}{\theta}$
we set $\theta = e^{-t}$, we get a two-sided Laplace transform.
#### Z-transform
The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of
$z \ \stackrel{\mathrm{def}}{=}\ e^{s T} \$
where $T = 1/f_s$ is the sampling period (in units of time, e.g., seconds) and $f_s$ is the sampling rate (in samples per second or hertz).
Let
$\Delta_T(t) \ \stackrel{\mathrm{def}}{=}\ \sum_{n=0}^{\infty} \delta(t - n T)$
be a sampling impulse train (also called a Dirac comb) and
$\begin{align} x_q(t) & \stackrel{\mathrm{def}}{=}\ x(t) \Delta_T(t) = x(t) \sum_{n=0}^{\infty} \delta(t - n T) \\ & = \sum_{n=0}^{\infty} x(n T) \delta(t - n T) = \sum_{n=0}^{\infty} x[n] \delta(t - n T) \end{align}$
be the continuous-time representation of the sampled x(t)
$x[n] \ \stackrel{\mathrm{def}}{=}\ x(nT) \$
The Laplace transform of the sampled signal $x_q(t) \$ is
$\begin{align} X_q(s) & = \int_{0^-}^\infty x_q(t) e^{-s t} \,dt \\ & = \int_{0^-}^\infty \sum_{n=0}^\infty x[n] \delta(t - n T) e^{-s t} \, dt \\ & = \sum_{n=0}^\infty x[n] \int_{0^-}^\infty \delta(t - n T) e^{-s t} \, dt \\ & = \sum_{n=0}^\infty x[n] e^{-n s T}. \end{align}$
This is precisely the definition of the unilateral Z-transform of the discrete function $x[n]$
$X(z) = \sum_{n=0}^{\infty} x[n] z^{-n}$
with the substitution of $z \to e^{sT}$.
Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal:
$X_q(s) = X(z) \Big|_{z=e^{sT}}.$
The similarity between the Z and Laplace transforms is expanded upon in the theory of time scale calculus.
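This correspondence can be checked numerically for a concrete sequence; the sketch below (not from the article) uses x[n] = aⁿ, whose Z-transform is z/(z − a), and compares the sampled-signal Laplace transform with X(z) at z = e^{sT}:

```python
import numpy as np

# X_q(s) = sum_n x[n] e^{-n s T}  versus  X(z) at z = e^{sT}, for x[n] = a**n.
a, T = 0.5, 1e-3
s = 2.0 + 3.0j                       # Re(s) large enough for convergence
z = np.exp(s * T)

n = np.arange(5000)                  # truncated sum; a**n decays fast
X_q = np.sum(a**n * np.exp(-n * s * T))
X_z = z / (z - a)
print(X_q, X_z)                      # the two values agree
```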
#### Borel transform
The integral form of the Borel transform
$F(s) = \int_0^\infty f(z)e^{-sz}\,dz$
is a special case of the Laplace transform for f an entire function of exponential type, meaning that
$|f(z)|\le Ae^{B|z|}$
for some constants A and B. The generalized Borel transform allows a different weighting function to be used, rather than the exponential function, to transform functions not of exponential type. Nachbin's theorem gives necessary and sufficient conditions for the Borel transform to be well defined.
#### Fundamental relationships
Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theory of the Laplace-, Fourier-, Mellin-, and Z-transforms are at bottom the same subject. However, a different point of view and different characteristic problems are associated with each of these four major integral transforms.
## Table of selected Laplace transforms
The following table provides Laplace transforms for many common functions of a single variable.[14][15] For definitions and explanations, see the Explanatory Notes at the end of the table.
Because the Laplace transform is a linear operator:
• The Laplace transform of a sum is the sum of Laplace transforms of each term.
$\mathcal{L}\left\{f(t) + g(t) \right\} = \mathcal{L}\left\{f(t)\right\} + \mathcal{L}\left\{ g(t) \right\}$
• The Laplace transform of a multiple of a function is that multiple times the Laplace transformation of that function.
$\mathcal{L}\left\{a f(t)\right\} = a \mathcal{L}\left\{ f(t)\right\}$
Using this linearity, and various trigonometric, hyperbolic, and complex number (etc.) properties and/or identities, some Laplace transforms can be obtained from others quicker than by using the definition directly.
The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t). The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal systems is not the same as that of anticausal systems.
| Function | Time domain $f(t) = \mathcal{L}^{-1}\{F(s)\}$ | Laplace $s$-domain $F(s) = \mathcal{L}\{f(t)\}$ | Region of convergence | Reference |
| --- | --- | --- | --- | --- |
| unit impulse | $\delta(t)$ | $1$ | all $s$ | inspection |
| delayed impulse | $\delta(t-\tau)$ | $e^{-\tau s}$ | | time shift of unit impulse |
| unit step | $u(t)$ | $\frac{1}{s}$ | Re(s) > 0 | integrate unit impulse |
| delayed unit step | $u(t-\tau)$ | $\frac{e^{-\tau s}}{s}$ | Re(s) > 0 | time shift of unit step |
| ramp | $t \cdot u(t)$ | $\frac{1}{s^2}$ | Re(s) > 0 | integrate unit impulse twice |
| $n$th power (for integer $n$) | $t^n \cdot u(t)$ | $\frac{n!}{s^{n+1}}$ | Re(s) > 0 (n > −1) | Integrate unit step $n$ times |
| $q$th power (for complex $q$) | $t^q \cdot u(t)$ | $\frac{\Gamma(q+1)}{s^{q+1}}$ | Re(s) > 0, Re(q) > −1 | [16][17] |
| $n$th root | $\sqrt[n]{t} \cdot u(t)$ | $\frac{\Gamma(\frac{1}{n}+1)}{s^{\frac{1}{n}+1}}$ | Re(s) > 0 | Set q = 1/n above. |
| $n$th power with frequency shift | $t^{n} e^{-\alpha t} \cdot u(t)$ | $\frac{n!}{(s+\alpha)^{n+1}}$ | Re(s) > −α | Integrate unit step, apply frequency shift |
| delayed $n$th power with frequency shift | $(t-\tau)^n e^{-\alpha (t-\tau)} \cdot u(t-\tau)$ | $\frac{n! \cdot e^{-\tau s}}{(s+\alpha)^{n+1}}$ | Re(s) > −α | Integrate unit step, apply frequency shift, apply time shift |
| exponential decay | $e^{-\alpha t} \cdot u(t)$ | $\frac{1}{s+\alpha}$ | Re(s) > −α | Frequency shift of unit step |
| two-sided exponential decay | $e^{-\alpha \lvert t\rvert}$ | $\frac{2\alpha}{\alpha^2 - s^2}$ | −α < Re(s) < α | Frequency shift of unit step |
| exponential approach | $(1-e^{-\alpha t}) \cdot u(t)$ | $\frac{\alpha}{s(s+\alpha)}$ | Re(s) > 0 | Unit step minus exponential decay |
| sine | $\sin(\omega t) \cdot u(t)$ | $\frac{\omega}{s^2 + \omega^2}$ | Re(s) > 0 | Bracewell 1978, p. 227 |
| cosine | $\cos(\omega t) \cdot u(t)$ | $\frac{s}{s^2 + \omega^2}$ | Re(s) > 0 | Bracewell 1978, p. 227 |
| hyperbolic sine | $\sinh(\alpha t) \cdot u(t)$ | $\frac{\alpha}{s^2 - \alpha^2}$ | Re(s) > $\lvert\alpha\rvert$ | Williams 1973, p. 88 |
| hyperbolic cosine | $\cosh(\alpha t) \cdot u(t)$ | $\frac{s}{s^2 - \alpha^2}$ | Re(s) > $\lvert\alpha\rvert$ | Williams 1973, p. 88 |
| exponentially decaying sine wave | $e^{-\alpha t} \sin(\omega t) \cdot u(t)$ | $\frac{\omega}{(s+\alpha)^2 + \omega^2}$ | Re(s) > −α | Bracewell 1978, p. 227 |
| exponentially decaying cosine wave | $e^{-\alpha t} \cos(\omega t) \cdot u(t)$ | $\frac{s+\alpha}{(s+\alpha)^2 + \omega^2}$ | Re(s) > −α | Bracewell 1978, p. 227 |
| natural logarithm | $\ln(t) \cdot u(t)$ | $-\frac{1}{s}\left[\ln(s)+\gamma\right]$ | Re(s) > 0 | Williams 1973, p. 88 |
| Bessel function of the first kind, of order $n$ | $J_n(\omega t) \cdot u(t)$ | $\frac{\left(\sqrt{s^2+\omega^2}-s\right)^{n}}{\omega^n \sqrt{s^2 + \omega^2}}$ | Re(s) > 0 (n > −1) | Williams 1973, p. 89 |
| Error function | $\operatorname{erf}(t) \cdot u(t)$ | $\frac{e^{s^2/4} \left(1 - \operatorname{erf}\left(s/2\right)\right)}{s}$ | Re(s) > 0 | Williams 1973, p. 89 |
Explanatory notes:
u(t) represents the Heaviside step function. $\delta(t) \,$ represents the Dirac delta function. Γ(z) represents the Gamma function. γ is the Euler–Mascheroni constant. t, a real number, typically represents time, although it can represent any independent dimension. s is the complex angular frequency, and Re(s) is its real part. α, β, τ, and ω are real numbers. n is an integer.
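Several of these entries can be reproduced symbolically; the following sketch uses sympy's laplace_transform (with noconds=True to suppress the convergence conditions) and is not part of the original table:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('alpha omega', positive=True)

# Spot-check a few table rows (the u(t) factor is implicit for t >= 0).
for f in (1, t, sp.exp(-a * t), sp.sin(w * t), sp.cos(w * t), t * sp.exp(-a * t)):
    print(f, '->', sp.laplace_transform(f, t, s, noconds=True))
```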
## s-Domain equivalent circuits and impedances
The Laplace transform is often used in circuit analysis, and simple conversions to the s-Domain of circuit elements can be made. Circuit elements can be transformed into impedances, very similar to phasor impedances.
Here is a summary of equivalents:
Note that the resistor is exactly the same in the time domain and the s-Domain. The sources are put in if there are initial conditions on the circuit elements. For example, if a capacitor has an initial voltage across it, or if the inductor has an initial current through it, the sources inserted in the s-Domain account for that.
The equivalents for current and voltage sources are simply derived from the transformations in the table above.
## Examples: How to apply the properties and theorems
The Laplace transform is used frequently in engineering and physics; the output of a linear time invariant system can be calculated by convolving its unit impulse response with the input signal. Performing this calculation in Laplace space turns the convolution into a multiplication; the latter being easier to solve because of its algebraic form. For more information, see control theory.
The Laplace transform can also be used to solve differential equations and is used extensively in electrical engineering. The Laplace transform reduces a linear differential equation to an algebraic equation, which can then be solved by the formal rules of algebra. The original differential equation can then be solved by applying the inverse Laplace transform. The English electrical engineer Oliver Heaviside first proposed a similar scheme, although without using the Laplace transform; and the resulting operational calculus is credited as the Heaviside calculus.
### Example 1: Solving a differential equation
In nuclear physics, the following fundamental relationship governs radioactive decay: the number of radioactive atoms N in a sample of a radioactive isotope decays at a rate proportional to N. This leads to the first order linear differential equation
$\frac{dN}{dt} = -\lambda N$
where λ is the decay constant. The Laplace transform can be used to solve this equation.
Rearranging the equation to one side, we have
$\frac{dN}{dt} + \lambda N = 0.$
Next, we take the Laplace transform of both sides of the equation:
$\left( s \tilde{N}(s) - N_o \right) + \lambda \tilde{N}(s) \ = \ 0$
where
$\tilde{N}(s) = \mathcal{L}\{N(t)\}$
and
$N_o \ = \ N(0).$
Solving, we find
$\tilde{N}(s) = { N_o \over s + \lambda }.$
Finally, we take the inverse Laplace transform to find the general solution
$\begin{align} N(t) & = \mathcal{L}^{-1} \{\tilde{N}(s)\} = \mathcal{L}^{-1} \left\{ \frac{N_o}{s + \lambda} \right\} \\ & = \ N_o e^{-\lambda t}, \end{align}$
which is indeed the correct form for radioactive decay.
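The same steps can be reproduced with a computer algebra system; this sketch (using sympy, with N₀ and λ as positive symbols) simply inverts the transformed solution found above:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam, N0 = sp.symbols('lambda N_0', positive=True)

# From the algebraic step above: N~(s) = N0 / (s + lambda).  Invert it.
N_tilde = N0 / (s + lam)
N = sp.inverse_laplace_transform(N_tilde, s, t)
print(N)   # N0*exp(-lambda*t), possibly multiplied by a Heaviside(t) factor
```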
### Example 2: Deriving the complex impedance for a capacitor
In the theory of electrical circuits, the current flow in a capacitor is proportional to the capacitance and rate of change in the electrical potential (in SI units). Symbolically, this is expressed by the differential equation
$i = C { dv \over dt}$
where C is the capacitance (in farads) of the capacitor, i = i(t) is the electric current (in amperes) through the capacitor as a function of time, and v = v(t) is the voltage (in volts) across the terminals of the capacitor, also as a function of time.
Taking the Laplace transform of this equation, we obtain
$I(s) = C \left( s V(s) - V_o \right)$
where
$I(s) = \mathcal{L} \{ i(t) \}, \,$
$V(s) = \mathcal{L} \{ v(t) \}, \,$
and
$V_o \ = \ v(t)|_{t=0}. \,$
Solving for V(s) we have
$V(s) = { I(s) \over sC } + { V_o \over s }.$
The definition of the complex impedance Z (in ohms) is the ratio of the complex voltage V divided by the complex current I while holding the initial state Vo at zero:
$Z(s) = { V(s) \over I(s) } \bigg|_{V_o = 0}.$
Using this definition and the previous equation, we find:
$Z(s) = \frac{1}{sC},$
which is the correct expression for the complex impedance of a capacitor.
### Example 3: Method of partial fraction expansion
Consider a linear time-invariant system with transfer function
$H(s) = \frac{1}{(s+\alpha)(s+\beta)}.$
The impulse response is simply the inverse Laplace transform of this transfer function:
$h(t) = \mathcal{L}^{-1}\{H(s)\}.$
To evaluate this inverse transform, we begin by expanding H(s) using the method of partial fraction expansion:
$\frac{1}{(s+\alpha)(s+\beta)} = { P \over s+\alpha } + { R \over s+\beta }.$
The unknown constants P and R are the residues located at the corresponding poles of the transfer function. Each residue represents the relative contribution of that singularity to the transfer function's overall shape. By the residue theorem, the inverse Laplace transform depends only upon the poles and their residues. To find the residue P, we multiply both sides of the equation by s + α to get
$\frac{1}{s+\beta} = P + { R (s+\alpha) \over s+\beta }.$
Then by letting s = −α, the contribution from R vanishes and all that is left is
$P = \left.{1 \over s+\beta}\right|_{s=-\alpha} = {1 \over \beta - \alpha}.$
Similarly, the residue R is given by
$R = \left.{1 \over s+\alpha}\right|_{s=-\beta} = {1 \over \alpha - \beta}.$
Note that
$R = {-1 \over \beta - \alpha} = - P$
and so the substitution of R and P into the expanded expression for H(s) gives
$H(s) = \left( \frac{1}{\beta-\alpha} \right) \cdot \left( { 1 \over s+\alpha } - { 1 \over s+\beta } \right).$
Finally, using the linearity property and the known transform for exponential decay (see Item #3 in the Table of Laplace Transforms, above), we can take the inverse Laplace transform of H(s) to obtain:
$h(t) = \mathcal{L}^{-1}\{H(s)\} = \frac{1}{\beta-\alpha}\left(e^{-\alpha t}-e^{-\beta t}\right),$
which is the impulse response of the system.
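The partial-fraction step and the final inversion can be checked with sympy; a brief sketch (α and β taken as distinct positive symbols):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
alpha, beta = sp.symbols('alpha beta', positive=True)

H = 1 / ((s + alpha) * (s + beta))
print(sp.apart(H, s))                        # the P/(s+alpha) + R/(s+beta) form
h = sp.inverse_laplace_transform(H, s, t)
print(sp.simplify(h))                        # (exp(-alpha*t) - exp(-beta*t))/(beta - alpha),
                                             # possibly times Heaviside(t)
```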
### Example 3.2: Convolution
The same result can be achieved using the convolution property as if the system is a series of filters with transfer functions of 1/(s+a) and 1/(s+b). That is, the inverse of
$H(s) = \frac{1}{(s+a)(s+b)} = \frac{1}{s+a} \cdot \frac{1}{s+b}$
is
$\mathcal{L}^{-1} \left \{ \frac{1}{s+a} \right \} \, * \, \mathcal{L}^{-1} \left \{ \frac{1}{s+b} \right \} = e^{-at} \, * \, e^{-bt} = \int_0^t e^{-ax}e^{-b(t-x)} \, dx = \frac{e^{-a t}-e^{-b t}}{b-a}.$
### Example 4: Mixing sines, cosines, and exponentials
| Time function | Laplace transform |
| --- | --- |
| $e^{-\alpha t}\left[\cos{(\omega t)}+\left(\frac{\beta-\alpha}{\omega}\right)\sin{(\omega t)}\right]u(t)$ | $\frac{s+\beta}{(s+\alpha)^2+\omega^2}$ |
Starting with the Laplace transform
$X(s) = \frac{s+\beta}{(s+\alpha)^2+\omega^2},$
we find the inverse transform by first adding and subtracting the same constant α to the numerator:
$X(s) = \frac{s+\alpha } { (s+\alpha)^2+\omega^2} + \frac{\beta - \alpha }{(s+\alpha)^2+\omega^2}.$
By the shift-in-frequency property, we have
$\begin{align} x(t) & = e^{-\alpha t} \mathcal{L}^{-1} \left\{ {s \over s^2 + \omega^2} + { \beta - \alpha \over s^2 + \omega^2 } \right\} \\[8pt] & = e^{-\alpha t} \mathcal{L}^{-1} \left\{ {s \over s^2 + \omega^2} + \left( { \beta - \alpha \over \omega } \right) \left( { \omega \over s^2 + \omega^2 } \right) \right\} \\[8pt] & = e^{-\alpha t} \left[\mathcal{L}^{-1} \left\{ {s \over s^2 + \omega^2} \right\} + \left( { \beta - \alpha \over \omega } \right) \mathcal{L}^{-1} \left\{ { \omega \over s^2 + \omega^2 } \right\} \right]. \end{align}$
Finally, using the Laplace transforms for sine and cosine (see the table, above), we have
$x(t) = e^{-\alpha t} \left[\cos{(\omega t)}u(t)+\left(\frac{\beta-\alpha}{\omega}\right)\sin{(\omega t)}u(t)\right].$
or, equivalently, $x(t) = e^{-\alpha t} \left[\cos{(\omega t)}+\left(\frac{\beta-\alpha}{\omega}\right)\sin{(\omega t)}\right]u(t).$
### Example 5: Phase delay
| Time function | Laplace transform |
| --- | --- |
| $\sin{(\omega t+\phi)}$ | $\frac{s\sin\phi+\omega \cos\phi}{s^2+\omega^2}$ |
| $\cos{(\omega t+\phi)}$ | $\frac{s\cos\phi - \omega \sin\phi}{s^2+\omega^2}$ |
Starting with the Laplace transform,
$X(s) = \frac{s\sin\phi+\omega \cos\phi}{s^2+\omega^2}$
we find the inverse by first rearranging terms in the fraction:
$\begin{align} X(s) & = \frac{s \sin \phi}{s^2 + \omega^2} + \frac{\omega \cos \phi}{s^2 + \omega^2} \\ & = (\sin \phi) \left(\frac{s}{s^2 + \omega^2} \right) + (\cos \phi) \left(\frac{\omega}{s^2 + \omega^2} \right). \end{align}$
We are now able to take the inverse Laplace transform of our terms:
$\begin{align} x(t) & = (\sin \phi) \mathcal{L}^{-1}\left\{\frac{s}{s^2 + \omega^2} \right\} + (\cos \phi) \mathcal{L}^{-1}\left\{\frac{\omega}{s^2 + \omega^2} \right\} \\ & =(\sin \phi)(\cos \omega t) + (\sin \omega t)(\cos \phi). \end{align}$
This is just the sine of the sum of the arguments, yielding:
$x(t) = \sin (\omega t + \phi). \$
We can apply similar logic to find that
$\mathcal{L}^{-1} \left\{ \frac{s\cos\phi - \omega \sin\phi}{s^2+\omega^2} \right\} = \cos{(\omega t+\phi)}. \$
### Example 6: Determining structure of astronomical object from spectrum
The wide and general applicability of the Laplace transform and its inverse is illustrated by an application in astronomy. Given the flux density spectrum of an astronomical source of radiofrequency thermal radiation that is too distant to resolve as more than a point, the transform provides information on the spatial distribution of matter in the source; here it relates space and spectrum rather than the time domain and the frequency domain.
Assuming certain properties of the object, e.g. spherical shape and constant temperature, calculations based on carrying out an inverse Laplace transformation on the spectrum of the object can produce the only possible model of the distribution of matter in it (density as a function of distance from the center) consistent with the spectrum.18 When independent information on the structure of an object is available, the inverse Laplace transform method has been found to be in good agreement.
## Notes
1. K.F. Riley, M.P. Hobson, S.J. Bence (2010). Mathematical methods for physics and engineering (3rd ed.). Cambridge University Press. p. 455. ISBN 978-0-521-86153-3.
2. J.J.Distefano, A.R. Stubberud, I.J. Williams (1995). Feedback systems and control (2nd ed.). Schaum's outlines. p. 78. ISBN 0-07-017052-5.
3. Mathematical Handbook of Formulas and Tables (3rd edition), S. Lipschutz, M.R. Spiegel, J. Liu, Schuam's Outline Series, p.183, 2009, ISBN 978-0-07-154855-7 - provides the case for real q.
4. On the interpretation of continuum flux observations from thermal radio sources: I. Continuum spectra and brightness contours, M Salem and MJ Seaton, Monthly Notices of the Royal Astronomical Society (MNRAS), Vol. 167, p. 493-510 (1974) II. Three-dimensional models, M Salem, MNRAS Vol. 167, p. 511-516 (1974)
## References
### Modern
• Arendt, Wolfgang; Batty, Charles J.K.; Hieber, Matthias; Neubrander, Frank (2002), Vector-Valued Laplace Transforms and Cauchy Problems, Birkhäuser Basel, ISBN 3-7643-6549-8 .
• Bracewell, Ronald N. (1978), The Fourier Transform and its Applications (2nd ed.), McGraw-Hill Kogakusha, ISBN 0-07-007013-X
• Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), Boston: McGraw-Hill, ISBN 0-07-116043-4 .
• Davies, Brian (2002), Integral transforms and their applications (Third ed.), New York: Springer, ISBN 0-387-95314-0 .
• Feller, William (1971), An introduction to probability theory and its applications. Vol. II., Second edition, New York: John Wiley & Sons, MR 0270403 .
• Korn, G. A.; Korn, T. M. (1967), Mathematical Handbook for Scientists and Engineers (2nd ed.), McGraw-Hill Companies, ISBN 0-07-035370-0 .
• Polyanin, A. D.; Manzhirov, A. V. (1998), Handbook of Integral Equations, Boca Raton: CRC Press, ISBN 0-8493-2876-4 .
• Schwartz, Laurent (1952), "Transformation de Laplace des distributions", Comm. Sém. Math. Univ. Lund [Medd. Lunds Univ. Mat. Sem.] (in French) 1952: 196–206, MR 0052555 .
• Siebert, William McC. (1986), Circuits, Signals, and Systems, Cambridge, Massachusetts: MIT Press, ISBN 0-262-19229-2 .
• Widder, David Vernon (1941), The Laplace Transform, Princeton Mathematical Series, v. 6, Princeton University Press, MR 0005923 .
• Widder, David Vernon (1945), "What is the Laplace transform?", (The American Mathematical Monthly) 52 (8): 419–425, doi:10.2307/2305640, ISSN 0002-9890, JSTOR 2305640, MR 0013447 .
• Williams, J. (1973), Laplace Transforms, Problem Solvers 10, George Allen & Unwin, ISBN 0-04-512021-8
### Historical
• Deakin, M. A. B. (1981), "The development of the Laplace transform", Archive for the History of the Exact Sciences 25 (4): 343–390, doi:10.1007/BF01395660
• Deakin, M. A. B. (1982), "The development of the Laplace transform", Archive for the History of the Exact Sciences 26: 351–381, doi:10.1007/BF00418754
• Euler, L. (1744), "De constructione aequationum", Opera omnia, 1st series 22: 150–161 .
• Euler, L. (1753), "Methodus aequationes differentiales", Opera omnia, 1st series 22: 181–213 .
• Euler, L. (1769), "Institutiones calculi integralis, Volume 2", Opera omnia, 1st series 12 , Chapters 3–5.
• Grattan-Guinness, I (1997), "Laplace's integral solutions to partial differential equations", in Gillispie, C. C., Pierre Simon Laplace 1749–1827: A Life in Exact Science, Princeton: Princeton University Press, ISBN 0-691-01185-0 .
• Lagrange, J. L. (1773), Mémoire sur l'utilité de la méthode, Œuvres de Lagrange 2, pp. 171–234 .
http://physics.stackexchange.com/questions/16550/why-do-mirages-only-appear-on-hot-days/16555
# Why do mirages only appear on hot days?
A previous question asked why the road sometimes appears wet on hot days.
The reason is that when there's a temperature gradient in the air, it causes a gradient in the index of refraction, causing the light to bend. (A ray diagram illustrating this, from Lagerbaer's answer to the previous question, is not reproduced here.)
First, Wikipedia says that in order for the effect to appear, there must be a temperature gradient on the order of a few degrees per hundred meters above the asphalt.
To investigate this claim, I assumed the air near the surface of the asphalt is at constant pressure, so the density is inverse-proportional to the temperature (i.e. I assumed the density variation due to the weight of the air is small compared to the density variation induced by temperature gradient. Otherwise we would see mirages all the time.)
Then I assumed the difference of index of refraction of air from one is proportional to the density, i.e. $n = 1 + a\frac{\rho}{\rho_0}$ with $n$ the index of refraction of air, $a$ some dimensionless constant, and $\rho_0$ some reference density. Looking up the index of refraction of air online, I guessed $a = 0.0003$.
Then I assumed the temperature above the Earth is modeled by $T = T_0 + gy$ with $y$ the height and $g$ a temperature gradient in ${}^{\circ}\mathrm{C}/\mathrm{m}$.
This completes the model. Fermat's principle gives a variational problem to solve for the path of the light. However, the resulting differential equation was hard to work with, so to first order I approximated that the light starts at a horizontal distance $x = -L$ at height $y =h$, slopes down towards the ground with some slope $m$ until it gets to $x=0$, then slopes back up at the same slope $m$ until it gets to $x = L$ and $y = h$ again. Then I chose $m$ to minimize the travel time of this path.
To first order for small $h$, I found $$m = \frac{agL}{2T_0}$$
The problem is that when I plug in $g = 5^{\circ}\mathrm{C}/100\mathrm{m}$, $a = 0.0003$, $T_0 = 300 \mathrm{K}$, and $L = 100 \mathrm{m}$ I get $m = 2.5*10^{-6}$, which is too small to explain the mirage. It would only allow light to dip a quarter millimeter in the path from a car $200 \mathrm{m}$ away to me. In the Wikipedia image of a mirage, the light clearly dips down at least 1000 times that much (check out the blue car).
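As a quick sanity check on the arithmetic, a sketch with the same numbers as above:

```python
# m = a*g*L/(2*T0) with the numbers quoted above.
a = 3e-4            # n - 1 for air
g = 5.0 / 100.0     # temperature gradient, degrees C per metre
T0 = 300.0          # K
L = 100.0           # m

m = a * g * L / (2 * T0)
print(m)            # ~2.5e-6
print(m * L)        # vertical dip over each 100 m leg: ~2.5e-4 m, i.e. a quarter millimetre
```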
So my first question is: what's wrong? Why are there mirages when this model predicts that there are not?
My second is: why do we see them only when it's hot? This model depends only weakly on absolute temperature, and hot absolute temperatures actually decrease the effect. Evidently, there are only high temperature gradients on hot days, by why should that be? A temperature gradient over the asphalt comes from sun heating the asphalt directly more than it heats the air. Shouldn't that happen on cold days as well as on hot days?
-
## 1 Answer
I have not done the math but would expect that the radiation from the asphalt, which goes as T^4, will favor larger gradients for higher temperatures. I have the impression that air goes something like T^6, so even the conduction energy transferred will have larger gradients the hotter it is. Your g is temperature dependent, I guess.
Edit in response to edit of question.
Shouldn't that happen on cold days as well as on hot days?
I am copying from the comments below:
if you do the calculation of black body, asphalt at 40C radiates 547watts/m^2. At 41C 551, i.e. difference 4watts/m^2 , whereas at 50C 617 and at 51C 628, i.e. 11Watts/m^2 Delta (T). that is what I mean by g is temperature dependent. If there is convection the asphalt remains cooler and the equilibrium between heating from incoming radiation and cooling from black body is lower.
On a cold day even if windless with no convection the equilibrium temperature will be much lower since it starts heating from a much lower ground temperature. You can fry eggs in the summer in Greece on the asphalt, not in the winter.
In addition 5C per meter is a low number. The asphalt may be at 50 or 60C but at 1 meter it will not be more than 35, in Greece, summer. In northern latitudes maybe 25C?
-
First problem is, that those mirages do not appear on warm days, but on sunny days with low or no wind. The asphalt will easily reach 50 °C or more, and not radiate much! So a thin layer of air above the asphalt is heated to rather high temperatures (by conduction!), and acts as a "mirror". What do You mean with this "air T⁶"? – Georg Nov 4 '11 at 15:10
The black asphalt radiates as T^4 (black body formula) Air is a bad conductor. On sunny windless days, without convection mixing the air, it will be radiation which will be heating the immediate layer of air, the infrared part of the spectrum that will jiggle the molecules in passing. The derivative of T^4 is T^3, the higher the asphalt temperature the higher the increment. The air layers also radiate, but as T^6 ( I would have to search for a link), so each layer heating the next layer has even more incremental temperature dependence. – anna v Nov 4 '11 at 15:29
@georg the above for you – anna v Nov 4 '11 at 15:54
@ anna, nitrogen and oxygen do not absorb any IR! The absorption by the trace CO2 is negligible in these dimensions, and last but not least, at 50 to maybe 100 °C the radiation of the asphalt is negligible! – Georg Nov 4 '11 at 21:01
http://mathoverflow.net/questions/70171?sort=votes
## Is there a relation between $P^*|D|P$ and $|P^*DP|$?
Working over the complex field, let $P$ be a nonsingular matrix and $P^*$ its conjugate transpose. Is there a relation between $P^*|D|P$ and $|P^*DP|$, where $D$ is a diagonal matrix? In particular, is it true that
$P^* |D|P \ge |P^*DP|$ in the sense of Lowner order, or is there an order for eigenvalues?
Here $|A|=(A^*A)^{1/2}$, the absolute value of a complex matrix.
Edit As I know from Suvrit's answer, there is no relation like $P^* |D|P \ge |P^* DP|$ in the sense of Lowner order. So my question becomes, is the $i$th largest eigenvalue of $P^* |D|P$ larger than that of $|P^*DP|$?
-
Even the modified question has a negative answer; please see my update. – S. Sra Oct 26 2011 at 10:42
## 2 Answers
Update: In the edited question, the OP asks whether $\lambda_i^\downarrow(P^*|D|P) \ge \lambda_i^\downarrow(|P^*DP|)$ (or even the reverse direction). Such relations do not hold either. Take, e.g., $P=\begin{bmatrix} 2 & 1 \\ 2 & 2\end{bmatrix}$, and use the same $D$ as below.
Then, we have $\lambda^\downarrow(P^*|D|P) = (41.22, 0.776)$, while $\lambda^\downarrow(|P^*DP|) = (23.369, 1.369)$.
However, if one assume that $P$ is a contraction, then several interesting results can be shown.
I don't think there is any useful relation.
Here is a counterexample:
\begin{equation*} P = \begin{bmatrix} 2 & 2\\ 2 & 4 \end{bmatrix}, \end{equation*} and \begin{equation*} D = \begin{bmatrix} -2 & 0\\ 0 & 4 \end{bmatrix} \end{equation*} Then, \begin{equation*} P^T|D|P = \begin{bmatrix} 12 & 16\\ 16 & 24 \end{bmatrix} \end{equation*} and
\begin{equation*} |P^TDP|^2 = \begin{bmatrix} 640 & 1536\\ 1536 & 3712 \end{bmatrix} \end{equation*} Then, \begin{equation*} \lambda(P^T|D|P - |P^TDP|) = (-3.3678, 31.4856), \end{equation*} which is indefinite.
-
Isn't your $P^TDP=[8, 24; 24, 56]$, then $|P^TDP|=[11.3137, 22.6274; 22.6274, 56.56858]$? – Sunni Jul 12 2011 at 23:54
sorry, while typesetting I reran with 'sqrt' instead of 'sqrtm' by mistake; it's still a counterexample with the matrix squareroot--- thanks for catching. – S. Sra Jul 13 2011 at 1:54
I think there is a relation between eigenvalues, i.e. the $k$th largest eigenvalue of $P^* |D|P$ is no less than the $k$th largest eigenvalue of $|P^*DP|$. – Zae Kwong Jul 13 2011 at 17:12
Sure, one can have several inequalities; All I am saying is that the semidefinite ordering that you asked in your second question above, does not hold in either direction because of the above indefinite matrix. – S. Sra Jul 13 2011 at 17:40
Is there any reason why you do not divide everything by 2? – Tsuyoshi Ito Jul 22 2011 at 13:40
Using the same example above (with $P$ and $D$ halved), I found

````
p = [1, 1; 1, 2]

p =
     1     1
     1     2

d = [-1, 0; 0, 2]

d =
    -1     0
     0     2

c = [1, 0; 0, 2]            % c = |d|

c =
     1     0
     0     2

eig(((p*d*p)^2)^(1/2))      % eigenvalues of |p*d*p|

ans =
    0.2426
    8.2426

eig(p*c*p)                  % eigenvalues of p*|d|*p

ans =
    0.1690
   11.8310
````

But this example shows it is possible that the majorization relation holds.
http://mathematica.stackexchange.com/questions/tagged/differential-equations?page=5&sort=newest&pagesize=50
# Tagged Questions
Questions on the symbolic (DSolve, DifferentialRoot) and numerical (NDSolve) solutions of differential equations in Mathematica.
1answer
252 views
### Efficient way to perform elementary integration step with NDSolve internal method
I'm trying to tweak the NDSolve function to perform one elementary integration step (using some explicitly selected stepping algorithm via Method option). Such a possibility is crucial for me, since ...
1answer
408 views
### Solving a PDE containing DiracDelta
I want to get the answer from a PDE: \begin{align*} \frac{\partial \rho(r,t)}{\partial t}&=Dr^{-2}\frac{\partial}{\partial r}r^2h(r)e^{-U(r)}\frac{\partial}{\partial ...
1answer
309 views
### Solving Differential Equation depending on variables solved by NDSolve
How to solve a differential equation which consists of variables depending upon another differential equation?
1answer
120 views
### Using MaxStepFraction as ticks on plot
Is there any way I could use the MaxStepFraction (or grid size) as used in NDSolve in the example below as ticks on the 3d Plot? ...
0answers
141 views
### Trying to plot a solution to a DE on Mathematica [closed]
I am trying to solve and plot a solution to an ODE, but Mathematica keeps returning other useless answers In[0] DSolve[y'[x] == 1 + y[x]^4, y[x], x] Out[0] DSolve[True,y(x),x] Can someone ...
2answers
204 views
### Replacing variable in an equation with an Interpolating function polynomial and plotting residual
I was trying to plot the residual for the solution of my PDE. However, I was unsure about a couple of things. I imported the data and created an Interpolation polynomial with ...
0answers
124 views
### EventLocator with LSODA?
Is the EventLocator option not compatible with LSODA on NDSolve. Below is what I tried to do ...
3answers
132 views
### How could I get the value of y[t] at each specific interpolation point?
sol = NDSolve[{Derivative[2][y][t] + Sin[y[t]] == 0, Derivative[1][y][0] == 0, y[0] == 1}, y, {t, 0, 2}] the above-mentioned differential equations can be solved ...
4answers
268 views
### How can I get the value of a at “t=2.4985352432136567” in the following expression?
By running the following code: ...
3answers
263 views
### Accessing Reduce from DSolve
When solving transcendental equations, Solve frequently warns us that inverse functions are being used so that some solutions may not be found. We also see that ...
1answer
389 views
### How to solve a differential equation of nonhomogeneous fourth order real variable complex function with Mathematica? [closed]
Recently, I was got in trouble in solving a fourth order differential equation with Mathematica. And it is written like: ...
1answer
600 views
### How to discretize a nonlinear PDE fast?
I wish to numerically solve the following PDE. Although there are some complete discussions for solving PDEs in tutorial/NDSolvePDE, there is no hint for the nonlinear case by discretization. Thus, I ...
2answers
223 views
### export data points from differential equation system?
I am solving a dynamical three equation system. Besides plotting the individual effects for each of the state variables in an array and in a tridimensional graph, I would like to export the data ...
1answer
490 views
### Problem while solving system of two second order non linear coupled differential equations using NDSolve function
I am a completely new to Mathematica, and I am sorry if this question is dumb. I have to solve a system of two second order non linear coupled differential equations (that I got from the Lagrangian ...
3answers
704 views
### DSolve gives complex function although the solution is a real one
I have a problem with the DSolve[] command in mathematica 8. Solving the the following 4th order differential equation spits out a complex solution although it should be a real one. The equation is: ...
2answers
244 views
### How to avoid this kind of numerical error caused by extreme parameters when using NDSolve?
Here I use a one-dimensional heat conduction equation as the example. I found that when the thermal diffusion coefficient is small enough, Mathematica will give a result against the second law of ...
1answer
107 views
### Particular solutions of a Differential Equation not evaluated in a given case
Below first case which gives particular solutions of an OED correctly: ...
2answers
612 views
### How to handle NDSolve::ndsz problem (singularity problem)
I have 2 second order differential equations (non-linear). The physics behind them is correct. I verified the equations many times. It is a solid pendulum with a mass-spring at the end of it. Now, ...
2answers
308 views
### Manipulate a Differential Equation result
I want to Manipulate the result of Differential Equation like : ...
2answers
192 views
### Differentiating an unknown solution to a PDE
Sorry if this question is too basic -- I'm not very familiar with Mathematica. I am interested in a way to systematically address the following sort of problem: Suppose that $u=u(x,y)$ is a function ...
2answers
300 views
### Interpreting the interpolating function and saving data to plot with external program
So far I have been solving non-linear pdes with NDSolve and then plotting the result with the in-built Plot3D and ...
0answers
1k views
### Integro-differential equation
I have to numerically solve a nonlinear partial integro-differential equation using Mathematica. This is my equation, \frac{\partial y(x,t)}{\partial t}=\int_{-\infty}^\infty K_0(|x-u|) ...
0answers
41 views
### integro differential equation [duplicate]
Possible Duplicate: Integro-differential equation I have to solve an Integro-differential equation with Mathematica or Matlab, in the following form: ...
4answers
517 views
### Change variables in differential expressions
I have a fairly complicated differential expression in terms of a variable r and two unknown functions of r, B[r] and n[r]. I want to do a Taylor expansion of this around r=infinity. I want to do this ...
1answer
465 views
### I failed to solve a set of one-dimension fluid mechanics PDEs with NDSolve
@DNA The fluid here has been assumed as single component perfect gas i.e. it obeys the equation P=ρRT, the thermal conductivity is assumed as a constant, so the equation is: ...
1answer
177 views
### Multiple simultaneous events in EventLocator method for NDSolve
I'm using NDSolve to integrate a system of ODEs, and EventLocator to stop the integration when it leaves a certain region in phase space. This works perfectly as it should. However, I've also added ...
2answers
411 views
### 3- dimensional plot of 2-dimensional systems of differential equations
Let's take this first example of a 2D output: ...
2answers
589 views
### Animation of Differential Equations from NDSolve with ParametricPlot3D and Evaluate
I have a system of differential equations (referred to as "s") and use NDSolve to obtain the solution. I substitute the interpolated functions for the original ...
1answer
387 views
### vectorial ODE in mathematica with matrix exponentials
I want to solve the following equation in mathematica : DSolve[{X'[t] == A.X[t], X[0] == ( {{0},{0}} )}, X[t], t] It is a system of 2 ODEs coupled by the matrix A, ...
1answer
237 views
### Can DSolve solve systems with unspecified function coefficients?
I am an economic research student with no previous experience with Mathematica, so please pardon me if my questions sounds really stupid. I am hoping to solve a system of nonlinear ODEs symbolically. ...
1answer
154 views
### Could the PrecisionGoal for NDSolve be a negative number?
The help of Mathematica doesn't say so much about the PrecisionGoal for NDSolve, and I never considered much about it even after ...
1answer
311 views
### Fourier series of interpolating function result of NDSolve
I am having a tough time formulating the right question but here goes. I know that solving the pde as in here gives me an interpolating function. I understand that the interpolating function object ...
2answers
276 views
### How to set the initial condition? (to make IC and BC consistent)
I want to find the initial condition which fits mixed boundary condition of Phi[r, Theta, t]. The original initial condition in text is Phi[r, Theta, 0] == 1 . ...
1answer
215 views
### I ran into an error when I was trying to solve a PDE with a piecewise initial condition by NDSolve
This is a very simple one-dimensional heat-conduct equation, the only special part of it is the piecewise initial condition: ...
4answers
1k views
### How can I plot the direction field for a differential equation?
I'd like to plot the graph of the direction field for a differential equation, to get a feel for it. I'm a novice, right now, when it comes to plotting in Mathematica, so I'm hoping that someone can ...
2answers
1k views
### Efficient Langevin Equation Solver
This question is not about good algorithms for solving stochastic differential equations. It is about how to implement simple codes in Mathematica efficiently exploiting Mathematica's programming ...
1answer
117 views
### Problem with NMaximize output [closed]
This is what I'm trying to do: ...
1answer
327 views
### Function output from DSolve
I want to get a function as output form DSolve. For Example : sol = DSolve[{Q''[t] + 40 Q'[t] + 625 Q[t] == 100*Cos[10*t], Q[0] == 0, Q'[0] == 0}, Q[t], t] I ...
3answers
407 views
### How to use results of NDsolve[] for further solving of ODEs?
I have a system of ODEs with 10 eqns. I can solve the first 5 independently. How can I use those results to solve for the remaining 5? An easy example would be $\dot{x}=f(x), \quad \dot{y}=g(x,y)$ ...
1answer
149 views
### Problem with Eventlocator Method for NDSolve
I want to solve the ode and plot the solution v[x] for different values of parameter a where ...
1answer
259 views
### Solution of numerical vector equation
Suppose I have a vector equation: Y'[t]==rhs[Y[t]] and Y[0]==ConstantArray[0,n] where "rhs[Y[t]]" is a black box function ...
1answer
327 views
### Optimizing an energy functional in mathematica using variational calculus
I have an energy functional which I want to optimize using calculus of variation. It would be nice if someone could please post a working example using mathematica. The procedure is as follows, ...
0answers
243 views
### Unexpected result from contour plot - where is the color gradient?
When I plot a contour plot of my (in)famous code :P I get some unexpected results that I don't quite understand. My code (the one that has ...
1answer
110 views
### Backward compatibility issues while DumpSaving Interpolation polynomials
I have bkwrd compatibility issues when I save my NDSolve result (which is an interpolating polynomia) using DumpSave from Mathematica 8 on Windows 7 and then try ...
1answer
387 views
### How to tell mathematica not to resolve stiffness issues
Very often I solve partial differential equations that are nonlinear and could be up to 4th order. In these cases, it is usual for the solution determined by ...
2answers
767 views
### Problem with the DSolve function
OK guys, here's the thing, Mathematica does not return a solution for this system of differential equations: ...
1answer
204 views
### Laplacian or Grad of an InterpolatingFunction
For obtaining a Schlieren image from an equation for density, I need to calculate the first derivative of density and make a contour plot. The below code snippet calculates the first derivative w.r.t ...
0answers
324 views
### Effectively Dirac delta in numeric PDE - Mathematica or Matlab solution?
I am trying to solve a following Partial Differential Equation: u_t(x,y,t)= u_xx(x,y,t)+u_yy(x,y,t) + 7 u(x,x,t) which causes troubles due to the last term ...
2answers
286 views
### How to solve a Differential Equation with DSolve with Function Coefficient?
Suppose I have v[x_] = (1.453 Sech[x + 1])^2 + I Sech[x + 1] Tanh[x + 1] And I have to solve the equation: ...
2answers
450 views
### optimization problem with NDSolve
I want to minimize the function fcc. When fcc is calculated for a specified point the answer is correct: ...
http://johncarlosbaez.wordpress.com/category/biodiversity/
# Azimuth
## Maximum Entropy and Ecology
21 February, 2013
I already talked about John Harte’s book on how to stop global warming. Since I’m trying to apply information theory and thermodynamics to ecology, I was also interested in this book of his:
• John Harte, Maximum Entropy and Ecology, Oxford U. Press, Oxford, 2011.
There’s a lot in this book, and I haven’t absorbed it all, but let me try to briefly summarize his maximum entropy theory of ecology. This aims to be “a comprehensive, parsimonious, and testable theory of the distribution, abundance, and energetics of species across spatial scales”. One great thing is that he makes quantitative predictions using this theory and compares them to a lot of real-world data. But let me just tell you about the theory.
It’s heavily based on the principle of maximum entropy (MaxEnt for short), and there are two parts:
Two MaxEnt calculations are at the core of the theory: the first yields all the metrics that describe abundance and energy distributions, and the second describes the spatial scaling properties of species’ distributions.
### Abundance and energy distributions
The first part of Harte’s theory is all about a conditional probability distribution
$R(n,\epsilon | S_0, N_0, E_0)$
which he calls the ecosystem structure function. Here:
• $S_0$: the total number of species under consideration in some area.
• $N_0$: the total number of individuals under consideration in that area.
• $E_0$: the total rate of metabolic energy consumption of all these individuals.
Given this,
$R(n,\epsilon | S_0, N_0, E_0) \, d \epsilon$
is the probability that given $S_0, N_0, E_0,$ if a species is picked from the collection of species, then it has $n$ individuals, and if an individual is picked at random from that species, then its rate of metabolic energy consumption is in the interval $(\epsilon, \epsilon + d \epsilon).$
Here of course $d \epsilon$ is ‘infinitesimal’, meaning that we take a limit where it goes to zero to make this idea precise (if we’re doing analytical work) or take it to be very small (if we’re estimating $R$ from data).
I believe that when we ‘pick a species’ we’re treating them all as equally probable, not weighting them according to their number of individuals.
Clearly $R$ obeys some constraints. First, since it’s a probability distribution, it obeys the normalization condition:
$\displaystyle{ \sum_n \int d \epsilon \; R(n,\epsilon | S_0, N_0, E_0) = 1 }$
Second, since the average number of individuals per species is $N_0/S_0,$ we have:
$\displaystyle{ \sum_n \int d \epsilon \; n R(n,\epsilon | S_0, N_0, E_0) = N_0 / S_0 }$
Third, since the average over species of the total rate of metabolic energy consumption of individuals within the species is $E_0/ S_0,$ we have:
$\displaystyle{ \sum_n \int d \epsilon \; n \epsilon R(n,\epsilon | S_0, N_0, E_0) = E_0 / S_0 }$
Harte’s theory is that $R$ maximizes entropy subject to these three constraints. Here entropy is defined by
$\displaystyle{ - \sum_n \int d \epsilon \; R(n,\epsilon | S_0, N_0, E_0) \ln(R(n,\epsilon | S_0, N_0, E_0)) }$
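To make this concrete, here's a little Python sketch of how you could actually compute such a constrained maximum-entropy distribution (my own toy illustration, with made-up values of $S_0, N_0, E_0$ and a coarse energy grid, not Harte's code). The usual Lagrange-multiplier argument says the maximizer has the form $R(n,\epsilon) \propto e^{-\lambda_1 n - \lambda_2 n \epsilon}$, so all the work is in solving two equations for the two multipliers:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy state variables, chosen purely for illustration.
S0, N0, E0 = 4, 20, 40.0
n = np.arange(1, N0 + 1)[:, None]            # abundances n = 1, ..., N0
eps = np.linspace(0.5, 10.0, 200)[None, :]   # discretized metabolic rates
d_eps = eps[0, 1] - eps[0, 0]

def R_of(lams):
    """MaxEnt form R(n, eps) proportional to exp(-l1 n - l2 n eps), normalized on the grid."""
    l1, l2 = lams
    log_w = -l1 * n - l2 * n * eps
    w = np.exp(log_w - log_w.max())          # shift to avoid overflow
    return w / (w.sum() * d_eps)

def constraint_gap(lams):
    R = R_of(lams)
    return [np.sum(R * n) * d_eps - N0 / S0,          # <n>    should be N0/S0
            np.sum(R * n * eps) * d_eps - E0 / S0]    # <n e>  should be E0/S0

lams = fsolve(constraint_gap, x0=[0.1, 0.1])   # may need a reasonable starting guess
R = R_of(lams)       # R[i, j] approximates R(n = i+1, eps_j | S0, N0, E0)
```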
Harte uses this theory to calculate $R,$ and tests the results against data from about 20 ecosystems. For example, he predicts the abundance of species as a function of their rank, with rank 1 being the most abundant, rank 2 being the second most abundant, and so on. And he gets results like this:
The data here are from:
• Green, Harte, and Ostling’s work on a serpentine grassland,
• the Luquillo 10.24-hectare tropical forest plot, and
• the Cocoli 2-hectare wet tropical forest plot.
The fit looks good to me… but I should emphasize that I haven’t had time to study these matters in detail. For more, you can read this paper, at least if your institution subscribes to this journal:
• J. Harte, T. Zillio, E. Conlisk and A. Smith, Maximum entropy and the state-variable approach to macroecology, Ecology 89 (2008), 2700–2711.
### Spatial abundance distribution
The second part of Harte’s theory is all about a conditional probability distribution
$\Pi(n | A, n_0, A_0)$
This is the probability that $n$ individuals of a species are found in a region of area $A$ given that it has $n_0$ individuals in a larger region of area $A_0.$
$\Pi$ obeys two constraints. First, since it’s a probability distribution, it obeys the normalization condition:
$\displaystyle{ \sum_n \Pi(n | A, n_0, A_0) = 1 }$
Second, since the mean value of $n$ across regions of area $A$ equals $n_0 A/A_0,$ we have
$\displaystyle{ \sum_n n \Pi(n | A, n_0, A_0) = n_0 A/A_0 }$
Harte’s theory is that $\Pi$ maximizes entropy subject to these two constraints. Here entropy is defined by
$\displaystyle{- \sum_n \Pi(n | A, n_0, A_0)\ln(\Pi(n | A, n_0, A_0)) }$
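This one is simple enough to push almost to a closed form: the maximizer is $\Pi(n) \propto e^{-\lambda n}$ on $n = 0, 1, \dots, n_0,$ with the single Lagrange multiplier $\lambda$ fixed by the mean constraint. Here's a small Python sketch (my own illustration, not from the book):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import logsumexp

def maxent_pi(n0, area_fraction):
    """MaxEnt Pi(n | A, n0, A0) on n = 0..n0 with mean n0 * A/A0."""
    n = np.arange(n0 + 1)
    target = n0 * area_fraction

    def mean_gap(lam):
        log_pi = -lam * n - logsumexp(-lam * n)   # Pi(n) proportional to exp(-lam n)
        return np.exp(log_pi) @ n - target

    lam = brentq(mean_gap, -50.0, 50.0)           # solve for the multiplier
    return np.exp(-lam * n - logsumexp(-lam * n))

pi = maxent_pi(n0=100, area_fraction=0.25)
print(pi @ np.arange(101))   # mean abundance: prints 25.0 (= 100 * 0.25)
```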
Harte explains two approaches to use this idea to derive ‘scaling laws’ for how $n$ varies with the area $A$. And again, he compares his predictions to real-world data, and gets results that look good to my (amateur, hasty) eye!
I hope sometime I can dig deeper into this subject. Do you have any ideas, or knowledge about this stuff?
## The Mathematics of Biodiversity (Part 8)
14 July, 2012
Last time I mentioned that estimating entropy from real-world data is important not just for measuring biodiversity, but also for another area of biology: neurobiology!
When you look at something, neurons in your eye start firing. But how, exactly, is their firing related to what you see? Questions like this are hard! Answering them— ‘cracking the neural code’—is a big challenge. To make progress, neuroscientists are using information theory. But as I explained last time, estimating information from experimental data is tricky.
Romain Brasselet, now a postdoc at the Max Planck Institute for Biological Cybernetics at Tübingen, is working on these topics. He sent me a nice email explaining this area.
This is a bit of a digression, but the Mathematics of Biodiversity program in Barcelona has been extraordinarily multidisciplinary, with category theorists rubbing shoulders with ecologists, immunologists and geneticists. One of the common themes is entropy and its role in biology, so I think it’s worth posting Romain’s comments here. This is what he has to say…
### Information in neurobiology
I will try to explain why neurobiologists are today very interested in reliable estimates of entropy/information and what are the techniques we use to obtain them.
The activity of sensory as well as more central neurons is known to be modulated by external stimulations. In 1926, in a seminal paper, Adrian observed that neurons in the sciatic nerve of the frog fire action potentials (or spikes) when some muscle in the hindlimb is stretched. In addition, he observed that the frequency of the spikes increases with the amplitude of the stretching.
• E.D. Adrian, The impulses produced by sensory nerve endings. (1926).
For another very nice example, in 1962, Hubel and Wiesel found neurons in the cat visual cortex whose activity depends on the orientation of a visual stimulus, a simple black line over white background: some neurons fire preferentially for one orientation of the line (Hubel and Wiesel were awarded the 1981 Nobel Prize in Physiology for their work). This incidentally led to the concept of “receptive field” which is of tremendous importance in neurobiology—but though it’s fascinating, it’s a different topic.
Good, we are now able to define what makes a neuron tick. The problem is that neural activity is often very “noisy”: when the exact same stimulus is presented many times, the responses appear to be very different from trial to trial. Even careful observation cannot necessarily reveal correlations between the stimulations and the neural activity. So we would like a measure capable of capturing the statistical dependencies between the stimulation and the response of the neuron to know if we can say something about the stimulation just by observing the response of a neuron, which is essentially the task of the brain. In particular, we want a fundamental measure that does not rely on any assumption about the functioning of the brain. Information theory provides the tools to do this, that is why we like to use it: we often try to measure the mutual information between stimuli and responses.
To my knowledge, the first paper using information theory in neuroscience was by MacKay and McCulloch in 1952:
• Donald M. Mackay and Warren S. McCulloch, The limiting information capacity of a neuronal link, Bulletin of Mathematical Biophysics 14 (1952), 127–135.
But information theory was not used in neuroscience much until the early 90′s. It started again with a paper by Bialek et al. in 1991:
• W. Bialek, F. Rieke, R. R. de Ruyter van Steveninck and D. Warland, Reading a neural code, Science 252 (1991), 1854–1857.
However, when applying information-theoretic methods to biological data, we often have a limited sampling of the neural response, we are usually very happy when we have 50 trials for a given stimulus. Why is this limited sample a problem?
During the major part of the 20th century, following Adrian’s finding, the paradigm for the neural code was the frequency of the spikes or, equivalently, the number of spikes in a window of time. But in the early 90′s, it was observed that the exact timing of spikes is (in some cases) reliable across trials. So instead of considering the neural response as a single number (the number of spikes), the temporal patterns of spikes started to be taken into account. But time is continuous, so to be able to do actual computations, time was discretized and a neural response became a binary string.
Now, if you consider relevant time-scales, say, a 100 millisecond time window with a 1 millisecond bin and a firing frequency of about 50 per second, then your response space is huge and the estimates of information with only 50 trials are not reliable anymore. That’s why a lot of effort has gone into overcoming the limited sampling bias.
Now, getting at the techniques developed in this field, John already mentioned the work by Liam Paninski, but here are other very interesting references:
• Stefano Panzeri and Alessandro Treves, Analytical estimates of limited sampling biases in different information measures, Network: Computation in Neural Systems 7 (1996), 87–107.
They computed the first-order bias of the information (related to the Miller–Madow correction) and then used a Bayesian technique to estimate the number of responses not included in the sample but that would be in an infinite sample (a goal similar to that of Good’s rule of thumb).
• S.P. Strong, R. Koberle, R.R. de Ruyter van Steveninck, and W. Bialek, Entropy and information in neural spike trains, Phys. Rev. Lett. 80 (1998), 197–200.
The entropy (or if you prefer, information) estimate can be expanded in a power series in $1/N$ (where $N$ is the sample size) around the true value. By computing the estimate for various values of $N$ and fitting it with a parabola in $1/N$, it is possible to extrapolate to the value of the entropy as $N \rightarrow \infty.$
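Here's a rough Python sketch of that extrapolation idea (my own toy version, with a made-up response distribution; the actual procedure in the paper averages over many random sub-samples and handles the bins more carefully):

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_entropy_bits(samples):
    """Naive (plug-in) entropy estimate, in bits."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Fake 'spike words': 8-bin binary patterns drawn from a made-up distribution.
true_p = rng.dirichlet(np.ones(2**8))
data = rng.choice(2**8, size=2000, p=true_p)

# Plug-in estimates on nested sub-samples of size N, then a quadratic fit
# in 1/N, extrapolated to 1/N -> 0 (that is, N -> infinity).
Ns = (np.array([0.25, 0.5, 0.75, 1.0]) * len(data)).astype(int)
estimates = [plugin_entropy_bits(data[:N]) for N in Ns]
fit = np.polyfit(1.0 / Ns, estimates, deg=2)

print("plug-in on full data:   ", estimates[-1])
print("extrapolated (1/N -> 0):", fit[-1])
print("true entropy:           ", -np.sum(true_p * np.log2(true_p)))
```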
These approaches are also well-known:
• Ilya Nemenman, Fariel Shafee and William Bialek, Entropy and inference, revisited, 2002.
• Alexander Kraskov, Harald Stögbauer and Peter Grassberger, Estimating mutual information, Phys. Rev. E. 69 (2004), 066138.
Actually, Stefano Panzeri has quite a few impressive papers about this problem, and recently with colleagues he has made public a free Matlab toolbox for information theory (www.ibtb.org) implementing various correction methods.
Finally, the work by Jonathan Victor is worth mentioning, since he provided (to my knowledge again) the first estimate of mutual information using geometry. This is of particular interest with respect to the work by Christina Cobbold and Tom Leinster on measures of biodiversity that take the distance between species into account:
• J. D. Victor and K. P. Purpura, Nature and precision of temporal coding in visual cortex: a metric-space analysis, Journal of Neural Physiology 76 (1996), 1310–1326.
He introduced a distance between sequences of spikes and from this, derived a lower bound on mutual information.
• Jonathan D. Victor, Binless strategies for estimation of information from neural data, Phys. Rev. E. 66 (2002), 051903.
Taking inspiration from work by Kozachenko and Leonenko, he obtained an estimate of the information based on the distances between the closest responses.
Without getting too technical, that’s what we do in neuroscience about the limited sampling bias. The incentive is that obtaining reliable estimates is crucial to understand the ‘neural code’, the holy grail of computational neuroscientists.
## The Mathematics of Biodiversity (Part 7)
12 July, 2012
How ignorant are you?
Do you know?
Do you know how much you don’t know?
It seems hard to accurately estimate your lack of knowledge. It even seems hard to say precisely how hard it is. But the cool thing is, we can actually extract an interesting math question from this problem. And one answer to this question leads to the following conclusion:
There’s no unbiased way to estimate how ignorant you are.
But the devil is in the details. So let’s see the details!
The Shannon entropy of a probability distribution is a way of measuring how ignorant we are when this probability distribution describes our knowledge.
For example, suppose all we care about is whether this ancient Roman coin will land heads up or tails up:
If we know there’s a 50% chance of it landing heads up, that’s a Shannon entropy of 1 bit: we’re missing one bit of information.
But suppose for some reason we know for sure it’s going to land heads up. For example, suppose we know the guy on this coin is the emperor Pupienus Maximus, an egomaniac who had lead put on the back of all coins bearing his likeness, so his face would never hit the dirt! Then the Shannon entropy is 0: we know what’s going to happen when we toss this coin.
Or suppose we know there’s a 90% chance it will land heads up, and a 10% chance it lands tails up. Then the Shannon entropy is somewhere in between. We can calculate it like this:
$- 0.9 \log_2 (0.9) - 0.1 \log_2 (0.1) = 0.46899...$
so that’s how many bits of information we’re missing.
But now suppose we have no idea. Suppose we just start flipping the coin over and over, and seeing what happens. Can we estimate the Shannon entropy?
Here’s a naive way to do it. First, use your experimental data to estimate the probability that the coin lands heads-up. Then, stick that probability into the formula for Shannon entropy. For example, say we flip the coin 3 times and it lands heads-up once. Then we can estimate the probability of it landing heads-up as 1/3, and tails-up as 2/3. So we can estimate that the Shannon entropy is
$\displaystyle{ - \frac{1}{3} \log_2 (\frac{1}{3}) -\frac{2}{3} \log_2 (\frac{2}{3}) = 0.918... }$
But it turns out that this approach systematically underestimates the Shannon entropy!
Say we have a coin that lands heads up a certain fraction of the time, say $p.$ And say we play this game: we flip our coin $n$ times, see what we get, and estimate the Shannon entropy using the simple recipe I just illustrated.
Of course, our estimate will depend on the luck of the game. But on average, it will be less than the actual Shannon entropy, which is
$- p \log_2 (p) - (1-p) \log_2 (1-p)$
We can prove this mathematically. But it shouldn’t be surprising. After all, if $n = 1,$ we’re playing a game where we flip the coin just once. And with this game, our naive estimate of the Shannon entropy will always be zero! Each time we play the game, the coin will either land heads up 100% of the time, or tails up 100% of the time!
If we play the game with more coin flips, the error gets less severe. In fact it approaches zero as the number of coin flips gets ever larger, so that $n \to \infty.$ The case where you flip the coin just once is an extreme case—but extreme cases can be good to think about, because they can indicate what may happen in less extreme cases.
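If you’d like to see the bias concretely, here’s a little Python check (mine, not from the post): it computes the exact expected value of the naive estimate for a coin with $p = 0.9$, averaging over all possible outcomes of $n$ flips.

```python
import numpy as np
from scipy.stats import binom

def entropy_bits(p):
    """Shannon entropy of a coin with heads-probability p, in bits."""
    return 0.0 if p in (0.0, 1.0) else -(p*np.log2(p) + (1-p)*np.log2(1-p))

def expected_naive_estimate(p, n):
    """Exact average of the plug-in estimate over all outcomes of n flips."""
    return sum(binom.pmf(k, n, p) * entropy_bits(k / n) for k in range(n + 1))

p = 0.9
print("true entropy:", entropy_bits(p))          # about 0.469 bits
for n in (1, 3, 10, 100):
    print(n, "flips -> average estimate:", expected_naive_estimate(p, n))
```

As the output shows, the average estimate is zero for a single flip and creeps up toward the true entropy from below as $n$ grows.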
One moral here is that naively generalizing on the basis of limited data can make you feel more sure you know what’s going on than you actually are.
I hope you knew that already!
But we can also say, in a more technical way, that the naive way of estimating Shannon entropy is a biased estimator: the average value of the estimator is different from the value of the quantity being estimated.
Here’s an example of an unbiased estimator. Say you’re trying to estimate the probability that the coin will land heads up. You flip it $n$ times and see that it lands up $m$ times. You estimate that the probability is $m/n.$ That’s the obvious thing to do, and it turns out to be unbiased.
Statisticians like to think about estimators, and being unbiased is one way an estimator can be ‘good’. Beware: it’s not the only way! There are estimators that are unbiased, but whose standard deviation is so huge that they’re almost useless. It can be better to have an estimate of something that’s more accurate, even though on average it’s a bit too low. So sometimes, a biased estimator can be more useful than an unbiased estimator.
Nonetheless, my ears perked up when Lou Jost mentioned that there is no unbiased estimator for Shannon entropy. In rough terms, the moral is that:
There’s no unbiased way to estimate how ignorant you are.
I think this is important. For example, it’s important because Shannon entropy is also used as a measure of biodiversity. Instead of flipping a coin repeatedly and seeing which side lands up, now we go out and collect plants or animals, and see which species we find. The relative abundance of different species defines a probability distribution on the set of species. In this language, the moral is:
There’s no unbiased way to estimate biodiversity.
But of course, this doesn’t mean we should give up. We may just have to settle for an estimator that’s a bit biased! And people have spent a bunch of time looking for estimators that are less biased than the naive one I just described.
By the way, equating ‘biodiversity’ with ‘Shannon entropy’ is sloppy: there are many measures of biodiversity. The Shannon entropy is just a special case of the Rényi entropy, which depends on a parameter $q$: we get Shannon entropy when $q = 1.$
As $q$ gets smaller, the Rényi entropy gets more and more sensitive to rare species—or shifting back to the language of probability theory, rare events. It’s the rare events that make Shannon entropy hard to estimate, so I imagine there should be theorems about estimators for Rényi entropy, which say it gets harder to estimate as $q$ gets smaller. Do you know such theorems?
Also, I should add that biodiversity is better captured by the ‘Hill numbers’, which are functions of the Rényi entropy, than by the Rényi entropy itself. (See here for the formulas.) Since these functions are nonlinear, the lack of an unbiased estimator for Rényi entropy doesn’t instantly imply the same for the Hill numbers. So there are also some obvious questions about unbiased estimators for Hill numbers. Do you know answers to those?
Here are some papers on estimators for entropy. Most of these focus on estimating the Shannon entropy of a probability distribution on a finite set.
This old classic has a proof that the ‘naive’ estimator of Shannon entropy is biased, and estimates on the bias:
• Bernard Harris, The statistical estimation of entropy in the non-parametric case, Army Research Office, 1975.
He shows the bias goes to zero as we increase the number of samples: the number I was calling $n$ in my coin flip example. In fact he shows the bias goes to zero like $O(1/n).$ This is big O notation, which means that as $n \to +\infty,$ the bias is bounded by some constant times $1/n.$ This constant depends on the size of our finite set—or, if you want to do better, the class number, which is the number of elements on which our probability distribution is nonzero.
Using this idea, he shows that you can find a less biased estimator if you have a probability distribution $p_i$ on a finite set and you know that exactly $k$ of these probabilities are nonzero. To do this, just take the ‘naive’ estimator I described earlier and add $(k-1)/2n.$ This is called the Miller–Madow bias correction. The bias of this improved estimator goes to zero like $O(1/n^2).$
The problem is that in practice you don’t know ahead of time how many probabilities are nonzero! In applications to biodiversity this would amount to knowing ahead of time how many species exist, before you go out looking for them.
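Here’s a toy simulation of both estimators (my own sketch; following common practice, it plugs the number of observed species in for $k$, since the true support size is unknown):

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.5, 0.25, 0.15, 0.1])     # a made-up 4-species community
true_H = -np.sum(p * np.log(p))

def plugin_H(counts):
    q = counts[counts > 0] / counts.sum()
    return -np.sum(q * np.log(q))

def miller_madow_H(counts):
    # Use the number of *observed* species as a stand-in for k.
    return plugin_H(counts) + (np.count_nonzero(counts) - 1) / (2 * counts.sum())

samples = [rng.multinomial(20, p) for _ in range(20000)]
print("true entropy:      ", true_H)
print("mean plug-in:      ", np.mean([plugin_H(c) for c in samples]))
print("mean Miller-Madow: ", np.mean([miller_madow_H(c) for c in samples]))
```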
But what about the theorem that there’s no unbiased estimator for Shannon entropy? The best reference I’ve found is this:
• Liam Paninski, Estimation of entropy and mutual information, Neural Computation 15 (2003) 1191-1254.
In Proposition 8 of Appendix A, Paninski gives a quick proof that there is no unbiased estimator of Shannon entropy for probability distributions on a finite set. But his paper goes far beyond this. Indeed, it seems like a pretty definitive modern discussion of the whole subject of estimating entropy. Interestingly, this subject is dominated by neurobiologists studying entropy of signals in the brain! So, lots of his examples involve brain signals.
Another overview, with tons of references, is this:
• J. Beirlant, E. J. Dudewicz, L. Györfi, and E. C. van der Meulen, Nonparametric entropy estimation: an overview.
This paper focuses on the situation where don’t know ahead of time how many probabilities are nonzero:
• Anne Chao and T.-J. Shen, Nonparametric estimation of Shannon’s index of diversity when there are unseen species in sample, Environmental and Ecological Statistics 10 (2003), 429–443.
In 2003 there was a conference on the problem of estimating entropy, whose webpage has useful information. As you can see, it was dominated by neurobiologists:
• Estimation of entropy and information of undersampled probability distributions: theory, algorithms, and applications to the neural code, Whistler, British Columbia, Canada, 12 December 2003.
By the way, I was very confused for a while, because these guys claim to have found an unbiased estimator of Shannon entropy:
• Stephen Montgomery Smith and Thomas Schürmann, Unbiased estimators for entropy and class number.
However, their way of estimating entropy has a funny property: in the language of biodiversity, it’s only well-defined if our sample includes at least one individual of each species. So, we cannot compute this estimate for an arbitrary list of $n$ samples. This means it’s not an estimator in the usual sense—the sense that Paninski is using! So it doesn’t really contradict Paninski’s result.
To wrap up, let me state Paninski’s result in a mathematically precise way. Suppose $p$ is a probability distribution on a finite set $X$. Suppose $S$ is any number we can compute from $p$: that is, any real-valued function on the set of probability distributions. We’ll be interested in the case where $S$ is the Shannon entropy:
$\displaystyle{ S = -\sum_{x \in X} p(x) \, \log p(x) }$
Here we can use whatever base for the logarithm we like: earlier I was using base 2, but that’s not sacred. Define an estimator to be any function
$\hat{S}: X^n \to \mathbb{R}$
The idea is that given $n$ samples from the set $X,$ meaning points $x_1, \dots, x_n \in X,$ the estimator gives a number $\hat{S}(x_1, \dots, x_n)$. This number is supposed to estimate some feature of the probability distribution $p$: for example, its entropy.
If the samples are independent and distributed according to the distribution $p,$ the sample mean of the estimator will be
$\displaystyle{ \langle \hat{S} \rangle = \sum_{x_1, \dots, x_n \in X} \hat{S}(x_1, \dots, x_n) \, p(x_1) \cdots p(x_n) }$
The bias of the estimator is the difference between the sample mean of the estimator and actual value of $S$:
$\langle \hat{S} \rangle - S$
The estimator $\hat{S}$ is unbiased if this bias is zero for all $p.$
Proposition 8 of Paninski’s paper says there exists no unbiased estimator for entropy! The proof is very short…
Okay, that’s all for today.
I’m back in Singapore now; I learned so much at the Mathematics of Biodiversity conference that there’s no way I’ll be able to tell you all that information. I’ll try to write a few more blog posts, but please be aware that my posts so far give a hopelessly biased and idiosyncratic view of the conference, which would be almost unrecognizable to most of the participants. There are a lot of important themes I haven’t touched on at all… while this business of entropy estimation barely came up: I just find it interesting!
If more of you blogged more, we wouldn’t have this problem.
## The Mathematics of Biodiversity (Part 6)
6 July, 2012
Here are two fun botany stories I learned today from Lou Jost.
### The decline and fall of the Roman Empire
I thought Latin was a long-dead language… except in Finland, where 75,000 people regularly listen to the news in Latin. That’s cool, but surely the last time someone seriously needed to write in Latin was at least a century ago… right?
No! Until the beginning of 2012, botanists reporting new species were required to do so in Latin.
Like this:
Arbor ad 8 alta, raminculis sparse pilosis, trichomatis 2-2.5 mm longis. Folia persistentia; laminae anisophyllae, foliis majoribus ellipticus, 12-23.5 cm longis, 6-13 cm latis, minoribus orbicularis, ca 8.5 cm longis, 7.5 cm latis, apice acuminato et caudato, acuminibus 1.5-2 cm longis, basi rotundata ad obtusam, margine integra, supra sericea, trichomatis 2.5-4 mm longis, appressis, pagina inferiore sericea ad pilosam, trichomatis 2-3 mm longis; petioli 4-7 mm longi. Inflorescentia terminalis vel axillaris, cymosa, 8-10 cm latis. Flores bisexuales; calyx tubularis, ca. 6 mm longus, 10-costatus; corolla alba, tubularis, 5-lobata; stamina 5, filis 8-10 mm longis, pubescentia ad insertionem.
The International Botanical Congress finally voted last year to drop this requirement. So, the busy people who are discovering about 2000 species of plants, algae and fungi each year no longer need to file their reports in the language of the Roman Empire.
### Orchid Fever
The first person who publishes a paper on a new species of plant gets to name it. Sometimes the competition is fierce, as for the magnificent orchid shown above, Phragmipedium kovachii.
Apparently one guy beat another, his archenemy, by publishing an article just a few days earlier. But the other guy took his revenge by getting the first guy arrested for illegally taking an endangered orchid out of Peru. The first guy wound up getting two years’ probation and a \$1,000 fine.
But, he got his name on the orchid!
I believe the full story appears here:
• Eric Hansen, Orchid Fever: A Horticultural Tale of Love, Lust, and Lunacy, Vintage Books, New York, 2001.
You can read a summary here.
### Ecominga
By the way, Lou Jost is not only a great discoverer of new orchid species and a biologist deeply devoted to understanding the mathematics of biodiversity. He also runs a foundation called Ecominga, which operates a number of nature reserves in Ecuador, devoted to preserving the amazing biodiversity of the Upper Pastaza Watershed. This area contains over 190 species of plants not found anywhere else in the world, as well as spectacled bears, mountain tapirs, and an enormous variety of birds.
The forests here are being cut down… but Ecominga has bought thousands of hectares in key locations, and is protecting them. They need money to pay the locals who patrol and run the reserves. It’s not a lot of money in the grand scheme of things—a few thousand dollars a month. So if you’re interested, go to the Ecominga website, check out the information and reports and pictures, and think about giving them some help! Or for that matter, contact me and I’ll put you in touch with him.
## The Mathematics of Biodiversity (Part 5)
3 July, 2012
I’d be happy to get your feedback on these slides of the talk I’m giving the day after tomorrow:
• John Baez, Diversity, entropy and thermodynamics, 6 July 2012, Exploratory Conference on the Mathematics of Biodiversity, Centre de Recerca Matemàtica, Barcelona.
Abstract: As is well known, some popular measures of biodiversity are formally identical to measures of entropy developed by Shannon, Rényi and others. This fact is part of a larger analogy between thermodynamics and the mathematics of biodiversity, which we explore here. Any probability distribution can be extended to a 1-parameter family of probability distributions where the parameter has the physical meaning of ‘temperature’. This allows us to introduce thermodynamic concepts such as energy, entropy, free energy and the partition function in any situation where a probability distribution is present—for example, the probability distribution describing the relative abundances of different species in an ecosystem. The Rényi entropy of this probability distribution is closely related to the change in free energy with temperature. We give one application of thermodynamic ideas to population dynamics, coming from the work of Marc Harper: as a population approaches an ‘evolutionary optimum’, the amount of Shannon information it has ‘left to learn’ is nonincreasing. This fact is closely related to the Second Law of Thermodynamics.
This talk is rather different than the one I’d envisaged giving! There was a lot of interest in my work on Rényi entropy and thermodynamics, because Rényi entropies—and their exponentials, called the Hill numbers—are an important measure of biodiversity. So, I decided to spend a lot of time talking about that.
## The Mathematics of Biodiversity (Part 4)
2 July, 2012
Today the conference part of this program is starting:
• Research Program on the Mathematics of Biodiversity, June-July 2012, Centre de Recerca Matemàtica, Barcelona, Spain. Organized by Ben Allen, Silvia Cuadrado, Tom Leinster, Richard Reeve and John Woolliams.
Lou Jost kicked off the proceedings with an impassioned call to think harder about fundamental concepts:
• Lou Jost, Why biologists should care about the mathematics of biodiversity.
Then Tom Leinster gave an introduction to some of these concepts, and Lou explained how they show up in ecology, genetics, economics and physics.
Suppose we have $n$ different species on an island. Suppose a fraction $p_i$ of the organisms belong to the $i$th species. So,
$\displaystyle{ \sum_{i=1}^n p_i = 1}$
and mathematically we can treat these numbers as probabilities.
People have many ways to compute the ‘biodiversity’ from these numbers. Some of these can be wildly misleading when applied incorrectly, and this has led to shocking errors. For example, in genetics, a commonly used formula for determining when plants or animals on a bunch of islands will split into separate species is completely wrong.
In fact, if we’re not careful, some measures of biodiversity can fool us into thinking we’re saving most of the biodiversity when we’re actually losing almost all of it!
One good example involves measures of similarity between tropical butterflies in the canopy (the top of the forest) and the understory (the bottom). According to Lou Jost, some published studies say the similarity is about 95%. That sounds like the two communities are almost the same. However, almost no butterflies living in the canopy live in the understory, and vice versa! The problem is that mathematics is being used inappropriately.
Here are four famous measures of biodiversity:
• Species richness. This is just the number of species:
$n$
• Shannon entropy. This is the expected amount of information you gain when someone tells you which species an organism belongs to:
$\displaystyle{ - \sum_{i=1}^n p_i \ln(p_i) }$
• The inverse Simpson index. This is the reciprocal of the probability that two randomly chosen organisms belong to the same species:
$\displaystyle{ 1 \big/ \sum_{i=1}^n p_i^2 }$
The probability that two organisms belong to the same species is called the Simpson index:
$\displaystyle{ \sum_{i=1}^n p_i^2 }$
This is used in economics as a measure of the concentration of wealth, where $p_i$ is the fraction of wealth owned by the $i$th individual. Be careful: there’s a lot of different jargon in different fields, so it’s easy to get confused at first! For example, the probability that two organisms belong to different species is often called the Gini–Simpson index:
$\displaystyle{ 1 - \sum_{i=1}^n p_i^2 }$
It was introduced by the statistician Corrado Gini a century ago, in 1912, and by the ecologist Edward H. Simpson in 1949. It’s also called the heterozygosity in genetics.
• The Berger–Parker index. This is the fraction of organisms that belong to the most common species:
$\mathrm{max} \, p_i$
So, unlike the other main ones I’ve listed, this quantity tends to go down when biodiversity goes up. To fix this we could take its reciprocal, as we did with the Simpson index.
What a mess, eh? But here’s some good news: all these quantities are functions of a single quantity, the Rényi entropy:
$\displaystyle{ H_q(p) = \frac{1}{1 -q} \ln \sum_{i=1}^n p_i^q }$
for various values of the parameter $q.$
I’ve written about the Rényi entropies and their role in thermodynamics before on this blog. I’ll also talk about it later in this conference, and I’ll show you my slides. So, I won’t repeat that story here. Suffice it to say that Rényi entropies are fascinating but still a bit mysterious to me.
But one of Lou Jost’s main points is that we can make bad mistakes if we work with Rényi entropies when we should be working with their exponentials, which are called Hill numbers and denoted by a $D$, for ‘diversity’:
$\displaystyle{ {}^qD(p) = e^{H_q(p)} = \left(\sum_{i=1}^n p_i^q \right)^{\frac{1}{1-q}} }$
These were introduced by M. O. Hill in 1973. One reason they’re good is that they are effective numbers. This means that if all the species are equally common, the Hill number equals the number of species, regardless of $q$:
$p_i = \frac{1}{n} \; \Longrightarrow \; {}^qD(p) = n$
So, they’re a way of measuring an ‘effective’ number of species in situations where species are not all equally common.
A closely related fact is that the Hill numbers obey the replication principle. This means that if we have probability distributions on two finite sets, each with Hill number $X$ for some choice of $q,$ and we combine them with equal weights to get a probability distribution on the disjoint union of those sets, the resulting distribution has Hill number $2X.$
Another good fact is that the Hill numbers are as large as possible when all the probabilities $p_i$ are equal. They’re as small as possible, namely 1, when one of the $p_i$ equals 1 and the rest are zero.
Let’s see how all the measures of biodiversity I listed are either Hill numbers or can easily be converted to Hill numbers. We’ll also see that at $q = 0,$ the Hill number treats all species that are present in an equal way, regardless of their abundance. As $q$ increases, it counts more abundant species more heavily, since we’re raising the probabilities $p_i$ to a bigger power. And when $q = \infty$, we only care about the most abundant species: none of the others matter at all!
Here goes:
• The species richness is the limit of the Hill numbers as $q \to 0$ from above:
$\displaystyle{ \lim_{q \to 0^+} {}^qD (p) = n }$
So, we can just call this ${}^0D(p).$
• The exponential of the Shannon entropy is the limit of the Hill numbers as $q \to 1$:
$\displaystyle{ \lim_{q \to 1} {}^qD(p) = \exp\left(- \sum_{i=1}^n p_i \ln(p_i)\right) }$
So, we can just call this ${}^1D(p).$
• The inverse Simpson index is the Hill number at $q = 2$:
$\displaystyle{ {}^2D(p) = 1 \big/ \sum_{i=1}^n p_i^2 }$
• The reciprocal of the Berger–Parker index is the limit of Hill numbers as $q \to +\infty$:
$\displaystyle{ \lim_{q \to +\infty} {}^qD(p) = 1 \big/ \mathrm{max} \, p_i }$
so we can call this quantity ${}^\infty D(p).$
These facts mean that understanding Hill numbers will help us understand lots of measures of biodiversity! And the good properties of Hill numbers will help us avoid dangerous mistakes.
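Since every one of these special cases drops out of the single formula for ${}^qD$, it’s easy to compute them all with one little function. Here’s a Python sketch (mine, just to make the formulas concrete):

```python
import numpy as np

def hill_number(p, q):
    """Hill number (effective number of species) of order q."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1:                       # exponential of Shannon entropy
        return np.exp(-np.sum(p * np.log(p)))
    if np.isinf(q):                  # reciprocal of the Berger-Parker index
        return 1.0 / p.max()
    return np.sum(p**q) ** (1.0 / (1.0 - q))

p = [0.5, 0.3, 0.1, 0.05, 0.05]
for q in (0, 1, 2, np.inf):
    print(q, hill_number(p, q))

# Effective-number check: for a perfectly even community of 4 species
# the answer is 4, whatever q is.
print([round(hill_number([0.25]*4, q), 6) for q in (0, 0.5, 1, 2, np.inf)])
```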
For mathematicians, a good challenge is to find theorems uniquely characterizing the Hill numbers…. preferably with assumptions that biologists will accept as plausible facts about ‘diversity’. Some theorems like this already exist for specific choices of $q,$ but it will be better to characterize the function ${}^q D$ for all values of $q$ in one blow. Tom Leinster is working on such a theorem now.
Another important task is to generalize Hill numbers to take into account things like:
• ‘distances’ between species, measured either genetically, phylogenetically or functionally,
• ‘values’ for species, measured either economically or in any other way.
There’s a lot of work on this, and many of the talks at this conference will discuss these generalizations.
## The Mathematics of Biodiversity (Part 3)
27 June, 2012
We tend to think of biodiversity as a good thing, but sometimes it’s deadly. Yesterday Andrei Korobeinikov gave a talk on ‘Viral evolution within a host’, which was mainly about AIDS.
The virus that causes this disease, HIV, can reproduce very fast. In an untreated patient near death there are between $10^{10}$ and $10^{12}$ new virions per day! Remember, a virion is an individual virus particle. The virus also has a high mutation rate: about $3 \times 10^{-5}$ mutations per generation for each base—that is, each molecule of A,T,C, or G in the RNA of the virus. That may not seem like a lot, but if you multiply it by $10^{12}$ you’ll see that a huge number of new variations of each base arise within the body of a single patient.
So, evolution is at work within you as you die.
And in fact, many scientists believe that the diversity of the virus eventually overwhelms your immune system! Although it’s apparently not quite certain, it seems that while the body generates B cells and T cells to attack different variants of HIV as they arise, they eventually can’t keep up with the sheer number of variants.
Of course, the fact that the HIV virus attacks the immune system makes the disease even worse. Here in blue you see the number of T cells per cubic millimeter of blood, and in red you see the number of virions per cubic centimeter of blood for a typical untreated patient:
Mathematicians and physicists have looked at some very simple models to get a qualitative understanding of these issues. One famous paper that started this off is:
• Lev S. Tsimring, Herbert Levine and David A. Kessler, RNA virus evolution via a fitness-space model, Phys. Rev. Lett. 76 (1996), 4440–4443.
The idea here is to say that at any time $t$ the viruses have a probability density $p(r,t)$ of having fitness $r$. In fact the different genotypes of the virus form a cloud in a higher-dimensional space, but these authors are treating that space as 1-dimensional, with fitness as its one coordinate, just to keep things simple. They then write down an equation for how the population density changes with time:
$\displaystyle{\frac{\partial }{\partial t}p(r,t) = (r - \langle r \rangle)\, p(r,t) + D \frac{\partial^2 }{\partial r^2}p(r,t) - \frac{\partial}{\partial r}(v_{\mathrm{drift}}\, p(r,t)) }$
This is a replication-mutation-drift equation. If we just had
$\displaystyle{\frac{\partial }{\partial t}p(r,t) = (r - \langle r \rangle)\, p(r,t) }$
this would be a version of the replicator equation, which I explained recently in Information Geometry (Part 9). Here
$\displaystyle{ \langle r \rangle = \int_0^\infty r p(r,t) dr }$
is the mean fitness, and the replicator equations says that the fraction of organisms of a given type grows at a rate proportional to how much their fitness exceeds the mean fitness: that’s where the $(r - \langle r \rangle)$ comes from.
If we just had
$\displaystyle{\frac{\partial }{\partial t}p(r,t) = D \frac{\partial^2 }{\partial r^2}p(r,t) }$
this would be the heat equation, which describes diffusion occurring at a rate $D$. This models the mutation of the virus, though not in a very realistic way.
If we just had
$\displaystyle{\frac{\partial}{\partial t} p(r,t) = - \frac{\partial}{\partial r}(v_{\mathrm{drift}} \, p(r,t)) }$
the fitness of the virus would increase at a rate equal to the drift velocity $v_{\mathrm{drift}}$.
If we include both the diffusion and drift terms:
$\displaystyle{\frac{\partial }{\partial t} p(r,t) = D \frac{\partial^2 }{\partial r^2}p(r,t) - \frac{\partial}{\partial r}(v_{\mathrm{drift}} \, p(r,t)) }$
we get the Fokker–Planck equation. This is a famous model of something that’s spreading while also drifting along at a constant velocity: for example, a drop of ink in moving water. Its solutions look like this:
Here we start with stuff concentrated at one point, and it spreads out into a Gaussian while drifting along.
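If you’d like to play with this yourself, here’s a bare-bones explicit finite-difference solver for the drift–diffusion equation above (my own sketch, with made-up values of $D$ and $v_{\mathrm{drift}}$; a serious solver would use a smarter scheme and boundary conditions):

```python
import numpy as np

D, v = 0.01, 0.5                       # made-up diffusion and drift constants
r = np.linspace(0.0, 10.0, 401)
dr = r[1] - r[0]
dt = 0.2 * dr**2 / D                   # small enough for this explicit scheme

p = np.exp(-((r - 1.0) / 0.2)**2)      # start concentrated near r = 1
p /= p.sum() * dr                      # normalize

for _ in range(1000):
    p_rr = (np.roll(p, -1) - 2*p + np.roll(p, 1)) / dr**2   # diffusion term
    p_r  = (np.roll(p, -1) - np.roll(p, 1)) / (2*dr)        # drift term
    p = p + dt * (D * p_rr - v * p_r)
    p[0] = p[-1] = 0.0                 # crude absorbing boundaries

print("mean fitness is now about", (r * p).sum() / p.sum())
```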
By the way, watch out: what biologists call ‘genetic drift’ is actually a form of diffusion, not what physicists call ‘drift’.
More recently, people have looked at another very simple model. You can read about it here:
• Martin A. Nowak, and R. M. May, Virus Dynamics, Oxford University Press, Oxford, 2000.
In this model the variables are:
• the number of healthy human cells of some type, $\mathrm{H}(t)$
• the number of infected human cells of that type, $\mathrm{I}(t)$
• the number of virions, $\mathrm{V}(t)$
These are my names for variables, not theirs. It’s just a sick joke that these letters spell out ‘HIV’.
Chemists like to describe how molecules react and turn into other molecules using ‘chemical reaction networks’. You’ve seen these if you’ve taken chemistry, but I’ve been explaining more about the math of these starting in Network Theory (Part 17). We can also use them here! Though May and Nowak probably didn’t put it this way, we can consider a chemical reaction network with the following 6 reactions:
• the production of a healthy cell:
$\longrightarrow \mathrm{H}$
• the infection of a healthy cell by a virion:
$\mathrm{H} + \mathrm{V} \longrightarrow \mathrm{I}$
• the production of a virion by an infected cell:
$\mathrm{I} \longrightarrow \mathrm{I} + \mathrm{V}$
• the death of a healthy cell:
$\mathrm{H} \longrightarrow$
• the death of an infected cell:
$\mathrm{I} \longrightarrow$
• the death of a virion:
$\mathrm{V} \longrightarrow$
Using a standard recipe, which I explained in the Network Theory series, we can get from this chemical reaction network to some ‘rate equations’ saying how the numbers of healthy cells, infected cells and virions change with time:
$\displaystyle{ \frac{d\mathrm{H}}{dt} = \alpha - \beta \mathrm{H}\mathrm{V} - \gamma \mathrm{H} }$
$\displaystyle{ \frac{d\mathrm{I}}{dt} = \beta \mathrm{H}\mathrm{V} - \delta \mathrm{I} }$
$\displaystyle{ \frac{d\mathrm{V}}{dt} = - \beta \mathrm{H}\mathrm{V} + \epsilon \mathrm{I} - \zeta \mathrm{V} }$
The Greek letters are constants called ‘rate constants’, and there’s one for each of the 6 reactions. The equations we get this way are exactly those described by Nowak and May!
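Here is a small sketch of what integrating these rate equations looks like in practice. The rate constants and initial conditions are placeholder values I made up for illustration, not parameters from Nowak and May:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder rate constants standing in for alpha, beta, gamma, delta, epsilon, zeta.
a, b, c, d, e, z = 10.0, 0.01, 0.1, 0.5, 50.0, 5.0

def hiv_rates(t, y):
    H, I, V = y
    dH = a - b * H * V - c * H
    dI = b * H * V - d * I
    dV = -b * H * V + e * I - z * V
    return [dH, dI, dV]

# Start at the uninfected equilibrium H = alpha/gamma, with a single virion introduced.
sol = solve_ivp(hiv_rates, (0.0, 300.0), [a / c, 0.0, 1.0], max_step=0.1)
H, I, V = sol.y
```

With these made-up numbers the infection takes off, the healthy cell count drops, and the system settles toward a chronically infected steady state, which is qualitatively the story this model is meant to tell.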
What Andrei Korobeinikov has done is to unify the ideas behind the two models I’ve described here. Alas, I don’t have the energy to explain how. Indeed, I don’t even have the energy to explain what the models I’ve described actually predict. Sad, but true.
I don’t see anything online about Korobeinikov’s new work, but you can read some of his earlier work here:
• Andrei Korobeinikov, Global properties of basic virus dynamics models.
• Suzanne M. O’Regan, Thomas C. Kelly, Andrei Korobeinikov, Michael J. A. O’Callaghan and Alexei V. Pokrovskii, Lyapunov functions for SIR and SIRS epidemic models, Appl. Math. Lett. 23 (2010), 446–448.
The SIR and SIRS models are models of disease that also arise from chemical reaction networks. I explained them back in Network Theory (Part 3). That was before I introduced the terminology of chemical reaction networks… back then I was talking about ‘stochastic Petri nets’, which are an entirely equivalent formalism. Here’s the stochastic Petri net for the SIRS model:
Puzzle: Draw the stochastic Petri net for the HIV model discussed above. It should have 3 yellow circles and 6 aqua squares.
## Information Geometry (Part 13)
26 June, 2012
Last time I gave a sketchy overview of evolutionary game theory. Now let’s get serious.
I’ll start by explaining ‘Nash equilibria’ for 2-person games. These are situations where neither player can profit by changing what they’re doing. Then I’ll introduce ‘mixed strategies’, where the players can choose among several strategies with different probabilities. Then I’ll introduce evolutionary game theory, where we think of each strategy as a species, and its probability as the fraction of organisms that belong to that species.
Back in Part 9, I told you about the ‘replicator equation’, which says how these fractions change with time thanks to natural selection. Now we’ll see how this leads to the idea of an ‘evolutionarily stable strategy’. And finally, we’ll see that when evolution takes us toward such a stable strategy, the amount of information the organisms have ‘left to learn’ keeps decreasing!
### Nash equilibria
We can describe a certain kind of two-person game using a payoff matrix, which is an $n \times n$ matrix $A_{ij}$ of real numbers. We think of $A_{ij}$ as the payoff that either player gets if they choose strategy $i$ and their opponent chooses strategy $j.$
Note that in this kind of game, there’s no significant difference between the ‘first player’ and the ‘second player’: either player wins an amount $A_{ij}$ if they choose strategy $i$ and their opponent chooses strategy $j.$ So, this kind of game is called symmetric even though the matrix $A_{ij}$ may not be symmetric. Indeed, it’s common for this matrix to be antisymmetric, meaning $A_{ij} = - A_{ji},$ since in this case what one player wins, the other loses. Games with this extra property are called zero-sum games. But we won’t limit ourselves to those!
We say a strategy $i$ is a symmetric Nash equilibrium if
$A_{ii} \ge A_{ji}$
for all $j.$ This means that if both players use strategy $i,$ neither gains anything by switching to another strategy.
For example, suppose our matrix is
$\left( \begin{array}{rr} -1 & -12 \\ 0 & -3 \end{array} \right)$
Then we’ve got the Prisoner’s Dilemma exactly as described last time! Here strategy 1 is cooperate and strategy 2 is defect. If a player cooperates and so does his opponent, he wins
$A_{11} = -1$
meaning he gets one month in jail. We include a minus sign because ‘winning a month in jail’ is not a good thing. If the player cooperates but his opponent defects, he gets a whole year in jail:
$A_{12} = -12$
If he defects but his opponent cooperates, he doesn’t go to jail at all:
$A_{21} = 0$
And if they both defect, they both get three months in jail:
$A_{22} = -3$
You can see that defecting is a Nash equilibrium, since
$A_{22} \ge A_{12}$
So, oddly, if our prisoners know game theory and believe Nash equilibria are best, they’ll both be worse off than if they cooperate and don’t betray each other.
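Checking the definition by hand is easy for a 2 × 2 game, but it is also the sort of bookkeeping a computer does well. A tiny sketch using the payoff matrix above (note the zero-based indexing, so strategy 2, ‘defect’, shows up as index 1):

```python
import numpy as np

A = np.array([[-1, -12],
              [ 0,  -3]])      # the Prisoner's Dilemma payoffs from above

# Strategy i is a symmetric Nash equilibrium when A[i, i] >= A[j, i] for every j.
nash = [i for i in range(len(A))
        if all(A[i, i] >= A[j, i] for j in range(len(A)))]
print(nash)                    # prints [1]: only 'defect' passes the test
```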
### Nash equilibria for mixed strategies
So far we’ve been assuming that with 100% certainty, each player chooses one strategy $i = 1,2,3,\dots, n.$ Since we’ll be considering more general strategies in a minute, let’s call these pure strategies.
Now let’s throw some probability theory into the stew! Let’s allow the players to pick different pure strategies with different probabilities. So, we define a mixed strategy to be a probability distribution on the set of pure strategies. In other words, it’s a list of $n$ nonnegative numbers
$p_i \ge 0$
that sum to one:
$\displaystyle{ \sum_{i=1}^n p_i = 1 }$
Say I choose the mixed strategy $p$ while you, my opponent, choose the mixed strategy $q.$ Say our choices are made independently. Then the probability that I choose the pure strategy $i$ while you chose $j$ is
$p_i q_j$
so the expected value of my winnings is
$\displaystyle{ \sum_{i,j = 1}^n p_i A_{ij} q_j }$
or using vector notation
$p \cdot A q$
where the dot is the usual dot product on $\mathbb{R}^n.$
We can easily adapt the concept of Nash equilibrium to mixed strategies. A mixed strategy $q$ is a symmetric Nash equilibrium if for any other mixed strategy $p,$
$q \cdot A q \ge p \cdot A q$
This means that if both you and I are playing the mixed strategy $q,$ I can’t improve my expected winnings by unilaterally switching to the mixed strategy $p.$ And neither can you, because the game is symmetric!
If this were a course on game theory, I would now do some examples. But it’s not, so I’ll just send you to page 6 of Sandholm’s paper: he looks at some famous games like ‘hawks and doves’ and ‘rock paper scissors’.
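If you would rather poke at the definition numerically, here is a small sketch. It uses the fact that $p \cdot A q$ is linear in $p,$ so the comparison only needs to be made against the pure strategies. The rock-paper-scissors matrix below is my own rendering of one of the games Sandholm discusses, where no pure strategy is a symmetric Nash equilibrium but the uniform mix is:

```python
import numpy as np

def is_symmetric_nash(q, A, tol=1e-9):
    """Check q.Aq >= p.Aq for every mixed strategy p.  Since p.Aq is linear in p,
    the worst case is a pure strategy, so comparing against max_i (Aq)_i suffices."""
    Aq = A @ q
    return q @ Aq >= Aq.max() - tol

rps = np.array([[ 0, -1,  1],
                [ 1,  0, -1],
                [-1,  1,  0]])       # rock-paper-scissors: win = 1, lose = -1

print(is_symmetric_nash(np.array([1/3, 1/3, 1/3]), rps))   # True
print(is_symmetric_nash(np.array([1.0, 0.0, 0.0]), rps))   # False: paper beats all-rock
```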
### Evolutionarily stable strategies
We’re finally ready to discuss evolutionarily stable strategies. To do this, let’s reinterpret the ‘pure strategies’ $i = 1,2,3, \dots n$ as species. Here I don’t necessarily mean species in the classic biological sense: I just mean different kinds of self-replicating entities, or replicators. For example, they could be different alleles of the same gene.
Similarly, we’ll reinterpret the ‘mixed strategy’ $p$ as describing a mixed population of replicators, where the fraction of replicators belonging to the $i$th species is $p_i.$ These numbers are still probabilities: $p_i$ is the probability that a randomly chosen replicator will belong to the $i$th species.
We’ll reinterpret the payoff matrix $A_{ij}$ as a fitness matrix. In our earlier discussion of the replicator equation, we assumed that the population $P_i$ of the $i$th species grew according to the replicator equation
$\displaystyle{ \frac{d P_i}{d t} = f_i(P_1, \dots, P_n) P_i }$
where the fitness function $f_i$ is any smooth function of the populations of each kind of replicator.
But in evolutionary game theory it’s common to start by looking at a simple special case where
$\displaystyle{f_i(P_1, \dots, P_n) = \sum_{j=1}^n A_{ij} p_j }$
where
$\displaystyle{ p_j = \frac{P_j}{\sum_k P_k} }$
is the fraction of replicators who belong to the $j$th species.
What does this mean? The idea is that we have a well-mixed population of game players—or replicators. Each one has its own pure strategy—or species. Each one randomly roams around and ‘plays games’ with each other replicator it meets. It gets to reproduce at a rate proportional to its expected winnings.
This is unrealistic in all sorts of ways, but it’s mathematically cute, and it’s been studied a lot, so it’s good to know about. Today I’ll explain evolutionarily stable strategies only in this special case. Later I’ll go back to the general case.
Suppose that we select a sample of replicators from the overall population. What is the mean fitness of the replicators in this sample? For this, we need to know the probability that a replicator from this sample belongs to the $i$th species. Say it’s $q_i.$ Then the mean fitness of our sample is
$\displaystyle{ \sum_{i,j=1}^n q_i A_{ij} p_j }$
This is just a weighted average of the fitnesses in our earlier formula. But using the magic of vectors, we can write this sum as
$q \cdot A p$
We already saw this type of expression in the last section! It’s my expected winnings if I play the mixed strategy $q$ and you play the mixed strategy $p.$
John Maynard Smith defined $q$ to be an evolutionarily stable strategy if, when we add a small population of ‘invaders’ distributed according to any other probability distribution $p,$ the original population is more fit than the invaders.
In simple terms: a small ‘invading’ population can’t do better than the population as a whole.
Mathematically, this means:
$q \cdot A ((1-\epsilon)q + \epsilon p) > p \cdot A ((1-\epsilon)q + \epsilon p)$
for all mixed strategies $p \ne q$ and all sufficiently small $\epsilon > 0.$ Here
$(1-\epsilon)q + \epsilon p$
is the population we get by replacing an $\epsilon$-sized portion of our original population by invaders.
Puzzle: Show that $q$ is an evolutionarily stable strategy if and only if these two conditions hold for all mixed strategies $p \ne q:$
$q \cdot A q \ge p \cdot A q$
and also
$q \cdot A q = p \cdot A q \; \implies \; q \cdot A p > p \cdot A p$
The first condition says that $q$ is a symmetric Nash equilibrium. In other words, the invaders can’t on average be better playing against the original population than members of the original population are. The second says that if the invaders are just as good at playing against the original population, they must be worse at playing against each other! The combination of these conditions means the invaders won’t take over.
Again, I should do some examples… but instead I’ll refer you to page 9 of Sandholm’s paper, and also these course notes:
• Samuel Alizon and Daniel Cownden, Evolutionary games and evolutionarily stable strategies.
• Samuel Alizon and Daniel Cownden, Replicator dynamics.
### The decrease of relative information
Now comes the punchline… but with a slight surprise twist at the end. Last time we let
$P = (P_1, \dots , P_n)$
be a population that evolves with time according to the replicator equation, and we let $p$ be the corresponding probability distribution. We supposed $q$ was some fixed probability distribution. We saw that the relative information
$I(q,p) = \displaystyle{ \sum_i \ln \left(\frac{q_i}{ p_i }\right) q_i }$
obeys
$\displaystyle{ \frac{d}{dt} I(q,p) = (p - q) \cdot f(P) }$
where $f(P)$ is the vector of fitness functions. So, this relative information can never increase if
$(p - q) \cdot f(P) \le 0$
for all $P$.
We can adapt this to the special case we’re looking at now. Remember, right now we’re assuming
$\displaystyle{f_i(P_1, \dots, P_n) = \sum_{j=1}^n A_{ij} p_j }$
so
$f(P) = A p$
Thus, the relative information will never increase if
$(p - q) \cdot A p \le 0$
or in other words,
$q \cdot A p \ge p \cdot A p \qquad \qquad \qquad \qquad \qquad \qquad (1)$
Now, this looks very similar to the conditions for an evolutionarily stable strategy as stated in the Puzzle above. But it’s not the same! That’s the surprise twist.
Remember, the Puzzle says that $q$ is an evolutionarily stable state if for all mixed strategies $p \ne q$ we have
$q \cdot A q \ge p \cdot A q \qquad \qquad \qquad \qquad \qquad \qquad (2)$
and also
$q \cdot A q = p \cdot A q \; \implies \; q \cdot A p > p \cdot A p \qquad \; (3)$
Note that condition (1), the one we want, is neither condition (2) nor condition (3)! This drove me crazy for almost a day.
I kept thinking I’d made a mistake, like mixing up $p$ and $q$ somewhere. You’ve got to mind your p’s and q’s in this game!
But the solution turned out to be this. After Maynard Smith came up with his definition of ‘evolutionarily stable state’, another guy came up with a different definition:
• Bernhard Thomas, On evolutionarily stable sets, J. Math. Biology 22 (1985), 105–115.
For him, an evolutionarily stable strategy obeys
$q \cdot A q \ge p \cdot A q \qquad \qquad \qquad \qquad \qquad \qquad (2)$
and also
$q \cdot A p \ge p \cdot A p \qquad \qquad \qquad \qquad \qquad \qquad (1)$
Condition (1) is stronger than condition (3), so he renamed Maynard Smith’s evolutionarily stable strategies weakly evolutionarily stable strategies. And condition (1) guarantees that the relative information $I(q,p)$ can never increase. So, now we’re happy.
Except for one thing: why should we switch from Maynard Smith’s perfectly sensible concept of evolutionarily stable state to this new stronger one? I don’t really know, except that
• it’s not much stronger
and
• it lets us prove the theorem we want!
So, it’s a small mystery for me to mull over. If you have any good ideas, let me know.
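In the meantime, here is a small numerical sketch of the theorem we do get. The game is one I picked just for illustration: the payoff is 1 for playing a different strategy than your opponent and 0 otherwise. For the uniform distribution $q$ a short calculation gives $q \cdot A p - p \cdot A p = |p|^2 - \frac{1}{n} \ge 0,$ so condition (1) holds and $I(q,p)$ should never increase along the replicator flow:

```python
import numpy as np

n = 3
A = np.ones((n, n)) - np.eye(n)    # payoff 1 against a different strategy, 0 against your own
q = np.full(n, 1.0 / n)            # the uniform mix satisfies condition (1) for this game

p = np.array([0.7, 0.2, 0.1])      # an arbitrary starting population
dt, info = 0.01, []
for _ in range(2000):
    fitness = A @ p
    p = p + dt * p * (fitness - p @ fitness)   # Euler step of the replicator equation
    p = np.clip(p, 1e-12, None)
    p /= p.sum()                               # keep p on the probability simplex
    info.append(np.sum(q * np.log(q / p)))     # relative information I(q, p)

print(info[0], info[-1])   # up to the small Euler-step error, this list only decreases
```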
## Information Geometry (Part 12)
24 June, 2012
Last time we saw that if a population evolves toward an ‘evolutionarily stable state’, then the amount of information our population has ‘left to learn’ can never increase! It must always decrease or stay the same.
This result sounds wonderful: it’s a lot like the second law of thermodynamics, which says entropy must always increase. Of course there are some conditions for this wonderful result to hold. The main condition is that the population evolves according to the replicator equation. But the other is the existence of an evolutionarily stable state. Last time I wrote down the rather odd-looking definition of ‘evolutionarily stable state’ without justifying it. I need to do that soon. But if you’ve never thought about evolutionary game theory, I think giving you a little background will help. So today let me try that.
### Evolutionary game theory
We’ve been thinking of evolution as similar to inference or learning. In this analogy, organisms are like ‘hypotheses’, and the population ‘does experiments’ to see if these hypotheses make ‘correct predictions’ (i.e., can reproduce) or not. The successful ones are reinforced while the unsuccessful ones are weeded out. As a result, the population ‘learns’. And under the conditions of the theorem we discussed last time, the relative information—the amount ‘left to learn’—goes down!
While you might object to various points of this analogy, it’s useful—and that’s really all you can ask of an analogy. It’s useful because it lets us steal chunks of math from the subjects of Bayesian inference and machine learning and apply them to the study of biodiversity and evolution! This is what Marc Harper has been doing:
• Marc Harper, Information geometry and evolutionary game theory.
• Marc Harper, The replicator equation as an inference dynamic.
But now let’s bring in another analogy, also contained in Harper’s work. We can also think of evolution as similar to a game. In this analogy, organisms are like ‘strategies’—or if you prefer, they have strategies. The winners get to reproduce, while the losers don’t. John Maynard Smith started developing this analogy in 1973, and eventually wrote a whole book on it:
• John Maynard Smith, Evolution and the Theory of Games, Cambridge University Press, 1982.
As far as I can tell, evolutionary game theory has brought almost as many chunks of math to game theory as it has taken from it. Maybe it’s just my ignorance showing, but it seems that game theory becomes considerably deeper when we think about games that many players play again and again, with the winners getting to reproduce, while the losers are eliminated.
According to William Sandholm:
The birth of evolutionary game theory is marked by the publication of a series of papers by mathematical biologist John Maynard Smith. Maynard Smith adapted the methods of traditional game theory, which were created to model the behavior of rational economic agents, to the context of biological natural selection. He proposed his notion of an evolutionarily stable strategy (ESS) as a way of explaining the existence of ritualized animal conflict.
Maynard Smith’s equilibrium concept was provided with an explicit dynamic foundation through a differential equation model introduced by Taylor and Jonker. Schuster and Sigmund, following Dawkins, dubbed this model the replicator dynamic, and recognized the close links between this game-theoretic dynamic and dynamics studied much earlier in population ecology and population genetics. By the 1980s, evolutionary game theory was a well-developed and firmly established modeling framework in biology.
Towards the end of this period, economists realized the value of the evolutionary approach to game theory in social science contexts, both as a method of providing foundations for the equilibrium concepts of traditional game theory, and as a tool for selecting among equilibria in games that admit more than one. Especially in its early stages, work by economists in evolutionary game theory hewed closely to the interpretation set out by biologists, with the notion of ESS and the replicator dynamic understood as modeling natural selection in populations of agents genetically programmed to behave in specific ways. But it soon became clear that models of essentially the same form could be used to study the behavior of populations of active decision makers. Indeed, the two approaches sometimes lead to identical models: the replicator dynamic itself can be understood not only as a model of natural selection, but also as one of imitation of successful opponents.
While the majority of work in evolutionary game theory has been undertaken by biologists and economists, closely related models have been applied to questions in a variety of fields, including transportation science, computer science, and sociology. Some paradigms from evolutionary game theory are close relatives of certain models from physics, and so have attracted the attention of workers in this field. All told, evolutionary game theory provides a common ground for workers from a wide range of disciplines.
### The Prisoner’s Dilemma
In game theory, the most famous example is the Prisoner’s Dilemma. In its original form, this ‘game’ is played just once:
Two men are arrested, but the police don’t have enough information to convict them. So they separate the two men, and offer both the same deal: if one testifies against his partner (or defects), and the other remains silent (and thus cooperates with his partner), the defector goes free and the cooperator goes to jail for 12 months. If both remain silent, both are sentenced to only 1 month in jail for a minor charge. If they both defect, they both receive a 3-month sentence. Each prisoner must choose either to defect or cooperate with his partner in crime; neither gets to hear what the other decides. What will they do?
Traditional game theory emphasizes the so-called ‘Nash equilibrium’ for this game, in which both prisoners defect. Why don’t they both cooperate? They’d both be better off if they both cooperated. However, for them to both cooperate is ‘unstable’: either one could shorten their sentence by defecting! By definition, a Nash equilibrium has the property that neither player can improve his situation by unilaterally changing his strategy.
In the Prisoner’s Dilemma, the Nash equilibrium is not very nice: both parties would be happier if they’d only cooperate. That’s why it’s called a ‘dilemma’. Perhaps the most tragic example today is global warming. Even though all players would be better off if everyone cooperated to reduce carbon emissions, each individual player is better off if everyone else cooperates while they alone emit more carbon.
For this and many other reasons, people have been interested in ‘solving’ the Prisoner’s Dilemma: that is, finding reasons why cooperation might be favored over defection.
This book got people really excited in seeing what evolutionary game theory has to say about the Prisoner’s Dilemma:
• Robert Axelrod, The Evolution of Cooperation, Basic Books, New York, 1984. (A related article with the same title is available online.)
The idea is that under certain circumstances, strategies that are ‘nicer’ than defection will gradually take over. The most famous of these strategies is ‘tit for tat’, meaning that you cooperate the first time and after that do whatever your opponent just did. I won’t go into this further, because it’s a big digression and I’m already digressing too far. I’ll just mention that from the outlook of evolutionary game theory, the Prisoner’s Dilemma is still full of surprises. Just this week, some fascinating new work has been causing a stir:
• William Press and Freeman Dyson, Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent, Edge, 18 June 2012.
I hope I’ve succeeded in giving you a vague superficial sense of the history of evolutionary game theory and why it’s interesting. Next time I’ll get serious about the task at hand, which is to understand ‘evolutionarily stable strategies’. If you want to peek ahead, try this nice paper:
• William H. Sandholm, Evolutionary game theory, 12 November 2007.
This is where I got the long quote by Sandholm on the history of evolutionary game theory. The original quote contained lots of references; if you’re interested in those, go to page 3 of this paper.
## The Mathematics of Biodiversity (Part 2)
24 June, 2012
How likely is it that the next thing we see is one of a brand new kind? That sounds like a hard question. Last time I told you about the Good–Turing rule for answering this question.
The discussion that blog entry triggered has been very helpful! Among other things, it got Lou Jost more interested in this subject. Two days ago, he showed me the following simple argument for the Good–Turing estimate.
Suppose there are finitely many species of orchid. Suppose the fraction of orchids belonging to the $i$th species is $p_i.$
Suppose we start collecting orchids. Suppose each time we find one, the chance that it’s an orchid of the $i$th species is $p_i.$ Of course this is not true in reality! For example, it’s harder to find a tiny orchid, like this:
than a big one. But never mind.
Say we collect a total of $N$ orchids. What is the probability that we find no orchids of the $i$th species? It is
$(1 - p_i)^N$
Similarly, the probability that we find exactly one orchid of the $i$th species is
$N p_i (1 - p_i)^{N-1}$
And so on: these are the first two terms in a binomial series.
Let $n_1$ be the expected number of singletons: species for which we find exactly one orchid of that species. Then
$\displaystyle{ n_1 = \sum_i N p_i (1 - p_i)^{N-1} }$
Let $D$ be the coverage deficit: the expected fraction of the total population consisting of species that remain undiscovered. Given our assumptions, this is the same as the chance that the next orchid we find will be of a brand new species.
Then
$\displaystyle{ D = \sum_i p_i (1-p_i)^N }$
since $p_i$ is the fraction of orchids belonging to the $i$th species and $(1-p_i)^N$ is the chance that this species remains undiscovered.
Lou Jost pointed out that the formulas for $n_1$ and $D$ are very similar! In particular,
$\displaystyle{ \frac{n_1}{N} = \sum_i p_i (1 - p_i)^{N-1} }$
should be very close to
$\displaystyle{ D = \sum_i p_i (1 - p_i)^N }$
when $N$ is large. So, we should have
$\displaystyle{ D \approx \frac{n_1}{N} }$
In other words: the chance that the next orchid we find is of a brand new species should be close to the fraction of orchids that are singletons now.
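It is easy to watch this happen in a simulation. The species abundance distribution below is an arbitrary skewed one I made up, not real orchid data:

```python
import numpy as np

rng = np.random.default_rng(1)

S = 200                                  # number of species
weights = rng.pareto(1.5, S) + 1.0       # a made-up, highly uneven abundance pattern
p = weights / weights.sum()

N = 500                                  # number of orchids collected
counts = np.bincount(rng.choice(S, size=N, p=p), minlength=S)

n1 = np.sum(counts == 1)                 # singleton species
good_turing = n1 / N                     # estimated chance the next find is a new species
true_deficit = p[counts == 0].sum()      # actual unseen fraction of the population

print(good_turing, true_deficit)         # typically close, as the argument above predicts
```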
Of course it would be nice to turn these ‘shoulds’ into precise theorems! Theorem 1 in this paper does that:
• David McAllester and Robert E. Schapire, On the convergence rate of Good–Turing estimators, February 17, 2000.
By the way: the only difference between the formulas for $n_1/N$ and $D$ is that the first contains the exponent $N-1,$ while the second contains the exponent $N.$ So, Lou Jost’s argument is a version of Boris Borcic’s ‘time-reversal’ idea:
Good’s estimate is what you immediately obtain if you time-reverse your sampling procedure, e.g., if you ask for the probability that there is a change in the number of species in your sample when you randomly remove a specimen from it.
http://danielwalsh.tumblr.com/tagged/gif
# Dan’s Geometrical Curiosities
## A collection of art and math projects by a Physics student, with help from a friend who likes blog posts.
### Playing with a Flowing Torus
A friend recently sent me a video that demonstrates a remarkable slinky-like object called a toroflux, designed by Jochen Valett:
It is a long ribbon of spring metal wrapped into a coil, except instead of coiling into a helix, as most springs do, this one coils through itself. It coils again and again, for the entire extent of the ribbon, where it is finally joined to the other end, creating a single loop of metal ribbon. Because the loop is passed through itself many times, the resulting shape is non-trivial in the knot-theoretic sense; it takes the form of a torus knot.
This spring behaves in an extraordinary way when manipulated. Because the spring is under tension, and is seeking the lowest energy state possible, when it is released, it prefers to open into a torus with the largest “minor radius” possible. That is, it becomes a horn torus, where the “hole” in the doughnut vanishes.
The force responsible for this then has a tendency to “grab” onto anything placed through the hole of the torus (like a stick), and if the object has enough friction, the toroflux will have good traction. The result is remarkable; the toroflux begins spinning, while simultaneously falling, creating the illusion of a silver bubble-like structure.
I was so excited by this that I had to purchase one for myself. If I had a ribbon of spring metal, I would have wanted to make my own, but unfortunately I couldn’t find anything that would suffice. Before it arrived, I made predictions about how such an object could work, and I have enjoyed playing with it. I would like to explain here how the toroflux works using an informal analogy, hopefully making it understandable.
#### Capturing the idea of “rolling” down
Let’s say we want to build a device that rolls down a vertical rope. When we think of rolling, the first thing that comes to mind is wheels. So we could accomplish the task by placing a number of vertical bicycle wheels radially in contact with the rope, each wheel connected to its neighbor via some framework. Here I have pictured two wheels for simplicity, but the principle could be generalized to many wheels. Provided that the device doesn’t slip off, and traction is maintained between the wheels and the rope, the assembly will fall, and the wheels will simultaneously turn as indicated by the blue arrows (relative to the falling frame).
This allows our apparatus to roll down straight. But what if we want the device to spin like the toroflux does? Well, just like when driving a car, you can guide a wheeled device in a certain direction by turning the wheels. In our case, we want the device to begin rotating. What would happen if we took these wheels and rotated the axis of each wheel some fixed angle about the radial direction? The adjacent diagram illustrates this with a top-down view of the wheel system, with the rope in cross-section. As the wheels “drive” down the rope, they have no choice but to make the whole assembly spin because of the way they are angled.
Now we have something that is much more similar to the toroflux. It not only rolls down a rope, but the whole system also rotates about the axis of the rope as it does so. The ratio of the downward velocity and the rotational velocity is determined by the angle of the wheels.
#### Doing away with the chassis
We now have a number of wheels held together by the support struts and also held tight to the rope with spring mechanisms to maintain traction. The support chassis is necessary for a simple reason: the wheels are not connected to one another. In the topological sense, the set of wheels is comprised of more than one connected component. We can rid ourselves of the chassis if we create a single wheel that locally (relative to the rope) looks identical to the wheels. That is, the rope should be unable to tell the difference between many tires touching it and a single, serpentine tire touching it and curving inward toward the rope in exactly the same way that the individual wheels did.
Here is a way to accomplish this. If you follow the curvature of a wheel, instead of curving around in a circle and reconnecting directly with itself, it must arc over toward the next wheel and join with it! This wheel will also do the same thing with its other neighbor, and this continues all the way around until we have only one spiraling wheel, all one piece. This does not interfere with the rolling of the wheels, because all the wheels spin in the same direction, so the “excess” tread from one wheel goes into the tread that is moving away on the neighboring wheel. Thus the physical result is essentially the same. (This forms a torus “knot” classified by {1,n}, signifying that in a single rotation around the rope, the coils spiral n times.) The above animation shows how the separate wheels are deformed and connected to each other to form a single coil-tire.
This is all fine and good, but we still require a great amount of tension in the spring to keep it pressed firmly enough on the rope to prevent slipping. As the device rotates faster, centrifugal force will pull the spring-tire apart, and it will begin to lose traction with the rope. We will see in the next section how to prevent this from happening.
#### Using centrifugal force to our advantage
Our device runs into some trouble when it starts spinning really fast. As the device spins, the coils pull apart and lose traction with the rope. We can put the coils under more tension, but there are practical limitations. There is a much more clever way to solve this problem, and it involves using centrifugal force against itself to achieve what we are looking for.
Instead of running the coil in the way I described earlier (as a {1,n} torus), we can loop the torus through itself, so that each coil wraps around the other side of the rope; then, when the toroflux begins spinning rapidly, centrifugal force only pulls it tighter around the rope. In this way we form a {n-1,n} knot, which is a true knot: upon orbiting the rope n-1 times, the coil winds n times. This knot resembles, but is not actually equivalent to, a single torus of circles of the Hopf fibration.
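If you want a rough picture of the shape, here is a sketch that samples points on a parametric (p,q) torus-knot curve. Mapping the {n-1,n} label above to (p,q) = (n-1,n) is my own reading of the convention, and the radii and point count are arbitrary:

```python
import numpy as np

def torus_knot(p, q, R=2.0, r=0.6, samples=2000):
    """Points on a (p, q) torus knot: p turns around the torus's central axis
    and q turns around its tube.  R and r are the major and minor radii."""
    t = np.linspace(0.0, 2.0 * np.pi, samples)
    ring = R + r * np.cos(q * t)
    return np.column_stack([ring * np.cos(p * t),
                            ring * np.sin(p * t),
                            r * np.sin(q * t)])

coil = torus_knot(4, 5)   # a {n-1, n}-style toroflux curve with n = 5
```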
Hopefully this explanation makes clear how the toroflux works, and also helps illuminate the ingenuity and utter elegance of this invention. I also made a video to demonstrate it:
### Explaining an astonishing slinky
A friend recently sent me an animated gif depicting a man dropping a stretched slinky in slow motion:
The first time you watch it, it’s hard to resist calling it fake. The slow motion footage clearly shows that the bottom of the slinky doesn’t move until the whole slinky collapses to the bottom. But common sense tells us that this is impossible; of course all parts of the slinky should fall downward under the influence of gravity. (For an extended version of this gif, you can also watch the source video by Veritasium on YouTube, which includes a brief explanation.) My friend was especially skeptical, so I wanted to analyze this problem with cool illustrations and animated gifs…and of course some MATH. If you want to cheat, you can scroll to the end of this post to see the solution and its visualization.
I decided to idealize the problem like this: the slinky is an ideal spring with mass distributed uniformly throughout. It is also a spring that can pass through itself. These assumptions make analyzing the problem easier.
#### Initial Shape of the Slinky
Before we can figure out how the slinky bounces around as time goes on, we first must figure out what the slinky looks like before it is released. Springy things, we know, always try to un-spring themselves (that is, they try to release stress to seek out a “most relaxed state”, or static equilibrium).
(Many things have this behavior. For example, consider water in a pool. Water molecules are trying to “relax” in the earth’s gravitational field — they all want to fall as low as possible — but they cannot all fall to the bottom of the pool, because they all take up some finite volume. If you make waves in the pool, the molecules shuffle about in an effort to try to minimize the total energy in the pool. It’s not until the pool becomes still that you can see that all of the water on the surface has reached a compromise: it becomes flat — equal opportunity for all molecules.)
In our case, we have gravity pulling down, but we also have the stretchiness of the spring playing a role. How can we write down the total energy in the spring due to both the springiness and also the force of gravity?
First, we need a way to keep track of different points on the slinky. Position from the bottom is no good, because the slinky stretches, and points that were once one inch from the bottom now might be two inches. We need a naming system that always assigns the same name to any particular atom on the slinky, so I name an atom by how much mass is below it. While the slinky can stretch around, the amount of mass below a certain point on the slinky remains fixed, so the name of the bottom of the slinky will be zero, while the name of the top will be $$M$$, where $$M$$ is the total mass of the spring. We also need to know where all of these points on the slinky are — how high off the ground they are. We will call the height of a given point (parametrized by $$m$$, the mass below the point) $$y(m)$$. This allows us to write down a formula for the potential energy due to gravity using an integral:
$$U_{gravity}=\int_0^M g y\;dm$$
How about the potential energy in the stretch of the spring? Let’s look at a small region of spring. We can get an idea of how “stretched” the spring is at this point by computing the derivative $$\frac{dy}{dm}$$. If the whole spring were stretched at this same stretch factor, then the length of such a spring would be $$\frac{dy}{dm} M$$. However, the whole spring is not stretched linearly like this, but we know that the small region we’re looking at looks identical to any small region of the same size on the uniformly stretched spring. This means that the amount of stretch energy in this region is equal to the amount of stretch energy in our entire uniformly stretched spring, multiplied by the percentage of region we’re actually looking at. Mathematically, this means if the energy in the uniform spring is $$\frac{1}{2} k(\frac{dy}{dm} M)^2$$, then we must multiply this by a factor $$\frac{dm}{M}$$ to extract the potential energy due to the small region alone. This amounts to
$$U_{spring}=\int_0^M \frac{1}{2} k y'^2 M\;dm.$$
All together, this becomes:
$$U_{total}=\int_0^M g y+\frac{M}{2} k y'^2\;dm.$$
We want to find a spring configuration that minimizes this energy. This amounts to solving the Euler-Lagrange equations
$$\frac{d}{dm}\left[\frac{\partial L}{\partial y'}\right]-\frac{\partial L}{\partial y}=0,$$
where $$L=g y+\frac{M}{2} k y'^2$$ is our potential energy functional. Take note that the primes here represent derivatives with respect to $$m$$, not with respect to $$x$$, which is unconventional.
We find the partial derivatives to be $$\frac{\partial L}{\partial y'}=k y' M$$, and $$\frac{\partial L}{\partial y}=g$$. The resulting differential equation becomes $$k y'' M=g$$, which integrates to
$$y(m)=\frac{g m^2}{2 k M}.$$
Great. Now we’ve found the initial configuration of the dangling spring. An interesting thing to notice here is that the length of the dangling slinky is exactly half that of a weightless spring (with the same spring constant) with all of its mass concentrated at the point at the bottom. I promised there would be some neat graphics, so on the right is what the spring looks like before dropping, with mathematical precision.
#### How the Slinky Wiggles
Now that we know what the spring looks like initially, let’s work out how it moves over time. Looking again at the resulting differential equation, and writing it in the suggestive form
$$k M y^{\prime\prime} -g=0,$$
we can conclude that the left hand side of the equation represents the accelerations of a particular point on our spring, and as they’re being set to zero, it’s no surprise that we were looking for the equilibrium condition. This allows us to extend our result to a dynamic one by replacing the zero with the acceleration
$$k M y^{\prime\prime}-g=\ddot{y},$$
where the double dot over the variable y represents two time derivatives. It’s clear that the first term on the left hand side represents accelerations due to internal tensions in the spring, and the second term represents the acceleration due to gravity. If we move our analysis into a freely-falling reference frame, we see that the g term vanishes, so we are left with
$$k M y^{\prime\prime}=\ddot{y},$$
which is the wave equation in one dimension, which might be more familiar in the form
$$\frac{\partial^2 u}{\partial t^2}=v^2 \frac{\partial^2 u}{\partial x^2},$$
where $$v$$ is the wave velocity. In our case our “velocity” is $$\sqrt{k M}$$, which does not have units of velocity in the standard sense. We want to figure out how the wave will propagate over time given the initial condition we found above.
#### Solving the Wave Equation
Using d’Alembert’s method for solving the wave equation, we can attempt to express the solution as a sum of two oppositely propagating waves:
$$y(m,t)=f(m-vt)+g(m+vt).$$
This wave must satisfy initial conditions, and once the spring is released, it must also satisfy certain boundary conditions at the ends of the spring: there can no longer be any tension there. Thus, the stretching at the boundaries must go to zero, and hence $$\frac{dy}{dm}=0$$ at these boundaries. This is equivalent to saying that the wave has "loose" ends; when a wave hits these loose ends, it reflects back off of them unchanged. If the wave were instead held fixed at both ends, like a string on a piano, the wave would bounce back inverted, in order to ensure that the string remains fixed at the ends. In the same way, when our wave reflects, it reflects unchanged to ensure that the derivative at the boundaries remains zero.
Secondly, there is also an initial velocity condition. At the moment the slinky is released, it is at rest. How can two waves travelling in opposite directions sum to a motionless wave? Simple: if at a given position one wave (say $$f(m)$$) is moving up (due to the fact that it’s propagating to the right), then the other wave ( $$g(m)$$) must be moving down at the exact same rate (due to the fact that it’s propagating to the left). Since both waves are travelling at the same speed, but in opposite directions, the only possible conclusion is that $$f(m)=g(m)$$. If you find this hard to believe, write it out with math and prove it that way. But the purpose of this whole post is to try to avoid math where possible, while still getting a correct answer. Since we know the form of the wave at time zero, we know that the shape of the two counter-moving waves must each be half the size of the total wave. So we conclude
$$f(m)=g(m)=\frac{1}{2}\frac{g m^2}{2 M k}.$$
We also can’t forget the time dependence. In particular, when the waves go along their way, the space they leave behind is replaced by a reflected version. An easier way for me to imagine this is that $$f$$ and $$g$$ are waves extending infinitely in both directions, generated by flip-flopping one period of the above expression for $$f$$ and $$g$$. That is, periodically extend these functions to infinity in both directions, except every other period is mirrored. Now we can imagine these two waves countermoving and superimposing. To achieve this infinite mirror effect, we can use the modulo function, and the following cute trick: shift the parabolic solution over by M, modulo the argument of the function by 2M, and finally shift it back in the opposite direction by M. This will have the effect of looping the function in just the way we wanted. There’s one more remaining step: we need to move back into the non-freefall reference frame by subtracting a $$\frac{1}{2} g t^2$$. When the dust settles, the solution works out to be
$$\frac{g \left(\text{Mod}\left[m-\sqrt{k M}t+M,2M\right]-M\right)^2}{4 k M}+\frac{g \left(\text{Mod}\left[m+\sqrt{k M}t+M,2M\right]-M\right)^2}{4 k M}-\frac{1}{2}g t^2.$$
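To actually draw the falling slinky, all you need to do is evaluate that expression. Here is a minimal sketch; the constants are arbitrary, and the first argument is the mass coordinate $$m$$ from the derivation:

```python
import numpy as np

def slinky_height(m, t, M=1.0, k=1.0, g=9.81):
    """Height of the point with mass m below it, at time t after release."""
    v = np.sqrt(k * M)                                       # wave 'speed', in mass per time
    half = lambda s: g * (np.mod(s + M, 2 * M) - M)**2 / (4 * k * M)
    return half(m - v * t) + half(m + v * t) - 0.5 * g * t**2

m = np.linspace(0.0, 1.0, 200)
profile = slinky_height(m, 0.3)    # the slinky's shape 0.3 seconds after release
```

At $$t=0$$ this reduces to the parabola $$\frac{g m^2}{2 k M}$$ found earlier, which is a quick sanity check.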
#### Appreciating (and Visualizing) the Solution
OK, so that’s not very pretty to look at, but the real question is what does the answer look like? Time for a cool animated gif.
On the left is an animation of what our ideal slinky looks like as it falls. The really fascinating thing is that the top of the spring actually passes through itself as it falls. I made one end of it a little narrower than the other so that it would be easier to see this passage.
The second interesting part: we can see very clearly in this animation that the bottom of the spring does in fact remain stationary until the wave reaches it. Further, the positions of the ends of the spring are piecewise linear in time, that is, the velocities are constant during any one inversion cycle. Once the next inversion cycle begins, both velocities change, but are again constant. If you convince yourself that this is true during the first phase, then you must also immediately believe it for any subsequent phase, because once the slinky comes back to its initial configuration, the process is the same except it is falling faster. Thus the speeds will be the same except with a velocity offset.
Perhaps the strangest thing is that this isn’t only true for the endpoints. Look closely at the first phase and you will see that all points on the slinky remain at rest until the wave reaches them from the top. So as strange as it sounds, the slinky never actually changes its shape as it falls, aside from when it undergoes the inversion between phases. The slinky will continue leapfrogging its way downward forever in this way, and aside from some abrupt changes in speed, any point on the slinky remains at a constant velocity as it falls. These abrupt changes in speed ensure that the center of mass of the slinky will continue to fall at the acceleration due to gravity.
Now knowing the answer, is it any easier to convince ourselves of it? I often joke around with a friend of mine about the fact that once you know the answer to a problem, the results seem obvious. In accordance with that principle, there are some ways to understand this phenomenon. If we look at the very bottom of the slinky before it is released, the force of tension in the spring must perfectly cancel the weight of the spring at this point. When the slinky is released, these forces are still in equilibrium, and will remain in equilibrium until the neighboring part of the spring begins to move.
You can also think of it this way: the bottom of the slinky does not begin falling until the wave reaches it. The wave in the slinky carries the information that the top has been released, and since this information travels at a finite speed, there is no way that the bottom of the slinky can know that the top has been released right away. It takes a finite amount of time for this message to propagate through the slinky, and it is not until the wave finally arrives at the bottom that it begins to fall. The bottom of the slinky is like an eager friend of a runner in a race living on the other side of the world in ancient times. Before he can start celebrating, he must wait for the message to get to him via a boat halfway around the world. This principle still holds for points higher up on the slinky, too. Any point on the slinky whatsoever will not begin falling until the wave reaches it. This can also be seen from the slow-motion footage shown in the animated gif!
#### “Extending” the Solution
As you can see, nowhere in my analysis have I assumed that the slinky is of a certain length. Therefore, this principle should work for any kind of slinky. To see evidence of this, watch the follow-up YouTube video of an even longer slinky. In this video, you can see that the top of the slinky is trying to push through itself! Since it can’t, it begins to tumble after a while. In this video, they also briefly mention the idea of considering the slinky in the absence of gravity, which is equivalent to moving into a freely falling frame as we did in the above derivation. This amounts to removing the $$\frac{-1}{2} g t^2$$ term. The solution ends up looking like this:
In the end, if you want to do this successfully with a much longer slinky, and show via experiment that the bottom still doesn’t move, then the radius of your slinky must increase proportionally to the length so that slight imperfections in the way it’s dropped, air currents, etc. do not affect the motion.
### The mice problem: more on curves of pursuit
This drawing is titled “Drawn Entirely With Straight Lines”, and it has some interesting geometric properties. I’ll explain how it works.
First, we can mentally break the hexagon shape into smaller modules:
Each equilateral triangle region is identical, and they are reflections of each other. If you placed this hexagon inside of a kaleidoscope, this pattern would repeat infinitely in all directions.
The imaginary curves that are visible in the picture are closely related to the mice problem, which is a special kind of pursuit problem (like what we saw in my tractrix post). The mice problem goes like this: Three mice are initially sitting at the corners of an equilateral triangle. All at once, each of the mice begins crawling with equal speed directly toward the mouse on their right. What is the path of each mouse?
This is an animation of the path of the rodents (not made by me):
Suppose that the rats periodically and simultaneously leave droppings to mark their path. Again, each triple of simultaneous droppings forms an equilateral triangle. What would this look like? I used Mathematica’s Graphics function to illustrate it:
In this animation, the rodents leave droppings more frequently as their collision becomes more imminent (perhaps in eager anticipation of their meeting). The rodents always form an equilateral triangle, which means that the path that they take is also symmetrical — it is a simultaneous rotation and scaling (shrinking) of the triangles. Here is another animation of this effect:
It is precisely the rotation/scale symmetry that allows this gif to loop seamlessly.
But why do the lines curve like that? The angle between the direction a rat is heading and the direction to the center of the triangle remains fixed throughout, and a curve with that property is a logarithmic spiral. If you really use your imagination: just as a regular screw can sink into wood, a screw with this logarithmic shape could in principle be used to enlarge or shrink something.
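The droppings pictures are easy to reproduce numerically: step each mouse a little way toward its neighbor and record the positions as you go. A rough sketch, with unit speed and a step size and stopping tolerance that are arbitrary choices of mine:

```python
import numpy as np

angles = 2 * np.pi * np.arange(3) / 3
mice = np.column_stack([np.cos(angles), np.sin(angles)])   # corners of an equilateral triangle

dt, snapshots = 0.005, [mice.copy()]
for _ in range(2000):
    heading = np.roll(mice, -1, axis=0) - mice              # each mouse aims at the next one
    dist = np.linalg.norm(heading, axis=1, keepdims=True)
    if dist.min() < 1e-3:                                   # stop just before they meet
        break
    mice = mice + dt * heading / dist                       # move at unit speed toward the target
    snapshots.append(mice.copy())
```

Drawing the triangle for each entry of `snapshots` gives the rotating, shrinking family of equilateral triangles shown above.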
Anyway, here is a quick sketch with pencil and paper I made using the mouse algorithm with step size remaining constant (constant dropping rate):
I also did a similar diagram in Mathematica, again with the triangles more dense toward the center to emphasize the symmetry. That is, if each triangle were rotated and scaled the right amount, it would look identical, except with either an extra or a missing outer triangle.
There is a sculpture in San Francisco that looks like a three-dimensional icosahedral illustration of the mice problem (it’s called Icosaspirale, by Charles Perry). This time, each triangular module hides a similar scale/rotation symmetry, except now in three dimensions, with the center of dilation not lying on the plane of the largest triangle, but instead somewhere between the center of the triangle and the center of the icosahedron. It is really remarkable what kind of astounding visual effects can be generated by imposing various symmetries!
### Sudo make me a pseudosphere
You may have seen it before: a magician spreads out a deck of cards on a table into a long row. He tilts the endmost card upward, and the neighboring cards rise accordingly, resting on the adjacent card. The next thing you know there is a beautiful shape, an array of cards coming to a sharp point which can be moved back and forth with the magician’s finger.
This is a tractrix. It’s actually the same shape as the curve that you get when you drag a pole along a line in the ground. You can imagine this by imagining each card in the spread as a pole at a different instant in time. The pole is being dragged by the lower side of the card, and the upper side is the side being dragged, as shown in this animation.
To make things more interesting, what would the cross-section of these cards look like when rotated about a horizontal axis? Each card would rotate to become a cone, so we would be left with a collection of nested cones that together form the surface of revolution of the tractrix.
This surface is called the pseudosphere, and it is a classic example of constant negative curvature (saddle-shaped curvature). What is cool is that we have built a surface with negative curvature out of many cones, which are intrinsically flat in the sense that they could be built from flat waffles to make waffle cones. This presents itself as a wonderful way to build a pseudosphere from paper: cut out a bunch of identical circles from paper, and roll them up into variously sloped cones, and then stack them from most shallow to steepest.
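Working out the sizes is a short calculation. With the pole (card) length set to 1 and the standard tractrix parametrization $$x = t - \tanh t,\ y = \operatorname{sech}\, t$$, every tangent segment down to the axis has length 1, so every cone has slant height 1 and really can be rolled from a unit-radius disk; all that changes from cone to cone is the angle of the sector you cut. A sketch of the numbers, with an arbitrary sampling of the parameter:

```python
import numpy as np

t = np.linspace(0.3, 3.0, 10)          # where along the tractrix each cone sits
radius = 1.0 / np.cosh(t)              # distance of the curve from the axis of revolution

# A cone with slant height 1 and base radius r unrolls onto a unit-radius sector
# whose arc length equals the base circumference 2*pi*r.
sector_angle = np.degrees(2 * np.pi * radius)

for r, ang in zip(radius, sector_angle):
    print(f"base radius {r:.3f}  ->  cut a {ang:6.1f} degree sector from a unit disk")
```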
I calculated the optimum sizes of paper circles and cut them up and created a lovely model of this. It looks like a dunce cap with a very sharp point at the top. In reality, it goes infinitely far up, but I didn’t have enough time to build it that far. I did, however, have enough paper to build the infinite version, because it turns out that the surface area of the pseudosphere is finite despite its infinite extent, which is kind of cool.
The pseudosphere here is a series of stacked cones, each with a slightly different slope. The thing I found particularly interesting about this model is that all of the cones can be cut from equally sized disks of paper.
The following video is what this Pseudosphere model looks like in all its glory.
http://en.wikipedia.org/wiki/Longitude
# Longitude
For Dava Sobel's book about John Harrison, see Longitude (book). For the adaptation of Sobel's book, see Longitude (TV series).
Map of Earth. Longitude (λ): lines of longitude appear vertical with varying curvature in this projection, but are actually halves of great ellipses, with identical radii at a given latitude. Latitude (φ): lines of latitude appear horizontal with varying curvature in this projection, but are actually circular with different radii; all locations with a given latitude are collectively referred to as a circle of latitude. The equator divides the planet into a Northern Hemisphere and a Southern Hemisphere, and has a latitude of 0°.
Longitude[1] is a geographic coordinate that specifies the east-west position of a point on the Earth's surface. It is an angular measurement, usually expressed in degrees and denoted by the Greek letter lambda (λ). Points with the same longitude lie in lines running from the North Pole to the South Pole. By convention, one of these, the Prime Meridian, which passes through the Royal Observatory, Greenwich, England, establishes the position of zero degrees longitude. The longitude of other places is measured as an angle east or west from the Prime Meridian, ranging from 0° at the Prime Meridian to +180° eastward and −180° westward. Specifically, it is the angle between a plane containing the Prime Meridian and a plane containing the North Pole, South Pole and the location in question. This forms a right-handed coordinate system with the z axis (right hand thumb) pointing from the Earth's center toward the North Pole and the x axis (right hand index finger) extending from Earth's center through the equator at the Prime Meridian.
A location's north-south position along a meridian is given by its latitude, which is (not quite exactly) the angle between the local vertical and the plane of the Equator.
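In the spherical idealization, turning a latitude-longitude pair into a direction vector in this right-handed frame is a two-line calculation. The sketch below is mine and deliberately ignores the geodetic subtleties discussed in the next paragraph:

```python
import math

def direction(lat_deg, lon_deg):
    """Unit vector for a point on a spherical Earth: z toward the North Pole,
    x through the equator at the Prime Meridian, y completing the right-handed frame."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

print(direction(51.48, 0.0))   # roughly the direction of Greenwich from the Earth's center
```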
If the Earth were perfectly spherical and homogeneous, then longitude at a point would just be the angle between a vertical north-south plane through that point and the plane of the Greenwich meridian. Everywhere on Earth the vertical north-south plane would contain the Earth's axis. But the Earth is not homogeneous, and has mountains—which have gravity and so can shift the vertical plane away from the Earth's axis. The vertical north-south plane still intersects the plane of the Greenwich meridian at some angle; that angle is astronomical longitude, the longitude you calculate from star observations. The longitude shown on maps and GPS devices is the angle between the Greenwich plane and a not-quite-vertical plane through the point; the not-quite-vertical plane is perpendicular to the surface of the spheroid chosen to approximate the Earth's sea-level surface, rather than perpendicular to the sea-level surface itself.
## History
Main article: History of longitude
Amerigo Vespucci's means of determining longitude
The measurement of longitude is important both to cartography and to safe ocean navigation. For most of history, mariners and explorers struggled to determine precise longitude. Finding a reliable method took centuries, and the history of longitude records the efforts of some of the greatest scientific minds.
Latitude was calculated by observing with quadrant or astrolabe the inclination of the sun or of charted stars, but longitude presented no such manifest means of study.
Amerigo Vespucci was perhaps the first European to proffer a solution, after devoting a great deal of time and energy studying the problem during his sojourns in the New World:
As to longitude, I declare that I found so much difficulty in determining it that I was put to great pains to ascertain the east-west distance I had covered. The final result of my labours was that I found nothing better to do than to watch for and take observations at night of the conjunction of one planet with another, and especially of the conjunction of the moon with the other planets, because the moon is swifter in her course than any other planet. I compared my observations with an almanac. After I had made experiments many nights, one night, the twenty-third of August 1499, there was a conjunction of the moon with Mars, which according to the almanac was to occur at midnight or a half hour before. I found that...at midnight Mars's position was three and a half degrees to the east.[2]
John Harrison solved the greatest problem of his day.[3]
By comparing the relative positions of the moon and Mars with their anticipated positions, Vespucci was able to crudely deduce his longitude. But this method had several limitations: First, it required the occurrence of a specific astronomical event (in this case, Mars passing through the same right ascension as the moon), and the observer needed to anticipate this event via an astronomical almanac. One needed also to know the precise time, which was difficult to ascertain in foreign lands. Finally, it required a stable viewing platform, rendering the technique useless on the rolling deck of a ship at sea. See Lunar distance (navigation).
In 1612, Galileo Galilei proposed that with sufficiently accurate knowledge of the orbits of the moons of Jupiter one could use their positions as a universal clock and this would make possible the determination of longitude, but the method he devised was impracticable[citation needed] and it was never used at sea. In the early 18th century there were several maritime disasters attributable to serious errors in reckoning position at sea, such as the loss of four ships of the fleet of Sir Cloudesley Shovell in the Scilly naval disaster of 1707. Motivated by these disasters, in 1714 the British government established the Board of Longitude: prizes were to be awarded to the first person to demonstrate a practical method for determining the longitude of a ship at sea. These prizes motivated many to search for a solution.
Drawing of Earth with Longitudes
John Harrison, a self-educated English clockmaker, then invented the marine chronometer, a key piece in solving the problem of accurately establishing longitude at sea, thus revolutionising and extending the possibility of safe long-distance sea travel.[3] Though the British rewarded John Harrison for his marine chronometer in 1773, chronometers remained very expensive and the lunar distance method continued to be used for decades. Finally, the combination of the availability of marine chronometers and wireless telegraph time signals put an end to the use of lunars in the 20th century.
Unlike latitude, which has the equator as a natural starting position, there is no natural starting position for longitude. Therefore, a reference meridian had to be chosen. It was a popular practice to use a nation's capital as the starting point, but other significant locations were also used. While British cartographers had long used the Greenwich meridian in London, other references were used elsewhere, including: El Hierro, Rome, Copenhagen, Jerusalem, Saint Petersburg, Pisa, Paris, Philadelphia, Pennsylvania, and Washington D.C. In 1884, the International Meridian Conference adopted the Greenwich meridian as the universal Prime Meridian or zero point of longitude.
## Noting and calculating longitude
Longitude is given as an angular measurement ranging from 0° at the Prime Meridian to +180° eastward and −180° westward. The Greek letter λ (lambda),[4][5] is used to denote the location of a place on Earth east or west of the Prime Meridian.
Each degree of longitude is sub-divided into 60 minutes, each of which is divided into 60 seconds. A longitude is thus specified in sexagesimal notation as 23° 27′ 30″ E. For higher precision, the seconds are specified with a decimal fraction. An alternative representation uses degrees and minutes, where parts of a minute are expressed in decimal notation with a fraction, thus: 23° 27.500′ E. Degrees may also be expressed as a decimal fraction: 23.45833° E. For calculations, the angular measure may be converted to radians, so longitude may also be expressed in this manner as a signed fraction of π (pi), or an unsigned fraction of 2π.
For calculations, the West/East suffix is replaced by a negative sign in the western hemisphere. Confusingly, the convention of negative for East is also sometimes seen. The preferred convention—that East be positive—is consistent with a right-handed Cartesian coordinate system, with the North Pole up. A specific longitude may then be combined with a specific latitude (usually positive in the northern hemisphere) to give a precise position on the Earth's surface.
Longitude at a point may be determined by calculating the time difference between that at its location and Coordinated Universal Time (UTC). Since there are 24 hours in a day and 360 degrees in a circle, the sun moves across the sky at a rate of 15 degrees per hour (360°/24 hours = 15° per hour). So if the time zone a person is in is three hours ahead of UTC then that person is near 45° longitude (3 hours × 15° per hour = 45°). The word near was used because the point might not be at the center of the time zone; also the time zones are defined politically, so their centers and boundaries often do not lie on meridians at multiples of 15°. In order to perform this calculation, however, a person needs to have a chronometer (watch) set to UTC and needs to determine local time by solar or astronomical observation. The details are more complex than described here: see the articles on Universal Time and on the equation of time for more details.
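As a rough illustration of the 15°-per-hour rule just described, the conversion can be sketched in a few lines of Python. This is a toy calculation only; it ignores the equation of time and the political shape of time zones mentioned above.

```python
# Toy sketch: estimate longitude from the offset between local (solar) time
# and UTC, using the 15 degrees-per-hour rule (360 deg / 24 h).
def longitude_from_time_offset(hours_ahead_of_utc):
    """Return an approximate longitude in degrees (+ east, - west)."""
    return hours_ahead_of_utc * 15.0

print(longitude_from_time_offset(3))    # 45.0  -> roughly 45 degrees east
print(longitude_from_time_offset(-5))   # -75.0 -> roughly 75 degrees west
```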
### Singularity and discontinuity of longitude
Note that the longitude is singular at the Poles and calculations that are sufficiently accurate for other positions, may be inaccurate at or near the Poles. Also the discontinuity at the ±180° meridian must be handled with care in calculations. An example is a calculation of east displacement by subtracting two longitudes, which gives the wrong answer if the two positions are on either side of this meridian. To avoid these complexities, consider replacing latitude and longitude with another horizontal position representation in calculation.
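One common way to handle the ±180° discontinuity when subtracting longitudes is to wrap the raw difference back into the interval (−180°, 180°]. A minimal sketch, assuming longitudes are given in signed degrees:

```python
# Wrap the raw longitude difference into (-180, 180] so that an east
# displacement computed across the +/-180 degree meridian comes out correctly.
def east_displacement(lon_from_deg, lon_to_deg):
    diff = (lon_to_deg - lon_from_deg) % 360.0   # now in [0, 360)
    if diff > 180.0:
        diff -= 360.0                            # map (180, 360) to (-180, 0)
    return diff

print(east_displacement(170.0, -170.0))   # 20.0, not -340.0
print(east_displacement(-170.0, 170.0))   # -20.0
```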
## Plate movement and longitude
The Earth's tectonic plates move relative to one another in different directions at speeds on the order of 50 to 100mm per year.[6] So points on the Earth's surface on different plates are always in motion relative to one another, for example, the longitudinal difference between a point on the Equator in Uganda, on the African Plate, and a point on the Equator in Ecuador, on the South American Plate, is increasing by about 0.0014 arcseconds per year. These tectonic movements likewise affect latitude.
If a global reference frame such as WGS84 is used, the longitude of a place on the surface will change from year to year. To minimize this change, when dealing just with points on a single plate, a different reference frame can be used, whose coordinates are fixed to a particular plate, such as NAD83 for North America or ETRS89 for Europe.
## Length of a degree of longitude
The length of a degree of longitude depends only on the radius of a circle of latitude. For a sphere of radius a that radius at latitude φ is (cos φ) times a, and the length of a one-degree (or π/180 radians) arc along a circle of latitude is
$\Delta^1_{\rm LONG}= \frac{\pi}{180}a \cos \phi \,\!$
| $\phi$ | $\Delta^1_{\rm LAT}$ | $\Delta^1_{\rm LONG}$ |
|--------|----------------------|-----------------------|
| 0°     | 110.574 km           | 111.320 km            |
| 15°    | 110.649 km           | 107.551 km            |
| 30°    | 110.852 km           | 96.486 km             |
| 45°    | 111.132 km           | 78.847 km             |
| 60°    | 111.412 km           | 55.800 km             |
| 75°    | 111.618 km           | 28.902 km             |
| 90°    | 111.694 km           | 0.000 km              |
When the Earth is modelled by an ellipsoid this arc length becomes [7][8]
$\Delta^1_{\rm LONG}= \frac{\pi a\cos\phi}{180(1 - e^2 \sin^2 \phi)^{1/2}}\,$
where e, the eccentricity of the ellipsoid, is related to the major and minor axes (the equatorial and polar radii respectively) by
$e^2=\frac{a^2-b^2}{a^2}$
An alternative formula is
$\Delta^1_{\rm LONG}= \frac{\pi}{180}a \cos \psi \,\!$
where $\tan \psi = \frac{b}{a} \tan \phi$
Cos φ decreases from 1 at the equator to zero at the poles, so the length of a degree of longitude decreases likewise. This contrasts with the small (1%) increase in the length of a degree of latitude, equator to pole. The table shows both for the WGS84 ellipsoid with a = 6,378,137.0 m and b = 6,356,752.3142 m. Note that the distance between two points 1 degree apart on the same circle of latitude, measured along that circle of latitude, is slightly more than the shortest (geodesic) distance between those points; the difference is less than 0.6 m.
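The two arc-length formulas above are easy to evaluate directly. A short sketch using the WGS84 axes quoted in this section; it should reproduce the $\Delta^1_{\rm LONG}$ column of the table to within about a metre.

```python
import math

# Length of one degree of longitude at latitude phi, using the spherical and
# ellipsoidal formulas above with the WGS84 axes quoted in the text (metres).
a = 6378137.0
b = 6356752.3142
e2 = (a**2 - b**2) / a**2          # squared eccentricity, as defined above

def degree_of_longitude_sphere(phi_deg):
    phi = math.radians(phi_deg)
    return math.pi / 180.0 * a * math.cos(phi)

def degree_of_longitude_ellipsoid(phi_deg):
    phi = math.radians(phi_deg)
    return math.pi * a * math.cos(phi) / (180.0 * math.sqrt(1.0 - e2 * math.sin(phi) ** 2))

for lat in (0, 15, 30, 45, 60, 75, 90):
    print(lat, round(degree_of_longitude_ellipsoid(lat) / 1000.0, 3), "km")
```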
## Ecliptic latitude and longitude
Main article: Ecliptic coordinate system
Ecliptic latitude and longitude are defined for the planets, stars, and other celestial bodies in a broadly similar way to that in which terrestrial latitude and longitude are defined, but there is a special difference.
The plane of zero latitude for celestial objects is the plane of the ecliptic. This plane is not parallel to the plane of the celestial equator, but rather is inclined to it by the obliquity of the ecliptic, which currently has a value of about 23° 26′. The closest celestial counterpart to terrestrial latitude is declination, and the closest celestial counterpart to terrestrial longitude is right ascension. These celestial coordinates bear the same relationship to the celestial equator as terrestrial latitude and longitude do to the terrestrial equator, and they are also more frequently used in astronomy than celestial longitude and latitude.
The polar axis (relative to the celestial equator) is perpendicular to the plane of the Equator, and parallel to the terrestrial polar axis. But the (north) pole of the ecliptic, relevant to the definition of ecliptic latitude, is the normal to the ecliptic plane nearest to the direction of the celestial north pole of the Equator, i.e. 23° 26′ away from it.
Ecliptic latitude is measured from 0° to 90° north (+) or south (−) of the ecliptic. Ecliptic longitude is measured from 0° to 360° eastward (the direction that the Sun appears to move relative to the stars), along the ecliptic from the vernal equinox. The equinox at a specific date and time is a fixed equinox, such as that in the J2000 reference frame.
However, the equinox moves because it is the intersection of two planes, both of which move. The ecliptic is relatively stationary, wobbling within a 4° diameter circle relative to the fixed stars over millions of years under the gravitational influence of the other planets. The greatest movement is a relatively rapid gyration of Earth's equatorial plane whose pole traces a 47° diameter circle caused by the Moon. This causes the equinox to precess westward along the ecliptic about 50″ per year. This moving equinox is called the equinox of date. Ecliptic longitude relative to a moving equinox is used whenever the positions of the Sun, Moon, planets, or stars at dates other than that of a fixed equinox is important, as in calendars, astrology, or celestial mechanics. The 'error' of the Julian or Gregorian calendar is always relative to a moving equinox. The years, months, and days of the Chinese calendar all depend on the ecliptic longitudes of date of the Sun and Moon. The 30° zodiacal segments used in astrology are also relative to a moving equinox. Celestial mechanics (here restricted to the motion of solar system bodies) uses both a fixed and moving equinox. Sometimes in the study of Milankovitch cycles, the invariable plane of the solar system is substituted for the moving ecliptic. Longitude may be denominated from 0 to $\begin{matrix}2\pi\end{matrix}$ radians in either case.
## Longitude on bodies other than Earth
Planetary co-ordinate systems are defined relative to their mean axis of rotation and various definitions of longitude depending on the body. The longitude systems of most of those bodies with observable rigid surfaces have been defined by references to a surface feature such as a crater. The north pole is that pole of rotation that lies on the north side of the invariable plane of the solar system (near the ecliptic). The location of the Prime Meridian as well as the position of body's north pole on the celestial sphere may vary with time due to precession of the axis of rotation of the planet (or satellite). If the position angle of the body's Prime Meridian increases with time, the body has a direct (or prograde) rotation; otherwise the rotation is said to be retrograde.
In the absence of other information, the axis of rotation is assumed to be normal to the mean orbital plane; Mercury and most of the satellites are in this category. For many of the satellites, it is assumed that the rotation rate is equal to the mean orbital period. In the case of the giant planets, since their surface features are constantly changing and moving at various rates, the rotation of their magnetic fields is used as a reference instead. In the case of the Sun, even this criterion fails (because its magnetosphere is very complex and does not really rotate in a steady fashion), and an agreed-upon value for the rotation of its equator is used instead.
For planetographic longitude, west longitudes (i.e., longitudes measured positively to the west) are used when the rotation is prograde, and east longitudes (i.e., longitudes measured positively to the east) when the rotation is retrograde. In simpler terms, imagine a distant, non-orbiting observer viewing a planet as it rotates. Also suppose that this observer is within the plane of the planet's equator. A point on the Equator that passes directly in front of this observer later in time has a higher planetographic longitude than a point that did so earlier in time.
However, planetocentric longitude is always measured positively to the east, regardless of which way the planet rotates. East is defined as the counter-clockwise direction around the planet, as seen from above its north pole, and the north pole is whichever pole more closely aligns with the Earth's north pole. Longitudes traditionally have been written using "E" or "W" instead of "+" or "−" to indicate this polarity. For example, the following all mean the same thing:
• −91°
• 91°W
• +269°
• 269°E.
The reference surfaces for some planets (such as Earth and Mars) are ellipsoids of revolution for which the equatorial radius is larger than the polar radius; in other words, they are oblate spheroids. Smaller bodies (Io, Mimas, etc.) tend to be better approximated by triaxial ellipsoids; however, triaxial ellipsoids would render many computations more complicated, especially those related to map projections. Many projections would lose their elegant and popular properties. For this reason spherical reference surfaces are frequently used in mapping programs.
The modern standard for maps of Mars (since about 2002) is to use planetocentric coordinates. The meridian of Mars is located at Airy-0 crater.[9]
Tidally-locked bodies have a natural reference longitude passing through the point nearest to their parent body: 0° the center of the primary-facing hemisphere, 90° the center of the leading hemisphere, 180° the center of the anti-primary hemisphere, and 270° the center of the trailing hemisphere.[10] However, libration due to non-circular orbits or axial tilts causes this point to move around any fixed point on the celestial body like an analemma.
## References
1. Oxford English Dictionary
2. Vespucci, Amerigo. "Letter from Seville to Lorenzo di Pier Francesco de' Medici, 1500." Pohl, Frederick J. Amerigo Vespucci: Pilot Major. New York: Columbia University Press, 1945. 76-90. Page 80.
3. ^ a b
4. "λ = Longitude east of Greenwich (for longitude west of Greenwich, use a minus sign)."
John P. Snyder, , USGS Professional Paper 1395, page ix
5. Read HH, Watson Janet (1975). Introduction to Geology. New York: Halsted. pp. 13–15.
6. Rapp, Richard H. (1991). Geometric Geodesy, Part I, Dept. of Geodetic Science and Surveying, Ohio State Univ., Columbus, Ohio.[1](Chapter 3)
http://math.stackexchange.com/questions/234768/function-cannot-be-used-as-p-m-f
# Function cannot be used as p.m.f
I have the following question: Give one reason why the following function cannot be a probability mass function:
$$p(x)= \frac{1}{10} (x+2)$$ where $x=1,2,3$
The solution that I have, though I am not sure whether it is a valid answer, is that the probability masses summed over all possible outcomes are different from $1$.
-
It will help you tremendously in your future studies if you learn to write more correctly; it will help in avoiding confusion when you get to more complicated topics. You are not summing all possible outcomes: the outcomes in this instances are $1, 2, 3$, and whether the outcomes sum to $1$ or not is totally irrelevant. What you are summing is all the probability masses as given by the alleged probability mass function, and these should sum to $1$; but they do not, and hence the given $p(x)$ is not a valid probability mass function. – Dilip Sarwate Nov 11 '12 at 14:36
## 2 Answers
You do need the total probability for all $x$ to sum to $1$, but in this case the sum of the probabilities of all possible outcomes is greater than $1$, not less than.
That is, $3/10 + 4/10 + 5/10 = 12/10 > 1$.
So the core reason is right (i.e., sum must be $1$) but your arithmetic looks to be a bit off.
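For anyone who wants to verify the arithmetic mechanically, a two-line check using exact fractions (so there is no rounding):

```python
from fractions import Fraction

# Sum the alleged probability masses p(x) = (x + 2)/10 over x = 1, 2, 3.
total = sum(Fraction(x + 2, 10) for x in (1, 2, 3))
print(total)   # 6/5, i.e. 12/10 > 1, so p(x) cannot be a p.m.f.
```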
-
The sum is $12/10$ which is greater than $1$.
-
yup just corrected my mistake, thanks =) – VP. Nov 11 '12 at 10:32
http://mathhelpforum.com/algebra/167409-setting-up-quadratic-equation-worded-question-print.html
# Setting up a quadratic equation from a worded question...
• January 4th 2011, 01:35 AM
glovergooner
Setting up a quadratic equation from a worded question...
Hi, not sure whether this is Pre-uni or uni mathematics but any help would be greatly appreciated. The question states:
"The canteen in the Davis Road building occupies a total floor space of 12000m2. It is to be redeveloped into three separate areas. Area 1 will be self service. Its length will be twice its width. Area 2 will be table service. Its width will be 2m more than the width of Area 1 but its length will be half the length of Area 1. Area 3 will be academic staff only. Its width will be 1m more than the width of Area 2 and its length will be 1m more than the length of Area 1. Set up an appropriate quadratic equation, and use to calculate the width and length of Area 1, Area 2 and Area 3. Show all steps in your calculation clearly."
Could anyone please help me with an explanantion of how to go about completing this question? I don't even know where to start...
Thank you.
• January 4th 2011, 01:57 AM
jgv115
Ok think about it this way.
$A1+A2+A3 = 12000$, so all the individual areas add up to the total floor space of $12000m^2$.
Now you have to work out the area of each individual region.
Quote:
Area 1 will be self service. Its length will be twice its width.
We know that $A= l*w$
So in this case $A1=2w*w$
Quote:
Area 2 will be table service. Its width will be 2m more than the width of Area 1 but its length will be half the length of Area 1
$A2 = (2+w) * (0.5* 2w)$
Quote:
Area 3 will be academic staff only. Its width will be 1m more than the width of Area 2 and its length will be 1m more than the length of Area 1
$A3= (3+w) * (2w+1)$
Now sub all those individual equations back into equation 1 and solve for w
$12000 = 2w^2 + (2+w)(w) + (3+w)(2w+1)$
I leave you to do the rest. You have to expand then either factorise or use the quadratic formula
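For reference, here is a minimal sketch of that remaining work in Python, under the setup above. Expanding gives $5w^2 + 9w + 3 = 12000$, i.e. $5w^2 + 9w - 11997 = 0$, which the quadratic formula then solves.

```python
import math

# Remaining work under the setup above: 2w^2 + (2+w)w + (3+w)(2w+1) = 12000
# expands to 5w^2 + 9w + 3 = 12000, i.e. 5w^2 + 9w - 11997 = 0.
A, B, C = 5.0, 9.0, -11997.0
disc = B ** 2 - 4 * A * C
w = (-B + math.sqrt(disc)) / (2 * A)      # only the positive root can be a width

print("width of Area 1 :", round(w, 2))
print("length of Area 1:", round(2 * w, 2))
print("Area 2 (w x l)  :", round(w + 2, 2), "x", round(w, 2))
print("Area 3 (w x l)  :", round(w + 3, 2), "x", round(2 * w + 1, 2))
```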
• January 4th 2011, 03:14 AM
glovergooner
Fantastic, thank you so much. This is great.
http://mathhelpforum.com/trigonometry/55720-sin-sec-cosine.html
# Thread:
1. ## Sin, sec and cosine.
If $\sec^2(x) - 1 = \sin(x)$, which one of the following is a possible value for $x$?
how do you do this?
The choices are
a) - $\frac{\pi}{2}$
b) - $\frac{\pi}{4}$
c) 0
d) $\frac{\pi}{2}$
e) $\frac{\pi}{4}$
thanks in advance!
2. Originally Posted by fabxx
The choices are
a) - $\frac{\pi}{2}$
b) - $\frac{\pi}{4}$
c) 0
d) $\frac{\pi}{2}$
e) $\frac{\pi}{4}$
thanks in advance!
see, by inspection, 0 works. $\sec^2 0 - 1 = 1 - 1 = 0 = \sin 0$
Note that the left hand side (which is actually $\tan^2 x$) is an even function, so it takes the same value at (a) and (d), and at (b) and (e); checking one of each pair already tells you most of what you need. (c) being 0 catches your eye though, and I would test that first, and what do you know, it worked!
doing this with the algebra would be a bit more complicated and time consuming. so just do the check. you have to check at most 3 values, which you can do pretty quickly if you memorized the trig values for the special angles.
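If you would rather check numerically than recall exact values, a quick sketch (note that $\sec x$ is undefined at $\pm\frac{\pi}{2}$, so those two choices fail immediately):

```python
import math

# Numeric check of the remaining candidates (sec x is undefined at +/- pi/2,
# so those options are out immediately).
def lhs(x):
    return 1.0 / math.cos(x) ** 2 - 1.0      # sec^2(x) - 1, i.e. tan^2(x)

for x in (-math.pi / 4, 0.0, math.pi / 4):
    print(round(x, 4), "lhs =", round(lhs(x), 6), "sin =", round(math.sin(x), 6))
# Only x = 0 gives lhs equal to sin(x); at +/- pi/4 the left side is 1.
```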
http://en.wikipedia.org/wiki/Pattern_recognition
# Pattern recognition
For other uses, see Pattern recognition (disambiguation).
In machine learning, pattern recognition is the assignment of a label to a given input value. An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of classes (for example, determine whether a given email is "spam" or "non-spam"). However, pattern recognition is a more general problem that encompasses other types of output as well. Other examples are regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values (for example, part of speech tagging, which assigns a part of speech to each word in an input sentence); and parsing, which assigns a parse tree to an input sentence, describing the syntactic structure of the sentence.
Pattern recognition algorithms generally aim to provide a reasonable answer for all possible inputs and to perform "most likely" matching of the inputs, taking into account their statistical variation. This is opposed to pattern matching algorithms, which look for exact matches in the input with pre-existing patterns. A common example of a pattern-matching algorithm is regular expression matching, which looks for patterns of a given sort in textual data and is included in the search capabilities of many text editors and word processors. In contrast to pattern recognition, pattern matching is generally not considered a type of machine learning, although pattern-matching algorithms (especially with fairly general, carefully tailored patterns) can sometimes succeed in providing similar-quality output to the sort provided by pattern-recognition algorithms.
Pattern recognition is studied in many fields, including psychology, psychiatry, ethology, cognitive science, traffic flow and computer science.
## Overview
Pattern recognition is generally categorized according to the type of learning procedure used to generate the output value. Supervised learning assumes that a set of training data (the training set) has been provided, consisting of a set of instances that have been properly labeled by hand with the correct output. A learning procedure then generates a model that attempts to meet two sometimes conflicting objectives: Perform as well as possible on the training data, and generalize as well as possible to new data (usually, this means being as simple as possible, for some technical definition of "simple", in accordance with Occam's Razor, discussed below). Unsupervised learning, on the other hand, assumes training data that has not been hand-labeled, and attempts to find inherent patterns in the data that can then be used to determine the correct output value for new data instances. A combination of the two that has recently been explored is semi-supervised learning, which uses a combination of labeled and unlabeled data (typically a small set of labeled data combined with a large amount of unlabeled data). Note that in cases of unsupervised learning, there may be no training data at all to speak of; in other words, the data to be labeled is the training data.
Note that sometimes different terms are used to describe the corresponding supervised and unsupervised learning procedures for the same type of output. For example, the unsupervised equivalent of classification is normally known as clustering, based on the common perception of the task as involving no training data to speak of, and of grouping the input data into clusters based on some inherent similarity measure (e.g. the distance between instances, considered as vectors in a multi-dimensional vector space), rather than assigning each input instance into one of a set of pre-defined classes. Note also that in some fields, the terminology is different: For example, in community ecology, the term "classification" is used to refer to what is commonly known as "clustering".
The piece of input data for which an output value is generated is formally termed an instance. The instance is formally described by a vector of features, which together constitute a description of all known characteristics of the instance. (These feature vectors can be seen as defining points in an appropriate multidimensional space, and methods for manipulating vectors in vector spaces can be correspondingly applied to them, such as computing the dot product or the angle between two vectors.) Typically, features are either categorical (also known as nominal, i.e., consisting of one of a set of unordered items, such as a gender of "male" or "female", or a blood type of "A", "B", "AB" or "O"), ordinal (consisting of one of a set of ordered items, e.g., "large", "medium" or "small"), integer-valued (e.g., a count of the number of occurrences of a particular word in an email) or real-valued (e.g., a measurement of blood pressure). Often, categorical and ordinal data are grouped together; likewise for integer-valued and real-valued data. Furthermore, many algorithms work only in terms of categorical data and require that real-valued or integer-valued data be discretized into groups (e.g., less than 5, between 5 and 10, or greater than 10).
### Probabilistic classifiers
Many common pattern recognition algorithms are probabilistic in nature, in that they use statistical inference to find the best label for a given instance. Unlike other algorithms, which simply output a "best" label, oftentimes probabilistic algorithms also output a probability of the instance being described by the given label. In addition, many probabilistic algorithms output a list of the N-best labels with associated probabilities, for some value of N, instead of simply a single best label. When the number of possible labels is fairly small (e.g., in the case of classification), N may be set so that the probability of all possible labels is output. Probabilistic algorithms have many advantages over non-probabilistic algorithms:
• They output a confidence value associated with their choice. (Note that some other algorithms may also output confidence values, but in general, only for probabilistic algorithms is this value mathematically grounded in probability theory. Non-probabilistic confidence values can in general not be given any specific meaning, and only used to compare against other confidence values output by the same algorithm.)
• Correspondingly, they can abstain when the confidence of choosing any particular output is too low.
• Because of the probabilities output, probabilistic pattern-recognition algorithms can be more effectively incorporated into larger machine-learning tasks, in a way that partially or completely avoids the problem of error propagation.
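As a minimal illustration of the last two points, the sketch below uses made-up class probabilities (standing in for a real probabilistic classifier's output) to return an N-best list and to abstain when the top probability falls below a threshold; the function names and the 0.6 threshold are arbitrary choices for the example.

```python
# Minimal sketch: N-best output and abstention from a probabilistic classifier.
def n_best(label_probs, n):
    """Return the n most probable (label, probability) pairs."""
    return sorted(label_probs.items(), key=lambda kv: kv[1], reverse=True)[:n]

def classify_or_abstain(label_probs, threshold=0.6):
    label, prob = n_best(label_probs, 1)[0]
    return label if prob >= threshold else None   # None = abstain, confidence too low

probs = {"spam": 0.55, "non-spam": 0.45}           # hypothetical classifier output
print(n_best(probs, 2))                            # [('spam', 0.55), ('non-spam', 0.45)]
print(classify_or_abstain(probs))                  # None: 0.55 < 0.6, so abstain
```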
### How many feature variables are important?
Feature selection algorithms attempt to directly prune out redundant or irrelevant features. A general introduction to feature selection, summarizing approaches and challenges, has been given.[1] Because of its non-monotonous character, feature selection is an optimization problem in which, given a total of $n$ features, the power set consisting of all $2^n-1$ non-empty subsets of features needs to be explored. The branch-and-bound algorithm[2] reduces this complexity but is intractable for medium to large numbers of available features $n$. For a large-scale comparison of feature-selection algorithms, see [3].
Techniques to transform the raw feature vectors (feature extraction) are sometimes used prior to application of the pattern-matching algorithm. For example, feature extraction algorithms attempt to reduce a large-dimensionality feature vector into a smaller-dimensionality vector that is easier to work with and encodes less redundancy, using mathematical techniques such as principal components analysis (PCA). The distinction between feature selection and feature extraction is that the resulting features after feature extraction has taken place are of a different sort than the original features and may not easily be interpretable, while the features left after feature selection are simply a subset of the original features.
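A minimal sketch of feature extraction in this sense, using principal components analysis computed with an SVD in NumPy (an illustration only, not a production implementation):

```python
import numpy as np

# Sketch of PCA-style feature extraction: project centred feature vectors onto
# the top-k principal directions obtained from an SVD.
def pca_reduce(X, k):
    """X: (n_samples, n_features) array; returns (n_samples, k) coordinates."""
    Xc = X - X.mean(axis=0)                      # centre each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # coordinates along the top-k components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                   # 100 instances, 10 raw features
print(pca_reduce(X, 3).shape)                    # (100, 3)
```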
## Problem statement (supervised version)
Formally, the problem of supervised pattern recognition can be stated as follows: Given an unknown function $g:\mathcal{X}\rightarrow\mathcal{Y}$ (the ground truth) that maps input instances $\boldsymbol{x} \in \mathcal{X}$ to output labels $y \in \mathcal{Y}$, along with training data $\mathbf{D} = \{(\boldsymbol{x}_1,y_1),\dots,(\boldsymbol{x}_n, y_n)\}$ assumed to represent accurate examples of the mapping, produce a function $h:\mathcal{X}\rightarrow\mathcal{Y}$ that approximates as closely as possible the correct mapping $g$. (For example, if the problem is filtering spam, then $\boldsymbol{x}_i$ is some representation of an email and $y$ is either "spam" or "non-spam"). In order for this to be a well-defined problem, "approximates as closely as possible" needs to be defined rigorously. In decision theory, this is defined by specifying a loss function that assigns a specific value to "loss" resulting from producing an incorrect label. The goal then is to minimize the expected loss, with the expectation taken over the probability distribution of $\mathcal{X}$. In practice, neither the distribution of $\mathcal{X}$ nor the ground truth function $g:\mathcal{X}\rightarrow\mathcal{Y}$ are known exactly, but can be computed only empirically by collecting a large number of samples of $\mathcal{X}$ and hand-labeling them using the correct value of $\mathcal{Y}$ (a time-consuming process, which is typically the limiting factor in the amount of data of this sort that can be collected). The particular loss function depends on the type of label being predicted. For example, in the case of classification, the simple zero-one loss function is often sufficient. This corresponds simply to assigning a loss of 1 to any incorrect labeling and implies that the optimal classifier minimizes the error rate on independent test data (i.e. counting up the fraction of instances that the learned function $h:\mathcal{X}\rightarrow\mathcal{Y}$ labels wrongly, which is equivalent to maximizing the number of correctly classified instances). The goal of the learning procedure is then to minimize the error rate (maximize the correctness) on a "typical" test set.
For a probabilistic pattern recognizer, the problem is instead to estimate the probability of each possible output label given a particular input instance, i.e., to estimate a function of the form
$p({\rm label}|\boldsymbol{x},\boldsymbol\theta) = f\left(\boldsymbol{x};\boldsymbol{\theta}\right)$
where the feature vector input is $\boldsymbol{x}$, and the function f is typically parameterized by some parameters $\boldsymbol{\theta}$.[4] In a discriminative approach to the problem, f is estimated directly. In a generative approach, however, the inverse probability $p({\boldsymbol{x}|\rm label})$ is instead estimated and combined with the prior probability $p({\rm label}|\boldsymbol\theta)$ using Bayes' rule, as follows:
$p({\rm label}|\boldsymbol{x},\boldsymbol\theta) = \frac{p({\boldsymbol{x}|\rm label}) p({\rm label|\boldsymbol\theta})}{\sum_{L \in \text{all labels}} p(\boldsymbol{x}|L) p(L|\boldsymbol\theta)}.$
When the labels are continuously distributed (e.g., in regression analysis), the denominator involves integration rather than summation:
$p({\rm label}|\boldsymbol{x},\boldsymbol\theta) = \frac{p({\boldsymbol{x}|\rm label}) p({\rm label|\boldsymbol\theta})}{\int_{L \in \text{all labels}} p(\boldsymbol{x}|L) p(L|\boldsymbol\theta) \operatorname{d}L}.$
The value of $\boldsymbol\theta$ is typically learned using maximum a posteriori (MAP) estimation. This finds the best value that simultaneously meets two conflicting objectives: to perform as well as possible on the training data (smallest error-rate) and to find the simplest possible model. Essentially, this combines maximum likelihood estimation with a regularization procedure that favors simpler models over more complex models. In a Bayesian context, the regularization procedure can be viewed as placing a prior probability $p(\boldsymbol\theta)$ on different values of $\boldsymbol\theta$. Mathematically:
$\boldsymbol\theta^* = \arg \max_{\boldsymbol\theta} p(\boldsymbol\theta|\mathbf{D})$
where $\boldsymbol\theta^*$ is the value used for $\boldsymbol\theta$ in the subsequent evaluation procedure, and $p(\boldsymbol\theta|\mathbf{D})$, the posterior probability of $\boldsymbol\theta$, is given by
$p(\boldsymbol\theta|\mathbf{D}) = \left[\prod_{i=1}^n p(y_i|\boldsymbol{x}_i,\boldsymbol\theta) \right] p(\boldsymbol\theta).$
In the Bayesian approach to this problem, instead of choosing a single parameter vector $\boldsymbol{\theta}^*$, the probability of a given label for a new instance $\boldsymbol{x}$ is computed by integrating over all possible values of $\boldsymbol\theta$, weighted according to the posterior probability:
$p({\rm label}|\boldsymbol{x}) = \int p({\rm label}|\boldsymbol{x},\boldsymbol\theta)p(\boldsymbol{\theta}|\mathbf{D}) \operatorname{d}\boldsymbol{\theta}.$
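As a toy illustration of the generative route described above (not of the full Bayesian integration over $\boldsymbol\theta$), the sketch below fits one Gaussian class-conditional density per class by maximum likelihood, estimates the class priors from label frequencies, and combines them with Bayes' rule to obtain posterior label probabilities. SciPy and NumPy are assumed available; the data and all function names are invented for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy generative classifier: Gaussian p(x | label) per class, class priors from
# label frequencies, combined via Bayes' rule to give p(label | x).
def fit_gaussian_classes(X, y):
    params = {}
    for label in np.unique(y):
        Xc = X[y == label]
        params[label] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False), len(Xc) / len(X))
    return params

def posterior(x, params):
    joint = {lab: prior * multivariate_normal.pdf(x, mean=mu, cov=cov)
             for lab, (mu, cov, prior) in params.items()}
    total = sum(joint.values())                  # the denominator in Bayes' rule
    return {lab: p / total for lab, p in joint.items()}

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)), rng.normal(3.0, 1.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(posterior(np.array([2.5, 2.5]), fit_gaussian_classes(X, y)))
```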
### Frequentist or Bayesian approach to pattern recognition?
The first pattern classifier – the linear discriminant presented by Fisher – was developed in the Frequentist tradition. The frequentist approach entails that the model parameters are considered unknown, but objective. The parameters are then computed (estimated) from the collected data. For the linear discriminant, these parameters are precisely the mean vectors and the Covariance matrix. Also the probability of each class $p({\rm label}|\boldsymbol\theta)$ is estimated from the collected dataset. Note that the usage of ‘Bayes rule’ in a pattern classifier does not make the classification approach Bayesian.
Bayesian statistics has its origin in Greek philosophy where a distinction was already made between the ‘a priori’ and the ‘a posteriori’ knowledge. Later Kant defined his distinction between what is a priori known – before observation – and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilities $p({\rm label}|\boldsymbol\theta)$ can be chosen by the user, which are then a priori. Moreover, experience quantified as a priori parameter values can be weighted with empirical observations – using e.g., the Beta- (conjugate prior) and Dirichlet-distributions. The Bayesian approach facilitates a seamless intermixing between expert knowledge in the form of subjective probabilities, and objective observations.
Probabilistic pattern classifiers can be used according to a frequentist or a Bayesian approach.
## Uses
The face was automatically detected by special software.
Within medical science, pattern recognition is the basis for computer-aided diagnosis (CAD) systems. CAD describes a procedure that supports the doctor's interpretations and findings.
Other typical applications of pattern recognition techniques are automatic speech recognition, classification of text into several categories (e.g., spam/non-spam email messages), the automatic recognition of handwritten postal codes on postal envelopes, automatic recognition of images of human faces, or handwriting image extraction from medical forms.[5] The last two examples form the subtopic image analysis of pattern recognition that deals with digital images as input to pattern recognition systems.[6][7]
Optical character recognition is a classic example of the application of a pattern classifier. The method of signing one's name was captured with stylus and overlay starting in 1990.[citation needed] The strokes, speed, relative minima and maxima, acceleration, and pressure are used to uniquely identify and confirm identity. Banks were first offered this technology, but were content to collect from the FDIC for any bank fraud and did not want to inconvenience customers.[citation needed]
Neural networks (neural net classifiers) have many real-world applications in image processing, a few examples:
• identification and authentication: e.g., license plate recognition,[8] fingerprint analysis and face detection/verification;[9]
• medical diagnosis: e.g., screening for cervical cancer (Papnet)[10] or breast tumors;
• defence: various navigation and guidance systems, target recognition systems, etc.
For a discussion of the aforementioned applications of neural networks in image processing, see e.g.[11]
In psychology, pattern recognition, making sense of and identifying the objects we see is closely related to perception, which explains how the sensory inputs we receive are made meaningful. Pattern recognition can be thought of in two different ways: the first being template matching and the second being feature detection. A template is a pattern used to produce items of the same proportions. The template-matching hypothesis suggests that incoming stimuli are compared with templates in the long term memory. If there is a match, the stimulus is identified. Feature detection models, such as the Pandemonium system for classifying letters (Selfridge, 1959), suggest that the stimuli are broken down into their component parts for identification. For example, a capital E has three horizontal lines and one vertical line.[12]
## Algorithms
Algorithms for pattern recognition depend on the type of label output, on whether learning is supervised or unsupervised, and on whether the algorithm is statistical or non-statistical in nature. Statistical algorithms can further be categorized as generative or discriminative.
### Categorical sequence labeling algorithms (predicting sequences of categorical labels)
Supervised:
• Conditional random fields (CRFs)
• Hidden Markov models (HMMs)
• Maximum entropy Markov models (MEMMs)
Unsupervised:
• Hidden Markov models (HMMs)
### Classification algorithms (supervised algorithms predicting categorical labels)
Parametric:[13]
• Linear discriminant analysis
• Quadratic discriminant analysis
• Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class.)
Nonparametric:[14]
• Decision trees, decision lists
• Kernel estimation and K-nearest-neighbor algorithms
• Naive Bayes classifier
• Neural networks (multi-layer perceptrons)
• Perceptrons
• Support vector machines
• Gene expression programming
### Clustering algorithms (unsupervised algorithms predicting categorical labels)
• Categorical mixture models
• Deep learning methods
• Hierarchical clustering (agglomerative or divisive)
• K-means clustering
• Kernel principal component analysis (Kernel PCA)
### Parsing algorithms (predicting tree structured labels)
### Regression algorithms (predicting real-valued labels)
Supervised:
• Gaussian process regression (kriging)
• Linear regression and extensions
• Neural networks
## Which classifier to choose for a classification task?
This article lists a wide range of statistical classifiers for supervised and unsupervised classification tasks, clustering, and general regression prediction. When building a classifier, for example for a software application, a number of different aspects influence which classifier type is preferable.
Building or training a classifier is essentially statistical inference: an attempt is made to identify stochastic (often unknown) relations between feature variables and the categories to be predicted. Consider, for example, the influence of increased cholesterol on a patient's risk of a heart attack within the next year. Which other variables besides the current cholesterol level determine this risk? The two categories for the classifier to predict are 'heart attack likely' and 'heart attack unlikely'.
The theoretically optimal classifier is called the Bayes classifier; it minimizes the loss function, or risk. When all types of misclassification incur equal losses (mistaking outcome A for B is as undesirable as mistaking B for A), the Bayes classifier is the one with the minimal error rate (on a test set) and is optimal for the classification task. In general, the optimal classifier type and the true parameters $\boldsymbol{\theta}$ are unknown, but bounds on the optimal Bayes error rate have been derived. For the K-nearest-neighbor classifier, for example, theoretical results bound its error rate in terms of the optimal Bayes error rate.
In essence, building a classifier brings model selection with it. Feature selection – using only a subset of the available feature variables to predict the most likely categorical outcome – by itself entails model selection. Choosing among the extensive set of different classifiers makes model selection even more complex. A theoretical analysis of this search problem has been presented as the no free lunch theorem:[15] No particular classification algorithm is 'the best' for all problems. The pragmatic approach to this open problem is to combine prior knowledge of the classification task (e.g. distributional assumptions) with a search process where different types of classifiers are developed and their performance compared.
The search for the 'best' model that predicts observations and the relations between them is a problem that was already recognized in ancient Greece. In medieval times, Occam's razor was formulated:
“plurality should not be posited without necessity”.
In this context, it means that if a simple classifier with only a few parameters (a small $\boldsymbol{\theta}$) does the job as well as a much more complex classification algorithm, the simpler one should be chosen. Note, however, that the performance of a classifier is only one of the criteria to apply when choosing the best classification model. Others include distributional assumptions, the insight provided into the discovered relations between variables, whether the classification algorithm can cope with missing feature variables, whether a change in class prior probability[16] can be incorporated, the speed of the training algorithm, memory requirements, parallelization of the classification process, and resemblance to the human perceptual system. In visual pattern recognition, invariance to variations in color, rotation and scale is an extra property that needs to be accounted for.
### Supervised classification
When choosing the most appropriate supervised classifier, the generally accepted heuristic is to:
1. Separate the available data, at random, into a training set and a test set. Use the test set only for the final performance comparison of the trained classifiers.
2. Experiment by training a number of classification algorithms, including parametric (discriminant analysis, multinomial classifier[17] ) and non-parametric algorithms (k-nearest neighbor, a support vector machine, a feed-forward neural network, a standard decision-tree algorithm).
3. Test distributional assumptions of the (continuous) feature-distributions per category. Are they Gaussian?
4. Which subset of feature variables contributes mostly to the discriminative performance of the classifier?
5. Are elaborate confidence intervals needed for the error-rates and class-predictions[18] ?
6. White-box versus black-box considerations may render specific classifiers unsuited for the job.
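Steps 1 and 2 of the heuristic above can be sketched with scikit-learn (assumed available) on synthetic data; the classifier choices, dataset, and split ratio below are arbitrary examples.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Steps 1-2 of the heuristic on synthetic data: hold out a test set, train one
# parametric and one non-parametric classifier, compare test accuracy at the end.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (LogisticRegression(max_iter=1000), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, round(clf.score(X_test, y_test), 3))
```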
## References
1. Isabelle Guyon Clopinet, André Elisseeff (2003). An Introduction to Variable and Feature Selection. The Journal of Machine Learning Research, Vol. 3, 1157-1182. Link
2. Iman Foroutan, Jack Sklansky (1987). "Feature Selection for Automatic Classification of Non-Gaussian Data". IEEE Transactions on Systems, Man and Cybernetics 17 (2): 187–198. doi:10.1109/TSMC.1987.4309029. .
3. Mineichi Kudo, Jack Sklansky (2000). "Comparison of algorithms that select features for pattern classifiers". Pattern Recognition 33 (1): 25–41. doi:10.1016/S0031-3203(99)00041-2. .
4. For linear discriminant analysis the parameter vector $\boldsymbol\theta$ consists of the two mean vectors $\boldsymbol\mu_1$ and $\boldsymbol\mu_2$ and the common covariance matrix $\boldsymbol\Sigma$.
5. Milewski, Robert; Govindaraju, Venu (31 March 2008). "Binarization and cleanup of handwritten text from carbon copy medical form images". Pattern Recognition 41 (4): 1308–1315. doi:10.1016/j.patcog.2007.08.018.
6. Richard O. Duda, Peter E. Hart, David G. Stork (2001) Pattern classification (2nd edition), Wiley, New York, ISBN 0-471-05669-3
7. R. Brunelli, Template Matching Techniques in Computer Vision: Theory and Practice, Wiley, ISBN 978-0-470-51706-2, 2009
8. Egmont-Petersen, M., de Ridder, D., Handels, H. (2002). "Image processing with neural networks - a review". Pattern Recognition 35 (10): 2279–2301. doi:10.1016/S0031-3203(01)00178-9.
9. "A-level Psychology Attention Revision - Pattern recognition | S-cool, the revision website". S-cool.co.uk. Retrieved 2012-09-17.
10. No distributional assumption regarding shape of feature distributions per class.
11. David H. Wolpert (2001). The Supervised Learning No Free Lunch Theorems. Technical report MS-269-1, NASA Ames Research Center. Link
12. Relative frequency of each class in the training and test sets.
13. Ned Glick (1973) Sample-Based Multinomial Classification, Biometrics, Vol. 29, No. 2, pp. 241-256.
14. Geoffrey J. McLachlan (2004) Discriminant Analysis and Statistical Pattern Recognition, Wiley Series in Probability and Statistics, New Jersey, ISBN 0-471-69115-1
## Further reading
• Fukunaga, Keinosuke (1990). Introduction to Statistical Pattern Recognition (2nd ed.). Boston: Academic Press. ISBN 0-12-269851-7.
• Bishop, Christopher (2006). Pattern Recognition and Machine Learning. Berlin: Springer. ISBN 0-387-31073-8.
• Koutroumbas, Konstantinos; Theodoridis, Sergios (2008). Pattern Recognition (4th ed.). Boston: Academic Press. ISBN 1-59749-272-8.
• Hornegger, Joachim; Paulus, Dietrich W. R. (1999). Applied Pattern Recognition: A Practical Introduction to Image and Speech Processing in C++ (2nd ed.). San Francisco: Morgan Kaufmann Publishers. ISBN 3-528-15558-2.
• Schuermann, Juergen (1996). Pattern Classification: A Unified View of Statistical and Neural Approaches. New York: Wiley. ISBN 0-471-13534-8.
• Godfried T. Toussaint, ed. (1988). Computational Morphology. Amsterdam: North-Holland Publishing Company.
• Kulikowski, Casimir A.; Weiss, Sholom M. (1991). Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Machine Learning. San Francisco: Morgan Kaufmann Publishers. ISBN 1-55860-065-5.
• Jain, Anil.K.; Duin, Robert.P.W.; Mao, Jianchang (2000). "Statistical pattern recognition: a review". IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (1): 4–37. doi:10.1109/34.824819.
• An introductory tutorial to classifiers (introducing the basic terms, with numeric example)
http://physics.aps.org/articles/v2/17
# Viewpoint: Cloaking at a distance
, School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, Fife KY16 9SS, Scotland, UK
Published March 2, 2009 | Physics 2, 17 (2009) | DOI: 10.1103/Physics.2.17
A proposal for a new type of cloaking device suggests a way to hide both a distant object and the cloak itself.
#### Complementary Media Invisibility Cloak that Cloaks Objects at a Distance Outside the Cloaking Shell
Yun Lai, Huanyang Chen, Zhao-Qing Zhang, and C. T. Chan
Published March 2, 2009 | PDF (free)
Electromagnetic cloaking devices that render objects invisible are now an active research subject, a development that has understandably aroused considerable interest among researchers, the military, and the general public. The basic idea of cloaking is a device that guides incoming light around an interior region such that the light emerges as if it had propagated through empty space; any object in the interior region is thereby rendered invisible. In a paper appearing in Physical Review Letters, Yun Lai and colleagues at the Hong Kong University of Science and Technology present a theoretical design for cloaking that is far less intuitive than this simple picture [1]. In their design, the object to be hidden is placed at a specific position in the vicinity of a new type of cloaking device that is tailor-made for the object. At this position, and only at this position, both object and cloak become invisible. In practice, this “cloaking at a distance” would be severely limited, but it is another example of how geometrical ideas can reveal strange possibilities.
The original cloaking recipes [2, 3] used coordinate transformations of Maxwell’s equations as a design tool. In simple terms: they introduced a material medium that, as regards electromagnetism, makes light bend in a curved path around a “hole.” The path that the light follows is the same that would be expected if Maxwell’s equations were transformed to a different coordinate system. Anything placed in this hole is inaccessible to electromagnetism and thus is hidden, but if the transformation ultimately maps the coordinate system back to itself at some distance from the hole, then the solutions beyond that distance are the same as if the material were not there—the material (cloak), the hole, and anything inside the hole are invisible.
Mathematically, the coordinate transformation produces a singularity, which in real space shows up as an infinite phase velocity for the light at the boundary of the hole. Phase velocities faster than the speed of light in vacuum are possible, but only in dispersive materials where the optical behavior changes significantly with frequency. In practice, these severe demands mean that the cloak will only operate at a single frequency and in the first experimental realization of this idea, the cloak worked only at a microwave frequency [4]. More recently, theorists have figured out ways to overcome the need for infinite phase velocities [5].
In the last few years, Graeme Milton, Nicolae Nicorovici, and collaborators [6, 7, 8] have studied a type of cloaking at a distance where point dipoles outside of the cloak are hidden. In contrast, Lai et al.'s new theory works for objects of arbitrary size and shape [1]. Their approach to cloaking again exploits the equivalence between making a coordinate transformation and introducing a certain type of optical material. In this case, the transformation is a folding of a spatial coordinate back on itself, for example $x \to x' = -x$. This kind of transformation is equivalent [9] to introducing a medium with a negative refractive index (sometimes called left-handed media) in which an entering light ray is bent (refracted) in the "wrong direction." The classic example [10] corresponds to the simple transformation above and gives permittivity and permeability $\varepsilon' = \mu' = -1$ (in empty space $\varepsilon = \mu = 1$), but a general folding transformation will give negative $\varepsilon'$ and $\mu'$ that vary in space.
In a medium of refractive index $n$, light experiences a distance $s$ as an effective distance (called an optical path length) given by $ns$. Thus a transformation that folds back on itself gives rise to a negative optical path length. To see that the optical path length is negative, consider the transformation $x' = -x$, applied to the region $0 < x < L$ in Fig. 1a. This folds $x = L$ back onto $x = -L$, so these two planes become optically equivalent. The region $-L < x < 0$ of overlap is exactly equivalent to a material with $\varepsilon' = \mu' = -1$, which cancels out a slab of empty space of the same thickness [9]: as light moves from $0$ to $-L$ the negative-index material "undoes" the propagation from $L$ to $0$ so that the light ends up exactly as it was at $L$.
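Written out explicitly, the bookkeeping of this argument (with the optical path length taken as $ns$ in each region) is simply
$\underbrace{(+1)\,L}_{\text{vacuum, }0<x<L} + \underbrace{(-1)\,L}_{\text{slab, }-L<x<0} = 0,$
so the accumulated optical path from $x = L$ to $x = -L$ vanishes and the field at $x = -L$ reproduces the field at $x = L$.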
This optical cancellation property of the folding transformation can also be used to cancel an object. Consider an object, with permittivity $\varepsilon_{\rm ob}$ and permeability $\mu_{\rm ob}$, within a distance $L$ from the same slab [Fig. 1b]. The transformation $x' = -x$ in the region $0 < x < L$ gives a negative-index slab as before, but the transformation of Maxwell's equations in the object produces an effective material with permittivity $-\varepsilon_{\rm ob}$ and permeability $-\mu_{\rm ob}$, as shown in [Fig. 1b]. The cancellation argument holds as before, so we now have a left-handed slab that will optically cancel a slab of empty space containing the object. The part of the slab with permittivity $-\varepsilon_{\rm ob}$ and permeability $-\mu_{\rm ob}$ functions as an "anti-object".
In their description of cloaking at a distance, Lai et al. switch to spherical coordinates and perform the folding transformation on the radial coordinate. Figure 1c shows how one can design a cloak that optically cancels a spherical shell of empty space. The transformation folds the shell $b < r < c$ onto the shell $a < r < b$, meaning space is also compressed in the transformation. This results in an inhomogeneous shell with negative $\varepsilon'$ and $\mu'$. The region from $r = a$ to $r = c$ now has zero optical length, but for cloaking we require something that behaves optically like empty space, not like the absence of space. This is remedied by filling the central core $r < a$ with a material of permittivity $\varepsilon''$ and permeability $\mu''$ that has the optical path length of an empty sphere of radius $c$. The device now behaves optically as empty space and is therefore invisible. Finally, we can imagine placing an object in the empty shell $b < r < c$ and use the folding transformation to obtain an anti-object in the left-handed shell [Fig. 1d]. The object and the sphere are now invisible, provided the actual object is placed in the exact location where it is mapped onto the anti-object by the folding transformation.
Lai et al. [1] have confirmed their design by numerical simulations. A more conventional description of this cloaking effect is that the scattered light from the object and the device interfere destructively. But this conventional picture has its own cloaking effect: it completely hides the simple geometry operating behind the scenes.
The cloaking recipe described here only works in a stationary regime, where the light can explore and adjust to the entire region of cloak plus object: there is no spooky action at a distance. Moreover, this effect could only be implemented in a very limited way. Left-handed media can be constructed using modern metamaterials [11], but a given device functions only in a narrow bandwidth and losses are a major problem. Nevertheless, the work described here is another example of how clever thinking about geometry and new materials are inspiring a new approach to optics (see [12] for a review). While the major practical impact of these developments will probably be in rather mundane devices, the full richness of this new terrain needs to be explored. Even in the 21st century, classical optics still has some magic left.
### References
1. Y. Lai, H. Chen, Z-Q. Zhang, and C. T. Chan, Phys. Rev. Lett. 102, 093901 (2009).
2. U. Leonhardt, Science 312, 1777 (2006).
3. J. B. Pendry, D. Schurig, and D. R. Smith, Science 312, 1780 (2006).
4. D. Schurig, J. J. Mock, B. J. Justice, S. A. Cummer, J. B. Pendry, A. F. Starr, and D. R. Smith, Science 314, 977 (2006).
5. U. Leonhardt and T. Tyc, Science 323, 110 (2009).
6. G. W. Milton and N. A. P. Nicorovici, Proc. R. Soc. A 462, 3027 (2006).
7. N. A. P. Nicorovici, G. W. Milton, R. C. McPhedran, and L. C. Botten, Opt. Express 15, 6314 (2007).
8. G. W. Milton, N. A. P. Nicorovici, R. C. McPhedran, K. Cherednichenko, and Z. Jacob, New. J. Phys. 10, 115021 (2008).
9. U. Leonhardt and T. G. Philbin, New J. Phys. 8, 247 (2006).
10. V. G. Veselago, Sov. Phys. Usp. 10, 509 (1968).
11. A. K. Sarychev and V. M. Shalaev, Electrodynamics of Metamaterials (World Scientific, 2007).
12. U. Leonhardt and T. G. Philbin, arXiv:0805.4778; Prog. Optics (to be published).
### About the Author: Thomas Philbin
Thomas Philbin completed his research degrees at Trinity College Dublin and the National University of Ireland. He later held postdoctoral appointments at the University of St Andrews in Scotland, the Max Planck Institute in Erlangen, Germany, and the National University of Singapore. He currently holds a Royal Society of Edinburgh/Scottish Government Personal Research Fellowship at St Andrews. His current research interests are Casimir forces, artificial black holes, and uses of differential geometry in optics.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 47, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8911191821098328, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/2310/monte-carlo-portfolio-risk-simulation/2313
|
# Monte carlo portfolio risk simulation
My objective is to show the distribution of a portfolio's expected utilities via random sampling.
The utility function has two random components. The first component is an expected return vector which is shocked by a random Gaussian variable ($ER$). The shock shifts the location parameter of the return distribution.
The second random component is a covariance matrix which is multiplied by a random scalar $S$. The scalar is drawn from a normal random variable, centered on $1$, where the variance of $S$ corresponds to our uncertainty in whether volatility increases or decreases.
My initial plan was to take separate draws of $S$ and $ER$ and simply calculate the utility. However, these random variables are clearly not independent. In particular, research suggests that when volatility is increasing (i.e. $S > 1$), the expected returns distribution has a lower overall mean. Or, if volatility is contracting rapidly, it is likely that the expected returns distribution has a higher mean.
In other words, I should be sampling from the joint distribution of $ER$ and $S$ as opposed to the marginal distributions of $ER$ and $S$ separately.
What's a good technique to estimate and sample these random variables from their joint distribution? The "correct" approach I can think of involves defining a set of states, estimating the transition probabilities between state pairs, and sampling $ER$ and $S$ conditional on random draws of a third state variable. Seems like overkill!
A crude variation of this would be to build a transition matrix such as [ High Vol to High Vol, Low to Low, Low to High Vol, High to Low Vol ] where the location and scale parameters ($ER$ and $S$) are informed by an "expert" (i.e. casual inspection of the data).
Are there other techniques that I may be missing that provide a solid "80/20 rule" solution for sampling from this joint distribution, or is state-space (markov models and the like) the only way to go? For example, perhaps there is a non-parametric technique to estimate the relationship between these two variables.
-
"research suggests that when volatility is increasing (i.e. 'S' is > 1), the expected returns distribution has a lower overall mean" if you assume a diffusion process of the sort $dS=mSdt+sSdz$ which has a mean of $m-s^2/2$ – strimp099 Nov 3 '11 at 16:22
## 1 Answer
Since both $ER$ and $S$ are gaussian random, why not just assume their dependence is captured by their covariance, and make your draws from the bivariate normal distribution? It is hard to construct any other way of making two marginal gaussians cointegrated.
Even if the variables were not gaussian, you would probably find yourself relating them using a gaussian copula anyway.
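A minimal sketch of this suggestion (my own illustration with placeholder numbers, not taken from the answer): draw the mean shock $ER$ and the variance scalar $S$ jointly from a bivariate normal whose correlation is negative, so that draws with $S>1$ tend to come with a lower expected-return shock, and then evaluate a mean-variance utility for each draw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters for the joint draw of (ER shock, S).
mu = np.array([0.0, 1.0])              # ER shock centred on 0, S centred on 1
sd = np.array([0.02, 0.15])            # marginal standard deviations (illustrative)
rho = -0.5                             # negative dependence: high S <-> lower ER shock
cov = np.array([[sd[0]**2,            rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

draws = rng.multivariate_normal(mu, cov, size=10_000)
er_shock, s = draws[:, 0], draws[:, 1]

# Mean-variance utility for each scenario, with an illustrative two-asset portfolio.
base_mu = np.array([0.05, 0.07])                 # baseline expected returns
base_cov = np.array([[0.04, 0.01],
                     [0.01, 0.09]])              # baseline covariance matrix
w = np.array([0.6, 0.4])                         # portfolio weights
lam = 3.0                                        # risk aversion

utilities = (w @ (base_mu[None, :] + er_shock[:, None]).T
             - 0.5 * lam * s * (w @ base_cov @ w))
# `utilities` is the sampled distribution of expected utilities.
```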
-
Right on +1. Question is how to estimate the covariance. Perhaps using a non-parametric bootstapping approach -- or using the mean of rolling historical returns, and the sum of elements of a rolling covariance matrix and estimating the covariance of these pairs? – Quant Guy Nov 4 '11 at 18:16
I think the best way to go about this is to fit a gaussian copula to capture the dependency between the marginal distributions. This is identical to saying the dependence is captured by their covariance. Thanks! – Quant Guy Nov 7 '11 at 16:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9301966428756714, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/102146/is-following-subset-w-of-v-also-a-subspace?answertab=votes
|
# Is following subset W of V also a subspace?
Under the following conditions
$a, b \in \mathbb{R}, V = \mathbb{R}^{2}, W = \{(x, y)\ |\ ax + by = 0 \}$
is W a subspace of V? I know the basics, but how would I prove that this subset is closed under addition and scalar multiplication?
-
## 1 Answer
Let $(v_1,v_2),(w_1,w_2)\in W.$ We have $$\begin{cases} av_1+bv_2=0\\ aw_1+bw_2=0 \end{cases}$$ What happens when you add these equations? What happens when you multiply the first equation by a real number $r$?
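Spelling out the hint (this completion is not part of the original answer): adding the two equations gives
$$a(v_1+w_1)+b(v_2+w_2)=(av_1+bv_2)+(aw_1+bw_2)=0+0=0,$$
so $(v_1,v_2)+(w_1,w_2)\in W$, and multiplying the first equation by $r\in\mathbb{R}$ gives
$$a(rv_1)+b(rv_2)=r(av_1+bv_2)=r\cdot 0=0,$$
so $r(v_1,v_2)\in W$. Together with $(0,0)\in W$, this shows $W$ is a subspace of $V$.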
-
Thanks, this is actually very clear to me now.. I feel stupid for not seeing that myself. So it is obviously a subspace, since both new equations you mention are still zero? – Mats Jan 24 '12 at 23:28
Comment corrected. Yes. Only you need to use the algebraic properties of the real numbers to see the vectors $(v_1+w_1,v_2+w_2)$ and $(rv_1,rv_2)$ in the resulting equations. – user23211 Jan 24 '12 at 23:39
Yes, of course. Thanks a lot! – Mats Jan 24 '12 at 23:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9496903419494629, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/library.php?do=view_item&itemid=38
|
Physics Forums
elastic collision
Definition/Summary
A collision is said to be elastic if the total kinetic energy of all the bodies involved in the collision remains constant. Most collisions are NOT elastic. Conservation of momentum applies to ALL unrestrained collisions.
Equations
For a two-body collision, $$m_1v_1^2 + m_2v_2^2 = m_1(v_1^\prime)^2 + m_2(v_2^\prime)^2$$
Breakdown
Physics > Classical Mechanics >> Newtonian Dynamics
Extended explanation
When two bodies with known velocities collide, two (or more) equations are generally needed to calculate their velocities after the collision. Conservation of momentum and of angular momentum are always valid equations for this purpose (but see below for restrictions on their use in restrained collisions). Conservation of energy is not, unless the problem specifies that the collision is elastic. For example, although conservation of energy applies to a body sliding or rolling down a curved path, it does not apply if the path has a sharp angle, such as where a ramp meets the ground, since that is a collision, and there is no reason to assume that it is elastic.

Restrained collision: A collision is restrained if an external impulsive force acts on one or more of the bodies involved. For example, consider an unattached ball colliding with a ball hanging on the end of a string. If the collision is from below, then the string will go slack, and there is no force from the string to consider. If the collision is from above, then the tension in the string will force the first ball to move along an imaginary spherical surface … this tension force is external and impulsive, and cannot be ignored. However, it is along the direction of the string, which is vertical, and so conservation of momentum still applies in any horizontal direction. Conservation of angular momentum also applies, about any axis which passes through the string … but in this case it gives the same result as conservation of momentum, which is easier to use.

If, instead of a ball on a string, there is a rigid rod suspended from a hinge, then in all cases the hinge forces the rod to move so that (obviously) the hinged end is stationary. So there is an external impulsive force at the hinge. Unlike the string, the direction of this impulse at the hinge is unknown, and so there is no known direction in which momentum is conserved. However, the impulse has no torque about any axis through the hinge, and so angular momentum is conserved about any such axis.

If one ball is constrained to move along a physical track, or on a physical surface, then there will be an external impulsive reaction force (there may also be a friction force, but it should not be impulsive). Momentum will be conserved in the direction of the track, or in any direction along the surface.

Impulse: Impulse is force times time. From Newton's second law, total impulse = change in momentum. By comparison, work done is force times distance, and total work done = change in energy. A force which imposes a sudden change in momentum (a "jerk") is impulsive. A force which imposes a smooth change in momentum is not impulsive. This is all in the context of a collision … any collision, if examined with a fast enough "camera", will be smooth, but we prefer to examine it on a longer time-scale, in which it is jerky by comparison with other forces. For example, friction and gravity are forces which are determined smoothly by the distance moved, and so are not impulsive.

Why doesn't conservation of energy apply to an inelastic collision? The problem is that we are focusing on kinetic energy. In an inelastic collision, kinetic energy is converted into other forms. Common examples include heat, sound, and shock waves.
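For the special case of a one-dimensional, unrestrained elastic collision, the two conservation laws can be solved explicitly for the final velocities. A small sketch (standard textbook result, added here for illustration):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities for a 1D elastic collision of two point masses.

    Solves conservation of momentum and of kinetic energy:
        m1*v1 + m2*v2       = m1*v1p + m2*v2p
        m1*v1**2 + m2*v2**2 = m1*v1p**2 + m2*v2p**2
    """
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# Equal masses exchange velocities:
print(elastic_collision_1d(1.0, 2.0, 1.0, 0.0))  # (0.0, 2.0)
```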
Commentary
tiny-tim @ 03:58 PM Aug29-08
Added Restrained collision and Impulse, and reference to angular momentum, all as a result of comments by i_island0 in a forum thread.
Hootenanny @ 08:03 AM May11-08
Modified the definition and added the equation.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211683869361877, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/101262/projective-varieties-basics
|
Projective varieties - basics
I am taking an introductory course in (real) algebraic geometry and I got stuck on some basic exercises.
They concern affine and (real) projective varieties, as follows:
1. Prove that the punctured projective space, $\mathbb{P}^n - \{x\}$ is neither projective, nor quasi-affine, when $n \geq 2$.
2. Prove that $\mathbb{P}^1 \times \mathbb{P}^1$ and $\mathbb{P}^2$ are birationally equivalent, but not isomorphic.
Now, to clarify some things. We study more or less based on I. R. Shafarevich - Basic Algebraic Geometry (or something like that).
Our definitions of the notions involved are:
• X is quasi-affine if it is a Zariski open set in an affine variety
• $\mathbb{P}^n$ is supposed to mean $\mathbb{P}^n(k)$, for an algebraically closed field $k$, and is the space of "directions" in $\mathbb{A}^{n+1}-\{0\}$. Specifically, $\mathbb{P}^n=(\mathbb{A}^{n+1}-\{0\})/\sim$, where $x\sim y \Leftrightarrow \exists \lambda \in k^\times \text{ s.t. } x=\lambda\cdot y$.
• birational equivalence means that there exist rational functions from either to the other, whose composite is the identity (either way), but that these need not be defined everywhere
• isomorphism is usually treated in terms of isomorphic fields under the isomorphism induced by the initial morphism.
Note that the course is absolutely basic, without (co)homology, schemes, sheaves etc. Just the basics that I listed, along with Krull dimension.
Thank you.
-
1 Answer
1) a) The variety $V=\mathbb{P}^n - \{x\}$ is not projective because it is not compact (an argument valid for $k=\mathbb C$ ).
b) It is not quasi-affine because the global functions $\Gamma(V,\mathcal O_V)=k \;$ do not generate the sheaf $\mathcal O_V$.
[This argument may be a bit premature with respect to your present knowledge; if that is the case, come back to this answer a little later. Anyway the concept of quasi-affine variety is essentially useless in an introductory course. You have plenty of vital notions to absorb before.]
2) a) Represent $\mathbb{P}^1 \times \mathbb{P}^1$ as a quadric $Q\subset \mathbb{P}^3$ and project $Q$ on a plane $P \subset \mathbb{P}^3$ from a point $p \in \mathbb{P}^3$ not on $Q$. This will yield a birational equivalence.
b) Two curves in $\mathbb{P}^2$ always intersect (weak Bézout) , whereas for $a\neq b$ the curves $\lbrace a\rbrace \times \mathbb{P}^1\subset \mathbb{P}^1 \times \mathbb{P}^1$ and $\lbrace b\rbrace \times \mathbb{P}^1\subset \mathbb{P}^1 \times \mathbb{P}^1$ are disjoint.
Edit Here are two alternative proofs addressing Adrian's request in his comment.
1) a) Over an arbitrary algebraically closed field $k$, the variety $V=\mathbb{P}^n_k \setminus \lbrace x\rbrace$ is not projective.
Suppose $x=[1:0:...:0]$ and consider the curve $\;\mathbb A^1_k\setminus \lbrace 0\rbrace\to V:t\mapsto [1:t:...:0]$. It cannot be extended across $t=0$, but if $V$ were projective it could.
1) b) The variety $V$ is not quasi-affine either. If it were an open subset of the affine variety $W$, its $k$-algebra of global regular functions $k[V]$ would suffice to separate its points: just take the restrictions $f|V$ of the global functions $f$ on $W$.
This is not the case at all since the global regular functions on $V$ extend to $\mathbb{P}^n$ and are thus constant : $k[V]=k$
-
Thank you very much, but: 1.a) We generally work over a generic algebraically closed field, so I suppose a particular argument for $k=\mathbb{C}$ would not do... b) I have zero knowledge about sheafs... I am quite sure that there is an elementary argument, since our prof usually gives easy questions :) 2. a&b) Great, thank you! – AdrianM Jan 22 '12 at 14:18
Dear @Adrian, I have written an edit giving a more general or more elementary proof of the statements you mention. – Georges Elencwajg Jan 22 '12 at 16:46
Thanks again. It's not entirely clear, to be honest, but surely it will become once I study a bit more. I find it quite hard many times when I study something which I don't understand or like from a textbook to accept some explanations which don't resemble the textbook. It is the case here my knowledge regarding algebraic geometry is veeeery fragile :) Anyway, sincerely thank you! – AdrianM Jan 22 '12 at 17:43
Dear @Adrian: algebraic geometry is a hard subject to learn for everybody because of the numerous techniques involved. It can also be frustrating because intuitively obvious results like "$\mathbb A^n$ has dimension $n$ " are difficult to prove rigorously. So don't be discouraged by your difficulties: you will overcome them! – Georges Elencwajg Jan 22 '12 at 18:46
Ha-ha! Thank you, sir! The problem is that I have a final examination tommorow morning and questions like these are far from my reach. It bugs me, because they seem basic, like stuff you obtain as a direct consequence of the definitions. However, thankfully I did grasp most of the other subjects proposed. :) – AdrianM Jan 22 '12 at 19:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9397755861282349, "perplexity_flag": "head"}
|
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.rae/1199377486
|
### Weighted Orlicz-Type Integral Inequalities for the Hardy Operator
C. J. Neugebauer
Source: Real Anal. Exchange Volume 32, Number 2 (2006), 495-510.
#### Abstract
We study integral inequalities for the Hardy operator $Hf$ of the form $\int_0^\infty\Phi[Hf^p]\,d\mu\leq c_0\int_0^\infty\Phi[c_1f^p]\,d\mu$, where $\Phi$ is convex, $\mu$ is a measure on $\mathbb R_+$, $1\leq p < \infty$, and $f$ is non-increasing. The results we obtain are extensions of the classical $B_p-$ weight theory [1,5].
First Page:
Primary Subjects: 42B25, 42B35
Keywords: weights; Hardy operator
Full-text: Open access
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7854352593421936, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/hamiltonian?sort=votes&pagesize=30
|
# Tagged Questions
1answer
275 views
### State of Matrix Product States
What is a good summary of the results about the correspondence between matrix product states (MPS) or projected entangled pair states (PEPS) and the ground states of local Hamiltonians? Specifically, ...
3answers
68 views
### Constructing a Hamiltonian (as a polynomial of $q_i$ and $p_i$) from its spectrum
For a countable sequence of positive numbers $S=\{\lambda_i\}_{i\in N}$ is there a construction producing a Hamiltonian with spectrum $S$ (or at least having the same eigenvalues for $i\leq s$ for ...
2answers
499 views
### Regularisation of infinite-dimensional determinants
Can a regularisation of the determinant be used to find the eigenvalues of the Hamiltonian in the normal infinite dimensional setting of QM? Edit: I failed to make myself clear. In finite ...
1answer
86 views
### Hamilton operator in absence of causal order?
I hope, this question isn't too broad or vague. In a recent paper, Ognyan Oreshkov et al. worked out a theory of quantum correlations in absence of any causal order, dropping the assumptions of a ...
2answers
686 views
### How to construct the Hamiltonian matrix?
I'm trying to understand if there's a more systematic approach to build the matrix associated with the Hamiltonian in a quantum system of finite dimension. For example, I know that for the ammonia ...
2answers
391 views
### Expectation value of time-dependent Hamiltonian
I'm trying to solve a problem in QM with a forced quantum oscillator. In this problem I have a quantum oscillator, which is in the ground state initially. At $t=0$, the force $F(t)=F_0 \sin(\Omega t)$ ...
3answers
152 views
### How to express a Hamiltonian operator as a matrix
Suppose we have Hamiltonian on $\mathbb{C}^2$ $$H=\hbar(W+\sqrt2(A^{\dagger}+A))$$ We also know $AA^{\dagger}=A^{\dagger}A-1$ and $A^2=0$, letting $W=A^{\dagger}A$ How can we express $H$ as \$H=\hbar ...
3answers
463 views
### What is the relationship between Schrödinger equation and Boltzmann equation?
The Schrödinger equation in its variants for many particle systems gives the full time evolution of the system. Likewise, the Boltzmann equation is often the starting point in classical gas dynamics. ...
1answer
187 views
### Question concerning the Lindhard function
I'm having a question concerning the Lindhard function. The reference I'm using is the standard text "Quantum Theory of Solids" by Charles Kittel. I'm concerned with Chapter 6, subchapter "Method of ...
4answers
574 views
### How to calculate the quantum expectation of frequency of a particle?
I know how to calculate the expectation of < $\Psi$|A|$\Psi$ > where the operator A is the eigenfunction of energy, momentum or position, but I'm not sure how to perform this for a pure frequency. ...
2answers
215 views
### Can an Electromagnetic Gauge Transformation be Imaginary?
The Hamiltonian of a non-relativistic charged particle in a magnetic field is $$\hat{H}~=~\frac{1}{2m} \left[\frac{\hbar}{i}\vec\nabla - \frac{q}{c}\vec A\right]^2$$. Under a gauge transformation ...
1answer
614 views
### Evolution operator for time-dependent Hamiltonian
When i studyed QM I'm only working with non time-dependent Hamiltonians. In this case unitary evolution operator has the form $$\hat{U}=e^{-\frac{i}{\hbar}Ht}$$ that follows from this equation ...
1answer
119 views
### Finding the energy levels of an electron in a plane perpendicular to a uniform magnetic field
Suppose we have an electron, mass $m$, charge $-e$, moving in a plane perpendicular to a uniform magnetic field $\vec{B}=(0,0,B)$. Let $\vec{x}=(x_1,x_2,0)$ be its position and $P_i,X_i$ be the ...
2answers
126 views
### Do asymptotically similar potentials yield similar energy levels asymptotically?
Let there be given two Hamiltonians $$H_1~=~ p^{2}+f(x) \qquad \mathrm{and} \qquad H_2~=~ p^{2}+g(x).$$ Let's suppose that for big big $x$, the potentials are asymptotically similar in the sense ...
1answer
201 views
### Second quantization
In second quantization we use Hamiltonian in form: $$H=\int d^3x [ \psi^{\dagger}(x) h \psi(x)],$$ where $h$ is Hamiltonian density. The field operators have following form: \psi = \sum\limits _{i} ...
1answer
208 views
### Does the vacuum energy problem of quantum field theory only occur in the Hamiltonian approach, or also in the path integral approach and in AQFT?
In a standard QFT class, you're being indoctrinated that there is the "infinite vacuum energy density problem". (This is sometimes paraphrased as the "cosmological constant problem", which is in my ...
1answer
162 views
### center of mass Hamiltonian of a Hydrogen atom
I'm working through Mattuck's "A Guide to Feynman Diagrams in the Many-Body Problem", but I'm stuck on a bit which I feel should be trivial. In section 3.2 (p 43 in the Dover edition) he gives a ...
2answers
92 views
### What is a symmetry of a physical system?
If I understand correctly, in many context in physics (quantum mechanics?), a physical system is specified by giving its Hamiltonian. I also hear that symmetries are rather essential. As far as the ...
1answer
128 views
### How does a state in quantum mechanics evolve?
I have a question about the time evolution of a state in quantum mechanics. The time-dependent Schrodinger equation is given as $$i\hbar\frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle$$ I am ...
1answer
154 views
### Conjugate Transpose of Hamiltonian Matrix
I read some notes saying, $$i\hbar \frac{dC_{i}(t)}{dt} = \sum_{j}^{} H_{ij}(t)C_{j}(t)\tag{1}$$ where $C_{i}(t) = \langle i|\psi(t)\rangle$ and $H_{ij}$ is hamiltonian matrix. However, what is ...
1answer
137 views
### Alkali atom in oscilating electromagnetic field
I am trying to calculate atom - light (EM field) interaction Hamiltonian, and the results I get seem to me rather unphysical - I get some nonzero matrix elements which should not be there. Please, can ...
1answer
93 views
### It seems to me that superpotentials can be defined in a theory with or without supersymmetry. Is this true?
I recently read "An Introduction to Supersymmetry in Quantum Mechanical Systems" by T. Wellman (amongst other sources) in an effort to find out what a superpotential actually is and how it relates to ...
1answer
492 views
### How to write the Fröhlich Hamiltonian in one dimension?
I am currently working on a (functional) analysis problem refining Pekar's Ansatz (or adiabatic approximation, as it is called in his beautiful 1961 manuscript "Research in Electron Theory of ...
3answers
230 views
### When Hamiltonian and the total energy are the same
In which condition, the Hamiltonian is the same as the total energy of the system, or say $H=T+V$?
3answers
544 views
### Is there a valid Lagrangian formulation for all classical systems?
Can one use the Lagrangian formalism for all classical systems, i.e. systems with a set of trajectories $\vec{x}_i(t)$ describing paths? On the wikipedia page of Lagrangian mechanics, there is an ...
2answers
108 views
### How do we know that $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$?
I am kind of new to this eigenvalue, eigenfunction and operator things, but I have come across this quote many times: $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$. ...
1answer
157 views
### Find the Hamiltonian given $\dot p$ and $\dot q$
I have these equations: $$\dot p=ap+bq,$$ $$\dot q=cp+dq,$$ and I have to find the conditions such as the equations are canonical. Then, I have to find the Hamiltonian $H$. To answer to the first ...
4answers
302 views
### Why the Hamiltonian and the Lagrangian are used interchangeably in QFT perturbation calculations
Whenever one needs to calculate correlation functions in QFT using perturbations one encounters the following expression: $\langle 0| some\ operators \times \exp(iS_{(t)}) |0\rangle$ where, ...
1answer
103 views
### Perturbation method & eigenvalues
I have a problem but I don't understand the question. It says: "Show that, to first order in energy, the eigenvalues are unchanged." What does it mean? It means that if the Hamiltonian has the ...
1answer
303 views
### Solving time dependent Schrodinger equation in matrix form
If we have a Hilbert space of $\mathbb{C}^3$ so that a wave function is a 3-component column vector $$\psi_t=(\psi_1(t),\psi_2(t),\psi_3(t))$$ With Hamiltonian $H$ given by H=\hbar\omega ...
1answer
52 views
### Hamiltonian of polymer chain
I'm reading up on classical mechanics. In my book there is an example of a simple classical polymer model, which consists of N point particles that are connected by nearest neighbor harmonic ...
1answer
179 views
### Computing a density of states of Hamiltonian $H=xp$
How could I compute the integral $$N(E)~=~ \int dx \int dp~ H(E-xp)$$ the 'Area' inside the Phase space is taken for $x \ge 0$ and $p\ge 0$? The result should be N(E)~=~ ...
1answer
93 views
### Does a constant of motion always imply a Hamiltonian formulation?
If a continuous dynamical system has a constant of motion that is a function of all its variables, and is not already evidently Hamiltonian, is it always possible to use a change of variables and ...
2answers
402 views
### Two expressions for expectation value of energy
I was looking up expectation value of energy for a free particle on the following webpage: http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/expect.html It says that $E=\frac{p^2}{2m}$ and ...
1answer
132 views
### The relation between Hamiltonian and Energy
I know Hamiltonian can be energy and be a constant of motion if and only if: Lagrangian be time-independent, potential be independent of velocity, coordinate be time independent. Otherwise ...
0answers
42 views
### Boundary condition Hamiltonian with point tinteractions
I`m studying the Hamiltonian with point interaction centered in $y$ in three dimensions. I know that the elements in the domain of the Hamiltonian are of the form $$\psi=\phi+qG^z(\cdot-y)$$ where ...
1answer
47 views
### Hamiltonians, density of state, BECs
When working with Bose-Einstein condensates trapped in potentials, how can one tell what the density of state of a system of identical bosons given the Hamiltonian, $H$? (I have been told that it is ...
1answer
67 views
### What does $\psi_j(r_i)$ mean?
I have a mean-field Hamiltonian for N electrons. The mean-field potential felt by electron $i$ at position ${\bf r}_i$ is given by $V^{(i)}_{int}({\bf r}_i)=\sum_{j\ne i}|\psi_j({\bf r}_i)|^2$ I ...
1answer
199 views
### Cyclic Coordinates in Hamiltonian Mechanics
I was reading up on Hamiltonian Mechanics and came across the following: If a generalized coordinate $q_j$ doesn't explicitly occur in the Hamiltonian, then $p_j$ is a constant of motion ...
3answers
120 views
### The notion of bounded states in quantum mechanics and their characterization with operators
Is there any case of potential $V$, such that the continuity of the operator $H=c\ \Delta+V$ is not spoiled? And I don't know any non-differnetial operator examples for continous spectra. I ...
1answer
129 views
### Quantum Stat-Mech Proof of an Inequality for the Partition Function
I have the following problem that I was unable to solve for class, but I had a couple first steps that I started with that I am unable to finish. I know I can't get this since it's already been ...
3answers
189 views
### Factors of $c$ in the Hamiltonian for a charged particle in electromagnetic field
I've been looking for the Hamiltonian of a charged particle in an electromagnetic field, and I've found two slightly different expressions, which are as follows: H=\frac{1}{2m}(\vec{p}-q \vec{A})^2 ...
1answer
148 views
### Symmetry and overlapping of ground states
In a quantum mechanics, there is the following formula to derive the zero energy $E_0$ of a perturbed Hamiltonian $$H = H_0 + V$$ knowing the zero energy $W_0$ of the free Hamiltonian $H_0$: E_0 = ...
1answer
266 views
### The Hermiticity of the Laplacian (and other operators)
Is the Laplacian operator, $\nabla^{2}$, a Hermitian operator? Alternatively: is the matrix representation of the Laplacian Hermitian? i.e. \langle \nabla^{2} x | y \rangle = \langle x | ...
1answer
306 views
### Commutation relation with Hamiltonian
How do we get $[\beta , L] = 0$ , where $L$= orbital angular momentum and $\beta$= matrix from Dirac equation?
3answers
262 views
### Propagators and Probabilities in the Heisenberg Picture
I'm trying to understand why $$\Bigl|\langle0|\phi(x)\phi(y)|0\rangle\Bigr|^2$$ is the probability for a particle created at $y$ to propagate to $x$ where $\phi$ is the Klein-Gordon field. What's ...
2answers
386 views
### Canonical transformations and conservation of energy
I have an important doubt about the nature of canonical transformations in hamiltonian mechanics. Suppose I have a one-degree-of-freedom lagrangian system, whose hamiltonian depends explicitly on ...
1answer
68 views
### Transform hamiltonian
I have got the following Quantum Hamiltonian: $$H=\frac{p^{2}}{2m}+k_{1}x^{2}+k_{2}x+k_{3}$$ Which transformation can I use to change this Hamiltonian into an harmonic oscillator hamiltonian? ...
2answers
216 views
### Confusion about Free Energy and the Hamiltonian
I'm probably making a relatively basic mistake here, but I'm a bit confused about the relation between the Hamiltonian and Helmholtz free energy. From what I can see, the free energy can be written ...
1answer
83 views
### Where can I find hamiltonians + lagrangians?
Where would you say I can start learning about Hamiltonians, Lagrangians ... Jacobians? and the like? I was trying to read Ibach and Luth - Solid State Physics, and suddenly (suddenly a Hamiltonian ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9194150567054749, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/8331/how-is-formula-for-converting-pressure-from-mmhg-to-pa-derived
|
# How is formula for converting pressure from mmHg to Pa derived?
Today my younger brother asked me where the 1 Pa = 0.00750061683 mmHg formula for a mercury barometer comes from. He needs a way to derive it, or an academic source that can be cited.
After doing some calculations we got the formula for a standard U-tube manometer: $P=\frac{h_2}{h_1}P_0$ where $P_0$ is atmospheric pressure, $P$ is the pressure being measured, $h_1$ is the height of the mercury column exposed to atmospheric pressure and $h_2$ is the height of the column exposed to the pressure being measured.
The problem is that in the case of a barometer, the $h_2$ is exposed to vacuum and I don't know how to use that.
I've searched the Internet, and got countless sites which explain how a mercury column barometer works, but I was unable to find a site which explains which forces are acting there and how the number was derived. To make things even worse, none of the physics books I have access to have a detailed explanation.
-
## 1 Answer
If the height difference between the mercury level in the two arms is $h$ (it's called $\Delta h$ on the figure), then
$$P_1 - P_2 = h\rho g$$
where $P_1,P_2$ are the pressures in both wings (called $P,P_{\rm ref}$ on the figure). One of them is the measured atmospheric pressure. The two pressures are being subtracted because the air pushes the liquid from the two sides in two opposite directions. You may also move $P_2$ to the right hand side, so that the two sides exactly express the pressure in both directions (to be specific, you may think about forces acting on a special separator inserted to the point $B$ at the bottom of the figure - most of the mercury cancels, only the height difference doesn't).
The basic-school formula $h\rho g$ for the pressure may be derived as the force of the mercury column per unit area of the base. The mass is $V\rho = A h\rho$, the force is $g$ times larger i.e. $A h \rho g$, and the force per unit area is therefore $h\rho g$ because $A$ cancels. My derivation is only valid for "cylindrical" shapes but the $h\rho g$ formula is actually true for any shape - the pressure only depends on the depth $h$ beneath the surface.
Restricting our attention to the pressure and height differences only, it's clear that $h=1$ millimetre of mercury corresponds to the pressure difference:
$$\delta P = h \rho g = 0.001 \,{\rm m} \times 13,595.1\, {\rm kg}/{\rm m}^3 \times 9.80665\,{\rm m}/{\rm sec}^2 = 133.322 \,{\rm Pa}$$
The inverse relationship is that 1 pascal is equivalent to $1/133.322 = 0.0075006$ mmHg. The exact values of the densities are a little bit conventional - the densities depend on temperature and pressure, and the gravitational acceleration depends on the place. In the past, 1 mmHg wasn't needed that accurately. In the modern era, we define 1 mmHg by your relationship, and 1 Pa is much more accurately defined in terms of "fundamental physics".
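The arithmetic can be checked directly with the conventional density and standard gravity quoted above (a quick sketch, not part of the original answer):

```python
rho = 13595.1    # conventional density of mercury, kg/m^3
g = 9.80665      # standard gravity, m/s^2
h = 0.001        # 1 mm expressed in metres

mmHg_in_Pa = h * rho * g
print(mmHg_in_Pa)      # ~133.322 Pa per mmHg
print(1 / mmHg_in_Pa)  # ~0.0075006 mmHg per Pa
```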
-
Thanks a lot! The 15 character limit and 15 second limit is idiotic. – AndrejaKo Apr 9 '11 at 19:28
@AndrejaKo The minimum character limit is there to filter out comments that just add noise, such as "Thanks a lot!". Upvotes and Accepts should be thanks enough. – deadly Mar 5 at 11:56
@deadly Except I've had numerous situations where just a few characters would be sufficient. Also don't assume that I don't know to accept and upvote. – AndrejaKo Mar 5 at 19:52
@AndrejaKo I was attempting to explain the rationale behind the minimum character requirement, not impugning your ability to accept and upvote. – deadly Mar 5 at 22:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537628293037415, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/65034?sort=newest
|
## Useful tricks in experimental mathematics
There are a few computational tricks which are useful in experimental mathematics. These tricks are mostly very elementary and often only given as exercises in books. A typical example is the following:
Suppose that a sequence $s_0,s_1,s_2,\dots$ converges exponentially fast. Then the sequence $t_i=s_i-\frac{(s_{i+1}-s_i)^2}{s_{i+2}-2s_{i+1}+s_{i}}$ converges (generally) faster and has the same limit. Having only access to a few initial terms of a sequence which seems to converge quickly, this trick improves thus guesses concerning the limit.
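For concreteness, here is a short sketch of the acceleration just described (Aitken's $\Delta^2$ process); the test sequence, a fixed-point iteration for $\cos x = x$, is my own choice and not part of the question.

```python
import math

def aitken(s):
    """t_i = s_i - (s_{i+1}-s_i)^2 / (s_{i+2} - 2*s_{i+1} + s_i)."""
    return [s[i] - (s[i + 1] - s[i]) ** 2 / (s[i + 2] - 2 * s[i + 1] + s[i])
            for i in range(len(s) - 2)]

# Test sequence: x_{n+1} = cos(x_n), which converges to 0.7390851...
s = [1.0]
for _ in range(10):
    s.append(math.cos(s[-1]))

t = aitken(s)
print(s[-1])  # ~0.744237, still about 5e-3 away from the limit
print(t[-1])  # ~0.739066, within about 2e-5 of the limit
```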
This suggests two questions:
1. Is there a nice book/article containing a list of useful tricks "ready for use"?
2. What tricks are useful for you?
For clarity let me state that I do not count Euclid's algorithm, LLL or such things as tricks: they are already implemented and ready for use in computer-algebra systems. (A nice book concerning tricks might, however, also have additional chapters mentioning such useful algorithms and describing them very briefly.)
-
14
Should this be community wiki? – Federico Poloni May 15 2011 at 12:37
1
P.S. Mathematica is able to do the Aitken "trick", but the function does not seem to be explicitly advertised. Try `SequenceLimit[(*sequence*), Method -> {"WynnEpsilon", "Degree" -> 1}]` – J. M. May 15 2011 at 14:12
1
Because the question has no single definite answer but rather a "big-list" style set of answers, it probably merits transition into the "community wisdom" mode... :-) – S. Sra May 15 2011 at 14:13
1
I am disappointed with this question after reading the title. I would much prefer this being about how to gain insight into a non-trivial mathematical fact by doing an "experiment" than how to evaluate expressions numerically (there are huge books about this). E.g. how do I spot the prime number theorem by staring at a table of primes? – Helge Jun 21 2011 at 18:35
1
@Helge: I fear (or hope) that there is no method for doing such a thing: If the discovery of an interesting mathematical fact were entirely algorithmic (or based on few useful tricks), it would be much less fun. – Roland Bacher Jun 22 2011 at 7:45
## 6 Answers
For the first question ("is there a nice book/article..."), I think the answer is Yes: Sanjoy Mahajan's Street-Fighting Mathematics, which also exists in a free CC version, summarizes a good number of useful tricks and meta-tricks, some well-known, some less so.
-
The tricks I regularly use:
• Create more examples. Always.
• As a corollary of the above, time is well spent on making algorithms that present examples nicely.
• The On-Line Encyclopedia of Integer Sequences (OEIS) is your friend.
• Or, if that does not work, put your sequence or constant into WolframAlpha.
• If the numerical data looks strange, redo! Some software does not warn when precision is lost. Some software (Mathematica, for example) does not consider $1/2$ and $0.5$ to be equal.
• Take time to learn your software! You are more tempted to try new stuff, if it is easy to code.
-
My favorite trick in experimental mathematics is to prove things.
-
If you are attempting to guess the solution of a problem that is a number and the usual tricks (LLL or PSLQ) don't work, you can try to introduce an extra parameter in the problem, making the solution a function of that parameter. Then, you can study that function numerically. In some cases it is then possible to guess this function based on its behavior, which then solves the original problem.
E.g., for the critical percolation problem on a cylinder of circumference L, it had been conjectured using numerical work that the probability that a point is on a cluster that wraps around the cylinder has the asymptotic behavior of 0.81099753.... L^(-5/48), see here. Then guessing an analytic expression for the number 0.81099753.... was only possible when considering a generalized version of the original problem that has an extra parameter in it and then guessing the function of that parameter. That then led to this result from which the conjecture follows that $0.81099753\ldots = \frac{2^{23/72}}{3^{5/48}}\frac{\pi^{1/4}\exp\left(1/4 \zeta'(-1)\right)}{\sqrt{\Gamma\left(1/4\right)}}$
-
I can't respond to Federico's comment directly, but I want to point out that you could (in principle!) solve two (or more) systems as: `blkdiag(A,A)\[b;c]`. HOWEVER, it seems that Matlab doesn't know enough to exploit the block-diagonal structure, and this runs slower than precomputing the inv. However, it may have higher numerical accuracy (not sure).
````
% generate a random large system (the size n is illustrative, not from the original post)
>> n = 2000; A = rand(n); b = rand(n,1); c = rand(n,1);
% A = sparse(A); % make things a bit more "fair"
>> tic;A\b;A\c;toc
Elapsed time is 0.035227 seconds.
>> A=blkdiag(A,A);
>> tic;A\[b;c];toc
Elapsed time is 0.060273 seconds.
````
One "trick" that I live by is: exploit Matrix structure. This means understanding the alphabet soup of factorization techniques and when to use each one and why.
-
I am not sure what counts as a trick and what doesn't, but I'd like to suggest
Don't invert matrices!
In nearly all practical applications, solving a linear system is faster and more accurate than computing the inverse entry-by-entry.
Unfortunately, I know no computer algebra system that takes advantage of this bit of wisdom and implements inversion as returning a proxy.
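A small NumPy illustration of the same advice (my example, with an arbitrary test matrix): a factorization-based solve is usually both cheaper than, and at least as accurate as, forming the inverse and multiplying.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # arbitrary, comfortably invertible test matrix
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)   # LU-based solve: preferred
x_inv = np.linalg.inv(A) @ b      # explicit inverse: avoid in practice

print(np.linalg.norm(A @ x_solve - b))  # residual of the direct solve
print(np.linalg.norm(A @ x_inv - b))    # often comparable here, but the inverse costs
                                        # more and degrades on ill-conditioned systems
```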
-
1
Matlab's $/$ and $\backslash$ operators are sort of an example of this. – Nate Eldredge May 15 2011 at 12:42
2
Uh, that's not the same thing. For instance, if I have to solve two linear systems with the same matrix, `A\b;A\c` is inefficient because it computes the LU factorization twice. On the other hand, if `inv(A)` computed an LU factorization and returned a proxy, then `inv(A)*b` and `inv(A)*c` would be both solved with the superior method, without hassle. And you wouldn't have to teach the engineers not to use `inv(A)`. – Federico Poloni May 15 2011 at 13:15
Federico is right; presumably Moler and company had enough sense to have "backslash" perform a decomposition as opposed to an explicit inversion. – J. M. May 15 2011 at 14:24
Sorry, small mistake: if `inv` returned a proxy, then one would have to write `T=inv(A);T*b;T*c` in order to solve two systems with one LU factorization. `inv(A)*b;inv(A)*c` doesn't work since the function `inv` is called twice. – Federico Poloni May 15 2011 at 14:30
5
As a matter of course: Mathematica 's `LinearSolve[]` (at least in the new versions) accepts a square matrix as the only argument, returning a `LinearSolveFunction[]` object that can then be applied to various vectors (as expected, a compactly stored LU decomposition is embedded within the object)... – J. M. May 15 2011 at 17:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9379329085350037, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=4206492
|
Physics Forums
## Second moment of area
How do I calculate the second moment of area?
Here is the example. I do understand the way one calculates Ixx, Iyy and Ixy. The smaller part has thickness t and the biggest part 3t.
I get:
If you place the global coordinate system at the top, you get the position of the centre of gravity as CG = (3a/8, -a/4).
Now I do: define a variable s which is the path along the beam.
Ixx = ∫y²dA = ∫t(s sin30)²ds = t(a³/3)(1/4) = ta³/12
and we do the same with the right part, and we get 3ta³/12.
Adding these two gives ta³/3, but in the solution they have
(ta³/3)*(1/2)².
I don't get where the (1/2)² is coming from.
This is not homework! This is an example from an exam!
Recognitions: Homework Help

Well, sin(30) = 1/2 ... which may give us a clue. From your results, the first part gives: $$\frac{ta^3}{12}=\frac{ta^3}{3}\frac{1}{2^2}$$ and the second part gives: $$\frac{3ta^3}{12}=\frac{3ta^3}{3}\frac{1}{2^2}$$ Adding them together gives: $$\frac{ta^3}{3}\frac{1}{2^2}+\frac{3ta^3}{3}\frac{1}{2^2}=\frac{4ta^3}{3}\frac{1}{2^2}$$ So the question is not so much where the (1/2)² comes from, but where the factor of 4 went.

I see you have two beams of length a, one of thickness t and the other of thickness 3t, which meet at an angle which is not specified on the diagram. There is a "30" on the diagram which I take to mean that one of the beams makes an angle of 30 degrees to something, but the "something" is not specified.

From your analysis I'm guessing you are trying to find the polar moment Jxx by integrating over each beam and adding them - thus: you have oriented your coordinate axes so the y-axis is parallel to the bending force? If ##dA=t.ds## and ##y=s\sin(30)## then I guess the beams meet at 90 degrees to each other and the light (thickness t) beam forms an angle of 30 degrees to the horizontal. Is this correct? I'd rather not guess! I think we'd need this information.
Quote by Simon Bridge: well, sin(30)=1/2 ... which may give us a clue. […] I'd rather not guess! I think we'd need this information.
Hi Simon! Thank you for the answer.
Yes, the right answer according to the solution is 4ta³/12·(1/2²). Sorry if I wrote it wrong in the first place, my bad.
I still don't get where the (1/2)² is coming from. I write again how I do it: yes, the angle is 30 on both sides, and it is not 90 at the top. No, the y-axis is parallel to
##∫y²\,dA=∫t(s \sin30)²\,ds=[t(s³/3)\sin^2 30]_0^a=ta³/12##, so the (1/2) just went.
##∫y²\,dA=∫3t(s \sin30)²\,ds=[3t(s³/3)\sin^2 30]_0^a=3ta³/12##, so the (1/2) just went.
Adding these two:
ta³/3, which is not correct. The answer is:
##(ta³/3)·(1/2²)##
Recognitions:
Homework Help
## Second moment of area
sin²(30) = (1/2)², like I said - look carefully at the derivation I showed you at the start of post #2 ... you have exactly the same result they have, only you have multiplied out the known values... that's where the (1/2) "just went". i.e. $$\int y^2\,dA = \int (s\sin(30))^2\,t\,ds = \int \frac{ts^2}{2^2}\,ds$$ ... keep the ##(1/2)^2## like that instead of evaluating it and see what happens.
Quote by Simon Bridge: sin²(30) = (1/2)², like I said […] keep the ##(1/2)^2## like that instead of evaluating it and see what happens.
No. Actually my last result is:
$$\frac{ts^3}{3}$$
while theirs is:
$$\frac{ts^3}{3}\frac{1}{4}$$
Recognitions:
Homework Help
but in the solution they have (ta³/3)*(1/2)².
Yes the right answer according to solution is 4ta³/12*(1/2²).
while their is: $$\frac{ts^3}{3}\frac{1}{4}$$
Which is it?
What you need to do is identify and report the correct solution and also go back over your integration showing the intermediate steps.
You should also provide the information requested in post #2.
They are all the same. What I get from my integration is ta^3/3, and the solution in the exam says that the result should be (4ta^3/12)(1/4), which is (ta^3/3)(1/4), which is (ta^3/3)(1/2^2), so they are all the same. The problem is that I just get ta^3/3, which is 4 times bigger than the answer in the solution.
To simplify and save time, I paste the question and answer. http://i47.tinypic.com/19pe1k.jpg Solution: http://i50.tinypic.com/2ce6d61.jpg Would you please show me how they calculate the moment of inertia? Thanks a lot
OK - now I see. The model answer is just using a rule - knowing the second moment for a beam and how to combine them. That way they could do it in one step. Simple addition, as you used, does not always hold. I think you should look at the examples in the Wikipedia article on "second moment of area" to see what I mean, then go through the calculation again without leaving out any steps. Meantime, I'll see if I can't produce a useful model answer for you.
I will be very thankful if you could explain it to me. Whatever I do, I can't see where the (1/2)² is coming from. Would you do some calculations for me, please? I have an exam soon and it is a real pain in the ***.
Sorry for the delay - I've been somewhat busy myself. Preparing for exams over xmas/new-year is a bummer!

Anyhow - I had a go looking for the 1-step approach used by the model answer and got nowhere. Someone with more recent experience should do better ... I can only conclude that the examiner was expecting people to use a remembered result from class. You should look through your notes, although it is possible that the particular method was not used this year. It is also possible that the model answer or the question is somehow in error :)

Anyway - the way to check would be to use the transformations. Divide the area into two rectangular areas. Let area 1 be the thin rectangle and area 2 be the fat one. Use the following observations: (iirc) For a rectangular area with extent a along the x axis and t along the y axis, the CG moments are: $$J_{xx}=\frac{at^3}{12}\; ; \; J_{yy}=\frac{a^3t}{12}\; ; \; J_{xy}=0$$ After a rotation by angle ##\phi## in the x-y plane: $$[J_{xx}]_{rot}=\frac{1}{2}(J_{xx}+J_{yy})+\frac{1}{2}(J_{xx}-J_{yy})\cos(2\phi)-J_{xy}\sin(2\phi)$$ ...etc. ... but all that is for an axis through the centroid. If the axis is a distance d from the centroid, then you use the parallel axis theorem: $$J_{xx}=[J_{xx}]_{CG}+Ad^2$$ ... for you, for ##J_{xx}##, ##A=4ta## and ##d=a/4## (if the x-axis passes through the apex of the triangle formed by the beams.) ... you'll have to modify for ##J_{yy}## but you already know the center of mass coordinates.

There is going to be some approximation here - you'll have to use your judgement. I was working with the origin through the center of one end of each - which means there is an overlap of area ##3t^2/4## which I hope is small. You may want to use a different geometry. For the small area I get: $$J_{xx}=\frac{3}{4}\frac{at^3}{12}+\frac{ta^3}{12}$$ ... which seems suggestive. It's been a while since I had to do these though, so check yourself.
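A quick numerical cross-check of just the strip integration may help here (this is an added sketch, not part of the thread, and it assumes the geometry the poster described: a strip of length a and thickness t inclined at 30 degrees, with y = s sin 30 and dA = t ds, taken about an axis through the strip's end). It reproduces the hand result t a³ sin²30 / 3 = t a³/12; it does not settle which axis or combination rule the model answer intends.

```python
import numpy as np

# Hypothetical numbers; units are arbitrary since only the formula is checked.
a, t = 1.0, 0.01
theta = np.radians(30.0)

s = np.linspace(0.0, a, 200_001)           # position along the strip
y = s * np.sin(theta)                      # height above the x-axis

J_numeric = np.trapz(t * y**2, s)          # integral of y^2 dA with dA = t ds
J_hand = t * a**3 * np.sin(theta)**2 / 3   # = t a^3 / 12 for theta = 30 deg

print(J_numeric, J_hand)                   # both approximately 8.33e-4
```

Repeating this with thickness 3t and adding gives 4ta³/12 = ta³/3, matching the poster's sum; any extra (1/2)² in the model answer would then have to come from a different choice of axis or combination rule, which is exactly the missing information discussed above.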
Amazing. thank you very much Simon!!
http://unapologetic.wordpress.com/2008/08/14/factoring-real-polynomials/
# The Unapologetic Mathematician
## Factoring Real Polynomials
Okay, we know that we can factor any complex polynomial into linear factors because the complex numbers are algebraically closed. But we also know that real polynomials can have too few roots. Now, there are a lot of fields out there that aren’t algebraically closed, and I’m not about to go through all of them. But we use the real numbers so much because of the unique position it holds by virtue of the interplay between its topology and its algebra. So it’s useful to see what we can say about real polynomials.
We start by noting that since the real numbers sit inside the complex numbers, we can consider any real polynomial as a complex polynomial. If the polynomial has a real root $r$, then the complex polynomial has a root $r+0i$. So all the real roots still show up.
Now, we might not have as many real roots as the degree would indicate. But we are sure to have as many complex roots as the degree, which will include the real roots. Some of the roots may actually be complex numbers like $a+bi$. Luckily, one really interesting thing happens here: if $a+bi$ is a root, then so is its complex conjugate $a-bi$.
Let’s write out our polynomial $c_0+c_1X+...+c_nX^n$, where all the $c_i$ are real numbers. To say that $a+bi$ is a root means that when we substitute it for ${X}$ we get the equation
$c_0+c_1(a+bi)+...+c_n(a+bi)^n=0$
Now we can take the complex conjugate of this equation
$\overline{c_0+c_1(a+bi)+...+c_n(a+bi)^n}=\overline{0}$
But complex conjugation is a field automorphism, so it preserves both addition and multiplication
$\overline{c_0}+\overline{c_1}\left(\overline{a+bi}\right)+...+\overline{c_n}\left(\overline{a+bi}\right)^n=\overline{0}$
Now since all the $c_i$ (and ${0}$) are real, complex conjugation leaves them alone. Conjugation sends $a+bi$ to $a-bi$, and so we find
$c_0+c_1(a-bi)+...+c_n(a-bi)^n=0$
So $a-bi$ is a root as well. Thus if we have a (complex) linear factor like $\left(X-(a+bi)\right)$ we’ll also have another one like $\left(X-(a-bi)\right)$. These multiply to give
$\begin{aligned}\left(X-(a+bi)\right)\left(X-(a-bi)\right)=X^2-(a-bi)X-(a+bi)X+(a+bi)(a-bi)\\=X^2-(2a)X+(a^2+b^2)\end{aligned}$
which is a real polynomial again.
Now let’s start with our polynomial $p$ of degree $n$. We know that over the complex numbers it has a root $\lambda$. If this root is real, then we can write
$p=(X-\lambda)\tilde{p}$
where $\tilde{p}$ is another real polynomial which has degree $n-1$. On the other hand, if $\lambda=a+bi$ is complex then $\bar{\lambda}$ is also a root of $p$, and so we can write
$p=(X-(a+bi))(X-(a-bi))\tilde{p}=(X^2-(2a)X+(a^2+b^2))\tilde{p}$
where $\tilde{p}$ is another real polynomial which has degree $n-2$.
Either way, now we can repeat our reasoning starting with $\tilde{p}$. At each step we can pull off either a linear term or a quadratic term (which can't be factored into two real linear terms).
Thus every real polynomial factors into the product of a bunch of linear polynomials and a bunch of irreducible quadratic polynomials, and the number of linear factors plus twice the number of quadratic factors must add up to the degree of $p$. It’s not quite so nice as the situation over the complex numbers, but it’s still pretty simple. We’ll see many situations into the future where this split between two distinct real roots and a conjugate pair of complex roots (and the border case of two equal real roots) shows up with striking qualitative effects.
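As a concrete added illustration of this factorization, the short script below takes a real cubic, finds its complex roots numerically, and groups each conjugate pair into an irreducible quadratic of the form $X^2-(2a)X+(a^2+b^2)$; the particular polynomial and tolerance are just example choices.

```python
import numpy as np

coeffs = [1, -2, 1, -2]          # p(X) = X^3 - 2X^2 + X - 2 = (X - 2)(X^2 + 1)
roots = np.roots(coeffs)
tol = 1e-9

linear_factors = [r.real for r in roots if abs(r.imag) < tol]   # real roots r give (X - r)
quadratic_factors = []
for r in roots:
    if r.imag > tol:             # one representative per conjugate pair
        a, b = r.real, r.imag
        quadratic_factors.append((1.0, -2 * a, a * a + b * b))  # X^2 - 2aX + (a^2 + b^2)

print("linear factors (roots):", linear_factors)                # approximately [2.0]
print("quadratic factors (coefficients):", quadratic_factors)   # approximately [(1.0, 0.0, 1.0)]
```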
Posted by John Armstrong | Algebra, Polynomials, Ring theory
## Comments »
1. you certainly meant “factoring complex polynomials” rather than “factoring complex numbers” in the 1st line.
Comment by | August 17, 2008 | Reply
2. certainly, and certainly fixed
Comment by | August 17, 2008 | Reply
http://mathoverflow.net/questions/94951/a-name-for-a-weak-topology
## A name for a weak topology
Let $V$ be a real vector space and let $V'$ be the algebraic dual of $V$, i.e. the space of all the linear functionals $V\to\mathbb{R}$. Then there exists the weakest topology $\tau$ which makes all the elements of $V'$ continuous, and $\tau$ is locally convex and Hausdorff. For example, if the dimension on $V$ is finite, then $\tau$ is the usual Euclidean topology.
I have two questions:
1. Is there a commonly used name for $\tau$?
2. Let $\tau'$ be the maximal locally convex topology on $V$, i.e. the weakest topology that makes all the seminorms on $V$ continuous. Of course $\tau'$ is finer than $\tau$, but $\tau'$ and $\tau$ could coincide in some cases (for example, for finite dimensional spaces). What is the exact relationship between $\tau$ and $\tau'$?
-
Is $V$ already a topological vector space? – Giuseppe Apr 23 2012 at 15:11
No, it isn't. The topology tau only depends on the underlying linear structure of V. – Roberto Frigerio Apr 23 2012 at 15:33
Just a wee addendum to the answer below. The topology $\tau'$ is the Mackey topology on $V$, i.e., the finest locally convex topology for the duality $(V,V')$, aka the topology of uniform convergence on the weakly (i.e., $\sigma(V',V)$) compact subsets of $V'$. Thus, in a certain sense, they are as far apart as possible---the finest and the weakest locally convex topologies for this duality. – jbc Feb 16 at 12:06
## 2 Answers
I got into trouble following Prof. Johnson's suggestion, and now I am quite convinced that the topologies $\tau$ and $\tau'$ coincide only when $V$ is finite-dimensional.
In fact, let $\{x_i\}_{i\in I}$ be a Hamel basis for $V$, and let us consider the seminorm
$p(\sum_{i\in I} v_i x_i)=\sum_{i\in I} |v_i|$
(this seminorm is well-defined since every element in $V$ admits a unique representation as a finite linear combination of elements of the basis). Then the set $U=p^{-1}(-1,1)$ is open in $\tau'$, and does not contain any nontrivial linear subspace of $V$.
On the other hand, since $\tau$ is the weak topology with respect to a family of linear functionals (in fact, with respect to all the linear functionals), every $\tau$-neighbourhood of $0\in V$ must contain a finite-codimensional linear subspace of $V$. Therefore, if the dimension of $V$ is not finite, then the set $U$ introduced above is open with respect to $\tau'$, and not open with respect to $\tau$.
-
The two topologies are the same. See e.g. Problem 20G in Kelley-Namioka "Linear Topological Spaces".
-
BTW, they call the topology the "strongest locally convex topology on $V$". – Bill Johnson Apr 23 2012 at 15:35
Bill, are you sure? this seems strange. – Pietro Majer Apr 24 2012 at 16:10
Indeed it is. I did not think about the problem but just gave the reference and misquoted it. In K-N problem 20G, the statement is that all admissible topologies on `$V^*$` (rather than on $V$ itself) are the same, which is pretty clear because `$V^*$` is a product of copies of the scalar field. Roberto's answer is the correct one. – Bill Johnson Apr 24 2012 at 17:02
http://math.stackexchange.com/questions/260775/convergence-divergence-of-sum-k-2-infty-frac-cos-ln-ln-k-ln-k
# Convergence\Divergence of $\sum_{k=2}^{\infty} \frac{\cos(\ln(\ln k))}{\ln k}$
Test for convergence the series $$\sum_{k=2}^{\infty} \frac{\cos(\ln(\ln k))}{\ln k}$$ My first thought was related to the use of the integral test, but things seem hard.
Could we resort here to some nicer tools? Thanks
-
## 1 Answer
Pick an integer $n$ and consider all the terms with $\ln \ln k \in [2n\pi-1 ; 2n\pi+1]$. Then $\cos(\ln(\ln k)) \ge \cos 1$ and $\ln k \le e^{2n\pi+1}$, so $a_k \ge (\cos 1) e^{-2n\pi-1}$.
Next, we want to estimate how many $k$ have $\ln \ln k \in [2n\pi-1 ; 2n\pi+1]$. This is equivalent to $k \in [e^{e^{2n\pi-1}};e^{e^{2n\pi+1}}]$, whose length is greater than $C e^{e^{2n\pi+1}}$ for some $C>0$ (because $e^{e^{2n\pi-1}}$ is negligible in front of this).
Therefore the sum of those consecutive terms is greater than $C e^{e^{2n\pi+1}}(\cos 1)e^{-2n\pi-1}$. And since $e^x/x$ gets arbitrarily big as $x$ gets larger, you deduce that the sum of those consecutive terms can be arbitrarily large if you choose $n$ high enough.
As a result, the sequence of partial sums $\sum_{k=2}^n a_k$ is not a Cauchy sequence, and the series doesn't converge.
-
+1. And this nearly shows that the closure of the set of partial sums is the whole real line. – Did Dec 17 '12 at 16:50
@mercio: Very nice. Thanks! – Chris's wise sister Dec 17 '12 at 17:00
Really good. I tried that approach myself but couldn't make it work. – Martin Argerami Dec 17 '12 at 21:00
http://crypto.stackexchange.com/questions/5080/is-secure-remote-snap-possible/5087
# Is secure remote snap possible?
### Scenario:
• We have a central server $S$.
• We have a number of peripheral servers $P_i$
• We have some individuals $U_j$
• A given individual may be "known" to one or more peripheral servers. Each peripheral server generates unique IDs for the individuals it knows and stores a map $f_i: U_j \to \textrm{ID}$ and the corresponding inverse $f_i^{-1}$
• A peripheral server may share its IDs but may never share the identities of the individuals it knows.
• The peripheral servers can communicate securely with $S$.
• The peripheral servers regularly transmit to $S$ a map from IDs to some data.
### Problem:
$S$ wants to determine whether $P_1$'s ID $a$ corresponds to the same individual $u$ as $P_2$'s ID $b$ without ever knowing the value of $u$. If so, it will merge the data from the different peripheral servers. (Details of the merge method are out of scope). Is this possible?
In essence this is "mental snap", or perhaps zero-knowledge set intersection.
### Rejected approach:
• The domain of individuals is too small to simply send hashes to $S$ and compare the hashes: this would allow identifying the individuals by brute force.
-
Could you please expand some on your problem statement? Specifically, I'm wondering who is allowed to know what and who trusts who. Clearly $S$ cannot learn the value of $u$, but should $P_1$ and $P_2$ know each others $u$ values? Should the peripheral servers learn whether or not there was a match? Can $P_1$ and $P_2$ determine the answer between the two of them and tell $S$ the answer (i.e., does $S$ trust them to tell the truth in that case)? – mikeazo♦ Oct 19 '12 at 11:36
## 1 Answer
Yup, should be possible. Look at multiparty secure computation protocols.
In particular, you might want to look at secure protocols for private set intersection. $P_1$ and $P_2$ can use such a protocol to find the individuals that are in the intersection of a set known to $P_1$ and a set known to $P_2$. Then, they can let $S$ know whether there is any intersection and what the correspondence between IDs is.
Specifically: $P_1$ has a set of individuals ($f_1^{-1}(a)$), and $P_2$ has another set ($f_2^{-1}(b)$); now a private set intersection protocol lets us check whether these sets have any elements in common, without revealing anything else about the sets. In such a protocol, the set of individuals never leaves $P_1$ or $P_2$, but we still have a way to learn whether $P_1$'s set has any overlap with $P_2$'s set. The details are, well, detailed, but if you are interested, look up any reference on private set intersection.
This approach scales beyond pairwise comparison of individual IDs; if the server has a set of IDs on $P_1$ and a set of IDs on $P_2$, you can use this approach to find whether there is any overlap between these sets and if so, what the correspondence is between the IDs.
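For intuition only, here is a toy sketch of one classical way such a protocol can be instantiated (a Diffie-Hellman-style PSI; this illustration is an addition, not something from the answer above, and the modulus, the hash-to-group mapping, and the parameter sizes are deliberately simplified and not secure as written). $P_1$ learns which of its individuals are shared, after which $P_1$ and $P_2$ can report the matching ID pair to $S$ without revealing the individual.

```python
import hashlib
import secrets

P = 2**127 - 1  # toy prime modulus; a real deployment would use an elliptic-curve group


def h2g(item: str) -> int:
    """Toy hash of an identifier into the multiplicative group mod P."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P or 1


# Each peripheral server knows a small set of individuals (never sent anywhere in clear).
p1_individuals = ["alice", "bob", "carol"]
p2_individuals = {"bob", "dave"}

a = secrets.randbelow(P - 2) + 2   # P1's secret exponent
b = secrets.randbelow(P - 2) + 2   # P2's secret exponent

# Round 1: P1 blinds its elements and sends them to P2 (order preserved).
p1_blinded = [pow(h2g(x), a, P) for x in p1_individuals]        # H(x)^a

# Round 2: P2 re-blinds P1's values in the same order, and sends its own blinded set.
p1_double = [pow(v, b, P) for v in p1_blinded]                  # H(x)^(a*b)
p2_blinded = {pow(h2g(y), b, P) for y in p2_individuals}        # H(y)^b

# Round 3: P1 re-blinds P2's values and intersects the doubly-blinded sets.
p2_double = {pow(v, a, P) for v in p2_blinded}                  # H(y)^(a*b)
shared = [x for x, v in zip(p1_individuals, p1_double) if v in p2_double]

print("individuals known to both servers:", shared)             # ['bob']
```

Because P2 returns the re-blinded values in the same order it received them, P1 can tell which of its own identifiers matched; if P2 shuffled instead, only the size of the intersection would be learned.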
-
No, he wants to find the corresponding pairs of IDs. $\:$ (On the other hand, the reference to $\hspace{1 in}$ multiparty secure computation is correct and highly relevant.) $\:$ – Ricky Demer Oct 17 '12 at 21:55
@RickyDemer, Right, I mis-spoke. I've fixed it. What I intended is that you apply private set intersection to the set of individuals (a subset of $U$), not the set of IDs, but somehow what came out of my fingers didn't match what was I intended. My mistake! $P_1$ has a set of individuals ($f_1^{-1}(a)$), and $P_2$ has another set ($f_2^{-1}(b)$); a private set intersection protocol lets us check for non-empty intersection; the set of individuals never leaves $P_1$ or $P_2$, but we have a way to learn whether $P_1$'s set $P_1$ has any overlap with $P_2$'s set. – D.W. Oct 18 '12 at 5:49
Forgive my lack of knowledge on private set intersection. Does private set intersection returns whether or not the intersection is empty or does it return the intersection? If it is the latter, then this will not work. If it is the former, then it seems like a valid solution. – mikeazo♦ Oct 18 '12 at 11:19
@mikeazo, it returns the intersection. (But it could be modified, e.g., by pre-hashing all elements or something.) Why do you say this will not work? – D.W. Oct 18 '12 at 15:32
Wouldn't the value of $u$ be known then to $S$. $f_1^{-1}(a)=u$ and $f_2^{-1}(b)=u$. I.e., these are sets of a single element. The intersect would be $u$ which reveals the value of $u$ to the server. Or am I missing something? – mikeazo♦ Oct 18 '12 at 15:54
http://mathematica.stackexchange.com/questions/3458/plotting-complex-quantity-functions/3460
# Plotting Complex Quantity Functions
Trying to plot with complex quantities seems not to work properly in what I want to accomplish. I would like to know if there is a general rule/way of plotting when you have complex counterparts in your function. I tried looking up ContourPlot and DensityPlot but I only have one single variable as ContourPlot asks for two variables in order to plot. The expression I am trying to plot is as so:
eqn := (25 Pi f I)/(1 + 10 Pi f I)
Plot[eqn,{f,-5,5}]
Is there something else missing here?
-
Plot displays $\mathbb{R}\to\mathbb{R}$ functions. How is it supposed to interpret I? – rcollyer Mar 25 '12 at 1:25
Is your variable f supposed to be just real (as suggested by the domain in your Plot expression)? Or do you want it to take more general complex values, too? – murray Mar 25 '12 at 16:45
@murray: Well the function f is complex valued. It reads as G(f) = (25 Pi f I) / (1 + 10 Pi f I). So, what I was trying to accomplish is plot the spectrum or "Fourier Transform (frequency response)", of the function $g(t)$. Where $f$ just represent the frequency variable from the time-domain. I hope that makes sense to clear up your question. – night owl Mar 26 '12 at 0:07
@night owl: a typical communication between a mathematician and a non-mathematician? (I think of a complex-valued function of a real variable; you speak about frequency response.) But I understand! You do have a function from the real numbers to the complex numbers, so the way to represent it visually is unclear. For things like density plots and contour plots, one is dealing with a domain consisting of pairs of reals or, equivalently, complex numbers and a range of real numbers. Your situation is precisely the reverse. – murray Mar 26 '12 at 3:55
@murray: Edit: I meant to say the function G is complex, but can be seen from above. – night owl Mar 26 '12 at 4:45
## 4 Answers
The way you could use ContourPlot here, assuming your variable f is complex (f == x + I y) :
eqn[x_, y_] := (25 Pi ( x + I y) I)/(1 + 10 Pi ( x + I y) I)
{ContourPlot[Re@eqn[x, y], {x, -1, 1}, {y, -1, 1}, PlotPoints -> 50],
ContourPlot[Im@eqn[x, y], {x, -1, 1}, {y, -1, 1},
PlotRange -> {-0.5, 0.5}, PlotPoints -> 50]}
These are respectively real and imaginary parts of the function eqn.
Let's plot the absolute value of eqn :
Plot3D[ Abs[ eqn[x, y]], {x, -1, 1}, {y, -1, 1}, PlotPoints -> 40]
And we complement with the plot of real and imaginary parts of eqn in the real domain :
eqnR[x_] := (25 Pi x I)/(1 + 10 Pi x I)
Plot[{ Tooltip@Re@eqnR[x], Tooltip@Im@eqnR[x]}, {x, -0.25, 0.25},
PlotStyle -> Thick, PlotRange -> All]
-
Nice to know how it would work to do it as a contour and how it looks. But I really just needed it to output a regular curve. I mentioned contour because I had looked up how to plot complex variables. I might have to accept the other answer, as that is what I was looking for: to see what the frequency spectrum of a function is. – night owl Mar 25 '12 at 1:55
I added the real and imaginary parts of eqn in one plot. Tooltips help to distinguish parts of eqn while the mouse pointer is on a given curve. – Artes Mar 25 '12 at 2:42
You may have won back the title! This is what I got doing it in matlab, and gnu plot, but could not figure it out here. This is what the frequency response should resemble, from my results in the other programs. :). Did you make this possible still using Plot by using the Re@eqnR[x] and Im@eqnR[x].? – night owl Mar 25 '12 at 4:02
@nightowl I'm not sure what you are asking for. You can remove Tooltip as well if you want; here it is superfluous. – Artes Mar 25 '12 at 14:31
I would advise only to study the issue on a case-by-case basis, otherwise there is no reasonable answer. – Artes Mar 26 '12 at 0:32
The following function gives the complete information for a function $f:\mathbb{C}\mapsto\mathbb{C}$, by giving the absolute value as $z$-coordinate, and the argument as colour:
ComplexFnPlot[f_, range_, options___] :=
Block[{rangerealvar, rangeimagvar,g},
g[r_,i_]:=(f/.range[[1]]:>r+I i);
Plot3D[Abs[g[rangerealvar,rangeimagvar]],
{rangerealvar, Re[range[[2]]], Re[range[[3]]]},
{rangeimagvar, Im[range[[2]]], Im[range[[3]]]}, options,
ColorFunction->(Hue[Mod[Arg[g[#1,#2]]/(2*Pi) + 1, 1]]&),
ColorFunctionScaling->False]]
For example, the call
ComplexFnPlot[Gamma[z],{z,-3.5-3.5I,3.5+5.5I},PlotRange->{0,4}]
gives
Positive real numbers are red, negative real numbers are cyan. One can e.g. see that the poles of the Gamma function are of order one because going round them you go through the colour cycle just once.
-
Yes, such a plot does encode "complete information". But discerning from such a plot just how the function behaves is problematic. – murray Apr 15 '12 at 14:57
@murray: No more problematic than any other information gathered from graphs. Of course, reading from a graph can never replace a proper analysis. However it can give strong hints about what one is likely to find in that analysis. – celtschk Apr 15 '12 at 15:02
Just use ParametricPlot and split up the real and imaginary parts as shown below:
eqn = (25 Pi f I)/(1 + 10 Pi f I)
ParametricPlot[{Re[eqn], Im[eqn]}, {f, -5, 5}, AspectRatio -> 1]
Note that you can use Set (=) rather than SetDelayed (:=) here.
-
Thanks. When should one use the SetDelayed versus Set preferably. Because I know that I usually use and see others use it when defining independent variables to your function such as Artes has done. This looks a bit different from what I got using a different program. I wonder whats that about? – night owl Mar 25 '12 at 1:57
– Verbeia♦ Mar 25 '12 at 2:19
Thanks, for that link. That was helpful to distinguish the two. :). – night owl Mar 25 '12 at 3:23
Here are two common ways to visualize complex functions. The first plots the image of a rectangle in the complex plane. The second plots real and imaginary contours on top of one another, illustrating the fact that they meet at right angles.
f[z_] := E^z;
pic1=ParametricPlot[{Re[f[x+I*y]],Im[f[x+I*y]]},
{x,0,1},{y,0,Pi/2}, ImageSize -> 300];
pic2 = Show[
ContourPlot[Re[f[x+I*y]],{x,0,3},{y,0,Pi/2},
ContourShading -> False],
ContourPlot[Im[f[x+I*y]],{x,0,3},{y,0,Pi/2},
ContourShading -> False],
AspectRatio -> Automatic, ImageSize -> 400
];
Row[{pic1,pic2}]
There should be many more examples at the Wolfram Demonstrations site.
-
Such a domain-codomain representation can be made more meaningful by using various dynamic treatments, e.g., allowing the user to move a Locator in the domain and seeing its image in the codomain; or moving (for such a polar plot) the radial segment or the circular arc and seeing the corresponding image in the codomain. And David Park's Presentations application makes doing this easier, since among other things it can directly process geometric objects described in terms of complex numbers without having to separate things into real and complex parts. – murray Apr 15 '12 at 15:03
– Mark McClure Apr 15 '12 at 19:15
no, not quite like your demonstration (except that, like all such, it makes certain curves to curves). The ones by David Park and I (1) use polar as well as rectangular coordinates; either map a Locator point to its image or else vary a parameter to move a curve in domain and its image in codomain; use Presentations for simplicity in programming and true-to-the-spirit direct use of complex numbers and complex-valued curves. – murray Apr 16 '12 at 0:17
http://math.stackexchange.com/questions/108297/strong-induction-proofs-done-with-weak-induction?answertab=active
# Strong Induction proofs done with Weak Induction
I've been told that strong induction and weak induction are equivalent. However, in all of the proofs I've seen, I've only seen the proof done with the easier method in that case. I've never seen a proof (in the context of teaching mathematical induction), that does the same proof in both ways, and I can't seem to figure out how to do them myself. It would put my mind at ease if I could see with my own eyes that a proof done with strong induction can be completed with weak induction. Does anyone have a link to proofs proved with both, or could anyone show me a simple proof here? I'm more interested in proofs were strong induction is the easier method.
-
By "strong" and "weak" I assume that you say that the "weak" induction is when we only assume the precedent case holds, and the "strong" one is when all precedent cases are assumed? – Patrick Da Silva Feb 11 '12 at 22:56
Yes, maybe a better term would have been incomplete or complete? There are many different ways to state it. – gsingh2011 Feb 11 '12 at 23:14
There is no better term, it just required precision. It's fine! – Patrick Da Silva Feb 11 '12 at 23:20
@gsingh2011: Strong induction doesn't have "base cases", it has special cases. See this answer. – Arturo Magidin Feb 12 '12 at 3:42
## 2 Answers
The idea is that if something is proved with "strong" induction, i.e. by assuming all preceding cases, then you can use "weak" induction on the hypothesis "all preceding cases hold". Let me explain with mathematical notation, perhaps it'll be a little clearer.
Suppose you want to prove a proposition for all $n \ge 1$, i.e you want to show that for all $n \ge 1$, $P(n)$ is true, where $P(n)$ is some proposition. Define the proposition $Q(n)$ by "$P(k)$ is true for all $k$ with $1 \le k \le n$". Then showing that $P(n)$ is true using "strong" induction is equivalent to showing that $Q(n)$ is true using "weak" induction. But $P(n)$ is true for all $n$ if and only if $Q(n)$ is true for all $n$, hence the proof techniques are completely equivalent (in the sense that using one technique or the other has the same veracity ; it doesn't mean that one is more or less complicated to use than the other).
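To spell out the inductive step hinted at above (an added note, not part of the original answer): if the strong-induction argument derives $P(n+1)$ from $P(1),\dots,P(n)$, then weak induction applies directly to $Q$, since

$$Q(1)\iff P(1), \qquad Q(n)\iff P(1)\wedge\cdots\wedge P(n)\ \Longrightarrow\ P(n+1), \qquad\text{so}\qquad Q(n)\Longrightarrow Q(n)\wedge P(n+1)\iff Q(n+1),$$

and the weak-induction conclusion "$Q(n)$ for all $n$" gives "$P(n)$ for all $n$".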
At some point in the study of mathematics you stop making the distinction between "strong" and "weak". You just say that you're using "induction". I wouldn't be sure that you stop distinguishing this if you study logic though, but let's just leave those kind of problems to logicians, shall we.
Hope that helps,
-
This helps a lot. So the only difference between two proofs of "strong" and "weak" induction would be statement of the induction hypothesis. However, in both cases, the induction hypothesis would be mean the exact same thing. And in both cases, the validity of the induction hypothesis needs to be backed by multiple base cases if necessary (as many strong induction proofs have multiple while weak have a single base case, this shows that you would still need the multiple base cases for weak induction). – gsingh2011 Feb 11 '12 at 23:25
To sum it up, yeah. – Patrick Da Silva Feb 11 '12 at 23:45
Statements that say that two propositions are equivalent have to be done carefully, because the background theory is important.
Specifically, you are talking about two statements about the natural numbers:
1. Induction (or "weak" induction): Let $S\subseteq \mathbb{N}$ be such that:
• $0\in S$; and
• For all $n\in\mathbb{N}$, if $n\in S$ then $s(n)\in S$.
Then $S=\mathbb{N}$.
2. Strong induction: Let $S\subseteq \mathbb{N}$ be such that:
• For all $n\in\mathbb{N}$, if $\{k\in\mathbb{N}\mid k\lt n\}\subseteq S$ then $n\in S$.
Then $S=\mathbb{N}$.
Above, $s(n)$ is the successor function.
The main difficulty is to establish exactly what our "background" is. The induction Schema makes sense in the context of Peano's postulates; Strong induction requires a defined property of $\lt$.
Moreover, it is not the case that induction and strong induction are equivalent axioms! That is, if we take the other four Peano axioms,
1. $0\in\mathbb{N}$.
2. If $n\in\mathbb{N}$, then $s(n)\in\mathbb{N}$.
3. For all $n\in\mathbb{N}$, $0\neq s(n)$.
4. If $s(n)=s(m)$ then $n=m$.
then it is not true that Axioms 1-4 + Induction yields a theory equivalent to Axioms 1-4 + Strong induction, even if you throw in an order so that you can state Strong induction!
To see this, consider a disjoint union of two copies of the natural numbers; let's call one copy the "green" natural numbers, and the other copy the "purple" natural numbers (I usually use blue and red, but let's avoid politics this year...). We interpret the primitives as follows: $\mathbb{N}$ is the set that contains all green and all purple natural numbers. $0$ corresponds to the green $0$. If $n$ is green, then $s(n)$ is the green $n+1$; if $n$ is purple, then $s(n)$ is the purple $n+1$. The order is defined as follows: if $n$ is green and $m$ is purple, then $n\lt m$. If both $n$ and $m$ are of the same color, then $n\lt m$ if and only if $n$ is smaller than $m$ in the usual order.
This model satisfies Peano's axioms 1 through 4; it also satisfies the strong induction postulate.
However, in this model, the set $S$ of all green natural numbers satisfies the hypothesis of "Induction" but is not all of $\mathbb{N}$: green $0$ is in $S$, and if $n$ is a green natural number, then so is $s(n)$. This means that this set is not a model for Peano arithmetic. So it is false that weak and strong induction can be swapped with one another and yield equivalent theories with the other four Peano postulates.
However, if we add a further property, namely
For every $n\in\mathbb{N}$, either $n=0$ or else there exists $m\in\mathbb{N}$ such that $n=s(m)$;
then the first four axioms, plus this property, plus strong induction does imply weak induction.
I guess the moral is that the statement "weak induction is equivalent to strong induction" has to be made precise before it is true; one has to specify a "background theory" or a set of "background properties" which we may take for granted before the equivalence is established. But in the presence of those "background properties", then Patrick Da Silva's argument is the standard one: any proof (over a suitable theory for $\mathbb{N}$) that uses induction can be reshaped (in a straightforward way) to become a proof that uses strong induction instead; and any proof that uses strong induction can be reshaped (in a more-or-less algorithmic manner) into a proof that uses weak induction.
-
Great answer @Arturo ! Where could I read more about non-standard models (of anything) such as the one you described? – magma Feb 24 '12 at 15:41
http://mathhelpforum.com/algebra/28002-multi-step-linear-inequalities.html
# Thread:
1. ## Multi-Step Linear Inequalities
I got every problem right on these things, except for the ones involving fractions. I thought I had it, but...
Here are the ones I missed. (Note: 1/2 = one half, I don't know how to type fractions.) I tried to divide by the fraction, which went badly, and ended up with huge decimals... so I guess that wasn't right.
Also, I don't know how to make a greater-than-or-equal-to or a less-than-or-equal-to sign, so I'm just going to use <(=) or >(=)
2/3x + 3 >(=) 11
6 >(=) 7/3x - 1
-1/2x + 3 < 7
thanks in advance
2. Let's look at the first one:
$\frac{2}{3}x + 3 \geq 11$
First step, subtract three from both sides of the inequality using the rule:
"An equal quantity may be added to, (or subtracted from) both sides of an inequality without changing the inequality"
$\frac{2}{3}x \geq 8$
Now we have to multiply both sides by 3 using the rule:
"An equal positive quantity may multiply (or divide) both sides of an inequality without changing the inequality."
$2x \geq 24$
Using the same rule above, we have to divide by 2 to isolate x:
$x \geq 12$
And there you go.
Another IMPORTANT rule is this:
"If both sides of an inequality are multiplied (or divided) by a negative quantity then the inequality is reversed"
I think you can solve the other two.
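(An added worked illustration of that last rule, not part of the original reply.) The third inequality is the one that needs it:

$$-\tfrac{1}{2}x + 3 < 7 \;\Longrightarrow\; -\tfrac{1}{2}x < 4 \;\Longrightarrow\; x > -8,$$

where the inequality reverses in the last step because both sides are multiplied by $-2$.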
3. OK, I think I got it, but just to point out... you only subtracted 2 from the right side, so it would really be (at that point) 2/3x >(=) 8
thanks
http://math.stackexchange.com/questions/182396/question-about-proof-of-hahn-banach-lemma
# Question about proof of Hahn-Banach lemma
I think they do something unnecessary in my notes in the proof of the following lemma:
The idea of the proof is to partially order the set $\Sigma$ of pairs $(X_i, f_i)$ where $X_i$ is a linear subspace of $X$ containing $Y$ and $f_i : X_i \to \mathbb R$ is a linear map. Then apply Zorn to get a maximal element $(X^\prime, f^\prime)$ and show that $X^\prime = X$. The way to do this is by contradiction: one assumes that there is a point $z$ in $X \setminus X^\prime$ and then defines a $g$ such that $(X^{\prime \prime}, g)$, where $X^{\prime \prime}$ is spanned by $z$ and $X^\prime$, is strictly greater than $(X^\prime, f^\prime)$ in $\Sigma$, contradicting maximality.
In the notes we define $g$ in terms of $f^\prime$: if $z \in X \setminus X^\prime$ and $y \in X^{\prime \prime}$, then $y = x^\prime + \lambda z$ for some $\lambda$ and some $x^\prime \in X^\prime$, and one sets $g(y) = f^\prime(x^\prime) + \lambda c$. All we need to show is $g \leq p$. And this is the part where I think they are doing something superfluous. They pick $c \in [A,B]$ (see below) where I think it's enough to pick $c \leq B$:
If those pngs aren't readable then you can also find the proof in these notes at the very start of chapter 6. Thanks for your help.
-
How do you prove (6.2) and treat the case of $\lambda \lt 0$ in (6.3) if $c \lt A$? – t.b. Aug 14 '12 at 10:13
@t.b. Let me see, I think I ignored the case $\lambda < 0$ in my proof! : ) Thank you! – Matt N. Aug 14 '12 at 10:14
In your sketch of the Zornication part of the proof you probably wanted "$f$ is a linear function majorized by $p$" instead of "$f$ is a continuous linear functional". – Martin Sleziak Aug 14 '12 at 10:55
@MartinSleziak Yep, that's right! Thank you! – Matt N. Aug 14 '12 at 10:59
http://mathhelpforum.com/pre-calculus/27652-finding-unit-vectors.html
# Thread:
1. ## Finding unit vectors
Directions: Find a unit vector in the direction of line PQ.
P(7, -4) Q(-3, 2)
the textbook says the answer is: $\frac{1}{\sqrt{34}}(-5, 3)$
sorry I don't know how to make the square root symbol...
ok, so I know how to find unit vectors for single points. can anyone please explain how to do this one? is it similar to the other method?
Thanks!
2. find the vector PQ first which is (-10 , 6) , then divide by the magnitude of the vector to get a unit vector.
3. Originally Posted by bobak
find the vector PQ first which is (-10 , 6) , then divide by the magnitude of the vector to get a unit vector.
This might be a stupid question, but how did you get (-10, 6) ??
4. To find a unit vector in any direction, one divides by the length in that direction.
If V is a non-zero vector then $\frac{1}{\left\| V \right\|}V$ is a unit vector parallel to V.
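Spelling the computation out for this particular problem (added for completeness):

$$\vec{PQ} = Q - P = (-3-7,\; 2-(-4)) = (-10,\,6), \qquad \left\|\vec{PQ}\right\| = \sqrt{(-10)^2+6^2} = \sqrt{136} = 2\sqrt{34},$$
$$\frac{\vec{PQ}}{\|\vec{PQ}\|} = \frac{1}{2\sqrt{34}}(-10,\,6) = \frac{1}{\sqrt{34}}(-5,\,3),$$

which is the textbook's answer.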
http://www.physicsforums.com/showthread.php?p=3868557
## Quantum Field Theory lecture by Tong (Cambridge)
I am struggling with equation 1.5 in Tong's QFT course. I try to understand/explain it in strict calculus, i.e. without physics shortcuts like "small variations". I guess in the full blown explanation, $\delta S$ is a total derivative.
To be specific, with total derivative I mean the linear map that best approximates a given function $f$ at a given point. For $f:ℝ\toℝ$ we have $D(f,x_0):ℝ\toℝ$, i.e. $D(f,x_0)(h) \in ℝ$. Often it is also denoted as just $\delta f$.
In terms of the total derivative, I wonder if the following simplification of Tong's equation 1.5 still captures what is going on in mathematical terms (no longer physical). Let $S_{a,b}(f) = \int_a^b f(x)dx$ be a functional that maps functions $f$ to the real line. Then $D(S_{a,b},f) = \delta S_{a,b}$ should be well defined given any necessary smoothness conditions. In particular $D(S_{a,b},f)$ maps functions $h$ of the same type as $f$ to real numbers. Because the integral is linear, so my hunch, its best linear approximation should be itself. In Tong's course, equation 1.5, first line, I find what I understand to be
$$\delta \int_a^b f(x) dx = \int_a^b \delta f dx$$
Can anyone explain how the algebraic types on the left and on the right would match up? My interpretation is that on the left I have the total derivative of a functional, which itself should be a functional, written explicitly as $D(S_{a,b},f)$. On the right I have the integral over, hmm, the total derivative of $f$, where I don't see how this could be a functional?
Any hints appreciated.
You need to read up on the calculus of variations: http://en.wikipedia.org/wiki/Calculus_of_variations
Quote by birulami: [original post quoted in full]
Mathematicians always struggle to understand the meaning of the symbol $\delta$ which is often used by physicists. I found that puzzling; after all, the “calculus of variations” is a very old mathematical discipline.
With the help of $\delta$, physicists can do in two lines what mathematicians would do in two pages! This is why physicists are so keen on using $\delta$. To demonstrate this to you, I (pretending to be a mathematician) will derive the Euler-Lagrange equations the same way some mathematicians do it using the calculus of variations.
First, we need to define the Lagrange function for a relativistic classical field. Let $\bar{D}\subset \mathbb{R}^{4}$ be a bounded region corresponding to a bounded space-time region $\bar{U}\subset M^{4}$. $D$ is an open subset of $\mathbb{R}^{4}$ and $U \subseteq M^{4}$. Let $f: \ \bar{D} \rightarrow \mathbb{R}^{s}$ be a twice differentiable function, i.e., $f \in \mathcal{l}^{2}(\bar{D},\mathbb{R}^{s})$. Let $\phi^{A}\equiv \pi^{A}\circ f$ where $\pi^{A}: \ D \rightarrow \mathbb{R}$ are some finite set of projection mappings, and $A$ is a multi-index taking values in $\{0,1,2, … ,s\}$; $A=0$ is a scalar $\phi$(i.e., no index), $A=1$ is a vector; $\phi^{a}, \ a = 0, 1,.., 3$ (i.e.,one index), $A=2$ is rank two tensor; $\phi^{ab}$, … . Since $\phi^{A}: \ D \rightarrow \mathbb{R}$, we can associate $y^{A} = \phi^{A}(x)$ with components of a Minkowski tensor field, and $y^{A}_{a}= \phi^{A}_{,a}$ with the derivatives of those components with respect to coordinates $x^{a}$.
Let the Lagrange function be $\mathcal{L}: \bar{D} \times \mathbb{R}^{s}\times \mathbb{R}^{4s}\rightarrow \mathbb{R}$, and assume that it is twice differentiable. Its real value is denoted by $\mathcal{L}(x,y^{A},y^{A}_{a})$. Let the action functional $S$ be the totally differentiable function $S: \ \mathcal{l}^{2}(\bar{D};\mathbb{R}^{s}) \rightarrow \mathbb{R}$ such that
$$S(f) = \int_{\bar{D}}d^{4}x \ \mathcal{L}(x,y^{A},y^{A}_{a})|_{N},$$
with $N$ is the submanifold $y^{A}=\phi^{A} (x), \ y^{A}_{a}=\partial_{a}\phi^{A} (x)$. A new function $f + \epsilon h$, with the same boundary value as $f$, can be defined by
$$y^{A} = \phi^{A}(x) + \epsilon h^{A}(x)$$
$$h^{A}(x)|_{\partial D} = 0.$$
It is assumed that $h^{A}$ is twice differentiable, $\epsilon \ll 1$ is a small positive number and $\partial D$ is the boundary of the domain $D$. The “variation” of the action integral is then defined by
$$\delta S(f) = S(f + \epsilon h) - S(f) = \int_{\bar{D}} d^{4}x ( \mathcal{L}(x,y,y_{a})|_{N'} - \mathcal{L}(x,y,y_{a})|_{N}),$$
where $N'$ is the submanifold $y = \phi (x) + \epsilon h(x), \ y_{a}= \partial_{a}\phi + \epsilon \partial_{a}h$.
Expanding $\mathcal{L}|_{\bar{N}}$ in powers of $\epsilon$ leads to
$$\delta S(f) = \epsilon \int_{\bar{D}}d^{4}x \{ h^{A}(x) \frac{\partial \mathcal{L}}{\partial y^{A}}|_{N} + \partial_{a}h^{A}(x) \frac{\partial \mathcal{L}}{\partial y^{A}_{a}}|_{N}\} + \mathcal{O}(\epsilon^{2}).$$
Now, integrating by part and using Gauss’s theorem allow us to write
$$\delta S(f) = \epsilon \int_{\bar{D}}d^{4}x \ h^{A}(x) E_{A}(x) + \epsilon \int_{\partial D}d\Sigma^{a}\ h^{A}(x) \frac{\partial \mathcal{L}}{\partial y^{A}_{a}}|_{N} + \mathcal{O}(\epsilon^{2}), \ \ (1)$$
where
$$E_{A}(x) \equiv \frac{\partial \mathcal{L}}{\partial y^{A}}|_{N} - \frac{d}{dx^{a}}\left( \frac{\partial \mathcal{L}}{\partial y^{A}_{a}}|_{N}\right).$$
Since $h^{A}(x)|_{\partial D}= 0$, the second integral in eq(1) vanishes. To pick the “critical” value of the action integral $S(f)$, we must demand that
$$\lim_{\epsilon \rightarrow 0^{+}}\left( \frac{\delta S(f)}{\epsilon}\right) = 0.$$
Thus we find that
$$\int_{\bar{D}}d^{4}x \ h^{A}(x) E_{A}(x) = 0,$$
for all $h^{A}(x)$ that vanishes on the boundary $\partial D$. This implies that
$$E_{A}(x) = 0,$$
must hold in the domain $D \subset \mathbb{R}^{4}.$
Now, going back to being a physicist again, I have to say that the above derivation is a total waste of space and time.
Sam
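For readers who would rather let a computer grind through the "two pages", here is a small symbolic check (an addition, not from the thread) of the final result $E_{A}(x)=0$ for a single scalar field in one dimension, with the illustrative Lagrangian $\mathcal{L}=\tfrac12\phi'^2-\tfrac12 m^2\phi^2$ chosen purely as an example:

```python
import sympy as sp

x, m = sp.symbols("x m")
phi = sp.Function("phi")

# Illustrative Lagrangian density for one scalar field in one dimension.
L = sp.Rational(1, 2) * sp.diff(phi(x), x) ** 2 - sp.Rational(1, 2) * m**2 * phi(x) ** 2

# E(x) = dL/dphi - d/dx ( dL/dphi' ), the integrand that must vanish.
dL_dphi = sp.diff(L, phi(x))
dL_dphiprime = sp.diff(L, sp.diff(phi(x), x))
E = sp.simplify(dL_dphi - sp.diff(dL_dphiprime, x))

print(E)   # -m**2*phi(x) - Derivative(phi(x), (x, 2)), i.e. phi'' + m^2 phi = 0
```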
Thanks samalkhaiat, it is a pity that I only just found your derivation; I seem to have missed the notification. It certainly helps me to understand the topic better.
You are certainly right that for physics it is often practical to use shortcut notation. Personally I prefer to first read and understand the mathematical background before then moving on to shorthand notations. As an example, when I first saw the Einstein summation convention over repeated indices without further explanation, I was completely lost as to what to do with those unbound variables.
In the meantime I came across the wonderful book "Emmy Noether's Wonderful Theorem" by Neuenschwander. It contains just the right dose of hints about things like summation over double indices, or which variables are free, etc., that allows one to bridge the gap from pure math knowledge to physics math notation. Furthermore, he manages to explain the Lagrangian without lone deltas. Really a nice book.
http://dsp.stackexchange.com/questions/5915/what-is-the-meaning-of-the-dft
# What is the meaning of the DFT? [duplicate]
Possible Duplicate:
Real Discrete Fourier Transform
What is the most lucid, intuitive explanation for the various FTs - CFT, DFT, DTFT and the Fourier Series?
Discrete Time Fourier Transform
I read about the Discrete Fourier Transform. I understand how to compute it given a sequence of numbers (a digital signal), but I still don't get the meaning of the new sequence I obtain as the output.
I mean, I know that the Fourier Transform takes an analog signal's representation in the time domain and returns its representation in the frequency domain, its spectrum. If I look at the analog signal in a time window $T$, sampling it $N$ times, then the sampling interval will be $t_0 = T/N$ and the sampling frequency will be $f_s =1/t_0$. The sampled values of the analog signal give me a new digital signal, let's say $s_n$.
So what is the meaning of the DFT of this digital signal $s_n$? How is the DFT sequence related to $s_n$? How is it related to the original analog signal? What is the meaning of the frequency components I get? Should they be thought of as an approximation of the analog signal's frequency components, or as sampled values of its spectrum?
Thanks a lot!
-
Hey! Welcome to dsp.SE :) I'm not an expert of DFT, but a very quick search of the site gives me a lot of questions and explanations of it: 1, 2, 3. Maybe there's something on the site already that can help you, did you look through all the offered answers? We try to avoid duplicating same information in multiple answers. I'm sure that we will be glad to help you with any questions you might have left after that :) – penelope Nov 6 '12 at 9:53
## 1 Answer
As you noted, the discrete Fourier transform (DFT) maps a length-$N$ sequence to its (also length-$N$) frequency-domain equivalent. The time-domain signal $s[n]$ is transformed into a frequency-domain signal $S[k]$, such that the following two relations hold true:
$$S[k] = \sum_{n=0}^{N-1} s[n] e^{\frac{-j2\pi k n}{N}} \text{ (the DFT)}$$ $$s[n] = \sum_{k=0}^{N-1} S[k] e^{\frac{j2\pi k n}{N}} \text{ (the inverse DFT)}$$ (ignoring scale factors that can be placed in many different places depending upon the preference of the author; one way to make them "symmetric" is to place a factor of $\frac{1}{\sqrt{N}}$ in front of each equation: the unitary DFT).
There are a number of intuitive interpretations for the action implied by these equations. Here's the way I usually think about them:
• Forward DFT: Each DFT output value $S[k]$ is a complex value that represents the amplitude and phase of $s[n]$'s content at frequency $\frac{2\pi k}{N}$. The multiplication by the exponential function effectively shifts $s[n]$ down in frequency by $\frac{2\pi k}{N}$. The sum over $N$ time samples can then be thought of as applying a decimating lowpass filter. So, effectively, the $N$ outputs of the DFT represent the results of applying a bank of equally-spaced filters across $s[n]$'s frequency band, thus measuring "how much" energy is present in various frequency bands in the input signal.
• Inverse DFT: The DFT output values $S[k]$ are used to weight a number of complex exponential functions in order to resynthesize the original time-domain signal $s[n]$. The frequency associated with $S[k]$ is the additive inverse of the frequency used during the forward DFT. This view clearly shows that the DFT can be thought of as a means to decompose an arbitary finite-length time-domain signal $s[n]$ into a set of weighted orthogonal complex exponential functions (or "complex sinusoids").
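As a concrete illustration of the forward transform defined above, here is a minimal numerical sketch (Python with NumPy is assumed; the test signal and length are arbitrary, and `numpy.fft.fft` happens to use the same unscaled, negative-exponent convention as the DFT equation given earlier):

```python
import numpy as np

N = 8
n = np.arange(N)
s = np.cos(2 * np.pi * 2 * n / N)      # test signal: 2 cycles across the N samples

# Direct evaluation of S[k] = sum_n s[n] * exp(-j*2*pi*k*n/N)
k = n.reshape(-1, 1)
S_direct = (s * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

S_fft = np.fft.fft(s)                  # library implementation, same convention
print(np.allclose(S_direct, S_fft))    # True
print(np.round(np.abs(S_direct), 3))   # energy concentrated in bins k = 2 and k = N - 2
```

The two nonzero bins are the "filter bank" picture in action: the real cosine shows up as a pair of complex exponentials at frequencies $\pm 2\pi \cdot 2/N$.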
Now, extending this concept a bit: when you sample a continuous-time signal $s(t)$ at discrete time instants, you get a (potentially infinite-length) discrete-time signal $s[n]$. Applying the Fourier transform to such a discrete-time signal results in the discrete time Fourier transform (DTFT), which is continuous in frequency and, like the DFT, periodic in frequency with period $2\pi$. Assuming that a large-enough sample rate was used during the conversion to discrete-time, the result of the DTFT will look a lot like the Fourier transform of the original signal $s(t)$. The DFT is only applicable to finite-length signals, so in order to apply it, one must truncate the discrete-time signal $s[n]$ to some finite length $N$, resulting in a potentially-shorter signal $s_N[n]$.
• For this special case of a finite-length discrete-time signal, the $N$ DFT outputs are just equally-spaced samples of $s_N[n]$'s DTFT.
• $s_N[n]$'s DTFT is related to $s[n]$'s DTFT; they may differ if $s[n]$ was longer than $N$ samples long originally and some truncation/windowing was involved in generating a finite sample length.
• $s[n]$'s DTFT is related to the original signal's Fourier transform, with the caveat that the DTFT is periodic, so all of the frequency content from the continuous-time signal that is unambiguously representable in discrete-time (the bandwidth of which is determined by the sample rate used) is contained within a single length-$2\pi$ period of $s[n]$'s DTFT.
-
http://mathhelpforum.com/calculus/152642-rates-change.html
# Thread:
1. ## Rates of Change
6) Show a numerical method of approximating the instantaneous rate of change at $x = 3$ for the function $f(x) = -x^2 + 4x + 1$ using secants. Show two numerical approximations.
7) Show a graphical method of approximating the instantaneous rate of change at $x = 3$ for the function $f(x) = -x^2 + 4x + 1$ using secants. Show two graphical approximations.
2. Originally Posted by ilovemymath
6) Show a numerical method of approximating the instantaneous rate of change at $x = 3$ for the function $f(x) = -x^2 + 4x + 1$ using secants. Show two numerical approximations.
Pick two values of Δx to plug into the difference quotient
$m_{sec} = \dfrac{f(x + \Delta x) - f(x)}{\Delta x}$
For example, you could let Δx = 0.1 and you would get
$m_{sec} = \dfrac{f(3 + 0.1) - f(3)}{0.1}$
or
$m_{sec} = \dfrac{f(3.1) - f(3)}{0.1}$
Plug in 3 and 3.1 into $f(x) = -x^2 + 4x + 1$ to find f(3) and f(3.1).
After that, pick another value for Δx and repeat.
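For instance, a quick numerical check of the two secant approximations (a sketch in Python; the choices Δx = 0.1 and Δx = 0.01 are arbitrary):

```python
def f(x):
    return -x**2 + 4*x + 1

def secant_slope(x, dx):
    # difference quotient (f(x + dx) - f(x)) / dx
    return (f(x + dx) - f(x)) / dx

for dx in (0.1, 0.01):
    print(dx, secant_slope(3, dx))
# prints approximately -2.1 and -2.01: the secant slopes approach -2,
# the instantaneous rate of change at x = 3
```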
http://mathoverflow.net/questions/71006/semi-simple-kahler-groups
## Semi-Simple Kahler Groups?
We say that a Kahler manifold is a Kahler group if it is also a Lie group. I would like to know which semi-simple Lie groups are also Kahler groups?
-
I was about to reply, but then I saw that you meant something quite different than what I thought. Kahler group usually means fundamental group of a compact Kahler manifold. But anyway, for the question you asked, do you want the metric to be invariant under the group action? – Donu Arapura Jul 22 2011 at 20:37
Yes, it should be invariant. – Jean Delinez Jul 24 2011 at 14:15
## 2 Answers
Semisimple Lie groups admit bi-invariant metrics (although not necessarily positive-definite) and it is not hard to show that if a Lie group admits a bi-invariant metric and also a left-invariant Kähler structure, then the group is abelian, contradicting the assumption that it was semisimple. Hence no semisimple Lie group admits a left-invariant Kähler structure.
In the case where the Kähler structure is not left-invariant, the two structures do not talk to each other and hence you are asking whether a manifold which admits the structure of a semisimple Lie group could also admit a Kähler structure. The identity component of such a manifold is (rationally) homotopy equivalent to a product of odd spheres (of dimension at least 3), so $H^2$ vanishes and thus, if compact, they again cannot admit a Kähler structure.
I'm not sure about the noncompact case, though; but it looks unlikely to me at this time.
-
1
In fact, it is enough for the group to be unimodular in order to deduce that if it has a left-invariant Kähler structure it is abelian. This is proved in a paper by Lichnerowicz and Medina (springerlink.com/content/p7p8gl6h5163j465) – José Figueroa-O'Farrill Jul 22 2011 at 22:54
See http://eom.springer.de/h/h047640.htm and references therein.
-
How does this answer the question? – José Figueroa-O'Farrill Jul 22 2011 at 22:32
http://mathhelpforum.com/advanced-algebra/62596-very-tough-polynomial-question.html
# Thread:
1. ## Very tough polynomial question.
Let $f(x)=a_0x^n+a_1x^{n-1}+...+a_{n-1}x+a_n$ be a polynomial with integer coefficients and assume that, for a given prime integer $p$:
1. $p$ does not divide $a_0$,
2. $p$ divides $a_1,a_2,...,a_n$
3. $p^2$ does not divide $a_{n-1}$
Prove that either $f(x)$ has a rational root, or $f(x)$ is irreducible over the field $\mathbb{Q}$.
What is the meaning of the last sentence? Do I have to show that there are two cases: case 1, irreducible, and case 2, has a rational root?
I assume that I should consider two cases. Case 1 is easy; it occurs when $p^2$ does not divide $a_{n}$. In this case Eisenstein's criterion gives the answer. But what about case 2, when $p^2$ divides $a_{n}$?
I managed to show the following in case 2:
$f(x)=x(a_0x^{n-1}+a_1x^{n-2}+...+a_{n-1})+a_n$ where $a_0x^{n-1}+a_1x^{n-2}+...+a_{n-1}$ is irreducible according to Eisenstein's criterion. I have a feeling that $f(x)$ has a rational root, but I do not know how to show it...
2. First, you should have learned that rational roots are of the form $\frac{\pm \text{ factors of }a_n}{\text{ factors of }a_0}$. If not, tell me and I'll show you.
Notice that it could be the case that none of them works. In this case the polynomial is irreducible over $\mathbb Q$.
Else, let $c_0$ be a root found using the criterion $\frac{\pm \text{ factors of }a_n}{\text{ factors of }a_0}$.
Then it is possible to rewrite the polynomial as $g(x)=(x-c_0)(b_0x^{n-1}+b_1x^{n-2}+...+b_{n-2}x+b_{n-1})$. You can now simply factor the second polynomial with the criterion $\frac{\pm \text{ factors of }b_{n-1}}{ \text{ factors of }b_0}$.
At this point you can use the Eisenstein's criterion.
$h(x)=b_0x^{n-1}+b_1x^{n-2}+...+b_{n-2}x+b_{n-1}$
Even though you have divided these coefficients by some number, you can again put it in the form of a polynomial with integer coefficients by multiplying the whole thing by the least common multiple of the denominators.
You still have that some $p|b_i$ but $p^2 \text{ doesn't divide } b_{n-1}$ that is some multiple of $a_{n-1}$.
Therefore, this $h(x)$ is irreducible, and thus you have either one rational root or none.
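As an aside, the candidate rational roots $\frac{\pm \text{ factors of }a_n}{\text{ factors of }a_0}$ are easy to enumerate and test by machine; a minimal sketch in Python (the example polynomial is made up for illustration and is unrelated to the Eisenstein setup above):

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_root_candidates(coeffs):
    """coeffs = [a_0, a_1, ..., a_n], with a_0 the leading coefficient."""
    a0, an = coeffs[0], coeffs[-1]
    return sorted({Fraction(sign * p, q)
                   for p in divisors(an) for q in divisors(a0)
                   for sign in (1, -1)})

def evaluate(coeffs, x):
    val = Fraction(0)
    for c in coeffs:               # Horner's method
        val = val * x + c
    return val

coeffs = [2, 3, -11, -6]           # 2x^3 + 3x^2 - 11x - 6
print([r for r in rational_root_candidates(coeffs) if evaluate(coeffs, r) == 0])
# finds the roots -3, -1/2 and 2
```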
3. Originally Posted by vincisonfire: First you should have learned that rational roots are of the form $\frac{\pm \text{ factors of }a_n}{\text{ factors of }a_0}$.
I know that, and I tried it, but did not find it very helpful.
You did not use $a_n$, so you continued my case 2?
4. The three conditions you post add up all together.
You have only one case with three conditions. Not three with one condition.
What you did in your "case 2" doesn't work.
Any polynomial of the form $ax^2 +bx$ is reducible. But you can't divide the polynomial in the way you did. You can get a factor out but not a term.
Remark that I used the three conditions in my answer. I'm pretty sure I'm right even though my explanation can seem a little rusty.
5. You still have that some $p|b_i$ but $p^2 \text{ doesn't divide } b_{n-1}$ that is some multiple of $a_{n-1}$.
Therefore, this $h(x)$ is irreducible, and thus you have either one rational root or none.
How do you prove this?
6. When you make the division, you'll find that $b_{n-1} = \frac{a_n}{c_0}$ because $a_0x^n+a_1x^{n-1}+...+a_{n-1}x+a_n = (x-c_0)(b_0x^{n-1}+b_1x^{n-2}+...+b_{n-2}x+b_{n-1})$
You thus have an integer that p does not divide, divided by an integer that p does divide.
Since $p$ divides all $a_i$ except $a_0$, when you multiply everything by the least common multiple you are sure you won't have a $p^2$ factor in the factorization of $b_{n-1}$.
You are sure, though, that you will have a factor of $p$ in some $a_i$, since $p$ doesn't divide $a_0$.
http://physics.stackexchange.com/questions/32391/what-is-the-difference-between-a-battery-and-a-charged-capacitor?answertab=active
# What is the difference between a battery and a charged capacitor?
What is the difference between a battery and a charged capacitor?
I can see a lot of similarities between a capacitor and a battery. In both of these, charges are separated, and when not connected in a circuit both can have the same potential difference V.
The only difference is that a battery runs for a long time but a capacitor discharges almost instantaneously. Why this difference? What is the exact cause of the difference in the discharge times?
-
## 5 Answers
A battery generates a voltage by a chemical reaction. There is a class of chemical reactions called redox reactions that involve the transport of electrons, and you can use the reaction to drive electrons through an external circuit. This is the basis of a battery. The battery will continue to provide power until all the reagents have been used up and the reaction stops. A battery generates a potential difference that is related to the free energy change of the reaction occurring in the battery. Note that there is no charge separation in a battery until you connect it to something and current flows.
A capacitor is completely different. It has a potential only because charge has been stored on it, and when you connect the capacitor to an external circuit a current only flows until all the charge has drained. Unlike a battery, the voltage on a capacitor is variable and is proportional to the amount of charge stored on it.
-
A quick note about capacitors storing charge... they don't, not as used in electric circuits. Instead, charge is "moved" from one plate to the other via the external circuit. The energy is stored in the electric field between the plates. To "charge" a capacitor is not to store charge on it but, but to store energy - by separating charge. – Alfred Centauri Jul 21 '12 at 11:53
Also, a quick note about charge separation in a battery. There is an initial amount of charge separation that establishes the open circuit voltage across the battery. It's not much, but it has to happen to develop that voltage. – Alfred Centauri Jul 21 '12 at 11:56
Agreed. I actually have no idea what the capacitance of a battery is, but it's small. – John Rennie Jul 21 '12 at 14:02
Why this difference? What is the exact cause for the difference in the discharge times?
The difference is due to the difference in the amount of energy stored.
Consider, for example, a typical alkaline cell. From this chart, we see that a new alkaline long life AAA cell stores about 5kJ of energy.
Now, consider a large electrolytic capacitor of, say, 1000$\mu F$. Charged to the same nominal voltage of the cell, 1.5$V$, the energy stored is just over 1mJ. That's 6 orders of magnitude less.
Put another way, to store the same amount of energy at 1.5$V$ as the AAA battery would require a capacitor of about 4400$F$(!!!). That's an enormous capacitance.
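To make the comparison concrete, a quick back-of-the-envelope check (a sketch in Python; the 5 kJ figure for the AAA cell is the one quoted above):

```python
battery_energy = 5e3            # J, approximate energy in a fresh AAA alkaline cell
C = 1000e-6                     # F, a large electrolytic capacitor
V = 1.5                         # V, same nominal voltage as the cell

cap_energy = 0.5 * C * V**2     # energy stored in a capacitor: E = (1/2) C V^2
print(cap_energy)               # ~1.1e-3 J, about six orders of magnitude below 5 kJ

C_equiv = 2 * battery_energy / V**2   # capacitance needed to match the battery at 1.5 V
print(C_equiv)                  # ~4.4e3 F
```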
-
Some complementary remarks:
Seen as black boxes, both are simply voltage sources. A battery is designed to supply, as far as possible, a constant voltage whereas a capacitor has no such "dosage capabilities".
From an applied standpoint, batteries store a lot of charge but are slow to charge/discharge. Capacitors are the exact opposite: low storage, fast usage. That is why the flash in a camera uses a battery to charge a capacitor.
There is also a lot of research in super-capacitors, which try to yield the best of both worlds.
-
In a battery, charges are not separated in advance. It is an oxidation/reduction chemical reaction that produces the electrons, i.e. the current. In a capacitor, the electrons are stocked in advance, during the charging of the capacitor.
-
Between the plates of a battery, there is a mechanism (usually a chemical reaction of some kind) that maintains a charge separation across the plates. There is no such mechanism between the plates of a capacitor, so the charge separation cannot be maintained as charge drains off the plates.
-
http://physics.stackexchange.com/questions/39034/energy-in-electric-field?answertab=active
# Energy in electric field
I'm having some trouble understanding a homework question and would appreciate some help.
The question is as follows:
Jenny charges a capacitor with the help of a battery. She then removes the battery and halves the distance between the two plates. How does the energy stored in the capacitor change?
I solved the question like this:
$E=\frac{U}d$ and $U= \frac{W}Q$
$\implies W=EQd$
Hence, the energy doubles (as $E=\frac{U/d}2 \implies 2E=\frac{U}d$)
where:
$W$ = energy, $E$ = Electric field intensity, $Q$ = charge, $U$ = voltage, $d$ = distance
This answer doesn't make any sense to me, considering the law of conservation of energy (so where did all that energy come from?).
It also turns out that my answer is wrong, in fact the energy is halved. This also doesn't make any sense to me, what am I missing?
-
2
@CrazyBuddy do you think you could cut down on your over use of the exclamations mark?!!!!!!! – Physiks lover Oct 4 '12 at 17:30
@Physikslover: Hello there Physiks lover, Is it discouraging or disgusting..? I'm just using those to express something which requires attention. I'm trying to avoid it. But, can't stop.. Forced by habit. Sorry..! – Ϛѓăʑɏ βµԂԃϔ Oct 4 '12 at 17:36
1
Excellent question! @CrazyBuddy while you're right that the homework tag is appropriate here (well, probably), this is exactly the kind of homework question we want - a conceptual question, not looking for help on a solution. – David Zaslavsky♦ Oct 4 '12 at 18:16
## 2 Answers
1) There's a problem with your $U=W/Q$ (or, equivalently, $W=QU$) formula. You can review Crazy Buddy's analysis, or do it yourself by imagining charging a capacitor from 0 to a final voltage $U$ with a constant current $I$. The instantaneous voltage $u(t)$ grows linearly with time, so when you integrate the energy under the power curve $\left(W = \int p(t)\, dt = I \int u(t)\, dt \right)$ you're figuring the area of a triangle, and a factor of 1/2 emerges. The correct formula is $W=(1/2) QU$.
2) The key to this problem is that the battery is disconnected before the plates are moved. Once disconnected, the charge Q on the capacitor is fixed; hence the electric field E is also fixed (via Gauss' law), independent of the plate spacing. Since the voltage $U=Ed$, reducing the spacing reduces the voltage, and the energy.
[For grins, you might consider a different case in which the battery is not disconnected. Then the voltage on the capacitor, not the charge, is fixed, and the result when the plate spacing is reduced is quite different.]
3) As pointed out by others, the field does work as the spacing is reduced.
-
The energy stored by a capacitor with capacitance $C$ and charged to voltage $V$ is equal to $\frac{1}{2}CV^2$
For a parallel plate capacitor, the capacitance $C$ is equal to $\frac{\epsilon A}{d}$
So, when the distance between the plates is halved, the capacitance is doubled.
Also, for the same capacitor, we have that the voltage $V$ is equal to $\frac{Q}{C}$.
So, when the capacitance is doubled, the voltage is halved.
Denoting the initial capacitance and voltage by $C_0, V_0$, the final energy, after the distance is halved is given by:
$E = \dfrac{1}{2}(2C_0)(\frac{V_0}{2})^2 = \dfrac{1}{4}C_0V_0^2 = \dfrac{E_0}{2}$
So, halving the distance between the plates of a disconnected, charged capacitor reduces the stored energy by half. Why?
There is a force between the plates acting to pull them together. When you allow the plates to move closer, there is work done by the system, reducing the stored energy.
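A numerical restatement of the argument above (a sketch in Python; the plate area, gap and initial voltage are arbitrary illustrative values):

```python
eps0 = 8.854e-12           # F/m, vacuum permittivity
A = 0.01                   # m^2, plate area (arbitrary)
d0 = 1e-3                  # m, initial plate separation (arbitrary)
V0 = 9.0                   # V, initial charging voltage (arbitrary)

C0 = eps0 * A / d0         # parallel-plate capacitance
Q = C0 * V0                # charge is fixed once the battery is disconnected
E0 = 0.5 * C0 * V0**2

C1 = eps0 * A / (d0 / 2)   # halving d doubles C
V1 = Q / C1                # fixed Q, so the voltage halves
E1 = 0.5 * C1 * V1**2

print(E1 / E0)             # 0.5: the stored energy is halved
```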
-
1
This was marked as homework, so the standard on physics.stackexchange is to not work out the answer in detail, but to show where they have gone wrong and give hints towards the right answer... – FrankH Oct 21 '12 at 5:47
@FrankH is right, although in this case the OP already has the answer so I'm not going to make a fuss about it. – David Zaslavsky♦ Oct 21 '12 at 16:30
http://physics.stackexchange.com/questions/21553/example-in-the-book-a-simple-accelerometer/24511
# A simple accelerometer
You tape one end of a piece of string to the ceiling light of your car and hang a key with mass m to the other end (Figure 5.7). A protractor taped to the light allows you to measure the angle the string makes with the vertical. Your friend drives the car while you make measurements. When the car has a constant acceleration with magnitude a toward the right, the string hangs at rest (relative to the car), making an angle $B$ with the vertical.
• (a) Derive an expression for the acceleration $a$ in terms of the mass m and the measured angle $B$.
• (b) In particular, what is $a$ when $B = 45^\circ$? When $B = 0$?
I don't care about the answers, the important thing is the following:-
The book says The string and the key are at rest with respect to the car, but car, string, and key are all accelerating in the +x direction. Thus, there must be a horizontal component of force acting on the key.
That's the reason the book decided to consider a force in the $+x$ direction, but I'm looking for a better explanation: how would I detect the force in the $+x$ direction in another way? To me, when I draw the free-body diagram of the string, there looks to be no force acting in the $+x$ direction! I understand it starts with noticing that the string is attached to the ceiling of the car, and that the car has a force causing acceleration in one direction, but I don't know how to go further than that.
-
There is a force, remember f = ma ? – Martin Beckett Feb 27 '12 at 18:16
What type of force is it? – w4j3d Feb 27 '12 at 18:40
I understand there is force, but there could be multiple forces, and the force of the string (horizontally) doesn't have to be equal to the force of the car (horizontally), since they are not an action-reaction pair. – w4j3d Feb 27 '12 at 18:45
There is only the string tension and gravity acting on the key (no fictitious "force of the car"). See my answer for more details. – kleingordon Apr 28 '12 at 9:19
## 3 Answers
Acceleration and force are not the same thing.
So first simplify the problem by just talking about acceleration.
For example, suppose the car is accelerating forward at a rate of 1.0 meter per second per second. It also feels an "acceleration" due to gravity, which is 9.8 m/(s^2). If you want to know the angle at which the key hangs, it is the angle whose tangent is 1.0/9.8. Hopefully that is enough for you to run with.
Then if you want to know force, make use of f=ma. i.e. multiply the acceleration by the vehicle's mass.
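A quick numerical version of this reasoning (a sketch in Python; the relation $a = g\tan B$ is just the ratio of horizontal to vertical acceleration described above):

```python
import math

g = 9.8                                  # m/s^2

def accel_from_angle(B_degrees):
    # horizontal acceleration inferred from the string's tilt angle
    return g * math.tan(math.radians(B_degrees))

def angle_from_accel(a):
    # tilt angle produced by a given forward acceleration
    return math.degrees(math.atan2(a, g))

print(accel_from_angle(45))              # ~9.8: at B = 45 degrees, a = g
print(accel_from_angle(0))               # 0.0: the string hangs straight down
print(angle_from_accel(1.0))             # ~5.8 degrees for a = 1.0 m/s^2
```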
-
Since the net acceleration is in the +x direction, the x component of the tension in the string must be providing the net force, since the only other force that applies, gravity, has no x component. Meanwhile, the y component of the string tension cancels the y acceleration from gravity. There is no other force acting on the key.
Finally, I believe there is a typo in the statement of the question. Your expression for the acceleration will include the gravitational acceleration g but not the mass of the key.
-
Did you forget to include the D'Alembert forces in the Free Body Diagram?
http://en.wikipedia.org/wiki/D'Alembert's_principle
-
http://logiciansdoitwithmodels.com/author/replicakill/
# Logicians do it with Models
A self-guided jaunt in philosophical logic
## Author Archive
### Logicomix Available for Free Online Reading via Scribd
February 24, 2013
It's a fun story about the early search for mathematical foundations involving figures like Bertrand Russell, Georg Cantor, Ludwig Wittgenstein, Gottlob Frege, Kurt Gödel and Alan Turing.
http://www.scribd.com/doc/98921232/Bertrand-Russell-Logicomix
And for those interested, here’s a review of Logicomix by noted philosopher of mathematics, Paolo Mancosu:
https://philosophy.berkeley.edu/file/509/logicomix-review-january-2010.pdf
### The Nonsense Math Effect
December 28, 2012
An article suggesting that non-math academics are impressed by equations even if the equations are nonsense:
http://journal.sjdm.org/12/12810/jdm12810.pdf
Tags:Academia, Education, Math, Research
### Well Ordering Infinite Sets, the Axiom of Choice and the Continuum Hypothesis
December 17, 2012
We’re trying to understand the relationship between $\aleph_{0}$, and other infinite cardinal numbers. We’ve reviewed two ways (here and here) to generate infinite cardinals greater than $\aleph_{0}$. The first way is by considering the set of all subsets of positive integers, the power set $\mathcal{P}(\mathbb{N})$ of $\mathbb{N}$ with cardinality $2^{\aleph_{0}}$. The second way is by considering the set of countably infinite ordinals, $\omega_{1}$ and its cardinality, $\aleph_{1}$.
We begin by introducing the axiom of choice. The axiom of choice (AC) is an axiom of set theory that says, informally, that for any collection of bins, each containing at least one element, it’s possible to make a selection of at least one element from each bin. It was first set forth by Zermelo in 1904 and was controversial for a time (early 20th century) due to its being highly non-constructive.
For example, AC is equivalent to the well ordering theorem (WOT), which states that every set can be well ordered, and it can be proven using AC/WOT that there are non-Lebesgue-measurable sets of real numbers. Nevertheless, this result is consistent with the result that no such set of reals is definable! The use of AC to achieve unintuitive mathematical results, like paradoxical decompositions in geometry in the spirit of the Banach-Tarski paradox, has also fueled skepticism and mistrust of the axiom. But contemporary mathematicians make free use of the axiom and hardly mind that it was ever controversial.
Formally, the axiom of choice says the following:
For each family $(X_{i})_{i \in I}$ of non-empty sets $X_{i}$, the product set $\prod_{i \in I}{(X_{i})}$ is non-empty.
The elements of $\prod_{i \in I}{X_{i}}$ are actually "choice functions"
$x: I \rightarrow \bigcup_{i \in I}{X_{i}}$, satisfying $x(i) \in X_{i}$ for each $i \in I$.
AC is important when thinking about infinite cardinals in part because of its equivalence to the WOT. Because WOT says that every set can be well ordered, it follows that each cardinal (any set really) can be associated with an ordinal number and you can count them via the ordinals. For instance, $\aleph_{0}$ can be represented by $\omega$, $\aleph_{1}$ by $\omega_{1}$ and in general AC/WOT enables us to give the von Neumann definition of cardinal numbers where the cardinality of a set $X$ is the least ordinal $\alpha$ such that there is a bijection between $X$ and $\alpha$.
AC and WOT do a lot of other work as well in terms of ordering infinities. We say that an aleph is the cardinal number of a well-ordered infinite set. Because every set is well-orderable, all infinite cardinals are alephs. It was shown by Friedrich Hartogs in 1915 that trichotomy for cardinals, the property that for any two cardinals $x$ and $y$ either $x < y$, $x = y$, or $x > y$, is equivalent to AC.
So with AC we can list, in (a total) order, the infinite cardinals and compare them. Thus we can set up the following infinite list of $alephs$, $\aleph_{0}, \aleph_{1}, \aleph_{2}, \cdots, \aleph_{\alpha}, \cdots$. We know that $\aleph_{1}$ is greater than and distinct from $\aleph_{0}$. The cardinal $\aleph_{1}$ is the cardinality of the ordinal $\omega_{1}$ which is larger than all countable ordinals, so $\aleph_{1}$ is distinguished from $\aleph_{0}$. What about $2^{\aleph_{0}}$?
It’s unclear where $2^{\aleph_{0}}$ fits in in the list $\aleph_{0}, \aleph_{1}, \aleph_{2}, \cdots, \aleph_{\alpha}, \cdots$. The infinite cardinal $2^{\aleph_{0}}$ is the cardinality of the set of subsets of $\mathbb{N}$, but it’s also the cardinality of the set, $\mathbb{R}$, of real numbers or the set of points on a line, known as the continuum. (This equality is provable using the Cantor-Schröder-Bernstein theorem and follows from the proof of the uncountability of $\mathbb{R}$.)
Georg Cantor, the inventor of set theory, conjectured in 1878 that there is no set whose cardinality is strictly between the cardinality of the integers ($\aleph_{0}$) and the cardinality of the continuum ($2^{\aleph_{0}}$). Since we're assuming AC, that means that $2^{\aleph_{0}} = \aleph_{1}$.
This conjecture is famously known as the Continuum Hypothesis (CH) and was the first of 23 problems in David Hilbert’s famous 1900 list of open problems in mathematics. The problem of whether or not CH is true remains open to this day although Kurt Gödel (in 1940) and Paul Cohen (in 1963) showed that CH is independent of the axioms of Zermelo-Fraenkel set theory, if these axioms are consistent.
There is much of philosophical interest surrounding the mathematics of the continuum hypothesis and I hope to be able to turn my attention to those topics in the future.
Posted in Axiom of Choice, Axioms of Set Theory, Continuum Hypothesis, Set Theoretic Infinity, Transfinite Cardinals | Leave a Comment »
### Priest, Beall and Armour-Garb’s “The Law of Non-Contradiction” Available Online via Scribd
November 5, 2012
Priest, Beall and Armour-Garb's "The Law of Non-Contradiction" (link below) is a great collection of essays on the philosophy and logic of dialetheism, the belief that there are sentences A such that both A and its negation, ¬A, are true. Non-classical, paraconsistent logics may be necessary for formalizing and understanding physical and social systems.
http://www.scribd.com/doc/62132941/3/Letters-to-Beall-and-Priest
Posted in Non-Classical Logic | 2 Comments »
### Paul Cohen Reflects on the Nature of Mathematics
November 5, 2012
Reflections on the nature of mathematics from Paul Cohen’s “Comments on the foundations of set theory” in Scott and Jech, Axiomatic Set Theory, Vol.1, p. 15.
### A Mathematician’s Survival Guide, by Pete Casazza
April 29, 2012
“All My Imaginary Friends Like Me” -Nikolas Bourbaki
From §2 of “A Mathematician’s Survival Guide“, a good read for all, not just mathematical people.
### The Cardinality of Uncountable Sets II
March 6, 2012
We can get at another infinite cardinal that is greater than $\aleph_{0}$ by thinking about the ordinal numbers. Think of the natural, or counting numbers $\textup{N} = (1, 2, 3, \cdots)$. These numbers double as the finite cardinal numbers. Cardinal numbers, we have seen, express the size of a set or the number of objects in a collection (e.g., as in “24 is the number of hours in a day”). But they also double as the finite ordinal numbers, which indicate a place in an ordering or in a sequence (e.g., as in ” the letter ‘x’ is the 24th letter in the English alphabet”). In the case of finite collections, the finite ordinal numbers are the same as the finite cardinals.
But when we start thinking of infinite collections the similarities diverge. In order to see the differences in the infinite case we should get clear on what an ordinal number is. Say that a set $x$ is transitive if, and only if, every element of $x$ is a subset of $x$, where $x$ is not a urelement (something that is not a set). Now say that a set $x$ is well-ordered by the membership relation, $\in$, if the relation $\{\langle y, z\rangle \in x \times x : y \in z\}$ is a well-ordering of $x$. What this does is simply order the elements of $x$ in terms of membership. We can do this type of thing with transitive sets. Combining these two definitions we get the definition of an ordinal number: a set $x$ is an ordinal if, and only if, $x$ is transitive and well-ordered by $\in$.
Now, let $\omega = \{1, 2, 3, \cdots\}$ (i.e., the set $\textup{N}$ of natural numbers). $\omega$ is an ordinal because when we think of the natural numbers as constructed by letting $0 = \emptyset$ and letting $1 = \{0\}$, the singleton set of $0$, $2 = \{0, \{0\}\}$, and so on and so forth, it satisfies the definition above of an ordinal number as a transitive set well-ordered by $\in$. So we can set up the sequence $1, 2, 3, \cdots, \omega$ with all ordinals less than $\omega$ either equal to $0$ or one of its successors.
Suppose that you take the natural numbers and re-arrange (re-order) them so that $0$ is the last element. This is weird because the regular ordering of the natural numbers has no last element. But still, you can think of there being a countable infinity of natural numbers $1, 2, 3, \cdots$ prior to the appearance of $0$. So we have the standard order of $\textup{N}$ and we have added another element, $0$. If we let $\omega$ be the standard order of $\textup{N}$, then we have just described $\omega + 1$. We can do the same thing by now setting $1$ as the last element of $\omega + 1$ and thus get $\omega + 2$. Note that addition (and multiplication below) does not commute; e.g., $1 + \omega = \omega$ and not $\omega + 1$. This process can be generalized (e.g., $n, n +1, \cdots, 0, 1, \cdots, n -1$) to get $\omega + n$.
Again, doing something weird: take the natural numbers and put all the even numbers first, followed by the odd numbers. It'll look like this: $0, 2, 4, \cdots, 2n, \cdots, 1, 3, 5, \cdots, 2n+1, \cdots$. Here we have taken $\omega$ (the evens) and appended $\omega$ (the odds) to it, so we have $\omega + \omega$. In each case, $\omega$, $\omega + n$ and $\omega + \omega$, we are dealing with the same cardinality, the cardinality of $\textup{N}$.
We've created a variety of different ordinal numbers here, and, as they represent different orderings of the natural numbers, they are all countable. There are many more countably infinite ordinals. For example, the ordinal $\omega \cdot 2 = \{ 0, 1, 2, \cdots, \omega, \omega+1, \omega+2, \cdots \}$, then $\omega \cdot 2+1, \cdots, \omega^{2}, \omega^{3}, \omega^{4}, \cdots, \omega^{\omega}, \cdots, \omega^{\omega^{\omega}}, \cdots$.
Taking the countable ordinals and laying them out (kind of like in the previous sentence, but starting with $1, 2, 3, \cdots, \omega, \cdots, \omega + 1, \omega + 2, \cdots$) we end up with a set that is itself an ordinal. In order to see this let $\alpha$ be the set of countable ordinals. If $\beta \in \alpha$ then $\beta \subset \alpha$ since the members of $\beta$ are countable ordinals. Therefore $\alpha$ is an ordinal.
It is in fact the first uncountable ordinal because if it were countable, then $\alpha$ would be a member of itself and there would be an infinitely descending sequence of ordinals. But because the ordinals are transitive sets (see definition above), this cannot be the case. So the set of countable ordinals is uncountable. (It is also the smallest such set because the ordinals are well ordered by $\in$, so every ordinal in $\alpha$ is a member of $\alpha$ and countable.) This uncountable set goes by the name $\omega_{1}$.
Here we see how the similarities between the ordinal numbers and the cardinal numbers in the finite case diverge in the infinite case. Whereas there is only one countably infinite cardinal, $\aleph_{0}$, there are uncountably many countably infinite ordinals, namely all countably infinite ordinals less than $\omega_{1}$.
It is natural to wonder about the cardinality of the set $\omega_{1}$ of countable ordinals. Its cardinality is transfinite and is denoted by the uncountable cardinal number, $\aleph_{1}$.
So far we’ve talked about $\aleph_{0}, 2^{\aleph_{0}}$ (see here, and here, respectively), and have generated $\aleph_{1}$ by considering the uncountable set of countably infinite ordinals, $\omega_{1}$. In the next update we’ll talk more about the relationship between these cardinal numbers as well as the celebrated axiom of choice.
Posted in Cardinal Numbers, Ordinal Numbers, Transfinite Cardinals | 1 Comment »
### Cantor’s Attic
January 15, 2012
Cantor’s Attic: a resource on mathematical infinity. Philosophically rich mathematics is the best mathematics.
### FOM Posting on A. Kiselev’s claim that “There are no weakly inaccessible cardinals” in ZF
August 17, 2011
Alex Kiselev claims to have shown “There are no weakly inaccessible cardinals” in Zermelo-Fraenkel set theory (ZF). This would have the consequence that strongly inaccessible cardinals don’t exist either and so on for all the other large cardinals. Martin Davis on the FOM list cautions that the claim is “highly dubious”.
Here are links to Kiselev’s papers:
Part 1: http://arxiv.org/abs/1010.1956
Part 2: http://arxiv.org/abs/1011.1447
Link to the FOM list entry: http://www.cs.nyu.edu/pipermail/fom/2011-August/015694.html
Posted in Cardinal Numbers, Inaccessible Cardinals | 2 Comments »
### The Cardinality of Uncountable Sets I
August 11, 2011
We saw earlier that $\aleph_{0}$ can accommodate a countable infinity of countable infinities. Now we’re going to produce an infinite cardinal number that is bigger than $\aleph_{0}$. Cardinal numbers express the size of, or number of elements of a set. Countably infinite sets have cardinality $\aleph_{0}$. We know that countably infinite sets can be matched up (by way of a one-to-one and onto function) to the set of positive integers. So if we can find a cardinal number greater than $\aleph_{0}$, we can find a set greater than the set of positive integers, or an uncountably infinite set.
We're going to begin with an infinite list of subsets of the set of positive integers: $S_{1}, S_{2}, S_{3}, \cdots$ And we're going to represent each of these sets by a function of positive integers:
$s_{n}(x)= \begin{cases} 1, &\text{if } x \in S_{n};\\ 0, &\text{if } x\notin S_{n}. \end{cases}$
For example, if the third set in our list, $S_{3}$ is the set of positive even integers, then the values of the function $s_{3}(x)$ are $s_{3}(1) = 0, s_{3}(2) = 1, s_{3}(3) = 0 \cdots$ And if the fourth set in our list, $S_{4}$ is the set of squares, then the values of the function $s_{4}(x)$ are $s_{4}(1) = 1, s_{4}(2) = 0, s_{4}(3) = 0 \cdots$ and so on.
Imagine now that we set up an infinite table like this. The top row will be our header row, or our x-axis and will contain, in the first position the label “Sets of positive integers”. To the right of this label we list out the positive integers, $1, 2, 3, \cdots$ These are our columns. Immediately below the label “Sets of positive integers” we list the names of the each subset of positive integers, $S_{1}, S_{2}, S_{3} \cdots$ This is our y-axis and extends infinitely downwards. We now fill out the values of each coordinate using the values of the functions $s_{1}, s_{2}, s_{3} \cdots$ extending to the right of each set name on the y-axis. Basically the rows of the infinite table contain the 0-1 representation of each of the sets.
The entries of the table (not reproduced here) are just these 0-1 indicator values.
The diagonal values of the table, $s_{1}(1), s_{2}(2), s_{3}(3), \cdots$, form a sequence called the diagonal sequence. We'll return to the diagonal sequence in a moment. But first we want to ask: does our list (along the y-axis) contain all of the sets (subsets) of positive integers? It doesn't if we can always produce a set different from each of the sets on the list.
Here is where the diagonal comes in. The diagonal sequence is just a sequence of 0s and 1s and may very well encode a set of positive integers appearing in our list. But we're trying to find a set that does not appear on the list. It's easy to find one if we build a sequence that disagrees with the diagonal sequence in every position. So we can take the diagonal sequence and create the antidiagonal by changing 1s to 0s and 0s to 1s in the diagonal sequence. That is, let the antidiagonal be given by subtracting each element of the diagonal sequence from 1. The antidiagonal is $1 - s_{1}(1), 1 - s_{2}(2), 1 - s_{3}(3), \dots$ and does not appear anywhere as a row in our table.
But suppose that the antidiagonal did appear as, say, the kth row in our table, thus representing the kth subset of positive integers. It would look like this: $s_{k}(1) = 1 - s_{1}(1), s_{k}(2) = 1 - s_{2}(2), s_{k}(3) = 1 - s_{3}(3), \cdots s_{k}(k) = 1 - s_{k}(k), \cdots$ But $s_{k}(k) = 1 - s_{k}(k)$ can never obtain because $s_{k}(k)$ has to either be 0 or 1. If it’s a 0, then we have $0 = 1 - 0 = 1$. But if it’s a 1 then we have $1 = 1 - 1 = 0$. In either case, it is absurd, so the antidiagonal, whatever it may be, must be different from any set appearing in the list $S_{1}, S_{2}, S_{3} \cdots$ of subsets of positive integers. If we take the antidiagonal and append it to our list of subsets of positive integers all we have to do is repeat the argument and we will end up with another distinct antidiagonal sequence that does not appear on our list.
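The construction is easy to mimic on a finite fragment (a sketch in Python; each "set" is represented by the first few values of its indicator function, and the particular rows are arbitrary):

```python
# Finite fragment of the diagonal argument: row n holds s_n(1), ..., s_n(N).
N = 6
rows = [
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1],   # e.g. the even numbers
    [1, 0, 0, 1, 0, 0],   # e.g. the squares
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0, 0],
]

diagonal = [rows[n][n] for n in range(N)]
antidiagonal = [1 - d for d in diagonal]

print(antidiagonal)          # differs from row n in position n
print(antidiagonal in rows)  # False: it appears nowhere in the list
```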
The set of all subsets of the positive integers is called the power set of the set of positive integers. If the set of positive integers is denoted by $\textup{N}$ then the power set of $\textup{N}$ is $\mathcal{P}(\textup{N})$.
What can we say about the cardinality of $\mathcal{P}(\textup{N})$? Well, for any one of the sequences given by our list of subsets of positive integers, the first digit can be either 0 or 1, and the same is true for the second and third digits, and so on for all $\aleph_{0}$ digits in the sequence. This means that there are $2 \times 2 \times 2 \times \cdots$ (for $\aleph_{0}$ factors) possible sequences of 0′s and 1′s and so there are $2^{\aleph_{0}}$ sets of positive integers. And we have just shown by means of the antidiagonal that $2^{\aleph_{0}} > \aleph_{0}$.
So $2^{\aleph_{0}}$ is an infinite cardinal number that is greater than $\aleph_{0}$ and the power set, $\mathcal{P}(\textup{N})$, of the positive integers is an uncountably infinite set.
In the next update we'll produce another cardinal number, $\aleph_{1}$, greater than $\aleph_{0}$.
Posted in Cardinal Numbers, Elementary Logic, Transfinite Cardinals | 2 Comments »
http://cs.stackexchange.com/questions/342/not-all-red-black-trees-are-balanced
# Not all Red-Black trees are balanced?
Intuitively, "balanced trees" should be trees where left and right sub-trees at each node must have "approximately the same" number of nodes.
Of course, when we talk about red-black trees (see the definition at the end) being balanced, we actually mean that they are height balanced and in that sense, they are balanced.
Suppose we try to formalize the above intuition as follows:
Definition: A Binary Tree is called $\mu$-balanced, with $0 \le \mu \leq \frac{1}{2}$, if for every node $N$, the inequality
$$\mu \le \frac{|N_L| + 1}{|N| + 1} \le 1 - \mu$$
holds and for every $\mu' \gt \mu$, there is some node for which the above statement fails. $|N_L|$ is the number of nodes in the left sub-tree of $N$ and $|N|$ is the number of nodes under the tree with $N$ as root (including the root).
I believe, these are called weight-balanced trees in some of the literature on this topic.
One can show that if a binary tree with $n$ nodes is $\mu$-balanced (for a constant $\mu \gt 0$), then the height of the tree is $\mathcal{O}(\log n)$, thus maintaining the nice search properties.
So the question is:
Is there some $\mu \gt 0$ such that every big enough red-black tree is $\mu$-balanced?
The definition of Red-Black trees we use (from Introduction to Algorithms by Cormen et al):
A binary search tree, where each node is coloured either red or black and
• The root is black
• All NULL nodes are black
• If a node is red, then both its children are black.
• For each node, all paths from that node to descendant NULL nodes have the same number of black nodes.
Note: we don't count the NULL nodes in the definition of $\mu$-balanced above. (Though I believe it does not matter if we do).
-
There seem to be multiple (equivalent?) definitions for red-black trees around. Can you please fix one? – Raphael♦ Mar 14 '12 at 8:49
1
@Raphael: Good point. I have edited. Thanks! – Aryabhata Mar 14 '12 at 9:01
@Aryabhata: what's with the uniqueness ($\mu'>\mu$) in your edit? I'm fine with the fact that $\frac13$-balanced implies $\frac14$-balanced. I don't think you should have to find the exact $\mu$ to prove height is $O(\log n)$. Am I missing something? – jmad Mar 14 '12 at 12:34
Furthermore, you require a negative statement to provide a counterexample chain with one tree for every $n \in \mathbb{N}$. Any infinite chain that is non-decreasing in node size would be sufficient, wouldn't it? – Raphael♦ Mar 14 '12 at 14:03
1
– JeffE Mar 14 '12 at 14:48
## 2 Answers
Claim: Red-black trees can be arbitrarily un-$\mu$-balanced.
Proof Idea: Fill the right subtree with as many nodes as possible and the left with as few nodes as possible for a given number $k$ of black nodes on every root-leaf path.
Proof: Define a sequence $T_k$ of red-black trees so that $T_k$ has $k$ black nodes on every path from the root to any (virtual) leaf. Define $T_k = B(L_k, R_k)$ with
• $R_k$ the complete tree of height $2k - 1$ with the first, third, ... level colored red, the others black, and
• $L_k$ the complete tree of height $k-1$ with all nodes colored black.
Clearly, all $T_k$ are red-black trees.
For example, $T_1$, $T_2$ and $T_3$ are the first three trees in this sequence (figures omitted).
Now let us verify the visual impression of the right side being huge compared to the left. I will not count virtual leaves; they do not impact the result.
The left subtree of $T_k$ is complete and always has height $k-1$ and therefore contains $2^k - 1$ nodes. The right subtree, on the other hand, is complete and has height $2k - 1$ and thusly contains $2^{2k}-1$ nodes. Now the $\mu$-balance value for the root is
$\qquad \displaystyle \frac{2^k}{2^k + 2^{2k}} = \frac{1}{1 + 2^k} \underset{k\to\infty}{\to} 0$
which proves that there is no $\mu > 0$ as requested.
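A quick check of how fast that root balance value decays (a sketch in Python, just evaluating the node counts above for a few $k$):

```python
from fractions import Fraction

for k in range(1, 9):
    left = 2**k - 1              # nodes in the all-black left subtree of T_k
    right = 2**(2 * k) - 1       # nodes in the complete right subtree of T_k
    mu_root = Fraction(left + 1, left + right + 2)   # (|N_L| + 1) / (|N| + 1)
    print(k, float(mu_root))     # 0.333..., 0.2, 0.111..., ... tends to 0
```

Since the value at the root already tends to $0$, no fixed $\mu > 0$ can work.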
-
Nice pictures!! – JeffE Mar 14 '12 at 15:02
+1: Agree with JeffE! – Aryabhata Mar 14 '12 at 15:10
I am wondering if I should seed another question with an AVL tree instead of red-black tree :-) – Aryabhata Mar 14 '12 at 20:28
@Aryabhata: Why not? – Raphael♦ Mar 14 '12 at 21:13
I didn't want to add too many questions, and many of them similar. I guess I can space them out. – Aryabhata Mar 14 '12 at 21:16
No. Consider a red-black tree with the following special structure.
• The left subtree is a complete binary tree with depth $d$, in which every node is black.
• The right subtree is a complete binary tree with depth $2d$, in which every node at odd depth is red, and every node at even depth is black.
It's straightforward to check that this is a valid red-black tree. But the number of nodes in the right subtree ($2^{2d+1}-1$) is roughly the square of the number of nodes in the left subtree ($2^{d+1}-1$).
-
You beating me to it by a minute is what I get for wrestling with graphviz! XD – Raphael♦ Mar 14 '12 at 15:00
+1: Thanks! But the number of nodes is of the from $2^{2d+1} + 2^{d+1} - 1$. Can we perhaps 'pad' these sufficiently to get a tree a given size $n$? (It looks like that should be do-able). – Aryabhata Mar 14 '12 at 15:03
1
You already have a counterexample for infinitely many $n$, so why bother?. But I suppose if you wanted to, you could add more red nodes to the left subtree, or take some red nodes out of the right subtree. – JeffE Mar 14 '12 at 15:05
@JeffE: Basically the counterexample chain would then be a 'dense' subset, rather than a 'sparse' subset. Perhaps I will change the formulation of the question. – Aryabhata Mar 14 '12 at 15:08
I am going to accept Raphael's answer because of the figures. Thank you. – Aryabhata Mar 14 '12 at 15:49
http://math.stackexchange.com/questions/128570/integers-in-sets-sorting
# Integers in sets- sorting
Moderator Note: At the time that this question was posted, it was from an ongoing contest. The relevant deadline has now passed.
Imagine that the integers are split into sets $A$, $B$, $C$ with the restriction that the negative of an integer belonging to $A$ should belong to $B$, and an integer that can be represented as the sum of two integers belonging to $B$ should be in $A$. It is fairly simple to show that the converse of both these restrictions should also hold true: the negative of an integer in $B$ is in $A$, and the sum of two integers in $A$ is in $B$.
In what sorts of ways can we arrange all integers in these sets?
-
1
Your example doesn't work, as the sums of elements of $B$ (positive integers) aren't in $A$ (negative integers). – jwodder Apr 6 '12 at 0:44
## 1 Answer
I interpret "split" as meaning $A$, $B$, $C$ must be disjoint and nonempty. Let $a_0$ be the member of $A$ with least absolute value. Then for integers $k$, $k a_0 \in A$ if $k \equiv 1 \mod 3$, $B$ if $k \equiv 2 \mod 3$, $C$ if $k \equiv 0 \mod 3$.
Moreover, every integer not a multiple of $a_0$ must be in $C$.
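A small finite-window sanity check of this assignment (a sketch in Python; $a_0 = 3$ is an arbitrary choice, and only the two defining restrictions are tested):

```python
def classify(n, a0=3):
    # the assignment proposed above, with a0 = 3 for illustration
    if n % a0 != 0:
        return 'C'
    return {1: 'A', 2: 'B', 0: 'C'}[(n // a0) % 3]

R = range(-60, 61)
ok_neg = all(classify(-n) == 'B' for n in R if classify(n) == 'A')
ok_sum = all(classify(a + b) == 'A'
             for a in R for b in R
             if classify(a) == 'B' and classify(b) == 'B' and a + b in R)
print(ok_neg, ok_sum)   # True True on this finite window
```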
-
Could you explain why this is further? Also are the roles of A and B effectively interchangeable in your solution? – Ali Apr 13 '12 at 15:58
Yes, the problem is symmetric in $A$ and $B$, and $B = -A$. – Robert Israel Apr 15 '12 at 5:51
http://en.m.wikibooks.org/wiki/CLEP_College_Algebra/Polynomials
# CLEP College Algebra/Polynomials
## Polynomials
A polynomial is an expression containing any number of variables and constants. The variables are combined by adding, subtracting, and multiplying. The variables themselves can be raised to a positive whole-number power.
### Types
A monomial is the product of any number of variables, each raised to any positive whole-number power. Thus, monomials do not involve addition or subtraction. Monomials may be multiplied by a constant.
These are all monomials:
• $25x^2$
• $7xyz^3$
• $x$
A binomial is the sum of two monomials.
• $x + y$
• $3x^2y - z^5$
• $2z + 5$
A trinomial is the sum of three monomials (or a binomial and monomial).
• $x^2 + 4x + 4$
• $x^2 + xy + y^2$
• $5x + 4y - 8z$
### Simplifying
Simplifying a polynomial (or "collecting like terms") is the process of reducing a polynomial to its shortest form. The number before a term is the term's coefficient. Add or subtract the coefficients of terms that have the same combination of variables. That is, add or subtract the coefficients of like terms.
To make communication of monomials easier, we oftentimes use the convention of writing the variables of each term in alphabetical order, and we use exponential notation so that in each term each letter appears only once. We like to put the number (the numerical coefficient) at the beginning of the term.
If the terms are each expressed in such a manner, we can quickly identify like terms. Two terms are "like" if, when you cover up the coefficient of each term, the rest of the terms are identical to one another.
We cannot combine "unlike" terms. That is, we leave them alone.
• $3x + 5y + 7x + 2 = (3 + 7)x + 5y + 2 = 10x + 5y + 2$
• $4x^2 - 2x - x^2 + x + 4 + x = (4 + -1)x^2 + (-2 + 1 + 1)x + 4 = 3x^2 + 4$
A polynomial must be simplified before it can be classified as a monomial/binomial/trinomial/polynomial.
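As a quick check of the two worked examples above, a computer algebra system collects like terms automatically (a sketch assuming sympy is available):

```python
from sympy import symbols

x, y = symbols("x y")

# sympy combines like terms as soon as the expression is built
print(3*x + 5*y + 7*x + 2)                 # 10*x + 5*y + 2 (up to term order)
print(4*x**2 - 2*x - x**2 + x + 4 + x)     # 3*x**2 + 4
```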
### Degree
We can talk about the degree of a term, or the degree of a polynomial in a certain variable. In this context, we are using "polynomial" to include monomials, binomials, and trinomials.
Most of the time, we talk about degree for polynomials that only contain one variable. In this setting, the degree of a single term is the exponent for the variable in that term. For example:
• The degree of $-5 = -5x^0$ is zero.
• The degree of $4x = 4x^1$ is one.
• The degree of $-12x^5$ is five.
For a polynomial of a single variable, the degree of the polynomial is the largest exponent that appears on that variable. The degree of $6 - 2x^3 +12x +50x^2$ is three.
To find the leading coefficient of a polynomial, identify the numerical coefficient of the term having the largest degree. The leading coefficient of $6 - 2x^3 +12x +50x^2$ is negative two.
(Remember, $6 - 2x^3 +12x +50x^2 = 6 + -2x^3 +12x +50x^2$.)
To make things easier, we oftentimes like to write polynomials either in ascending or descending order. Ascending order means that the degrees of the terms ascend (get bigger)
as you go from left to right, and descending order means that the degrees of the terms descend (get smaller) as you go from left to right. When we write $6 - 2x^3 +12x +50x^2$
in descending order, we get $-2x^3 +50x^2 +12x + 6$.
If a polynomial is given in descending order, then the degree of the polynomial is the degree of the first term, and the leading coefficient is the number out front.
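For example, sympy (assumed available) reports the degree and leading coefficient directly:

```python
from sympy import Poly, symbols

x = symbols("x")
p = Poly(6 - 2*x**3 + 12*x + 50*x**2, x)

print(p.degree())   # 3
print(p.LC())       # -2, the leading coefficient
```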
## Factoring
### Common Factor
Factoring polynomials starts with picking out the common factor. Just as when we factored whole numbers, we apply the same idea to binomials, trinomials, and other polynomials.
$x^8+x^5+x^4=x^4(x^4+x+1)$
We factored out the common power of the variable in the trinomial. If the terms had involved entirely different variables, there would have been no common variable to factor out.
Just as we factored out common variables, we can also factor out common numerical coefficients along with the variables.
$42x^3+12x^2+24x=6x(7x^2+2x+4)$
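Both factorizations can be checked with a computer algebra system (sympy assumed; the printed form may differ slightly):

```python
from sympy import symbols, factor

x = symbols("x")
print(factor(x**8 + x**5 + x**4))        # x**4*(x**4 + x + 1)
print(factor(42*x**3 + 12*x**2 + 24*x))  # 6*x*(7*x**2 + 2*x + 4)
```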
### Grouping
When no factor is common to every term of the polynomial, you can sometimes factor by grouping. When factoring by grouping, split the polynomial into two binomials.
$x^3+x^2+2x+2$
From here we group the first binomial $x^3+x^2$ and factor out the common variable, then group the second binomial $2x+2$ and factor out its common constant:
$x^3+x^2+2x+2=x^2(x+1)+2(x+1)=(x+1)(x^2+2)$
This can also be used to group polynomials with different variables.
$x^2y^3+2x^2y+4xy^3+8xy$
By remembering the FOIL method we can work backwards. The "First" product is $x^2y^3$, with coefficient 1, so the leading terms of the two binomials are $x^2$ and $y^3$:
$(x^2+\quad)(y^3+\quad)$
Next match the "Outside" product: $2x^2y$ is $x^2$ times the last term of the second binomial, so that last term is $2y$:
$(x^2+\quad)(y^3+2y)$
Now match the "Inside" product: $4xy^3$ is $y^3$ times the last term of the first binomial, so that last term is $4x$:
$(x^2+4x)(y^3+2y)$
The "Last" product, $4x \cdot 2y = 8xy$, matches the remaining term, so the factorization is consistent. This approach doesn't always work, because the middle products often contain the same variables and collapse into a single term. It helps to pin down the first terms and the last terms first; once you have those, the middle terms follow.
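Both grouping examples can be verified with sympy (assumed available); note that `factor` keeps going past the grouping step and pulls out every remaining common factor:

```python
from sympy import symbols, factor

x, y = symbols("x y")
print(factor(x**3 + x**2 + 2*x + 2))
# (x + 1)*(x**2 + 2)

print(factor(x**2*y**3 + 2*x**2*y + 4*x*y**3 + 8*x*y))
# x*y*(x + 4)*(y**2 + 2), i.e. (x**2 + 4*x)*(y**3 + 2*y) factored completely
```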
### Perfect Squares/Cubes
## Expanding
### Distribution
http://physics.stackexchange.com/questions/54184/is-the-uncertainty-principle-just-saying-something-about-what-an-observer-can-kn/54190
# Is the uncertainty principle just saying something about what an observer can know or is it a fundamental property of nature?
I ask this question because I have read two different quotes on the uncertainty principle that don't seem to match very well. There are similar questions around here but I would like an explanation that reconciles these two interpretations specifically:
1. Feynman talks about the uncertainty principle in one of his lectures and mentions it as the reason why electrons don't crash into the atom's nucleus: If they did they would have an exact location and momentum which is not allowed by the uncertainty principle. In saying this it is clear that the uncertainty principle is a fundamental property of nature because it has an effect on where an electron can reside.
2. Recently I read - somewhere else but I forgot where exactly - an account of the uncertainty principle where there was explained how we can measure position of a particle by firing another particle into it, the collision disturbs the velocity of the observed particle therefore we can not know its momentum anymore.
Now, 2) very much seems like a limitation of what the observer can know, while 1) attributes a fundamental property of nature to it (electrons don't crash into the nucleus). What is the correct way to think about this?
-
1
Strictly speaking, there is currently no unique answer to this question even among mainstream physicists, and many of them put it in the "metaphysics" category because, essentially, neither of the two views changes the theoretical predictions, which reflect reality to very high accuracy. – TMS Feb 17 at 10:59
Ok I see, that helps thanks! And in the case it isn't a property of nature we don't know why an electron doesn't crash into the nucleus, correct? – Jeroen Moons Feb 17 at 16:28
– Qmechanic♦ Feb 17 at 19:06
## 2 Answers
The short answer is: it is a fundamental property of nature.
The very short answer is "quantum"
The long answer:
From the beginning of the 20th century, slowly but certainly, Nature revealed to us that when we go to very small dimensions her form is quantum. It started in the middle of the nineteenth century, with the periodic table of the elements, which showed regularities that could not be explained except by an atomic model in which the number of electrons equals the charge of the nucleus.
There were efforts to understand, with the Bohr model, why the electrons that were part of the atoms did not spiral down into the nucleus and disappear. This introduced the idea of the "quantum". The energy the electrons were allowed to have in the possible orbits around the nucleus was postulated to be quantized. In much the same way that the vibrations of a string are allowed only at specific frequencies, with wavelengths that fit the length of the string, the electrons about the atom could have only specific energies.
Then a plethora of experimental results led theorists to postulate quantum mechanics from a few "axioms". Starting with the Schrödinger equation, formal theoretical quantum mechanics took off, and we have never looked back, because it fits all known experimental data in the microcosm, and not only there.
The uncertainty principle is a linchpin in the mathematical formulation of quantum mechanics.
A premise is that all predictions of the theory are given as probability distributions, i.e. no observable can be predicted except probabilistically.
In quantum mechanics, to every physical observable there corresponds an operator which acts on the state functions under study. Operators are often represented by differential forms, and the algebra of operators holds. Two operators may commute, that is, behave like real numbers ($ab - ba = 0$), or they may not, in which case $ab - ba \neq 0$. This means that one is working in a larger structure than the real numbers; complex numbers are needed.
The Heisenberg uncertainty principle for position and momentum, as it appears in the fundamental postulates of quantum mechanics, is a commutation relation between the conjugate variables $x$ and $p$, represented by their corresponding operators:
$$\hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar.$$
This relation is fundamental in the theory of quantum mechanics, which mathematically describes, very successfully, matter as we have studied it up to now. If the HUP were falsified, it would falsify the foundations of QM.
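A rough numerical illustration of the resulting bound $\sigma_x \sigma_p \geq \hbar/2$ (a sketch assuming numpy; the grid size and packet width are arbitrary choices, and units with $\hbar = 1$ are used): a Gaussian wavepacket saturates the bound.

```python
import numpy as np

N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

s = 1.3                                        # chosen packet width
psi = np.exp(-x**2 / (4 * s**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize on the grid

# position spread (<x> = 0 by symmetry)
sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# momentum spread via FFT (p = hbar*k, hbar = 1; <p> = 0 for a real wavefunction)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2 * np.pi / (N * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
sigma_p = np.sqrt(np.sum(k**2 * np.abs(phi)**2) * dk)

print(sigma_x * sigma_p)   # ~0.5, the Heisenberg lower bound with hbar = 1
```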
Now on the subject of the electron and the nucleus. The quantum mechanical solutions that describe the orbitals of the hydrogen atom, for example, have nonzero probabilities for the electron to find itself at the center of the nucleus when the angular momentum is zero. So it is not clear to me how Feynman could have used the hand-waving argument you are describing in your question. After all, we do have electron-capture nuclear reactions. He is probably basing the argument on the very small volume the nucleus occupies with respect to the atomic orbitals, which gives a very small probability of capture.
-
Thank you very much, that is very illuminating! I'm doing self-study from the Feynman Lectures, I still have a long way to go :) – Jeroen Moons Feb 17 at 16:31
2
-1: What you wrote is the canonical commutation relation for position and momentum; the uncertainty relation is $\sigma_x \sigma_p \geq \hbar/2$. – joshphysics Feb 17 at 19:55
– anna v Feb 17 at 20:37
1
@annav Yeah I mean I understand that the operator algebra is used to derive the generalized uncertainty relation, and I think you make a valid point, but I think that as a semantic issue, it's a bad idea to refer to commutation relations as uncertainty relations because it will confuse people who aren't familiar with this fact. – joshphysics Feb 17 at 20:53
@joshphysics I will try to clarify the connection. It matters for how the HUP is fundamental: conjugate variables are part of the fundamental postulates of QM. If the HUP goes, QM has to be modified. – anna v Feb 18 at 5:28
This is an interesting question. It deserves an answer without mathematical complications!
Uncertainty is not an anthropocentric phenomenon
Laws of Nature
To get some understanding of this, one must understand one thing: whatever happens in nature, whether in the animate or inanimate world (including ourselves), there are rules that dictate how things happen. We call them the laws of nature, and these are what scientists are trying to discover and understand in as much depth as possible. We only discover approximately what they are, however, by building models through which we try to get as close to reality as we can, or are "allowed to" by our limited powers of observation and brain capacity.
Experiment
When we do an experiment, we are in a constant ‘dialogue’ with nature, and sometimes we even ‘provoke’ her to see how she will respond, so that we can get closer to her secrets! The models of physics which we develop in order to explain the outcomes of our ‘dialogue’ with nature will inevitably contain numbers (the physical constants) which help us put some order in our conversation with nature. Planck’s constant is one of those physical constants. Without it, nothing we have learnt during our conversation with nature would make sense! The evidence that such a physical constant is real, does exist and makes sense, comes from the continued conversation we are having with nature (the outcomes of our experiments); she ‘allows’ us to measure it and shows us almost every corner of the world where she is using it. The uncertainty principle is yet another rule of nature, which she imposes through Planck's constant. The fact that it is not zero ensures that uncertainty is a deep property of nature. Also, the fact that it has such a small value ensures that this uncertainty affects only objects at the quantum scale.
So, uncertainty and probability are necessary ingredients in the workings of the universe. It is not an anthropocentric phenomenon, as we are just a part of it. Does anybody know why it has to be this way? Perhaps this is the way nature manages to achieve all the beautiful diversity we observe in it. This is probably the reason why she always has better ways to go about something than we can think of!
-
http://mathhelpforum.com/number-theory/35102-rings-domains.html
Thread:
1. Rings and domains
Hi guys. Struggling with a few bits and pieces. Can't seem to get these from my notes or text book. If someone could talk me through them it would be very much appreciated
35. Show that for $D = 3, 6, 7$ the group of units $(\mathbb{Z}[\sqrt{D}])^{\times}$ in the ring $\mathbb{Z}[\sqrt{D}]$ is infinite, by exhibiting infinitely many units in each of the rings.
38. Let $I: \mathbb{Q}[X] \to \mathbb{Q}(\sqrt{3})$ be the map defined by $I(a_0 + a_1X + \cdots + a_nX^n) = a_0 + a_1\sqrt{3} + \cdots + a_n(\sqrt{3})^n$. Show that $I$ is a surjective ring homomorphism. Prove that $\ker I = (X^2 - 3)\,\mathbb{Q}[X]$. Deduce that the factor ring $\mathbb{Q}[X]/\ker I$ is isomorphic to $\mathbb{Q}(\sqrt{3})$.
40. Let $\phi: R \to S$ be a ring homomorphism. Prove that if $u \in R^{\times}$ is a unit then $\phi(u) \in S^{\times}$ is a unit and that $\phi(u^{-1}) = \phi(u)^{-1}$.
44. Show that the ideal $(2, 1+\sqrt{-5})R$ of the ring $R = \mathbb{Z}[\sqrt{-5}]$ cannot be principal.
47. Prove that $\mathbb{Z}[\sqrt{D}]$ is a Euclidean domain with respect to the norm defined by $N(a + b\sqrt{D}) = a^2 - Db^2$, in the cases $D = -2, -3, 2, 3, 6, 7$.
2. Originally Posted by jdizzle1985
38. Let $I: \mathbb{Q}[X] \to \mathbb{Q}(\sqrt{3})$ be the map defined by $I(a_0 + a_1X + \cdots + a_nX^n) = a_0 + a_1\sqrt{3} + \cdots + a_n(\sqrt{3})^n$. Show that $I$ is a surjective ring homomorphism. Prove that $\ker I = (X^2 - 3)\,\mathbb{Q}[X]$. Deduce that the factor ring $\mathbb{Q}[X]/\ker I$ is isomorphic to $\mathbb{Q}(\sqrt{3})$.
You need to prove that $I(f(x)+g(x)) = I(f(x))+I(g(x))$ and $I(f(x)g(x)) = I(f(x))I(g(x))$; both follow from how polynomial addition and multiplication are defined. The map is surjective because if $\alpha \in \mathbb{Q}(\sqrt{3})$ then $\alpha = a+b\sqrt{3}$, and so $I(a+bx) = \alpha$. The kernel consists of all $f(x)$ with $f(\sqrt{3}) = 0$; since $x^2 - 3$ is an irreducible polynomial it must divide any $f(x)$ with $f(\sqrt{3})=0$, and this means $\left< x^2 - 3\right> = \ker I$. Now the result follows from the fundamental homomorphism theorem for rings.
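A small sanity check of this argument (a sketch assuming sympy; the test polynomial is an arbitrary choice): reducing $f$ modulo $x^2-3$ and then substituting $x=\sqrt{3}$ gives the same value as $f(\sqrt{3})$, which is exactly why $\left<x^2-3\right>$ sits inside the kernel.

```python
from sympy import symbols, sqrt, rem, simplify, Rational

x = symbols("x")
f = Rational(1, 2)*x**5 - 3*x**3 + 7*x + Rational(2, 3)   # arbitrary test polynomial

r = rem(f, x**2 - 3, x)            # remainder a + b*x with degree < 2
print(r)
print(simplify(f.subs(x, sqrt(3)) - r.subs(x, sqrt(3))))  # 0: same image under I
```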
3. Originally Posted by jdizzle1985
40. Let $\phi: R \to S$ be a ring homomorphism. Prove that if $u \in R^{\times}$ is a unit then $\phi(u) \in S^{\times}$ is a unit and that $\phi(u^{-1}) = \phi(u)^{-1}$.
Let $\phi: R\to S$ be a ring homomorphism. Let $u$ be a unit in $R$; then there exists $v\in R$ so that $uv=1$, but then $\phi (uv) = \phi(1)\implies \phi(u)\phi(v) = 1'$, where $1'$ is the identity of $S$, and so $\phi(v)$ is the inverse of $\phi(u)$. Since $v = u^{-1}$, this shows $\phi(u^{-1}) = (\phi(u))^{-1}$.
http://math.stackexchange.com/questions/59284/volume-of-a-pyramid-using-an-integral/59304
# Volume of a pyramid, using an integral
I have a calculus exam tomorrow and this is a possible question. However, I don't know how to handle it.
Suppose you have 3 points in space: $p_1=(a,0,0)$, $p_2=(0,b,0)$ and $p_3=(0,0,c)$, with $a,b,c \gt 0$. If we connect these points we get a pyramid in the first octant, with the origin as its apex.
(i) Prove that the volume of this pyramid is given by $V = \frac{1}{6}abc$ by using a volume integral. Use the formula we derived for solids that are not solids of revolution. (Integrate $A(x)$ from $a$ to $b$, where $A(x)$ is the area of the intersection of the solid with the plane perpendicular to the $x$-axis at $x$. (Maybe $A(y)$ is better suited to this problem.)) Hint: Calculate the area of a slice at height $z = z_0$, where $z_0$ is a constant. (Maybe I should find $A(z)$?)
(ii) If you know the equation of the plane $V$ through these points is given by $V \leftrightarrow \frac{x}{a} + \frac{y}{b} + \frac{z}{c} = 1$, and that $p=(1,2,3)$ is an element of $V$, find the minimal volume of the pyramid cut off from the first octant. Explain physically why this has to be a minimum.
Notes: Excuse my English, it's not my native language. The original question is in Dutch. We use the textbook Calculus 6E, metrical edition, by James Stewart. Which is fortunately in English, so I should understand most of your answers.
Thanks a lot already!
-
As to your second question, the answer is found by taking $(a,b,c) = (3,6,9)$, giving a volume of $27$. The question basically comes down to the non-linear optimization problem "minimize the volume $\frac{1}{6} abc$ under the constraint $\frac{1}{a} + \frac{2}{b} + \frac{3}{c} = 1$". However my knowledge in this area is a bit rusty, so I cannot tell you exactly why $(3,6,9)$ must be optimal. – TMM Aug 23 '11 at 20:43
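A numerical cross-check of that comment (a sketch using scipy.optimize; the starting point is an arbitrary choice):

```python
from scipy.optimize import minimize

volume = lambda v: v[0] * v[1] * v[2] / 6.0
plane_through_p = {"type": "eq",                       # p = (1, 2, 3) lies on the plane
                   "fun": lambda v: 1/v[0] + 2/v[1] + 3/v[2] - 1}

res = minimize(volume, x0=[5.0, 5.0, 10.0],
               constraints=[plane_through_p],
               bounds=[(1e-3, None)] * 3)
print(res.x, res.fun)   # approximately (3, 6, 9) with volume 27
```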
## 2 Answers
To calculate the volume of some object, which runs from height $z=z_0$ to $z=z_1$, you can use e.g.
$$\int_{z=z_0}^{z_1} A(z) \ dz$$
where $A(z)$ is the area of the object (if we took a slice of it) at height $z$.
For the pyramid, you could integrate along the z-axis, so that the areas become right triangles. You can verify that at $z = 0$ the sides are $a,b$ while at $z=c$ the sides are $0,0$, and the sides' lengths decrease linearly in $z$. So at height $z$ the sides are $a(1 - \frac{z}{c})$ and $b(1 - \frac{z}{c})$ respectively, giving $A(z) = \frac{1}{2} a b (1 - \frac{z}{c})^2$. Filling this in in the integral above we get
$$\int_{z=0}^{c} A(z) \ dz = \frac{1}{2} a b \int_{z=0}^{c} (1 - \frac{z}{c})^2 \ dz = \frac{1}{2} a b \int_{z=0}^{c} (1 - \frac{2z}{c} + \frac{z^2}{c^2}) \ dz = \frac{1}{2} a b(c - c + \frac{1}{3} c) = \frac{1}{6}abc$$
Another way to calculate the volume would be to make a triple integral over the whole object (and integrate 1), and make sure the bounds are right. You have that $x,y,z \geq 0$ and furthermore requirements like $y \leq b(1 - \frac{z}{c})$ and $x \leq a(1 - \frac{z}{c} - \frac{y}{b})$. The right integral then gives
$$\int_{z=0}^{c} \int_{y=0}^{b(1 - \frac{z}{c})} \int_{x=0}^{a(1 - \frac{z}{c} - \frac{y}{b})} 1 \ dx \ dy \ dz = \frac{1}{6} abc$$
Hope this helps. This would be the way I'd do it, but there may be faster ways to do it.
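For completeness, both integrals can be checked symbolically (a sketch assuming sympy is available):

```python
from sympy import symbols, integrate, Rational

x, y, z, a, b, c = symbols("x y z a b c", positive=True)

# slice-area integral
A = Rational(1, 2) * a * b * (1 - z/c)**2
print(integrate(A, (z, 0, c)))            # a*b*c/6

# triple integral with the same bounds as above (innermost variable first)
V = integrate(1,
              (x, 0, a*(1 - z/c - y/b)),
              (y, 0, b*(1 - z/c)),
              (z, 0, c))
print(V)                                  # a*b*c/6
```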
-
Thanks, it is clear that I have a lack of insight. The first integral is the one I'm looking for. – Mats Aug 23 '11 at 19:44
Pyramid and equations of the lines situated on the planes $y=0$ and $z=0$.
$$y=0,\qquad\frac{x}{a}+\frac{z}{c}=1\Leftrightarrow z=\left( 1-\frac{x}{a}\right) c,$$
$$z=0,\qquad\frac{x}{a}+\frac{y}{b}=1\Leftrightarrow y=\left( 1-\frac{x}{a}\right) b.$$
The area $A(x)$ is given by
$$A(x)=\frac{1}{2}\left( 1-\frac{x}{a}\right) b\left( 1-\frac{x}{a}\right) c=% \frac{bc}{2}\left( 1-\frac{x}{a}\right) ^{2},$$
because the intersection of the solid with the plane perpendicular to the $x$-axis in $x$ is a right triangle with catheti $\left( 1-\frac{x}{a}\right) c$ and $\left( 1-\frac{x}{a}\right) b$. Hence
$$\begin{eqnarray*} V &=&\int_{0}^{a}A(x)dx \\ &=&\frac{bc}{2}\int_{0}^{a}\left( 1-\frac{x}{a}\right) ^{2}dx \\ &=&\frac{bc}{2}\int_{0}^{a}1-\frac{2x}{a}+\frac{x^{2}}{a^{2}}dx \\ &=&\frac{bc}{2}\left( \int_{0}^{a}1dx-\int_{0}^{a}\frac{2x}{a}dx+\int_{0}^{a}% \frac{x^{2}}{a^{2}}dx\right) \\ &=&\frac{bc}{2}\left( a-\frac{2}{a}\int_{0}^{a}xdx+\frac{1}{a^{2}}% \int_{0}^{a}x^{2}dx\right) \\ &=&\frac{bc}{2}\left( a-\frac{2}{a}\frac{a^{2}}{2}+\frac{1}{a^{2}}\frac{a^{3}% }{3}\right) \\ &=&\frac{bc}{2}\left( a-a+\frac{a}{3}\right) \\ &=&\frac{abc}{6} \end{eqnarray*}$$
-
The image is wonderful, thanks! – Mats Aug 23 '11 at 20:03
@Mats: You are welcome! – Américo Tavares Aug 23 '11 at 20:06
http://mathhelpforum.com/number-theory/27644-find-remainder-congruence-theorem.html
# Thread:
1. ## Find remainder with congruence theorem
This is a problem we ran over in class, but I don't fully understand it.
Find remainder when $3^{1000000}$ is divided by 26.
Solution:
$1000000 = (3)(333333)+1$, therefore $3^{1000000} = 3^{(333333)(3)+1} = \left(3^3\right)^{333333}\cdot 3$
By using the congruence theorem, we know that if $a \equiv b \pmod n$, then $a$ and $b$ have the same remainder upon division by $n$.
Then it goes that the remainder is 3, but how do you get that?
2. Originally Posted by tttcomrader
This is a problem we ran over in class, but I don't fully understand it.
Find remainder when $3^{1000000}$ is divided by 26.
Solution:
$1000000 = (3)(333333)+1$, therefore $3^{1000000} = 3^{(333333)(3)+1} = \left(3^3\right)^{333333}\cdot 3$
By using the congruence theorem, we know that if $a \equiv b \pmod n$, then $a$ and $b$ have the same remainder upon division by $n$.
Then it goes that the remainder is 3, but how do you get that?
Note that $3^3 = 27$; this means that $3^3 \equiv 27 \equiv 1(\bmod 26)$.
Thus, $(3^3)^{333333} \equiv 1^{333333} (\bmod 26)$.
Thus, $3^{999999}\equiv 1(\bmod 26)$
Multiply both sides by $3$ to get your answer.
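You can confirm the result with fast modular exponentiation (Python's built-in `pow`):

```python
print(pow(3, 1000000, 26))   # 3
```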
http://mathhelpforum.com/statistics/148901-rayleigh-distribution-unbiased-estimator-print.html
# Rayleigh distribution and unbiased estimator
Printable View
• June 19th 2010, 12:57 PM
losm1
Rayleigh distribution and unbiased estimator
$X_1, \ldots, X_n$ is a random sample from the Rayleigh distribution $f(x;\Theta)$.
1. Show that $E(X^2) = 2\Theta$ and then construct an unbiased estimator of the parameter $\Theta$ based on $\sum_{k=1}^n X_k^2$.
2. Estimate the parameter $\Theta$ from the following $n=10$ observations:
Code:
`16.88 10.23 4.59 6.66 13.68 14.23 19.87 9.40 6.51 10.95`
---
1. I have just plugged $2\Theta$ into the Rayleigh variance formula and it checks out, but I'm not sure about the correct way of constructing the unbiased estimator $\hat{\Theta}$
$Var(X)=E(X^2) - E^2(X)$
$Var(X)=2\Theta - E^2(X)$
$\frac{4-\pi}{2}\Theta=2\Theta - \sqrt{\Theta\cdot\frac{\pi}{2}}^2$
$4\Theta - \pi\Theta = 2(2\Theta - \frac{\pi\Theta}{2})$
$\Theta = \Theta$
2. I need help with this one
• June 19th 2010, 02:05 PM
mr fantastic
Quote:
Originally Posted by losm1
$X_1, \ldots, X_n$ is a random sample from the Rayleigh distribution $f(x;\Theta)$.
1. Show that $E(X^2) = 2\Theta$ and then construct an unbiased estimator of the parameter $\Theta$ based on $\sum_{k=1}^n X_k^2$.
2. Estimate the parameter $\Theta$ from the following $n=10$ observations:
Code:
`16.88 10.23 4.59 6.66 13.68 14.23 19.87 9.40 6.51 10.95`
---
1. I have just plugged $2\Theta$ into the Rayleigh variance formula and it checks out, but I'm not sure about the correct way of constructing the unbiased estimator $\hat{\Theta}$
$Var(X)=E(X^2) - E^2(X)$
$Var(X)=2\Theta - E^2(X)$
$\frac{4-\pi}{2}\Theta=2\Theta - \sqrt{\Theta\cdot\frac{\pi}{2}}^2$
$4\Theta - \pi\Theta = 2(2\Theta - \frac{\pi\Theta}{2})$
$\Theta = \Theta$
2. I need help with this one
Use the unbiased estimator found in part 1!
• June 20th 2010, 02:07 AM
losm1
First, how do I know that it is unbiased?
Second, basically I should just calculate $E(X^2)$ from the given dataset?
Thanks
• June 20th 2010, 02:26 AM
SpringFan25
An estimator is unbiased if its expected value equals the true value of the parameter being estimated. So you can confirm the estimator is unbiased by taking its expectation.
$\hat{\Theta} = \frac{\sum{x_k^2}}{2n}$
$E(\hat{\Theta}) = E \left(\frac{\sum{x_k^2}}{2n} \right)$
$E(\hat{\Theta}) = \frac{1}{2n} E \left(\sum{x_k^2} \right)$
$E(\hat{\Theta}) = \frac{1}{2n} \sum{E(x_k^2)}$
$E(\hat{\Theta}) = \frac{1}{2n} \sum{E(X^2)}$
$E(\hat{\Theta}) = \frac{1}{2n} \cdot n\,E(X^2)$
$E(\hat{\Theta}) = \frac{1}{2} E(X^2)$
$E(\hat{\Theta}) = \Theta$
And the estimator is unbiased.
I have been told, but I never managed to prove, that all method-of-moments estimators are unbiased and you don't need to check each time. But I'm not convinced...
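For part 2, plugging the ten observations into $\hat{\Theta} = \frac{\sum_k x_k^2}{2n}$ gives roughly $74.5$ (a direct Python calculation with the estimator derived above):

```python
data = [16.88, 10.23, 4.59, 6.66, 13.68, 14.23, 19.87, 9.40, 6.51, 10.95]
n = len(data)
theta_hat = sum(xk**2 for xk in data) / (2 * n)
print(theta_hat)   # about 74.5
```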
http://stats.stackexchange.com/questions/tagged/chi-squared+fitting
# Tagged Questions
1 answer
74 views
### Confusion with Chi-Square test
I have an extremely simple question regarding chi-square test. So far I have found two different formulas for it: $$\chi^2=\sum\frac{(x_o-x_e)^2}{x_e}$$ and ...
1 answer
157 views
### Efficient fitting of noncentral chi-squared distribution to data?
I am looking for the most efficient way to fit a noncentral chi-squared distribution with fixed d.o.f. to a given data set. So the inputs are d.o.f. and the data and the output should be the ...
1 answer
600 views
### $\chi^2$ fitting with correlated errors
I have been doing some fitting using the fairly standard expression, $$\chi^2 = \sum_i \frac{(y_{i} - F(x_{i}, \theta))^2}{\sigma_{i}^{2}}$$ where y is my measured data, $\sigma$ is the experimental ...
0 answers
173 views
### What are alternatives to uniform distribution when trying to fit observed data distribution?
I have a dataset of 500 observations and I have to find the best fitting distribution. If I look at the histogram of the data and the QQ plot my data seems to follow an uniform distribution. But a ...
1 answer
213 views
### What is the distribution of a chi-square minimizing function?
Suppose I have a set of $N$ experimental points of the form \begin{equation} \{x_i, y_i, d_i\}, \end{equation} where $i=1,...,N,$ and $d_i$ are errorbars for $y_i$. To fit the data, I minimize the ...