http://physics.stackexchange.com/questions/28743/from-knowing-just-the-change-in-kinetic-energy-can-we-find-the-friction-force-a?answertab=active
# From knowing just the change in kinetic energy, can we find the friction force and engine power?
I understand this topic well enough to get all the standard tasks done, because they aren't very creative. But for my exam I think I should have this clear.
During the acceleration the force from the engine is of course bigger than the air resistance and friction. Can we find this force? I mean the entire force the engine applies for the acceleration, not just what is left after subtracting friction and air resistance.
$W = F \times s$
$F = m \times a$
We have the total work done by the forces at work, and the stretch of road is easy to calculate. If I now compute $\frac{F}{m} = a$, will that output be the correct acceleration? And is this force that we found a net force? Because if that's the net force on the car, I can't find the engine's power output, which is what I want. And what about the friction force at work: I think we can't find it when we just have the change in kinetic energy. Is that right? Primarily I would like to know whether the change in kinetic energy can be tied somehow to the engine's output during the acceleration.
The book has this nice equation too: $P = F \times v$
But that's just for constant speed.
Since we know the time maybe this can be used: $P = \frac{W}{T}$
That just seems a little too easy.
Edit: I got a B on the exam, which means I'll be at it again this fall. Not due to this question. Haha.
-
## 2 Answers
No, you cannot do that unless you account for the change in internal energy (temperature changes, dissipation to the atmosphere, ...). Ref: Section 13, Chapter 3 on Frictional Work, Physics by Resnick, Halliday & Krane
-
Making the question as simple and concise as possible is a very good way to address it; the way it stands, I can't understand your question. So in the hope that it helps, I'm going to present a simpler question, partly resembling yours, and address the simpler question: a car is moving at a constant velocity on a horizontal road, what are the different forces involved and what's their work and power?
An easy way of thinking about the problem, at least as a starting point, is that there are two external forces acting horizontally on the car: the air resistance and the friction between the tires and the road; interestingly enough, the friction with the road is the driving force which cancels the air resistance, hence the net force becomes zero and the car maintains its velocity. Now how about the work done by each force? The work (and power) due to the friction with the road is positive (for this to be possible we should assume the tires slide on the road) and the work (and power) due to the air resistance is negative; the two cancel each other and the net work (and power) is zero, hence the kinetic energy (and the velocity) is constant.
Now what's the role of the engine? In the engine, chemical energy gets converted to mechanical energy, and this mechanical energy gets transferred (in rotational form) to the tires. If one is interested in the dynamics of the tires, then the engine is crucial to consider; but if one is only interested in the dynamics of the whole car, the interaction between the engine and the wheels is only internal and, according to the 3rd law, can be neglected when considering the whole car as the system. So as said earlier, the friction with the road (as the driving force) and the air resistance (as the resistive force) are the only two forces to be considered for the dynamics of the whole car.
Try to adapt similar arguments to the case of an accelerating car on a horizontal road.
-
Let's say a car changes speed from 10 m/s to 20 m/s. That's an increase in kinetic energy of 60 kJ. Let's assume a car mass of 1200 kg, and that the acceleration takes 8 seconds. Delta v / delta t gives 1.25 m/s^2 as the acceleration. Using the distance formula I get 120 meters. 60 kJ / 120 m = 500 N. That would be the force that actually accelerates the car. But this isn't general, because I had to make up a few facts. If I had chosen a different time the force would have been much greater. But one more detail now: 1200 kg * 1.25 m/s^2 = 1500 N. 1500 N - 500 N = 1000 N. Is the friction 1000 N in this case? – Algific May 22 '12 at 7:10
What I meant was that 1500 N is the total pull forward by the car and 1000 N is the friction. – Algific May 22 '12 at 7:24
The difference between the initial and final kinetic energy is the work done by the friction force: $\int F_{friction}\cdot dr =E_{final}-E_{initial}$ – Jose Javier Garcia Sep 20 '12 at 18:06
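For a quick numeric sanity check of the figures in the comments above, here is a minimal sketch in Python. Note that with the assumed mass $m = 1200$ kg, the kinetic-energy change for 10 m/s → 20 m/s is $\frac{1}{2}m(v_2^2-v_1^2) = 180$ kJ rather than the assumed 60 kJ, and with consistent inputs the energy route and $F = ma$ recover the same net force.

```python
m = 1200.0            # kg (assumed)
v1, v2 = 10.0, 20.0   # m/s
t = 8.0               # s (assumed)

dKE = 0.5 * m * (v2**2 - v1**2)   # 180000.0 J for these inputs
a = (v2 - v1) / t                 # 1.25 m/s^2
d = 0.5 * (v1 + v2) * t           # 120.0 m (average speed times time)
F_net = m * a                     # 1500.0 N
print(dKE, a, d, F_net, dKE / d)  # dKE/d equals F_net: the work-energy route gives the net force
```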
http://mathoverflow.net/questions/77229?sort=oldest
## dimension of a real affine variety
Let $V$ be a real affine variety in $\mathbb R^n$, i.e. the zero set of a real polynomial $p(x_1,\dots,x_n)$. Consider the following three definitions of the dimension of $V$, $dim(V)$.
Definition 1: if $I$ is the ideal of polynomials vanishing on $V$, then $dim(V)$ is the maximum dimension of a coordinate subspace in $V(\langle LT(I)\rangle)$ with a given graded order $>$ on the monomials ($\sum\alpha_i>\sum\beta_i$ implies $x^\alpha>x^\beta$) (see the book "Ideals, Varieties, and Algorithms" chapter 9)
Definition 2: $dim(V)$ is the largest $d$ such that there exists an injective semi-algebraic map from $(0, 1)^d$ to $V$ (see "Algorithms in Real Algebraic Geometry" chapter 5)
Definition 3: $dim(V)$ is the largest $d$ such that there exists a subset of $V$ homeomorphic to $(0, 1)^d$
Are these three definitions, for the case of real affine varieties, pairwise equivalent, or pairwise different, or something else?
-
Have you looked in Bochnak's book "Real algebraic geometry"? – Richard Kent Oct 6 2011 at 22:53
I haven't, but now I've found the book and it seems like it sheds some light on this. thx. – filipm Oct 7 2011 at 15:42
## 3 Answers
They are all equivalent, including definition 1, to the Krull dimension of $S/I$, where $S=\mathbb{R}[x_1,\ldots,x_n]$ is the polynomial ring $I$ lives in. This is very good news for people like me who want to apply algebraic geometry to statistics, where numbers are mostly real.
Here's how it goes:
Definition 1 is a way of computing the Krull dimension of $S/I$ via Groebner bases. See, e.g., p. 250 of Computing in algebraic geometry: a quick start using SINGULAR by Wolfram Decker and Christoph Lossen
Definition 2 is shown equivalent to Krull dimension in Corollary 2.8.9 of Real Algebraic Geometry by Bochnak, Coste and Roy. Note that they define the "dimension" of a real variety to be Krull dimension of its coordinate ring. I recommend reading the whole chapter.
Definition 3 is equivalent to Definition 2 because any semialgebraic set admits a decomposition into finitely many pieces homeomorphic to $(0,1)^d$ (see Theorem 2.3.6 in RAG), and a finite union of semialgebraic sets of dimension less than $d$ cannot contain a set of dimension $d$. I.e., the only way a real variety can contain an open set homeomorphic to $(0,1)^d$ is by containing a semialgebraic set homeomorphic to $(0,1)^d$ in its semialgebraic cell decomposition.
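Definition 1 is concrete enough to compute directly. Below is a minimal sketch in Python/SymPy, assuming grevlex as the graded order; the subset test is the standard combinatorial criterion for a coordinate subspace to lie in $V(\langle LT(I)\rangle)$ (as in "Ideals, Varieties, and Algorithms", chapter 9).

```python
from itertools import combinations
from sympy import groebner, symbols

def dim_V(polys, gens):
    # Definition 1: largest coordinate subspace contained in V(<LT(I)>),
    # with leading terms taken w.r.t. a graded order (grevlex here).
    G = groebner(polys, *gens, order='grevlex')
    supports = [set(g for g, e in zip(gens, p.monoms(order='grevlex')[0]) if e > 0)
                for p in G.polys]
    for d in range(len(gens), -1, -1):
        for S in combinations(gens, d):
            # the coordinate subspace on S lies in V(<LT(I)>) iff no
            # leading monomial is supported entirely inside S
            if all(not supp <= set(S) for supp in supports):
                return d
    return -1  # 1 lies in LT(I): the variety is empty

x, y, z = symbols('x y z')
print(dim_V([x**2 + y**2 - 1], [x, y]))  # circle in R^2: 1
print(dim_V([x*z, y*z], [x, y, z]))      # plane union a line in R^3: 2
```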
-
At least for definitions 2 and 3: ${\rm dim}_2 (V) \leq {\rm dim}_3 (V)$, since a semialgebraic map is piecewise a homeomorphism.
-
Definition 2 and 3 are equivalent. What you want here is the notion of Cylindrical Algebraic Decomposition (see e.g. Bochnak, Coste and Roy's book). The $d$ you're looking for in both cases is the dimension of the largest cell in the decomposition, and it is a routine result that this dimension does not depend on the decomposition itself.
As for Definition 1, I am not sure I understand exactly what you're saying. You're using a total-degree monomial ordering, then taking the zero-set of the leading terms of the variety's ideal... I'll come back to this if I remember.
-
http://mathhelpforum.com/algebra/105035-rational-expressions.html
# Thread:
1. ## Rational expressions
I've been looking at the examples in the book and can't find any examples similar to this problem, and I'm trying to figure out how I would go about solving it. Thank you.
r- r^2-1/r / 1- r-1/r. Everything is all divided, but I don't know how to write the tags in the right format.
2. Originally Posted by Rheanna
I've been looking at the examples in the book and can't find any examples similar to this problem, and I'm trying to figure out how I would go about solving it. Thank you.
r- r^2-1/r / 1- r-1/r. Everything is all divided, but I don't know how to write the tags in the right format.
Hi Rheanna,
Parentheses might help. Is this your rational expression?
$\dfrac{r-r^2-\dfrac{1}{r}}{1-r-\dfrac{1}{r}}$
3. Originally Posted by masters
Hi Rheanna,
Parentheses might help. Is this your rational expression?
$\dfrac{r-r^2-\dfrac{1}{r}}{1-r-\dfrac{1}{r}}$
top one is r^2 - 1 over r
bottom one is r - 1 over r
it comes out to 1 as the answer but I'm trying to figure out how to come up with that.
4. Originally Posted by masters
Hi Rheanna,
Parentheses might help. Is this your rational expression?
$\dfrac{r-r^2-\dfrac{1}{r}}{1-r-\dfrac{1}{r}}$
Originally Posted by Rheanna
top one is r^2 - 1 over r
bottom one is r - 1 over r
You mean like this:
$\dfrac{r-\dfrac{r^2-1}{r}}{1-\dfrac{r-1}{r}}$
5. yes
6. Originally Posted by masters
You mean like this:
$\dfrac{r-\dfrac{r^2-1}{r}}{1-\dfrac{r-1}{r}}$
Alrighty then,
$\dfrac{r-\dfrac{r^2-1}{r}}{1-\dfrac{r-1}{r}}=\dfrac{\dfrac{r^2-(r^2-1)}{r}}{\dfrac{r-(r-1)}{r}}=\dfrac{\dfrac{r^2-r^2+1}{r}}{\dfrac{r-r+1}{r}}=\dfrac{\dfrac{1}{r}}{\dfrac{1}{r}}=1$
7. lol, even staring at the problem I'm still lost.
8. Originally Posted by masters
Alrighty then,
$\dfrac{r-\dfrac{r^2-1}{r}}{1-\dfrac{r-1}{r}}=\dfrac{\dfrac{r^2-(r^2-1)}{r}}{\dfrac{r-(r-1)}{r}}=\dfrac{\dfrac{r^2-r^2+1}{r}}{\dfrac{r-r+1}{r}}=\dfrac{\dfrac{1}{r}}{\dfrac{1}{r}}=1$
Let me do it another way. See if this helps.
Multiply numerator and denominator by r.
$\dfrac{r-\dfrac{r^2-1}{r}}{1-\dfrac{r-1}{r}}$
$\dfrac{r\left(r-\dfrac{r^2-1}{r}\right)}{r\left(1-\dfrac{r-1}{r}\right)}=\dfrac{r^2-r^2+1}{r-r+1}=\frac{1}{1}=1$
9. yeah, that r squared was confusing me. Ugh, I know I got a 0 on this pre-test and Thursday is the real test.
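For anyone who wants to double-check masters' simplification above, a tiny SymPy sketch:

```python
from sympy import symbols, simplify

r = symbols('r')
expr = (r - (r**2 - 1)/r) / (1 - (r - 1)/r)
print(simplify(expr))  # prints 1
```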
http://math.stackexchange.com/questions/113756/extension-and-self-injective-ring
# Extension and Self Injective Ring
Let $R$ be a self injective ring. Then $R^n$ is an injective module. Let $M$ be a submodule of $R^n$ and let $f:M\to R^n$ be an $R$-module homomorphism. By injectivity of $R^n$ we know that we can extend $f$ to $\tilde{f}:R^n\to R^n$.
My question is that if $f$ is injective, can we also find an injective extension $\tilde{f}:R^n\to R^n$?
Thank you in advance for your help.
-
The answer is yes if $M$ is essential, or if $R^n$ is the injective hull of $M$. I don't know what happens in the general case though. – Steve D Feb 26 '12 at 23:07
If $R$ is Artinian, this implies that $\operatorname{Aut}(R^n)$ operates transitively on each set of isomorphic submodules of $R^n$. It'd be nice :) – Mariano Suárez-Alvarez♦ Feb 29 '12 at 4:51
## 2 Answers
The answer is also yes, without any commutativity, for quasi-Frobenius rings.
Recall that a quasi-Frobenius ring is a ring which is one-sided self injective and one-sided Noetherian. They also happen to be two-sided self-injective and two-sided Artinian.
For every finitely generated projective module $P$ over a quasi-Frobenius ring $R$, a well-known fact is that isomorphisms of submodules of $P$ extend to automorphisms of $P$. (You can find this on page 415 of Lam's Lectures on Modules and Rings.)
Obviously your $P=R^n$ is f.g. projective, and injecting $M$ into $P$ just results in an isomorphism between $M$ and its image, so there you have it!
In fact, this result seems a bit overkill for your original question, so I would not be surprised if a class properly containing the QF rings and satisfying your condition exists.
-
Well, this is true if $R$ is commutative and noetherian; I don't know whether that's good enough for what you want. (This solution may be overcomplicated; I do not actually know how to prove all the facts I am using.)
If $R$ is commutative, noetherian, and self-injective, then it's Artinian, it's a finite product of local Artinian rings, hence we can reduce to the local case.
So say $R$ is commutative, noetherian, and local (and hence Artinian, but I won't use that). Take an injective hull of $M$ inside $R^n$; call it $Q$. So $f$ extends injectively to $f:Q\rightarrow R^n$, and we now need to extend it from $Q$ to all of $R^n$. Since $Q$ is injective, it is a direct summand of $R^n$, and hence is also projective. But we assumed $R$ was local, and hence $Q$ is free; say it is isomorphic to $R^m$, $m\le n$.
Then an injective function $R^k \rightarrow R^n$ is the same as an (ordered) linearly independent subset of $R^n$ of $k$ elements. So we have $m$ linearly independent elements of $R^n$ and we want to extend them to $n$ such elements. We can extend them to a maximal linearly independent set, certainly; the question then is just whether such a set necessarily has $n$ elements.
Now, since we assumed $R$ was commutative and noetherian, we can apply the theorem of Lazarus quoted here, and say yes, a maximal linearly independent set of $R^n$ necessarily has $n$ elements, and so having extended $f$ to $Q\cong R^m$, we can further extend it to $R^n$.
-
My understanding of the theorem of Lazarus that you quoted does not guarantee that we can extend $Q$ to $R^n$. It only says that if we start with an $R$-module $M$, all maximal linearly independent sets in $M$ have the same cardinality. Do I miss something? – user9077 Mar 6 '12 at 20:59
Again, consider what this means when $M$ is free, say isomorphic to $R^n$. It means that all maximal linearly independent sets have size $n$, and thus (since any linearly independent set can be extended to a maximal one) that all linearly independent sets can be extended to one of size $n$. But an (ordered) linearly independent set of size $k$ is exactly the same as an injective homomorphism from $R^k$. So every injective homomorphism from some $R^k$ can be extended to one from $R^n$; and $Q\cong R^m$. – Harry Altman Mar 6 '12 at 22:56
Thank you for your explanation. I am not very good at ring and module theory, so please be patient with me. If we take our $M$ to be $R^n$, the Lazarus theorem says that every maximal linearly independent set in $M$ is of size $n$. I understand that. But isn't what we are looking for a maximal linearly independent set that contains $Q$? Does this extra requirement (containing $Q$) not make any difference at all? – user9077 Mar 6 '12 at 23:12
Remember, I'm using the fact that $Q$ is free, since I assumed $R$ was local. So, if you prefer to think of it that way, we are taking a free basis for $Q$ and extending it to a maximal linearly independent set. Which then must have size $n$. – Harry Altman Mar 6 '12 at 23:39
http://mathoverflow.net/questions/119989/equivalence-of-1-norm-and-relative-entropy/119994
## equivalence of 1-norm and relative entropy?
For two pmfs $p=\lbrace p_i\rbrace$ and $q=\lbrace q_i\rbrace$ on the same finite alphabet, we know that the relative entropy $D(p\|q)=\sum p_i\log\frac{p_i}{q_i}$ and the 1-norm $\|p-q\|_1=\sum |p_i-q_i|$ are both measures of their distance. But it is unfortunate that relative entropy is not a norm. My question is: even so, do we still have equivalence between these two measures of distance? To be specific, assuming $\|p-q\|_1\le C$ for some positive constant $C$, do we have $D(p\|q)\le MC$ for some positive $M$? If so, how does one prove it? Thanks a lot!
-
## 2 Answers
No, that is not true. Let $p^{(n)}\to q$ in $L^1$ such that $p^{(n)}$ lies in the (relative) interior of the probability simplex whereas $q$ is on the boundary (of the simplex), i.e., $q_i=0$ for some $i$. Then $D(p^{(n)}\|q)=\infty$ for every $n$.
But the other direction is true because of Pinsker's inequality $\|p-q\|_1\le \sqrt{2D(p\|q)}$ (you may already know this fact!).
-
Yes, you are right. I should have considered the circumstances that $q_i=0$ for some symbol. But what if I put a restriction on $q$ that all $q_i$'s are positive? – zzzhhh Jan 27 at 7:21
In that case ($q_i>0$ for all $i$), $D(p\|q)$ is continuous in $(p,q)$ and since convergence in $L^1$ is equivalent to pointwise convergence, we have equivalence of convergence in $L^1$ and $D(\cdot\|\cdot)$. – Ashok Jan 27 at 7:37
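A small numeric illustration of both points (a sketch in Python): $D(p\|q)$ blows up as some $q_i \to 0$ while $\|p-q\|_1$ stays bounded, and Pinsker's inequality holds throughout.

```python
import numpy as np

def kl(p, q):
    # relative entropy D(p||q) in nats; assumes q_i > 0 wherever p_i > 0
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

p = np.array([0.5, 0.5])
for eps in [1e-1, 1e-3, 1e-6]:
    q = np.array([1 - eps, eps])
    l1 = np.abs(p - q).sum()
    D = kl(p, q)
    print(eps, l1, D, l1 <= np.sqrt(2 * D))  # D grows without bound; Pinsker always holds
```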
I have found a solution, but it seems incomplete (I will explain at the end).
$D(p\|q)=|D(p\|q)|=|\sum p_i\log p_i-\sum p_i\log q_i|=|\sum p_i\log p_i-\sum p_i\log q_i+\sum q_i\log q_i-\sum q_i\log q_i|$ $=|H(q)-H(p)+\sum(q_i-p_i)\log q_i|\le|H(q)-H(p)|+\sum|q_i-p_i||\log q_i|$. If we denote $M_1=\max_i|\log q_i|=\max_i\log\frac{1}{q_i}$, the last expression is $\le M_2\|p-q\|_1+M_1\|p-q\|_1=M\|p-q\|_1$ where $M=M_1+M_2$. Here I claimed that $|H(q)-H(p)|\le M_2\|p-q\|_1$ for some positive number $M_2$, due to the Mean Value Theorem extended to multi-dimensional space and the concavity of entropy with respect to the pmf. This seems true but I am not very sure; that's why I think the proof is incomplete. Could anyone please justify this inequality (it's better to point out such a proposition in some textbook)? Thank you!
-
http://math.stackexchange.com/questions/228629/solve-the-trigonometric-equation/228925
# Solve the trigonometric equation
Solve the equation $$\cos x -2\cos 2x+3 \cos 3x -4\cos 4x = \dfrac{1}{2}.$$ I tried putting $t = \cos x$.
-
Yes, and what happened when you did that? – Gerry Myerson Nov 4 '12 at 3:08
Did you try the angle sum identity to break up some of those cosines? Remember: $\cos(2x)=\cos^2(x)-\sin^2(x)$. – Todd Wilcox Nov 4 '12 at 5:08
Or better still $\cos 2x = 2 \cos^2 x - 1$. It should be possible to turn this into a big polynomial in $\cos x$. – user22805 Nov 4 '12 at 9:12
Note that $$\begin{align} \sum\limits_{k=1}^n (-1)^k\sin kx&=\frac{1}{2}\sec\frac{x}{2}\sum\limits_{k=1}^n (-1)^k 2\sin kx\cos\frac{x}{2}\\ &=\frac{1}{2}\sec\frac{x}{2}\sum\limits_{k=1}^n (-1)^k \left(\sin\left(k+\frac{1}{2}\right)x+\sin\left(k-\frac{1}{2}\right)x\right)\\ &=\frac{1}{2}\sec\frac{x}{2}\left(-\sin\frac{x}{2}+(-1)^n\sin\left(n+\frac{1}{2}\right)x\right)\\ \end{align}$$ – Norbert Nov 4 '12 at 12:37
hence $$\begin{align} \sum\limits_{k=1}^n(-1)^{k-1}k\cos kx &=-\frac{d}{dx}\left(\sum\limits_{k=1}^n (-1)^k\sin kx\right)\\ &=-\frac{d}{dx}\left(\frac{1}{2}\sec\frac{x}{2}\left(-\sin\frac{x}{2}+(-1)^n\sin\left(n+\frac{1}{2}\right)x\right)\right)\\ &=\frac{1}{4}\sec^2\frac{x}{2}(1-(n+1)(-1)^n\cos nx-n(-1)^n\cos(n+1)x) \end{align}$$ Setting $n=4$ we get $$\cos x-2\cos 2x+3\cos 3x-4\cos 4x=\frac{1}{4}\sec^2\frac{x}{2}(1-5\cos 4x-4\cos 5x)$$ – Norbert Nov 4 '12 at 12:38
## 1 Answer
With $c=\cos x$ we have $\cos 2x=2c^2-1$, and $\cos 3x = 4c^3-3c$, and finally $\cos 4x = 8c^4-8c^2+1$. If you take your equation, move the 1/2 to the left side, make the above substitutions, and multiply by $-2$, you'll get $$64c^4-24c^3-56c^2+16c+5=0.$$ This factors as $(2c-1)(32c^3+4c^2-26c-5)=0$. The cubic here has no rational roots, and it looks like one would have to resort to the cubic equation (a mess) to find its zeros. Numerically the zeros of the cubic are about $-.859,-.195,+.929$, all can be cosine of an angle. So from the other root $c=1/2$ of the linear factor, you'll have a total of eight solutions in each interval $[2k \pi,(2k+2)\pi]$. Looks like only the solutions from $\cos x=1/2$ will be familiar angles: $\pi/3$ and $5\pi/3$ and their translates by $2 \pi n$.
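A quick SymPy check of this reduction (a sketch; it uses $\cos nx = T_n(\cos x)$, the Chebyshev polynomials of the first kind):

```python
from sympy import symbols, chebyshevt, expand, factor, nroots, Rational

c = symbols('c')
# cos(n*x) = T_n(cos x), so the equation becomes a polynomial in c = cos x
lhs = chebyshevt(1, c) - 2*chebyshevt(2, c) + 3*chebyshevt(3, c) - 4*chebyshevt(4, c)
p = expand(-2*(lhs - Rational(1, 2)))
print(p)           # 64*c**4 - 24*c**3 - 56*c**2 + 16*c + 5
print(factor(p))   # (2*c - 1)*(32*c**3 + 4*c**2 - 26*c - 5)
print(nroots(32*c**3 + 4*c**2 - 26*c - 5))  # roughly [-0.859, -0.195, 0.929]
```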
-
Thank you very much. – minthao_2011 Nov 4 '12 at 15:35
http://mathoverflow.net/questions/110911/books-about-capacity-theory
## Books about Capacity theory
While I was studying the book Variation et Optimisation de formes by Antoine Henrot and Michel Pierre, I encountered a section about the capacity associated to the $H^1$ norm, which is defined for every compact set by:
$$\operatorname{cap}(K)=\inf \lbrace \|v\|_{H^1(\Bbb{R}^N)} : v\in C_0^\infty(\Bbb{R}^N), v \geq 1 \text{ on }K\rbrace$$
The definition can be extended to open sets and then to every set of $\Bbb{R}^N$, relative capacity with respect to an open set can be defined by restricting the integral and the smooth function space to an open set D, etc.
The capacity has some strange properties which are unnatural at first sight, like the fact that the capacity of $\partial K$ is the same as the capacity of $K$ for a compact $K$.
I want to understand better what capacity really means, and for that I tried to find all sorts of books about potential theory (even the ones referred to in the mentioned book), and all seem to deal with the subject in the same way: the setting is very general and abstract, and the definition presented above appears just as a particular case.
Do you know any book, article or course notes which deal with this specific capacity in detail explaining:
• the definition and the intuition behind the capacity;
• examples of capacity computation for simple sets (using capacitary potentials);
• the connection between the capacity and the Sobolev spaces ?
In the mentioned book the study of capacity is made in section 3.3. It contains all the definitions and all the needed properties of the capacity, but I still feel that I need a better understanding. That's why I asked this question.
-
I think the following lecture notes could be useful for your first two points (not the third, which I'm not familiar with unfortunately): emis.de/journals/SAT/papers/14/14.pdf It's only for capacity on $\mathbb{C}$ though. Also, for intuition: the capacity of a set is defined in a way to mimic the concept of capacitance of a capacitor in physics/electrical engineering: if a set has positive capacity, the condenser obtained by making that set a perfect conductor has positive capacitance. This should help for calculating examples. A condenser also has the mentioned property of the boundary. – Jan Jitse Venselaar Nov 1 at 18:26
## 1 Answer
Maz'ya's book contains a fruitful treatment of capacity and weighted capacity and their relation to Sobolev space theory, in particular the (weighted) Sobolev and Poincaré inequalities. Heinonen's book contains the treatment of modulus and capacity in the metric setting.
1. Maz'ya, Vladimir Sobolev spaces with applications to elliptic partial differential equations. Second, revised and augmented edition. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 342. Springer, Heidelberg, 2011. xxviii+866 pp.
2. Heinonen, Juha Lectures on analysis on metric spaces. Universitext. Springer-Verlag, New York, 2001. x+140 pp.
3. Heinonen, Juha; Kilpeläinen, Tero; Martio, Olli Nonlinear potential theory of degenerate elliptic equations. Unabridged republication of the 1993 original. Dover Publications, Inc., Mineola, NY, 2006. xii+404 pp.
-
http://mathoverflow.net/questions/54731/sums-of-fractional-parts-of-linear-functions-of-n/55119
## sums of fractional parts of linear functions of n
As $\alpha$ and $\gamma$ range uniformly over $[0,1]$, what is the typical (e.g. median or root-mean-square) order of magnitude of $C_m (\alpha,\gamma)$ := $\sum_{1 \leq k \leq m} \left( {\rm frac}(k\alpha+\gamma) - \frac12 \right)$ where frac($x$) denotes the fractional part of $x$?
I'd settle for an answer in the case where $\gamma = 0$.
I know there are articles that address the question where $\alpha$ is fixed (going back to Hardy), but they don't immediately answer my question. Perhaps one could cobble together an answer using results about how the magnitude of $C_m (\alpha,\gamma)$ is bounded in terms of the continued fraction convergents for $\alpha$, along with results about how the convergents grow for a generic real number.
-
I'm confused by "I'd settle for an answer in the case where $\gamma = 0$ ." That doesn't sound like a "case," but a different (and not obviously easier) question. – David Feldman Feb 8 2011 at 7:07
This might be a very difficult question. $C_m(\alpha,0)$ is continuous, indeed linear with constant slope, except for discontinuities located on a Farey sequence. So the order of magnitude you ask for might be related to the discrepancy of Farey sequences, and versions of that question turn out equivalent to RH. It's not hard to express $C_m(\alpha,0)$ as a Fourier series, but the Fourier coefficients have number theoretic content. – David Feldman Feb 8 2011 at 23:22
David, I've posted a solution, so 1) the question is not all that difficult, or 2) my solution is wrong, or 3) I've settled the Riemann Hypothesis. RH, I think, is equivalent to sharp estimates of Farey discrepancy, but less-than-sharp estimates are not so hard to come by, and can still be useful. – Gerry Myerson Feb 9 2011 at 4:43
## 3 Answers
I think I can show that $$\sum_{1 \leq h,k \leq N} \frac{GCD(h,k)^2}{hk}$$ grows linearly. But I get the constant is $$\sum_{ GCD(i,j)=1} \frac{1}{\max(i,j) i j}$$ This constant is incredibly close to $3$. (I am omitting the $1/12$, so my $3$ is your $0.25$.) My intuition is that they can't be equal, but they agree to a lot of digits, so I am not sure. See below for my computations.
UPDATE: This constant is $3$, due to an identity of Euler. See Marty's answer here. I'll leave the numeric work below for those might be curious how to approximate things like this.
Let's group the sum according to $GCD(h,k)$. So we have $$\sum_d \sum_{\substack{1 \leq h,k \leq N \\ GCD(h,k)=d}} \frac{d^2}{hk} = \sum_d \sum_{\substack{1 \leq i,j \leq N/d \\ GCD(i,j)=1}} \frac{d^2}{d^2 ij} = \sum_d \sum_{\substack{1 \leq i,j \leq N/d \\ GCD(i,j)=1}} \frac{1}{ij}$$ where $h=di$ and $k=dj$. Grouping on $(i,j)$, we have $$\sum_{\substack{1 \leq i,j \leq N \\ GCD(i,j)=1}} \frac{\lfloor N/\max(i,j) \rfloor}{ij} = N \sum_{\substack{1 \leq i,j \leq N \\ GCD(i,j)=1}} \frac{1}{\max(i,j) i j} + O \left( \sum_{\substack{1 \leq i,j \leq N \\ GCD(i,j)=1}} \frac{1}{i j} \right)$$ The error term is $O(\log N)^2$, so that's not the dominant term.
Once we check that the sum converges, this will show that your rate of growth is linear with that coefficient. We'll drop the $GCD(i,j)=1$ condition, since that just makes the sum smaller. $$\sum_{i,j} \frac{1}{\max(i,j) i j} = \sum_{n} \frac{1}{n^2} \left( 2 + \frac{2}{2} + \frac{2}{3} + \cdots + \frac{2}{n-1} + \frac{1}{n} \right) = \sum_{n} n^{-2} O(\log n).$$ Here $n=\max(i,j)$. The final sum converges by the integral test, so the original one does as well.
Now, what is the value of this sum? Notice that, if $(i,j) = (g i', g j')$ with $GCD(i', j')=1$, then $\max(i,j)i j = g^3 \max(i',j') i' j'$. So, if we sum over all pairs, instead of just the relatively prime ones, then we multiply by a factor of $\sum g^{-3} = \zeta(3)$. So we want to compute $$\sum_{1 \leq i,j} \frac{1}{\max(i,j) i j}$$ and, in particular, we want to know how it compares to $3 \zeta(3)$. As we showed above, we can simplify this sum to $$\sum_{n} \frac{1}{n^2} \left( 2 + \frac{2}{2} + \frac{2}{3} + \cdots + \frac{2}{n-1} + \frac{1}{n} \right).$$
Now, is this actually the same as $3 \zeta(3)$? I had Mathematica compute the sum of the first $10,000$ terms, using $20$ digit precision for all intermediate computations. If Mathematica can be trusted, the result is $3.6040133$. Now, $2+2/2+2/3+\cdots +2/(n-1) + 1/n = 2 \log n + 2 \gamma + O(1/n)$. So I approximated the rest of the sum by the integral $\int_{10000}^{\infty} 2 (\log t + \gamma) dt/t^2$. (Here $\gamma$ is the Euler gamma constant.) According to Mathematica, this integral is $0.0021575$. The error in approximating a decreasing sum by an integral is bounded by the first term, which is $1.8 \times 10^{-7}$. The error in approximating the harmonic number by a $\log$ should be something like $\int_{10000}^{\infty} dt/t^3 = 5 \times 10^{-9}$; I don't have the energy to turn this into a rigorous bound. So the sum should be $3.6040133 + 0.0021575 \pm 2 \times 10^{-7} = 3.6061708 \pm 2 \times 10^{-7}$. (That error lines up with the last digit given.)
And what is $3 \zeta(3)$? I kid you not, it is $3.6061707$, right in range! So they might be equal, but, if so, I can't see why.
UPDATE: OK, I went back and improved my approximations in two ways: (1) I replaced the harmonic number $H_n$ by $\log n + \gamma + (1/2) n^{-1} - (1/12) n^{-2} + (1/120) n^{-4}$ and (2) I approximated the sum of the terms past $10000$ by the first few terms of the Euler-Maclaurin approximation, up to the $B_4$ term. The result, doing my internal computations with $20$ digits of accuracy: 3.6061707094787828562. The numerical value of $3 \zeta(3)$: Exactly the same!
Something is going on here. If you don't mind, I'll ask a separate question about what.
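The numerical experiment above is easy to reproduce; here is a sketch in Python/mpmath of the $N = 10000$ partial sum plus the integral tail correction:

```python
from mpmath import mp, mpf, zeta, euler, log, quad, inf

mp.dps = 20
N = 10000
s, Hn = mpf(0), mpf(0)
for n in range(1, N + 1):
    Hn += mpf(1) / n                     # harmonic number H_n
    s += (2*Hn - mpf(1)/n) / mpf(n)**2   # inner sum 2 + 2/2 + ... + 2/(n-1) + 1/n equals 2*H_n - 1/n
tail = quad(lambda t: 2*(log(t) + euler) / t**2, [N, inf])
print(s + tail)    # approximately 3.6061707...
print(3*zeta(3))   # 3.6061707095...
```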
-
Question asked mathoverflow.net/questions/55141/… – David Speyer Feb 11 2011 at 16:58
Let $C_m(\alpha)=\sum_{k=1}^m((k\alpha))$ where $((x))$ is $x-[x]-1/2$ if $x$ is not an integer, 0 if $x$ is an integer (so this agrees with your definition away from points where $k\alpha$ is an integer). Then we'll get at the root-mean-square magnitude of $C_m(\alpha)$ by $$\int_0^1C_m(\alpha)^2d\alpha=\int_0^1\left(\sum_1^m((k\alpha))\right)^2d\alpha=\sum_{h,k}\int_0^1((h\alpha))((k\alpha))d\alpha$$ Now go to page 25 of Rademacher and Grosswald, Dedekind Sums, where it is proved that this last integral is given by $c^2/(12hk)$, where $c=\gcd(h,k)$. Well, that should get you started.
UPDATE added by David Speyer: The rest of this answer refers to a different sum. See the comments below where this issue is discussed. Gerry, hope you don't mind me adding this, but it bugs me when there is wrong information in an (otherwise very good) answer.
EDIT: Not really fair to leave you with $\sum_{h,k}{\gcd(h,k)\over12hk}$ to evaluate, so I found a reference that does it.
Olivier Bordelles, Mean values of generalized gcd-sum and lcm-sum functions, J. Integer Seq. 10 (2007), no. 9, Article 07.9.2, 13 pp. (electronic), MR2346091 (2008g:11005), available at http://www.cs.uwaterloo.ca/journals/JIS/VOL10/Bordelles2/bordelles61.pdf finds $$\sum_{n\le x}\sum_{j=1}^n{1\over{\rm lcm}(n,j)}={(\log x)^3\over6\zeta(2)}+C_1(\log x)^2+O(\log x)$$ where $C_1$ is some explicit constant that I can't be bothered to type out. Now $1/{\rm lcm}(h,k)=\gcd(h,k)/hk$, and $$\sum_{h=1}^m\sum_{k=1}^mf(h,k)=2\sum_{h\le m}\sum_{k=1}^hf(h,k)-\sum_{h=1}^mf(h,h)$$ for symmetric functions $f$, and it all comes together.
-
Thanks, Gerry! But I don't see why the sums in your addendum are relevant to the sum from your original posting; $(\gcd(h,k))^2/(hk)$ is $hk / ({\rm lcm}(h,k))^2$, not $1 / {\rm lcm}(h,k)$. What am I missing? In any case, I used Mathematica to compute $\sum_{1 \leq h,k \leq m} (\gcd(h,k))^2 / (12hk)$ for various values of $m$, and I found that the values grow roughly linearly with $m$; indeed, they come surprisingly close to lying on a straight line. See jamespropp.org/myerson.pdf . – James Propp Feb 9 2011 at 19:52
I screwed up, forgot it was $c^2$ in the numerator, not just $c$. It's worth checking whether the techniques in the Bordelles paper can be applied to the correct sum. – Gerry Myerson Feb 9 2011 at 22:40
It definitely appears from the data that $(\int_0^1 C_m (\alpha)^2 \ d\alpha)/m$ converges to a constant, and this constant appears to be about .25. Might it actually equal 1/4? One can argue heuristically that the integral should be on the order of $m$ since $C_m (\alpha)$ is a sum of $m$ terms that in some sense should be independent, but curiously, this heuristic doesn't give a coefficient at all close to .25! (Unless I'm making a mistake.) – James Propp Feb 10 2011 at 13:31
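A Monte Carlo check of the $1/4$ guess (a sketch in Python, with $C_m(\alpha)$ as defined in Gerry's answer above, $\gamma = 0$):

```python
import numpy as np

rng = np.random.default_rng(0)

def C(m, alphas):
    # C_m(alpha) = sum over k of (frac(k*alpha) - 1/2), vectorized over alphas
    k = np.arange(1, m + 1)
    return (np.mod(np.outer(alphas, k), 1.0) - 0.5).sum(axis=1)

for m in [100, 400, 1600]:
    a = rng.random(4000)
    print(m, np.mean(C(m, a) ** 2) / m)   # hovers near 0.25
```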
I looked at the literature just now, and I believe that R.R. Hall may have given a fairly complete answer to these questions. See his 1998 Crelle paper:
http://www.reference-global.com/doi/abs/10.1515/crll.1998.035
Regards, Sinai
-
Thanks, Sinai! Could someone with (on-line or off-line) access to Journal fur die reine und angewandte Mathematik (Crelles Journal) summarize Hall's paper? (I can only view the first page.) – James Propp Feb 11 2011 at 17:22
http://mathoverflow.net/questions/84648/the-spectral-representation-and-isotropic-covariance-functions
## The spectral representation and isotropic covariance functions
Caveat: My apologies if this question is poorly phrased. I am an engineer/computer scientist teaching myself mathematics.
The spectral representation of the covariance function of a second order stochastic process at a spatial separation $\mathbf{h}$ (in $d$ dimensions) is given as
$$Cov(\mathbf{h}) \; = \; \int_{\mathbb{R}^d} e^{i \, \overline{\omega} \cdot \mathbf{h}} f(\overline{\omega}) \, d\overline{\omega}$$
where $\overline{\omega}$ is a frequency vector, i.e. $\overline{\omega} = (\omega_1 , \cdots , \omega_d)$, $f(\overline{\omega})$ is the spectral density function, and $\mathbf{h}$ is (commonly) the separation vector between two arbitrary locations $\mathbf{x}$ and $\mathbf{y}$, i.e. $\mathbf{x} - \mathbf{y}$ or $\mathbf{y} - \mathbf{x}$, and herein lies the problem; this fact seems to imply that the covariance is directional and no longer entirely a function of the separation distance, i.e.
$$Cov(\mathbf{x} - \mathbf{y}) \; \ne \; Cov(\mathbf{y} - \mathbf{x})$$
This appears to rule out the existence of isotropic covariance functions; and worse still, there is no basis for assuming that any two points separated by the same distance $r = \| \mathbf{x} - \mathbf{y} \|$ have the same covariance.
Yet I'm aware that isotropic covariance functions exist and are valid, but I cannot explain how the above spectral representation admits them. This is what I need help with.
Edit
I think I've found the symmetry part of the answer. Because
$$e^{i \, \overline{\omega} \cdot \mathbf{h}} \: = \: \cos( \overline{\omega} \cdot \mathbf{h} ) + i \, \sin(\overline{\omega} \cdot \mathbf{h})$$
we have $C(\mathbf{h}) = C(-\mathbf{h})$ whenever the $\sin(\overline{\omega} \cdot \mathbf{h})$ term integrates to zero, which holds in particular when $f$ is even.
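A one-dimensional numeric sketch of this: with an even spectral density $f$ (which is what a real-valued process gives, since then $f(-\overline{\omega}) = f(\overline{\omega})$), the sine part integrates to zero, so $Cov(\mathbf{h}) = Cov(-\mathbf{h})$ and the covariance is real.

```python
import numpy as np

w = np.linspace(-10, 10, 4001)
f = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)   # an even (symmetric) density, Gaussian here

def cov(h):
    # Cov(h) = integral of exp(i*w*h) * f(w) dw, 1-D case, via the trapezoid rule
    return np.trapz(np.exp(1j * w * h) * f, w)

for h in [0.3, 1.0, 2.5]:
    print(h, cov(h).real, cov(-h).real, abs(cov(h).imag))  # Cov(h) = Cov(-h), imaginary part ~ 0
```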
-
mathoverflow.net/questions/47684/… – Steve Huntsman Dec 31 2011 at 19:37
Thanks Steve. I've already come across your reply to that post. Unfortunately, it does not address my question. – Olumide Dec 31 2011 at 20:07
http://physics.stackexchange.com/questions/26845/lorentz-force-in-dirac-theory-and-its-classical-limit
# Lorentz force in Dirac theory and its classical limit
It is well known that in Dirac theory the time derivative of the operator $P_i=p_i+A_i$ (where $p_i=-i\hbar\,\partial/\partial x_i$ and $A_i$ is the EM field vector potential) is an analogue of the Lorentz force:
$\frac{dP_i}{dt} = e(E_i+[\vec v\times \vec B]_i)$
On the other hand, in classical theory we have the same equation for $p_i$ instead of $P_i$. How come the effect of $A_i$ in Dirac theory vanishes in the classical limit?
-
This question is not clear. What do you mean by classical limit? As far as I can tell you are discussing classical mechanics. – Jon Nov 14 '11 at 9:56
@MurodAbdukahimov: $P_i = p_i + A_i$ is used broadly to describe the magnetic force - also in classical mechanics (it's how you introduce the magnetic force in Hamiltonians & Lagrangians). So it has nothing to do with a classical limit. – Piotr Migdal Nov 14 '11 at 10:32
@Jon: In the Hamiltonian formalism the (full) time derivative of an operator is the sum of the partial time derivative of the operator and its commutator with the Hamiltonian. Consider the Dirac theory in Hamiltonian form. You will see that $\frac{dP_i}{dt}$ is given by the formula presented in my question. But it is not the same as the formula in classical mechanics, where $P=p+A$ needs to be replaced by $p$ only. My question is why there is a difference. Normally the quantum equations can be reduced to classical ones by setting $\hbar=0$. But this is not the case here. Is my question clear now? – Murod Abdukhakimov Nov 14 '11 at 11:01
@Jon: I mean "Dirac equation in EM field". And I know what "minimal coupling" is. The problem is that both the classical and the quantum mechanical Lagrangians are constructed using the minimal coupling procedure. But the result is different, and the difference does not vanish in the classical limit. – Murod Abdukhakimov Nov 14 '11 at 11:20
## 3 Answers
I found the question completely clear and sort of important for this piece of learning about magnetism in quantum mechanics; the only problem is that it assumes a wrong answer to the question.
Nothing is vanishing; indeed, it would be very bad if a finite term such as $\vec v\times \vec B$ evaporated without a trace while taking the classical limit. However, one must be careful about the quantum mechanical commutators in order to see that everything is valid.
The full quantum mechanical (non-relativistic) Hamiltonian (which may be obtained as the non-relativistic limit of the Dirac equation) is $$H = \frac{1}{2m}(\vec p+ \vec A)^2 + V(\vec x)$$ where $V(\vec x)$ primarily contains the electrostatic potential and generates the electric force $e\vec E = -e\nabla \Phi$ in a way you find uncontroversial. Your problem is localized to the terms containing $\vec A$ or its derivatives. I kept the conventions for the normalization and sign of $\vec A$ to be just like yours although it's a bizarre convention: one would usually add the factor of $q$ or $e$ in front of $\vec A$, too.
In these conventions of yours, $\vec p = -i\hbar \vec \nabla$, as you correctly wrote. However, this object isn't the usual $m\vec v$. Instead, the full $\vec P = \vec p + \vec A$ is equal to $m\vec v$, a multiple of the velocity. This is usually denoted as $\vec p$ in non-quantum physics but we obviously need to map it to $\vec P$: in the limit, once again, the usual non-quantum momentum is $\vec P$, not $\vec p$: $\vec P$ is gauge-covariant, $\vec p$ isn't, and the non-quantum momentum of a particle is clearly gauge-covariant so it can't be $\vec p$.
To quantum mechanically calculate the time-derivative of $P_i$ (the momentum as directly calculated from the velocity), we must compute $1/(-i\hbar)$ times the commutator of $P_i$ with the Hamiltonian (the Heisenberg equations of motion). The commutator of $P_i$ with $V(x)$, the electrostatic potential energy, gives us the usual electric force.
However, we must also add the commutator of $P_i$ with the first term of the Hamiltonian which is $P_j P_j / 2m$. It is not zero because the different components of $P_j$ don't commute with one another. Instead, $$\frac{1}{2m} [P_i,P_j P_j] = \frac{1}{m} [P_i,P_j] P_j + {\rm subleading\,\,in\,\,}\hbar$$ The commutator of $[P_i,P_j]$ is nonzero because $P_i$ depends both on $\vec p$ as well as $\vec x$: those two lowercase objects are the objects with the usual simple commutation relations. We have $$[P_i,P_j] = [p_i,A_j] - [p_j,A_i]$$ in your conventions. You can see that the commutators on the right hand side are nothing else than $-i\hbar$ times the components of ${\rm curl} \vec A = \vec B$, more precisely $\epsilon_{ijk}B_k$. This is multiplied by $P_j/m = v_j$ above, so the total term in the commutator clearly gives $\epsilon_{ijk}v_j B_k = (v\times B)_i$, which is – after $-i\hbar$ is cancelled between the commutator and the factor in the Heisenberg equation (I ignored this factor) – exactly the magnetic force. Again, the usual convention would have the charge $q$ in front of it but I followed your conventions for the normalization of $\vec A$.
However, the correctly calculated classical limit obviously does generate and has to generate the full Lorentz force including the magnetic piece. The overall sign of $\vec A$ and the Lorentz force wasn't tracked very carefully above but believe me that it works (and has to work) as well when the calculation is done perfectly.
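The key commutator can be verified symbolically. Here is a small sketch in Python/SymPy, assuming a constant field $B\hat z$ in symmetric gauge and the conventions above ($P_i = p_i + A_i$ with no explicit charge factor):

```python
import sympy as sp

x, y, z, hbar, B = sp.symbols('x y z hbar B', real=True)
psi = sp.Function('psi')(x, y, z)
X = (x, y, z)
A = (-B*y/2, B*x/2, sp.Integer(0))   # constant B along z, symmetric gauge

def P(i, f):
    # P_i = -i*hbar*d/dx_i + A_i, acting on a test function f
    return -sp.I*hbar*sp.diff(f, X[i]) + A[i]*f

comm = sp.simplify(sp.expand(P(0, P(1, psi)) - P(1, P(0, psi))))
print(comm)   # -I*B*hbar*psi(x, y, z), i.e. [P_x, P_y] = -i*hbar*B_z
```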
Because Murod repeated his or her doubts in the relativistic case, let me rerun the derivation above for the full relativistic Dirac equation. Its Hamiltonian is $$H = \gamma_0 (P^i \gamma_i - m + A_0)$$ Note that if you multiply it by $\gamma_0$ and move everything to the same side, you get the simple and uniform operator $P^\mu \gamma_\mu -m$ which must annihilate the Dirac spinorial wave function. $A_0$ is the electric potential – normally one would write $q$ explicitly in front of this term as well.
The commutator $[H,P^i]$ is what determines the change of the momentum of a relativistic classical (non-quantum) particle, even though this energy-momentum vector is sometimes called $p_\mu$ in non-quantum relativistic physics of particles; this object comes from the limit of (and should be identified with) $P_\mu$. Again, the commutator with $A_0$ produces the electric force. The commutator of $P^i$ with $P^j\gamma_j$ produces the Lorentz force except that $\gamma_j$ is here instead of $v_i$. But that's OK, $\gamma^\mu$ acts on the Dirac spinors just like the velocity 4-vector. Note that $\gamma^\mu$ is formally a vector that squares to one – a unit time-like vector – and it must be the velocity because it's the only gauge-invariant spacetime direction picked by the (quickly oscillating) plane wave. So in the relativistic case, the derivation of the non-quantum equation for the particle is almost identical, just with $v^\mu$ expressed as the matrix $\gamma^\mu$ instead of $\vec P / m$, a part of the reason why the Dirac equation manages to be a first-order equation (for the price of having many components and matrices).
-
Thanks, Lubos. I know that there is no problem with the non-relativistic hamiltonian. What about the Dirac hamiltonian? $H=c[\alpha_1 P_1 + \alpha_2 P_2+\alpha_3 P_3]+m c^2 \alpha_4 - e\phi$, where $P_i=-i\hbar\frac{\partial}{\partial x_i}+\frac{e}{c} A_i$ – Murod Abdukhakimov Nov 14 '11 at 13:15
Dear @Murod, you obviously get the right limit from the relativistic Dirac equation, too. The Hamiltonian I wrote may be easily obtained as the non-relativistic limit of the Dirac equation. It's very clear that the momentum $\vec p$ always naturally combines with $\vec A$ to produce $\vec P$: the derivatives in the Dirac equation coupled to electromagnetism are the covariant derivatives and they stay this way in the non-relativistic limit as well. You can also avoid the non-relativistic limit and go from the Dirac equation to non-quantum but relativistic eqn of motion: $A$ is still there. – Luboš Motl Nov 14 '11 at 13:23
At any rate, note that your original question had nothing to do with relativity vs. non-relativity. If the magnetic force dropped while taking the classical (non-quantum) limit, it would drop in the same way in the non-relativistic limit as well. It doesn't drop. Again, even in the full Dirac equation, $P_i$ including both $\partial$ as well as $A$ terms is what gets interpreted as the momentum (and/or energy) in the classical (non-quantum) limit, despite the fact that the capitalization isn't the most usual notation in classical physics. Also in relativity, be careful about nonzero $[P,P_j]$. – Luboš Motl Nov 14 '11 at 13:26
Sorry, I missed an important point in your answer. You wrote that the full $P=p+A$ is equal to $mv$. This is what makes me confused. Why, in classical theory, is it not the "full" $P$, but just $p$? – Murod Abdukhakimov Nov 14 '11 at 13:28
I've added special paragraphs rerunning the same derivation for the relativistic case. – Luboš Motl Nov 14 '11 at 13:38
The question(v1) seems to be caused by a misunderstanding. Let $\vec{p}^{kin}=\gamma m_0 \vec{v}$ denote the kinetic momentum (also known as the mechanical momentum), and let $$\vec{p}^{can}~=~\vec{p}^{kin}+q\vec{A}$$ denote the canonical momentum. Quantum mechanically, $$\vec{p}^{can}~=~\frac{\hbar}{i}\vec{\partial}.$$
The Lorentz force law
$$\frac{d\vec{p}^{kin}}{dt}~=~ q(\vec{E} + \vec{v} \times \vec{B})$$
applies also for the classical case, see e.g., Landau and Lifshitz, Vol.2, The Classical Theory of Fields, Chapter 3.
-
Yes, It's indeed just a mix up of the canonical momentum and the kinetic momentum. – Hans de Vries Nov 22 '11 at 16:54
As Qmechanic already pointed out: In order to obtain the kinetic momentum you have to take the derivatives (which give you the canonical momentum) and then subtract the interaction with $A^\mu$.
So everything is ok and the Dirac equation exactly reproduces the classical result. You can gain a deeper understanding of this if you write the Lorentz force in a more advanced way by using the electromagnetic field tensor.
$\frac{\partial j^\mu}{\partial \tau} ~~=~~ \frac{q}{mc}\,F^{\mu}_{~\nu}\,j^\nu~$
Which couples the E field with the boost generators K and the B field with the rotation generators J
$F^{\mu}_{~\nu} ~~=~~ \Big(\,\mathsf{E}^i\,\hat{K}^i + \mathsf{B}^i\,\hat{J}^i\,\Big) \ =\ \left( \begin{array}{rrrr} ~\ 0\ \ & ~~\mathsf{E}_x & ~~\mathsf{E}_y & ~~\mathsf{E}_z \ \\ ~ \mathsf{E}_x & \ 0\ \ & ~~\mathsf{B}_z & - \mathsf{B}_y \ \\ ~ \mathsf{E}_y & - \mathsf{B}_z & \ 0\ \ & ~~\mathsf{B}_x \ \\ ~ \mathsf{E}_z & ~~\mathsf{B}_y & - \mathsf{B}_x & \ 0\ \ \ \end{array} \right)$
For spinors the equivalent interaction generator of time evolution is:
${\cal F}^\mu_{~\nu}\,\varphi ~=~ \left(\,\vec{E}\cdot\hat{\mathbb{K}} + \vec{B}\cdot\hat{\mathbb{J}}\,\right)\varphi$
$\mathbb{K}^i ~=~ -\tfrac12\,\gamma^i\gamma^o, ~~~~~~~~~~ \mathbb{J}^i ~=~ \tfrac{i}{2}\,\gamma^5\gamma^i\gamma^o$
Again the electric field boosts while the magnetic field rotates.
The classical time evolution due to the classical electromagnetic field tensor $F$ operating on the current is exactly the same as when the Spinor field tensor ${\cal F}$ operates on the spinor.
$\exp(F^{\mu}_{~\nu}\,t)\,\bar{\varphi}\,\gamma^\nu\varphi ~~=~~ \overline{\Big(\exp({\cal F}^\mu_{~\nu}\,t)\varphi\Big)}\,\gamma^\mu \, \Big(\exp({\cal F}^\mu_{~\nu}\,t)\varphi\Big)$
If you work out the series expansion of the exponential functions you can find all kinds of beauties, like the following.
$\begin{aligned} &\dot{\bar{\varphi}}\gamma^\mu\dot{\varphi} &=~~~ &\tfrac12\,T^\mu_{~\nu}~\bar{\varphi}\,\gamma^\nu\varphi \\ &\dot{\bar{\varphi}}\gamma^5\gamma^\mu\dot{\varphi} &=~~~ &\tfrac12\,T^\mu_{~\nu}~\bar{\varphi}\,\gamma^5\gamma^\nu\varphi \\ &\dot{\bar{\varphi}}~\mathbb{K}^\mu\,\dot{\varphi} &=~~~ &\tfrac12\,T^\mu_{~\nu}~\bar{\varphi}\,\mathbb{K}^\nu\,\varphi \\ &\dot{\bar{\varphi}}~\mathbb{J}^\mu\,\dot{\varphi} &=~~~ &\tfrac12\,T^\mu_{~\nu}~\bar{\varphi}\,\mathbb{J}^\nu\,\varphi \\ \end{aligned}$
Where T is the symmetric stress energy tensor of the electromagnetic field.
The term ${\cal F}^\mu_{~\nu}\,\varphi$ is just the extra term which occurs if you square the Dirac equation with interaction (although its role there is generally poorly interpreted). The squared Dirac equation contains the second order derivative in time, so it should include a term which accounts for the boosts and rotations of the spinor due to the electromagnetic field. The Klein-Gordon equation does not need such a term because it describes a scalar field, and scalars are per definition Lorentz invariant.
Regards, Hans
-
Thank you Hans! That's exactly what I'm looking for. The action of the electromagnetic field on charged spinors in QED needs to reduce to rotations and boosts, and I just wanted to see how it works. Could you please recommend any literature on this? I still need to check how this can be derived from the Dirac equation: $\frac{\partial j^\mu}{\partial \tau} ~~=~~ \frac{q}{mc}\,F^{\mu}_{~\nu}\,j^\nu~$. – Murod Abdukhakimov Nov 22 '11 at 17:25
I see that there is something on your web page. – Murod Abdukhakimov Nov 22 '11 at 17:30
http://math.stackexchange.com/questions/134051/solving-lim-limits-x-to0-fracx-sinxx2-without-lhospitals-rule/134068
# Solving $\lim\limits_{x\to0} \frac{x - \sin(x)}{x^2}$ without L'Hospital's Rule.
How can one evaluate $\lim\limits_{x\to 0} \frac{x - \sin(x)}{x^2}$ without L'Hôpital's Rule? You can use trigonometric identities and inequalities, but you can't use series or more advanced tools.
-
$\sin$ is odd, so $\sin''(0) = 0$ (or you can just use $\sin'' = -\sin$). Thus you have $\sin x = x + 0\cdot x^2 + o(x^2)$ and that's it. – savick01 Apr 19 '12 at 19:30
@savick01 Maybe he doesn't want Taylor polynomials either. – Peter Tamaroff Apr 19 '12 at 19:31
@PeterT.off Maybe, but it's not very hard. By the definition of the derivative he has: $f(x) = f(0) + f'(0)x + o(x)$, so also: $f'(x) = f'(0) + f''(0)x + o(x)$, and by integrating that he gets $f(x) = f(0) + f'(0) x + f''(0) \frac{x^2}{2} + o(x^2)$, so he can even include the proof in his work. – savick01 Apr 19 '12 at 19:36
@Peter T.off This is implicit usage of L'Hôpital's rule – Norbert Apr 19 '12 at 19:36
See groups.google.com/group/sci.math/msg/5e39a97048392a83 (sci.math thread "The approximation sin(x) = x - (1/6)x^3"), which is also at mathforum.org/kb/message.jspa?messageID=6865083 I started to LaTeX the relevant part (the part titled "NON-CALCULUS PROOF THAT sin(x) > x - (1/6)x^3"), but quickly realized that I simply don't have the time to rewrite it now. – Dave L. Renfro Apr 19 '12 at 19:48
## 3 Answers
The given expression is odd; therefore it is enough to consider $x>0$. For $0<x<\pi/2$ we then have $$0<{x-\sin x\over x^2}<{\tan x -\sin x\over x^2}=\tan x\ {1-\cos x\over x^2}={\tan x\over2}\ \Bigl({\sin(x/2)\over x/2}\Bigr)^2\ ,$$ and the right side converges to $0$ when $x\to0+$ (using $\sin t/t \to 1$).
-
Wait. That's only obvious because you know that $\sin(x)\over x$ converges to 1. So wouldn't you need to show that as well, within the constraints given in the question? – Mark Adler Apr 19 '12 at 20:09
@MarkAdler: showing that $\sin(x)/x$ is 1 follows from the elementary inequality $\sin (x) < x$, $0<x<\pi/2$ (you can convince yourself of this inequality by staring at a unit circle long enough). – Fabian Apr 20 '12 at 14:46
First off, my point is that the answer above just converts one limit problem into another, without providing the solution for the second. It still does not. Second, how does that follow? $x/2<x$ for $0<x<\pi/2$, however ${x/2\over x}$ does not go to $1$. It goes to $1/2$. – Mark Adler Apr 20 '12 at 20:34
This can be done geometrically.
Surprisingly, two answers I wrote earlier in this regard (geometric proofs of limits) can be combined to give a solution for this.
$$\lim_{x \to 0} \frac{ \tan x - x}{x^2} = 0 \tag{1}$$
A geometric proof of that can be found here: Limit, solution in unusual way
$$\lim_{x \to 0} \frac{1 - \cos x}{x} = 0 \tag{2}$$
A geometric proof of that can be found here: Finding the limit of $(1-\cos(x))/x$ as $x\to 0$ with squeeze theorem
To combine the two:
$$\tan x - x = \frac{\sin x - x \cos x}{\cos x} = \frac{(\sin x - x) + x(1 - \cos x)}{\cos x}$$
so $\dfrac{\sin x - x}{x^2} = \cos x\,\dfrac{\tan x - x}{x^2} - \dfrac{1-\cos x}{x}$, and by $(1)$ and $(2)$ both terms on the right tend to $0$.
-
Of course, you also need the geometric proof that $\lim_{x\to 0} \frac{\sin x}{x} = 1$. – Aryabhata Apr 19 '12 at 20:16
We will in fact prove that $\lim_{x \to 0} \dfrac{x-\sin(x)}{x^3} = \dfrac16$. This implies that $\lim_{x \to 0} \dfrac{x-\sin(x)}{x^2} = 0$.
Let $$S=\lim_{x \to 0} \dfrac{x-\sin(x)}{x^3}$$ Replacing $x$ by $2y$, we get that \begin{align} S & = \lim_{y \to 0} \dfrac{2y-\sin(2y)}{(2y)^3} = \lim_{y \to 0} \dfrac{2y-2 \sin(y) \cos(y)}{8y^3}\\ & = \lim_{y \to 0} \dfrac{2y - 2 \sin(y) + 2 \sin(y) - 2 \sin(y) \cos(y)}{8y^3}\\ & = \lim_{y \to 0} \dfrac{2 y - 2 \sin(y)}{8y^3} + \lim_{y \to 0} \dfrac{2 \sin(y) - 2 \sin(y) \cos(y)}{8y^3}\\ & = \dfrac14 \lim_{y \to 0} \dfrac{y-\sin(y)}{y^3} + \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) (1 - \cos(y))}{y^3}\\ & = \dfrac{S}4 + \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) 2 \sin^2(y/2)}{y^3}\\ & = \dfrac{S}4 + \dfrac18 \lim_{y \to 0} \dfrac{\sin(y)}{y} \dfrac{\sin^2(y/2)}{(y/2)^2}\\ & = \dfrac{S}4 + \dfrac18 \lim_{y \to 0} \dfrac{\sin(y)}{y} \lim_{y \to 0} \dfrac{\sin^2(y/2)}{(y/2)^2}\\ & = \dfrac{S}4 + \dfrac18\\ \dfrac{3S}4 & = \dfrac18\\ S & = \dfrac16 \end{align}
Hence, $$\lim_{x \to 0} \dfrac{x-\sin(x)}{x^2} = \lim_{x \to 0} \left(\dfrac{x-\sin(x)}{x^3} \right)x = \left(\lim_{x \to 0} \dfrac{x-\sin(x)}{x^3} \right) \left( \lim_{x \to 0} x \right) = \dfrac{\lim_{x \to 0} x}6 = 0$$
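As a quick numeric complement to the proofs above (an illustration, not a substitute for them), one can watch both quotients converge:

```python
# Numeric illustration (not a proof): (x - sin x)/x^2 -> 0 and (x - sin x)/x^3 -> 1/6.
import math

for x in (0.1, 0.01, 0.001):
    print(x, (x - math.sin(x)) / x**2, (x - math.sin(x)) / x**3)
```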
-
http://math.stackexchange.com/questions/92182/if-x-i-are-iid-finding-ex-1-x-2-cdots-x-k-mid-x-1-x-2-cdots-x
# If $X_i$ are iid, finding $E(X_1 + X_2 + \cdots + X_k \mid X_1 + X_2+ \cdots +X_n=b)$
I just wonder if anybody can help me to prove the following identity:
Given a sequence of i.i.d. non-negative random variables $X_1, X_2, ..., X_n$, then $$E(X_1+X_2+ \cdots +X_k \mid X_1+X_2+ \cdots +X_n=b)=b \cdot \frac{k}{n} .$$
-
19 minutes. – Did Dec 17 '11 at 9:49
Huh? What did your comment mean? lol – Patrick Da Silva Dec 18 '11 at 8:14
## 1 Answer
You can reduce to the case where $k = 1$ because expectation is a linear operator.
Since $X_i$'s are i.i.d., $$\mathbb E(X_i \, | \, X_1 + \dots + X_n = b )$$ does not depend on $i$ (as long as $1 \le i \le n$). Thus $$n \, \mathbb E \left( X_i \, \left| \sum_{i=1}^n X_i = b \right. \right) = \sum_{i=1}^n \, \mathbb E \left(X_i \, \left| \, \sum_{i=1}^n X_i = b \right. \right) = \mathbb E \left( \sum_{i=1}^n X_i \, \left| \, \sum_{i=1}^n X_i = b \right. \right) = b$$ so that $$\mathbb E \left( X_i \, \left| \sum_{i=1}^n X_i = b \right. \right) = \frac bn.$$ Your case can then be solved by linearity of expectation.
Hope that helps,
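A Monte Carlo illustration of the identity (a sketch: the exponential distribution, the window width, and the sample size are arbitrary choices, since exact conditioning on a continuous sum can't be simulated directly):

```python
# Estimate E(X_1 + ... + X_k | X_1 + ... + X_n = b) by keeping samples whose
# total lands in a thin window around b, then compare with b*k/n.
import numpy as np

rng = np.random.default_rng(0)
n, k, b = 5, 2, 4.0
X = rng.exponential(scale=1.0, size=(1_000_000, n))  # iid non-negative rows
keep = np.abs(X.sum(axis=1) - b) < 0.01              # approximate conditioning
print("empirical:", X[keep, :k].sum(axis=1).mean())  # ≈ 1.6
print("b*k/n    :", b * k / n)                       # = 1.6
```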
-
Well, I was unsure myself, but don't remember seeing a "the". Irrespective of the correct usage, I like your answer, +1. :-) – Srivatsan Dec 17 '11 at 7:20
Since $X_i$'s are independent is (not necessary and) not enough. Independent and identically distributed is enough. – Did Dec 17 '11 at 9:47
Really? And somebody is forcing you to accept answers (1) which you do not understand, (2) 19 minutes after you submitted the question, and (3) while your brain is not working? Wow... – Did Dec 18 '11 at 0:16
Exchangeable instead of iid would also work. – Robert Israel Dec 18 '11 at 3:21
Patrick: A random vector $(X_k)_{1\leqslant k\leqslant n}$ is exchangeable when the distribution of $(X_{\sigma(k)})_{1\leqslant k\leqslant n}$ does not depend on the permutation $\sigma$. Every i.i.d. sequence is exchangeable. If $(Y_k)_k$ is i.i.d. and independent on $Z$ and $X_k=\Phi(Y_k,Z)$, then $(X_k)_k$ is exchangeable. If $(Y_k)_k$ is i.i.d. and $S$ is their sum, then $(Y_k)_k$ conditionally on $[S=s]$ is exchangeable. And so on. There is a LLN for exchangeable sequence where one converges to a (possibly non degenerate) tail random variable... This is a nice subject, if you ask me. – Did Dec 18 '11 at 7:26
http://math.stackexchange.com/questions/152605/proving-that-lim-limits-x-to-0-fracex-1x-1/152616
# Proving that $\lim\limits_{x \to 0}\frac{e^x-1}{x} = 1$
I was messing around with the definition of the derivative, trying to work out the formulas for the common functions using limits. I hit a roadblock, however, while trying to find the derivative of $e^x$. The process went something like this:
$$\begin{align} (e^x)' &= \lim_{h \to 0} \frac{e^{x+h}-e^x}{h} \\ &= \lim_{h \to 0} \frac{e^xe^h-e^x}{h} \\ &= \lim_{h \to 0} e^x\frac{e^{h}-1}{h} \\ &= e^x \lim_{h \to 0}\frac{e^h-1}{h} \end{align}$$
I can show that $\lim_{h\to 0} \frac{e^h-1}{h} = 1$ using L'Hôpital's, but it kind of defeats the purpose of working out the derivative, so I want to prove it in some other way. I've been trying, but I can't work anything out. Could someone give a hint?
-
use the power series representation of $e^x$. – user20266 Jun 1 '12 at 20:20
What is the definition of $\exp(x)$ you are working with? – user17762 Jun 1 '12 at 20:22
What is your definition of $\exp x$? Often it's the solution to $y'=y,\,y(0)=1$, in which case the derivative of $e^x$ comes for free. Otherwise the series representation works. – anon Jun 1 '12 at 20:22
@Thomas: Isn't using the power series representation of $e^x$ pretty much equivalent to saying that $(e^x)' = e^x$? – Javier Badia Jun 1 '12 at 20:22
I cannot stress enough how incredibly useful the estimates $1+x \leq e^x \leq 1/(1-x)$ are, where $x < 1$ for the upper bound. – WimC Jun 1 '12 at 20:23
## 5 Answers
Consider the differential equation
$$y - \left( {1 + \frac{x}{n}} \right)y' = 0$$
Its solution is clearly $$y_n={\left( {1 + \frac{x}{n}} \right)^n}$$
If we let $n \to \infty$ "in the equation" one gets
$$y - y' = 0$$
One should expect that the solution to this is precisely
$$\lim_{n \to \infty} y_n =y=\lim_{n \to \infty} \left(1+\frac x n \right)^n := e^x$$
Also note $$\mathop {\lim }\limits_{n \to \infty } {\left( {1 + \frac{x}{n}} \right)^n} = \mathop {\lim }\limits_{n \to \infty } {\left( {1 + \frac{x}{{xn}}} \right)^{xn}} = \mathop {\lim }\limits_{n \to \infty } {\left[ {{{\left( {1 + \frac{1}{n}} \right)}^n}} \right]^x}$$
My approach is the following:
I have as a definition of $\log x$ the following:
$$\log x :=\lim_{k \to 0} \frac{x^k-1}{k}$$
Another one would be
$$\log x = \int_1^x \frac{dt}t$$ Anyway, the important point here is that one can define $e$ to be the unique number such that
$$\log e =1$$
so that by definition
$$\log e =\lim_{k \to 0} \frac{e^k-1}{k}=1$$
From another path, we can define $e^x$ as the inverse of the logarithm. Since
$$(\log x)'=\frac 1 x$$
the inverse derivative theorem tells us
$$(e^x)'=\frac{1}{(\log y)'}$$
where $y=e^x$
$$(e^x)'=\frac{1}{(1/y)}$$
$$(e^x)'=y=e^x$$
Then, looking at the difference quotient, one sees that by definition one needs
$$\mathop {\lim }\limits_{h \to 0} \frac{{{e^{x + h}} - {e^x}}}{h} = {e^x}\mathop {\lim }\limits_{h \to 0} \frac{{{e^h} - 1}}{h} = {e^x}$$
so that the limit of the expression is $1$. One can also retrieve from the definition of the logarithm that
$$\eqalign{ & \frac{x}{{x + 1}} <\log \left( {1 + x} \right) < x \cr & \frac{1}{{x + 1}} < \frac{{\log \left( {1 + x} \right)}}{x} <1 \cr}$$
Thus
$$\mathop {\lim }\limits_{x \to 0} \frac{{\log \left( {1 + x} \right)}}{x} = 1$$
a change of variables $e^h-1=x$ gives the result you state. In general, we have to go back to the definition of $e^x$. If one defines $${e^x} = 1 + x + \frac{{{x^2}}}{2} + \cdots$$
Then
$$\frac{{{e^x} - 1}}{x} = 1 + \frac{x}{2} + \cdots$$
$$\mathop {\lim }\limits_{x \to 0} \frac{{{e^x} - 1}}{x} = \mathop {\lim }\limits_{x \to 0} \left( {1 + \frac{x}{2} + \cdots } \right) = 1$$
from the definition we just chose.
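Whichever definition one adopts, a quick numerical look (illustration only) shows the quotient approaching $1$:

```python
# Numeric illustration: (e^h - 1)/h -> 1 as h -> 0.
import math

for h in (1.0, 0.1, 0.01, 0.001):
    print(h, (math.exp(h) - 1) / h)   # 1.718..., 1.0517..., 1.00502..., 1.0005...
```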
-
I'm seeing that I haven't really thought this through. I would say that the only definition of $e^x$ that makes this problem an interesting problem is the compound interest one, since the ones you suggest are related to the fact that $(e^x)' = e^x$. But then it seems that the problem reduces to showing that the derivative of $\lim_{n \to \infty}(1+\frac{x}{n})^n$ is equal to itself. – Javier Badia Jun 1 '12 at 20:37
I guess I was looking for answers that use mostly the definition of a limit together with some of $e^x$'s properties, but I'm not sure that's practical. – Javier Badia Jun 1 '12 at 20:40
Solve $$y - \left( {1 + \frac{x}{n}} \right)y' = 0$$ Then let $n\to \infty$. – Peter Tamaroff Jun 1 '12 at 20:51
That works, then. Don't get me wrong, the rest of your answer is great too, it's just not precisely what I had in mind when I thought of the question. Thanks! – Javier Badia Jun 1 '12 at 21:15
If (as is fairly commonly done in calculus courses) one defines $\ln x$ by $$\ln x=\int_1^x \frac{dt}{t},$$ then it is easy to show that the derivative of $\ln x$ is $\frac{1}{x}$. One can either appeal to the Fundamental Theorem of Calculus, or operate directly via a squeezing argument.
If we now define $e^x$ as the inverse function of $\ln x$, then the fact that the derivative of $e^x$ is $e^x$ follows from the basic theorem about the derivative of an inverse function.
There are many ways to define the exponential function. The proof of the result you are after depends heavily on the approach we choose to take.
-
Define $$f_n(x)=\left(1+\frac{x}{n}\right)^n\tag{1}$$ Note that $$f_n^{\,\prime}(x)=\left(1+\frac{x}{n}\right)^{n-1}\tag{2}$$ On compact subsets of $\mathbb{R}$, both $(1)$ and $(2)$ converge uniformly to $e^x$. This means that $$\frac{\mathrm{d}}{\mathrm{d}x}e^x=e^x\tag{3}$$ Therefore, $$\begin{align} \lim_{x\to0}\frac{e^x-1}{x} &=\lim_{x\to0}\frac{e^x-e^0}{x-0}\\ &=\left.\frac{\mathrm{d}}{\mathrm{d}x}e^x\right|_{x=0}\\ &=\left.e^x\right|_{x=0}\\ &=1\tag{4} \end{align}$$
-
Let's say $y=e^h -1$. Then $\lim_{h \rightarrow 0} \dfrac{e^h -1}{h} = \lim_{y \rightarrow 0}{\dfrac{y}{\ln{(y+1)}}} = \lim_{y \rightarrow 0} {\dfrac{1}{\dfrac{\ln{(y+1)}}{y}}} = \lim_{y \rightarrow 0}{\dfrac{1}{\ln{(y+1)}^\frac{1}{y}}}$. It is easy to prove that $\lim_{y \rightarrow 0}{(y+1)}^\frac{1}{y} = e$. Then, using limits of composite functions, $\lim_{y \rightarrow 0}{\dfrac{1}{\ln{(y+1)}^\frac{1}{y}}} = \dfrac{1}{\ln{(\lim_{y \rightarrow 0}{(y+1)^\frac{1}{y}})}} = \dfrac{1}{\ln{e}} = \dfrac{1}{1} = 1.$
-
1) Using Apostol's definition in "Calculus I" (6.12), or Finney's "Calculus and Analytic Geometry" (6.3), or Swokowski's "Calculus and Analytic Geometry" (def. 7.8), we have that $$\,\,e^x=y\Longleftrightarrow^{def.} \log y=x$$ and assuming we know the usual about the logarithmic function we get $$y=e^x\Longrightarrow \log y=x\Longrightarrow\frac{1}{y}\,dy=dx\Longrightarrow \frac{dy}{dx}=y=e^x\Longrightarrow (e^x)'=e^x$$ and now using the definition of derivative, for any real we have $$e^x=(e^x)'=\lim_{h\to 0}\frac{e^{x+h}-e^x}{h}=e^x\lim_{h\to 0}\frac{e^h-1}{h}\Longrightarrow \lim_{h\to 0}\frac{e^h-1}{h}=1$$ since, by this definition of the exponential function, we get, using the basic properties of the logarithm, $$e^{x+h}=e^xe^h\Longleftrightarrow \log(e^xe^h)=x+h ....$$
(2) By one of the most usual, and elementary, of the definitions of the number $\,e\,$ , we get: $$e:=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n\Longrightarrow e^h=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{hn}=\lim_{k\to\infty}\left(1+\frac{h}{k}\right)^k$$putting $\,\,k:=hn\,\,$, so$$\lim_{h\to 0}\frac{e^h-1}{h}=\lim_{h\to 0}\frac{\lim_{n\to\infty}\left(1+\frac{h}{n}\right)^n-1}{h}=\lim_{h\to 0}\frac{\lim_{n\to\infty}\sum_{k=0}^n\binom{n}{k}\left(\frac{h}{n}\right)^k-1}{h}=$$$$=\lim_{h\to 0}\lim_{n\to\infty}\frac{\sum_{k=1}^n\binom{n}{k}\left(\frac{h}{n}\right)^k}{h}=1+\lim_{h\to 0}\lim_{n\to\infty}\rlap{/}{h}\frac{\sum_{k=2}^n\binom{n}{k}\frac{h^{k-1}}{n^k}}{\rlap{/}{h}}=1$$as every single summand in what's left in that sum is multiplied by a positive power of $\,h$...
As far as I know, most authors go with the first approach; I couldn't find even one that goes with the second, and some even go with the definition of the exponential function as a power series... now you choose!
-
http://en.wikipedia.org/wiki/Frequency
# Frequency
Three cyclically flashing lights, from lowest frequency (top) to highest frequency (bottom). f is the frequency in hertz (Hz), meaning the number of cycles per second. T is the period in seconds (s), meaning the number of seconds per cycle. T and f are reciprocals.
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the interval between beats) is half a second.
## Definitions and units
For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by a Greek letter ν (nu).
The SI unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz: 1 Hz means that an event repeats once per second. A previous name for this unit was cycles per second.
A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated RPM. 60 RPM equals one hertz.[1]
The period, usually denoted by T, is the length of time taken by one cycle, and is the reciprocal of the frequency f:
$T = \frac{1}{f}$
The SI unit for period is the second.
## Measurement
Sinusoidal waves of various frequencies; the bottom waves have higher frequencies than those above. The horizontal axis represents time.
### By counting
Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period. For example, if 71 events occur within 15 seconds the frequency is:
$f = \frac {71}{15 \,\mbox{sec}} \approx 4.7 \,\mbox{hertz} \,$
If the number of counts is not very large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time.[2] The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2 Tm), or a fractional error of Δf / f = 1/(2 f Tm) where Tm is the timing interval and f is the measured frequency. This error decreases with frequency, so it is a problem at low frequencies where the number of counts N is small.
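A minimal sketch of this computation in Python (the helper function and its name are my own, purely for illustration), reusing the example above of 71 events in 15 seconds:

```python
# Frequency by counting, with the average gating error Δf = 1/(2·Tm) from the
# text; the fractional error Δf/f = 1/(2·f·Tm) grows as the frequency drops.
def frequency_by_counting(n_events, window_s):
    f = n_events / window_s
    abs_err = 1.0 / (2.0 * window_s)   # average gating error in Hz
    return f, abs_err, abs_err / f

f, df, rel = frequency_by_counting(71, 15.0)
print(f"f = {f:.3f} Hz ± {df:.4f} Hz (fractional error {rel:.2%})")
# f = 4.733 Hz ± 0.0333 Hz (fractional error 0.70%)
```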
### By stroboscope
An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope. This is an intense repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit. The strobe light is pointed at the rotating object and the frequency adjusted up and down. When the frequency of the strobe equals the frequency of the rotating or vibrating object, the object completes one cycle of oscillation and returns to its original position between the flashes of light, so when illuminated by the strobe the object appears stationary. Then the frequency can be read from the calibrated readout on the stroboscope. A downside of this method is that an object rotating at an integer multiple of the strobing frequency will also appear stationary.
### By frequency counter
Higher frequencies are usually measured with a frequency counter. This is an electronic instrument which measures the frequency of an applied repetitive electronic signal and displays the result in hertz on a digital display. It uses digital logic to count the number of cycles during a time interval established by a precision quartz time base. Cyclic processes that are not electrical in nature, such as the rotation rate of a shaft, mechanical vibrations, or sound waves, can be converted to a repetitive electronic signal by transducers and the signal applied to a frequency counter. Frequency counters can currently cover the range up to about 100 GHz. This represents the limit of direct counting methods; frequencies above this must be measured by indirect methods.
### Heterodyne methods
Above the range of frequency counters, frequencies of electromagnetic signals are often measured indirectly by means of heterodyning (frequency conversion). A reference signal of a known frequency near the unknown frequency is mixed with the unknown frequency in a nonlinear mixing device such as a diode. This creates a heterodyne or "beat" signal at the difference between the two frequencies. If the two signals are close together in frequency the heterodyne is low enough to be measured by a frequency counter. This process only measures the difference between the unknown frequency and the reference frequency, which must be determined by some other method. To reach higher frequencies, several stages of heterodyning can be used. Current research is extending this method to infrared and light frequencies (optical heterodyne detection).
## Frequency of waves
For periodic waves, frequency has an inverse relationship to the concept of wavelength; simply, frequency is inversely proportional to wavelength λ (lambda). The frequency f is equal to the phase velocity v of the wave divided by the wavelength λ of the wave:
$f = \frac{v}{\lambda}.$
In the special case of electromagnetic waves moving through a vacuum, then v = c, where c is the speed of light in a vacuum, and this expression becomes:
$f = \frac{c}{\lambda}.$
When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change.
## Examples
### Physics of light
Complete spectrum of electromagnetic radiation with the visible portion highlighted
Visible light is an electromagnetic wave, consisting of oscillating electric and magnetic fields traveling through space. The frequency of the wave determines its color: 4×10¹⁴ Hz is red light, 8×10¹⁴ Hz is violet light, and between these (in the range 4–8×10¹⁴ Hz) are all the other colors of the rainbow. An electromagnetic wave can have a frequency less than 4×10¹⁴ Hz, but it will be invisible to the human eye; such waves are called infrared (IR) radiation. At even lower frequency, the wave is called a microwave, and at still lower frequencies it is called a radio wave. Likewise, an electromagnetic wave can have a frequency higher than 8×10¹⁴ Hz, but it will be invisible to the human eye; such waves are called ultraviolet (UV) radiation. Even higher-frequency waves are called X-rays, and higher still are gamma rays.
All of these waves, from the lowest-frequency radio waves to the highest-frequency gamma rays, are fundamentally the same, and they are all called electromagnetic radiation. They all travel through a vacuum at the speed of light.
Another property of an electromagnetic wave is its wavelength. The wavelength is inversely proportional to the frequency, so an electromagnetic wave with a higher frequency has a shorter wavelength, and vice-versa.
### Physics of sound
Sound is made up of changes in air pressure in the form of waves. Frequency is the property of sound that most determines pitch.[3] The frequencies an ear can hear are limited to a specific range.
Mechanical vibrations perceived as sound travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through a vacuum.
The audible frequency range for humans is typically given as being between about 20 Hz and 20,000 Hz (20 kHz). High frequencies often become more difficult to hear with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.[4]
### Line current
In Europe, Africa, Australia, Southern South America, most of Asia, and Russia, the frequency of the alternating current in household electrical outlets is 50 Hz (close to the tone G), whereas in North America and Northern South America, the frequency of the alternating current in household electrical outlets is 60 Hz (between the tones B♭ and B; that is, a minor third above the European frequency). The frequency of the 'hum' in an audio recording can show where the recording was made, in countries using a European, or an American, grid frequency.
## Period versus frequency
As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency instead of period. These commonly used conversions are listed below:
| Frequency | 1 mHz (10⁻³ Hz) | 1 Hz (10⁰ Hz) | 1 kHz (10³ Hz) | 1 MHz (10⁶ Hz) | 1 GHz (10⁹ Hz) | 1 THz (10¹² Hz) |
|---|---|---|---|---|---|---|
| Period (time) | 1 ks (10³ s) | 1 s (10⁰ s) | 1 ms (10⁻³ s) | 1 µs (10⁻⁶ s) | 1 ns (10⁻⁹ s) | 1 ps (10⁻¹² s) |
## Other types of frequency
• Angular frequency ω is defined as the rate of change of angular displacement, θ, (during rotation), or the rate of change of the phase of a sinusoidal waveform (e.g. in oscillations and waves), or as the rate of change of the argument to the sine function:
$y(t) = \sin\left( \theta(t) \right) = \sin(\omega t) = \sin(2 \pi f t).\,$
$\frac{d \theta}{dt} = \omega = 2\pi f.\,$
Angular frequency is commonly measured in radians per second (rad/s) but, for discrete-time signals, can also be expressed as radians per sample time, which is a dimensionless quantity.
• Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes. E.g.:
$y(t) = \sin\left( \theta(t,x) \right) = \sin(\omega t + kx) \,$
$\frac{d \theta}{dx} = k .\,$
Wavenumber, k, sometimes means the spatial frequency analogue of angular temporal frequency. In case of more than one spatial dimension, wavenumber is a vector quantity.
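As a small worked example tying these quantities together (a sketch; the 50 Hz value is the European mains frequency from the Line current section):

```python
# Period T = 1/f, angular frequency ω = 2πf, and the RPM equivalence 60 RPM = 1 Hz.
import math

f = 50.0                    # frequency in Hz
T = 1.0 / f                 # period: 0.02 s
omega = 2.0 * math.pi * f   # angular frequency: ~314.16 rad/s
rpm = 60.0 * f              # rotation rate: 3000 RPM
print(T, omega, rpm)
```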
## Frequency ranges
The frequency range of a system is the range over which it is considered to provide a useful level of signal with acceptable distortion characteristics. A listing of the upper and lower frequency limits of a system is not useful without a criterion for what the range represents.
Many systems are characterized by the range of frequencies to which they respond. Musical instruments produce different ranges of notes within the hearing range. The electromagnetic spectrum can be divided into many different ranges such as visible light, infrared or ultraviolet radiation, radio waves, X-rays and so on, and each of these ranges can in turn be divided into smaller ranges. A radio communications signal must occupy a range of frequencies carrying most of its energy, called its bandwidth. Allocation of radio frequency ranges to different uses is a major function of radio spectrum allocation.
## References
1. Davies, A. (1997). Handbook of Condition Monitoring: Techniques and Methodology. New York: Springer. ISBN 978-0-412-61320-3.
2. Bakshi, K.A.; A.V. Bakshi, U.A. Bakshi (2008). Electronic Measurement Systems. US: Technical Publications. pp. 4–14. ISBN 978-81-8431-206-5.
3. Pilhofer, Michael (2007). Music Theory for Dummies. For Dummies. p. 97. ISBN 9780470167946.
4. Elert, Glenn; Timothy Condon (2003). "Frequency Range of Dog Hearing". The Physics Factbook. Retrieved 2008-10-22.
## Further reading
• Giancoli, D.C. (1988). Physics for Scientists and Engineers (2nd ed.). Prentice Hall. ISBN 0-13-669201-X
http://math.stackexchange.com/questions/170127/what-is-the-meaning-of-this-analysis-problem-and-give-some-hint-please?answertab=active
# What is the meaning of this analysis problem, and can you give a hint?
This problem is found in Analysis 1 by Herbert Amann and Joachim Escher, page 100.
Determine the following subsets of $\Bbb R^2$ by drawing:
$$A = \{(x,y) \in \Bbb R^2 : |x-1| + |y+1| \leq 1\},$$
$$B = \{(x,y) \in \Bbb R^2 : 2x^2+y^2>1, |x| \leq |y|\},$$
$$C = \{(x,y) \in \Bbb R^2 : x^2-y^2>1, x-2y<1, y-2x<1\}.$$
-
Draw them? The question seems incomplete. – ncmathsadist Jul 13 '12 at 1:22
@ncmathsadist - That is the whole problem in the book. – Victor Jul 13 '12 at 1:24
It could ask for descriptions, either by a picture or in words: "a square rotated by 45 degrees", "the exterior of an ellipse within two opposing quadrants", etc. – user31373 Jul 13 '12 at 1:26
I would assume words. The point of the exercises is (probably) to make you fluent in reading and interpreting set definitions like these. – Robert Mastragostino Jul 13 '12 at 1:29
## 1 Answer
For example C, there are three regions of the plane defined. To find each one, change the inequality to an equals sign and plot the graph, then decide which of the graph is the region of interest. The first is a hyperbola opening toward the $+x$ and $-x$ axes. You want the regions that do not contain the origin. The second is a line of slope $\frac 12$ going through $(1,0)$ and you want the half-plane above it. The third is a line of slope $2$ going through $(0,1)$ and you want the half-plane below it. If you plot all three on the same axes and shade the regions of interest, your result is the area shaded all three ways, the intersection of the regions of each inequality. Since the inequalities are strict, the dividing lines are not included.
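Here is a minimal sketch of that plot-and-shade procedure for set $C$ (assuming numpy and matplotlib; the grid bounds and resolution are arbitrary choices):

```python
# Shade C = {x^2 - y^2 > 1, x - 2y < 1, y - 2x < 1} on a grid and draw the
# three boundary curves; the strict inequalities mean the boundaries are excluded.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-4, 4, 801), np.linspace(-4, 4, 801))
C = (x**2 - y**2 > 1) & (x - 2*y < 1) & (y - 2*x < 1)

plt.contourf(x, y, C.astype(int), levels=[0.5, 1.5])  # shaded: all three hold
plt.contour(x, y, x**2 - y**2, levels=[1])            # hyperbola x^2 - y^2 = 1
plt.plot([-4, 4], [-2.5, 1.5])                        # line x - 2y = 1
plt.plot([-4, 4], [-7.0, 9.0])                        # line y - 2x = 1
plt.gca().set_aspect("equal")
plt.show()
```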
-
To Ross Millikan - Oh, I think the college-level question would be to ask for the region that satisfies A, B, C... – Victor Jul 13 '12 at 1:45
@Victor: I read it to be a three part question, with separate answers for A, B, and C. Each one has more than one region, so you are already asked to find the intersection. – Ross Millikan Jul 13 '12 at 1:55
http://math.stackexchange.com/questions/282173/expressing-hyperbolic-functions-in-terms-of-e
Expressing hyperbolic functions in terms of $e$.
Express $\tanh(-3)$ in terms of $e$, where $\tanh$ is the hyperbolic tangent.
This is what I did:
$$\begin{align} \tanh(-x)&=\dfrac{e^{-2x}-1}{e^{-2x}+1}\\\\\\ \tanh(-3)&=\dfrac{e^{-2\times-3}-1}{e^{-2\times-3}+1}\\\\\\ \tanh(-3)&=\dfrac{e^6-1}{e^6+1} \end{align}$$
However, this is wrong, as the actual solution is:
$$\tanh(-3)=-\dfrac{e^3-1}{e^3+1}$$
1. What have I done that is unacceptable, hence making my solution wrong?
2. How is the actual solution obtained? (Full explanation would be helpful)
-
You have too many minus signs! – andybenji Jan 19 at 18:59
I don't understand? – Olly Price Jan 19 at 19:18
You plug in $-3$ into the equation of $\tanh(-x)$, which gives you $\tanh(--3) = \tanh(3)$. – andybenji Jan 19 at 20:05
You shouldn't have the $2x$s in the second line. When you substitute in $-3$ for $x$, the $x$'s go away. – Ross Millikan Jan 19 at 20:15
That's not an $x$, that is a multiplication sign $\times$. – GEdgar Jan 19 at 21:01
2 Answers
Using the definition $$\tanh(x) = \frac{e^{2x}-1}{e^{2x}+1}$$So we plug in $-3$ wherever we see an $x$ to get that $$\tanh(-3) = \frac{e^{2 \cdot-3}-1}{e^{2\cdot-3}+1}=\frac{e^{-6}-1}{e^{-6}+1}$$So we multiply by $\frac{e^6}{e^6}$ to get $$= \frac{1-e^6}{1 + e^6}$$So other than a little minus sign error, I think you're correct!
-
Okay, but the definition I used for $\tanh(-x)$ is the correct definition, in which case surely I haven't made a minus sign error? Let me know where I'm going wrong with this statement, please. Thanks for the answer as well. – Olly Price Jan 20 at 22:47
Your first attempt is really right. In fact, you got $$\tanh(-3)=-\frac{e^3-e^{-3}}{e^3+e^{-3}}=-\frac{e^3-\frac{1}{e^{3}}}{e^3+\frac{1}{e^{3}}}=-\frac{e^6-1}{e^6+1}$$ Knowing also that $\tanh(x)$ is an odd function, the claimed "actual solution" doesn't seem to be the right result.
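A quick numerical check (Python, illustration only) confirms this: the OP's value matches $\tanh(-3)$, while the book's claimed answer does not.

```python
# Which candidate equals tanh(-3)?
import math

print(math.tanh(-3.0))                          # -0.995055...
print((1 - math.exp(6)) / (1 + math.exp(6)))    # -0.995055...  (matches)
print(-(math.exp(3) - 1) / (math.exp(3) + 1))   # -0.905148...  (the book's value; does not)
```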
-
Sign error!! – andybenji Jan 20 at 4:54
@andybenji: Where?? – Babak S. Jan 20 at 5:16
numerator should be $e^{-3} - e^3$. – andybenji Jan 20 at 7:31
@andybenji: Thanks Andy, but I have a minus before the fraction. And both of us noted the same thing at last. Didn't we? – Babak S. Jan 20 at 12:12
+1, I like it! :-) – amWhy Feb 13 at 0:10
http://mathoverflow.net/questions/77180/motivation-behind-defining-the-ramification-divisor
## Motivation behind defining the Ramification Divisor
I would like to understand what exactly is the motivation for defining the notion of a ramification divisor of a function.
As I see the definition,
If $f$ is a meromorphic function between two Riemann surfaces - say $X$ and $X'$ - then let $\nu_p(f)$ be the ramification (or order) of the function $f$ at $p$. Basically, if one is working in local coordinates such that $z(p)=0$, then $f$ in a neighbourhood of $p$ looks like $f=z^{\nu_p}h(z)$, where $h(z)$ is a holomorphic function which is never $0$ in a neighbourhood of $p$.
• In the above definition of ramification, can the function $h$ always be set to unity, by choosing a coordinate in $X'$ such that $f(p)=0$? (...I am not sure...)
• Does anything in the above definition depend on $X$ or $X'$ being compact?
Now for a similar map $f$ one defines its ramification divisor ($R_f$) as $R_f = \sum _{p \in X} (\nu_f(p) - 1)p$
• It's not clear to me whether people define ramification divisors for meromorphic functions too, since the texts almost exclusively seem to use it in the case of non-constant holomorphic functions. I would be glad if someone could clarify this... maybe I am missing something very basic.
• Also, this definition seems to be used almost exclusively when $X$ and $X'$ are compact Riemann surfaces. Is that somehow necessary?
{I guess in all this discussion one has to keep in mind that a holomorphic function on a Riemann surface and a holomorphic function between two Riemann surfaces are defined "differently" - as I see it. I guess there is no analogue of Liouville's theorem in the latter case.}
• Why that "-1" in the definition? Is $\nu_p(f)$ always greater than $1$ ?
• Let $q \in X'$ and let $p_1$ be a pre-image of $q$ under $f$ with multiplicity of $m_1$. Then I guess one will say that $\nu_f(p_1) = m_1$. Now is it obvious that any "small" perturbation of $q$ can only "split" $p_1$ into $m_1$ points each with $\nu_f = 1$? That nothing else can happen? For "large" enough perturbation to $q$ isn't it possible for many of its pre-images to "join up" and have larger ramifications than initially?
• Consider the set of $p \in X'$ such that $f^{-1}(p)$ consists only of points with $\nu_f = 1$ (called "simple points"?). Is this set open and dense in $X'$?
• Finally a curiosity - is there a "simple" way to see the Riemann-Hurwitz formula without using the Poincare-Hopf formula?
-
You have a lot of questions. I'll just address just a couple. One way to understand it is look at the the basic example $y=x^n$, then $dy= nx^{n-1}dx$. Note the exponent is precisely the coefficient of the ramification divisor; so it measures the "differenc" between differentials upstairs and downstairs. Since the issue is local, compactness is not essential, but the divisor might be an infinite sum in general. – Donu Arapura Oct 4 2011 at 22:26
The point is not that $\nu_p(f)$ is always greater than 1 but that it is always greater than or equal to 1 and usually is just 1. Only at $p$ where there is ramification do you have $\nu_p(f) > 1$, so in the definition of $R_f$ you get a finite sum. If the coefficient was $\nu_f(p)$ then the formal sum would have uncountably many nonzero terms! – KConrad Oct 5 2011 at 2:32
@KConrad Can you kindly explain why the ramification is greater than 1 at only finitely many points? Maybe I am missing something basic! I wonder if that is related to there being only finitely many singular points on a compact Riemann surface - but I can't relate the two things. At least in local coordinates it seems that if the ramification is greater than 1 then partial derivatives there will start vanishing, and that would then be a singular point by definition. Is that the argument? – Anirbit Oct 7 2011 at 0:37
## 2 Answers
A few answers:
• As the comments mention, the "-1" is certainly needed to get a divisor in the first place, since $\nu_p$ is usually equal to 1 and exceeds 1 at the ramification points. Thus, the support of the ramification divisor as you define it is precisely the ramification locus.
• Ramification is a local phenomenon, so compactness is totally irrelevant.
• A meromorphic function on a Riemann surface $X$ can be interpreted as a map $X\to \mathbb{P}^1$ (the poles go to $\infty\in \mathbb{P}^1$ with ramification index equal to the degree of the pole).
• Here's how I think of/recall the Riemann-Hurwitz formula for $f:X\to X'$: Imagine that you have triangulated $X'$ such that all ramification points (or the images thereof if you think of them on $X$) occur at vertices. Now consider the "pullback" of this triangulation to $X$ (look a the preimages of the faces, edges, and vertices). If you compute the Euler characteristic of $X$ using this pullback triangulation you will see that it differs from the degree of $f$ times the Euler characteristic of $X'$ (computed using the original triangulation) exactly by the degree of your ramification divisor, and the Riemann-Hurwitz formula drops out!
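Making the Euler-characteristic count in the last point explicit (a sketch; $n = \deg f$, and $V', E', F'$ are the vertex, edge and face counts of the triangulation of $X'$): every face and edge has exactly $n$ preimages, while a vertex $q$ has $n - \sum_{p \in f^{-1}(q)} (\nu_p - 1)$ preimages, so

$$V = nV' - \sum_{p\in X}(\nu_p - 1), \qquad E = nE', \qquad F = nF',$$

$$\chi(X) = V - E + F = n\,\chi(X') - \deg R_f,$$

which is the Riemann-Hurwitz formula.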
-
A few more answers:
$h$ can be made to be a unit (=nowhere zero) in a small enough neighborhood; this is just manipulating the Taylor series in local coordinates.
The set of "simple" (=non-ramified) points is open and dense (think in terms of derivatives). If you are interested in the image of the set of ramified points under $f$, Sard's theorem can help.
Regarding small perturbations, "splitting" a ramified point into simple points is not the only possible scenario. Compare two perturbations of $f(z)=z^3$ in $\mathbb{C}$: $z^3+az^2$ and $z^3+a$, $z$ and $a$ in a neighborhood of $0$. I am not sure what you mean by "large" perturbations.
An alternative intuition for Riemann-Hurwitz on compact surfaces if the critical points of the map $f$ are nondegenerate can be given using a Morse function on $X'$; see
MR2126710 (2006a:30040) Stawiska, Małgorzata: Riemann-Hurwitz formula and Morse theory. The $p$-harmonic equation and recent advances in analysis, 209–211, Contemp. Math., 370, Amer. Math. Soc., Providence, RI, 2005
-
@Margaret Thanks for your reply. Isn't the case where you propose to use Sard's theorem the converse of what I was asking? I was asking about the possibly open and dense nature of those points in the target Riemann surface whose preimages in the domain Riemann surface are simple (= non-ramified). Maybe I am confusing things, but I think you are looking at the opposite case, of images in the range space of the simple points in the domain space. Apologies if I am misreading! – Anirbit Oct 7 2011 at 0:52
@Anirbit: The set of ramified points is closed, and a non-constant holomorphic map between Riemann surfaces is open. So the image of all nonramified points is open. Sard's theorem can be used to deduce density (the image of the set of ramified points, being of zero Lebesgue measure, cannot contain an open set). For compact surfaces you don't need this argument, as a holomorphic map has then only finitely many ramified points. – Margaret Friedland Oct 7 2011 at 17:13
@Margaret Thanks for the reply. Can you explain this in the language of arguing that the points with positive ramification are isolated? That might help understand better. – Anirbit Oct 8 2011 at 21:50
If you think of ramification points as zeros of the derivative, then of course they are isolated in a complex domain, so you get only finitely many of them. (This is not true in higher dimensional manifolds, and you need an argument like my previous one.) – Margaret Friedland Oct 11 2011 at 19:16
http://math.stackexchange.com/questions/143453/uniform-convergence-of-sequence-to-the-exponent-function
# Uniform convergence of sequence to the exponent function
Let $f_n(z) = (1-z^2/n)^n$, and let $f(z)=\operatorname{exp}(-z^2)$. I need to show that $f_n$ converges uniformly to $f$ in any closed disc.
I saw this: Uniform Convergence of an Exponential Sequence of Functions, but I'm not sure the methods used there are applicable here because we're talking about the complex plane, and the logarithm function doesn't behave nicely there.
-
– draks ... May 10 '12 at 12:40
Thanks. Must've missed that. I'm trying to expand e^z to degree n such as the remainder is epsilon/2, and this is what I get: $\left|e^w-(1+w/n)^n\right| = \left|\sum\limits_{k=0}^n \frac{w^k}{k!} + w^{n+1}g_{n+1}(w) - \sum\limits_{k=0}^n \binom{n}{k} \frac{w^k}{n^k}\right| \leq \frac{\varepsilon}{2} + \sum_{k=0}^n \frac{|w|^k}{k!}\left|\frac{n!}{(n-k)!n^k}-1\right|$ I can't really continue from here because $|w|^k$ can be quite large as $k \approx n \to \infty$. – George May 10 '12 at 16:15
http://physics.stackexchange.com/questions/2502/how-accurately-is-the-moment-of-perihelion-of-earth-known-and-how-is-it-measure
# How accurately is the moment of perihelion of Earth known, and how is it measured?
Earth's perihelion passed about nine hours ago. How accurately do we know the moment of closest approach of the Earth to the center of the sun? How do we make this measurement?
-
## 3 Answers
Two things that we are really good at measuring in astronomy: time and angles. With respect to the sun and other planets, we can measure our relative orientation in the solar system to better than a milliarcsec ($8\times 10^{-10}$ of a circle). A year is $3\times 10^7$ sec, so we know passage of pericenter to about 0.02 sec. This precision can be built up even more by observations over many years (the Greeks knew the average length of a month to much better than a second by comparing eclipse times separated by hundreds of years). Of course, all our atomic clocks keep absolute time much better than this, and in practice we know (because of these accurate astronomical measurements) that the earth's rotation is not nearly so constant, so we often have to add or subtract leap-seconds.
EDIT: from radar ranging, we can actually measure our instantaneous location in the solar system to within 3m. With an orbital velocity of about 4.7km/s, that gives pericenter passage to better than a millisecond!
-
I'm not sure about this 0.02sec figure. Measuring where we are would only give us that much precision if we also knew exactly where the pericenter is. My question isn't about how we measure the length of a year or measure our position relative to the other planets. It's about how we know where in our orbit we're closest to the sun. – Mark Eichenlaub Jan 4 '11 at 17:50
Well, they all go into the same fit. If we don't know our orbit perfectly, the positions of the planets will be wrong, so by measuring the planets, we constrain our own orbit. I just looked up in Murray&Dermott the SS orbital elements. They list all longitudes of pericenter to 8 digits, so I bet I'm not far off. – Jeremy Jan 4 '11 at 22:27
Okay. Thanks for the reference and info! – Mark Eichenlaub Jan 5 '11 at 5:27
We know the Earth's orientation to .001 as, but the Earth's orientation has no effect on the time of perihelion. – Nick Dec 13 '12 at 21:02
I'm pretty sure the current method of measuring solar-system scale distances is radar ranging. I don't know enough about the details of this for measuring the Earth-Sun distance. A first order estimate of the error in measuring the moment of closest approach would have to be something of the order of the 16 minute delay time in sending a signal from Earth and receiving a response on the Earth's surface from the reflection event at the Sun's surface. There might be subtle problems tied to the fact that the Sun doesn't have a solid surface, but we could certainly do radar ranging with several planets--triangulating the Earth-Sun distance using Mercury and Venus, for example.
and like Jeremy said, we could probably improve this time by using several years' worth of data.
-
I think this is right--radar ranging is even better than astrometry. – Jeremy Jan 4 '11 at 22:31
The time of perihelion is not measured by radar. The Sun is not a solid body, there is no single surface to reflect a signal. Radar measures the distance to the surface of a body, not to its center of mass, so the accuracy of radar is limited by our knowledge of the topography of the body. In practice, ephemerides are used.
An anomalistic year is the mean duration between two periapsis (or apoapsis) events. It is about 365.2596358 days. The Earth is offset from the Earth-moon barycenter by about 4674.9 kilometers; this causes the actual time of perihelion to vary by a few minutes compared to that predicted by simple Keplerian motion. The relative orientation of the Sun and Jupiter is another significant perturbation.
The current JPL ephemeris predicts the Earth-Sun distance to within one kilometer. Given the velocity of the Earth at perihelion is 30286 m/s, that would imply the moment of perihelion is predictable to within 34 milliseconds.
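The quoted timing accuracies are just distance errors divided by orbital speed; a trivial check:

```python
# Timing accuracy ≈ position error / orbital speed.
print(1000.0 / 30286.0)  # ephemeris error of ~1 km -> ~0.033 s, the 34 ms figure
print(3.0 / 4700.0)      # radar error of ~3 m at ~4.7 km/s -> ~0.6 ms, per the first answer
```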
After perihelion has occurred, measurements taken during the event, including radar ranging (of the inner planets), and VLBI, can be used to improve the accuracy.
-
http://crypto.stackexchange.com/questions/524/what-place-do-prime-numbers-have-in-cryptography?answertab=votes
# What place do prime numbers have in cryptography?
My understanding of hashing and encryption is rather limited. I certainly do not understand the mathematical formulas at play in these algorithms. With that said, what part do prime numbers play in cryptography?
When I was in college, one of my professors told me that the fact that there is no formula to predict a prime number (other than just trying them) is what makes many encryption schemes (like PGP) so secure, since it's not possible to guess the number used for the public/private key in any reasonable amount of time. Is that correct?
-
– jug Aug 27 '11 at 17:19
## 3 Answers
No, the fact that there's no known practical formula that produces only prime numbers doesn't really come into play; if someone found one tomorrow, that wouldn't have any cryptographical implications.
You may want to go through the How does asymmetric encryption work? thread; the short answer is that for public key operations, the public and the private keys must be related, but not in an obvious way. What means that whatever scheme is used has some mathematical structure. Number theory is a place where we find 'related-but-not-in-an-obvious-way' mathematical structures, and primes are rather, err, primary in number theory.
Exactly how the primes are used depends on which asymmetric algorithm is used; in RSA, we exploit the fact that, given $M^e \mod N$ (and $e$ and $N$), there's no known way to recover $M$ without knowing the prime factors of $N$ (but with that information, it's easy). In DH/DSS, we exploit the fact that, given $G^x \mod p$ (and $G$ and $p$), there's no known way to recover $x$ (and we have $p$ be a prime because the problem with composite moduli becomes a lot easier if someone finds the factorization). In Elliptic Curve Cryptography, we exploit the difficulty of recovering $k$ given $kG$ (where $G$ is a known point); a prime is involved because elliptic curves are defined only over a field, and the only finite fields (that is, fields with finitely many elements) have size $p^N$, for prime $p$ (and positive integer $N$).
Now, you occasionally see primes used in symmetric crypto; it's not nearly as common (and those primes aren't secret).
-
Yes, that's correct.
Take a look at this question / answer: How does asymmetric encryption work? about the related math, in a simple way.
When you deal with large numbers, there is no easy way to find all the factors of such a number (if they are not mostly small factors).
And think about the following: if you multiply one prime number by another one, you'll end up with a third number. Like $ab = c$. Now it is easy to come from $a$ and $b$ to $c$, but it is hard to come from $c$ to $a$ and $b$ (although those are exactly the two factors of $c$).
There are some factoring algorithms more efficient than simply trying all numbers, but still there is no algorithm sufficiently fast to factor sufficiently large (and still practically usable) composite numbers in our life-time. (On the other hand, there are efficient algorithms to show that a number is or is not a prime number, without finding any factors. This is actually used to generate a key.)
For RSA we use $a$ and $b$ (in fact, $\phi(c) = (a-1)(b-1) = c+1 - (a+b)$) to create the private key, while $c$ is part of the public key.
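A toy illustration of this in Python (deliberately tiny, completely insecure primes; the variable names follow this answer's $a$, $b$, $c$):

```python
# Toy RSA round trip: knowing the factors a, b of c makes decryption easy.
a, b = 61, 53
c = a * b                   # public modulus: 3233
phi = (a - 1) * (b - 1)     # = c + 1 - (a + b) = 3120; requires the factors
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent via modular inverse (Python 3.8+)

m = 65                      # a message
print(pow(pow(m, e, c), d, c))  # encrypt then decrypt: prints 65
```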
-
– Paŭlo Ebermann♦ Aug 26 '11 at 17:06
@Paŭlo Ebermann: would you mind editing my answer to make it correct? I know you have a far better knowledge than me, it'd be better to have a correct answer here in the forum – woliveirajr Aug 26 '11 at 18:11
I hope this edit did not go overboard. Feel free to edit again. – Paŭlo Ebermann♦ Aug 26 '11 at 18:47
@Paŭlo Ebermann: thanks! :) – woliveirajr Aug 26 '11 at 18:51
Prime numbers are so important because a number $a$ has exactly one multiplicative inverse $a^{-1}$, with $a \cdot a^{-1} \bmod N = 1$, only if $\gcd(a, N) = 1$, i.e. if they are relatively prime.
If you take a prime number as the modulus, you don't have to care about relative primeness anymore: every number strictly between $0$ and $N$ is relatively prime to $N$.
Another reason is Fermat's little theorem, which is used in RSA.
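A small Python illustration of both points (the modulus choices are arbitrary):

```python
# Mod a prime p, every 1 <= a < p is invertible, and Fermat's little theorem
# (a^(p-1) ≡ 1 mod p) gives the inverse as a^(p-2) mod p.
import math

p = 101
for a in range(1, p):
    assert a * pow(a, p - 2, p) % p == 1

# Mod a composite N, only the a with gcd(a, N) = 1 are invertible:
N = 12
print([a for a in range(1, N) if math.gcd(a, N) == 1])  # [1, 5, 7, 11]
```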
-
http://www.purplemath.com/learning/viewtopic.php?f=9&t=1543&p=4752
|
# The Purplemath Forums
## Find equal sum: 1 + 2 + 3 = 1 * 2 * 3
### Find equal sum: 1 + 2 + 3 = 1 * 2 * 3
by japiga on Thu Sep 23, 2010 11:48 am
Find all combinations of three natural numbers whose sum is equal to their product.
I have found just one combination: 1 + 2 + 3 = 1 * 2 * 3. But that is only one combination, and the question is: is it the only one, and how can I prove it?
by stapel_eliz on Thu Sep 23, 2010 7:28 pm
japiga wrote:...how to prove it?
Use algebra.
Assuming the numbers are meant to be integers or whole numbers, and assuming that they are meant to be consecutive, then you are trying to find solutions to x + (x + 1) + (x + 2) = x(x + 1)(x + 2).
### Re: Find equal sum: 1 + 2 + 3 = 1 * 2 * 3
by japiga on Fri Sep 24, 2010 7:47 am
Yes, and I got results (three solutions for $x$: $x_1 = 1$, $x_2 = -1$ and $x_3 = 3$), and consequently we have three consecutive numbers for each $x$, with the following equations: $1 + 2 + 3 = 1 \cdot 2 \cdot 3$; $-1 + 0 + 1 = -1 \cdot 0 \cdot 1$; and $3 + 4 + 5$ is not equal to $3 \cdot 4 \cdot 5$ (so in this case we may exclude the solution $x_3$, or put the numbers in the opposite order, such as 3, 2, 1, and now it makes sense: $3 + 2 + 1 = 3 \cdot 2 \cdot 1$). Do you think I've solved it?
by stapel_eliz on Fri Sep 24, 2010 1:57 pm
japiga wrote: Yes, and I got results (three solutions for $x$: $x_1 = 1$, $x_2 = -1$ and $x_3 = 3$)....
How did you arrive at these solutions? (If you plug them back into the original equation, provided earlier, which one(s) work?)
### Re: Find equal sum: 1 + 2 + 3 = 1 * 2 * 3
by japiga on Fri Sep 24, 2010 2:18 pm
It works only if $x_1 = 1$ and $x_2 = -1$, but not for $x_3 = 3$. Is that now the correct answer?
I arrived at it just by solving the equation $x + (x + 1) + (x + 2) = x(x + 1)(x + 2)$:
$x + x + 1 + x + 2 = x(x^2 + 2x + x + 2)$
$3x + 3 = x^3 + 3x^2 + 2x$
$x^3 - 3x^2 - x + 3 = 0$
$x^2(x - 3) - (x - 3) = 0$
$(x^2 - 1)(x - 3) = 0$
by stapel_eliz on Fri Sep 24, 2010 7:38 pm
japiga wrote: $3x + 3 = x^3 + 3x^2 + 2x$
$x^3 - 3x^2 - x + 3 = 0$
You were correct to here:
. . . . .$3x\, +\, 3\, =\, x^3\, +\, 3x^2\, +\, 2x$
You subtracted the $3x$ and the $3$ from the left-hand side to the right-hand side:
. . . . .$3x\, -\, 3x\, +\, 3\, -\, 3\, =\, x^3\, +\, 3x^2\, +\, 2x\, -\, 3x\, -\, 3$
Then you simplified. But how did you end up with your second line in the quote above?
### Re: Find equal sum: 1 + 2 + 3 = 1 * 2 * 3
by japiga on Sat Sep 25, 2010 6:14 pm
I have just solved it! We have 3 solutions for $x$:
$x_1 = 1$; $x_2 = -1$; and $x_3 = -3$. So we get the same result when we sum and when we multiply the factors $x$, $(x+1)$ and $(x+2)$.
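For a quick sanity check of the original question, a brute-force search (a Python sketch; the bound of 100 is arbitrary) over triples $a \le b \le c$ of natural numbers, not necessarily consecutive, finds $(1, 2, 3)$ as the only solution with equal sum and product in that range:

```python
# Brute-force check: triples a <= b <= c of natural numbers with a+b+c == a*b*c.
limit = 100  # arbitrary search bound
hits = [(a, b, c)
        for a in range(1, limit)
        for b in range(a, limit)
        for c in range(b, limit)
        if a + b + c == a * b * c]
print(hits)  # [(1, 2, 3)] -- the only solution in this range
```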
http://mathhelpforum.com/differential-equations/168018-u_x-3u_y-0-a.html
# Thread:
1. ## u_x-3u_y=0
$u_x-3u_y=0$
$u(x,x)=x^2$
$\omega_{\xi}(\cos{\alpha}-3\sin{\alpha})-\omega_{\eta}(\sin{\alpha}+3\cos{\alpha})=0$
$\displaystyle\cos{\alpha}=\frac{1}{\sqrt{10}} \ \ \ \sin{\alpha}=\frac{-3}{\sqrt{10}}$
$\displaystyle\sqrt{10}\omega_{\xi}=0\Rightarrow\omega_{\xi}=0$
$x=\xi+B\eta \ \ \ y=-3\xi+D\eta$
$\displaystyle\int\omega_{\xi}d\xi=\int 0d\xi\Rightarrow\omega(\xi,\eta)=g(\eta)$
I am lost with the initial condition and with what to set D and B equal to.
2. You may choose any numbers you like for B and D as long as the change of variables is invertible, because these choices do not affect your original equation.
Say you could choose $B = \frac{1}{3}$ and $D = 0$; it looks nice that way.
Once you have undone your change of variables, set $x = y$ to use your initial condition.
Why do you use the $\cos$ and $\sin$ for coefficients?
3. Originally Posted by PaulRS
Why do you use the $\cos$ and $\sin$ for coefficients?
That is how my book has taught me. Here is what is in the book.
$\xi=x\cos{\alpha}+y\sin{\alpha}, \ \ \ x=\xi\cos{\alpha}-\eta\sin{\alpha}, \ \ \ \eta=-x\sin{\alpha}+y\cos{\alpha}, \ \ \ y=\xi\sin{\alpha}+\eta\cos{\alpha}$
$u(x,y)=u(\xi\cos{\alpha}-\eta\sin{\alpha}, \ \xi\sin{\alpha}+\eta\cos{\alpha})=\omega(\xi,\eta)$
4. $x=\xi+\frac{1}{3}\eta\Rightarrow\eta=3x-3\xi \ \ \ y=-3\xi\Rightarrow\xi=-\frac{y}{3}$
$\eta=3x+y$
$\omega(\xi,\eta)=u(x,y)=g(3x+y)$
$u(x,x)=g(3x+x)=g(4x)=x^2$
How do I incorporate $g(4x)=x^2$ into the general solution $u(x,y)=g(3x+y)\mbox{?}$
5. Well, note that then: $g(x) = \left(\frac{x}{4}\right)^2$ - from the equation you have there.
Now to get $u(x,y)$, plug $3x+y$ into $g$.
6. Originally Posted by PaulRS
Well, note that then: $g(x) = \left(\frac{x}{4}\right)^2$ - from the equation you have there.
Now to get $u(x,y)$, plug $3x+y$ into $g$.
Wouldn't it be $\displaystyle\left(\frac{x}{2}\right)^2\mbox{?}$
7. We want $g(u)$, but to calculate this we let $u = 4\cdot x$; then $g(u) = g(4\cdot x) = x^2 = \left(\frac{u}{4}\right)^2$, i.e. $x^2$ rewritten in terms of $u$.
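As a cross-check of the final answer, a short sympy sketch (assuming the solution $u(x,y) = \left(\frac{3x+y}{4}\right)^2$ obtained above) verifies both the PDE and the initial condition:

```python
# Check that u(x, y) = ((3x + y)/4)**2 solves u_x - 3*u_y = 0 with u(x, x) = x**2.
import sympy as sp

x, y = sp.symbols('x y')
u = ((3*x + y) / 4)**2
assert sp.simplify(sp.diff(u, x) - 3*sp.diff(u, y)) == 0  # the PDE
assert sp.simplify(u.subs(y, x) - x**2) == 0              # the initial condition
```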
http://mathhelpforum.com/advanced-algebra/186977-complex-eigenvectors.html
# Thread:
1. ## Complex eigenvectors
Hi!
A complex $2\times 2$ matrix is given as
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
where $a$, $b$, $c$ and $d$ are complex numbers.
Find the eigenvectors of $A$ when $a = 0$, $b = i$, $c = -i$ and $d = 0$, and normalize them to unity.
I think I've found the eigenvalues to be 1 and -1. (If that is right.)
But I have problems finding the eigenvectors. I know how it should be done, and I know the equation and so on, but I don't get any results. I need someone to show me how to get the answer.
Looking forward to your help!
Best regards!
2. ## Re: Complex eigenvectors
Originally Posted by expresstrain
I think I've found the eigenvalues to be 1 and -1. (If that is right.)
Right.
But I have problem finding the eigenvectors. I know how you should do it, and know the equation and so on, but I don't get any results. I need someone to show me how you get the answer.
$\ker (A-I) \equiv \begin{cases} -x_1+ix_2=0 \\ -ix_1-x_2=0 \end{cases}$ ... a basis is $B_1=\{(i,1)\}$ . Now, normalize it.
$\ker (A+I) \equiv\ldots$
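A numerical cross-check (a minimal numpy sketch; the overall phase of each eigenvector is whatever `eigh` returns) confirms the eigenvalues $\pm 1$ and that the normalized columns are eigenvectors:

```python
# Numerical cross-check: eigen-decomposition of A = [[0, i], [-i, 0]].
import numpy as np

A = np.array([[0, 1j], [-1j, 0]])
vals, vecs = np.linalg.eigh(A)          # A is Hermitian, so eigh applies
print(vals)                             # [-1.  1.]
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)  # each column is a unit eigenvector
```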
http://medlibrary.org/medwiki/Dirac_equation
# Dirac equation
In particle physics, the Dirac equation is a relativistic wave equation formulated by British physicist Paul Dirac in 1928. It describes fields corresponding to elementary spin-½ particles (such as the electron) as a vector of four complex numbers (a bispinor), in contrast to the Schrödinger equation, which describes a field of only one complex value.
The Dirac equation is consistent with both the principles of quantum mechanics and the theory of special relativity,[1] and was the first theory to account fully for relativity in the context of quantum mechanics. It accounted for the fine details of the hydrogen spectrum in a completely rigorous way. The equation also implied the existence of a new form of matter, antimatter, hitherto unsuspected and unobserved, and actually predated its experimental discovery. It also provided a theoretical justification for the introduction of several-component wave functions in Pauli's phenomenological theory of spin. Although Dirac did not at first fully appreciate what his own equation was telling him, his resolute faith in the logic of mathematics as a means to physical reasoning, his explanation of spin as a consequence of the union of quantum mechanics and relativity, and the eventual discovery of the positron, together represent one of the great triumphs of theoretical physics, fully on a par with the work of Newton, Maxwell, and Einstein before him.[2]
In the limit of zero mass, the Dirac equation reduces to the Weyl equation.
## Mathematical formulation
The Dirac equation in the form originally proposed by Dirac is:
$\left(\beta mc^2 + \sum_{k = 1}^3 \alpha_k p_k \, c\right) \psi (\mathbf{x},t) = i \hbar \frac{\partial\psi(\mathbf{x},t) }{\partial t}$
where
• ψ = ψ(x, t) is the wave function for the electron,
• x and t are the space and time coordinates,
• m is the rest mass of the electron,
• p is the momentum, understood to be the momentum operator in the Schrödinger theory,
• c is the speed of light, and ħ = h/2π is the reduced Planck constant.
Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, and so to allow the atom to be treated in a manner consistent with relativity. His rather modest hope was that the corrections introduced this way might have bearing on the problem of atomic spectra. Up until that time, attempts to make the old quantum theory of the atom compatible with the theory of relativity, attempts based on discretizing the angular momentum stored in the electron's possibly non-circular orbit of the atomic nucleus, had failed - and the new quantum mechanics of Heisenberg, Pauli, Jordan, Schrödinger, and Dirac himself had not developed sufficiently to treat this problem. Although Dirac's original intentions were satisfied, his equation had far deeper implications for the structure of matter, and introduced new mathematical classes of objects that are now essential elements of fundamental physics.
The new elements in this equation are the 4 × 4 matrices $\alpha_k$ and $\beta$, and the four-component wave function $\psi$. The matrices are all Hermitian and have squares equal to the identity matrix:
$\alpha_i^2=\beta^2=I_4$
and they all mutually anticommute:
$\alpha_i\alpha_j + \alpha_j\alpha_i = 0 \,$
$\alpha_i\beta + \beta\alpha_i = 0 \,$
when i and j are distinct. The single symbolic equation thus unravels into four coupled linear first-order partial differential equations for the four quantities that make up the wave function. These matrices, and the form of the wave function, have a deep mathematical significance. The algebraic structure represented by the Dirac matrices had been created some 50 years earlier by the English mathematician W. K. Clifford. In turn, Clifford's ideas had emerged from the mid-19th century work of the German mathematician Hermann Grassmann in his "Lineale Ausdehnungslehre" (Theory of Linear Extensions). The latter had been regarded as well-nigh incomprehensible by most of his contemporaries. The appearance of something so seemingly abstract, at such a late date, and in such a direct physical manner, is one of the most remarkable chapters in the history of physics.
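These algebraic properties can be checked numerically; the following sketch assumes the standard (Dirac–Pauli) representation given later in the article, with $\beta = \mathrm{diag}(I_2, -I_2)$ and $\alpha_k$ built from the Pauli matrices, and verifies Hermiticity, unit squares, and mutual anticommutation:

```python
# Standard-representation alpha_k and beta, checked against the stated properties.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

beta = np.block([[I2, 0*I2], [0*I2, -I2]])
alphas = [np.block([[0*I2, s], [s, 0*I2]]) for s in (sx, sy, sz)]

for M in alphas + [beta]:
    assert np.allclose(M, M.conj().T)           # Hermitian
    assert np.allclose(M @ M, np.eye(4))        # square equals the identity
for i, A in enumerate(alphas):
    assert np.allclose(A @ beta + beta @ A, 0)  # anticommutes with beta
    for B in alphas[i+1:]:
        assert np.allclose(A @ B + B @ A, 0)    # mutual anticommutation
```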
### Making the Schrödinger equation relativistic
The Dirac equation is superficially similar to the Schrödinger equation for a massive free particle:
$-\frac{\hbar^2}{2m}\nabla^2\phi = i\hbar\frac{\partial}{\partial t}\phi.$
The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy. Because relativity treats space and time as a whole, a relativistic generalization of this equation requires that space and time derivatives must enter symmetrically, as they do in the Maxwell equations that govern the behavior of light — the equations must be differentially of the same order in space and time. In relativity, the momentum and the energy are the space and time parts of a space-time vector, the 4-momentum, and they are related by the relativistically invariant relation
$\frac{E^2}{c^2} - p^2 = m^2c^2$
which says that the length of this vector is proportional to the rest mass m. Substituting the operator equivalents of the energy and momentum from the Schrödinger theory, we get an equation describing the propagation of waves, constructed from relativistically invariant objects,
$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\phi = \frac{m^2c^2}{\hbar^2}\phi$
with the wave function ϕ being a relativistic scalar: a complex number which has the same numerical value in all frames of reference. The space and time derivatives both enter to second order. This has a telling consequence for the interpretation of the equation. Because the equation is second order in the time derivative, then by the nature of solving differential equations, one must specify both the initial values of the wave function itself and of its first time derivative, in order to solve definite problems. Because both may be specified more or less arbitrarily, the wave function cannot maintain its former role of determining the probability density of finding the electron in a given state of motion. In the Schrödinger theory, the probability density is given by the positive definite expression
$\rho=\phi^*\phi\,$
and this density is convected according to the probability current vector
$J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*)$
with the conservation of probability current and density following from the Schrödinger equation:
$\nabla\cdot J + \frac{\partial\rho}{\partial t} = 0.$
The fact that the density is positive definite and convected according to this continuity equation, implies that we may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature. Now, if we wish to maintain the notion of a convected density, then we must generalize the Schrödinger expression of the density and current so that the space and time derivatives again enter symmetrically in relation to the scalar wave function. We are allowed to keep the Schrödinger expression for the current, but must replace the probability density by the symmetrically formed expression
$\rho = \frac{i\hbar}{2m}(\psi^*\partial_t\psi - \psi\partial_t\psi^*).$
which now becomes the 4th component of a space-time vector, and the entire probability 4-current density has the relativistically covariant expression
$J^\mu = \frac{i\hbar}{2m}(\psi^*\partial^\mu\psi - \psi\partial^\mu\psi^*)$
The continuity equation is as before. Everything is compatible with relativity now, but we see immediately that the expression for the density is no longer positive definite - the initial values of both $\psi$ and $\partial_t\psi$ may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus we cannot get a simple generalization of the Schrödinger equation under the naive assumption that the wave function is a relativistic scalar, and the equation it satisfies, second order in time.
Although it is not a successful relativistic generalization of the Schrödinger equation, this equation is resurrected in the context of quantum field theory, where it is known as the Klein–Gordon equation, and describes a spinless particle field (e.g. pi meson). Historically, Schrödinger himself arrived at this equation before the one that bears his name, but soon discarded it. In the context of quantum field theory, the indefinite density is understood to correspond to the charge density, which can be positive or negative, and not the probability density.
### Dirac's coup
Dirac thus thought to try an equation that was first order in both space and time. One could, for example, formally take the relativistic expression for the energy
$E = c\sqrt{p^2 + m^2c^2}\,,$
replace p by its operator equivalent, expand the square root in an infinite series of derivative operators, set up an eigenvalue problem, then solve the equation formally by iterations. Most physicists had little faith in such a process, even if it were technically possible.
As the story goes, Dirac was staring into the fireplace at Cambridge, pondering this problem, when he hit upon the idea of taking the square root of the wave operator thus:
$\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} = \left(A \partial_x + B \partial_y + C \partial_z + \frac{i}{c}D \partial_t\right)\left(A \partial_x + B \partial_y + C \partial_z + \frac{i}{c}D \partial_t\right).$
On multiplying out the right side we see that, in order to get all the cross-terms such as $\partial_x \partial_y$ to vanish, we must assume
$AB + BA = 0, \;\ldots$
with
$A^2 = B^2 = \ldots = 1.\,$
Dirac, who had just then been intensely involved with working out the foundations of Heisenberg's matrix mechanics, immediately understood that these conditions could be met if A, B, C and D are matrices, with the implication that the wave function has multiple components. This immediately explained the appearance of two-component wave functions in Pauli's phenomenological theory of spin, something that up until then had been regarded as mysterious, even to Pauli himself. However, one needs at least 4 × 4 matrices to set up a system with the properties required — so the wave function had four components, not two, as in the Pauli theory, or one, as in the bare Schrödinger theory. The four-component wave function represents a new class of mathematical object in physical theories that makes its first appearance here.
Given the factorization in terms of these matrices, one can now write down immediately an equation
$\left(A\partial_x + B\partial_y + C\partial_z + \frac{i}{c}D\partial_t\right)\psi = \kappa\psi$
with κ to be determined. Applying again the matrix operator on both side yields
$\left(\nabla^2 - \frac{1}{c^2}\partial_t^2\right)\psi = \kappa^2\psi.$
On taking κ = mc/ħ we find that all the components of the wave function individually satisfy the relativistic energy–momentum relation. Thus the sought-for equation that is first-order in both space and time is
$\left(A\partial_x + B\partial_y + C\partial_z + \frac{i}{c}D\partial_t - \frac{mc}{\hbar}\right)\psi = 0.$
Setting
$(A,B,C) = i\beta \alpha_k\,,D = \beta\,,$
we get the Dirac equation as written above.
### Covariant form and relativistic invariance
To demonstrate the relativistic invariance of the equation, it is advantageous to cast it into a form in which the space and time derivatives appear on an equal footing. New matrices are introduced as follows:
$\gamma^0 = \beta \,$
$\gamma^k = \gamma^0 \alpha^k. \,$
and the equation takes the form
$i \hbar \gamma^\mu \partial_\mu \psi - m c \psi = 0 \,.$
In practice one often writes the gamma matrices in terms of 2 × 2 sub-matrices taken from the Pauli matrices and the 2 × 2 identity matrix. Explicitly the standard representation is
$\gamma^0 = \left(\begin{array}{cccc} I_2 & 0 \\ 0 & -I_2 \end{array}\right), \gamma^1 = \left(\begin{array}{cccc} 0 & \sigma_x \\ -\sigma_x & 0 \end{array}\right), \gamma^2 = \left(\begin{array}{cccc} 0 & \sigma_y \\ -\sigma_y & 0 \end{array}\right), \gamma^3 = \left(\begin{array}{cccc} 0 & \sigma_z \\ -\sigma_z & 0 \end{array}\right). \,$
The complete system is summarized using the Minkowski metric on spacetime in the form
$\{\gamma^\mu,\gamma^\nu\} = 2 g^{\mu\nu} \,$
where the bracket expression
$\{a, b\} = ab + ba$
denotes the anticommutator. These are the defining relations of a Clifford algebra over a pseudo-orthogonal 4-d space with metric signature (+ − − −). The specific Clifford algebra employed in the Dirac equation is known today as the Dirac algebra. Although not recognized as such by Dirac at the time the equation was formulated, in hindsight the introduction of this geometric algebra represents an enormous stride forward in the development of quantum theory.
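The Clifford relation can likewise be verified directly; a short numpy sketch (again assuming the standard representation) builds $\gamma^0 = \beta$, $\gamma^k = \beta\alpha_k$ and checks $\{\gamma^\mu, \gamma^\nu\} = 2g^{\mu\nu}$:

```python
# Verify the Clifford relation {gamma^mu, gamma^nu} = 2 g^{mu nu} I
# for the standard-representation gamma matrices.
import numpy as np

sx, sy, sz = (np.array(m) for m in ([[0, 1], [1, 0]],
                                    [[0, -1j], [1j, 0]],
                                    [[1, 0], [0, -1]]))
I2, Z2 = np.eye(2), np.zeros((2, 2))
beta = np.block([[I2, Z2], [Z2, -I2]])
gammas = [beta] + [beta @ np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]

g = np.diag([1, -1, -1, -1])  # Minkowski metric, signature (+ - - -)
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * g[mu, nu] * np.eye(4))
```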
The Dirac equation may now be interpreted as an eigenvalue equation, where the rest mass is proportional to an eigenvalue of the 4-momentum operator, the proportionality constant being the speed of light:
$P_\mathrm{op}\psi = mc\psi. \,$
Using ${\partial\!\!\!\big /}$ (pronounced: "d-slash"[3]) in Feynman slash notation, which includes the gamma matrices as well as a summation over the spinor components in the derivative itself, the Dirac equation becomes:
$i \hbar {\partial\!\!\!\big /} \psi - m c \psi = 0$
In practice, physicists often use units of measure such that ħ = c = 1, known as natural units. The equation then takes the simple form
$(i{\partial\!\!\!\big /} - m) \psi = 0\,$
In the limit $m \rightarrow 0$, the Dirac equation reduces to the Weyl equation, which describes massless spin-1/2 particles.[4]
A fundamental theorem states that if two distinct sets of matrices are given that both satisfy the Clifford relations, then they are connected to each other by a similarity transformation:
$\gamma^{\mu\prime} = S^{-1} \gamma^\mu S.$
If in addition the matrices are all unitary, as are the Dirac set, then S itself is unitary;
$\gamma^{\mu\prime} = U^\dagger \gamma^\mu U.$
The transformation U is unique up to a multiplicative factor of absolute value 1. Let us now imagine a Lorentz transformation to have been performed on the space and time coordinates, and on the derivative operators, which form a covariant vector. For the operator $\gamma^\mu\partial_\mu$ to remain invariant, the gammas must transform among themselves as a contravariant vector with respect to their spacetime index. These new gammas will themselves satisfy the Clifford relations, because of the orthogonality of the Lorentz transformation. By the fundamental theorem, we may replace the new set by the old set subject to a unitary transformation. In the new frame, remembering that the rest mass is a relativistic scalar, the Dirac equation will then take the form
$( iU^\dagger \gamma^\mu U\partial_\mu^\prime - m)\psi(x^\prime,t^\prime) = 0$
$U^\dagger(i\gamma^\mu\partial_\mu^\prime - m)U \psi(x^\prime,t^\prime) = 0.$
If we now define the transformed spinor
$\psi^\prime = U\psi$
then we have the transformed Dirac equation in a way that demonstrates manifest relativistic invariance:
$(i\gamma^\mu\partial_\mu^\prime - m)\psi^\prime(x^\prime,t^\prime) = 0.$
Thus, once we settle on any unitary representation of the gammas, it is final provided we transform the spinor according to the unitary transformation that corresponds to the given Lorentz transformation. The various representations of the Dirac matrices employed will bring into focus particular aspects of the physical content in the Dirac wave function (see below). The representation shown here is known as the standard representation - in it, the wave function's upper two components go over into Pauli's 2-spinor wave function in the limit of low energies and small velocities in comparison to light.
The considerations above reveal the origin of the gammas in geometry, hearkening back to Grassmann's original motivation - they represent a fixed basis of unit vectors in spacetime. Similarly, products of the gammas such as $\gamma^\mu\gamma^\nu$ represent oriented surface elements, and so on. With this in mind, we can find the form of the unit volume element on spacetime in terms of the gammas as follows. By definition, it is
$V = \frac{1}{4!}\epsilon_{\mu\nu\alpha\beta}\gamma^\mu\gamma^\nu\gamma^\alpha\gamma^\beta.$
For this to be an invariant, the epsilon symbol must be a tensor, and so must contain a factor of √g, where g is the determinant of the metric tensor. Since this is negative, that factor is imaginary. Thus
$V = i \gamma^0\gamma^1\gamma^2\gamma^3.\$
This matrix is given the special symbol $\gamma_5$, owing to its importance when one is considering improper transformations of spacetime, that is, those that change the orientation of the basis vectors. In the standard representation it is
$\gamma_5 = \begin{pmatrix} 0 & I_{2} \\ I_{2} & 0 \end{pmatrix}.$
This matrix will also be found to anticommute with the other four Dirac matrices. It takes a leading role when questions of parity arise, because the volume element as a directed magnitude changes sign under a space-time reflection. Taking the positive square root above thus amounts to choosing a handedness convention on space-time.
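A companion sketch (same standard representation as above) constructs $\gamma_5 = i\gamma^0\gamma^1\gamma^2\gamma^3$ and confirms both its block form and its anticommutation with the four gammas:

```python
# Build gamma_5 = i * gamma^0 gamma^1 gamma^2 gamma^3 and check its properties.
import numpy as np

sx, sy, sz = (np.array(m) for m in ([[0, 1], [1, 0]],
                                    [[0, -1j], [1j, 0]],
                                    [[1, 0], [0, -1]]))
I2, Z2 = np.eye(2), np.zeros((2, 2))
beta = np.block([[I2, Z2], [Z2, -I2]])
g0, g1, g2, g3 = [beta] + [beta @ np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]

gamma5 = 1j * g0 @ g1 @ g2 @ g3
assert np.allclose(gamma5, np.block([[Z2, I2], [I2, Z2]]))  # off-diagonal identity blocks
for gm in (g0, g1, g2, g3):
    assert np.allclose(gamma5 @ gm + gm @ gamma5, 0)        # anticommutes with each gamma
```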
### Conservation of probability current
By defining the adjoint spinor
$\bar{\psi} = \psi^\dagger\gamma^0$
where $\psi^\dagger$ is the conjugate transpose of $\psi$, and noticing that
$(\gamma^\mu)^\dagger\gamma^0 = \gamma^0\gamma^\mu \,$,
we obtain, by taking the Hermitian conjugate of the Dirac equation and multiplying from the right by γ0, the adjoint equation:
$\bar{\psi}(-i\gamma^\mu\partial_\mu - m) = 0 \,$
where $\partial_\mu$ is understood to act to the left. Multiplying the Dirac equation by $\bar{\psi}$ from the left, and the adjoint equation by $\psi$ from the right, and subtracting, produces the law of conservation of the Dirac current:
$\partial_\mu \left( \bar{\psi}\gamma^\mu\psi \right) = 0.$
Now we see the great advantage of the first-order equation over the one Schrödinger had tried - this is the conserved current density required by relativistic invariance, only now its 4th component is positive definite and thus suitable for the role of a probability density:
$J^0 = \bar{\psi}\gamma^0\psi = \psi^\dagger\psi.$
Because the probability density now appears as the fourth component of a relativistic vector, and not a simple scalar as in the Schrödinger equation, it will be subject to the usual effects of the Lorentz transformations such as time dilation. Thus for example atomic processes that are observed as rates, will necessarily be adjusted in a way consistent with relativity, while those involving the measurement of energy and momentum, which themselves form a relativistic vector, will undergo parallel adjustment which preserves the relativistic covariance of the observed values.
### Solutions
See Dirac spinor for details of solutions to the Dirac equation. The fact that the energies of the solutions do not have a lower bound is unexpected - see the hole theory section below for more details.
### Comparison with the Pauli theory
See also: Pauli equation
The necessity of introducing half-integral spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong inhomogeneous magnetic field, which then splits into N parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two—the ground state therefore could not be integral, because even if the intrinsic angular momentum of the atoms were as small as possible, 1, the beam would be split into three parts, corresponding to atoms with $L_z = -1, 0, +1$. The conclusion is that silver atoms have net intrinsic angular momentum of 1⁄2. Pauli set up a theory which explained this splitting by introducing a two-component wave function and a corresponding correction term in the Hamiltonian, representing a semi-classical coupling of this wave function to an applied magnetic field, as so:
$H = \frac{1}{2m}\left(\sigma\cdot\left(p - \frac{e}{c}A\right)\right)^2 + e\phi.$
Here A and φ represent the components of the electromagnetic four-potential, and the three sigmas are the Pauli matrices. On squaring out the first term, a residual interaction with the magnetic field is found, along with the usual classical Hamiltonian of a charged particle interacting with an applied field:
$H = \frac{1}{2m}\left(p - \frac{e}{c}A\right)^2 + e\phi - \frac{e\hbar}{2mc}\sigma\cdot B.$
This Hamiltonian is now a 2 × 2 matrix, so the Schrödinger equation based on it must use a two-component wave function. Pauli had introduced the 2 × 2 sigma matrices as pure phenomenology— Dirac now had a theoretical argument that implied that spin was somehow the consequence of the marriage of quantum mechanics to relativity. On introducing the external electromagnetic 4-vector potential into the Dirac equation in a similar way, known as minimal coupling, it takes the form (in natural units)
$(i\gamma^\mu(\partial_\mu + ieA_\mu) - m) \psi = 0\,$
A second application of the Dirac operator will now reproduce the Pauli term exactly as before, because the spatial Dirac matrices multiplied by i, have the same squaring and commutation properties as the Pauli matrices. What is more, the value of the gyromagnetic ratio of the electron, standing in front of Pauli's new term, is explained from first principles. This was a major achievement of the Dirac equation and gave physicists great faith in its overall correctness. There is more however. The Pauli theory may be seen as the low energy limit of the Dirac theory in the following manner. First the equation is written in the form of coupled equations for 2-spinors with the units restored:
$\begin{pmatrix} (mc^2 - E + e \phi) & c\sigma\cdot \left(p - \frac{e}{c}A\right) \\ -c\sigma\cdot \left(p - \frac{e}{c}A\right) & \left(mc^2 + E - e \phi\right) \end{pmatrix} \begin{pmatrix} \psi_+ \\ \psi_- \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$
so
$(E - e\phi) \psi_+ - c\sigma\cdot \left(p - \frac{e}{c}A\right) \psi_- = mc^2 \psi_+$
$-(E - e\phi) \psi_- + c\sigma\cdot \left(p - \frac{e}{c}A\right) \psi_+ = mc^2 \psi_-$
Assuming the field is weak and the motion of the electron non-relativistic, we have the total energy of the electron approximately equal to its rest energy, and the momentum going over to the classical value,
$E - e\phi \approx mc^2$
$p \approx m v$
and so the second equation may be written
$\psi_- \approx \frac{1}{2mc} \sigma\cdot \left(p - \frac{e}{c}A\right) \psi_+$
which is of order v/c - thus at typical energies and velocities, the bottom components of the Dirac spinor in the standard representation are much suppressed in comparison to the top components. Substituting this expression into the first equation gives after some rearrangement
$(E - mc^2) \psi_+ = \frac{1}{2m} \left[\sigma\cdot \left(p - \frac{e}{c}A\right)\right]^2 \psi_+ + e\phi \psi_+$
The operator on the left represents the particle energy reduced by its rest energy, which is just the classical energy, so we recover Pauli's theory if we identify his 2-spinor with the top components of the Dirac spinor in the non-relativistic approximation. A further approximation gives the Schrödinger equation as the limit of the Pauli theory. Thus the Schrödinger equation may be seen as the far non-relativistic approximation of the Dirac equation when one may neglect spin and work only at low energies and velocities. This also was a great triumph for the new equation, as it traced the mysterious i that appears in it, and the necessity of a complex wave function, back to the geometry of space-time through the Dirac algebra. It also highlights why the Schrödinger equation, although superficially in the form of a diffusion equation, actually represents the propagation of waves.
It should be strongly emphasized that this separation of the Dirac spinor into large and small components depends explicitly on a low-energy approximation. The entire Dirac spinor represents an irreducible whole, and the components we have just neglected to arrive at the Pauli theory will bring in new phenomena in the relativistic regime - antimatter and the idea of creation and annihilation of particles.
### As a differential equation in one real component
In a general case (if a certain linear function of electromagnetic field does not vanish identically), three out of four components of the spinor function in the Dirac equation can be algebraically eliminated, yielding an equivalent fourth-order partial differential equation for just one component. Furthermore, this remaining component can be made real by a gauge transform.[5]
## Physical interpretation
The Dirac theory, while providing a wealth of information that is accurately confirmed by experiments, nevertheless introduces a new physical paradigm that appears at first difficult to interpret and even paradoxical. Some of these issues of interpretation must be regarded as open questions.
### Identification of observables
The critical physical question in a quantum theory is—what are the physically observable quantities defined by the theory? According to general principles, such quantities are defined by Hermitian operators that act on the Hilbert space of possible states of a system. The eigenvalues of these operators are then the possible results of measuring the corresponding physical quantity. In the Schrödinger theory, the simplest such object is the overall Hamiltonian, which represents the total energy of the system. If we wish to maintain this interpretation on passing to the Dirac theory, we must take the Hamiltonian to be
$H = \gamma^0 \left[mc^2 + c \sum_{k = 1}^3 \gamma^k \left(p_k-\frac{q}{c}A_k\right) \right] + qA^0.$
This looks promising, because we see by inspection the rest energy of the particle and, in case A = 0, the energy of a charge placed in an electric potential qA0. What about the term involving the vector potential? In classical electrodynamics, the energy of a charge moving in an applied potential is
$H = c\sqrt{\left(p - \frac{q}{c}A\right)^2 + m^2c^2} + qA^0.$
Thus the Dirac Hamiltonian is fundamentally distinguished from its classical counterpart, and we must take great care to correctly identify what is an observable in this theory. Much of the apparent paradoxical behaviour implied by the Dirac equation amounts to a misidentification of these observables.
### Hole theory
The negative E solutions to the equation are problematic, for it was assumed that the particle has a positive energy. Mathematically speaking, however, there seems to be no reason for us to reject the negative-energy solutions. Since they exist, we cannot simply ignore them, for once we include the interaction between the electron and the electromagnetic field, any electron placed in a positive-energy eigenstate would decay into negative-energy eigenstates of successively lower energy by emitting excess energy in the form of photons. Real electrons obviously do not behave in this way.
To cope with this problem, Dirac introduced the hypothesis, known as hole theory, that the vacuum is the many-body quantum state in which all the negative-energy electron eigenstates are occupied. This description of the vacuum as a "sea" of electrons is called the Dirac sea. Since the Pauli exclusion principle forbids electrons from occupying the same state, any additional electron would be forced to occupy a positive-energy eigenstate, and positive-energy electrons would be forbidden from decaying into negative-energy eigenstates.
If an electron is forbidden from simultaneously occupying positive-energy and negative-energy eigenstates, the feature known as Zitterbewegung, which arises from the interference of positive-energy and negative-energy states, would have to be considered to be an unphysical prediction of time-dependent Dirac theory. This conclusion may be inferred from the explanation of hole theory given in the preceding paragraph. Recent results have been published in Nature [R. Gerritsma, G. Kirchmair, F. Zaehringer, E. Solano, R. Blatt, and C. Roos, Nature 463, 68-71 (2010)] in which the Zitterbewegung feature was simulated in a trapped-ion experiment. This experiment impacts the hole interpretation if one infers that the physics-laboratory experiment is not merely a check on the mathematical correctness of a Dirac-equation solution but the measurement of a real effect whose detectability in electron physics is still beyond reach.
Dirac further reasoned that if the negative-energy eigenstates are incompletely filled, each unoccupied eigenstate – called a hole – would behave like a positively charged particle. The hole possesses a positive energy, since energy is required to create a particle–hole pair from the vacuum. As noted above, Dirac initially thought that the hole might be the proton, but Hermann Weyl pointed out that the hole should behave as if it had the same mass as an electron, whereas the proton is over 1800 times heavier. The hole was eventually identified as the positron, experimentally discovered by Carl Anderson in 1932.
It is not entirely satisfactory to describe the "vacuum" using an infinite sea of negative-energy electrons. The infinitely negative contributions from the sea of negative-energy electrons has to be canceled by an infinite positive "bare" energy and the contribution to the charge density and current coming from the sea of negative-energy electrons is exactly canceled by an infinite positive "jellium" background so that the net electric charge density of the vacuum is zero. In quantum field theory, a Bogoliubov transformation on the creation and annihilation operators (turning an occupied negative-energy electron state into an unoccupied positive energy positron state and an unoccupied negative-energy electron state into an occupied positive energy positron state) allows us to bypass the Dirac sea formalism even though, formally, it is equivalent to it.
In certain applications of condensed matter physics, however, the underlying concepts of "hole theory" are valid. The sea of conduction electrons in an electrical conductor, called a Fermi sea, contains electrons with energies up to the chemical potential of the system. An unfilled state in the Fermi sea behaves like a positively-charged electron, though it is referred to as a "hole" rather than a "positron". The negative charge of the Fermi sea is balanced by the positively-charged ionic lattice of the material.
### In quantum field theory
See also: Fermionic field
In quantum field theories such as quantum electrodynamics, the Dirac field is subject to a process of second quantization, which resolves some of the paradoxical features of the equation.
## See also
• The Dirac Equation appears on the floor of Westminster Abbey. It appears on the plaque commemorating Paul Dirac's life which was inaugurated on November 13, 1995.[6]
## References
1. P.W. Atkins (1974). Quanta: A handbook of concepts. Oxford University Press. p. 52. ISBN 0-19-855493-1.
2. T. Hey, P. Walters (2009). The New Quantum Universe. Cambridge University Press. p. 228. ISBN 978-0-521-56457-1.
3. Tommy Ohlsson (22 September 2011). Relativistic Quantum Physics: From Advanced Quantum Mechanics to Introductory Quantum Field Theory. Cambridge University Press. p. 86. ISBN 978-1-139-50432-4. Retrieved 17 March 2013.
4. Akhmeteli, Andrey (2011). "One real function instead of the Dirac spinor function". Journal of Mathematical Physics 52 (8): 082303. doi:10.1063/1.3624336.
### Selected papers
• Dirac, P. A. M. (1928). "The Quantum Theory of the Electron". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 117 (778): 610. doi:10.1098/rspa.1928.0023.
• Dirac, P. A. M. (1930). "A Theory of Electrons and Protons". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 126 (801): 360. doi:10.1098/rspa.1930.0013. JSTOR 95359.
• Anderson, Carl (1933). "The Positive Electron". Physical Review 43 (6): 491. doi:10.1103/PhysRev.43.491.
• Frisch, R.; Stern, O. (1933). "Über die magnetische Ablenkung von Wasserstoffmolekülen und das magnetische Moment des Protons. I". Zeitschrift für Physik 85: 4. doi:10.1007/BF01330773.
• M. Arminjon, F. Reifler (2012). "Equivalent forms of Dirac equations in curved spacetimes and generalized de Broglie relations". Grenoble (France), New Jersey (USA). arXiv:1103.3201v3.
### Textbooks
• Halzen, Francis; Martin, Alan (1984). Quarks & Leptons: An Introductory Course in Modern Particle Physics. John Wiley & Sons.
• Dirac, P.A.M., Principles of Quantum Mechanics, 4th edition (Clarendon, 1982)
• Shankar, R., Principles of Quantum Mechanics, 2nd edition (Plenum, 1994)
• Bjorken, J D & Drell, S, Relativistic Quantum mechanics
• Thaller, B., The Dirac Equation, Texts and Monographs in Physics (Springer, 1992)
• Schiff, L.I., Quantum Mechanics, 3rd edition (McGraw-Hill, 1968)
• Griffiths, D.J., Introduction to Elementary Particles, 2nd edition (Wiley-VCH, 2008) ISBN 978-3-527-40601-2.
Content in this section is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Dirac equation", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Dirac_equation
http://logiciansdoitwithmodels.com/2010/10/
# Logicians do it with Models
A self-guided jaunt in philosophical logic
## Archive for October, 2010
### Clarity In Talking About The Status Of Attributes
October 13, 2010
When the ontological status of a property or attribute is in question, the central issue is the ontological status of an object's possessing or exemplifying the property. When the ideological status of a property or attribute is in question, the central issue is the kind of predicates that express the property. The following example brings this out starkly: a property can be mental but not physical ideologically, and yet be physical but not mental ontologically.
If a mental predicate $\textup{A}(x) \in \Psi$ is not lawlike coextensive with a physical predicate $\textup{B}(x) \in \Phi$, where $\Psi$ and $\Phi$ are appropriate vocabularies, the attribute expressed by $\textup{A}(x)$ is mental, but not physical. Nevertheless, by the principle of physical exhaustion (PE), the attribute is physical, but not mental (ontologically), as it is possessed solely by physical objects.
Entity-wise, every attribute is mathematical-physical, but only those attributes expressible by mathematical-physical predicates are mathematical-physical attributes. So, if physical reductionism is false (which is likely the case), there are attributes which are mathematical-physical entities but are not mathematical-physical attributes.
The confusion arises when terms like “mental” or “material” are used to call out properties or attributes without clearly setting out whether one is talking about the ontological or ideological status of the attribute. This is an important point by Hellman, and we will see how this distinction and physicalist materialism in general are useful in clarifying questions of reductionism and materialism in the philosophy of mind when we turn to an analysis of Chalmers’ anti-materialist claims.
### The Ontological and Ideological Status of Attributes
October 12, 2010
With the principle of Physical Exhaustion (PE: $(\forall x)(\exists \alpha)(x \in \textup{R}(\alpha))$, where $\textup{R}(\alpha)$ is a rank in the hierarchy of the physical), which allows us to say that everything is exhausted by the physical, every attribute is mathematical-physical (i.e., existing somewhere, possibly high up, on the set-theoretic hierarchy). Thus the ontological status of attributes is mathematical-physical. Now, for a given vocabulary $\psi$, and for any attribute expressed by a predicate that makes essential use of members of $\psi$, call that attribute a $\psi$-attribute. So, for example, attributes expressed by physical predicates are physical attributes and those expressed by psychological predicates are psychological. This is their ideological status.
The ontological status of a thing has to do with which extensions of predicates (of ontological kind) the thing falls under. The most encompassing of such predicates is "is mathematical-physical". But there are other, narrower ways to distinguish the ontological status of things. For example, some metaphysical predicates, "is abstract", "is concrete", or some scientific ones, "is an elementary particle", "is an event", "is a person", "is a social process", "is a physical magnitude", etc. Some other ontological-kind predicates have empty extensions: "is a soul", "is a phenomenally raw feel not identifiable with any entity in the hierarchy of the physical". But if something were to satisfy these predicates, the predicates would indicate the ontological kind of these things. So the important semantic relationship here is satisfaction of ontological-kind predicates.
This is different from the case of the ideological status of attributes, where the important semantic relationship is expression of an attribute by a predicate – i.e., the relationship between the argument and value of a universalizing function. Every entity has an ontological status, but only universals have an ideological status, determined by the types of predicates for which the universal under consideration is the value of the universalizing function.
So the ideological status of an attribute is given by the predicates that express it. These predicates themselves can be classified, for instance, according to scientific discipline: psychological predicates, physical predicates, economic/sociological predicates, etc. These classifications are historical and are subject to change due to a variety of factors – not all of them scientific. Hellman explains that attributes themselves may have more than one ideological status: if the predicate 'is in pain at t' is coextensive in a law-like fashion with a complex physical predicate, then 'being in pain' is both a psychological and a physical attribute, since it is expressed by both psychological and physical predicates.
In the next update I'll go into Hellman's discussion of the confusion that results when the ideological and ontological status of attributes is not clearly stated in debates concerning materialism and the mental.
### Introduction to Hellman's Treatment of Universals
October 12, 2010
Universals, for our purposes, are sets, predicates, properties, relations and attributes –the same universal can instantiate or subsume numerically distinct things. Among universals we distinguish between those that are extensional, like sets and predicates and those that are intensional, like properties, relations and attributes.
Roughly, the distinction between these two types of universals is the following. Those that are extensional obey the principle that equivalence implies identity, while those that are intensional are in violation of this principle. In cases of non-deformity, the property of being a creature with a kidney is extensionally the same as the property of being a creature with a heart insofar as the same things have each, but the properties are different. Sets and predicates, on the other hand, do satisfy the principle of extensional equivalence: the set of creatures with a kidney and the set of creatures with a heart have the same members, and are thus identical.
Now, universals in general are taken to fall within the range of functions from the set $\textup{P}$ of predicates of a language $\textup{L}$ to the universals they express, $f: \textup{P} \to \textup{U}$, where $\textup{U}$ is the set of universals of $\textup{L}$. Hellman explains that among such functions there are some that assign universals with much discrimination: two predicates are mapped to the same universal only if the predicates are identical. And others do so with less discrimination: coextensionality is enough. Hellman imagines a partial ordering of universals based on discrimination criteria, with predicates being among the most discriminating and extensions among the least.
Since Hellman is concerned with properties and relations used in science, the first thing we need to do is set up identity criteria for these attributes. For example, we want to say that temperature is the very same magnitude as mean molecular kinetic energy. So two predicates $\textup{F}$ and $\textup{G}$ are mapped to the same universal $u$ just in case it is a scientific necessity that these two predicates apply to the same things. In other words, $\textup{F}$ and $\textup{G}$ express the same attribute just in case in every model in $\alpha$, $\textup{F}$ and $\textup{G}$ have the same interpretation. Remember that $\alpha$ is the set of structures modeling scientific possibility.
So, how do we express attributes given this way of thinking about them? Hellman suggests using the technique from modal logic, such that the function from members of $\alpha$ to the extensions assigned to that predicate by each model in $\alpha$ is identified with the attribute expressed by that predicate. In this way, two predicates express the same attribute in case they do so in every structure representing a scientific possibility; thereby getting the desired necessity.
If we need to represent universals that discriminate differently than scientific attributes, we can use sets or collections of structures different from $\alpha$ insofar as the universalizing function is no more discriminating than those functions that assign predicates to the same universal in case the predicates are logically equivalent.
For these universals, the identity conditions are the following: functions (i.e., attributes) are identical just in case they assign the same arguments (i.e., members of $\alpha$) to the same values (i.e., extensions of predicates). Or simply say that two predicates $\textup{F}$ and $\textup{G}$ express the same attribute if, and only if, $\forall (x_{1}, \dots, x_{n})(\textup{F}x_{1}, \dots, x_{n} \leftrightarrow \textup{G}x_{1}, \dots, x_{n})$ is a law of science.
This last bit will be what links the discussion of attributes to that of reducibility –since we need to make use of explicit definability.
In the next set of notes I’ll go into the difference that Hellman notes between the ideological and ontological status of universals.
http://math.stackexchange.com/questions/115520/simplifying-an-expression
# Simplifying an expression
The following expression is given: $$\frac{x^7+y^7+z^7}{xyz(x^4+y^4+z^4)}$$
Simplify it, knowing that $x+y+z=0$.
-
Use that $z=-(x+y)$ and substitute and suffer for a bit. I think you should get a nice answer :) – Daniel Montealegre Mar 2 '12 at 2:04
@Victor: Diophantine equations? There is no equation and $x,y,z$ can be real/complex. contest-math? How do you know? I am removing those tags. – Aryabhata Mar 2 '12 at 5:04
## 4 Answers
Use that $z=-(x+y)$, so you have that the numerator turns out to be $$x^7+y^7-(x+y)^7=-7x^6y-21x^5y^2-35x^4y^3-35x^3y^4-21x^2y^5-7xy^6$$Also, the denominator turns out to be $$xy(-x-y)(x^4+y^4+(-x-y)^4)=-2x^6y-6x^5y^2-10x^4y^3-10x^3y^4-6x^2y^5-2xy^6$$You can factor a $7$ from the numerator and a $2$ from the denominator, and the answer turns out to be $$\frac{7}{2}$$
-
Thanks a lot, I was trying to set $a, b, c$ as the roots of a third-degree polynomial, but that'd be much harder. – Felipe Mar 2 '12 at 2:21
Note: Using Newton's identities, we can calculate the below expressions more easily, following the easy recursive definition.
Your idea of writing them as roots of a third-degree polynomial works, I believe, but requires some work; we show that here:
Let $\displaystyle x,y,z$ be roots of $\displaystyle t^3 + at - b = 0$. We have that $\displaystyle a = xy+yz+zx$ and $\displaystyle b = xyz$.
Since $\displaystyle t^3 = b - at$, multiply by $\displaystyle t^4$ we get $\displaystyle t^7= bt^4 - a t^5$.
Setting $\displaystyle t=x,y,z$ in turn and adding gives us
$\displaystyle x^7 + y^7 + z^7 = b(x^4 + y^4 + z^4) - a(x^5 + y^5 + z^5)$
Similar to above, we get $\displaystyle t^5 = bt^2 - a t^3$, setting $\displaystyle t=x,y,z$ and adding gives us
$\displaystyle x^5 + y^5 + z^5 = b(x^2 + y^2 + z^2) - a(x^3 + y^3 + z^3)$.
Similarly we get
$\displaystyle x^3 + y^3 + z^3 = 3b$
We also have $\displaystyle (x+y+z)^2 = 0$, giving us
$\displaystyle x^2 + y^2 + z^2 = -2a$.
Thus
$\displaystyle x^5 + y^5 + z^5 = -2ab - 3ab = -5ab$.
Now $\displaystyle t^4 = bt - at^2$ and in a similar fashion we get
$\displaystyle x^4 + y^4 + z^4 = -a(x^2+y^2 +z^2) = 2a^2$
Thus $\displaystyle xyz(x^4 + y^4 + z^4) = 2a^2 b$
and $\displaystyle x^7 + y^7 + z^7 = 2a^2b - (-5a^2b) = 7a^2 b$.
Thus the given expression is $\displaystyle \frac{7}{2}$.
This approach can be used to generate identities.
For instance, show that
$\displaystyle 10(x^7 + y^7 + z^7) = 7(x^2 + y^2 + z^2)(x^5 + y^5 + z^5)$
-
Exploit the innate symmetry! Using Newton's identities to rewrite the power sums as elementary symmetric functions is very simple because $\rm\:e_1 = x+y+z = 0\:$ kills many terms.
Write $\rm\ \ c = e_2 = xy + yz + zx,\ \ \ d = e_3 = xyz,\ \ \ p_k =\: x^k + y^k + z^k.$
$\rm\qquad\qquad p_1\ =\ e_1 = 0$
$\rm\qquad\qquad p_2\ = {-}2\: c$
$\rm\qquad\qquad p_3\ =\ \ \ 3\: d$
$\rm\qquad\qquad p_4\ = - c\: p_2\ =\ 2\: c^2$
$\rm\qquad\qquad p_5\ = -c\: p_3 +\: d\: p_2\ = {-}5\: c\: d$
$\rm\qquad\qquad p_7\ = -c\: p_5 +\: d\: p_4\ =\ 7\: c^2 d$
Hence $\rm\displaystyle\ \frac{p_7}{p_4 d}\: =\: \frac{7\: c^2\: d}{2\: c^2\: d}\: =\: \frac{7}{2}$
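Since $e_1 = 0$ kills the leading term, Newton's identities reduce to the recursion $p_k = -c\,p_{k-2} + d\,p_{k-3}$ for $k > 3$, which is easy to run mechanically. A sketch of this (my addition, assuming sympy):

```python
# Power sums p_k of x, y, z with e1 = x+y+z = 0, c = e2, d = e3.
# Newton's identities give p_k = -c*p_{k-2} + d*p_{k-3} for k > 3.
from sympy import symbols, expand

c, d = symbols('c d')
p = {1: 0, 2: -2*c, 3: 3*d}            # base cases
for k in range(4, 8):
    p[k] = expand(-c*p[k - 2] + d*p[k - 3])

print(p[4], p[5], p[7])                 # 2*c**2, -5*c*d, 7*c**2*d
print((p[7] / (d * p[4])).simplify())   # 7/2
```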
-
I did mention that in my answer, but +1 for showing it here. – Aryabhata Mar 2 '12 at 5:45
@Aryabhata I hadn't refreshed the page, so I didn't see your latest edit. In any case, it is probably helpful to some readers to see the details. – Gone Mar 2 '12 at 7:23
Newton's identities is indeed very helpful – Lorenz Chaos Mar 2 '12 at 9:45
First $(x^3 + y^3 +z^3)(x+y+z) = x^4 + y^4 + z^4 + xy^3 + yx^3 + xz^3 +zx^3 +yz^3+y^3z = 0$ which means
$$x^4 + y^4 + z^4 = -xy(x^2+y^2) - yz(y^2+z^2) -zx (z^2 + x^2) \ \ \text{(1)}$$
$x+y+z=0$ also implies $x^2+y^2=z^2-2xy$; substituting all the sums of squares, we have $x^4 + y^4 + z^4 = -xy(z^2-2xy) - yz(x^2-2yz)-zx(y^2-2zx)=-xyz(x+y+z)+2(x^2y^2+y^2z^2+z^2x^2)$
So $$x^4 + y^4 + z^4=2(x^2y^2+y^2z^2+z^2x^2) \ \ \text{(2)}$$
Next consider $x^3+y^3-(x+y)^3 = -3xy(x+y)$ (this is a basic identity) So we have $x^3 + y^3 +z^3 = 3xyz$ for $x+y+z = 0$
$3xyz(x^4+y^4+z^4)=(x^3 + y^3 +z^3)(x^4 + y^4 +z^4)=x^7+y^7+z^7+ x^4y^3 + y^4x^3 + x^4z^3 +z^4x^3 +y^4z^3+y^3z^4 = x^7+y^7+z^7 +x^3y^3(x+y)+y^3z^3(y+z)+z^3x^3(z+x)$
Substituting $x+y=-z$ (and its cyclic versions) and then using $\text{(2)}$,
$$3xyz(x^4+y^4+z^4) =x^7+y^7+z^7-xyz(x^4+y^4+z^4)/2$$
Therefore the fraction is $\frac{7}{2}$
Wow, it wasn't this long in my head.
-
|
http://physics.stackexchange.com/questions/52983/why-is-the-bcs-trial-function-valid-across-the-bec-bcs-crossover
|
# Why is the BCS trial function valid across the BEC-BCS crossover?
In one of the two main theoretical approaches used in describing ultracold Fermi gases and the BEC-BCS crossover, the so-called BCS-Leggett approach, the starting point is the BCS trial wavefunction:
$$\mid BCS \rangle \equiv \prod_{\mathbf{k}} \left( u_{\mathbf{k}} + v_{\mathbf{k}} P^\dagger_{\mathbf{k}} \right) \mid 0 \rangle$$
where the $P^\dagger_{\mathbf{k}}$ operator creates a Cooper pair. It is often asserted that this wavefunction, which may seem tailored for a BCS-like problem, has far greater validity and can also be successfully exploited in describing the BEC-BCS crossover (see for instance: http://arxiv.org/abs/cond-mat/0404274).
Even looking at the original articles by Leggett and Eagles (cited in the reference above) I cannot see why $\mid BCS \rangle$ should be valid in the BEC regime: I am looking for a review article (or even a textbook) addressing this issue.
-
## 1 Answer
You can gain some intuition from looking at the density distribution function in momentum space which for the $|BCS\rangle$ is given by $n_k=v^{2}_k$. In the BCS limit one finds approximately the filled Fermi sphere, while in the BEC limit $n_k\sim 1/(1+[ka]^2)^2$ which is proportional to the square of the Fourier transform of the dimer wave function. For this reason in the BEC limit the state $|BCS\rangle$ describes a condensate of dimers. You can find a little bit more about this question in http://arxiv.org/abs/0706.3360
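To make the last claim concrete: in the BEC limit the standard zero-range dimer wave function is $\psi(r)\propto e^{-r/a}/r$, whose 3D Fourier transform is $\propto 1/(1+[ka]^2)$, and squaring gives the quoted $n_k$. A small symbolic check (my own sketch, not from the answer, assuming sympy):

```python
# Check that the 3D Fourier transform of psi(r) ~ exp(-r/a)/r is
# ~ 1/(1 + (k*a)^2).  For a spherically symmetric f(r), the transform is
# (4*pi/k) * Integral r*f(r)*sin(k*r) dr, so for f = exp(-r/a)/r we need:
from sympy import symbols, integrate, sin, exp, oo, simplify

r, k, a = symbols('r k a', positive=True)
psi_k = integrate(exp(-r/a) * sin(k*r), (r, 0, oo))

# psi_k * (1 + (k*a)**2) simplifies to a**2*k, i.e. psi_k ~ 1/(1 + (k*a)^2)
print(simplify(psi_k * (1 + (k*a)**2)))
```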
-
|
http://physics.stackexchange.com/questions/31395/what-happens-to-matter-in-a-standard-model-with-zero-higgs-vev/31440
|
# What happens to matter in a standard model with zero Higgs VEV?
Suppose you reset the parameters of the standard model so that the Higgs field average value is zero in the vacuum, what would happen to standard matter?
If the fundamental fermions go from a finite to a zero rest mass, I'm pretty sure that the electrons would fly away from nuclei at the speed of light, leaving positively charged nuclei trying to get away from each other. Looking at the solution for the Hydrogen atom, I don't see how it would be possible to have atoms with zero rest mass electrons.
What happens to protons and neutrons? Since only a very small part of the mass of protons and neutrons is the rest mass of the quarks, and since they're flying around in there at relativistic speeds already, and since the nuclear force is so much stronger than the electrical force with an incredible aversion to naked color, would protons and neutrons remain bound assemblages of quarks and virtual gluons? Would they get a little larger? A little less massive?
What would happen to nuclei? Would they stay together? If the protons and neutrons hold together and their properties change only some, then I might expect the same of nuclei. Different stable isotopes, different sizes, and different masses, but I would expect there would still be nuclei.
Also the W and Z particles go to zero rest mass. What does that do to the electroweak interactions? Does that affect normal stable matter (outside of nuclear decay modes)? Is the weak force no longer weak? What happens to the forces overall?
-
This is a good and interesting question and I would strongly disagree with this being closed. Many serious theoretical particle physicists study what the "world" would look like if symmetries and laws of nature, not only the electroweak symmetry, were changed. These are valid theoretical investigations from which many things about our actual laws of nature can be learned too. So please leave this open! – Dilaton Dec 29 '12 at 17:08
## 2 Answers
The analysis of the phase structure of gauge theories is a whole field. Some major breakthroughs were the 't Hooft anomaly matching conditions, the Banks-Zaks theories, Seiberg duality, and Seiberg-Witten theory. There is a lot of controversy here, because we don't have experiment or simulation data for most of the space, and there is much more unknown than known.
The first thing to note is that when the Higgs field vacuum expectation value is zero, the Higgs doesn't touch the low energy physics. You can ignore the Higgs at energy scales lower than its mass, and if this mass is much greater than the proton mass, the result is indistinguishable qualitatively from the Higgsless standard model. So I'll describe the Higgsless standard model.
### Higgsless standard model
Even without the Higgs, electroweak symmetry is broken anyway by QCD condensates. When the Higgs VEV is zero, the W and Z do not become completely massless, although they become much much lighter.
The reason is that QCD has a nontrivial vacuum, where quark-antiquark pairs form a q-qbar scalar fluid that breaks the chiral symmetry of the quark fields spontaneously. This phenomenon is robust to the number of light quark flavors, assuming that there aren't so many that you deconfine QCD. QCD is still asymptotically free with 6 flavors, so it should be confining even with 6 flavors of quarks, and I have no compunctions about assuming the confinement mechanism still works with 6 flavors, all 6 now being like the up and down quark. Assuming the qualitative vacuum structure is analogous to QCD is plausible and consistent with the anomaly conditions, but if someone were to say "no, the vacuum structure of QCD with 6 light quarks is radically different from the vacuum structure of QCD", I wouldn't know that this is wrong with certainty, although it would be strange.
Anyway, assuming that QCD with 6 light quarks produces the same sorts of condensates as QCD with 3 light quarks (actually 2 light quarks and a semi-light strange quark), the vacuum will be filled with a fluid which breaks SU(6)xSU(6) chiral rotations of quark fields into the diagonal SU(6) subgroup. The SU(6) is exact in the strong interactions and mass terms, it is only broken by electroweak interactions.
The electroweak interactions are entirely symmetric between the 3 families, so there is a completely exact SU(3) unbroken to all orders. The SU(6)xSU(6) breaking makes a collection of massless Goldstone bosons, massless pions. The number of massless pions is the number of generators of SU(6), which is 35. Of these, 8 are exactly massless, while the rest get small masses from electroweak interactions (but 3 of the remaining 27 go away into W's and Z's by Higgs mechanism, see below). The 8 massless scalars give long-range nuclear forces, which are an attractive inverse square force between nuclei, in addition to gravity.
The hadrons are all nearly exactly symmetric under flavor SU(6) isospin, and exactly symmetric under the SU(3) subgroup. All the strongly interacting particles fall into representations of SU(6) now, and the mass-breaking is by terms which are classified by the embedding of SU(3) into SU(6) defined by rotating pairs of coordinates together into each other.
The pions and the nucleons are stable: the pion stability is ensured by their being massless, and the nucleon stability by approximate baryon number conservation, at least for the lowest-energy SU(3) multiplet.
The condensate order-parameter involved in breaking the chiral SU(6) symmetry of the quarks is $\sum_i \bar{q}_i q_i$ for $q_i$ an indexed list of the quark fields u,d,c,s,t,b. The order parameter is just like a mass term for the quarks, and I have already diagonalized this order parameter to find the mass states. The important thing about this condensate is that the SU(2) gauge group acts only on the left-handed part of the quark fields, and the left-handed and right handed parts have different U(1) charge. So the condensate breaks the SU(2)xU(1) gauge symmetry.
The breaking preserves a certain unbroken U(1) subgroup, which you find by acting with the SU(2) and U(1) generators. The left handed quark field has charge 1/6 and makes a doublet, so for the combination $I_3+Y/2$, where I is the SU(2) generator and Y is the U(1) generator, you get a transformation of 2/3 and -1/3 on the top and bottom components, which is exactly the same as $I_3 + Y/2$ on the singlets (since they have no I). So this combination isn't chiral, and preserves the vacuum. So the QCD vacuum preserves the ordinary electromagnetic subgroup, which means it makes a Higgs, just like the real Higgs, which breaks the SU(2)xU(1) down to U(1) electromagnetic, with W and Z bosons just like in the standard model.
This is not really as much of a coincidence as it appears to be--- a large part of this is due to the fact that QCD condensates in our universe are not charged, so that they don't break electromagnetism, because u-bar and u have opposite electromagnetic charge transformation. This means that a u-bar u condensation leaves electromagnetism unbroken, and it isn't a surprise that it doesn't leave any of the rest of SU(2) and U(1) unbroken, because it's a chiral condensate, and these are chiral gauge transformations.
The major difference is that there are 3 separate Higgs-like condensates, one for each family, each with an identical VEV, all completely symmetric with each other under the global exact SU(3) family symmetry.
The W's and Z's get a mass from an arbitrary one of these 3, leaving 2 dynamical Higgs-like condensates. The main difference is that these scalar condensates don't necessarily have a simple distinguishable higgs-boson-like oscillation, unlike a fundamental scalar Higgs. The result of this is that the W's and Z's acquire QCD-scale masses, so around 100 MeV for the W's and Z's, as opposed to approximately 100 GeV in the real world. The ratio of the W and Z mass is exactly as in the standard model.
### Behavior of analogs of ordinary objects
The low energy spectrum of QCD is modified drastically, due to the large quark number. The 8 massless pions and 24 nearly massless pions (three of the pions are eaten by the W's and Z's to become part of the massive vectors) include all the quark-antiquark degrees of freedom that we call the pions, kaons, and certain heavy-quark mesons. There will still be a single heavy eta-prime from the instanton-violated chiral U(1) part of U(6)xU(6). There should be 35 rho particles splitting into 8 and 27, and 35 A particles splitting into 8 and 27, effectively gauging the flavor symmetry.
The 6 quarks could be thought of as getting a mass from their strong interaction with the Higgs-like condensates, of order some meVs, but since the mass of a quark is defined at short distances, from the propagator, it might be more correct to say the quarks are massless. Some of the particles you see in the data book, like the sigma(660) and the f0(980), should disappear (these are weird--- they might be the product of pion interactions making some extremely unstable bound states, something which wouldn't work with massless pions).
The electron and neutrino will be massless except for nonrenormalizable quark-lepton direct coupling, which would couple the electron to the Higgslike chiral quark condensate. This effect is dimension 6, so the compton wavelength of the electron will be comparable to the current radius of the visible universe. The neutrino mass will be even more strongly suppressed, so it might as well be exactly massless.
The massless electron will lead the electromagnetic coupling (the unHiggsed U(1) left over below the QCD scale) to logarithmically go to zero at large distances, from the log-running of QED screening. So electromagnetism, although it will be the same subgroup of SU(2) and U(1) as in the Higgsed standard model, will be much weaker at macroscopic distances than it is in our universe.
Nuclei should form as usual at short distances, although Isospin is now a nearly exact SU(6) symmetry broken only by electromagnetism, and not by quark mass, and with an exact SU(3) subgroup. So all nuclei come in SU(6) multiplets slightly split into SU(3) multiplets. The strong force will be longer ranged, and without the log-falloff of the electromagnetic force, because the pions quickly run to a free-field theory, since the pion self-interactions are of a sigma-model type. The pion interactions will look similar to gravity in a Newtonian approximation, but scalar mediated, so not obeying the equivalence principle, and disappearing in scattering at velocities comparable to the speed of light.
The combination of a long-ranged attractive nuclear force and a log-running screened electromagnetic force might give you nuclear bound galaxies, held at fixed densities by the residual slowly screened electrostatic repulsion. These galaxies will be penetrated by a cloud of massless electrons and positrons constantly pair-producing from the vacuum.
-
@MarkAdler: The pion is not a simple bound state of quarks, this is just wrong (but commonly misrepresented in the literature, and sold to the public). The pion is a Goldstone boson, it's like a sound mode of the quark fluid. The case where the quarks are massless, the symmetry becomes exact, and the pions exactly massless (except for electroweak interactions. And, oops, I just realized that I didn't subtract out the massless pions doing the Higgs mechanism out from the pions, there are 8 massless and 24 nearly massless pions, and 3 pions eaten to make W's and Z's at the QCD scale). – Ron Maimon Jul 6 '12 at 16:00
The massless pion is like saying a phonon in a solid is massless (meaning linear dispersion) even though the atoms are not massless. I don't see you asking "is a phonon are the atoms making the phonon squished really small?" The pion is a state where a coherent superposition of the condensed quarks are rotated in opposite directions chirally. BTW, I fixed the eaten pions, and yes, that's what I meant by nuclear bound galaxies. – Ron Maimon Jul 6 '12 at 16:12
@MarkAdler: You can't see the "little suckers" in a pion, that's just false. The pion is produced by acting "q-bar \gamma5 q" on the vacuum--- this is an operator which makes two quarks in the microscopic theory, and has a matrix element to the pion in the low-energy theory. The link between a two quark state and a pion state is real. The charge radius of a pion is how the electromagnetic field couples to a chiral shaking of the condensate, this can be sort of local, it doesn't contradict the goldstone property. The idea that the pion is made of two quarks is just an outright embarassment. – Ron Maimon Jul 6 '12 at 17:49
I should point out that if you translate a single atom, you have a matrix element with phonon states. This does not mean that a phonon is made from a single atom. Similarly, if you use a pair-operator on electrons, you make a BCS condensate excitation, but the BCS-pair is not exactly composed of two isolated electron excitations, it is a collective excitation. There is a training divide between condensed matter and high energy which made these types of things alien to high energy folks, and led them to dismiss vacuum structure, even though the evidence since Weinberg was overwhelming. – Ron Maimon Jul 6 '12 at 17:57
@MarkAdler: You are repeating wrong stuff people say: the deep inelastic regime is high energy, much larger than the mass of the pion, at least some GeV's. In this regime, you can't say you have a single pion, you make more. The deep inelastic probe (if done on a pion) will not see two point quarks, it will see a distribution of quarks, with a large unpredictable number of 10-100 MeV range quarks and gluons (wee partons) which you ignore. What you see is a high energy quark every once in a while carrying most of the energy/momentum of the pion. This is no information on how quarks make a pion. – Ron Maimon Jul 8 '12 at 6:52
The Higgs field has a non-zero average value. And because it does, many particles have mass, including the electron, quarks, and the W and Z particles of the weak interactions. If the Higgs field was zero, those particles would be massless or very light. That would be a disaster; atoms and atomic nuclei wouldn't exist. Nothing like human beings, or the earth we live on, could exist without the Higgs field having a non-zero average value.
But the particles in the universe would be more organized.
Let's put it under the microscope...
• Instead of the electromagnetic and weak nuclear force present in our world, with its non-zero-Higgs-field, a zero-Higgs-field world has these forces scrambled and rearranged. The rearranged forces are called hypercharge and isospin (for historical reasons; the names are just that, names, without other significance.)
• As part of this scrambling, the force carrier particles are changed; there are 3 W particles and an X particle, and the Z° and photon are missing. And the W and X particles are all massless now.
• The force carriers are now simpler in another sense. In our world the photon affects the W+ and W- particles directly, but the X particle does not directly affect any of the three W particles. The gluons affect themselves as before; the W's affect themselves too; but the X particle affects no force carriers at all.
Read more at paper by Matt Strassler
-
Thanks for the link! That answers a lot of questions (and raises many more ...). – Mark Adler Jul 6 '12 at 4:44
This is repeating the wrong information in the linked article. There is a nontrivial vacuum even without the Higgs, it is the pion vacuum in our universe. Pion style condensation breaks SU(2)xU(1) to U(1) too, but in a little different way than the Higgs does. – Ron Maimon Jul 6 '12 at 5:08
The article does not answer part of my question. Will there be protons and neutrons (or more likely many flavors of protons and neutrons since all the quark flavors have zero rest mass)? From what I can tell, the strong nuclear force in this model is pretty much independent of the Higgs shenanigans, and it would still require extremely large energies to separate quark colors. So in a relatively cool universe, there should be these composite particles even with massless quarks. Correct? – Mark Adler Jul 6 '12 at 5:13
@MarkAdler: Correct, although there are lots of different neutrons and protons of exactly the same mass, and split electromagnetically from each other. Further, you still get W's and Z's massive. – Ron Maimon Jul 6 '12 at 7:22
|
http://physics.stackexchange.com/questions/35351/is-it-easier-to-move-or-rotate-an-object
|
# Is it easier to move or rotate an object? [closed]
This is a problem I'm facing on designing a moving bookcase for my home, I just don't have enough physics background to tackle the problem.
Which of the following requires more force/effort?
• A bookcase with a pivot point in one of the back corners that is rotated 180° to its new position (therefore we now face the back of the bookcase).
• A bookcase that is pushed/pulled to the same relative position but with a diagonal push (guided by one rail at the top).
In both cases the bookcase will be supported by equivalent fixed caster wheels aligned with the direction of movement or rotation, with the pivot and the rail acting only as guide.
I don't think it should matter, but the bookcase will be about 1 ft deep and 3 ft wide, I don't have any good source, but I think it would be around 200 kg if fully loaded.
-
## closed as too localized by Qmechanic♦, Sklivvz♦ Jan 3 at 13:28
## 2 Answers
If you want to do this rigorously you need to phrase the question more carefully. Assuming we can neglect friction it's possible to move the bookcase with an arbitrarily small amount of work because you can apply a small force for a short time to accelerate the bookcase then wait for it to move into place. So if you're prepared to wait a long enough time you can move the bookcase with as small a force as you want.
I'd guess you're really asking something like: if I want to move the bookcase in a set time $t$ by first pushing to accelerate it then pushing to slow it down again, is the force bigger if I slide it or if I rotate it?
Start by sliding the bookcase. You need to apply a force F until the bookcase has moved a distance $w/2$ then apply the same force in the other direction to slow it again, and you want to do this in some set time $\tau$. The equation you need is:
$$s = \frac{1}{2}at^2$$
$a$ is the acceleration, which is $F/m$, $s$ is the distance moved and $t$ is the time taken. If you just consider the first half of the move, i.e. moving the bookcase a distance $w/2$ in a time $\tau/2$ then we can put these values in the above equation to get:
$$\frac{w}{2} = \frac{1}{2} \frac{F}{m} \frac{\tau^2}{4}$$
and a quick rearrangement tells us that the force $F$ is given by:
$$F_{slide} = 4 \frac{mw}{\tau^2}$$
Working out the force involved in rotation is pretty similar, except that the equation we need is now:
$$\theta = \frac{1}{2}\frac{T}{I}t^2$$
where $\theta$ is the angle moved, $T$ is the torque and $I$ is the moment of inertia. As above we take the first half of the rotation i.e. rotate the bookcase an angle $\pi/2$ in a time $\tau/2$. You also need to know that torque is force times distance from the hinge i.e. $T = Fw$, and the moment of inertia of slab of width $w$ hinged about the edge is $I = mw^2/3$. So substitute all of this in our equation and we get:
$$\frac{\pi}{2} = \frac{1}{2} \frac{Fw}{mw^2/3} \frac{\tau^2}{4}$$
and rearranging this gives:
$$F_{rot} = \frac{4\pi}{3}\frac{mw}{\tau^2}$$
Lets take the ratio $F_{rot}/F_{slide}$:
$$F_{rot}/F_{slide} = \frac{\pi}{3}$$
So the answer is that the force needed to rotate the bookcase is higher than the force needed to slide it by a factor of $\pi/3$.
BUT
All this is dependent on a specific model of how you move the bookcase, I've ignored friction, and in any case the factor of $\pi/3$ is pretty close to unity. I think all we can really say is that it probably doesn't make much difference whether you slide or rotate the bookcase.
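To put rough numbers on this (my own illustrative values, not from the question: $m = 200\ \mathrm{kg}$, $w \approx 0.9\ \mathrm{m}$, and an assumed move time $\tau = 4\ \mathrm{s}$):

```python
# Rough numerical comparison of the two formulas derived above,
# with assumed values for the mass, width, and move time.
from math import pi

m, w, tau = 200.0, 0.9, 4.0             # kg, m, s (assumed)
F_slide = 4 * m * w / tau**2            # ~45 N
F_rot = (4 * pi / 3) * m * w / tau**2   # ~47 N
print(F_slide, F_rot, F_rot / F_slide)  # ratio is pi/3 ~ 1.047
```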
-
Really nice, thank you :) I still haven't digest it all but that is the answer that I needed. – Luiz Borges Sep 1 '12 at 11:41
Of course rotating is easier, but you need to push longer: rather than pushing $r$ meters, you actually follow a $\pi r$ (180 degree) path.
For rotating, the static friction to overcome is smaller for the points closer to the pivot. For straight pushing, each point has an equal magnitude of static friction.
-
I thought about it and it looks like it makes sense, but I couldn't actually form a rationale for how that would work (or for how to calculate it). What I thought was that I could calculate the individual path lengths of each wheel in the rotation case and somehow compare it to the straight push case, but I don't quite remember my high school physics and how to work that out. Do you care to elaborate on how I would go about calculating it (just for fun)? BTW, why would the STATIC friction be smaller (shouldn't it be the kinetic friction)? – Luiz Borges Aug 31 '12 at 23:34
Ok, we can try taking an area*sin(angle) integral in a chat. I don't know how to open a new chat window – huseyin tugrul buyukisik Aug 31 '12 at 23:40
Ok, I just understood why the friction is static when dealing with wheels, but how do I go from the force required to overcome the static friction and put it into motion to $F=ma$? What is the force in that case? – Luiz Borges Sep 1 '12 at 0:29
I can't find the exact force without measuring the geometric and material properties of the case. – huseyin tugrul buyukisik Sep 1 '12 at 8:55
|
http://jeremykun.com/2011/07/09/set-theory-a-primer/
|
# Set Theory – A Primer
Posted on July 9, 2011 by
A student's first exposure to rigorous mathematics is often through set theory, as originally studied by Georg Cantor. Following that tradition, we will not treat set theory axiomatically (as in ZF set theory), but rather take the definition of a set for granted, and allow any operation to be performed on a set. This will be clear when we present examples, and it will be clear why this is a bad idea when we present paradoxes.
## The Basics of Sets
Definition: A set $S$ is a collection of distinct objects, each of which is called an element of S. For a potential element $x$, we denote its membership in $S$ and lack thereof by the infix symbols $\in, \notin$, respectively. The proposition $x \in S$ is true if and only if $x$ is an element of $S$.
Definition: The cardinality of $S$, denoted $|S|$, is the number of elements in S.
The elements of a set can in principle be anything: numbers, equations, cats, morals, and even (especially) other sets. There is no restriction on their size, and the order in which we list the objects is irrelevant. For now, we will stick to sets of numbers, and later we will move on to sets of other sets.
There are many ways to construct sets, and for finite sets it suffices to list the objects:
$S = \left \{ 0, 2,4,6,8,10 \right \}$
Clearly this set has cardinality six. The left and right curly braces are standard notation for the stuff inside a set. Another shorter construction is to simply state the contents of a set (let $S$ be the set of even numbers between 0 and 10, inclusive). Sometimes, it will be very important that we construct the sets in a more detailed way than this, because, as we will see, sets can become rather convoluted.
We may construct sets using an implied pattern, i.e. for the positive evens:
$E = \left \{ 2, 4, 6, 8, \dots \right \}$
For now, we simply allow that this set has infinite cardinality, though we will revisit this notion in more detail later. In this way we define two basic sets of numbers:
$\mathbb{N} = \left \{ 1, 2, 3, \dots \right \} \\ \mathbb{Z} = \left \{ 0, -1, 1, -2, 2, \dots \right \}$
We name $\mathbb{N}$ the natural numbers, and $\mathbb{Z}$ the integers. Yet another construction allows us to populate a set with all values that satisfy a particular equation or proposition. We denote this $\left \{ \textup{variable} | \textup{condition} \right \}$. For example, we may define $\mathbb{Q}$, the rational numbers (fractional numbers) as follows:
$\displaystyle \mathbb{Q} = \left \{ \frac{p}{q} | p \in \mathbb{Z} \textup{ and } q \in \mathbb{Z}, q \neq 0 \right \}$
This is not quite a complete description of how rational numbers work: some fractions are “equivalent” to other fractions, like 2/4 and 1/2. There is an additional piece of data called a “relation” that’s imposed on this set, and any two things which are related are considered equivalent. We’re not going to go into the details of this, but the interested reader should look up an equivalence relation.
Next, we want to describe certain pieces of sets:
Definition: A set $R$ is a subset of a set $S$, denoted $R \subset S$, if all elements of $R$ are in $S$. i.e., for all $x, x \in R$ implies $x \in S$.
And we recognize that under our standard equality of numbers (i.e. $\frac{4}{1} = 4$), we have $\mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q}$.
We may now define equality on sets, extending the natural idea that two sets are equal if they contain precisely the same elements:
Definition: Two sets $R,S$ are equal if $R \subset S$ and $S \subset R$.
A natural set to construct now is the power set of a given set:
Definition: the power set of $S$, denoted $P(S)$, is the set of all subsets of $S$. i.e. $P(S) = \left \{ R | R \subset S \right \}$.
Elements of this set are sets themselves, and there are two trivial yet important sets to recognize in $P(S)$: $S$ itself, since $S \subset S$, and the empty set $\left \{ \right \}$, which contains no elements and is vacuously a subset of every set.
For a finite set $S$, power sets are strictly larger in size, since there exists a singleton set $\left \{ x \right \} \in P(S)$ for each $x \in S$. As an exercise for the reader, determine the size of $P(S)$ for any finite set $S$, expressed in terms of $|S|$. For infinite sets, we simply admit that their power sets are also infinite, since we don’t yet have a way to describe “larger” infinities.
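If you like to experiment, here is a small sketch (my addition, not in the original post) that enumerates $P(S)$ for small finite $S$, so you can guess the answer to the exercise before proving it:

```python
# Enumerate the power set of a finite set via subsets of every size.
from itertools import combinations

def power_set(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

for n in range(5):
    S = set(range(n))
    print(n, len(power_set(S)))  # |P(S)| doubles each time n grows by 1
```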
## Building Sets From Other Sets
We have a couple of nice operations we may define on sets, which are rather trivial to define.
Definition: The union of two sets $R,S$, denoted $R \cup S$, is $\left \{ x | x \in S \textup{ or } x \in R \right \}$.
Definition: The intersection of two sets $R,S$, denoted $R \cap S$, is $\left \{ x | x \in S \textup{ and } x \in R \right \}$.
As an exercise, try to prove that $|S \cup R| = |S| + |R| - |S \cap R|$.
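As a companion to the exercise (again my addition), the identity is easy to test empirically on random finite sets before attempting the proof:

```python
# Empirical check of |S ∪ R| = |S| + |R| - |S ∩ R| on random sets.
import random

for _ in range(1000):
    S = set(random.sample(range(20), random.randint(0, 10)))
    R = set(random.sample(range(20), random.randint(0, 10)))
    assert len(S | R) == len(S) + len(R) - len(S & R)
print("identity held on all random trials")
```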
The next definition requires one to remember what an ordered tuple $(a_1,a_2, \dots , a_n)$ is. Specifically, an ordered tuple is just like a set which allows for repetition and respects the presented order of elements. So $(a,b) \neq (b,a) \neq (a,a,b)$.
Definition: The direct product (or simply product) of two sets $R,S$, denoted $R \times S$, is $\left \{ (x,y) | x \in R \textup{ and } y \in S \right \}$.
This is just like defining the Cartesian plane $\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}$ as ordered pairs of real numbers. We can extend this even further by defining $S^n$ to be the set of all $n$-tuples of elements in $S$.
## Functions, and Their -Jections
Now that we have sets and ways to build interesting sets, we may define mathematical objects which do stuff with them.
Definition: A relation $\sim$ on $R$ and $S$ is a subset of $R \times S$. Denotationally, we write $a \sim b$ as shorthand for $(a,b) \in \sim$.
Relations are natural generalizations of $=$ on numbers. In general relations need no additional properties, but they are not very interesting unless they do. For more discussion on relations, we point the reader to the Wikipedia page on equivalence relations. As an exercise to the reader, prove that set equality (defined above) is an equivalence relation, as expected.
Now, we get to the meat of our discussion on sets: functions.
Definition: A function $f : S \to R$ is a relation on $S$ and $R$, a subset of $S \times R$, with the additional property that for each $x \in S$, there is exactly one element of the form $(x, \cdot ) \in f$.
Colloquially, functions ‘accept’ a value from $S$ and output something in $R$. This is why we may only have one ordered pair for each $x \in S$, because functions are deterministic. Furthermore, we must be able to put every value from $S$ into our function, so no values $x \in S$ may be without a corresponding element of $f$.
We have special notation for functions, which was established long before set theory was invented. If $x \in S$, we write $f(x)$ to denote the corresponding element $y$ in $(x,y) \in f$. In addition, we say that a value $x$ maps to $y$ to mean $f(x)=y$. If the function is implicitly understood, we sometimes just write $x \mapsto y$. Then we have the following definitions:
Definition: The domain of a function $f: S \to R$, sometimes denoted $\textup{dom}(f)$, is the set of input values $S$. The codomain, denoted $\textup{codom}(f)$, is the set $R$. Since not every value in $R$ must be in a pair in $f$, we call the subset of values of $R$ which are produced by some input in $S$ the range of $f$. Rigorously, the range of $f$ is $\left \{ f(x) | x \in S \right \}$.
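To make the "function as a set of ordered pairs" picture concrete, here is a tiny hypothetical example (my addition, not in the original post):

```python
# A function f : S -> R represented literally as a set of pairs.
S = {1, 2, 3}
R = {'a', 'b', 'c'}
f = {(1, 'a'), (2, 'a'), (3, 'b')}

# Well-defined: exactly one pair (x, .) for each x in S.
assert all(sum(1 for (x, _) in f if x == s) == 1 for s in S)

rng = {y for (_, y) in f}
print("range:", rng)  # {'a', 'b'}, a proper subset of the codomain R
```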
Now we may speak of some “interesting” functions:
Definition: A function $f : S \to R$ is a surjection if its range is equal to its codomain. In other words, for every $y \in R$, there is some $x \in S$ with $f(x) = y$, or, equivalently, $f(S) = R$.
Note that $f$ being a surjection on finite sets implies that the domain is at least as big as the codomain. Though it seems trivial, we can use functions in this way to reason about the cardinalities of their domains and codomains.
Definition: A function $f : S \to R$ is an injection if no two different values $x \in S$ map to the same $y \in R$. In other words, if $f(a) = f(b)$, then $a=b$.
Similarly, for finite domains/codomains an injection forces the codomain to be at least as big as the domain in cardinality.
Now, we may combine the two properties to get a very special kind of function.
Definition: A function $f: S \to R$ is a bijection if it is both a surjection and an injection.
A bijection specifically represents a “relabeling” of a given set, in that each element in the domain has exactly one corresponding element in the codomain, and each element in the codomain has exactly one corresponding element in the domain. Thus, the bijection represents changing the label $x$ into the label $f(x)$.
Note that for finite sets, since a bijection is both a surjection and an injection, the domain and codomain of a bijection must have the same cardinality! What’s better, is we can extend this to infinite sets.
## To Infinity, and Beyond! (Literally)
Definition: Two infinite sets have equal cardinality if there exists a bijection between them.
Now we will prove that two different infinite sets, the natural numbers and the integers, have equal cardinality. This is surprising: the sets are not equal, and one is even a subset of the other, yet they have equal cardinality! So here we go.
Define $f : \mathbb{N} \to \mathbb{Z}$ as follows. Let $f(1) = 0, f(2) = 1, f(3) = -1 \dots$. Continuing in this way, we see that $f(2k+1) = -k$, and $f(2k) = k$ for all $k \in \mathbb{N}$. This is clearly a bijection, and hence $|\mathbb{N}| = |\mathbb{Z}|$.
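This bijection is easy to implement and test on an initial segment (a sketch of mine, not in the original post):

```python
# The bijection f : N -> Z above, with its inverse, checked on a range.
def f(n):                 # n = 1, 2, 3, ...
    return n // 2 if n % 2 == 0 else -(n // 2)

def f_inv(z):
    return 2 * z if z > 0 else -2 * z + 1

assert [f(n) for n in range(1, 8)] == [0, 1, -1, 2, -2, 3, -3]
assert all(f_inv(f(n)) == n for n in range(1, 1000))
```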
We can extend this to any bijection between an infinite set $S$ of positive numbers and the set $\left \{ \pm x | x \in S \right \}$.
Let’s try to push bijections a bit further. Let’s see if we can construct a bijection between the natural numbers and the positive rationals (and hence the set of all rationals). If integers seemed bigger than the naturals, then the rational numbers must be truly huge. As it turns out, the rationals also have cardinality equal to the natural numbers!
It suffices to show that the natural numbers are equal in cardinality to the nonnegative rationals. Here is a picture describing the bijection:
We arrange the rationals into a grid, such that each blue dot above corresponds to some $\frac{p}{q}$, where $p$ is the x-coordinate of the grid, and $q$ is the y-coordinate. Then, we assign to each blue dot a nonnegative integer in the diagonal fashion described by the sequence of arrows. Note these fractions are not necessarily in lowest terms, so some rational numbers correspond to more than one blue dot. To fix this, we simply eliminate the points $(p,q)$ for which the greatest common divisor of $p$ and $q$ is not 1. Then, in assigning the blue dots numbers, we just do so in the same fashion, skipping the places where we deleted bad points.
This bijection establishes that the natural numbers and rationals have identical cardinality. Despite how big the rationals seem, they are just a relabeling of the natural numbers! Astounding.
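The diagonal walk, together with the gcd skipping rule, also translates directly into a generator (my own sketch, not in the original post):

```python
# Enumerate the nonnegative rationals along the diagonals p + q = n,
# skipping pairs (p, q) that are not in lowest terms.
from math import gcd
from fractions import Fraction

def rationals():
    n = 0
    while True:
        n += 1
        for p in range(n):
            q = n - p
            if gcd(p, q) == 1:  # skip duplicate representations
                yield Fraction(p, q)

gen = rationals()
print(*(next(gen) for _ in range(10)))  # 0 1 1/2 2 1/3 3 1/4 2/3 3/2 4
```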
With this result it seems like every infinite set has cardinality equal to the natural numbers. It should be totally easy to find a bijection between the naturals and the real numbers $\mathbb{R}$.
Unfortunately, try as we might, no such bijection exists. This was a huge result proven by Georg Cantor in his study of infinite sets, and its proof has become a staple of every mathematics education, called Cantor’s Diagonalization Proof.
First, we recognize that every real number (between 0 and 1, say) has a representation in base 2 as an infinite sequence of 0's and 1's. Thus, if there were such a bijection between the natural numbers and reals, we could list the reals in order of their corresponding naturals, as:
$1 \mapsto d_{1,1}d_{1,2}d_{1,3} \dots \\ 2 \mapsto d_{2,1}d_{2,2}d_{2,3} \dots \\ 3 \mapsto d_{3,1}d_{3,2}d_{3,3} \dots$
$\vdots$
Here each $d_{i,j} \in \left \{ 0,1 \right \}$ corresponds to the $j$th digit of the $i$th number in the list. Now we may build a real number $r = d_{1,1}d_{2,2}d_{3,3} \dots$, the diagonal elements of this matrix. If we take $r$ and flip each digit from a 0 to a 1 and vice versa, we get the complement of $r$, call it $r'$. Notice that $r'$ differs from every real number at some digit, because the $i$th real number shares digit $i$ with $r$, and hence differs from $r'$ at the same place. But $r'$ is a real number, so it must occur somewhere in this list! Call that place $k \mapsto r'$. Then, $r'$ differs from $r'$ at digit $k$, a contradiction. (One small subtlety we gloss over: some reals have two binary representations, like $0.0111\dots = 0.1000\dots$, but the argument can be patched to handle this.)
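For a finitely-truncated version of the construction, a few lines of code (my addition) make the "differs at digit $i$" bookkeeping explicit:

```python
# Diagonalization on a finite truncation: given the first n digits of the
# first n listed binary sequences, build r' differing from the i-th at digit i.
import random

n = 8
listed = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]

r_prime = [1 - listed[i][i] for i in range(n)]  # flip the diagonal digits

for i, row in enumerate(listed):
    assert r_prime[i] != row[i]  # r' differs from the i-th sequence at digit i
print("r' =", r_prime)
```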
Hence, no such bijection can exist. This is amazing! We have just found two infinite sets which differ in size! Even though the natural numbers are infinitely large, there are so many mind-bogglingly more real numbers that it is a larger infinity! We need new definitions to make sense of this:
Definition: An infinite set $S$ is countably infinite if there exists a bijection $\mathbb{N} \to S$. If no such bijection exists, we call it uncountably infinite, or just uncountable.
Georg Cantor went on to prove this in general, that there cannot exist a bijection from a set onto its power set (we may realize the reals as the power set of the naturals by a similar binary expansion). He did so simply by extending the diagonalization argument. But since we are dealing with infinite sets, we need even more definitions of "how hugely infinite" these infinite sets can be. These new measures of set cardinality were also invented by Cantor, and they are called transfinite numbers. Their investigation is beyond the scope of this post, but we encourage the reader to follow up on this fascinating subject.
We have still only scratched the surface of set theory, and we have even left out a lot of basic material to expedite our discussion of uncountability. There is a huge amount of debate that resulted from Cantor’s work, and it inspired many to further pick apart the foundations of mathematics, leading to more rigorous formulations of set theory, and extensions or generalizations such as category theory.
## Sets of Sets of Sets, and so on Ad Insanitum
We wrap up this post with a famous paradox, which makes one question whether all of the operations performed in set theory are justified. It is called Russell’s Paradox, after Bertrand Russell.
Suppose we define a set $S$, which contains itself as an element. This does not break any rules of Cantor’s set theory, because we said a set could be any collection of objects. We may even speak of the set of all sets which contain themselves as elements.
Now let us define the set of all sets which do not contain themselves. Call this set $X$. It must be true that either $X \in X$ or $X \notin X$. If $X \in X$, then $X$ is contained in itself, obviously, but then by the definition of $X$, it does not contain itself as an element. This is a contradiction, so $X \notin X$. However, if $X \notin X$, then $X$ satisfies the definition of a set which does not contain itself, so $X \in X$. Again, a contradiction.
This problem has no resolution within Cantor’s world of sets. For this reason (and other paradoxes based on wild constructions of sets), many have come to believe that Cantor’s set theory is not well-founded. That is not to say his arguments and famous results are wrong, but rather that they need to be reproved within a more constrained version of set theory, in which such paradoxes cannot happen.
Such a set theory was eventually found that bypassed Russell’s paradox, and it is called Zermelo-Fraenkel set theory. But even that was not enough! Additional statements, like the Axiom of Choice (which nevertheless does lead to some counter-intuitive theorems), were found which cannot be proved or disproved by the other axioms of ZF set theory.
Rather than give up all that work on axiomatizing set theory, most mathematicians today accept the Axiom of Choice and work around any oddities that arise, resulting in ZFC (ZF + Choice), doing their mathematical work there.
So along with paradoxical curiosities, we have laid all the foundation necessary for reasoning about countability and uncountability, which has already shown up numerous times on this blog.
Until next time!
## 4 thoughts on “Set Theory – A Primer”
1. I on May 10, 2013 at 4:34 pm said:
Hi! I’d like to point out that you make a very grave misunderstanding towards the end of this article when you discuss paradoxes arising from set theory. In particular, you seem to get caught up on the usage of the word paradox and don’t make clear that it is being used in multiple senses.
Russell's paradox is a paradox in the logical sense. That is, if there exists a set of all sets which do not contain themselves, then a logical contradiction arises. The Banach-Tarski paradox is not a paradox in this sense. In fact, it is not logically inconsistent at all. The Banach-Tarski paradox is termed a paradox because it defies intuition (or rather, it defies an intuition not tempered by a proper understanding of measure theory). The Banach-Tarski paradox (roughly) states that it is possible to divide a sphere into pieces, rotate and translate these pieces, and then rearrange them into two spheres with the same radius as the original. This seems like it should be impossible! The reason it is not impossible is that the pieces are what is termed unmeasurable. Essentially what this means is that it is not possible to assign a volume to these sets of points. This is not the same as saying that the pieces have zero volume. Instead, it means that if you assigned them any volume, a contradiction would result.
The area of mathematics which studies which sets can be assigned "volume" is known as measure theory, measures being a generalization of the notion of volume or area. A good place to start if you are interested in the Banach-Tarski paradox would be the construction of a Vitali set. Giuseppe Vitali showed that it is possible to construct a set of real numbers which is not Lebesgue measurable. Lebesgue measure has a technical definition but conforms to what you would expect to be the length of "nice" sets. For example, any interval $[a,b]$ has Lebesgue measure $b - a$. In higher dimensions, Lebesgue measure matches the geometric definitions of area and volume. While Vitali's argument is not prerequisite per se for the proof of the Banach-Tarski paradox, it is a good place to build an understanding of nonmeasurable sets.
To get back to the main point of this comment, it is misleading to say that ZFC results in paradoxes. Although we do not have a proof that ZFC is free of logical inconsistencies (cf. Goedel's second incompleteness theorem), near universal opinion among mathematicians is that ZFC is consistent. Presenting the Banach-Tarski paradox as similar to Russell's paradox ignores the very real differences between them. Russell's paradox (and other paradoxes of naive set theory) were considered serious problems because logical contradictions in a theory make it useless and boring. Anything is provable from a logical contradiction (caveat: in classical logic—there are logics in which this is not true). Both the Banach-Tarski and Russell's paradox present us with something that contradicts our intuition. In the case of Banach-Tarski, our intuition is wrong. In the case of Russell's paradox, the theory (naive set theory) is wrong (or at least inconsistent).
• on May 10, 2013 at 5:29 pm said:
Yes, I’m well aware of the details of measure theory and of the difference between these two kinds of “paradoxes.” This was simply a misuse of the word in two meanings. That being said, there is a way to describe the BT paradox as a “paradox” in the classical sense. It involves laying down some reasonable axioms about volume being translation and rotation-preserving, etc., and concluding that the BT construction is still valid and hence no such system can be consistent. I believe I was thinking of this formulation when I had originally written this post.
But your clarification is more than ample to entice any readers interested in knowing more.
2. Chris on May 11, 2013 at 11:52 am said:
Thanks for these primers; They are motivating me to spend more time studying math. You have a typo: Colloquially, functions ‘accept’ a value form S and output something in R. (Form should be from.)
3. Tim on May 11, 2013 at 3:11 pm said:
Great page! A couple of very minor points:
- Your definition of the rationals Q would be better if it used q \in N, since allowing q to be in Z means it can be 0, and no rational number has 0 as denominator.
- When you say “Note these fractions are not necessarily in lowest terms, so some blue dots correspond to more than one rational number”, I think what you really mean is “Note these fractions are not necessarily in lowest terms, so some rational numbers correspond to more than one blue dot”.
|
http://mathoverflow.net/questions/58495/why-hasnt-mereology-suceeded-as-an-alternative-to-set-theory/64342
|
## Why hasn’t mereology succeeded as an alternative to set theory?
I have recently run into this wikipedia article on mereology. I was surprised I had never heard of it before and indeed it seems to be seldom mentioned in the mathematical literature. Unlike set theory, which is founded on the idea of set membership, mereology is built upon what I consider conceptually more elementary, namely the relation between parts and the whole.
Personally, I have always found a little bit unsatisfactory (philosophically speaking) the fact that set theory postulates the existence of an empty set. But of course there is the technical aspect, and current axiomatizations of set theory seem to be quite good regarding what they allow us to prove.
Now it seems there have been some attempts to relate mereology and set theory, and according to the article, some authors have recently tried to deduce ZFC axioms as theorems in certain axiomatizations of it. Yet, apparently only a couple of well trained mathematicians (one of them Tarski) have discussed mereology, since most people have shown indifference towards the whole subject.
So my questions are: how is it that mereology has had no success as a possible foundation for mathematics? Are axiomatizations based on mereology not suitable for most developments, or simply not worth the while? If so, what is the technical reason behind this?
-
It doesn't have to have no success; even if it has the same success, there's still no incentive to switch. It needs to have greater success in order to make a switch seem like a good idea, and meanwhile we have category theory...! – Qiaochu Yuan Mar 15 2011 at 1:29
Things fall apart; the centre cannot hold / Mere ology is loosed upon the world... – Yemon Choi Mar 15 2011 at 5:23
@Qiaochu: A comment from Eric Raymond on Plan 9 may be in order here: "Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough." The same could be said of bases for doing mathematics. – Robert Haraway Mar 15 2011 at 13:55
This may sound harsh, but: where is the math question here? The OP's motivations for considering mereology seem to be a mixture of psychological and philosophical -- "mereology is built upon what I consider conceptually more elementary" -- but what would be a putative mathematical advantage of having mereological foundations? Note that the majority of working mathematicians are not only happy with set theory as a foundation: moreover, they don't want to think about foundational issues at all, and the (naive) concept of a set is something they have accepted since their school days. – Pete L. Clark May 9 2011 at 2:04
@ Pete: Whatever my motivation for asking the question might be (which you may or may not consider worthwhile), the question asks precisely about why mereological foundations are not suitable, compared to set theory; which is a rather technical matter (certainly mathematics). – godelian May 9 2011 at 2:40
## 6 Answers
Unlike category theory which is in many ways a freer framework in which to do mathematics and which very nicely captures universal objects and constructions (e.g., limits and colimits), mereology is a more restrictive framework than set theory. The whole/part relation can be captured by set/subset, but set/member cannot simply be recaptured in mereology. For instance, in mereotopology a space is comprised entirely of extended parts, no points. Try reformulating the separation axioms and deriving Urysohn's theorem, for example. (Maybe it can be done. I think so. But it's not immediately clear how.) For these reasons, mereology will remain of interest to nominalistically inclined mathematical philosophers (like Tarski, not to mention Russell and Whitehead in whose work I find mereological inclinations) but is not likely to spark a major mathematical research program, in my opinion.
-
Locale theory is topology without points. It proves Tychonoff theorem without using choice. In fact, it is a good idea to consider spaces as more than just bags of points. – Andrej Bauer Mar 15 2011 at 3:56
Thanks! I didn't mean to suggest it was a bad idea. – Jeremy Shipley Mar 15 2011 at 4:21
I thought that points were definable in mereology as objects that have no proper parts (after you get rid of the empty set's object). What's the obstruction that prevents mereology from getting set theory as a definitional extension in that way? – Carl Mummert Mar 15 2011 at 12:03
As a quibble, locale theory can certainly prove a result that is analogous to Tychonoff's theorem without AC, but because Tychonoff's theorem implies AC over ZF it's impossible to prove the actual Tychonoff theorem in ZF or in any constructive theory that is a subtheory of ZF when viewed from a classical standpoint. – Carl Mummert Mar 15 2011 at 12:08
5
My main point in answering the question is that mereology is more restrictive. Although it is true that interesting mathematics arises from adopting restrictions (intuitionism, constructivism), more restricitve frameworks are not likely to supplant less restrictive frameworks as widely adopted working foundations, in my opinion. – Jeremy Shipley Mar 15 2011 at 14:34
show 1 more comment
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Lesniewski's idea was not only to replace set theory with mereology but to construct an entirely new foundation for mathematics, consisting of three systems:
• protothetic - the counterpart of propositional logic
• ontology - which from a contemporary point of view is a first-order theory of a binary predicate; this could be roughly described as a theory of "is" (but do not confound it with $\in$)
• mereology - a nominalistically motivated theory of sets.
Lesniewski's motivations were first of all philosophical in spirit. He wrote explicitly that he could accept neither Russell's and Whitehead's notion of class nor Frege's notion of the extension of a concept. Moreover, he could not accept the existence of the empty class. One of his most important technical motivations, so to say, was Russell's paradox.
As for mereology (I know very little about the other systems), Lesniewski's original system of axioms (as well as the one introduced by Leonard and Goodman under the name calculus of individuals) is definitely too weak to reconstruct even a fragment of arithmetic, for example. It was proved by Tarski (in the 30's of the previous century) that Lesniewski's mereology determines structures which bear a very strong resemblance to complete Boolean algebras. Every mereological structure can be transformed into a complete Boolean lattice by adding a zero element (its non-existence is a consequence of the axioms for mereology). And vice versa, every complete Boolean lattice can be turned into (mutatis mutandis) a mereological structure by deleting the zero element. Thus it is by far too little to think of rebuilding mathematics in this framework.
However, as Jeremy Shipley said above, there is some work towards building point-free geometric and topological systems based on mereology enhanced with some additional relation which, according to its intended interpretation, models the situation in which regions are in contact (or are separated). Alfred Tarski himself was one of the first to do this in his Foundations of the geometry of solids. One can then try to express separation axioms in the language of mereology plus connection, or require some other topological properties by means of axioms imposed on connection. All of this can be done, though usually with an application of ZF (ZFC) at the metalevel, which is far from Lesniewski's intentions.
It seems worthwhile to point out that Steve’s answer also essentially answers Carl Mummert’s question (in a comment) about why one can’t get set theory as a definitional extension of mereology by defining points (as things with no proper parts) and then using “point $x$ is a part of object $y$” as the mereological interpretation of $x\in y$. You can indeed handle sets of points this way, but there’s no good way to handle sets of sets. Mereology (at least in Leśniewski’s version — I’m not familiar with other versions) would make no distinction between a collection of sets and the union of those sets. I think you can get somewhat closer to set theory by combining (as Leśniewski did) mereology with ontology, but even then I don’t think you get anywhere near ZF. To really handle something like the cumulative hierarchy of ZF (or even the shorter hierarchy of Russell-style type theory, I believe), mereology would have to be supplemented with some way to treat sets as (new) points, something like Frege’s notion of Wertverlauf (which would probably be anathema to Leśniewski).
Either my browser (Safari) or MO software seems to prefer French to Polish. It allows me to put an acute accent over an e, but when I try to put an acute accent over an s (as in Lesniewski) it inserts a space before the s and puts the accent on that. So please imagine that all occurrences of "Lesniewski" have an acute accent over the first s. – Andreas Blass Aug 15 at 14:27
@Emil: Thanks for adding the accents. – Andreas Blass Aug 15 at 16:24
In algebraic set theory a la Joyal and Moerdijk, the subset relation is taken as fundamental, with membership only being a derived notion (specifically, the cumulative hierarchy is taken to be the free "ZF-algebra"*; i.e., partial order with small joins and an abstract "singleton" operator. The order corresponds to subsethood, and x is defined to be an element of y just in case the singleton operator applied to x yields a subset of y). I can never quite grasp what it is that mereology is supposed to be all about as a supposed contrast to set theory, but if it's just a matter of viewing subsethood as more elementary a concept than membership, well, there you go.
[*: ZF-algebra isn't a great name for the general concept of such structures, in my opinion, since they have very little to do with specifically Zermelo-Fraenkel set theory. Note that, while every object in the cumulative hierarchy is uniquely a join of singletons (and in this way can be viewed as a plain old bag of elements), in more general ZF-algebras, there may be objects which are not joins of singletons, thus carrying a more mereological flavor; in particular, these illustrate that subsethood is not definable in terms of membership, firmly establishing subsethood as the more primitive notion in this context]
I decided to add one more answer (instead of editing the previous one), since it is quite long. This will mainly address the OP's question, Andreas Blass's answer and Carl Mummert's comment about defining sets as sets of atoms (points) in mereology. I hope it will shed some light on mereology and its relation to set theory.
In mereology, as it is done in the Lesniewskian tradition, it is assumed that the part-of relation (in symbols: $\sqsubseteq$) is a partial order (reflexive, antisymmetric and transitive) and that it satisfies the separation condition (those familiar with forcing will find it very familiar): $$\neg x\sqsubseteq y\longrightarrow\exists z(z\sqsubseteq x\wedge z\mathrel{\bot} y)$$ where $z\mathrel{\bot} y\iff\neg\exists u(u\sqsubseteq z\wedge u\sqsubseteq y)$ ($z$ and $y$ are incompatible; otherwise they are compatible). The crucial point is the definition of mereological sum (sometimes called fusion). The very idea of mereological sum is hidden in the following equivalence:
an object $x$ is a mereological sum of the group of $S$-es if and only if every $S$ is part of $x$ and every part of $x$ is compatible with some $S$.
Notice that it is a consequence of the definition that there cannot be a mereological sum of an empty group of objects. Using sets and set-theoretical notation we may define the sum of a set $X$ as a binary relation in the following way: $$x\mathrel{\mathrm{Sum}} X\iff \forall y(y\in X\longrightarrow y\sqsubseteq x)\wedge\forall y(y\sqsubseteq x\longrightarrow\exists z(z\in X\wedge\neg z \mathrel{\bot} y)).$$ What is usually called classical mereology is a second-order system obtained by adding the following axiom: $$\forall X(X\neq\emptyset\longrightarrow\exists x(x\mathrel{\mathrm{Sum}} X)).$$ Building a first-order system is a little bit more painstaking. To simplify things a bit we may introduce some auxiliary notation: $$x\mathrel{\mathbf{sum}_y}\varphi(y)$$ as an abbreviation of the following formula: $$\forall y(\varphi(y)\longrightarrow y\sqsubseteq x)\wedge\forall u(u\sqsubseteq x\longrightarrow\exists z(\varphi(z)\wedge \neg z\mathrel{\bot} u)).$$ "$x\mathrel{\mathbf{sum}_y}\varphi(y)$" may be read as: $x$ is a mereological sum of all $\varphi$-ers. From this we can prove, for example, that:
• $\forall z(z\mathrel{\mathbf{sum}_y}(y=z))$
• $\forall z(z\mathrel{\mathbf{sum}_y}(y\sqsubseteq z))$.
In this setting, the mereological sum existence axiom schema can be expressed as: $$\exists x\varphi(x)\longrightarrow\exists y(y\mathrel{\mathbf{sum}_x}\varphi(x)).$$ Since a consequence of the axioms presented is that there can be only one mereological sum of the $\varphi$-ers, we can introduce notation (analogous to the set-theoretical abstraction operator): $$\bigl[x\mid\varphi(x)\bigr],$$ for those formulas which are satisfied by at least one object. Now, the important thing is that: $$x=\bigl[x\bigr],$$ so we cannot distinguish between any given object and its mereological singleton (so to say), which is the first obstacle to interpreting ZF(C).
Defining proper part as $x\sqsubset y\iff x\sqsubseteq y\wedge x\neq y$ we may define mereological atoms (or points, if you prefer the name): $$\mathrm{Atom}(x)\iff\neg\exists y(y\sqsubset x).$$ Now, in case $a_1,\ldots,a_n$ are atoms we can indeed treat $\bigl[a_1,\ldots,a_n\bigr]$ as a counterpart of $\{a_1,\ldots,a_n\}$ (and similarly in the case of infinite collections); thus in this case the interpretation suggested by Carl Mummert and mentioned by Andreas Blass: $$x\in y\iff\mathrm{Atom}(x)\wedge x\sqsubset y,$$ works fine. But it does not work, for example, for: $$\bigl[\bigl[a_1,\ldots,a_n\bigr],\bigl[b_1,\ldots,b_m\bigr]\bigr]=\bigl[a_1,\ldots,a_n, b_1,\ldots,b_m\bigr],$$ since under the interpretation in question for every $a_i$: $$a_i\in\bigl[\bigl[a_1,\ldots,a_n\bigr],\bigl[b_1,\ldots,b_m\bigr]\bigr].$$ Thus, as Andreas already pointed out, there is no way to differentiate between sets of atoms and sets of sets of atoms and so on. Everything is reducible to a mereological set of atoms. (It is worth mentioning here as well that the existence of atoms is independent of the axioms of classical mereology.)
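For readers who want to see these definitions machine-checked, here is a minimal Lean 4 sketch (assuming Mathlib for `Set` and the bounded binder notation; the names `Compatible`, `MSum` and `Atom` are ad hoc choices here, not standard library definitions):

```lean
import Mathlib.Data.Set.Basic

-- An abstract part-of relation; `part x y` reads "x ⊑ y".
variable {α : Type*} (part : α → α → Prop)

/-- Two objects are compatible when they share a common part. -/
def Compatible (x y : α) : Prop := ∃ u, part u x ∧ part u y

/-- `MSum x X`: x is a mereological sum (fusion) of the collection X. -/
def MSum (x : α) (X : Set α) : Prop :=
  (∀ y ∈ X, part y x) ∧ ∀ y, part y x → ∃ z ∈ X, Compatible part z y

/-- A mereological atom has no proper part. -/
def Atom (x : α) : Prop := ¬ ∃ y, part y x ∧ y ≠ x
```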
To conclude this lengthy post, the crucial distinction between mereological sets and, so to say, standard ones is (I think) hidden in the following fact. The equivalence below is true about sets (with obvious restrictions, but assume that we limit our attention to a domain which is a set): $$\varphi(x)\iff x\in\{z\mid\varphi(z)\},$$ while its mereological counterpart is usually not true. That is, it is the case that: $$\varphi(x)\longrightarrow x\sqsubseteq\bigl[z\mid\varphi(z)\bigr],$$ but it is NOT the case that: $$x\sqsubseteq\bigl[z\mid\varphi(z)\bigr]\longrightarrow \varphi(x).$$
EDIT: Originally I suggested that it might be interesting to consider a system of mereology with the implication above taken as an axiom. However, in the comment below Andreas pointed out that this entails linearity of $\sqsubseteq$. The consequence is that the class of models of the theory consisting of the poset axioms + separation + existence of mereological sums narrows down to a single model (up to isomorphism), the degenerate one-element structure.
As Jeremy Shipley wrote above (in the comments), part-of is a decent interpretation of subsethood, but not of membership. There are still some other points worth mentioning, but this post has already got out of control.
I experience some problems with TeX notation - I write \{ and \} but the brackets are not visible in my browser. Could somebody please help me with this? – R.G. Aug 17 at 19:01
Fixed. You need to write these as \\{ \\} (or \lbrace \rbrace). – Emil Jeřábek Aug 17 at 19:07
You wrote that it may be interesting to consider mereology with the additional axiom that if $x$ is part of the sum of the $\phi$-ers then $x$ is itself a $\phi$-er. This axiom looks very strange to me for the following reason. Consider any two things $a$ and $b$, and let $\phi(z)$ say "$z=a$ or $z=b$". Let $s$ be the sum of the $\phi$-ers, i.e., of $a$ and $b$. Since $s$ is part of itself, your axiom would require $\phi(s)$. So $s$ would be one of $a$ and $b$, say $a$. Since $b$ is part of $s$, we'd get that $b$ is part of $a$. Conclusion: Of any two things, one is part of the other. – Andreas Blass Aug 17 at 20:33
@godelian: You can find something about non-wellfounded approach to mereology in the paper by A.J. Cotnoir and A. Bacon "Non-wellfounded mereology", Review of Symbolic Logic / Volume 5 / Issue 02 / June 2012, pp. 187-204 . Hope this helps. – R.G. Aug 18 at 11:36
The fact that the only structure satisfying the axioms for mereology plus the schema in question is the degenerate one can be shown directly using the fact that the mereology axioms entail the existence of the unity $\mathbf{1}$, that is, the object $x$ such that $\forall y(y\sqsubseteq x)$. One can now put $\varphi(x)\iff\forall y(y\sqsubseteq x)$. Since for any object $y$ it is the case that $y\sqsubseteq\mathbf{1}=\bigl[x\mid\varphi(x)\bigr]$, the axiom entails $\forall z(z\sqsubseteq y)$, that is, $y=\mathbf{1}$. Andreas, thank you very much once again for the comment! – R.G. Aug 18 at 15:04
The following remarks reflect personal research that may be relevant to the idea of a mereological foundation.
I devised a set of sentences intended to admit a universal class to Zermelo-Fraenkel set theory. The strategy involved a primitive part relation and a primitive membership relation with additional axioms to deal with identity and recharacterizing the part relation as a subset relation.
The proper part relation can be expressed as a self-defining predicate with a circular syntax. For this reason, I view the system as related to mereology.
The membership relation depends on the part relation, but is also introduced with a circular syntax.
The sense of these sentences is that to be a subset cannot exclude being a basic open set for a topology. To be an element cannot exclude being an element of a basic open set for a topology.
No functions or constants have such definition. A grammatical equivalence with relation to the primitive relations is defined. A first-order identity is defined after certain axioms establish familiar relations with respect to class equivalence. Second-order extensionality holds, but it is not the criterion of identity. Functions and constants may be introduced only with non-circular syntax in relation to the first-order identity predicate.
Although mereology is generally thought of in terms of the proper part relation, if one reads Lesniewski, there is a great deal of effort involved with investigation of logical equivalence. This work is done in response to Tarski's paper on primitive logistic. Tarski's analysis is done in second-order logic, as is Lesniewski's.
So, the manipulations to obtain an identity relation are consistent with Lesniewski's work, even though it does not seem that way because the usual feature discussed is the part relation.
All objects are classes, with exactly one class as a proper class. The proper part relation is essential to establish this distinction. The first-order identity relation is also essential since the single class that is not an element of any class is unique by virtue of first-order identity. Second-order extensionality does not permit this distinction. The sole proper class is the set universe.
Again, this is consistent with Lesniewski's work. In objecting to Russell's paradox, Lesniewski develops this notion of a full class. This becomes the general mereological principle that a class and its parts are uniform.
The membership relation could be stratified using the proper part relation. But, to establish singletons relative to the modified axiom of pairing, an empty set had to be assumed. This is not a typical mereological assumption. This stratification is comparable to what Quine found necessary in order to have a universal class for his New Foundations. If compared with Euclid, the empty set is "that which has no parts". It is the ground for units which are "that by which what exists is one".
There is a power set axiom. However, a similar axiom only collecting proper parts is included as needed to form the first-order identity. This, too, is comparable to Quine whose system has Cantorian and non-Cantorian classes. In order for the set universe to be differentiated from its elements, proper parts had to be associated with the membership relation in the sense of a power axiom. Once a first-order identity is described, the usual power set axiom can be defined for the Cantorian "finished classes".
If these things do not sound bad enough, the model theory would necessarily be unacceptable to those committed to a predicative model construction strategy. The mereological or topological emphasis is viewed as a second-order structure in spite of the manipulations to obtain a first-order identity relation. This is consistent with the Tarskian analysis and the Lesniewskian program of research. But, it is non-standard with respect to modern foundational thinking.
In this sense, the system is Brouwerian. Logicism and logical atomism reduce the notion of object to presupposed denotations and treat the universe as Ax(x=x) with respect to ontology. When Leibniz introduced the principle of identity of indiscernibles, he did so while invoking geometric principles. The system interprets the Cantorian theory of ones in relation to his topological ideas as reflecting Leibniz' original statement. This is actually the source of the stratified membership relation. I compare it to Brouwerian ideals in that a focus on geometry is a rejection of the logicist interpretation of Leibniz principle of identity of indiscernibles.
In general, it would be best to view the structure as a closure algebra. The set universe would be the intersection over the empty set. So, the system is closed under arbitrary intersection in the same sense that an axiom of union may be interpreted as arbitrary union. With regard to statements in Aristotle, a choice has been made with regard to what "exists". In naive set theory and set theories such as New Foundations, no distinction is made with respect to partitions in relation to negation. Aristotle remarks that one should not attempt to negate substance. A closure algebra interpretation makes a distinguishing choice of closed sets over open sets. This actually derives from the model-theoretic axiom of foundation. The transitive closures satisfy the closure axioms.
It is a very strong system. It is at least as strong as Tarski's axiom. So, it would be modeled by an inaccessible cardinal or stronger.
Although this system will never be published, it was developed carefully. I hope that these remarks help anyone who might wonder what would be involved in a mathematics based on a part relation. But, if you read Lesniewski, and the paper by Tarski, you will see that much of a Lesniewskian system has nothing to do with the part relation. The part relation had merely been an outcome of his analysis of Russell's paradox, and, he insisted that the paradox should be ignored in the development of foundations because it was the result of a mistaken analysis concerning classes.
http://mathoverflow.net/questions/13568/symmetric-tensor-products-of-irreducible-representations/13575
## Symmetric tensor products of irreducible representations
I wonder if there is a way to compute the symmetric tensor power of irreducible representations for classical Lie algebras: $\mathfrak{so}(n)$, $\mathfrak{sp}(n)$, $\mathfrak{sl}(n)$.
The question is motivated by reading "Introduction to Quantum Groups and Crystal Bases" by Hong, J. and Kang, S.-J. The book provides an algorithm for computing the tensor product of any two irreducible representations for classical Lie algebras. Could it be generalized to symmetric parts of tensor products? Any references are very much appreciated!
For different reasons (I think), I asked some quantum group experts whether their ability to decompose tensor products also worked for symmetric powers of a representation. My impression is that it is an open problem to find a crystal basis approach to symmetric powers. – Marty Jan 31 2010 at 18:44
Unfortunately, I was given the same answer that this is an open "crystal problem" – Eugene Starling Jan 31 2010 at 21:41
## 4 Answers
I assume, since you haven't explicitly stated it, that you're taking these Lie algebras in characteristic 0 -- the question is much harder in positive characteristic (and in particular, the word "compute" will be ambiguous there). With this assumption, one can use theoretical techniques as in David Speyer's answer; also check out Fulton & Harris's Representation Theory book. But I'd also point out that if you have specific irreps in mind, there are computer packages that will do this computation, e.g. LiE; and there's even a web interface here: http://www-math.univ-poitiers.fr/~maavl/LiE/form.html that will compute the answer in low rank.
Yes, you are right, the characteristic I have in mind is zero. Thank you for the reference; the algorithm used there is open source and well commented, with one more useful reference at the end. However, I still hope that the problem is solved somewhere. – Eugene Starling Jan 31 2010 at 21:15
It's maybe worth mentioning a reason why the crystal techniques don't immediately solve this problem: If $V$ and $W$ are representations of $\mathfrak g$, then we can find the corresponding representations of the quantum group $V_v$, $W_v$, and take their tensor product. The crystal basis theory shows you how to combinatorially understand the decomposition of $V_v\otimes W_v$, and so this also solves in turn the problem of decomposing $V\otimes W$. However the "flip map" $\sigma\colon V_v\otimes W_v \to W_v\otimes V_v$ is not an isomorphism of representations (even though the representations are in fact isomorphic). Thus $\sigma$ doesn't induce an automorphism of $V_v\otimes V_v$, and so the "symmetric part" of the quantum version of a second tensor power doesn't immediately make sense.
This gets reflected on the combinatorial side by the fact that if $B_1$ and $B_2$ are the crystals of highest weight representations, then the tensor products $B_1 \boxtimes B_2$ and $B_2 \boxtimes B_1$ are isomorphic, but this isomorphism is not realized by the flip map: one first has to twist by the map which takes a "highest weight" crystal to a "lowest weight" crystal (this has been studied by a number of people: e.g. Berenstein, Henriques-Kamnitzer, and in the context of Littelmann paths by Biane-Bourgerol-O'Connell and probably others too!) So again, at least at first sight, it's not immediately clear what the "symmetric part" of a crystal $B_1 \boxtimes B_1$ should be.
Computing the character of a symmetric power is fairly easy; see this note by Stavros Kousidos. Decomposing that character into irreps is a form of plethysm, and is very hard.
Could you please be more precise. Is it only hard to solve or is it unsolved? Are there cases for which it is solved? – Eugene Starling Jan 31 2010 at 21:34
What David means is: there is an algorithm for computing which simples appear with which multiplicity. However, it is very computationally intensive, so it is not very practical in large cases. – Ben Webster♦ Feb 1 2010 at 0:24
I'll just note: what seems likely to be open at this point is finding a positive algorithm, one where the multiplicities are presented in a subtraction-free way. Crystal bases give such an algorithm for tensor products; I don't think one exists for symmetric powers. – Ben Webster♦ Feb 1 2010 at 0:46
There are known algorithms for this decomposition and, as Ben says, they are not positive. I don't know what their efficiency is, either in practice or in the sense of theoretical complexity. – David Speyer Feb 1 2010 at 12:50
Ok, I see that I should wait a number of years until the positive algorithm will be found!:) – Eugene Starling Feb 1 2010 at 14:19
Here's a very special case for $\mathfrak{gl}_n$ in characteristic 0 (which I have found useful in my work). Let $V$ be the vector representation, and for a partition $\lambda$ with at most $n$ parts, let ${\bf S}_{\lambda}(V)$ denote the corresponding highest weight representation. Then $Sym^n(Sym^2 V) = \bigoplus_{\lambda} {\bf S}_{\lambda}(V)$ where the direct sum is over all partitions $\lambda$ of size $2n$ with at most $n$ parts such that each part of $\lambda$ is even. Similarly, $Sym^n(\bigwedge^2 V) = \bigoplus_{\mu} {\bf S}_{\mu}(V)$ where the direct sum is over all partitions $\mu$ of $2n$ with at most $n$ parts such that each part of the conjugate partition $\mu'$ is even. If you want the corresponding result for $\mathfrak{sl}_n$ we just introduce the equivalence relation $(\lambda_1, \dots, \lambda_n) \equiv (\lambda_1 + r, \dots, \lambda_n + r)$ where $r$ is an arbitrary integer.
One reference for this is Proposition 2.3.8 of Weyman's book Cohomology of Vector Bundles and Syzygies (note that $L_\lambda E$ in that book means a highest weight representation with highest weight $\lambda'$ and not $\lambda$).
Another reference is Example I.8.6 of Macdonald's Symmetric Functions and Hall Polynomials, second edition, which proves the corresponding character formulas.
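These decompositions are easy to experiment with by computer. Here is a small sketch for a Sage session (the character of $Sym^n(Sym^2 V)$ is the plethysm $h_n[h_2]$, and Sage's `SymmetricFunctions` with its `plethysm` method computes the Schur expansion):

```python
# Sketch for a Sage session: decompose Sym^3(Sym^2 V) for gl_n, n >= 3.
Sym = SymmetricFunctions(QQ)
s = Sym.schur()          # Schur basis <-> highest weight representations
h = Sym.homogeneous()    # complete homogeneous basis; h_n <-> Sym^n

# Schur expansion of the plethysm h_3[h_2]:
print(s(h([3]).plethysm(h([2]))))
# s[2, 2, 2] + s[4, 2] + s[6] -- exactly the partitions of 6 with even parts
```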
Thank you for the Weyman reference! The answer in the case of rank-two tensors is known to me; it is related to building invariant tensors for $\mathfrak{so}(n)$ and $\mathfrak{sp}(n)$ by taking tensor powers of the invariant tensor with the lowest rank -- the rank-two symmetric and rank-two antisymmetric, respectively – Eugene Starling Feb 3 2010 at 13:12
http://quant.stackexchange.com/tags/value-at-risk/new
# Tag Info
## New answers tagged value-at-risk
### Is my VaR calculation correct?
You need to use the forecast for both the mean and sigma. It should look something like this:

    forecast = ugarchforecast(modelfit, n.ahead = 1, data = mydata)
    sigma(forecast)
    fitted(forecast)

Then plug these values into the equation: \begin{align} \hat{VaR}_{0.99,T|T-1}&=\hat{\mu}_{T|T-1} + \hat{\sigma}_{T|T-1} \cdot q_{0.99} \end{align} where $T$ is ...
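For illustration, here is a minimal Python sketch of the same plug-in step (the forecast numbers are hypothetical stand-ins for `fitted(forecast)` and `sigma(forecast)` from the fitted model; only the normal quantile is computed):

```python
from scipy.stats import norm

mu_hat = 0.0004     # hypothetical one-step-ahead mean forecast, fitted(forecast)
sigma_hat = 0.0120  # hypothetical one-step-ahead volatility, sigma(forecast)

q_99 = norm.ppf(0.99)               # 0.99 quantile of the standard normal
var_99 = mu_hat + sigma_hat * q_99  # VaR_{0.99, T|T-1}
print(f"99% one-step VaR: {var_99:.4%}")
```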
### Fitting distributions to financial data using volatility model to estimate VaR
The standard answer to your question would be to do maximum likelihood estimation. When you say "plug in $\sigma$", you can show that the sample estimate of $\sigma$ is actually the maximum likelihood estimate of $\sigma$ for the normal distribution. If I can assume that your data are IID, then what you do is use your distribution with parameters ...
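As a concrete sketch of that point (hypothetical simulated returns; `scipy.stats.norm.fit` returns the maximum likelihood estimates, so the fitted scale coincides with the 1/n sample standard deviation):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical IID returns, only to exercise the fit
returns = np.random.default_rng(0).normal(0.0, 0.01, size=1000)

mu_mle, sigma_mle = norm.fit(returns)        # maximum likelihood estimates
print(mu_mle, sigma_mle)
print(returns.mean(), returns.std(ddof=0))   # identical: the MLEs in closed form
```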
http://www.uncommondescent.com/management/a-designed-objects-entropy-must-increase-for-its-design-complexity-to-increase-part-1/
# A Designed Object’s Entropy Must Increase for Its Design Complexity to Increase – Part 1
September 4, 2012. Posted by scordova.
The common belief is that adding disorder to a designed object will destroy the design (like a tornado passing through a city, to paraphrase Hoyle). Now if increasing entropy implies increasing disorder, creationists will often reason that “increasing entropy of an object will tend to destroy its design”. This essay will argue mathematically that this popular notion among creationists is wrong.
The correct conception of these matters is far more nuanced and almost the opposite of (but not quite) what many creationists and IDists believe. Here is the more correct view of entropy’s relation to design (be it man-made or otherwise):
1. increasing entropy can increase the capacity for disorder, but it doesn’t necessitate disorder
2. increasing an object’s capacity for disorder doesn’t imply that the object will immediately become more disordered
3. increasing entropy in a physical object is a necessary (but not sufficient) condition for increasing the complexity of the design
4. contrary to popular belief, a complex design is a high entropy design, not a low entropy design. The complex organization of a complex design is made possible (and simultaneously improbable) by the high entropy the object contains.
5. without entropy there is no design
If there is one key point it is: Entropy makes design possible but simultaneously improbable. And that is the nuance that many on both sides of the ID/Creation/Evolution controversy seem to miss.
The notion of entropy is foundational to physics, engineering, information theory and ID. These essays are written to provide a discussion on the topic of entropy and its relationship to other concepts such as uncertainty, probability, microstates, and disorder. Much of what is said will go against popular understanding, but the aim is to make these topics clearer. Some of the math will be in a substantially simplified form, so apologies in advance to the formalists out there.
Entropy may refer to:
1. Thermodynamic (Statistical Mechanics) entropy – measured in Joules/Kelvin, dimensionless units, degrees of freedom, or (if need be) bits
2. Shannon entropy – measured in bits or dimensionless units
3. Algorithmic entropy or Kolmogorov complexity – also measured in bits, but dealing with the compactness of a representation. A file that can be compressed substantially has low algorithmic entropy, whereas a file that can't be compressed evidences high algorithmic entropy (Kolmogorov complexity). Both Shannon entropy and algorithmic entropy lie within the realm of information theory, but by default, unless otherwise stated, most people take "entropy" in information theory to mean Shannon entropy.
4. disorder in the popular sense – no real units assigned, often not precise enough to be of scientific or engineering use. I’ll argue the term “disorder” is a misleading way to conceptualize entropy. Unfortunately, the word “disorder” is used even in university science books. I will argue mathematically why this is so…
The reason the word entropy is used in the disciplines of Thermodynamics, Statistical Mechanics and Information Theory is that there are strong mathematical analogies. The evolution of the notion of entropy began with Clausius who also coined the term for thermodynamics, then Boltzmann and Gibbs related Clausius’s notions of entropy to Newtonian (Classical) Mechanics, then Shannon took Boltzmann’s math and adapted it to information theory, and then Landauer brought things back full circle by tying thermodynamics to information theory.
How entropy became equated with disorder, I do not know, but the purpose of these essays is to walk through actual calculations of entropy and allow the reader to decide for himself whether disorder can be equated with entropy. My personal view is that Shannon entropy and Thermodynamic entropy cannot be equated with disorder, even though the lesser-known algorithmic entropy can. So in general entropy should not be equated with disorder. Further, the problem of organization (which goes beyond simple notions of order and entropy) needs a little more exploration. Organization sort of stands out as a quality that seems difficult to assign numbers to.
The calculations that follow are meant to illustrate how I arrived at some of my conclusions.
First I begin by calculating Shannon entropy for simple cases. Thermodynamic entropy will be covered in Part II.
Bill Dembski actually alludes to Shannon entropy in his latest offering on Conservation of Information Made Simple
In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy).
William Dembski
Conservation of Information Made Simple
To elaborate on what Bill said, if we have a fair coin, it can exist in two microstates: heads (call it microstate 1) or tails (call it microstate 2).
After a coin flip, the probability of the coin emerging in microstate 1 (heads) is 1/2. Similarly the probability of the coin emerging in microstate 2 (tails) is 1/2. So let me tediously summarize the facts:
N = Ω(N) = Ω = Number of microstates of a 1-coin system = 2
x1 = microstate 1 = heads
x2 = microstate 2 = tails
P(x1) = P(microstate 1)= P(heads) = probability of heads = 1/2
P(x2) = P(microstate 2)= P(tails) = probability of tails = 1/2
Here is the process for calculating the Shannon Entropy of a 1-coin information system starting with Shannon’s famous formula:
$\large I=-\sum_{i=1}^{N} {p({x}_{i})\log_{2}p(x_{i})}$
$\large =-p({x}_{1})\log_{2}p(x_{1})-p({x}_{2})\log_{2}p(x_{2})$
$\large =-p(\text{heads})\log_{2}p(\text{heads})-p(\text{tails})\log_{2}p(\text{tails})$
$\large =-(\frac{1}{2})\log_{2}(\frac{1}{2})-(\frac{1}{2})\log_{2}(\frac{1}{2})$
$\large =\frac{1}{2}+\frac{1}{2}= 1 = 1 \text { bit}$
where I is the Shannon entropy (or measure of information).
This method seems a rather torturous way to calculate the Shannon entropy of a single coin. A slightly simpler method exists if we take advantage of the fact that each microstate of the coin (heads or tails) is equiprobable, and thus conforms to the fundamental postulate of statistical mechanics; we can then calculate the number of bits by simply taking the logarithm of the number of microstates, as is done in statistical mechanics.
$\large I=-\sum_{i=1}^{N} {p({x}_{i})\log_{2}p(x_{i})}$
$\large =\log_{2}\Omega =\log_{2}(2)=1=1 \text{ bit}$
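Both routes are easy to check numerically; a minimal Python sketch (standard library only; the probabilities and microstate count are the ones defined above):

```python
import math

def shannon_entropy(probs):
    # Shannon's formula: I = -sum of p_i * log2(p_i)
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 bit, the torturous route
print(math.log2(2))                 # 1.0 bit, log2 of the microstate count
```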
Now compare this equation of the Shannon entropy in information theory
$\large I=\log_{2}\Omega$
to Boltzmann entropy from statistical mechanics and thermodynamics
$\large S=k_{b}\ln\Omega$
and even more so using different units whereby kb=1
$\large S=\ln\Omega$
The similarities are not an accident. Shannon’s ideas of information theory are a descendant of Boltzmann’s ideas from statistical mechanics and thermodynamics.
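With $k_b = 1$, the only difference between the two formulas is the base of the logarithm, i.e., the unit; a short Python check (using the 3-coin system examined next):

```python
import math

omega = 8                  # microstates of the 3-coin system below
S = math.log(omega)        # Boltzmann-style entropy in nats (k_b = 1)
I = math.log2(omega)       # Shannon entropy in bits
print(S / math.log(2), I)  # ~3.0 and 3.0: same quantity, different units
```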
To explore Shannon entropy further, let us suppose we have a system of 3 distinct coins. The Shannon entropy relates the amount of information that will be gained by observing the collective state (microstate) of the 3 coins.
First we have to compute the number of microstates or ways the system of coins can be configured. I will lay them out specifically.
microstate 1 = H H H
microstate 2 = H H T
microstate 3 = H T H
microstate 4 = H T T
microstate 5 = T H H
microstate 6 = T H T
microstate 7 = T T H
microstate 8 = T T T
N = Ω(N) = Ω = Number of microstates of a 3-coin system = 8
So there are 8 microstates or outcomes the system can realize. The Shannon entropy can be calculated in the torturous way:
$\small I=-\sum_{i=1}^{N} {p({x}_{i})\log_{2}p(x_{i})}$
$=-p(\text{hhh})\log_{2}p(\text{hhh}) -p(\text{hht})\log_{2}p(\text{hht}) -p(\text{hth})\log_{2}p(\text{hth}) -p(\text{htt})\log_{2}p(\text{htt}) -p(\text{thh})\log_{2}p(\text{thh}) -p(\text{tht})\log_{2}p(\text{tht}) -p(\text{tth})\log_{2}p(\text{tth}) -p(\text{ttt})\log_{2}p(\text{ttt})$
$= -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8})$
$=3=3 \text{ bits}$
or simply taking the logarithm of the number of microstates:
$I=\log_{2}\Omega =\log_{2}8=3=3 \text{ bits}$
It can be shown that the Shannon entropy of a system of N distinct coins is equal to N bits. That is, a system with 1 coin has 1 bit of Shannon entropy, a system with 2 coins has 2 bits of Shannon entropy, a system of 3 coins has 3 bits of Shannon entropy, etc.
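A short Python sketch confirms the pattern by brute-force enumeration of the microstates (for 500 coins enumeration is hopeless, but the $\log_{2}\Omega$ shortcut still applies):

```python
import itertools, math

def entropy_by_enumeration(n):
    # All 2^n equiprobable microstates of an n-coin system
    microstates = list(itertools.product("HT", repeat=n))
    p = 1.0 / len(microstates)
    return -sum(p * math.log2(p) for _ in microstates)

for n in (1, 2, 3):
    print(n, entropy_by_enumeration(n))  # 1.0, 2.0, 3.0 bits

print(math.log2(2 ** 500))  # 500.0 bits for the 500-coin system
```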
Notice that the more microstates there are, the greater the uncertainty about which microstate the system will be found in. Equivalently, the more microstates there are, the more improbable it is that the system will be found in any given microstate. Hence, entropy is sometimes described in terms of improbability or uncertainty or unpredictability. But we must be careful here: uncertainty is not the same thing as disorder. That is a subtle but important distinction.
So what is the Shannon Entropy of a system of 500 distinct coins? Answer: 500 bits, or the Universal Probability Bound.
By way of extension, if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy. This illustrates the principle that more complex designs require larger Shannon entropy to support the design. It cannot be otherwise. Design requires the presence of entropy, not absence of it.
Suppose we found that a system of 500 coins were all heads, what is the Shannon entropy of this 500-coin system? Answer: 500 bits. No matter what configuration the system is in, whether ordered (like all heads) or disordered, the Shannon entropy remains the same.
Now suppose a small tornado went through the room where the 500 coins resided (with all heads before the tornado), what is the Shannon entropy after the tornado? Same as before, 500-bits! What may arguably change is the algorithmic entropy (Kolmogorov complexity). The algorithmic entropy may go up, which simply means we can’t represent the configuration of the coins in a compact sort of way like saying “all heads” or in the Kleene notation as H*.
Amusingly, if in the aftermath of the tornado’s rampage, the room got cooler, the thermodynamic entropy of the coins would actually go down! Hence the order or disorder of the coins is independent not only of the Shannon entropy but also the thermodynamic entropy.
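Kolmogorov complexity itself is uncomputable, but compressed size gives a crude, computable stand-in for it; a minimal Python sketch, with the 500-coin configurations written out as strings of H and T (`zlib` here is only a proxy, not a true measure of algorithmic entropy):

```python
import random, zlib

ordered = "H" * 500                                            # all heads
scrambled = "".join(random.choice("HT") for _ in range(500))   # post-tornado

# Compressed size is a rough upper bound on algorithmic entropy;
# the Shannon entropy of the coin system is 500 bits in both cases.
print(len(zlib.compress(ordered.encode())))    # small: "all heads" has a short description
print(len(zlib.compress(scrambled.encode())))  # larger: no compact description exists
```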
Let me summarize the before and after of the tornado going through the room with the 500 coins:
BEFORE : 500 coins all heads, Temperature 80 degrees
Shannon Entropy : 500 bits
Algorithmic Entropy (Kolmogorov complexity): low
Thermodynamic Entropy : some finite starting value
AFTER : 500 coins disordered
Shannon Entropy : 500 bits
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy : lower if the temperature is lower, higher if the temperature is higher
Now, to help disentangle the concepts a little further, consider three computer files:
File_A : 1 gigabit of binary numbers randomly generated
File_B : 1 gigabit of all 1's
File_C : 1 gigabit encrypted JPEG
Here are the characteristics of each file:
File_A : 1 gigabit of binary numbers randomly generated
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly disorganized
inference : not designed
File_B : 1 gigabit of all 1's
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : designed (with qualification, see note below)
File_C : 1 gigabit encrypted JPEG
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : extremely designed
Notice that one cannot ascribe high levels of improbable design based on the Shannon entropy or algorithmic entropy alone, without some qualification. The existence of improbable design depends on the existence of high Shannon entropy, but is somewhat independent of algorithmic entropy. Further, to my knowledge, there is not really a metric for organization that is separate from Kolmogorov complexity; this notion needs a little more exploration and is beyond my knowledge base.
Only in rare cases will high Shannon entropy and low algorithmic entropy (Kolmogorov complexity) result in a design inference. One such example is 500 coins all heads. The general method to infer design (including man-made designs) is that the object:
1. has High Shannon Entropy (high improbability)
2. conforms to an independent (non-postdictive) specification
In contrast to the design of coins being all heads where the Shannon entropy is high but the algorithmic entropy is low, in cases like software or encrypted JPEG files, the design exists in an object that has both high Shannon entropy and high algorithmic entropy. Hence, the issues of entropy are surely nuanced, but on balance entropy is good for design, not always bad for it. In fact, if an object evidences low Shannon entropy, we will not be able to infer design reliably.
The reader might be disturbed at my final conclusion in as much as it grates against popular notions of entropy and creationist notions of entropy. But well, I’m no stranger to this controversy. I explored Shannon entropy in this thread because it is conceptually easier than its ancestor concept of thermodynamic entropy.
In Part II (which will take a long time to write) I'll explore thermodynamic entropy and its relationship (or lack thereof) to intelligent design. But in brief, a parallel situation often arises: the more complex a design, the higher its thermodynamic entropy. Why? The simple reason is that more complex designs involve more parts (molecules), and more molecules in general imply higher thermodynamic (as well as Shannon) entropy. So the question of Earth being an open system is a bit beside the point, since entropy is essential for intelligent designs to exist in the first place.
[UPDATE: the sequel to this thread is in Part 2]
Acknowledgements (both supporters and critics):
1. Elizabeth Liddle for hosting my discussions on the 2nd Law at TheSkepticalZone
2. physicist Olegt who offered generous amounts of time in plugging the holes in my knowledge, particularly regarding the Liouville Theorem and Configurational Entropy
3. retired physicist Mike Elzinga for his pedagogical examples and historic anecdotes. HT: the relationship of more weight to more entropy
4. An un-named theoretical physicist who spent many hours teaching his students the principles of Statistical Mechanics and Thermodynamics
5. physicists Andy Jones and Rob Sheldon
6. Neil Rickert for helping me with LaTeX
7. Several others that have gone unnamed
NOTE:
[UPDATE and correction: gpuccio was kind enough to point out that in the case of File_B, the design inference isn't necessarily warranted. It's possible an accident or programming error or some other reason could make all the bits 1. It would only be designed if that was the designer's intention.]
[UPDATE 9/7/2012]
Boltzmann
“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two of more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.” (Final paragraph of #87, p. 443.)
That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?
Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”
There is no basis in physical science for interpreting entropy change as involving order and disorder.
### 62 Responses to A Designed Object’s Entropy Must Increase for Its Design Complexity to Increase – Part 1
1. but on balance entropy is good for design, not always bad for it.
One problem, as with neo-Darwinists, you don’t have any physical example of ‘not always bad’. i.e. you have not one molecular machine or one functional protein coming about by purely material processes. But the IDists and creationists have countless examples of purely material processes degrading as such.
2. SC:
S = k*log W, per Boltzmann
where W is the number of ways that mass and/or energy at ultra-microscopic level may be arranged, consistent with a given Macroscopic [lab-level observable] state.
That constraint is crucial and brings out a key subtlety in the challenge to create functionally specific organisation on complex [multi-part] systems through forces of blind chance and mechanical necessity.
FSCO/I is generally deeply isolated in the space of raw configurational possibilities, and is not normally created by nature working freely. Nature, working freely, on the gamut of our solar system or of the observed cosmos, will blindly sample the space from some plausible, typically arbitrary initial condition, and thereafter it will undergo a partly blind random walk, and there may be mechanical dynamics at work that will impress a certain orderly motion, or the like.
(Think about molecules in a large parcel of air participating in wind and weather systems. The temperature is a metric of avg random energy per degree of freedom of relevant particles, usually translational, rotational and vibrational. At the same time, the body of air as a whole is drifting along in the wind that may reflect planetary scale convection.)
Passing on to Shannon’s entropy in the information context (and noting Jaynes et al on the informational view of thermodynamics that I do not see adequately reflected in your remarks above — there are schools of thought here, cf. my note here on), what Shannon was capturing is average info per symbol transmitted in the case of non equiprobable symbols; the normal state of codes. This turns out to link to the Gibbs formulation of entropy you cite. And, I strongly suggest you look at Harry S Robertson’s Statistical Thermophysics Ch 1 (Prentice) to see what it seems from appearances that your interlocutors have not been telling you. That is, there is a vigorous second school of thought within physics on stat thermo-d, that bridges to Shannon’s info theory.
Wikipedia bears witness to the impact of this school of thought:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.
But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
So, when we see the value of H in terms of uncommunicated micro- level information based on lab observable state, we see that entropy, traditionally understood per stat mech [degrees of micro-level freedom], is measuring the macro-micro info-gap [MmIG], NOT the info we have in hand per macro-observation.
The subtlety this leads to is that when we see a living unicellular species of type x, providing we know the genome, through lab level observability, we know a lot about the specific molecular states from a lab level observation. The MmIG is a lot smaller, as there is a sharp constraint on possible molecular level configs, once we have a living organism in hand. When it dies, the active informationally directed maintenance of such ceases, and spontaneous changes take over. The highly empirically reliable result is well known: decay and breakdown to simpler component molecules.
We also know that in the period of historic observation and record — back to the days of early microscopy 350 years back, this is passed on from generation to generation by algorithmic processes. Such a system is in a programmed, highly constrained state governed by gated encapsulation, metabolic automata that manage an organised flow-through of energy and materials [much of this in the form of assembled smart polymers such as proteins] backed up by a von Neumann self-replicator [vNSR].
We can also infer on this pattern right back to the origins of cell based life, on the relevant macro-traces of such life.
So, how do we transition from Darwin’s warm pond with salts [or the equivalent] state, to the living cell state?
The dominant OOL school, under the methodological naturalism imposition, poses a claimed chem evo process of spontaneous cumulative change. This runs right into the problem of accessing deeply isolated configs spontaneously.
For, sampling theory and common sense alike tell us that pond state — due to the overwhelming bulk of configs and some very adverse chemical reaction equilibria overcome in living systems by gating, encapsulation and internal functional organisation that uses coded data and a steady flow of ATP energy battery molecules to drive algorithmic processes — will be dominant over spontaneous emergence at organised cell states (or any reasonable intermediates).
There is but one empirically confirmed means of getting to FSCO/I, namely design.
In short, on evidence, the info-gap between pond state and cell state, per the value of FSCO/I as sign, is best explained as being bridged by design that feeds in the missing info and through intelligently directed organising work [IDOW] creates in this case a self replicating micro-level molecular nanotech factory. That self replication also uses an information and organisation-rich vNSR, and allows a domination of the situation by a new order of entity, the living cell.
So, it is vital for us to understand at the outset of discussion that the entropy in a thermodynamic system is a metric of missing information on the microstate, given the number of microstate possibilities consistent with the macro-observable state. That is, entropy measures the MmIG.
Where also, the living cell is in a macro-observable state that initially and from generation to generation [via vNSR in algorithmically controlled action on coded information], locks down the number of possible states drastically relative to pond state. The debate on OOL, then is about whether it is a credible argument on observed evidence in the here and now, for pond state, via nature operating freely and without IDOW, to go to cell-state. (We know that IDOW routinely creates FSCO/I, a dominant characteristic of living cells.)
A common argument is that raw injection of energy suffices to bridge the info-gap without IDOW, as the energy flow and materials flows allow escape from “entropy increases in isolated systems.” What advocates of this do not usually disclose, is that raw injection of energy tends to go to heat, i.e. to dramatic rise in the number of possible configs, given the combinational possibilities of so many lumps of energy dispersed across so many mass-particles. That is, MmIG will strongly tend to RISE on heating. Where also, for instance, spontaneously ordered systems like hurricanes are not based on FSCO/I, but instead on the mechanical necessities of Coriolis forces acting on large masses of air moving under convection on a rotating spherical body.
(Cf my discussion here on, remember, I came to design theory by way of examination of thermodynamics-linked issues. We need to understand and visualise step by step what is going on behind the curtain of serried ranks of algebraic, symbolic expressions and forays into calculus and partial differential equations etc. Otherwise, we are liable to miss the forest for the trees. Or, the old Wizard of Oz can lead us astray.)
A good picture of the challenge was given by Shapiro in Sci Am, in challenging the dominant genes-first school of thought, in words that also apply to his own metabolism-first thinking:
RNA’s building blocks, nucleotides, are complex substances as organic molecules go. They each contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern. Many alternative ways exist for making those connections, yielding thousands of plausible nucleotides that could readily join in place of the standard ones but that are not represented in RNA. That number is itself dwarfed by the hundreds of thousands to millions of stable organic molecules of similar size that are not nucleotides [--> and he goes on, with the issue of assembling component monomers into functional polymers and organising them into working structures lurking in the background] . . . .
[--> Then, he flourishes, on the notion of getting organisation without IDOW, merely on opening up the system:] The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.
Orgel’s reply, in a posthumous paper, is equally revealing on the escape-from-IDOW problem:
If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . Could a nonenzymatic “metabolic cycle” have made such compounds available in sufficient purity to facilitate the appearance of a replicating informational polymer?
It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield. Each proposed metabolic cycle, therefore, must be evaluated in terms of the efficiencies and specificities that would be required of its hypothetical catalysts in order for the cycle to persist. Then arguments based on experimental evidence or chemical plausibility can be used to assess the likelihood that a family of catalysts that is adequate for maintaining the cycle could have existed on the primitive Earth . . . .
Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [6]? The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . .
The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.
So, we have to pull back the curtain and make sure we first understand that the sense in which entropy is linked to information in a thermodynamics context is that we are measuring missing info on the micro-state given the macro-state. So, we should not allow the similarity of mathematics to lead us to think that IDOW is irrelevant to OOL, once a system is opened up to energy and mass flows.
In fact, given the delicacy and the unfavourable kinetics and equilibria involved — notice all those catalysing enzymes and ATP energy-battery molecules in life? — the challenge of IDOW is the elephant standing in the middle of the room that ever so many are desperate not to speak about.
KF
3.
gpuccio
Sal:
Great post!
A few comments:
a) Shannon entropy is the basis for what we usually call the “complexity” of a digital string.
b) Regarding the example in:
File_B : 1 gigabit of all 1s
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Organizational characteristics: highly organized
inference : designed
I would say that the inference of design is not necessarily warranted. According to the explanatory filter, in the presence of this kind of compressible order we must first ascertain that no deterministic effect is the cause of the apparent order. IOWs, many simple deterministic causes could explain a series of 1s, however long. Obviously, such a scenario would imply that the system that generates the string is not random, or that the probabilities of 0 and 1 are extremely different. I agree that, if we have assurance that the system is really random and the probabilities are as described, then a long series of 1s allows the design inference.
c) A truly pseudo-random string, which has no formal evidence of order (no compressibility), like the jpeg file, but still conveys very specific information, is certainly the best scenario for design inference. Indeed, as far as I know, no deterministic system can explain the emergence of that kind of object.
d) Regarding the problem of specification, I paste here what I posted yesterday in another thread, as I believe it is pertinent to the discussion here:
“I suppose much confusion derives from Shannon’s theory, which is not, and never has been, a theory about information, but is often considered as such.
Contemporary thought, in the full splendor of its dogmatic reductionism, has done its best to ignore the obvious connection between information and meaning. Everybody talks about information, but meaning is quite a forbidden word. As if the two things could be separated!
I have discussed for days here with darwinists, just trying to have them admit that such a thing as “function” does exist. Another forbidden word.
And even IDists are often afraid to admit that meaning and function cannot even be defined if we do not refer to a conscious being. I have challenged everybody I know to give a definition, any definition, of meaning, function and intent without resorting to conscious experience. How strange: the same concepts on which all our life, and I would say also all our science and knowledge, are based have become forbidden in modern thought. And consciousness itself, what we are, the final medium that cognizes everything, can scarcely be mentioned, if not to affirm that it is an unscientific concept, or even better a concept completely reducible to non-conscious aggregations of things (!!!).
The simple truth is: there is no cognition, no science, no knowledge, without the fundamental intuition of meaning. And that intuition is a conscious event, and nothing else.
There is no understanding of meaning in stones, rivers or computers. Only in conscious beings. And information is only a way to transfer meaning from one conscious being to another. Through material systems, that carry the meaning, but have no understanding of it.
That’s what Shannon considered: what is necessary to transfer information through a material system. In that context, meaning is not relevant, because what we are measuring is only a law of transmission.
The same is true in part for ID. The measure of complexity is a Shannon measure, it has nothing to do with meaning. A random string can be as complex as a meaningful string.
But the concept of specification does relate to meaning, in one of its many aspects, for instance as function. The beautiful simplicity of ID theory is that it measures the complexity necessary to convey a specific meaning. That is simple and beautiful, because it connects the quantitative concept of Shannon complexity to the qualitative aspect of meaning and function.”
4. F/N: I have put the above comment up with a diagram here.
5. F/N 2: We should bear in mind that information arises when we move from an a priori state to an a posteriori one where with significant assurance we are in a state that is to some degree or other surprising. Let me clip my always linked note, here on:
let us now consider in a little more detail a situation where an apparent message is received. What does that mean? What does it imply about the origin of the message . . . or, is it just noise that “got lucky”?
If an apparent message is received, it means that something is working as an intelligible — i.e. functional — signal for the receiver. In effect, there is a standard way to make and send and recognise and use messages in some observable entity [e.g. a radio, a computer network, etc.], and there is now also some observed event, some variation in a physical parameter, that corresponds to it. [For instance, on this web page as displayed on your monitor, we have a pattern of dots of light and dark and colours on a computer screen, which correspond, more or less, to those of text in English.]
Information theory, as Fig A.1 illustrates, then observes that if we have a receiver, we credibly have first had a transmitter, and a channel through which the apparent message has come; a meaningful message that corresponds to certain codes or standard patterns of communication and/or intelligent action. [Here, for instance, through HTTP and TCP/IP, the original text for this web page has been passed from the server on which it is stored, across the Internet, to your machine, as a pattern of binary digits in packets. Your computer then received the bits through its modem, decoded the digits, and proceeded to display the resulting text on your screen as a complex, functional coded pattern of dots of light and colour. At each stage, integrated, goal-directed intelligent action is deeply involved, deriving from intelligent agents -- engineers and computer programmers. We here consider of course digital signals, but in principle anything can be reduced to such signals, so this does not affect the generality of our thoughts.]
Now, it is of course entirely possible, that the apparent message is “nothing but” a lucky burst of noise that somehow got through the Internet and reached your machine. That is, it is logically and physically possible [i.e. neither logic nor physics forbids it!] that every apparent message you have ever got across the Internet — including not just web pages but also even emails you have received — is nothing but chance and luck: there is no intelligent source that actually sent such a message as you have received; all is just lucky noise:
“LUCKY NOISE” SCENARIO: Imagine a world in which somehow all the “real” messages sent “actually” vanish into cyberspace and “lucky noise” rooted in the random behaviour of molecules etc, somehow substitutes just the messages that were intended — of course, including whenever engineers or technicians use test equipment to debug telecommunication and computer systems! Can you find a law of logic or physics that: [a] strictly forbids such a state of affairs from possibly existing; and, [b] allows you to strictly distinguish that from the “observed world” in which we think we live? That is, we are back to a Russell “five-minute-old universe”-type paradox. Namely, we cannot empirically distinguish the world we think we live in from one that was instantly created five minutes ago with all the artifacts, food in our tummies, memories etc. that we experience. We solve such paradoxes by worldview level inference to best explanation, i.e. by insisting that unless there is overwhelming, direct evidence that leads us to that conclusion, we do not live in Plato’s Cave of deceptive shadows that we only imagine is reality, or that we are “really” just brains in vats stimulated by some mad scientist, or we live in a The Matrix world, or the like. (In turn, we can therefore see just how deeply embedded key faith-commitments are in our very rationality, thus all worldviews and reason-based enterprises, including science. Or, rephrasing for clarity: “faith” and “reason” are not opposites; rather, they are inextricably intertwined in the faith-points that lie at the core of all worldviews. Thus, resorting to selective hyperskepticism and objectionism to dismiss another’s faith-point [as noted above!], is at best self-referentially inconsistent; sometimes, even hypocritical and/or — worse yet — willfully deceitful. Instead, we should carefully work through the comparative difficulties across live options at worldview level, especially in discussing matters of fact. And it is in that context of humble self consistency and critically aware, charitable open-mindedness that we can now reasonably proceed with this discussion.)
In short, none of us actually lives or can consistently live as though s/he seriously believes that: absent absolute proof to the contrary, we must believe that all is noise. [To see the force of this, consider an example posed by Richard Taylor. You are sitting in a railway carriage and seeing stones you believe to have been randomly arranged, spelling out: "WELCOME TO WALES." Would you believe the apparent message? Why or why not?]
Q: Why then do we believe in intelligent sources behind the web pages and email messages that we receive, etc., since we cannot ultimately absolutely prove that such is the case?
ANS: Because we believe the odds of such “lucky noise” happening by chance are so small, that we intuitively simply ignore it. That is, we all recognise that if an apparent message is contingent [it did not have to be as it is, or even to be at all], is functional within the context of communication, and is sufficiently complex that it is highly unlikely to have happened by chance, then it is much better to accept the explanation that it is what it appears to be — a message originating in an intelligent [though perhaps not wise!] source — than to revert to “chance” as the default assumption. Technically, we compare how close the received signal is to legitimate messages, and then decide that it is likely to be the “closest” such message. (All of this can be quantified, but this intuitive level discussion is enough for our purposes.)
In short, we all intuitively and even routinely accept that: Functionally Specified, Complex Information, FSCI, is a signature of messages originating in intelligent sources.
Thus, if we then try to dismiss the study of such inferences to design as “unscientific,” when they may cut across our worldview preferences, we are plainly being grossly inconsistent.
Further to this, the common attempt to pre-empt the issue through the attempted secularist redefinition of science as in effect “what can be explained on the premise of evolutionary materialism – i.e. primordial matter-energy joined to cosmological- + chemical- + biological macro- + sociocultural- evolution, AKA ‘methodological naturalism’ ” [ISCID def'n: here] is itself yet another begging of the linked worldview level questions.
For in fact, the issue in the communication situation once an apparent message is in hand is: inference to (a) intelligent — as opposed to supernatural — agency [signal] vs. (b) chance-process [noise]. Moreover, at least since Cicero, we have recognised that the presence of functionally specified complexity in such an apparent message helps us make that decision. (Cf. also Meyer’s closely related discussion of the demarcation problem here.)
More broadly the decision faced once we see an apparent message, is first to decide its source across a trichotomy: (1) chance; (2) natural regularity rooted in mechanical necessity (or as Monod put it in his famous 1970 book, echoing Plato, simply: “necessity”); (3) intelligent agency. These are the three commonly observed causal forces/factors in our world of experience and observation. [Cf. abstract of a recent technical, peer-reviewed, scientific discussion here. Also, cf. Plato's remark in his The Laws, Bk X, excerpted below.]
Each of these forces stands at the same basic level as an explanation or cause, and so the proper question is to rule in/out relevant factors at work, not to decide before the fact that one or the other is not admissible as a “real” explanation.
This often confusing issue is best initially approached/understood through a concrete example . . .
A CASE STUDY ON CAUSAL FORCES/FACTORS — A Tumbling Die: Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.
But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!
This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious — as some are tempted to imagine or assert. [More details . . .] . . . .
The second major step is to refine our thoughts, through discussing the communication theory definition of and its approach to measuring information. A good place to begin this is with British Communication theory expert F. R Connor, who gives us an excellent “definition by discussion” of what information is:
From a human point of view the word ‘communication’ conveys the idea of one person talking or writing to another in words or messages . . . through the use of words derived from an alphabet [NB: he here means, a "vocabulary" of possible signals]. Not all words are used all the time and this implies that there is a minimum number which could enable communication to be possible. In order to communicate, it is necessary to transfer information to another person, or more objectively, between men or machines.
This naturally leads to the definition of the word ‘information’, and from a communication point of view it does not have its usual everyday meaning. Information is not what is actually in a message but what could constitute a message. The word ‘could’ implies a statistical definition in that it involves some selection of the various possible messages. The important quantity is not the actual information content of the message but rather its possible information content.
This is the quantitative definition of information and so it is measured in terms of the number of selections that could be made. Hartley was the first to suggest a logarithmic unit . . . and this is given in terms of a message probability. [p. 79, Signals, Edward Arnold. 1972. Bold emphasis added. Apart from the justly classical status of Connor's series, his classic work dating from before the ID controversy arose is deliberately cited, to give us an indisputably objective benchmark.]
To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the “Shannon sense” – never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1, s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a “typical” long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori — before the fact — probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average sj will be detected correctly a fraction dj of the time, the a posteriori — after the fact — probability of sj is, by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect, how much it surprises us on average when it shows up in our receiver:
I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1
This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that:
I total = Ii + Ij . . . Eqn 2
For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is:
I = log [1/pj] = – log pj . . . Eqn 3
This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so:
Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4
So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is – log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see “wueen” it is most likely to have been “queen.”)
Further to this, we may average the information per symbol in the communication system thus (giving it in terms of -H to make the additive relationships clearer):
- H = p1 log p1 + p2 log p2 + . . . + pn log pn
or, H = – SUM [pi log pi] . . . Eqn 5
H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form . . . [--> previously discussed]
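To make Eqns 1–5 a little more concrete, here is a minimal sketch in Python (an illustration added for this discussion, not from Connor; the probabilities used are made-up demonstration values):

```python
import math

def surprisal(p):
    """Information in bits carried by a symbol of a priori probability p (Eqn 3)."""
    return -math.log2(p)

def shannon_entropy(probs):
    """Average information per symbol, H = -SUM[pi log2 pi] (Eqn 5)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two equiprobable symbols: each carries 1 bit, and H = 1 bit/symbol.
print(surprisal(0.5))                # 1.0
print(shannon_entropy([0.5, 0.5]))   # 1.0

# A rarer symbol carries more information than a common one, as with
# X vs E in English text (these probabilities are merely illustrative).
print(surprisal(0.001))  # ~9.97 bits
print(surprisal(0.127))  # ~2.98 bits

# Additivity for independent symbols (Eqn 4): I(si, sj) = Ii + Ij.
pi, pj = 0.25, 0.125
print(surprisal(pi * pj), surprisal(pi) + surprisal(pj))  # 5.0 5.0
```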
A baseline for discussion.
KF
6. It is interesting to note that in the building of better random number generators for computer programs, a better source of entropy is required:
Cryptographically secure pseudorandom number generator
Excerpt: From an information theoretic point of view, the amount of randomness, the entropy that can be generated is equal to the entropy provided by the system. But sometimes, in practical situations, more random numbers are needed than there is entropy available.
http://en.wikipedia.org/wiki/C....._generator
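The distinction in that excerpt is easy to see in code. A minimal Python sketch, for illustration (not from the cited article): a pseudorandom generator only stretches a small seed deterministically, while the operating system's entropy pool is meant to supply genuinely unpredictable bits:

```python
import random, secrets

# A PRNG seeded with the same value reproduces the same "random" stream:
# its output entropy is bounded by the entropy of the seed it was given.
a = random.Random(42)
b = random.Random(42)
print([a.randint(0, 1) for _ in range(8)])
print([b.randint(0, 1) for _ in range(8)])  # identical sequence

# The secrets module draws on the OS entropy pool (cryptographic quality):
print(secrets.token_hex(8))  # differs on each run
```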
And indeed we find:
Thermodynamics – 3.1 Entropy
Excerpt:
Entropy – A measure of the amount of randomness or disorder in a system.
And the maximum source of randomness in the universe is found to be,,,
Entropy of the Universe – Hugh Ross – May 2010
Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe’s entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated.
http://www.reasons.org/entropy-universe
Roger Penrose – How Special Was The Big Bang?
“But why was the big bang so precisely organized, whereas the big crunch (or the singularities in black holes) would be expected to be totally chaotic? It would appear that this question can be phrased in terms of the behaviour of the WEYL part of the space-time curvature at space-time singularities. What we appear to find is that there is a constraint WEYL = 0 (or something very like this) at initial space-time singularities-but not at final singularities-and this seems to be what confines the Creator’s choice to this very tiny region of phase space.”
,,, there is also a very strong case to be made that the cosmological constant in General Relativity, the extremely finely tuned 1 in 10^120 expansion of space-time, drives, or is deeply connected to, entropy as measured by diffusion:
Big Rip
Excerpt: The Big Rip is a cosmological hypothesis first published in 2003, about the ultimate fate of the universe, in which the matter of universe, from stars and galaxies to atoms and subatomic particles, are progressively torn apart by the expansion of the universe at a certain time in the future. Theoretically, the scale factor of the universe becomes infinite at a finite time in the future.
http://en.wikipedia.org/wiki/Big_Rip
Thus, though neo-Darwinian atheists may claim that evolution is as well established as Gravity, the plain fact of the matter is that General Relativity itself, which is by far our best description of Gravity, testifies very strongly against the entire concept of ‘random’ Darwinian evolution.
also of note, quantum mechanics, which is even stronger than general relativity in terms of predictive power, has a very different ‘source for randomness’ which sets it as diametrically opposed to materialistic notion of randomness:
Can quantum theory be improved? – July 23, 2012
Excerpt: However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (conscious observation) parameters can be chosen independently (free choice, free will assumption) of the other parameters of the theory.,,,
,, the experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power, even when the predictions are completely random.
http://phys.org/news/2012-07-quantum-theory.html
Needless to say, finding ‘free will conscious observation’ to be ‘built into’ quantum mechanics as a starting assumption, which is indeed the driving aspect of randomness in quantum mechanics, is VERY antithetical to the entire materialistic philosophy which demands randomness as the driving force of creativity! Could these two different sources of randomness in quantum mechanics and General relativity be one of the primary reasons of their failure to be unified???
Further notes: Boltzmann, as this following video alludes to,,,
BBC-Dangerous Knowledge
,,,being a materialist, thought of randomness, entropy, as ‘unconstrained’, as would be expected for someone of the materialistic mindset. Yet Planck, a Christian Theist, corrected that misconception of his:
The Austrian physicist Ludwig Boltzmann first linked entropy and probability in 1877. However, the equation as shown, involving a specific constant, was first written down by Max Planck, the father of quantum mechanics, in 1900. In his 1918 Nobel Prize lecture, Planck said: This constant is often referred to as Boltzmann’s constant, although, to my knowledge, Boltzmann himself never introduced it – a peculiar state of affairs, which can be explained by the fact that Boltzmann, as appears from his occasional utterances, never gave thought to the possibility of carrying out an exact measurement of the constant. Nothing can better illustrate the positive and hectic pace of progress which the art of experimenters has made over the past twenty years, than the fact that since that time, not only one, but a great number of methods have been discovered for measuring the mass of a molecule with practically the same accuracy as that attained for a planet.
http://www.daviddarling.info/e.....ation.html
Related notes:
“It from bit symbolizes the idea that every item of the physical world has at bottom – at a very deep bottom, in most instances – an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that things physical are information-theoretic in origin.”
John Archibald Wheeler
Zeilinger’s principle
Zeilinger’s principle states that any elementary system carries just one bit of information. This principle was put forward by Austrian physicist Anton Zeilinger in 1999 and subsequently developed by him to derive several aspects of quantum mechanics. Some have reasoned that this principle, in certain ways, links thermodynamics with information theory. [1]
http://www.eoht.info/page/Zeilinger%27s+principle
“Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…”
Tom Siegfried, Dallas Morning News, 5/14/90 – Quotes attributed to Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin in the article
In the beginning was the bit – New Scientist
Excerpt: Zeilinger’s principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron’s spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg’s uncertainty principle.
http://www.quantum.at/fileadmi.....t/bit.html
Is it possible to find the radius of an electron?
The honest answer would be, nobody knows yet. The current knowledge is that the electron seems to be a ‘point particle’ and has refused to show any signs of internal structure in all measurements. We have an upper limit on the radius of the electron, set by experiment, but that’s about it. By our current knowledge, it is an elementary particle with no internal structure, and thus no ‘size’.
7. F/N: Let’s do some boiling down, for summary discussion in light of the underlying matters above and in onward sources:
1: In communication situations, we are interested in information we have in hand, given certain identifiable signals (which may be digital or analogue, but can be treated as digital WLOG)
2: By contrast, in the thermodynamics situation, we are interested in the Macro-micro info gap [MmIG], i.e. the “missing info” on the ultra-microscopic state of a system, given the lab-observable state of the system.
3: In the former, the inference that we have a signal, not noise, is based on an implicit determination that noise is not credibly likely to be lucky enough to mimic the signal, given the scope of the space of possible configs, vs the scope of apparently intelligent signals.
4: So, we confidently and routinely make that inference to intelligent signal, not noise, on receiving an apparent signal of sufficient complexity, and indeed define a key information-theory metric, the signal-to-noise power ratio, on the characteristic differences between the typical observable features of signals and of noise (a short illustrative sketch follows at the end of this comment).
5: Thus, we routinely infer that signals involving FSCO/I are not improbable on intelligent action (intelligently directed organising work, IDOW), but are so maximally improbable on “lucky noise” that we assign what looks like signal to signal, and what looks like noise to noise, on a routine and uncontroversial basis.
6: In the context of spontaneous OOL etc, we are receiving a signal in the living cell, which is FSCO/I rich.
7: But because there is a dominant evo mat school of thought that assumes or infers that at OOL no intelligence existed or was possible to direct organising work, it is presented as if it were essentially unquestionable knowledge that, without IDOW, FSCO/I arose.
8: In other words, despite never having observed FSCO/I arising in this way and despite the implications of the infinite monkeys/ needle in haystack type analysis, that such is essentially unobservable on the gamut of our solar system or the observed cosmos, this ideological inference is presented as if it were empirically well grounded knowledge.
9: This is unacceptable, for good reasons of avoiding question-begging.
10: By sharpest contrast, on the very same principles of inference to best current explanation of the past in light of dynamics of cause and effect in the present that we can observe as leaving characteristic signs that are comparable to traces in deposits from the past or from remote reaches of space [astrophysics], design theorists infer from the sign, FSCO/I, to its cause in the remote past etc. being — per best explanation on empirical warranting grounds — design, or as I am specifying for this discussion: IDOW.
Let us see how this chain of reasoning is handled, here and elsewhere.
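As flagged in point 4 above, a small sketch of the signal-to-noise power ratio metric in Python (an illustration only; the power values are arbitrary):

```python
import math

def snr_db(p_signal, p_noise):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(p_signal / p_noise)

# A strong, structured signal stands well above the noise floor:
print(snr_db(1.0, 0.001))  # 30.0 dB
# At 0 dB signal and noise power are equal, and discriminating them gets hard:
print(snr_db(1.0, 1.0))    # 0.0 dB
```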
KF
8.
scordova
Sal:
Great post!
Thank you!
A few comments:
a) Shannon entropy is the basis for what we usually call the “complexity” of a digital string.
In Bill Dembski’s literature, yes. Some others will use a different metric for complexity, like algorithmic complexity. Phil Johnson and Stephen Meyer actually refer to algorithmic complexity if you read what they say carefully. In my previously less enlightened writings on the net I used algorithmic complexity.
The point is, this confusion needs a little bit of remedy. Rather than use the word “complexity”, it is easier to say what actual metric one is working from. CSI is really based on Shannon entropy, not algorithmic or thermodynamic entropy.
b) Regarding the example in:
File_B : 1 gigabit of all 1s
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Organizational characteristics: highly organized
inference : designed
I would say that the inference of design is not necessarily warranted.
Yes, thank you. I’ll have to revisit this example. It’s possible a programmer had the equivalent of stuck keys. I’ll update the post accordingly. That’s why I post stuff like this at UD, to help clean up my own thoughts.
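As a side illustration of the Kolmogorov side of the File_A/File_B/File_C contrast, it can be roughly approximated with an ordinary compressor. A minimal Python sketch (compression ratio is only a crude stand-in for algorithmic entropy, and the “encrypted JPEG” is simulated here with random bytes):

```python
import os, zlib

def ratio(data):
    """Compressed size over original size: low for ordered data, ~1 for incompressible data."""
    return len(zlib.compress(data)) / len(data)

size = 10**6  # 1 MB stands in for the gigabit files, for speed

file_b = b"\xff" * size    # all 1-bits: highly compressible, low Kolmogorov complexity
file_a = os.urandom(size)  # random bits: essentially incompressible
file_c = os.urandom(size)  # stands in for an encrypted JPEG, which also looks random

print(ratio(file_b))  # ~0.001
print(ratio(file_a))  # ~1.0
print(ratio(file_c))  # ~1.0 -- compression alone cannot tell File_A from File_C
```

Note the limit this concedes: without the independent specification (e.g. that the file decrypts and parses as a JPEG), the random and encrypted files are indistinguishable to the compressor.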
9.
scordova
gpuccio,
In light of your very insightful criticism, I amended the OP as follows:
inference : designed (with qualification, see note below)
….
NOTE:
[UPDATE and correction: gpuccio was kind enough to point out that in the case of File_B, the design inference isn't necessarily warranted. It's possible an accident or programming error or some other reason could make all the bits 1. It would only be designed if that was the designer's intention.]
10.
mahuna
Complete and utter nonsense. I assume you have absolutely no experience with the specification and development of new systems.
A baseball’s design is refined to eliminate every single ounce of weight or space that does not satisfy the requirements for a baseball.
An airliner’s design is refined to eliminate every single ounce of weight or space that does not satisfy the requirements for an airliner.
But the airliner is much more complex than the baseball and didn’t get that way by accident.
I assume that you assume that an entropic design is launched by its designers like a Mars probe but expected to change/evolve after launch (by increasing its entropy). But as far as we know, most biologic systems are remarkably stable in their designs (um, the oldest known bat fossils are practically identical to modern bats). In “The Edge of Evolution”, Behe in fact bases his argument against Evolution on the fact that there are measurably distinct levels of complexity in biologic systems, and that no known natural mechanism, most especially random degradation of the original design, will get you from a Level 2 system to a more complex Level 3 system.
11.
scordova
mahuna
I assume you have absolutely no experience with the specification and development of new systems.
Before becoming a financier I was an engineer. I have 3 undergraduate degrees in electrical engineering and computer science and mathematics and a graduate engineering degree in applied physics. Of late I try to minimize mentioning it because there are so many things I don’t understand which I ought to, with that level of academic exposure. I fumble through statistical mechanics and thermodynamics and even basic math. I have to solicit expertise on these matters, and I have to admit that I’m wrong many times or don’t know something, or misunderstand something — and willingness to admit mistakes or lack of understanding is a quality which I find lacking among many of my creationist brethren, and even worse among evolutionary biologists.
I worked on aerospace systems, digital telephony, unmanned aerial vehicles, air traffic control systems, security systems. I’ve written engineering specifications and carried them out. Thus
I assume you have absolutely no experience with the specification and development of new systems.
is utterly wrong and a fabrication of your own imagination.
Besides, my experience is irrelevant to this discussion. At issue are the ideas and calculations.
Do you have any comment on my calculations of Shannon entropy or the other entropy scores for the objects listed?
12. kf:
What advocates of this do not usually disclose, is that raw injection of energy tends to go to heat, i.e. to dramatic rise in the number of possible configs, given the combinational possibilities of so many lumps of energy dispersed across so many mass-particles. That is, MmIG will strongly tend to RISE on heating.
Interesting thought and worth considering. I think it is a useful point to bring up when addressing the “open system” red herring put forth by some OOL advocates, but at the end of the day it is really a rounding error on the awful probabilities that already exist. Thus, it probably makes sense to mention it in passing (“Adding energy without direction can actually make things worse.”) if someone is pushing the “just add energy” line of thought, but then keep the attention focused squarely on the heart of the matter.
13. Also, kf, the rejoinder by the “just add energy” advocate will be that the energy typically increases the reaction rate. Therefore, even if there are more states possible, the prebiotic soup can move through the states more quickly.
It is very difficult to analyze and compare the probabilities (number of states and increased reaction rates of various chemicals in the soup) and how they would be affected by adding energy. Perhaps impossible, without making all kinds of additional assumptions about the particular soup and the amount/type of energy, which assumptions would themselves be subject to debate.
Anyway, I think you make an interesting point. The more I think about it, however, the more I think it could lead to getting bogged down in the ‘add energy’ part of the discussion. Seems it might be better to stick with a strategy that forcefully states that the ‘add energy’ argument is a complete red herring and not honor the argument by getting into a discussion of whether adding energy would decrease or increase the already terrible odds with specific chemicals in specific situations.
Anyway, just thinking out loud here . . .
14.
scordova
Regarding the “Add Energy” argument: set off a source equal in energy and power to an atomic bomb — the results are predictable in terms of the designs (or lack thereof) that will emerge in the aftermath.
That is an example where Entropy increases, but so does disorder.
The problem, as illustrated with the 500 coins, is that Shannon entropy and thermodynamic entropy have some independence from the notions of disorder.
A designed system can have 500 bits of Shannon entropy, but so can an undesigned system. Having 500 bits of Shannon entropy says little (in and of itself) about whether something is designed. An independent specification is needed to identify a design; the entropy score is only a part.
We can have:
1. entropy rise and more disorder
2. entropy rise and more order
3. entropy rise and more disorganization
4. entropy rise and more organization
5. entropy rise and destroying design
6. entropy rise and creating design
We can’t make a general statement about what will happen to a design or a disordered system merely because the entropy rises. There are too many other variables to account for before we can say something useful.
15.
16. EA:
When the equilibria are as unfavourable as they are, a faster reaction rate will favour breakdown, as is seen from how we refrigerate to preserve. In effect around room temp, activation processes double for every 8 K increase in temp.
And, the rate of state sampling used in the FSCI calc at 500 bits as revised is actually that for the fastest ionic reactions, not the slower rates appropriate to organic ones. For 1,000 bits, we are using Planck times which are faster than anything else physical. The limits are conservative.
KF
17. F/N: Please note how I speak of a sampling theory result on a config space, which is independent of precise probability calculations; we have only a reasonable expectation to pick up the bulk of the distribution. Remember, we are sampling on the order of one straw to a cubical hay bale 1,000 light years on the side, i.e. comparable in thickness to our galaxy. KF
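To see the scale behind the haystack illustration, a quick back-of-envelope sketch in Python (the resource figures are the rough round numbers commonly used in these discussions, not precise measurements):

```python
# Number of configurations for 500 bits:
configs = 2**500
print(f"{configs:.2e}")   # ~3.27e150

# A generous estimate of blind-sampling resources for the solar system:
atoms   = 10**57  # rough atom count of the solar system
seconds = 10**17  # rough age of the cosmos in seconds
rate    = 10**14  # fast ionic-reaction events per atom per second

samples  = atoms * seconds * rate  # ~1e88 observations
fraction = samples / configs
print(f"{fraction:.2e}")  # ~3e-63 -- the one-straw-to-a-haystack sample fraction
```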
18. SC: Please note the Macro-micro info gap issue I have highlighted above. KF
19.
OlegT helped you? Is this the same olegt that now quote-mines you for brownie points?
olegt’s quote-mine earns him 10 points (out of 10) on the low-integrity scale
20.
scordova
The fact that Oleg and Mike went beyond their natural dislike of creationists and were generous to teach me things is something I’m very appreciative of. I’m willing to endure their harsh comments about me because they have scientific knowledge that is worth learning and passing on to everyone.
21. OT:
Amazing — light filmed at 1,000,000,000,000 Frames/Second! – video (this is so fast that at 9:00 Minute mark of video the time dilation effect of relativity is caught on film)
22.
butifnot
Sal, something’s missing, don’t you think? Does it not ‘feel’ that when we get to thermo and information and design, there is *more* that will not be admitted by a basic rehash, which is where it looks like you’re at.
The bridge between thermo and ‘information’ is fascinating, but here is where it could become really interesting – [what if] actual information has material and non material components! Our accounting may, and may have to, meet this reality.
The difference in entropy of a ‘live’ brain and the same brain dead with a small .22 hole in it is said to be very small, but is it? Perhaps something is missing.
23.
scordova
Part two is now available:
Part II
24.
butifnot
Sal, the time is ripe for a bold new thermo-entropy synthesis! Practically the sum of human knowledge is available in an instant for free. A continuing and wider survey, reaching far beyond the materialists, is needed before this endeavor can (should) be launched to fruition.
Comments on Shannon
Shannon’s concept of information is adequate to deal with the storage and transmission of data, but it fails when trying to understand the qualitative nature of information.
Theorem 3: Since Shannon’s definition of information relates exclusively to the statistical relationship of chains of symbols and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of symbols conveying a meaning.
In order to be able adequately to evaluate information and its processing in different systems, both animate and inanimate, we need to widen the concept of information considerably beyond the bounds of Shannon’s theory. Figure 4 illustrates how information can be represented as well as the five levels that are necessary for understanding its qualitative nature.
Level 1: statistics
Shannon’s information theory is well suited to an understanding of the statistical aspect of information. This theory makes it possible to give a quantitative description of those characteristics of languages that are based intrinsically on frequencies. However, whether a chain of symbols has a meaning is not taken into consideration. Also, the question of grammatical correctness is completely excluded at this level.
http://creation.com/informatio.....nd-biology
The distinction (good question) between data and information (and much else) must be addressed to get to thermo-design-info theory.
25.
scordova
Hi butifnot,
I don’t believe that evolutionists have proven their case.
There are fruitful ways to criticize OOL and Darwinism, I just think that creationists will hurt themselves using the 2nd Law and Entropy arguments (for the reasons outlined in these posts). They need to move on to arguments that are more solid.
What is persuasive to me are the cases of evolutionists leaving the Darwin camp or OOL camp:
Michael Denton
Jerry Fodor
Massimo Piattelli-Palmarini
Jack Trevors
Hubert Yockey
Richard Sternberg
Dean Kenyon
James Shapiro
etc.
Their arguments I find worthwhile. I don’t have any new theories to offer. Such an endeavor would be over my head anyway. I know too little to make much of a contribution to the debate beyond what you have seen at places like UD. Besides, blogs aren’t really for doing science, laboratories and libraries are better places for that. The internet is just for fun…
Sal
26. kf @16:
You make a good point about breakdown.
I’m just looking at the typical approach by abiogenesis proponents from a debating standpoint. I have rarely seen an abiogenesis proponent take careful stock of the many problems with his own preferred OOL scenario, including not only breakdown but also problems with interfering cross-reactions, construction of polymers only on side chains, etc. Typical abiogenesis proponents, when they are willing to debate the topic, are almost wholly engrossed with the raw probabilistic resources — amount of matter in the universe, reaction rates, etc. Rarely do they consider the additional probabilistic hurdles that come with things like breakdown.
Indeed, one of the favorite debating tactics is to assert that because we don’t know all the probabilistic hurdles that need to be overcome we can’t therefore draw any conclusion about the unlikelihood of abiogenesis taking place. Despite the obvious logical failure of such an argument, this is a favorite rhetorical tactic of, for example, Elizabeth Liddle. This is of course absurd, to say the least, but it underscores the mindset.
As a result, when we talk about increased energy, the only thing the abiogenesis proponent will generally allow into their head is the hopeful glimmer of faster reaction rates. That is all they are interested in — more opportunities for chance to do its magic. The other considerations — including things like interfering cross reactions and breakdown of nascent molecules — are typically shuffled aside or altogether forgotten. The unfortunate upshot is that pointing out problems with additional energy (like faster breakdown), typically, will fall on deaf ears.
That, coupled with the fact that any definitive answer on the point requires a detailed analysis of precisely which OOL scenario is being discussed, how dilute the solution is, what kind of environment is present, the operative temperature, the type of energy infused, etc., means that it is nearly impossible to convince the recalcitrant abiogenesis proponent that additional energy can in fact be worse. Thus, from a practical standpoint, we seem better off just focusing on the real issue — information — and noting that energy does nothing to help with that key aspect.
Anyway, way more than you wanted to hear. I’m glad you shared your thoughts on additional energy. I think you have something there worth considering, including a potential hurdle for the occasional abiogenesis proponent who is actually willing to think about things like breakdown.
27. Sal:
Besides, blogs aren’t really for doing science, laboratories and libraries are better places for that. The internet is just for fun…
Spoken like a true academic elitist!
28.
scordova
Trevors and Abel point out the necessity of Shannon entropy (uncertainty) to store information for life to replicate. Hence, they recognize that a sufficient amount of Shannon entropy is needed for life:
Chance and Necessity do not explain the Origin of Life
No natural mechanism of nature reducible to law can explain the high information content of genomes. This is a mathematical truism, not a matter subject to overturning by future empirical data. The cause-and-effect necessity described by natural law manifests a probability approaching 1.0. Shannon uncertainty is a probability function (-log2 p). When the probability of natural-law events approaches 1.0, the Shannon uncertainty content becomes miniscule (-log2 p = -log2 1.0 = 0 uncertainty). There is simply not enough Shannon uncertainty in cause-and-effect determinism and its reductionistic laws to retain instructions for life.
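The shape of -log2 p is the whole point here; a two-line check in Python (an illustration only):

```python
import math

for p in (0.5, 0.9, 0.99, 1.0):
    # Shannon uncertainty, in bits, of an event of probability p
    print(p, -math.log2(p))
# 0.5 -> 1.0 bit; 0.99 -> ~0.014 bits; 1.0 -> 0 bits (printed as -0.0):
# law-like necessity, with probability approaching 1, carries essentially no uncertainty.
```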
29.
butifnot
Their arguments I find worthwhile. I don’t have any new theories to offer. Such an endeavor would be over my head anyway. I know too little to make much of a contribution to the debate beyond what you have seen at places like UD. Besides, blogs aren’t really for doing science, laboratories and libraries are better places for that. The internet is just for fun…
Sorry, I have to bring it down a notch. Just something that has been on my mind a long time
30. EA:
Notice, I consistently speak of sampling a distribution of possibilities in a config space, where the atomic resources of the solar system or observed cosmos are such that only a very small fraction can be sampled. For 500 bits, we talk of a one-straw-sized sample to a cubical haystack 1,000 LY on the side, about as thick as the galaxy.
With all but certainty, a blind, chance and necessity sample will be dominated by the bulk of the distribution. In short, it is maximally implausible that special zones will be sampled.
KF
PS: Have I been sufficiently clear in underscoring that in stat thermo-d the relevant info metric associated with entropy is a measure of the missing info to specify micro state given macro state?
31.
EndoplasmicMessenger
“if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy”
Surely you mean “if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain 32 bits or so of Shannon entropy”
32.
scordova
Surely you mean “if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain 32 bits or so of Shannon entropy”
Surely not.
32 bits (or 64 bits) refers to the number of bits available to address memory, not the actual amount of memory Windows-7 requires.
32 bits can address 2^32 bytes of memory or 4 gigabytes directly.
From the Windows website describing Vista (and the comment applies to other Windows operating systems)
One of the greatest advantages of using a 64-bit version of Windows Vista is the ability to access physical memory (RAM) that is above the 4-gigabyte (GB) range. This physical memory is not addressable by 32-bit versions of Windows Vista.
Windows x64 occupies about 16 gigabytes. A byte being 8 bits implies 16 gigabytes is 16*8 = 128 gigabits.
Thus the Shannon entropy required to represent Windows-7 x64 is on the order of 128 gigabits.
Shannon entropy is the amount of information that can be represented, not the number of bits required to locate an address in memory.
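Spelling out the arithmetic in a tiny Python sketch (using the 16-gigabyte figure quoted above):

```python
# 32-bit addressing reaches 2^32 bytes:
print(2**32)  # 4294967296 bytes = 4 gigabytes, the direct-addressing limit

# Storage footprint of a ~16 gigabyte installation, expressed in bits:
gigabytes = 16
print(gigabytes * 8, "gigabits")  # 128 gigabits -- the Shannon-entropy capacity at issue
```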
33.
Mung
To elaborate on what Bill said, if we have a fair coin, it can exist in two microstates: heads (call it microstate 1) or tails (call it microstate 2).
I have to disagree with Bill. I have a coin in my pocket and it’s not in either the heads state or the tails state.
34.
Mung
Entropy:
The notion of entropy is foundational to physics, engineering, information theory and ID. These essays are written to provide a discussion on the topic of entropy and its relationship to other concepts such as uncertainty, probability, microstates, and disorder. Much of what is said will go against popular understanding, but the aim is to make these topics clearer.
ok, so what is entropy?
First I begin with calculating Shannon entropy for simple cases.
ok, but first, what is “Shannon entropy”?
2. Shannon entropy – measured in bits or dimensionless units
Telling me it’s measured in bits doesn’t tell me what “it” is.
I is the Shannon entropy (or measure of information).
So “Shannon entropy” is a measure of information?
Hence, sometimes entropy is described in terms of improbability or uncertainty or unpredictability.
So Shannon entropy is a measure of what we don’t know? More like a measure of non-information?
35.
scordova
FROM MUNG:
No Sal, 500 pennies gets you 500 bits of copper plated zinc, not 500 bits of information (or Shannon entropy).
Contrast to Bill Dembski’s recent article:
FROM BILL DEMBSKI
In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy). This has the effect of transforming probabilities into bits and of allowing them to be added (like money) rather than multiplied (like probabilities). Thus, a probability of one-eighths, which corresponds to tossing three heads in a row with a fair coin, corresponds to three bits,
I just did a comparable calculation more elaborately, and you missed it. Instead of tossing a single coin 3 times, I had 3 coins tossed 1 time.
FROM WIKI
A single toss of a fair coin has an entropy of one bit. A series of two fair coin tosses has an entropy of two bits. The entropy rate for the coin is one bit per toss
I wrote the analogous situation, except instead of making multiple tosses of a single coin, I did the formula for single tosses of multiple coins. The Shannon entropy is analogous.
I wrote:
It can be shown that the Shannon entropy of a system of N distinct coins is equal to N bits. That is, a system with 1 coin has 1 bit of Shannon entropy, a system with 2 coins has 2 bits of Shannon entropy, a system of 3 coins has 3 bits of Shannon entropy, etc.
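That claim follows directly from the entropy formula H = -SUM[pi log2 pi] applied over the 2^N equiprobable microstates of N fair, independent coins; a short sketch, with those assumptions made explicit:

```python
import math

def coin_system_entropy(n):
    """Shannon entropy of a system of n fair, independent coins, summed over
    its 2^n equiprobable microstates (for large n this is just log2(2^n) = n)."""
    p = 1 / 2**n
    return -sum(p * math.log2(p) for _ in range(2**n))

print(coin_system_entropy(1), coin_system_entropy(2), coin_system_entropy(3))
# 1.0 2.0 3.0 -- and by the same formula 500 coins give 500 bits
```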
36.
timothya
Sal posted this:
Now to help disentangle concepts a little further, consider three computer files:
File_A : 1 gigabit of binary numbers randomly generated
File_B : 1 gigabit of all 1's
File_C : 1 gigabit encrypted JPEG
Here are the characteristics of each file:
File_A : 1 gigabit of binary numbers randomly generated
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly disorganized
inference : not designed
File_B : 1 gigabit of all 1's
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : designed (with qualification, see note below)
File_C : 1 gigabit encrypted JPEG
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : extremely designed
Please tell me that you are joking.
If you didn’t know in advance what the origin of File A and File C were, then you would have no useful evidence from the contents of the two files to decide that one was “highly disorganised” and the other was “highly organised”. Hint: the purpose of encryption is to make the contents of the file approach as closely as possible to a randomly generated string.
File B supports an inference of “highly organised”? How? Why? What if the ground state of the signal is just the continuous emission of something interpreted digitally as “ones” (or “zeroes”, for that matter)? Your argument appears to say that if a system transmits a constant signal, then it must be organised.
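One crude empirical probe of this dispute is compressibility: a highly patterned file compresses well, while random data (and well-encrypted data, which is the point above) does not. A sketch, with zlib as a rough stand-in for algorithmic compressibility (it only gives an upper bound on Kolmogorov complexity):

```python
# Compare compressibility of random bytes vs. a constant all-ones block.
import os
import zlib

size = 1_000_000                           # 1 MB stand-ins for the gigabit files
samples = {
    "random (File_A-like)": os.urandom(size),
    "all ones (File_B-like)": b"\xff" * size,
}
for name, blob in samples.items():
    ratio = len(zlib.compress(blob, 9)) / len(blob)
    print(f"{name}: compresses to {ratio:.4f} of original size")
```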
37. timothya:
Correction, I posted this:
Your argument appears to say that if a system transmits a constant signal, then it must be organised.
I meant to use the term from your post that the valid inference for File B was that the file contents were designed. Clearly a gigabit of “ones” is organised in the sense that it has an evident pattern.
38. scordova:
If you didn’t know in advance what the origin of File A and File C were, then you would have no useful evidence from the contents of the two files to decide that one was “highly disorganised” and the other was “highly organised”.
The fact that I knew File C was a JPEG suggests that I had some advance knowledge of the file being designed. And even if I didn’t know that in advance, the fact that it could be parsed and processed as a JPEG indicates that it is organized.
The fact that I specified in advance that FILE A was created by a random number generator ensures a high probability it will not be designed.
File B had to be restated with qualification as gpuccio pointed out.
The inference of design or lack thereof was based on advance prior knowledge, not some explanatory filter after the fact.
39. timothya:
If you have a means of distinguishing between File X (which contains a genuinely random string) and File Y (which contains a pseudorandom string encoding a human-readable sentence), then fill your boots and publish the method.
The sound you can hear is that of computer security specialists the world over shifting uncomfortably in their seats. Or perhaps of computer security specialists laughing their faces off.
The point is this: if you want to infer “design” solely from the evidence (of the contents of the files, with no a priori knowledge of their provenance), then what is your method?
40.
If you have a means of distinguishing between File X (which contains a genuinely random string) and File Y (which contains a pseudorandom string encoding a human-readable sentence), then fill your boots and publish the method.
I would bet that both strings are the product of agency involvement as blind and undirected processes cannot construct a file.
41. timothya:
Waiting for Sal’s response, I noticed that he posted this:
The fact that I knew File C was a JPEG suggests that I had some advance knowledge of the file being designed. And even if I didn’t know that in advance, the fact that it could be parsed and processed as a JPEG indicates that it is organized.
Exactly. You knew in advance that the file was JPEG-encoded. But even if you didn’t know in advance, the fact that a JPEG decoder could produce a meaningful image proves only that the message was encoded using the JPEG protocol. A magnificent feat of inference.
It might be interesting if you could prove that the message originated from a non-human source. Otherwise not.
But what if you only have the encoded string to work upon, and the JPEG codec generates an apparently random string as output? How do you tell whether the output signal is truly random or that it contains a human-readable message encoded using some other protocol?
If I understand your original post, you claim that design is detectable from the pattern of the encoded message, independent of its mode of encoding.
42. timothya:
Joe posted this:
I would bet that both strings are the product of agency involvement as blind and undirected processes cannot construct a file.
Forget the container and consider the thing contained (I mean, really, do I have to define every parameter of the discussion?). Scientists sensing signals from a pulsar store the results in a computer “file” via a series of truth-preserving transformations (light data to electronics to magnetic marks on a hard drive). Are you arguing that the stored data does not correlate reliably to the original sense data?
43.
I’m saying that if you find a file on a computer then it’s a given some agency put it there.
44.
And timothya- I am still waiting for evidence that natural selection is non-random….
45. timothya:
Joe posted this:
I’m saying that if you find a file on a computer then it’s a given some agency put it there.
Brilliant insight.
Users of computers generate artefacts that are stored in a form determined by the operating system of the computer that they are using (in turn determined by the human designers of the operating system involved). I would be a little surprised if it proved to be otherwise.
However, the reliable transformation of input data to stored data in computer storage doesn’t help Sal with his problem of how to assign “designedness” to an arbitrary string of input data.
He has to show that there is a reliable way to distinguish between a genuinely random string and a pseudorandom string that is hiding a human-readable message, when all he has to go on is the string itself, with no prior knowledge.
If he has such a method, I would be fascinated to know what it is.
46. timothya:
Joe posted this:
And timothya- I am still waiting for evidence that natural selection is non-random….
As far as it matters, you have already had your answer in a different thread.
This thread seems to be focussed on the “how to identify designedness”, so perhaps we should stick to that subject.
47.
timothya- there isn’t any evidence that natural selection is non-random- just so that we are clear.
48. timothya:
Joe
I am clear that you think so. You are in disagreement with almost every practising biologist in the world of science. But that is your choice.
In the meantime, can we focus on Sal’s proposal?
49.
No timothya- I don’t think so. It is obvious. And not one of those biologists can produce any evidence that demonstrates otherwise.
50. TA:
Why not look over in the next thread 23 – 24 (with 16 in context as background)?
Kindly explain the behaviour of the black box that emits ordered vs random vs meaningful text strings of 502 bits:
|| BLACK BOX || –> 502 bit string
As in, explain to us, how emitting the string of ASCII characters for the first 72 or so letters of this post is not an excellent reason to infer to design as the material cause of the organised string. As in, intelligently directed organising work, which I will label for convenience, IDOW.
Can you justify a claim that lucky noise plus mechanical necessity adequately explains such an intelligible string, in the teeth of what sampling theory tells us on the likely outcome of samples on the gamut of the 10^57 atoms of the solar system for 10^17 s, at about 10^14 sa/s — comparable to fast chemical ionic reaction rates — relative to the space of possible configs of 500 bits. (As in 1 straw-size to a cubical hay bale of 1,000 LY on the side about as thick as our galaxy.)
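The arithmetic in that paragraph can be checked directly; a sketch using the round figures quoted above:

```python
# Samples achievable on the stated gamut vs. configurations of a 500-bit string.
from math import log10

samples = 1e57 * 1e17 * 1e14      # atoms x seconds x samples per second
configs = 2.0**500                # possible 500-bit configurations

print("samples  ~ 10^%.1f" % log10(samples))             # ~ 10^88
print("configs  ~ 10^%.1f" % log10(configs))             # ~ 10^150.5
print("fraction ~ 10^%.1f" % log10(samples / configs))   # ~ 10^-62.5
```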
As in, we have reason to infer on FSCO/I as an empirically reliable sign of design, no great surprise, never mind your recirculation of long since cogently answered objections.
(NB: This is what often happens when a single topic gets split up by using rapid succession of threads with comments. That is why I posted a reference thread, with a link back and no comments.)
KF
51. scordova:
But even if you didn’t know in advance, the fact that a JPEG decoder could produce a meaningful image proves only that the message was encoded using the JPEG protocol.
And JPEG encoders are intelligently designed, so the files generated are still products of intelligent design.
A magnificent feat of inference.
Indeed.
It might be interesting if you could prove that the message originated from a non-human source
Humans can make JPEGs, so no need to invoke non-human sources.
52. scordova:
But what if you only have the encoded string to work upon, and the JPEG codec generates an apparently random string as output? How do you tell whether the output signal is truly random or that it contains a human-readable message encoded using some other protocol?
You can’t tell if a string is truly the product of mindless purposeless forces (random is your word), so you have to be agnostic about that. So one must accept that one can make a false inference to randomness (such as when someone wants to be extremely stealthy and encrypt the data).
If it parses with another codec that is available to you, you have good reason to accept the file is designed.
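In code, “parses with an available codec” can be as simple as the following sketch (assuming the Pillow imaging library; note that a parse failure by itself proves nothing):

```python
# Try to interpret a byte blob as an image; success suggests structured encoding.
import io
from PIL import Image

def parses_as_image(blob: bytes) -> bool:
    try:
        with Image.open(io.BytesIO(blob)) as im:
            im.verify()               # cheap structural check, no full decode
        return True
    except Exception:
        return False

print(parses_as_image(b"\x00" * 1024))    # False: not a valid image stream
```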
Beyond that, one might have other techniques, such as those the Norton Symantec team used to determine that Stuxnet was the product of an incredible level of intelligent design:
How Digital Detectives Deciphered Stuxnet
Several layers of masking obscured the zero-day exploit inside, requiring work to reach it, and the malware was huge — 500k bytes, as opposed to the usual 10k to 15k. Generally malware this large contained a space-hogging image file, such as a fake online banking page that popped up on infected computers to trick users into revealing their banking login credentials. But there was no image in Stuxnet, and no extraneous fat either. The code appeared to be a dense and efficient orchestra of data and commands.
….
Instead, Stuxnet stored its decrypted malicious DLL file only in memory as a kind of virtual file with a specially crafted name.
It then reprogrammed the Windows API — the interface between the operating system and the programs that run on top of it — so that every time a program tried to load a function from a library with that specially crafted name, it would pull it from memory instead of the hard drive. Stuxnet was essentially creating an entirely new breed of ghost file that would not be stored on the hard drive at all, and hence would be almost impossible to find.
O Murchu had never seen this technique in all his years of analyzing malware. “Even the complex threats that we see, the advanced threats we see, don’t do this,” he mused during a recent interview at Symantec’s office.
Clues were piling up that Stuxnet was highly professional, and O Murchu had only examined the first 5k of the 500k code. It was clear it was going to take a team to tackle it. The question was, should they tackle it?
….
But Symantec felt an obligation to solve the Stuxnet riddle for its customers. More than this, the code just seemed way too complex and sophisticated for mere espionage. It was a huge adrenaline-rush of a puzzle, and O Murchu wanted to crack it.
“Everything in it just made your hair stand up and go, this is something we need to look into,” he said.
….
As Chien and O Murchu mapped the geographical location of the infections, a strange pattern emerged. Out of the initial 38,000 infections, about 22,000 were in Iran. Indonesia was a distant second, with about 6,700 infections, followed by India with about 3,700 infections. The United States had fewer than 400. Only a small number of machines had Siemens Step 7 software installed – just 217 machines reporting in from Iran and 16 in the United States.
The infection numbers were way out of sync with previous patterns of worldwide infections — such as what occurred with the prolific Conficker worm — in which Iran never placed high, if at all, in infection stats. South Korea and the United States were always at the top of charts in massive outbreaks, which wasn’t a surprise since they had the highest numbers of internet users. But even in outbreaks centered in the Middle East or Central Asia, Iran never figured high in the numbers. It was clear the Islamic Republic was at the center of the Stuxnet infection.
The sophistication of the code, plus the fraudulent certificates, and now Iran at the center of the fallout made it look like Stuxnet could be the work of a government cyberarmy — maybe even a United States cyberarmy.
And that illustrates how a non-random string in a computer might be deduced as the product of some serious ID.
53. F/N: This from OP needs comment:
what is the Shannon Entropy of a system of 500 distinct coins? Answer: 500 bits, or the Universal Probability Bound.
By way of extension, if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy. This illustrates the principle that more complex designs require larger Shannon entropy to support the design. It cannot be otherwise. Design requires the presence of entropy, not absence of it.
Actually, in basic info theory, H strictly is a measure of average info content per element in a system or symbol in a message. Hence it is estimated as a weighted average of information per relevant element.
This, I illustrated earlier from a Shannon 1950/1 paper, in comment 15 in the part 2 thread:
The entropy is a statistical parameter which measures, in a certain sense, how much information is produced on the average for each letter of a text in the language. If the language is translated into binary digits (0 or 1) in the most efficient way, the entropy is the average number of binary digits required per letter of the original language. The redundancy, on the other hand, measures the amount of constraint imposed on a text in the language due to its statistical structure, e.g., in English the high frequency of the letter E, the strong tendency of H to follow T or of V to follow Q. It was estimated that when statistical effects extending over not more than eight letters are considered the entropy is roughly 2.3 bits per letter, the redundancy about 50 per cent.
So, we see the context of usage here.
But what happens when you have a message of N elements?
In the case of a system of N elements, then, the cumulative, Shannon metric based information — notice how I am shifting terms to avoid ambiguity — is, logically, H + H + . . . + H, N times over, or N * H.
And, as was repeatedly highlighted, in the case of the entropy of systems that are in clusters of microstates consistent with a macrostate, the thermodynamic entropy is usefully measured by and understood in terms of the Macro-micro information gap (MmIG), not on a per state or per particle basis but a cumulative basis: we know macro quantities, not the specific position and momentum of each particle, from moment to moment, which given chaos theory we could not keep track of anyway.
A useful estimate per the Gibbs weighed probability sum entropy metric — which is where Shannon reputedly got the term he used from in the first place, on a suggestion from von Neumann — is:
>>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate. >>
Where, Wiki gives a useful summary:
The macroscopic state of the system is defined by a distribution on the microstates that are accessible to a system in the course of its thermal fluctuations. So the entropy is defined over two different levels of description of the given system. The entropy is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if E_i is the energy of microstate i [--> Notice, summation is going to be over MICROSTATES . . . ], and p_i is its probability that it occurs during the system’s fluctuations, then the entropy of the system is
S_sys = – k_B [SUM over i] p_i log p_i
Also, {– log p_i} is an information metric, I_i, i.e. the information we would learn on actually coming to know that the system is in microstate i. Thus, we are taking a scaled info metric on the probabilistically weighted summation of info in each microstate. Let us adjust:
S_sys = k_B [SUM over i] p_i * I_i
This is the weighted average info per possible microstate, scaled by k_B. (Which of course is where the Joules per Kelvin come from.)
In effect the system is giving us a message, its macrostate, but that message is ambiguous over the specific microstate in it.
After a bit of mathematical huffing and puffing, we are seeing that the entropy is linked to the average info per possible microstate.
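A toy numeric illustration of that reading of the Gibbs formula (a sketch; the four-state distribution is invented for the example):

```python
# S_Gibbs = k_B * sum_i p_i * I_i, with I_i = -ln p_i.
import math

k_B = 1.380649e-23                         # Boltzmann constant, J/K
p = [0.5, 0.25, 0.125, 0.125]              # toy microstate probabilities
info = [-math.log(pi) for pi in p]         # I_i in nats
S = k_B * sum(pi * Ii for pi, Ii in zip(p, info))
S_direct = -k_B * sum(pi * math.log(pi) for pi in p)
print(S, S_direct)                         # identical, as the algebra says
```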
Where this is going is of course that when a system is in a state with many possible microstates, it has enormous freedom of being in possible configs, but if the macro signals lock us down to specific states in small clusters, we need to account for how it could be in such clusters, when under reasonable conditions and circumstances, it could be easily in states that are far less specific.
In turn that raises issues over IDOW.
Which then points onward to FSCO/I being a sign of intelligent design.
KF
54. PS: As I head out, I think an estimate of what it would take to describe the state of 1 cc of monoatomic ideal gas at 760 mm Hg and 0 degrees C, i.e. 2.687 * 10^19 particles with 6 degrees of positional and momentum freedom, would help us. Let us devote 32 bits (16 bits to get 4 hex sig figs, and a sign bit plus 15 bits for the binary exponent) to each of the (x, y, z) and (P_x, P_y, P_z) co-ordinates in the phase space. We are talking about:
2.687 * 10^19 particles
x 32 bits per degree of freedom
x 6 degrees of freedom each
_____________
5.159 * 10^21 bits of info
That is, to describe the state of the system at a given instant, we would need 5.159 * 10^21 bits, or 644.9 * 10^18 bytes. That is how many yes/no questions, in the correct order, would have to be answered and processed every clock tick we update. And with 10^-14 s as a reasonable chemical reaction rate, we are seeing a huge amount of required processing to keep track. As to how that would be done, that is anybody’s guess.
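Those figures follow from the stated assumptions (32 bits per degree of freedom, 6 degrees per particle), as this sketch shows:

```python
# Bits needed to record one snapshot of the gas microstate.
particles = 2.687e19           # molecules in 1 cc at 0 degrees C and 760 mm Hg
bits_per_dof = 32              # sign, exponent and 4 hex significant figures
dof_per_particle = 6           # x, y, z and P_x, P_y, P_z

total_bits = particles * bits_per_dof * dof_per_particle
print(total_bits, "bits")                      # ~ 5.159e21 bits
print(total_bits / 8 / 1e18, "exabytes")       # ~ 644.9 EB
```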
55. That is, over 600 exabytes.
56. Mung:
As I have said above, the adoption of the term “entropy” for SMI was an unfortunate event, not because entropy is not SMI, but because SMI is not entropy!
*SMI – Shannon’s Measure of Information
http://www.worldscientific.com......1142/7694
57. Mung:
From the OP:
How entropy became equated with disorder, I do not know …
Arieh Ben-Naim writes:
“It should be noted that Boltzmann himself was perhaps the first to use the “disorder” metaphor in his writing:
…are initially in a very ordered – therefore very improbable – state … when left to itself it rapidly proceeds to the disordered most probable state.
– Boltzmann (1964)
You should note that Boltzmann uses the terms “order” and “disorder” as qualitative descriptions of what goes on in the system. When he defines entropy, however, he uses either the number of states or probability.
Indeed, there are many examples where the term “disorder” can be applied to describe entropy. For instance, mixing two gases is well described as a process leading to a higher degree of disorder. However, there are many examples for which the disorder metaphor fails.”
58. scordova:
Boltzmann
“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.” (Final paragraph of #87, p. 443.)
That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?
Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”
There is no basis in physical science for interpreting entropy change as involving order and disorder.
59. Mung:
…and when left to itself it rapidly proceeds to the most probable state.
There, I fixed it fer ya!
As a bonus you get the “directionality” of entropy.
Ordered and disordered gots nothing to do with it.
60. Mung:
They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask.
Does Lambert answer that question?
What is Entropy, really?
So, entropy is the answer to the age-old question, why me?
61. Mung:
One more time [cf. 56 above, which clips elsewhere . . . ], let me clip Shannon, 1950/1:
The entropy is a statistical parameter which measures, in a certain sense, how much information is produced on the average for each letter of a text in the language. If the language is translated into binary digits (0 or 1) in the most efficient way, the entropy is the average number of binary digits required per letter of the original language. The redundancy, on the other hand, measures the amount of constraint imposed on a text in the language due to its statistical structure, e.g., in English the high frequency of the letter E, the strong tendency of H to follow T or of V to follow Q. It was estimated that when statistical effects extending over not more than eight letters are considered the entropy is roughly 2.3 bits per letter, the redundancy about 50 per cent.
Going back to my longstanding, always linked note, which I have clipped several times over the past few days, here on is how we measure info and avg info per symbol:
To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the “Shannon sense” – never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a “typical” long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M –> pj, and in the limit attains equality. We term pj the a priori — before the fact — probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori — after the fact — probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver:
I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1
This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that:
I total = Ii + Ij . . . Eqn 2
For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is:
I = log [1/pj] = – log pj . . . Eqn 3
This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so:
Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4
So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is – log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see “wueen” it is most likely to have been “queen.”)
Further to this, we may average the information per symbol in the communication system thusly (giving in terms of –H to make the additive relationships clearer):
- H = p1 log p1 + p2 log p2 + . . . + pn log pn
or, H = – SUM [pi log pi] . . . Eqn 5
H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form . . .
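A minimal numeric illustration of Eqn 5 (a sketch, Python standard library only):

```python
# Average information per symbol, H = -sum p_i * log2(p_i).
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(H([0.5, 0.5]))    # 1.0 bit/symbol: fair binary source
print(H([0.9, 0.1]))    # ~0.469 bit/symbol: a biased source carries less per symbol
```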
What this last refers to is the Gibbs formulation of entropy for statistical mechanics, and its implications when the relationship between probability and information is brought to bear in light of the Macro-micro views of a body of matter. That is, when we have a body, we can characterise its state per lab-level thermodynamically significant variables, that are reflective of many possible ultramicroscopic states of constituent particles.
Thus, clipping again from my always linked discussion that uses Robertson’s Statistical Thermophysics, CH 1 [and do recall my strong recommendation that we all acquire and read L K Nash's Elements of Statistical Thermodynamics as introductory reading]:
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) . . . .
For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
And, in more details, (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Samura here and here):
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . .
[deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ]
H({pi}) = – C [SUM over i] pi*ln pi, [. . . "my" Eqn 6]
[--> This is essentially the same as Gibbs Entropy, once C is properly interpreted and the pi's relate to the probabilities of microstates consistent with the given lab-observable macrostate of a system at a given Temp, with a volume V, under pressure P, degree of magnetisation, etc etc . . . ]
[where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . .
[H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . .
Jaynes’ [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life’s Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then — again following Brillouin — identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously “plausible” primordial “soups.” In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale.
By many orders of magnitude, we don’t get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics . . .
Now, of course, as Wiki summarises, the classic formulation of the Gibbs entropy is:
The macroscopic state of the system is defined by a distribution on the microstates that are accessible to a system in the course of its thermal fluctuations. So the entropy is defined over two different levels of description of the given system. The entropy is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if E_i is the energy of microstate i, and p_i is its probability that it occurs during the system’s fluctuations, then the entropy of the system is:
S = -k_B * [sum_i] p_i * ln p_i
This definition remains valid even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates on which the sum is to be done is called a statistical ensemble. Each statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system’s exchanges with the outside, from an isolated system to a system that can exchange one more quantity with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article).
Neglecting correlations between the different possible states (or, more generally, neglecting statistical dependencies between states) will lead to an overestimate of the entropy[1]. These correlations occur in systems of interacting particles, that is, in all systems more complex than an ideal gas.
This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum mechanical case.
It has been shown that the Gibbs entropy is numerically equal to the experimental entropy[2]: dS = delta_Q / T . . .
Looks to me that this is one time Wiki has it just about dead right. Let’s deduce a relationship that shows physical meaning in info terms, where (– log p_i) is an info metric, I_i, here for microstate i, and noting that a sum over i of p_i * log p_i is in effect a frequency/probability weighted average or the expected value of the log p_i expression, and also moving away from natural logs (ln) to generic logs:
S_Gibbs = -k_B * [sum_i] p_i * log p_i
But, I_i = – log p_i
So, S_Gibbs = k_B * [sum_i] p_i * I_i
i.e. S_Gibbs is a constant times the average information required to specify the particular microstate of the system, given its macrostate, the MmIG (macro-micro info gap).
Or, as Wiki also says elsewhere:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.
But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
So, immediately, the use of “entropy” in the Shannon context, to denote not H but N*H, where N is the number of symbols (thus, step by step states emitting those N symbols involved), is an error of loose reference.
Similarly, by exploiting parallels in formulation and insights into the macro-micro distinction in thermodynamics, we can develop a reasonable and empirically supportable physical account of how Shannon information is a component of the Gibbs entropy narrative. Where also Gibbs subsumes the Boltzmann formulation and onward links to the lab-measurable quantity. (Nash has a useful, relatively lucid — none of this topic is straightforward — discussion on that.)
Going beyond, once the bridge is there between information and entropy, it is there. It is not going away, regardless of how inconvenient it may be to some schools of thought.
We can easily see that, for example, information is expressed in the configuration of a string, Z, of elements z1, z2 . . . zN in accordance with a given protocol of assignment rules and interpretation & action rules etc.
Where also, such is WLOG as AutoCAD etc show us that using the nodes and arcs representation and a list of structured strings that record this, essentially any object can be described in terms of a suitably configured string or collection of strings.
So now, we can see that string Z (with each zi possibly taking b discrete states) may represent an island of function that expresses functionally specific complex organisation and associated information. Because of specificity to achieve and keep function, leading to a demand for matching, co-ordinated values of zi along the string, that string has relatively few of the b^N possibilities for N elements with b possible states being permissible. We are at isolated islands of specific function, i.e. cases E from a zone of function T in a space of possibilities W.
(BTW, once b^N exceeds 500 bits on the gamut of our solar system, or 1,000 bits on the gamut of our observable cosmos, that brings to bear all the needle in the haystack, monkeys at keyboards analysis that has been repeatedly brought forth to show why FSCO/I is a useful sign of IDOW — intelligently directed organising work — as empirically credible cause.)
We see then that we have a complex string to deal with, with sharp restrictions on possible configs, that are evident from observable function, relative to the general possibility of W = b^N possibilities. Z is in a highly informational, tightly constrained state that comes from a special zone specifiable on macro-level observable function (without actually observing Z directly). That constraint on degrees of freedom contingent on functional, complex organisation, is tantamount to saying that a highly informational state is a low entropy one, in the Gibbs sense.
Going back to the expression, comparatively speaking there is not a lot of MISSING micro-level info to be specified, i.e. simply by knowing the fact of complex specified information-rich function, we know that we are in a highly restricted special zone T in W. This immediately applies to R/DNA and proteins, which of course use string structures. It also applies to the complex 3-D arrangement of components in the cell, which are organised in ways that foster function.
And of course it applies to the 747 in a flyable condition.
Such easily explains why a tornado passing through a junkyard in Seattle will not credibly assemble a 747 from parts it hits, and it explains why the raw energy and forces of a tornado that hits another, formerly flyable 747, tearing it apart, would render its resulting condition much less specified per function, and in fact result in predictable loss of function.
We will also see that this analysis assumes the functional possibilities of a mass of Al, but is focussed on the issue of functional config and gives it specific thermodynamics and information theory context. (Where also, algebraic modelling is a valid mathematical analysis.)
http://mathhelpforum.com/advanced-algebra/118461-orthogonal-basic-basic-orthogonal-complement.html
# Thread:
1. ## Orthogonal basis and a basis for the orthogonal complement
Hi guys,
I am stuck on these questions, and the final is coming... I would appreciate it if anyone can help me solve them.
Thanks
2. For $A= \begin{bmatrix}6 & -6 & -12 & 0 & 3 & -5 \\ 5 & -5 & -10 & 0 & -2 & 6 \\ -4 & 4 & 8 & 0 & -6 & 6 \\ 4 & -4 & -8 & 0 & -2 & 4\end{bmatrix}$
"null space" is, of course, the set of all vector so that Av= 0:
$\begin{bmatrix}6 & -6 & -12 & 0 & 3 & -5 \\ 5 & -5 & -10 & 0 & -2 & 6 \\ -4 & 4 & 8 & 0 & -6 & 6 \\ 4 & -4 & -8 & 0 & -2 & 4\end{bmatrix}\begin{bmatrix}a \\ b \\ c \\ d \\ e \\ f\end{bmatrix}= \begin{bmatrix}0 \\ 0 \\ 0 \\ 0\end{bmatrix}$.
That matrix row-reduces to $\begin{bmatrix}1 & -1 & -2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$.
The fourth row tells us nothing. The third row tells us that f = 0. The second row tells us that e = 0. The first row tells us that a - b - 2c = 0, or that a = b + 2c. Notice that d does not appear in any of those. A general vector in the null space is of the form (b + 2c, b, c, d, 0, 0) = b(1, 1, 0, 0, 0, 0) + c(2, 0, 1, 0, 0, 0) + d(0, 0, 0, 1, 0, 0). Those three vectors, (1, 1, 0, 0, 0, 0), (2, 0, 1, 0, 0, 0), and (0, 0, 0, 1, 0, 0), form a basis for the null space.
But "the null space of A containing (1, 1, 0, 0, 0, 0)" doesn't really make sense! Yes, (1, 1, 0, 0, 0, 0) is in the null space- "the null space containing (1, 1, 0, 0, 0, 0)" is just the null space itself. If the problem had said "the smallest subspace of the null space containing (1, 1, 0, 0, 0, 0)" then we can just take that smallest subspace to be the multiples of (1, 1, 0, 0, 0, 0)
To find a basis for "the orthogonal complement of the null space" of the given matrix, just repeat what I did to find a basis for the null space. Then look for vectors (a, b, c, d, e, f) that have dot product with each of those basis vectors equal to 0. Depending upon how many vectors you get in the basis (the dimension of the null space) you will have up to 6 equations for a, b, c, d, e, and f.
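If you want to check the computation mechanically, here is a sketch using SymPy (assuming SymPy is available); the orthogonal complement of the null space is exactly the row space of A:

```python
from sympy import Matrix

A = Matrix([[ 6, -6, -12, 0,  3, -5],
            [ 5, -5, -10, 0, -2,  6],
            [-4,  4,   8, 0, -6,  6],
            [ 4, -4,  -8, 0, -2,  4]])

print(A.nullspace())   # basis for {v : Av = 0}
print(A.rowspace())    # basis for the orthogonal complement of the null space
```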
http://math.stackexchange.com/questions/194267/solve-the-equation-2x-1-x?answertab=active
Solve the equation $2^x=1-x$
Solve the equation: $$2^x=1-x$$
I know this is extremely easy and I know the solution using graphical approach. Basically, I can see the solution, but I can't work it out algebraically.
-
You can use the iterative formula: $$x_{n+1}=1-2^{x_{n}}$$ for a numerical approach, which eventually converges to $0$. – Shaktal Sep 11 '12 at 18:05
I would not classify an algebraic solution to this as "extremely easy"! However, Ross Millikan shows that if you keep your eyes open, a convincing solution presents itself. – rschwieb Sep 11 '12 at 18:37
Well, this community has much greater mathematical knowledge than I do, thus what I deem to be medium-hard or hard, you guys find to be easy or medium. That is why I wrote extremely easy, because I thought it is a medium-level exercise for me. – David Hoffman Sep 11 '12 at 18:41
Are there nonzero complex solutions? – Ben Crowell Sep 11 '12 at 23:03
5 Answers
A different perspective on an algebraic solution: you know that for negative $x$, $2^x\lt 1$ but $1-x \gt 1$ (since the latter is $1+(-x)$ and $-x \gt 0$); contrariwise, for positive $x$, $2^x\gt 1$ but $1-x \lt 1$. This means that the only possibility for a solution is $x=0$, and of course by quick algebra that does in fact work.
-
I posted a solution to a problem related to this where I used the Lambert W function which is the solution of $y {\rm e}^{y}=x$. A solution to a more general form $a^x=bx+c$ is here.
Let $1-x = y$ and set $z = y\,\ln(2)$; then we have
$$2^x = 1-x \Rightarrow 2^{1-y} = y \Rightarrow y \,2^y = 2 \Rightarrow y\,{\rm e}^{y\, \ln(2)}=2 \Rightarrow \frac{z}{\ln(2)} {\rm e}^{z} = 2 \Rightarrow z {\rm e}^{z}= 2\,\ln(2)$$ $$\Rightarrow z = W( 2 \,\ln (2) ) \Rightarrow y = \frac{W(2\,\ln(2))}{\ln(2)} \Rightarrow 1-x=\frac{W(2\,\ln(2))}{\ln(2)} \Rightarrow x = 1-\frac{W(2\,\ln(2))}{\ln(2)} = 0\,,$$
since $W(2\,\ln(2))=\ln(2)$.
-
$W(2 \log(2)) = \log(2)$. Your solution is actually the same as $x = 0$. – Ayman Hourieh Sep 11 '12 at 19:29
@AymanHourieh: Thanks. Of course you can look for the other solutions by considering the other branches of the Lambert W function. Here I showed the techniques of solving such equation. – Mhenni Benghorbal Sep 11 '12 at 19:34
Here is a method for finding complex roots that should be understandable to those of us who didn't learn the Lambert W function at our father's knee. $2^{i\phi}=e^{i\phi\ln 2}$ is on the unit circle in the complex plane. Say we put in some fairly large value of $\phi$. Then $1-x=1-i\phi$ is going to be pretty close to the negative imaginary axis. To make the l.h.s. and r.h.s. of the equation have about the same complex phase, we can just pick a value of $\phi$ such that $2^{i\phi}$ is on the imaginary axis. Suppose we try $\phi=(3\pi/2+10\pi)/\ln 2\approx 52.1$. Now the two sides of the equation match pretty well, except that their magnitudes are mismatched. To fix this, tack on $\ln 52/\ln 2\approx 5.7$ as a real part, giving $x=5.7+52.1i$. This is pretty nearly a solution. Now play around with the real and imaginary parts to minimize the error, and you can converge to a pretty good numerical approximation, about $5.7061+51.99191i$. By replacing the $10\pi$ with other multiples of $2\pi$, it should be clear that you can get as many solutions as you want.
-
Solution is $1-W(2\log(2))/\log(2) = 0$, using the Lambert W function. [Added: Mhenni's answer shows the steps to derive this.]
And for the non-real solutions, use the other branches of the Lambert W function. $$\begin{align} &\dots\\&5.430858450 + 42.90897219 i\\ &5.090239758 + 33.81905797 i\\ &4.642925846 + 24.71686730 i\\ &3.988583083 + 15.59001288 i\\ &2.732900763 + 6.418080468 i\\ &0.000000000 + 0.000000000 i\\ &2.732900763 - 6.418080468 i\\ &3.988583083 - 15.59001288 i\\ &4.642925846 - 24.71686730 i\\ &5.090239758 - 33.81905797 i\\ &5.430858450 - 42.90897219 i\\ &\dots \end{align}$$
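These values can be reproduced numerically; a sketch assuming SciPy is installed (scipy.special.lambertw(z, k) evaluates branch k):

```python
# Check that x = 1 - W_k(2 ln 2)/ln 2 satisfies 2^x = 1 - x on several branches.
import numpy as np
from scipy.special import lambertw

for k in range(-2, 3):
    x = 1 - lambertw(2 * np.log(2), k) / np.log(2)
    residual = abs(2**x - (1 - x))
    print(f"branch {k:+d}: x = {x:.6f}, residual = {residual:.2e}")
```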
-
By inspection $0$ is a solution. As the left side is increasing with $x$ and the right decreasing, that is the only solution. Equations that mix exponentials and polynomials usually need the Lambert W function for a "closed form" solution.
-
"By inspection" should be taken to mean: think about what the two graphs, of $y=1-x$ and $y=2^x$, look like. – Michael Hardy Sep 11 '12 at 18:25
@MichaelHardy "By inspection" could mean that I look at the equation, whether there is any graph involved or not, and I try a few easy values in my head and find one that works. You could also inspect it by looking at (thinking about) the graphs. That would be a different type of inspection, but they are both inspection. – Graphth Sep 11 '12 at 18:30
Equations that mix exponentials and polynomials usually have no closed form solution. Some (in particular, those of the form $a^x + b x + c = 0$) can be solved using the Lambert W function. But change the $b x$ to $b x^2$, for example, and Lambert doesn't help. – Robert Israel Sep 11 '12 at 18:36
Well f(x)=2^x intersects the y-axis at y=1, and so does g(x)=1-x. So it is fairly obvious, I know. – David Hoffman Sep 11 '12 at 18:37
I meant that in this case, that's what "by inspection" should mean, since when you do that, you get the answer instantly. – Michael Hardy Sep 11 '12 at 22:32
http://cs.stackexchange.com/questions/7705/notation-conventions-for-tree-data-structures
# Notation Conventions for Tree Data Structures
I'm currently working on a paper describing a new algorithm in computational science. If all goes well, this algorithm will be around for a while (within the specific community). As such, I want to set notation conventions that will not drive other people insane. The primary difference is that this algorithm makes use (logically) of tree data structures while the traditional algorithms in the field have used linear arrays.
The old algorithms therefore could denote specific data as $data_i$ (that is, using LaTeX subscripts). Similarly, one could refer to $data_{i-1}$, and it would be clear that that is the "parent" data to $data_i$. Unfortunately, trees do not support this sort of indexing.
Are there any notation conventions for trees that allow clear, concise descriptions of that sort? I want to be able to talk about an arbitrary bit of data (i.e. $data_i$) and readily discuss parent and child data. The community is mathematician heavy and as such mathematical notation of the sub/superscript sort is favored over the class.property CS style. Note also that these are arbitrary trees; they need not be binary or any other such structure.
Does anyone know of a notation convention that would fit the bill? Alternatively, is there a better place to ask this? Thanks for the help.
-
What kind of tree structure? Are there values in the nodes? Are there values in the leaves? Does the tree have a fixed arity (e.g. binary tree)(as opposed to a rose tree)? Are there any other structural invariants (balanced)? – Tinctorius Jan 2 at 19:22
As the question says, "Note also that these are arbitrary trees; they need not be binary or any other such structure." All nodes/leaves in the tree have values. The current implementation of the algorithm features several trees with identical structures but different sets of data. Some trees contain floats at each node, while others feature a large object at each node. – Ethan Jan 2 at 20:55
## 1 Answer
You can use functions.
Let $node$ be an arbitrary node of the tree.
Then $data(node)$ is data associated with $node$, $children(node)$ is set of child nodes of $node$ and $parent(node)$ is parent of $node$.
For data associated with the $i^{th}$ child of $node$ you can use $data(children(node)_i)$ etc.
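To make the proposed notation concrete, here is a minimal sketch of a tree type in which each mathematical accessor has a direct counterpart (the names are illustrative, not an established convention):

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class Node:
    data: Any                                               # data(node)
    parent: Optional["Node"] = None                         # parent(node)
    children: List["Node"] = field(default_factory=list)    # children(node)

    def add_child(self, data: Any) -> "Node":
        child = Node(data, parent=self)
        self.children.append(child)
        return child

root = Node(3.14)
c = root.add_child(2.71)
assert c.parent is root                  # parent(node)
assert root.children[0].data == 2.71     # data(children(node)_1)
```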
-
for the $i^{th}$ child I would prefer $child(node, i)$, but that's a matter of taste – Simon S Jan 3 at 0:42
Is this an established convention? It's similar to what I was considering, but if there are conventions of which I am unaware, I'd like to respect them. – Ethan Jan 3 at 0:56
It doesn't matter if it is an established or not. If it is good it will establish itself. :) – Pratik Deoghare Jan 3 at 1:25
http://quant.stackexchange.com/questions/4541/missing-factor-in-the-factor-model/4544
|
# Missing factor in the factor model
I am developing a factor model to predict monthly returns. One of the factors alone accounts for an R squared of 0.3 to 0.4 in many single periods, which has surprised me.
However, for some periods the direction of the factor reverses completely, and if I run a single regression over all the periods this factor alone accounts for an R squared of only 0.1. This suggests the factor is robust but that I am missing another factor; correct me if there is another explanation.
Does anyone have an opinion on how to isolate the explanatory power of this factor in the panels, in order to eliminate the effect of the missing factor, or any analytical way to flag the other factor?
-
There is not enough information for me to answer, but regarding your statement "One of the factors alone accounts for an R squared of 0.3 to 0.4 for many single periods, which has surprised me": statistically it is not surprising, and I would need to know more about the factor, what you are modelling, and what your assumptions are to pinpoint whether anything is wrong with it. If you think it is an important factor then keep it by all means. Look at the outliers and try to analyse them. – Ash Nov 16 '12 at 9:38
## 1 Answer
The best option is to identify the other missing factors and include them in your analysis. Depending on your data and assumptions, PCA is a good place to start.
Your data also shows signs of a time-varying correlation with your factor. Hence, it *may* also be appropriate to allow for time-varying regression coefficients or some other technique to account for this feature of the data.
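To make the PCA suggestion concrete, here is a minimal sketch on synthetic data (the panel shape, variable names, and simulated returns are all assumptions of this sketch): regress the panel on the known factor, then look for the missing factor in the principal components of the residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 43, 100                          # months x assets (illustrative)
factor = rng.normal(size=T)             # the known factor's time series
returns = np.outer(factor, rng.normal(size=N)) \
          + rng.normal(scale=0.5, size=(T, N))

# Regress each asset on the known factor and keep the residuals.
X = np.column_stack([np.ones(T), factor])
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
residuals = returns - X @ beta

# PCA on the residual covariance: the top component is a candidate
# for the missing factor.
eigvals, eigvecs = np.linalg.eigh(np.cov(residuals, rowvar=False))
candidate_factor = residuals @ eigvecs[:, -1]   # largest eigenvalue is last
print("residual variance explained:", eigvals[-1] / eigvals.sum())
```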
-
Right, John. I introduced N dummy variables, one for each period, to isolate the time factor; R squared increased from 0.1 to 0.22, which confirms your hypothesis. I was also thinking of taking a shorter period of time. Right now I am running the model over 43 months ending this October, but this forces the Beta to be stationary over almost 4 years, which might not be necessary in practical usage of the model. Do you have any comments? About the missing factor, I have no idea how to identify it; I should try PCA, but interpreting that factor is another pain in the butt. – Amir Yousefi Nov 17 '12 at 0:10
I'd have to know more about what the data is like. – John Nov 17 '12 at 0:13
send me your contact if you would like to take a look at the data. They are monthly returns of S&P100. – Amir Yousefi Nov 18 '12 at 0:13
Not necessary. The returns on the index should explain a significant amount of the variation, but PCA can also help. – John Nov 18 '12 at 1:39
This is a predictive model, so I am locked to the factors available in the preceding period. However, when I added index returns it did not help the model as I had expected. – Amir Yousefi Nov 18 '12 at 2:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9524673223495483, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/70601/primes-of-the-form-p-i-1p-i-2-cdots-p-i-n2k?answertab=votes
|
# Primes of the form $p_{i_1}p_{i_2}\cdots p_{i_n}+2k$
Let $S_{n,k}$ be the set of all numbers that can be written as the product of $n$ odd primes plus $2k$. Are there integers $n>1$ and $k>1$ such that $S_{n,k}$ contains only finitely many primes?
-
Are you asking if $S_{n,k}$ is non-empty for some $n$ and $k$ or do you want to know if $S_{n,k}$ is finite for some $n$ and $k$? (You say "contains finite number of primes" which is a bit unclear to me...) – shaye Oct 7 '11 at 20:29
## 1 Answer
Assuming Schinzel's Hypothesis H, these sets are always infinite:
Step 1: Let $t$ be the product of $n-1$ primes such that $\gcd(t,2k)=1$.
Step 2: Let $f(x)=x$ and $g(x)=tx+2k$. Define $Q(x)=x(tx+2k)$.
Hypothesis H asserts that if $Q(x)$ has no fixed prime divisor $q$, then $f(x)$ and $g(x)$ are simultaneously prime an infinite number of times.
Step 3: Suppose $q$ is a fixed prime divisor of $Q$. If $x=1$ then $q$ divides $Q(1)=t+2k$, which is odd, so $q$ is odd. If $x=2$, then $q$ divides $Q(2)=2(2t+2k)$, and since $q$ is odd, $q$ divides $t+k$.
Step 4: Since $q$ divides both $t+2k$ and $t+k$, we find $q$ divides both $k$ and $t$, contradicting that $\gcd(t,2k)=1$.
Hence $Q(x)$ has no fixed prime divisor, and Hypothesis H applies. Thus, since $g(x)=tx+2k$ has the desired form whenever $x$ is an odd prime, we conclude that $S_{n,k}$ contains infinitely many primes.
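As a quick sanity check (not a proof of anything), here is a sketch that counts small primes of the form $pq+2k$ for the illustrative parameters $n=2$, $k=2$; it assumes SymPy is available:

```python
from sympy import isprime, primerange

k = 2                                    # illustrative choice, so 2k = 4
odd_primes = list(primerange(3, 200))

found = sorted({p * q + 2 * k
                for p in odd_primes
                for q in odd_primes
                if isprime(p * q + 2 * k)})
print(len(found), found[:8])             # 13 = 3*3 + 4, 19 = 3*5 + 4, ...
```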
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9284062385559082, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/83272?sort=newest
|
## Comparing two measures on trees on $n$ vertices
A standard measure on trees on $n$ vertices is the Uniform Spanning Tree (UST) on the complete graph. This is the measure where every tree has equal probability, $1 / n^{n-2}$ by Cayley's formula.
Here is another measure. Take an Erdős–Rényi (i.e. edge independent) random graph $G \in G(n,p)$ with $p$ large enough to ensure that $G$ is asymptotically almost surely connected, and then choose a UST on $G$.
Note that if $p = 1$ these two measures are identical. My guess is that they are close (say in total variation distance), even for much smaller $p$. In particular, suppose that $$p \ge \frac{\log n + \omega}{n},$$ where $\omega \to \infty$ arbitrarily slowly as $n \to \infty$. (This is barely sufficient to guarantee that the probability that $G$ is connected tends to one.)
Are these actually the same measure in disguise? If not, can we say that they are "close" to the same measure, for example, by putting an upper bound on the total variation distance between the two measures that tends to zero as $n \to \infty$?
-
## 2 Answers
It's a nice question. The following is not an answer but too long to put in comment. I think in general you want to ask "how much can you tell of the underlying graph given a sample from the uniform spanning tree". There is a great algorithm due to David Wilson to sample the UST, which consists in growing the tree by successively running loop-erased random walks. See eg http://en.wikipedia.org/wiki/Loop-erased_random_walk.
Here I think you can try to generate the graph at the same time as the loop-erased random walks, which means that for a while you can couple a loop-erased random walk on the complete graph with one on $G(n,p)$. I am not sure this method will allow you to go all the way to the connectivity threshold $p = (1+ \epsilon) \log n / n$, though. Perhaps it would be easier to start with $p=1/2$!
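For reference, here is a minimal sketch of Wilson's algorithm; the adjacency-dictionary input format and the last-exit bookkeeping are implementation choices of this sketch:

```python
import random

def wilson_ust(adj, root=None):
    """Sample a uniform spanning tree of a connected graph.

    adj: dict mapping each vertex to a list of its neighbours.
    Returns the tree as a set of (vertex, parent) edges.
    """
    vertices = list(adj)
    if root is None:
        root = vertices[0]
    in_tree = {root}
    parent = {}
    for v in vertices:
        # Random walk from v, remembering only the last exit from each
        # vertex; keeping last exits performs the loop erasure implicitly.
        u = v
        while u not in in_tree:
            parent[u] = random.choice(adj[u])
            u = parent[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = v
        while u not in in_tree:
            in_tree.add(u)
            u = parent[u]
    return {(u, parent[u]) for u in vertices if u != root}

# Example: a UST of the complete graph K_4 (each tree has probability 1/16).
K4 = {v: [w for w in range(4) if w != v] for v in range(4)}
print(wilson_ust(K4))
```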
-
It's a nice question that I don't know a complete answer to. However it can be seen that the measures are not exactly the same. Take $p$ to be very small. The most likely connected $G$ is a tree, and all trees are equally likely. The next most likely $G$ is a unicyclic graph, whose numbers of spanning trees vary. Calculating $K_{1,3}$ versus $P_4$ shows that $K_{1,3}$ is a little bit more likely.
The exact expression for the probability of a given $T$ is a polynomial in $p$, so if this polynomial is not constant for tiny $p$ it won't be constant for larger $p$ either.
It would be very interesting to know how much the distributions can differ in general.
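One can check this claim by brute force on four vertices. Below is a sketch; the edge probability $p$ is an illustrative choice, and conditioning on connectedness rescales both probabilities equally, so it is omitted. Each tree $T$ receives $\sum_{G \supseteq T,\ G \text{ connected}} P(G)/\tau(G)$, with $\tau(G)$ counted via Kirchhoff's theorem.

```python
from itertools import combinations, product
import numpy as np

n, p = 4, 0.1
all_edges = list(combinations(range(n), 2))   # the 6 possible edges

def spanning_tree_count(edges):
    # Kirchhoff's theorem: any cofactor of the graph Laplacian.
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))

star = frozenset({(0, 1), (0, 2), (0, 3)})    # K_{1,3}
path = frozenset({(0, 1), (1, 2), (2, 3)})    # P_4

prob = {star: 0.0, path: 0.0}
for bits in product([0, 1], repeat=len(all_edges)):
    G = frozenset(e for e, b in zip(all_edges, bits) if b)
    t = spanning_tree_count(G)
    if t == 0:                                # disconnected graph
        continue
    pG = p ** len(G) * (1 - p) ** (len(all_edges) - len(G))
    for T in (star, path):
        if T <= G:
            prob[T] += pG / t
print("P(K_{1,3}) =", prob[star], " P(P_4) =", prob[path])
```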
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9624596834182739, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/85744-find-all-complex-solutions.html
|
1. ## Find all complex solutions
How can I find all complex solutions of the following?
z^4 = -1 + sqrt(3*i)
2. Originally Posted by posix_memalign
How can I find all complex solutions of the following?
z^4 = -1 + sqrt(3*i)
Hi
Isn't it $z^4 = -1 + \sqrt{3}\:i$
One way is to go to exponential form
$z^4 = 2\left(-\frac12 + \frac{\sqrt{3}}{2}\:i\right) = 2\:e^{i\frac{2\pi}{3}}$
Then $z = r\:e^{i\theta}\implies z^4 = r^4\:e^{4i\theta}$
$r^4 = 2$ and $4\theta = \frac{2\pi}{3} + 2k\pi$
$r = 2^{\frac14}$ and $\theta = \frac{\pi}{6} + k\frac{\pi}{2}$
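A quick numerical check of these four roots (a sketch; nothing here is specific to the thread beyond the formulas above):

```python
import cmath

r = 2 ** 0.25
for k in range(4):
    theta = cmath.pi / 6 + k * cmath.pi / 2
    z = r * cmath.exp(1j * theta)
    print(k, z, z ** 4)   # each z**4 should be ~ -1 + 1.732j = -1 + sqrt(3) i
```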
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8541679978370667, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/7951/is-there-a-poset-with-0-with-countable-automorphism-group/7954
|
## Is there a poset with 0 with countable automorphism group?
Is there a poset P with a unique least element, such that every element is covered by finitely many other elements of P (and P is locally finite -- actually, per David Speyer's example, let's say that it satisfies the descending chain condition), and P has countably infinite automorphism group?
The question is motivated by extensions of Sperner's theorem and the LYM inequality to infinite posets. In particular I'm interested in whether you can extend Bollobas' (I believe) probabilistic proof of LYM to the infinite setting in general -- you can for some specific posets. But a prerequisite for a direct extension is for the automorphism group of the poset to be compact, and at the very least we want it to have some nice topological properties. So a poset with countably infinite automorphism group would be a very interesting case.
-
## 2 Answers
It seems unlikely (once you assume d.c.c.). Define the height of an element $x$ in $P$ to be the length of the shortest unrefinable chain from $x$ to $0$.
Let $P_n$ denote the elements of $P$ whose height is at most $n$. Since each element has a finite number of covers, the number of elements in $P_n$ is finite.
By d.c.c., every element of $P$ is in some $P_n$.
Let $G$ denote the automorphisms of $P$ and let $G_n$ denote the automorphisms of $P_n$. $G$ is the inverse limit of the system $G_n$. Let $H_n$ denote the image of $G$ inside $G_n$. (Note that this might not be all of $G_n$, since there could be automorphisms of $P_n$ that don't extend to $P$.) $G$ is also the inverse limit of the system $H_n$.
If the system $H_n$ stabilizes, then $G$ is finite. On the other hand, if $H_n$ doesn't stabilize, then the cardinality of $G$ is an infinite product, i.e. uncountable.
-
Cool! This is more or less what my intuition was, although I didn't know the bit of abstract nonsense about inverse systems that lets you formalize it. Just to be clear, though -- by "it seems unlikely" you mean "no, and here's why not", right? :) – Harrison Brown Dec 6 2009 at 4:00
Yeah. I got more definite as I went along. – Hugh Thomas Dec 6 2009 at 4:19
What about $\mathbb{Z} \cup \{ - \infty \}$? Here $- \infty$ is less than everything, and $\mathbb{Z}$ has the usual order.
You probably want some sort of descending chain condition, to rule this out.
-
That example is not locally finite, because the intervals bounded by the least element are not finite. – David Eppstein Dec 6 2009 at 3:38
Huh. Yeah, that's gross and pathological and nothing at all like the posets I want (I think I've been implicitly assuming a d.c.c.), but you're right, of course. (Of course, now I'm afraid that if I add the d.c.c., there'll be another pathological example!) – Harrison Brown Dec 6 2009 at 3:39
Oh, I see. I was taking "every element is covered by finitely many other elements of P" as the definition of "locally finite", because I didn't know the correct definition. My example does obey the condition on covering elements. – David Speyer Dec 6 2009 at 3:43
With the d.c.c., the condition on covering elements implies local finiteness -- it's just that local finiteness is a more common notion. – Harrison Brown Dec 6 2009 at 3:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9471572041511536, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/80007/topology-on-the-space-of-schwartz-distributions/114367
|
## Topology on the space of Schwartz Distributions
If we equip the Schwartz space $\mathcal{S}$ with its usual Fréchet space topology, then the space of continuous linear functionals $\mathcal{S}^\ast$ is known as the space of Schwartz distributions or tempered distributions. If we equip this space with the strong topology, is there anything we can say about the resulting topological vector space? Evidently, the resulting space will not be a Fréchet space, but perhaps it will have other nice properties. In particular, I am interested in the space of continous linear operators on $\mathcal{S}^\ast$. Is there anything interesting we can say about this space?
Unfortunately, a quick google search did not turn up many sources that dealt with the particulars of the topology on $\mathcal{S}^\ast$, much less the topology on the space of continuous linear operators on $\mathcal{S}^\ast$, so a point in the right direction to a reference would also be greatly appreciated.
EDIT: After thinking about this more deeply, I realize that I am interested in a specific type of operator on $\mathcal{S}^\ast$. $\mathcal{S}$ occurs naturally inside of $L^2$, so after identifying the dual of $L^2$ with itself via the Riesz Representation Theorem, we can in turn regard $\mathcal{S}$ as a subspace of $\mathcal{S}^\ast$. With this in mind, I am interested in the operators on $\mathcal{S}^\ast$ that restrict to operators on $\mathcal{S}$.
The motivation for this question comes from quantum mechanics, where I have in mind the position and momentum operators acting on $\mathcal{S}^\ast$. I am thus interested in the operator algebra they generate. Furthermore, these of course restrict to operators on $\mathcal{S}$, and so I am likewise interseted in the operator algebra of operators on $\mathcal{S}^\ast$ that restrict to operators on $\mathcal{S}$. In particular, I would like to abstractly characterize this space.
As this is the natural space for observables in quantum mechanics (whether physicsts realize it or not), there has to be at least something known about this space. . .
-
The strong dual of a Frechet space is what is called a DF-space: see e.g. ncatlab.org/nlab/show/DF+space I'm afraid I don't know about the space of all linear operators on DF-spaces, but hopefully someone will come along who knows more about this – Yemon Choi Nov 4 2011 at 5:36
Mathematical physicists are well aware of the relevance of this situation to quantum mechanics. I recommend you google "rigged Hilbert space" and "Gelfand triple". What you are looking at is an example (probably the example) of such structures. Note that it arises from the following data: a Hilbert space and an unbounded operator thereon (the standard one-dimensional Schrödinger operator). Any such operator leads to corresponding structures. The fact that the above operator has discrete spectrum and that the eigenvalues grow like a power of $n$ (in this case the square) makes life simpler. – jbc Nov 28 at 7:54
## 5 Answers
A detailed study of topological properties of the space $S^*$ and other similar spaces of distributions is given in the book
I. M. Gel'fand and G. E. Shilov, Generalized functions. Vol. 2: Spaces of fundamental and generalized functions, Academic Press, 1968.
-
Perhaps the following will be interesting for you: the strong topology on S*, obviously, coincides with the topology of uniform convergence on totally bounded sets, and this means that S*, endowed with this topology, is what is called a *stereotype* space. :) The definition is as follows. For a locally convex space X let us denote by X* the dual space (of functionals) endowed with the topology of uniform convergence on totally bounded sets. Then X is said to be *stereotype* if X** is naturally isomorphic (as a locally convex space) to X.
By the way, this kind of duality allows one to consider linear continuous operators $X^*\to X^*$ as linear continuous operators $X\to X$. :)
-
Could you explain in more detail how we can consider linear operators on $X^\ast$ as linear operators on $X$? I am thinking of the more familiar example of $L^p$ and $L^q$ for $1<p<q<\infty$ Hölder conjugates, and in general I don't think linear operators on $L^p$ correspond to operators on $L^q$, at least not in any way I know of. On the other hand, I see how you can obtain a sort-of adjoint operator on $X$ given an operator on $X^\ast$. Was this what you had in mind? – Jonathan Gleason Nov 4 2011 at 21:36
Suppose X is stereotype, and X* is its dual space of linear continuous functionals on X (again with the topology of uniform convergence on totally bounded sets). Then to each operator $A:X\to X$ one can assign a dual operator $A^*:X^*\to X^*$ by the formula $A^*f(x)=f(Ax)$, $x\in X$, $f\in X^*$. The properties of $A^*$ are totally determined by the properties of A. On the other hand, since X is stereotype, the operation of taking the dual operator $A\mapsto A^*$ is bijective, and the properties of A are totally determined by the properties of $A^*$. – Sergei Akbarov Nov 5 2011 at 8:35
Actually, this is a usual trick in all the theories of reflexivity (I think I should have mentioned this from the very beginning). For example, the spaces $L^p$, which you mentioned, are (not only stereotype but also) reflexive in the usual sense (for $1<p<\infty$), as reflexive locally convex spaces. And the Schwartz space S is also reflexive in this traditional sense -- so, perhaps, there was no need to start a discussion about stereotype theory. – Sergei Akbarov Nov 5 2011 at 11:06
But the advantage of stereotype theory is that it makes all the reasonable spaces in analysis reflexive (in the new sense). In particular, all Frechet spaces are stereotype. – Sergei Akbarov Nov 5 2011 at 11:06
As regards your first question, the space of tempered distributions is indeed an example of a special class of locally convex spaces which, while not metrisable, have very nice properties. These are the Silva spaces, which were investigated by the Portuguese mathematician Sebastiao e Silva---they are, by definition, inductive limits of sequences of Banach spaces with compact interconnecting mappings---in your case, these are even nuclear. The original articles are rather inaccessible, but Koethe's monograph on topological vector spaces has a chapter on these spaces. Since everything in sight in your application is nuclear, the operator spaces you are interested in can be represented as tensor products (in any of the standard tensor product topologies---in this case they all coincide). These facts were investigated in considerable detail by some of the great masters of functional analysis, for example, Koethe, Schwartz and Grothendieck, and there is a wealth of material on this topic in their works---for example, Koethe's treatise mentioned above, Schwartz's sequel to his classical "Théorie des Distributions" (in which he considers vector-valued distributions---this is easily available online) and Grothendieck's thesis. One can find more accessible presentations in the recent secondary literature on locally convex spaces.
-
Just as a variation on the other answers: the Schwartz space can easily be written as a (projective) limit of Hilbert spaces $V_s$, basically requiring that both a function and its Fourier transform be in a Levi-Sobolev space. The inclusions for $s>t$ are compact (in fact, trace-class, seen by proving that they're composites of Hilbert-Schmidt) inclusions. This adds a bit to the assertion that it is Frechet, which would indeed give a proj lim of Banach spaces, but if we can have Hilbert spaces, it's even better. Then, as in the question, identifying $L^2(\mathbb R)$ with its own dual (up to complex conjugation, anyway), the dual of $V_s$ for positive $s$ is $V_{-s}$... and we can give it the strong (=Hilbert-space) topology. It is not completely formal-categorical, but easy enough, to show that the dual of the limit is the colimit of the duals, however we topologize them. With the Hilbert-space topologies we obtain (yet another presentation of) the strong topology on tempered distributions.
It is completely formal that the dual of a colimit is the limit of duals, and a virtue of this set-up, with Hilbert spaces, is that the reflexivity is trivial.
Also, operators on $V_{-\infty}=\bigcup_s V_s$ are easy to understand in this presentation. The operators that stabilize $V_{+\infty}=\bigcap_s V_s$ must stabilize each Hilbert space $V_s$, etc.
-
Concerning this: "I am likewise interseted in the operator algebra of operators on S* that restrict to operators on S. In particular, I would like to abstractly characterize this space"
I am not sure that I understand you correctly, but if you want to characterize the operators which are extensions from $S$ to $S^*$, then the following can be the answer. First, some notation: for any $x,y\in S$ we set $(x,y)=\int x(t)y(t)\,dt$; for any operator (all operators here are linear and continuous) $A:S\to S$ we define a transposed operator $A^T:S\to S$ by the formula $(Ax,y)=(x,A^Ty)$ (it does not always exist) and the dual operator $A^*:S^*\to S^*$ by the formula $A^*f(x)=f(Ax)$ (it always exists); and for any operator $B:S^*\to S^*$ its dual $B^*:S\to S$ is given by the formula $B^*x(f)=Bf(x)$ (it also always exists). Let us also consider the operator $F:S\to S^*$, $Fx(y)=(x,y)$.
Then
1) $B:S^*\to S^*$ is an extension of $A:S\to S$ iff $B\circ F=F\circ A$;
2) an operator $A:S\to S$ can be extended to some operator $B:S^*\to S^*$ iff there exists a transposed operator $A^T:S\to S$; in this case $B=(A^T)^*$;
3) an operator $B:S^*\to S^*$ is an extension of some operator $A:S\to S$ iff there exists a transposed operator $(B^*)^T:S\to S$; in this case $A=(B^*)^T$.
Well, the idea "abstractly characterize" is not a precisely defined, but I'll try to give you an idea of what I meant by way of example. If you take a subalgebra of bounded operators on a Hilbert space, this is a unital $C^*$-algebra. Similarly, the Gelfand-Naimark Theorem says that any abstract unital $C^*$-algebra is isomorphic to a subalgebra of all the bounded operators on some Hilbert space. In this way, the structure of a unital $C^*$-algebra abstractly characterizes bounded observables on a Hilbert space. Can something similar be done in this case? – Jonathan Gleason Nov 6 2011 at 0:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365375638008118, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/17826/why-should-i-prefer-bundles-to-surjective-submersions/34434
|
## Why should I prefer bundles to (surjective) submersions?
I hope this question isn't too open-ended for MO --- it's not my favorite type of question, but I do think there could be a good answer. I will happily CW the question if commenters want, but I also want answerers to pick up points for good answers, so...
Let $X,Y$ be smooth manifolds. A smooth map $f: Y \to X$ is a bundle if there exists a smooth manifold $F$ and a covering $U_i$ of $X$ such that for each $U_i$, there is a diffeomorphism $\phi_i : F\times U_i \overset\sim\to f^{-1}(U_i)$ that intertwines the projections to $U_i$. This isn't my favorite type of definition, because it demands existence of structure without any uniqueness, but I don't want to define $F,U_i,\phi_i$ as part of the data of the bundle, as then I'd have the wrong notion of morphism of bundles.
A definition I'm much happier with is of a submersion $f: Y \to X$, which is a smooth map such that for each $y\in Y$, the differential ${\rm d}f|_y : {\rm T}_y Y \to {\rm T}_{f(y)}X$ is surjective. I'm under the impression that submersions have all sorts of nice properties. For example, preimages of points are embedded submanifolds (maybe preimages of embedded submanifolds are embedded submanifolds?).
So, I know various ways that submersions are nice. Any bundle is in particular a submersion, and the converse is true for proper submersions (a map is proper if the preimage of any compact set is compact), but of course in general there are many submersions that are not bundles (take any open subset of $\mathbb R^n$, for example, and project to a coordinate $\mathbb R^m$ with $m\leq n$). But in the work I've done, I haven't ever really needed more from a bundle than that it be a submersion. Then again, I tend to do very local things, thinking about formal neighborhoods of points and the like.
So, I'm wondering for some applications where I really need to use a bundle --- where some important fact is not true for general submersions (or, surjective submersions with connected fibers, say).
-
Doesn't the definition of a smooth manifold demand existence of structure without any uniqueness? (This isn't a rhetorical question - I'm honestly not sure.) – Qiaochu Yuan Mar 11 2010 at 5:42
@Qiaochu. No, you have to specify (say) an equivalence class of atlases to define a smooth manifold. So R with the chart x --> x^3 is a different smooth manifold to R with the obvious chart (though diffeomorphic to it). More interestingly, the action of the homeomorphism group of S^7 on its smooth atlases has 28 orbits. – Tim Perutz Mar 11 2010 at 5:57
Theo, you answered your own question by saying that you like to work locally. Submersions don't have global structure. Take a smooth fibre bundle and delete any closed subset; it's still a submersion. Now try to say something interesting about its topology. Or integrate a vector field on it. – Tim Perutz Mar 11 2010 at 6:02
@Qiaochu: One way to say "smooth manifold" is to talk about maximal atlases, and these are unique. I guess I could use the same device to talk about bundles. So maybe that's not a complaint against them, but it's not a reason to like them any better either. – Theo Johnson-Freyd Mar 11 2010 at 6:30
I see where you're coming from about not liking existential quantifiers in certain definitions, but if they're of a local nature (ie there exists a cover such that on each piece blah blah blah), which they are in the case of bundles, then they're really well behaved! This is the whole point of sheaf theory! – James Borger Mar 11 2010 at 9:46
## 7 Answers
One would be that a fibre bundle $F \to E \to B$ has a homotopy long exact sequence
$$\cdots \to \pi_{n+1} B \to \pi_n F \to \pi_n E \to \pi_n B \to \pi_{n-1} F \to \cdots$$
This isn't true for a submersion, for one, the fibre in a submersion does not have a consistent homotopy-type as you vary the point in the base space.
-
There's no reason I can see for preferring bundles over submersions, unless you need bundles. If you don't need the extra global structure implied by a bundle, then by all means stick to submersions.
-
I'm tempted to accept this answer, as it's closest to what I really believe. But I think Ryan most accurately answered my question as asked. In any case, everyone should vote up Deane. – Theo Johnson-Freyd Mar 31 2010 at 2:44
Consider co-dimension 0. In this case, bundles are covering maps, with all the goodies that they bring. And submersions are just local homeomorphisms - not very exciting compared to coverings.
-
You mean local diffeomorphism, right? It doesn't seem like there is a difference between the two except that the cardinality of the fiber is locally constant. Maybe I am missing something though. – Sean Tilson Aug 4 2010 at 0:39
You write:
So, I'm wondering for some applications where I really need to use a bundle --- where some important fact is not true for general submersions (or, surjective submersions with connected fibers, say).
Actually, I am going to play devil's advocate here: sometimes it's better to have a submersion! This point comes up in a very relevant way in the classical smoothing theory of topological manifolds. Siebenmann (cf. Kirby and Siebenmann's book) defines a moduli space of smoothings of a topological manifold $M$ to be the space of $$(N,f)$$ such that $N$ is smooth and $f: N \to M$ is a homeomorphism.
Siebenmann chooses to topologize this in what seems a funny way: a $k$-simplex of such things is a pair $(N,f)$, where now $N \to \Delta^k$ is a smooth submersion (not necessarily proper if $M$ isn't compact!) and $f: N \to M \times \Delta^k$ is a homeomorphism which is compatible with projection to $\Delta^k$. This gives a $\Delta$-space (a simplicial set w/o degeneracies). Call its geometric realization $\text{Sm}(M)$.
Why doesn't he just topologize families as fiber bundles?
Here's why:
Let ${\cal O}_M$ be the poset of open subsets of $M$ which are abstractly homeomorphic to open balls. The fundamental theorem of smoothing theory asserts that the contravariant functor $\text{Sm} : {\cal O}_M \to \text{Top}$ given by $$U \mapsto \text{Sm}(U)$$ is a "homotopy sheaf" if $\dim M \ge 5$, i.e., the (restriction) map $$\text{Sm}(M) \to \underset{U \in {\cal O}_M} {\text{holim}}\quad \text{Sm}(U)$$ is a homotopy equivalence. This would not be the case if we had defined the families as bundles (rather than as submersions). Note: we cannot appeal to Ehresmann here, as the submersions used to define the $k$-simplices in $\text{Sm}(U)$ are not assumed to be proper.
-
This is probably making a hash of the earlier answers, but bundles are special fibrations; specifically, they are fibrations with (not canonically) isomorphic fibers. And we all like fibrations, right?
-
Good answer! You can lift any curve in the base into the total space of a bundle, but you can't lift it into the total space of a submersion. – Konrad Waldorf Mar 11 2010 at 8:28
Konrad Waldorf, I do not understand. Take the boundary of the mobius band, i.e., the nontrivial Z_2 bundle over S^1. There is no section for the projection. – AndrewLMarshall May 2 2010 at 5:55
@Andrew, perhaps Konrad is liberal with the term "curve". I certainly didn't mean you can lift maps with arbitrary domain --- fibration only means you can lift whole homotopies that already lift at one end. – some guy on the street May 4 2010 at 17:48
There's also the cohomology version of Ryan's answer: the Leray-Serre spectral sequence, which tells you some very nice things about the cohomology of a bundle, and essentially nothing useful about the cohomology of a submersion. You can consider this a particular instance of Tim's comment.
In general, algebraic geometers and homotopy theorists work with bundles (or more generally, fibrations), every day of their lives, and will extremely rarely encounter submersions. Even if you don't want to work in such fields, their existence is a good reason to distinguish bundles from submersions.
-
There is the Leray spectral sequence of a map. It's just much better behaved for a fibration. – Ryan Budney Mar 11 2010 at 21:04
It's a bit unfair to say "nothing useful," but at the same time, I'm not very good at taking cohomology of random constructible sheaves on a space, as opposed to the local systems that show up in Serre for a bundle. – Ben Webster♦ Mar 11 2010 at 21:10
Ben, I disagree with your statement that "algebraic geometers [...] will extremely rarely encounter submersions". We just call them "smooth morphisms". Also, the Leray spectral sequence behaves quite nicely already for flat morphisms, you don't even need smoothness. – Sándor Kovács Jan 24 2011 at 1:21
I'm not sure what you mean by "the Leray spectral sequence works quite nicely for flat morphisms." If you take an arbitrary flat morphism (say the inclusion of a curve minus a point into a curve), a naive interpretation of Leray-Serre gives nonsense; of course this can be fixed, as Ryan points out, but at a significant cost in terms of complication. Of course, things work beautifully for proper smooth maps, but I would call those fibrations; they are in the analytic topology over $\mathbb C$, and behave like them over other fields. – Ben Webster♦ Jan 24 2011 at 4:46
Ben, sorry, I did indeed have proper flat in mind with respect to that comment about Leray. And I am happy to call those fibrations. However, the original question was about bundles. But the main point of my comment was that I think that we still do see submersions on a regular basis, just don't call them that. – Sándor Kovács Jan 24 2011 at 5:16
It probably won't matter which concept you use due to the theorem of Ehresmann. See: http://en.wikipedia.org/wiki/Ehresmann%27s_theorem
It states, roughly, that many surjective submersions are in fact fibre bundles ("many" meaning that this is the case whenever the surjective submersion is proper; I am not sure how dense proper maps are). Is there an approximation theorem for proper maps?
So I think the answer is that you don't have to. Also, (smooth?) fibrant replacement can be done to any map so that you get a LES in homotopy (although the resulting map may no longer be a submersion).
hope this helps, sean
-
Well, for maps like $\mathbb C \setminus \{0\} \to \mathbb R$ given by projection onto the x-axis, it matters. – Ryan Budney Aug 4 2010 at 0:51
This is an interesting comment, it leads me to wonder if it would be good to think of submersions as bundles with singularities. But I guess that is obvious since everything we are working with is a manifold and so locally it would always look like a product. – Sean Tilson Aug 4 2010 at 2:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950502872467041, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/120580/list
|
Revision 2 (corrected three errors):
The whole discussion seems to devolve on whether the empty graph (or empty space) should be considered "connected". Angelo and I are of the school that it should not, but this should be explained since some of the traditional definitions of "connected" apparently allow the empty space to be connected.
A general abstract context is as follows. Let $C$ be a category with finite coproducts with the property that for any two objects $a$, $b$ (whose coproduct is denoted $a+b$), the canonical functor
$$C/a \times C/b \to C/(a+b): (x \to a, y \to b) \mapsto (x + y \to a + b)$$
is an equivalence. Such a category is said to be extensive. The category of topological spaces is extensive, the category of graphs is extensive, any topos is extensive, and there are many, many other examples.
Now, say an object $a$ in an extensive category is connected if the functor
$$\hom(a, -): C \to Set$$
preserves binary coproducts (whence it can be shown to preserve finite coproducts). This is a fundamental definition; see the nLab for an extended discussion. Under this definition, the empty space (the empty graph, etc.), i.e., the initial object, is not connected.
An equivalent definition is to say $c$ is connected if, whenever $c \cong a + b$, exactly one of $a, b$ is inhabited. If one insists that the empty space should be connected, then change the word "exactly" to "at most", and instead of saying the canonical map $\hom(c, x) + \hom(c, y) \to \hom(c, x + y)$ is an isomorphism, say it is merely surjective. However, most results come out more cleanly by working with the definition above, which disqualifies the empty set.
Compare the notion of prime ideal: working in the lattice of ideals of a p.i.d. $R$ where $\leq$ is given by reverse inclusion, the coproduct or join of ideals $a, b$ is $ab$, the initial ideal is $R$, and we say an ideal $p$ is prime if $p \neq R$ and $p \leq ab$ implies $p \leq a$ or $p \leq b$. The condition $p \neq R$ is considered fundamental to the definition of prime. Without it, we no longer have e.g. unique decomposition of integers into prime factors (compare the fact that every graph is uniquely a coproduct of connected graphs under our definition, but this is not so if the empty graph is considered to be connected). See also the numerous examples in the nLab discussion "too simple to be simple"; for example, $1$ is too simple to be a prime, and the zero module is considered too simple to be a simple module.
Every acyclic graph (a forest) is uniquely a coproduct of acyclic connected graphs (i.e., trees) under our definition of connectedness. This includes the empty forest. So a forest can be empty, but a tree cannot.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258993268013, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/1973/is-there-a-complex-structure-on-the-6-sphere
|
## Is there a complex structure on the 6-sphere?
I don't know who first asked this question, but it's a question that I think many differential and complex geometers have tried to answer because it sounds so simple and fundamental. There are even a number of published proofs that are not taken seriously, even though nobody seems to know exactly why they are wrong.
-
A topical preprint has been posted on ArXiv (asserting that $S^6$ has a complex structure): front.math.ucdavis.edu/0505.5634 – Ramsay Dec 7 2010 at 19:33
And there is a new version out: arxiv.org/abs/math/0505634 claiming to completely overhaul the proof. Did anyone take a look with expertise in this area? – Daniel Apr 30 2011 at 10:28
I think you'll find that very few experts are willing to study the 4th revision, if the first 3 had serious flaws. – Deane Yang Apr 30 2011 at 12:53
## 5 Answers
Of course, I'm not about to answer this question one way or the other, but there are at least a couple of interesting things one might point out. Firstly, it has been shown (although I forget by whom) that there is no complex structure on $S^6$ which is also orthogonal with respect to the round metric. The proof uses twistor theory. The twistor space of $S^6$ is the bundle whose fibre at a point $p$ is the space of orthogonal almost complex structures on the tangent space at $p$. It turns out that the total space is a smooth quadric hypersurface $Q$ in $\mathbb{CP}^7$. If I remember rightly, an orthogonal complex structure would correspond to a section of this bundle which is also a complex submanifold of $Q$. Studying the complex geometry of $Q$ allows you to show this can't happen.
Secondly, there is a related question: does there exist a non-standard complex structure on $\mathbb{CP}^3$? To see the link, suppose there is a complex structure on $S^6$ and blow up a point. This gives a complex manifold diffeomorphic to $\mathbb{CP}^3$, but with a non-standard complex structure, which would seem quite a weird phenomenon. On the other hand, so little is known about complex threefolds (in particular those which are not Kahler) that it's hard to decide what's weird and what isn't.
Finally, I once heard a talk by Yau which suggested the following ambitious strategy for finding complex structures on 6-manifolds. Assume we are working with a 6-manifold which has an almost complex structure (e.g. $S^6$). Since the tangent bundle is a complex vector bundle, it is pulled back from some complex Grassmannian via a classifying map. Requiring the structure to be integrable corresponds to a certain PDE for this map. One could then attempt to deform the map (via a cunning flow, continuity method etc.) to try and solve the PDE. I have no idea if anyone has actually tried to carry out part of this program.
-
A little more detail to Joel's first paragraph (I can't see how to add a comment to it, sorry!).
The argument that there is no orthogonal complex structure on the 6-sphere is due to Claude LeBrun, and the point is that such a thing, viewed as a section of twistor space, has as image a complex submanifold. Now, on the one hand, this submanifold is Kaehler, and so has non-trivial second cohomology, since the twistor space is Kaehler. On the other hand, the section itself provides a diffeomorphism of our submanifold with the 6-sphere, which has trivial second cohomology. Neat, huh?
-
This is interesting. Doesn't this bear some similarity to the argument used by Adler (who is or was a colleague of LeBrun) in his published "proof" of this conjecture? My recollection is that Adler tried to show that a Riemannian metric compatible with a complex structure on S^6 could be deformed into a Kahler metric, leading to the same contradiction. By the way, I never found anyone who was able to identify exactly why Adler's proof is wrong. – Deane Yang Oct 26 2009 at 1:13
That's right. He has a continuity argument involving a notion of "distinguished metric" for an almost complex structure that I have some difficulty making sense of: it requires embedding yr almost complex manifold in some high dimensional sphere. – Fran Burstall Oct 26 2009 at 21:10
If such a complex structure exists, it would weird indeed! For example, as shown by Campana, Demailly and Peternell (Compositio 112, 77-91), if such a thing exists, then $S^6$ would have no non-constant meromorphic functions. In particular, $S^6$ can't be Moishezon, let alone algebraic.
-
It should be possible to show that the majority of complex 3-folds are not Moishezon, so I would not say that this remark is a real argument against the existence of a complex structure on $S^6$. There is a nice phrase in the article of Gromov, ihes.fr/~gromov/topics/SpacesandQuestions.pdf, page 30: "How much do we gain in global understanding of a compact (V, J) by assuming that the structure J is integrable (i.e. complex)? It seems nothing at all: there is no single result concerning all compact complex manifolds" – Dmitri Jan 21 2010 at 23:25
I'm not sure I understand this remark by Gromov. In the complex analytic case we have the Dolbeault resolution -- one of the ways to state the integrability condition is precisely that Dolbeault complex is a complex. This leads to topological statements, e.g. the alternating sum of the Euler characteristics of $\Omega^i$'s (computed using the Chern classes) is the Euler characteristic of the manifold itself. This may or may not be true in the almost-complex case, but I don't see how to prove it. – algori Jan 22 2010 at 2:02
I think the remark of Gromov is quite clear; it is quite hard to believe, but the message is clear. As for Euler characteristics, David gave a correct explanation mathoverflow.net/questions/12601/… – Dmitri Jan 22 2010 at 9:05
What I meant was precisely that: this is hard to believe. The Euler characteristic is just the first thing that comes to mind. – algori Jan 22 2010 at 17:41
This is a famous open problem. It is still unknown.
-
Yeah, I know. But I think it's a great question. Am I supposed to post only questions for which I believe an answer is already known? – Deane Yang Oct 22 2009 at 23:25
Well, for problems that you know are open, there's already a site for collecting them: the open problem garden garden.irmacs.sfu.ca – Charles Siegel Oct 23 2009 at 0:06
Thanks for the link. I didn't know about it. But it seems like a less active site? I don't see any differential geometry there at all. – Deane Yang Oct 23 2009 at 1:26
I propose that we move future meta-discussion regarding open problems to this [thread](meta.mathoverflow.net/discussion/8/…) on meta. – Scott Morrison♦ Oct 23 2009 at 3:05
Continuing Joel Fine's and Fran Burstall's answers about LeBrun's (indeed neat) result: just want to recall that the "orthogonal" twistor space of any $2n$-dimensional pseudo-sphere $SO(2p+1,2q)/SO(2p,2q)$ can be written as $SO(2p+2,2q)/U(p+1,q)$. So the Kähler manifold in question, in the case of the 6-sphere, is $SO(8)/U(4)$. One should think of each $j:T_xS^6\rightarrow T_xS^6$ as a linear map on $\mathbb{R}^8$ with $j(x)=-1$ and $j(1)=x$. Proofs of LeBrun's result have been rewritten; I wish I had more opinion on this: http://arxiv.org/abs/math/0509442
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504829049110413, "perplexity_flag": "head"}
|
http://crypto.stackexchange.com/questions/3055/how-does-the-cyclic-attack-on-rsa-work/3056
|
# How does the cyclic attack on RSA work?
I am trying to get the idea of cyclic attacks againts assymetric RSA encryption. Taken from Handbook of applied cryptography .
Let $k$ be a positive integer such that $$c^{(e^{k})} \equiv c \pmod n \tag{1}.$$ Then for $k-1$ it holds that $$c^{(e^{k-1})} \equiv m \pmod n \tag{2}$$ where $m$ is the message to be encrypted, $n$ is the modulus, and $c$ is the ciphertext.
I can't understand why equation (2) must hold?
-
It is important to note that such attacks are not a practical threat, for they are demonstrably less likely to succeed than some extremely inefficient factorization methods. – fgrieu Jun 27 '12 at 10:14
## 2 Answers
Let us remind that, by definition of the RSA encryption, we have $c = m^e \bmod{n}$ (where $n=pq$ and $\mathrm{gcd}(e, (p-1)(q-1)) = 1$, but it's not important here). Let's take the equation $$c^{e^{k-1}} \equiv m \bmod{n}$$ and let's raise both sides to the power $e$: $$\left(c^{e^{k-1}}\right)^e \equiv m^e \bmod{n}\,,$$ so $$c^{e^k} \equiv c \bmod{n}\,.$$
-
Ok i got it. As $c^{e^{k}} = c\space mod (n)$ (1) and $c=m^{e}\space mod(n)$ (2) then $c^{e^{k}} = m^{e}\space mod(n)$ (3). Then dividing each member in (3) by $e$ we get $c^{ e^{k-1}}=m\space mod(n)$ – curious Jun 26 '12 at 14:02
We start with the definition of textbook RSA encryption: $c = m^e \bmod n$. From your first equation
$$c^{e^k} \equiv c \pmod{n},$$ we have $e^k \equiv 1 \pmod{\phi(n)}$ (Euler's theorem). Dividing both sides by $e$, that is, multiplying by $e^{-1}$ in the group of units modulo $\phi(n)$, we get
$$e^{k-1} \equiv e^{-1} \pmod{\phi(n)}.$$
By definition, $d \equiv e^{-1} \pmod{\phi(n)}$. Thus, $$c^{e^{k-1}} \equiv c^d \equiv m \pmod{n}.$$
-
why $c^{e^{k}} = c$? Shouldn't it be $c^{e^{k}} \equiv c \pmod n$? – curious Jun 26 '12 at 13:48
Right, just changed the answer to make the modulo explicit. – Samuel Neves Jun 26 '12 at 14:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365018606185913, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/208629-limit-mvt.html
|
1. ## limit (MVT)
Show that $\cos x > 1-\frac{x^2}{2}$ for all $x>0$ and all $x<0$ (that is, for all $x \neq 0$).
Hint: for $x>0$, you can use the Mean Value Theorem for $f(x)=\cos x+\frac{x^2}{2}$ on $[0,x]$.
Thank you ..
2. ## Re: limit (MVT)
I think the best method is to use Taylor's Theorem - it gives the expansion for $\cos{x}$ in powers of $x$ as $1-\frac{x^2}{2}$. The error term is $\frac{\cos(c)}{24}x^4$ where $c$ is some number between $x$ and $0$. This error term is positive for $0<|x|\le\frac{\pi}{2}$, which gives $\cos{x}>1-\frac{x^2}{2}$ there; for larger $|x|$ the Mean Value Theorem hint finishes the job (see the sketch below), so the inequality holds for all $x\ne{0}$.
- Hollywood
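For completeness, here is a sketch of the argument suggested by the hint, using the function $f$ from the original post:
Let $f(x)=\cos x+\frac{x^2}{2}$, so that $f(0)=1$ and $f'(t)=t-\sin t>0$ for $t>0$ (since $\sin t<t$ for $t>0$). For $x>0$, the Mean Value Theorem on $[0,x]$ gives some $c\in(0,x)$ with $$f(x)-f(0)=f'(c)\,x>0,$$ so $\cos x+\frac{x^2}{2}>1$, i.e. $\cos x>1-\frac{x^2}{2}$ for all $x>0$. Since both sides of the inequality are even functions of $x$, the case $x<0$ follows, and the inequality holds for all $x\ne 0$.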
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.899069607257843, "perplexity_flag": "middle"}
|
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_10&diff=32079&oldid=30007
|
# User:Michiexile/MATH198/Lecture 10
This lecture will be shallow, and leave many things undefined, hinted at, and is mostly meant as an appetizer, enticing the audience to go forth and seek out the literature on topos theory for further studies.
### 1 Subobject classifier
One very useful property of the category Set is that the powerset of a given set is still a set; we have an internal concept of an object of all subobjects. Certainly, for any (small enough) category C, we have a contravariant functor $Sub(-): C\to Set$ taking an object to the set of all equivalence classes of monomorphisms into that object, with the image Sub(f) given by a pullback diagram.
If the functor Sub( − ) is representable - meaning that there is some object $X\in C_0$ such that Sub( − ) = hom( − ,X) - then the theory surrounding representable functors, connected to the Yoneda lemma, gives us a number of good properties.
One of them is that every representable functor has a universal element; a generalization of the kind of universal mapping properties we've seen in definitions over and over again during this course; all the definitions that posit the unique existence of some arrow in some diagram given all other arrows.
Thus, in a category with a representable subobject functor, we can pick a representing object $\Omega\in C_0$, such that Sub(X) = hom(X,Ω). Furthermore, picking a universal element corresponds to picking a subobject $\Omega_0\hookrightarrow\Omega$ such that for any object A and subobject $A_0\hookrightarrow A$, there is a unique arrow $\chi: A\to\Omega$ making $A_0$ the pullback of $\Omega_0$ along $\chi$.
One can prove that Ω0 is terminal in C, and we shall call Ω the subobject classifier, and this arrow $\Omega_0=1\to\Omega$ true. The arrow χ is called the characteristic arrow of the subobject.
In Set, all this takes on a familiar tone: the subobject classifier is a 2-element set, with a true element distinguished; and a characteristic function of a subset takes on the true value for every element in the subset, and the other (false) value for every element not in the subset.
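To see the Set case concretely, here is a small Python sketch (the particular sets are invented for the example): Ω is a two-element truth-value object, and a subset is interchangeable with its characteristic function.

```python
# Subobject classifier in Set: Omega = {False, True}, with True distinguished.
A = {1, 2, 3, 4, 5}
A0 = {2, 4}                      # a subobject (subset) of A

def chi(sub):
    """Characteristic arrow A -> Omega of the subset `sub`."""
    return lambda a: a in sub

# Recover the subobject as the pullback of `true` along chi:
# the elements of A that chi sends to True.
recovered = {a for a in A if chi(A0)(a)}
assert recovered == A0           # subsets <-> characteristic functions
```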
### 2 Defining topoi
Definition A topos is a cartesian closed category with all finite limits and with a subobject classifier.
It is worth noting that this is a far stronger condition than anything we can even hope to fulfill for the category of Haskell types and functions. The functional programming relevance will take a back seat in this lecture, in favour of usefulness in logic and set theory replacements.
### 3 Properties of topoi
The meat is in the properties we can prove about topoi, and in the things that turn out to be topoi.
Theorem Let E be a topos.
• E has finite colimits.
#### 3.1 Power object
Since a topos is closed, we can take exponentials. Specifically, we can consider $[A\to\Omega]$. This is an object such that $hom(B,[A\to\Omega]) = hom(A\times B, \Omega) = Sub(A\times B)$. Hence, we get an internal version of the subobject functor. (pick B to be the terminal object to get a sense for how global elements of $[A\to\Omega]$ correspond to subobjects of A)
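In Set this is the familiar currying of relations; a small Python sketch (sets invented for the example):

```python
# In Set: [A -> Omega] is the powerset of A, and maps B -> P(A)
# correspond to subsets of A x B (currying of characteristic functions).
from itertools import product

A = {1, 2}
B = {"u", "v"}

# A subobject of A x B, i.e. a relation R between A and B:
R = {(1, "u"), (2, "u"), (2, "v")}

# Curry it into a map B -> P(A):
def curried(b):
    return {a for a in A if (a, b) in R}

# Un-curry back, recovering R:
uncurried = {(a, b) for a, b in product(A, B) if a in curried(b)}
assert uncurried == R
```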
#### 3.2 Internal logic
We can use the properties of a topos to develop a logic theory - mimicking the development of logic by considering operations on subsets in a given universe:
Classically, in Set, and predicate logic, we would say that a predicate is some function from a universe to a set of truth values. So a predicate takes some sort of objects, and returns either True or False.
Furthermore, we allow the definition of sets using predicates:
$\{x\in U: P(x)\}$
Looking back, though, there is no essential difference between this, and defining the predicate as the subset of the universe directly; the predicate-as-function appears, then, as the characteristic function of the subset. And types are added as easily - we specify each variable, each object, to have a set it belongs to.
This way, predicates really are subsets. Type annotations decide which set the predicate lives in. And we have everything set up in a way that opens up for the topos language above.
We'd define, for predicates P,Q acting on the same type:
$\{x\in A : \top\} = A$
$\{x\in A : \bot\} = \emptyset$
$\{x : (P \wedge Q)(x)\} = \{x : P(x)\} \cap \{x : Q(x)\}$
$\{x : (P \vee Q)(x)\} = \{x : P(x)\} \cup \{x : Q(x)\}$
$\{x\in A : (\neg P)(x) \} = A \setminus \{x\in A : P(x)\}$
We could then start to define primitive logic connectives as set operations; the intersection of two sets is the set on which both the corresponding predicates hold true, so $\wedge = \cap$. Similarly, the union of two sets is the set on which either of the corresponding predicates holds true, so $\vee = \cup$. The complement of a set, in the universe, is the negation of the predicate, and all other propositional connectives (implication, equivalence, ...) can be built with conjunction (and), disjunction (or) and negation (not).
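A quick Python sketch of this dictionary between connectives and set operations (universe and predicates invented for the example; note that defining implication as $\neg P\vee Q$ is legitimate here only because Set's logic is Boolean — in a general topos this equivalence fails, as the exercises below point out):

```python
# Predicates-as-subsets over a universe U: connectives become set operations.
U = set(range(10))
P = {x for x in U if x % 2 == 0}     # "x is even"
Q = {x for x in U if x > 4}          # "x > 4"

conj = P & Q                         # and  = intersection
disj = P | Q                         # or   = union
neg_P = U - P                        # not  = complement in U
implies = neg_P | Q                  # P => Q, classically (not P) or Q

assert conj == {6, 8}
assert implies == {x for x in U if (x not in P) or (x in Q)}
```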
So we can mimic all these in a given topos:
We say that a universe U is just an object in a given topos. (Note that by admitting several universes, we arrive at a typed predicate logic, with basically no extra work.)
A predicate is a subobject of the universe.
We can now proceed to define all the familiar logic connectives one by one, using the topos setting. While doing this, we shall introduce the notation $t_A: A\to\Omega$ for the morphism $t_A = A\to 1\to^{true}\Omega$ that takes on the value true on all of A. We note that with this convention, $\chi_{A_0}$, the characteristic morphism of a subobject, is the arrow such that $\chi_{A_0}\circ i = t_{A_0}$.
Conjunction: Given predicates P,Q, we need to define the conjunction $P\wedge Q$ as some $P\wedge Q: U\to\Omega$ that corresponds to both P and Q simultaneously.
We may define $true\times true: 1\to\Omega\times\Omega$, a subobject of $\Omega\times\Omega$. Being a subobject, this has a characteristic arrow $\wedge:\Omega\times\Omega\to\Omega$, that we call the conjunction arrow.
Now, we may define $\chi_P\times\chi_Q:U\to\Omega\times\Omega$ for subobjects $P,Q\subseteq U$ - and we take their composition $\wedge\circ\chi_P\times\chi_Q$ to be the characteristic arrow of the subobject $P\wedge Q$.
And, indeed, this results in a topoidal version of intersection of subobjects.
Implication: Next we define $\leq_1: \Omega_1\to\Omega\times\Omega$ to be the equalizer of $\wedge$ and proj1. Given $v, w: U\to\Omega$, we write $v\leq_1 w$ if $v\times w$ factors through $\leq_1$.
Using the definition of an equalizer we arrive at $v\leq_1 w$ iff $v = v\wedge w$. From this, we can deduce
$u\leq_1 true$
$u\leq_1 u$
If $u\leq_1 v$ and $v\leq_1 w$ then $u\leq_1 w$.
If $u\leq_1 v$ and $v\leq_1 u$ then u = v
and thus, $\leq_1$ is a partial order on $[U\to\Omega]$. Intuitively, $u\leq_1 v$ if v is at least as true as u.
This relation corresponds to inclusion on subobjects. Note that $\leq_1:\Omega_1\to\Omega\times\Omega$, given from the equalizer, gives us Ω1 as a relation on Ω - a subobject of $\Omega\times\Omega$. Specifically, it has a classifying arrow $\Rightarrow:\Omega\times\Omega\to\Omega$. We write $h\Rightarrow k = (\Rightarrow)\circ h\times k$. And for subobjects $P,Q\subseteq A$, we write $P\Rightarrow Q$ for the subobject classified by $\chi_P\Rightarrow\chi_Q$.
It turns out, without much work, that this $P\Rightarrow Q$ behaves just like classical implication in its relationship to $\wedge$.
Membership: We can internalize the notion of membership as a subobject $\in^U\subseteq U\times\Omega^U$, and thus get the membership relation from a pullback.
For elements $x\times h: 1\to U\times\Omega^U$, we write $x\in^U h$ for $x\times h\in\in^U$. Yielding a subset of the product $U\times\Omega^U$, this is readily interpretable as a relation relating things in U with subsets of U, so that for any x,h we can answer whether $x\in^Uh$. Both notations indicate $ev_A\circ h\times x = true$.
Universal quantification: For any object U, the maximal subobject of U is U itself, embedded with $1_U$ into itself. There is an arrow $\tau_U:1\to\Omega^U$ that represents this subobject. Being a map from 1, it is specifically monic, so it has a classifying arrow $\forall_U:\Omega^U\to\Omega$ that takes a given subobject of U to true precisely if it is in fact the maximal subobject.
Now, with a relation $r:R\to B\times A$, we define $\forall a. R$ as the subobject of B classified by $\forall_A\circ\lambda\chi_R$, where $\lambda\chi_R: B\to\Omega^A$ comes from the universal property of the exponential.
Theorem For any $s:S\to B$ monic, $S\subseteq \forall a.R$ iff $S\times A\subseteq R$.
This theorem tells us that the subobject given by $\forall a.R$ is the largest subobject of B that is related by R to all of A.
Falsum: We can define the false truth value using these tools as $\forall w\in\Omega.w$. This might be familiar to the more advanced Haskell type hackers - as the type
`x :: forall a. a`
which has to be able to give us an element of any type, regardless of the type itself. And in Haskell, the only element that inhabits all types is `undefined`.
From a logical perspective, we use a few basic inference rules - the trivial sequent, the connective rule for the universal quantifier, and substitution - and connect them up to derive the entailment $\forall w.w : \phi$ for any φ not involving w - and we can always adjust any φ to avoid w.
Thus, the formula $\forall w.w$ has the property that it implies everything - and thus is a good candidate for the false truth value; the inference "from false, anything" (ex falso quodlibet) is the defining introduction rule for false.
Negation: We define negation the same way as in classical logic: $\neg \phi = \phi \Rightarrow false$.
Disjunction: We can define
$P\vee Q = \forall w. ((P\Rightarrow w)\wedge(Q\Rightarrow w))\Rightarrow w$
Note that this definition builds in one of our primary inference rules - the connective rule for disjunction - as the defining property for the disjunction, and we may derive any properties we like from these.
Existential quantifier: Finally, the existential quantifier is derived similarly to the disjunction - by figuring out a rule we want it to obey, and using that as a definition for it:
$\exists x.\phi = \forall w. (\forall x. \phi \Rightarrow w)\Rightarrow w$
Here, the rule we use as the defining property is the corresponding connective rule for the existential quantifier.
Before we leave this exploration of logic, here are some properties worth knowing about: While we can prove $\neg(\phi\wedge\neg\phi)$ and $\phi\Rightarrow\neg\neg\phi$, we cannot, in just topos logic, prove things like
$\neg(\phi\wedge\psi)\Rightarrow(\neg\phi\vee\neg\psi)$
$\neg\neg\phi\Rightarrow\phi$
nor any statements like
$\neg(\forall x.\neg\phi)\Rightarrow(\exists x.\phi)$
$\neg(\forall x.\phi)\Rightarrow(\exists x.\neg\phi)$
$\neg(\exists x.\neg\phi)\Rightarrow(\forall x.\phi)$
We can, though, prove
$\neg(\exists x.\phi)\Rightarrow(\forall x.\neg\phi)$
If we additionally include an extra inference rule (called the Boolean negation rule, stated in the exercises below: $\Gamma,\neg\phi:false$ is equivalent to $\Gamma:\phi$), then suddenly we're back in classical logic, and can prove $\neg\neg\phi\Rightarrow\phi$ and $\phi\vee\neg\phi$.
#### 3.3 Examples: Sheaves, topology and time sheaves
The first interesting example of a topos is the category of (small enough) sets; in some sense clear already since we've been modelling our axioms and workflows pretty closely on the theory of sets.
Generating logic and set theory in the topos of sets, we get a theory that captures several properties of intuitionistic logic, such as the lack of Boolean negation, of the law of the excluded middle, and of double negation rules.
For the more interesting examples, however, we shall introduce the concepts of topology and of sheaf:
Definition A (set-valued) presheaf on a category C is a contravariant functor $E: C^{op}\to Set$.
Presheaves occur all over the place in geometry and topology - and occasionally in computer science too: There is a construction in which a functor $A\to Set$ for a discrete small category A identified with its underlying set of objects as a set, corresponds to the data type of bags of elements from A - for $a\in A$, the image F(a) denotes the multiplicity of a in the bag.
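A tiny Python sketch of the bag reading (encoding invented for the example; the functor sends each element to a set whose cardinality is its multiplicity):

```python
# A bag of elements of A as a Set-valued functor on the discrete category A:
# each a in A is sent to a set whose size is a's multiplicity in the bag.
bag = {"x": 2, "y": 0, "z": 1}   # multiplicities

def F(a):
    # Any set of the right size works; its elements only matter up to iso.
    return {(a, i) for i in range(bag[a])}

assert len(F("x")) == 2
assert len(F("y")) == 0          # y does not occur in the bag
assert len(F("z")) == 1
```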
Theorem The category of all presheaves (with natural transformations as the morphisms) on a category C form a topos.
Example Pick the category with two objects and a parallel pair of non-identity arrows between them - the shape of a graph. A contravariant functor on this category is given by a pair of sets $G_0, G_1$ and a pair of functions $source, target: G_1\to G_0$. Identities are sent to identities.
The category of presheaves on this category, thus, is the category of graphs. Thus graphs form a topos.
The subobject classifier in the category of graphs is a graph with two nodes: in and out, and five arrows:
$in \to^{all} in$
$in \to^{both} in$
$in \to^{source} out$
$out \to^{target} in$
$out \to^{neither} out$
Now, given a subgraph $H \leq G$, we define a function $\chi_H:G\to\Omega$ by sending nodes to in or out dependent on their membership. For an arrow a, we send it to all if the arrow is in H, and otherwise we send it to both/source/target/neither according to where its source and target reside.
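Here is that classification spelled out in a small Python sketch (the graph and its encoding are invented for the example):

```python
# Subobject classifier for graphs: nodes {in, out}, five arrows.
# Classify a subgraph H of G by the graph morphism chi_H : G -> Omega.
G_nodes = {"a", "b", "c"}
G_edges = {"e1": ("a", "b"), "e2": ("b", "c"), "e3": ("c", "a")}

H_nodes = {"a", "b"}                 # a subgraph H <= G
H_edges = {"e1"}

def chi_node(v):
    return "in" if v in H_nodes else "out"

def chi_edge(e):
    s, t = G_edges[e]
    if e in H_edges:
        return "all"
    if s in H_nodes and t in H_nodes:
        return "both"
    if s in H_nodes:
        return "source"
    if t in H_nodes:
        return "target"
    return "neither"

assert chi_edge("e1") == "all"       # e1 lies in H
assert chi_edge("e2") == "source"    # only the source of e2 is in H
assert chi_edge("e3") == "target"    # only the target of e3 is in H
```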
To really get into sheaves, though, we introduce more structure - specifically, we define what we mean by a topology:
Definition Suppose P is a partially ordered set. We call P a complete Heyting algebra if
• There is a top element 1 such that $x\leq 1$ for all $x\in P$.
• Any two elements x,y have an infimum (greatest lower bound) $x\wedge y$.
• Every subset $Q\subseteq P$ has a supremum (least upper bound) $\bigvee_{q\in Q} q$.
• $x\wedge(\bigvee y_i) = \bigvee x\wedge y_i$
Note that for the partial order by inclusion of a family of subsets of a given set, being a complete Heyting algebra is the same as being a topology in the classical sense - you can take finite unions and any intersections of open sets and still get an open set.
If $\{x_i\}$ is a subset with supremum x, and E is a presheaf, we get functions $e_i:E(x)\to E(x_i)$ from functoriality. We can summarize all these $e_i$ into $e = \prod_i e_i: E(x)\to\prod_i E(x_i)$.
Furthermore, functoriality gives us families of functions $c_{ij}: E(x_i)\to E(x_i\wedge x_j)$ and $d_{ij}: E(x_j)\to E(x_i\wedge x_j)$. These can be collected into $c: \prod_i E(x_i)\to\prod_{ij}E(x_i\wedge x_j)$ and $d:\prod_j E(x_j)\to\prod_{ij}E(x_i\wedge x_j)$.
Definition A presheaf E on a Heyting algebra is called a sheaf if it satisfies:
$x = \bigvee x_i$
implies that $e: E(x)\to\prod_i E(x_i)$ is an equalizer of the pair $c, d: \prod_i E(x_i)\to\prod_{ij}E(x_i\wedge x_j)$. If you have seen sheaves before, you may recognize this as the covering axiom.
In other words, E is a sheaf if whenever $x=\bigvee x_i$ and c(α) = d(α), then there is some $\bar\alpha$ such that $\alpha = e(\bar\alpha)$.
Theorem The category of sheaves on a Heyting algebra is a topos.
For context, we can think of sheaves over Heyting algebras as sets in a logic with an expanded notion of truth. Our Heyting algebra is the collection of truth values, and the sheaves are the fuzzy sets with fuzziness introduced by the Heyting algebra.
Recalling that subsets and predicates are viewed as the same thing, we can view the set E(p) as the part of the fuzzy set E that is at least p true.
As it turns out, to really make sense of this approach, we realize that equality is a predicate as well - and thus can hold or not depending on the truth value we use.
Definition Let P be a complete Heyting algebra. A P-valued set is a pair (S,σ) of a set S and a function $\sigma: S\to P$. A category of fuzzy sets is a category of P-valued sets. A morphism $f:(S,\sigma)\to(T,\tau)$ of P-valued sets is a function $f:S\to T$ such that $\tau\circ f = \sigma$.
From these definitions emerges a fuzzy set theory in which everything that makes it a kind of set theory comes from the topoidal approach above. Thus, say, subsets in a fuzzy sense are just monics, thus are injective on the set part, and such that the valuation, on the image of the injection, increases from the previous valuation: $(T,\tau)\subseteq(S,\sigma)$ if $T\subseteq S$ and $\sigma|_T = \tau$.
To get to topoi, though, there are a few matters we need to consider. First, we may well have several versions of the empty set - either a bona fide empty set, or just a set where every element is never actually there. This issue is minor. Much more significant though, is that while we can easily make (S,σ) give rise to a presheaf, by defining
$E(x) = \{s\in S: \sigma(s)\geq x\}$
this definition will not yield a sheaf. The reason for this boils down to $E(0) = S \neq 1$. We can fix this, though, by adjoining another element - $\bot$ - to $P$, giving $P^+$. The new element $\bot$ is imbued with two properties: it is smaller, in $P^+$, than any other element, and it is mapped, by $E$, to $1$.
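A small Python sketch of this presheaf for the three-element chain $0 < 1/2 < 1$ (the fuzzy set is invented for the example), exhibiting exactly the failure $E(0)=S$:

```python
# A P-valued (fuzzy) set over the chain P = {0, 0.5, 1}.
sigma = {"a": 1.0, "b": 0.5, "c": 0.5}   # sigma : S -> P, membership degrees

def E(x):
    """E(x): the elements of S that are 'at least x true'."""
    return {s for s, deg in sigma.items() if deg >= x}

assert E(1) == {"a"}
assert E(0.5) == {"a", "b", "c"}
assert E(0) == {"a", "b", "c"}   # E(0) = S, not a one-point set -- which is
                                 # exactly why this presheaf fails to be a sheaf
```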
Theorem The construction above gives a fuzzy set (S,σ) the structure of a sheaf on the augmented Heyting algebra.
Corollary The category of fuzzy sets for a Heyting algebra P forms a topos.
Final note While this construction allows us to make membership a fuzzy concept, we're not really done fuzzy-izing sets. There are two fundamental predicates on sets: equality and membership. While fuzzy set theory, classically, only allows us to make one of these fuzzy, topos theory allows us - rather easily - to make both these predicates fuzzy. Not only that, but membership reduces - with the power object construction - to equality testing, by which the fuzzy set theory ends up somewhat inconsistent in its treatment of the predicates.
### 4 Literature
At this point, I would warmly recommend the interested reader to pick up one, or more, of:
• Steve Awodey: Category Theory
• Michael Barr & Charles Wells: Categories for Computing Science
• Colin McLarty: Elementary Categories, Elementary Toposes
or for more chewy books
• Peter T. Johnstone: Sketches of an Elephant: a Topos Theory compendium
• Michael Barr & Charles Wells: Toposes, Triples and Theories
### 5 Exercises
No homework at this point. However, if you want something to think about, a few questions and exercises:
1. Prove the relations showing that $\leq_1$ is indeed a partial order on $[U\to\Omega]$.
2. Prove the universal quantifier theorem.
3. The extension of a formula φ over a list of variables x is the sub-object of the product of domains $A_1\times\dots\times A_n$ for the variables $x_1,\dots,x_n=x$ classified by the interpretation of φ as a morphism $A_1\times\dots\times A_n\to\Omega$. A formula is true if it classifies the entire product. A sequent, written Γ:φ is the statement that using the set of formulae Γ we may prove φ, or in other words that the intersection of the extensions of the formulae in Γ is contained in the extension of φ. If a sequent Γ:φ is true, we say that Γ entails φ. (some of the questions below are almost embarrassingly immediate from the definitions given above. I include them anyway, so that a catalogue of sorts of topoidal logic inferences is included here)
1. Prove the following entailments:
1. Trivial sequent: φ:φ
2. True: :true (note that true classifies the entire object)
3. False: false:φ (note that false classifies the global minimum in the preorder of subobjects)
2. Prove the following inference rules:
1. Implication: Γ,φ:ψ is equivalent to $\Gamma:\phi\Rightarrow\psi$.
2. Thinning: Γ:φ implies Γ,ψ:φ
3. Cut: Γ,ψ:φ and Γ:ψ imply Γ:φ if every variable free in ψ is free in Γ or in φ.
4. Negation: Γ,φ:false is equivalent (implications both ways) to $\Gamma: \neg\phi$.
5. Conjunction: Γ:φ and Γ:ψ together are equivalent to $\Gamma:\phi\wedge\psi$.
6. Disjunction: Γ,φ:θ and Γ,ψ:θ together imply $\Gamma, \phi\vee\psi: \theta$.
7. Universal: Γ:φ is equivalent to $\Gamma:\forall x.\phi$ if x is not free in Γ.
8. Existential: Γ,φ:ψ is equivalent to $\Gamma,\exists x.\phi:\psi$ if x is not free in Γ or ψ.
9. Equality: :q = q.
10. Biconditional: $(v\Rightarrow w)\wedge(w\Rightarrow v):v=w$. We usually write $v\Leftrightarrow w$ for v = w if $v,w:A\to\Omega$.
11. Product: $p_1u = p_1u', p_2u = p_2u' : u = u'$ for $u,u'\in A\times B$.
12. Product revisited: $:(p_1(s\times s')=s)\wedge(p_2(s\times s')=s')$.
13. Extensionality: $\forall x\in A. f(x) = g(x) : f = g$ for $f,g\in[A\to B]$.
14. Comprehension: $(\lambda x\in A. s)x = s$ for $x\in A$.
3. Prove the following results from the above entailments and inferences -- or directly from the topoidal logic mindset:
1. $:\neg(\phi\wedge\neg\phi)$.
2. $:\phi\Rightarrow\neg\neg\phi$.
3. $:\neg(\phi\vee\psi)\Rightarrow(\neg\phi\wedge\neg\psi)$.
4. $:(\neg\phi\wedge\neg\psi)\Rightarrow\neg(\phi\wedge\psi)$.
5. $:(\neg\phi\vee\neg\psi)\Rightarrow\neg(\phi\vee\psi)$.
6. $\phi\wedge(\theta\vee\psi)$ is equivalent to $(\phi\wedge\theta)\vee(\phi\wedge\psi)$.
7. $\forall x.\neg\phi$ is equivalent to $\neg\exists x.\phi$.
8. $\exists x\phi\Rightarrow\neg\forall x.\neg\phi$.
9. $\exists x\neg\phi\Rightarrow\neg\forall x.\phi$.
10. $\forall x\phi\Rightarrow\neg\exists x.\neg\phi$.
11. φ:ψ implies $\neg\psi:\neg\phi$.
12. $\phi:\psi\Rightarrow\phi$.
13. $\phi\Rightarrow\neg\phi:\neg\phi$.
14. $\neg\phi\vee\psi:\phi\Rightarrow\psi$ (but not the converse!).
15. $\neg\neg\neg\phi$ is equivalent to $\neg\phi$.
16. $(\phi\wedge\psi)\Rightarrow\theta$ is equivalent to $\phi\Rightarrow(\psi\Rightarrow\theta)$ (currying!).
4. Using the Boolean negation rule: $\Gamma,\neg\phi:false$ is equivalent to Γ:φ, prove the following additional results:
1. $\neg\neg\phi:\phi$.
2. $:\phi\vee\neg\phi$.
5. Show that either of the three rules above, together with the original negation rule, implies the Boolean negation rule.
1. The converses of the three existential/universal/negation implications above.
6. The restrictions introduced for the cut rule above block the deduction of an entailment $:\forall x.\phi\Rightarrow\exists x.\phi$. The issue at hand is that A might not actually have members; so choosing one is not a sound move. Show that this entailment can be deduced from the premise $\exists x\in A. x=x$.
7. Show that if we extend our ruleset by the quantifier negation rule $\forall x\Leftrightarrow \neg\exists x.\neg$, then we can derive the entailment $:\forall w. (w=true) \vee (w=false)$. From this derive $:\phi\vee\neg\phi$ and hence conclude that this extension gets us Boolean logic again.
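As a worked illustration of how the rules above chain together (my own sketch, using only the rules as stated), here is a derivation of result 2 from the previous exercise, $:\phi\Rightarrow\neg\neg\phi$:

$$\begin{aligned} \neg\phi &: \neg\phi && \text{(trivial sequent)}\\ \phi,\neg\phi &: false && \text{(negation rule, right to left, }\Gamma=\{\neg\phi\})\\ \phi &: \neg\neg\phi && \text{(negation rule, left to right, }\Gamma=\{\phi\})\\ &: \phi\Rightarrow\neg\neg\phi && \text{(implication rule, }\Gamma\text{ empty)} \end{aligned}$$

Note that nothing here used Boolean negation, which is exactly why the converse $\neg\neg\phi:\phi$ needs the extra rule of exercise 4.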
4. A topology on a topos E is an arrow $j:\Omega\to\Omega$ such that $j\circ true=true$, $j\circ j=j$ and $j\circ\wedge = \wedge\circ (j\times j)$. For a subobject $S\subseteq A$ with characteristic arrow $\chi_S:A\to\Omega$, we define its j-closure as the subobject $\bar S\subseteq A$ classified by $j\circ\chi_S$.
1. Prove:
1. $S\subseteq\bar S$.
2. $\bar S = \bar{\bar S}$.
3. $\bar{S\cap T} = \bar S\cap\bar T$.
4. $S\subseteq T$ implies $\bar S\subseteq\bar T$.
5. $\bar{f^{-1}(S)} = f^{-1}(\bar S)$.
2. We define S to be j-closed if $S=\bar S$. It is j-dense if $\bar S=A$. These terms are chosen due to correspondences with classical point-set topology for the topos of sheaves over some space. From a logical standpoint, it is more helpful to look at j as a modality operator: "it is j-locally true that". Given any $u:1\to\Omega$, prove that the following are topologies:
1. $(u\to -): \Omega\to\Omega$ (the open topology, where such a u in a sheaf topos ends up corresponding to an open subset of the underlying space, and the formulae picked out are true on at least all of that subset).
2. $(u\vee -): \Omega\to\Omega$ (the closed topology, where a formula is true if its disjunction with u is true -- corresponding to formulae holding over at least the closed set complementing the subset picked out)
3. $\neg\neg: \Omega\to\Omega$. This may, depending on the topos, end up being interpreted as true so far as global elements are concerned, or not false on any open set, or other interpretations.
4. $1_\Omega$.
3. For a topos E with a topology j, we define an object A to be a sheaf iff for every X, every j-dense subobject $S\subseteq X$ with inclusion $s:S\to X$, and every $f:S\to A$ there is a unique $g:X\to A$ with $f=g\circ s$. In other words, A is an object that cannot see the difference between j-dense subobjects and objects. We write $E_j$ for the full subcategory of j-sheaves.
1. Prove that any object is a sheaf for $1_\Omega$.
2. Prove that a subobject is dense for $\neg\neg$ iff its negation is empty. Show that $true+false:1+1\to\Omega$ is dense for this topology. Conclude that 1 + 1 is dense in $\Omega_{\neg\neg}$ and thus that $E_{\neg\neg}$ is Boolean.
http://math.stackexchange.com/questions/6868/how-to-slice-the-cheese
# How to slice the cheese
I encounter a problem recently stated as below:
How many pieces of cheese can we obtain from a single thick piece by making five straight slices? (We can't move the cheese while slicing.) If we want to maximize the number of pieces, denoted by P(n), is there any recurrence relation for P(n)? (Here n is the number of slices.)
Any hints will be highly appreciated and thank you all in advance!
Best regards.
-
The first thing is that you don't have to worry too much about the shape of the cheese. If you can divide space into k pieces with n planes, then you can cut the cheese into that many pieces just by shrinking the diagram till all the intersections fit in the cheese. – Oscar Cunningham Oct 15 '10 at 13:43
## 3 Answers
This was a very old Monthly problem - see below. For an excellent introduction to the general topic see Richard Stanley's paper An Introduction to Hyperplane Arrangements (2004) and also Renteln's lecture slides It All Depends on How You Slice It: An Introduction to Hyperplane Arrangements, 2008
-
This is a special case of the problem of counting the number of regions $\mathbb{R}^n$ is divided into by $k$ hyperplanes in general position. The answer is $$\sum_{j=0}^n {k\choose j}.$$ This is mentioned in Richard Stanley's notes on hyperplane arrangements.
-
Thanks for your answer Robin, but Bill gives me more insight into the problem :-). – Summer_More_More_Tea Oct 17 '10 at 14:11
This is the lazy caterer's sequence. As others have mentioned, arbitrary dimensional analogues are called hyperplane arrangements.
-
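For anyone who wants to check the count numerically, here is a quick sketch of the formula from the answers (function name and defaults are my own):

```python
from math import comb

def regions(k, n=3):
    # Number of regions that k hyperplanes in general position
    # divide R^n into: sum of C(k, j) for j = 0..n.
    return sum(comb(k, j) for j in range(n + 1))

print([regions(k) for k in range(6)])  # [1, 2, 4, 8, 15, 26]
# Five straight slices give at most 26 pieces of cheese.
```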
http://mathoverflow.net/questions/12327?sort=votes
## Extension of some feature of SDE Ornstein-Uhlenbeck type
Hi everyone,
I am looking for some ideas (or references) in order to get an explicit SDE (if it exists) which would have a stylised property extending in some sense the mean-reversion property of SDE of Ornstein-Uhlenbeck type.
More formally, is it possible to have an $n$-means reverting process defined by an SDE?
I imagine this SDE would have the form like $dS_t=f_1(S_t,t)dt+...+f_n(S_t,t)dt+\sigma dW_t$
where the $f_i$'s are such that if $S_t$ is close to the i-th mean $m_i$ then it stays close to this point with high probability.
I am sorry not to define the necessary concepts more clearly, but as I am only looking for ideas (or references) on this, I would rather give an intuitive concept than a fully formal framework, in order not to close off any possibility.
Thanks for the time spent reading these lines.
PS : I would like to avoid the n states regime switching technology if possible
-
mathworks.com/access/helpdesk/help/toolbox/econ/… – Steve Huntsman Jan 19 2010 at 18:17
## 1 Answer
I believe your notation is redundant. Let $g = f_1 + ... + f_n$. Correct me if I'm wrong, but you just want a function $g$ so that $dS(t) = g(S(t),t)dt + \sigma dW_t$ has the behavior you specify.
It's not clear exactly what properties you want. One possibility is that you can just let $S$ be a Brownian motion in a potential function $\Phi$ with $n$ local minimums. In that case, you don't need $g$ to depend on $t$: $g = -\Phi'$. You may want the potential function to be approximated by the potential well of an Ornstein-Uhlenbeck process near each minimum.
This would mean S would stay near each minimum for a while, but it would leak out eventually with probability 1. You can increase the potential with time to force the process to stay near the local minimum with positive probability. However, this would no longer resemble a fixed Ornstein-Uhlenbeck process near each minimum.
-
Douglas Zare: Thank you for your interest. You are right, we can set a function $g$, but I thought that $g$ could have the form I specified, with $f_i(t,S_t)$ not far from $a_i(m_i-S_t)$ when $S_t$ is close to $m_i$ and approximately null when far, which seems close to what you suggest in your second paragraph. Second, I am still looking for the "right" criteria to give a clear sense to the multi-mean-reverting property, and your answer is helpful for this goal to be achieved; I will try to propose candidates for both the definition of the property and for $g$. Best Regards – The Bridge Jan 20 2010 at 10:19
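A minimal numerical sketch of the double-well suggestion in the answer (Euler-Maruyama; the potential and all parameters are my own toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, dt, n_steps = 0.5, 1e-3, 200_000

def dPhi(s):
    # Phi(s) = (s**2 - 1)**2 / 4 has minima at s = -1 and s = +1,
    # so the drift -Phi'(s) = s - s**3 pulls S toward whichever is nearer.
    return s**3 - s

S = np.empty(n_steps)
S[0] = -1.0
for t in range(1, n_steps):
    S[t] = S[t-1] - dPhi(S[t-1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()
# S hovers near -1 for long stretches and occasionally hops to +1:
# "2-means reverting", but it leaks between wells with probability 1,
# exactly as the answer warns.
```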
http://mathoverflow.net/questions/tagged/pontrjagin-duality
## Tagged Questions
### What is the dual of a pre-injective map?
In [M. Gromov, Endomorphisms of symbolic algebraic varieties, J. Eur. Math. Soc. (JEMS) 1 (1999), 109–197], Gromov introduces the notion of pre-injective map. Recasting this notion …
### The unitary dual of non discrete abelian group
What is the unitary dual of the non discrete abelian group containing elements of the type $$\left( \frac{k}{2^{j}},\frac{l}{2^{j}}\right)$$ where $k,l,j$ are integers? The g …
### Quick computation of the Pontryagin dual group of torus
I'm looking for a quick way to compute the Pontryagin dual group of the n-dimensional torus $\mathbb{T}^n$ (with $\mathbb{T} := \mathbb{R} / \mathbb{Z}$). The only way I know is fr …
### Proof that the Pontryagin dual of a topological group is a topological group
I'm looking for a proof that the Pontryagin dual $G^*$ of a topological group $G$ is a topological group. It's very easy to prove that $G^*$ is a group, my troubles are in provin …
### Discrete-compact duality for nonabelian groups
A standard property of Pontrjagin duality is that a locally compact Hausdorff abelian group is discrete iff its dual is compact (and vice versa). In what senses, if any, is this st …
### Fourier transforms of compactly supported functions
One manifestation of the uncertainty principle is the fact that a compactly supported function $f$ cannot have a Fourier transform which vanishes on an open set. As stated, this p …
### Fourier Transform of measure on Banach Space (a question about Pontryagin Duality)
The following definition is given as the Fourier transform of a Borel probability measure $\mu$ on $E$, a Banach Space (Real): $\hat{\mu}: E^*\rightarrow \mathbb{C}$ defined by …
### Injective modules and Pontrjagin duals
Forgive me for this naive question. We consider the following lemma and its proof in Lang's algebra, Third Ed., published 1999, Chap. 20, section 4, page 784. Every module is …
http://mathhelpforum.com/calculus/123809-catenary.html
1. ## Catenary
Equation of a catenary is
y = a * cosh(x/a) + C
I am told that two telephone poles are 10 m apart and that the poles are 6 m tall. The minimum clearance (distance of the cable to the ground) is 4 m. The question asks how long the cable can be.
So. I know that once I find the equation for the catenary the cable makes, I get the integral and use the arc-length formula. Unfortunately, I don't know how to get the 'a' in the formula.
Here's my attempt so far.
C doesn't really matter. The lowest point at x = 0 is a, since cosh(0) = 1. That means the point at x = -5 and x = 5 is a+2. So I get an equation:
a+2 = a * cosh(5 / a)
But I'm not sure how to solve it.
a * arccosh[(a+2) / a] = 5
That's as far as I got, and I'm rather sure I'm not doing it right. The assignment is focused on linearization and the like (Newton's method, fixed-point iteration, etc.).
2. Since the poles are 6 meters high and the clearance is 4 m, then the sag is 2 m.
We can use Newton's method to find the length.
The sag is given by $S=a\cdot cosh(\frac{b}{a})-a$
b is the distance from the origin to one of the poles. 5 m.
$2=a\cdot cosh(\frac{5}{a})-a$
Let $u=\frac{5}{a}$
Then, $a=\frac{5}{u}$
$2=\frac{5}{u}\left[cosh(u)-1\right]$
$cosh(u)-1=\frac{2u}{5}$
If $f(u)=cosh(u)-\frac{2u}{5}-1$
Then, $u_{n+1}=u_{n}-\frac{cosh(u_{n})-\frac{2u_{n}}{5}-1}{sinh(u_{n})-\frac{2}{5}}$
Try an initial guess for u. Then, once it converges, 'a' can be found by just subbing it into $a=\frac{5}{u}$
The length can be found by plugging 'a' into the length formula:
$L=2a\cdot sinh(\frac{b}{a})$
We can derive the length by:
$y'=sinh(\frac{x}{a}), \;\ 1+(y')^{2}=1+sinh^{2}(\frac{x}{a})=cosh^{2}(\frac{ x}{a})$
$L=2\int_{0}^{b}cosh(\frac{x}{a})dx=\boxed{2a\cdot sinh(\frac{b}{a})}$
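A quick numerical sketch of the iteration above (initial guess and loop count are mine; note that $u=0$ is also a root of $f$, so the guess should stay away from it):

```python
from math import cosh, sinh

u = 1.0                      # initial guess, away from the trivial root u = 0
for _ in range(20):
    f  = cosh(u) - 2*u/5 - 1
    df = sinh(u) - 2/5
    u -= f / df              # Newton step

a = 5 / u                    # catenary parameter, about 6.56
L = 2 * a * sinh(5 / a)      # cable length, about 11.0 m
print(a, L)
```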
http://cms.math.ca/Events/winter12/abs/sdd
2012 CMS Winter Meeting
Fairmont Queen Elizabeth (Montreal), December 7 - 10, 2012
Symmetries of Differential and Difference Equations
Org: Alexei Cheviakov (Saskatchewan) and Pavel Winternitz (Montréal)
STEPHEN ANCO, Brock University
Symmetry analysis and exact solutions of semilinear Schrodinger equations [PDF]
A novel symmetry method is used to obtain exact solutions to Schrodinger equations with a power nonlinearity in multi-dimensions. The method uses a separation technique to solve an equivalent first-order group foliation system whose independent and dependent variables consist of the invariants and differential invariants of the point symmetry generators admitted by the Schrodinger equation. Many explicit new solutions are obtained which have interesting analytical behavior connected with blow-up and dispersion. These solutions include new similarity solutions and other new group-invariant solutions, as well as new solutions that are not invariant under any point symmetries of the Schrodinger equation. In contrast, standard symmetry reduction leads to nonlinear ODEs for which few if any explicit solutions can be derived by familiar integration methods.
ALEXANDER BIHLO, Centre de recherches mathématiques, Université de Montréal
Invariant discretization schemes [PDF]
Geometric numerical integration is a recent field in the numerical analysis of differential equations. It aims at improving the quality of the numerical solution of a system of differential equations by preserving qualitative features of that system. Such qualitative features can be conservation laws, a Hamiltonian or variational structure, or a nontrivial point symmetry group. While quite some effort has been put into the construction of conservation-law-preserving and Hamiltonian discretization schemes, the problem of finding invariant numerical integrators is more recent and less investigated. The main obstacle one faces when constructing symmetry-preserving approximations for evolution equations is that these discretizations generally require the use of moving meshes. Grids that undergo an evolution in the course of numerical integration pose several theoretical challenges, especially in the multi-dimensional case.
In this talk we will present three possible strategies to overcome the problem with invariant moving meshes and thus address the practicability of symmetry-preserving discretization schemes. These ways are the discretization in computational coordinates, the use of invariant interpolation schemes and the formulation of invariant meshless schemes. The different strategies will be illustrated by presenting the results obtained from invariant numerical schemes constructed for the linear heat equation, a diffusion equation and the system of shallow-water equations.
On Symmetry Properties of a Class of Constitutive Models in Two-dimensional Nonlinear Elastodynamics [PDF]
We consider the Lagrangian formulation of the nonlinear equations governing the dynamics of isotropic homogeneous hyperelastic materials. For two-dimensional planar motions of Ciarlet–Mooney–Rivlin solids, we compute equivalence transformations that lead to a reduction of the number of parameters in the constitutive law. Further, we classify point symmetries in a general dynamical setting and in traveling wave coordinates. A special value of the traveling wave speed is found for which the nonlinear Ciarlet–Mooney–Rivlin equations admit an additional infinite set of point symmetries. A family of essentially two-dimensional traveling wave solutions is derived for that case.
ALFRED MICHEL GRUNDLAND, Centre de Recherches Mathematiques and Universite du Quebec a Trois-Rivieres
Soliton surfaces and zero-curvature representation of differential equations [PDF]
A new version of the Fokas-Gel'fand formula for immersion of 2D surfaces in Lie algebras associated with three forms of matrix Lax pairs for either PDEs or ODEs is proposed. The Gauss-Mainardi-Codazzi equations for the surfaces are infinitesimal deformations of the zero-curvature representation for the differential equations. Such infinitesimal deformations can be constructed from symmetries of the zero-curvature representation considered as a PDE in the matrix variables, or of the differential equation itself. The theory is applied to zero-curvature representations of the Painleve equations P1, P2 and P3. Certain geometrical aspects of surfaces associated with these Painleve equations are discussed.
Based on joint work with S. Post (University of Hawaii, USA)
VERONIQUE HUSSIN, Université de Montréal
Grassmannian sigma models and constant curvature solutions [PDF]
We discuss solutions of Grassmannian models $G(m,n)$ and give some general results. We then concentrate on such solutions with constant curvature. For holomorphic solutions, we give some conjectures for the admissible constant curvatures which are verified for the cases $G(2,4)$ and $G(2,5)$. The study is extended to the case of non-holomorphic solutions with constant curvatures, and we show that in the case of the Veronese sequence such curvatures are always smaller than those of the holomorphic solutions. This work has been done in collaboration with L. Delisle (UdM) and W. Zakrzewski (Durham, UK).
WILLARD MILLER JR., University of Minnesota
Contractions of 2D 2nd order quantum superintegrable systems and the Askey scheme for hypergeometric orthogonal polynomials [PDF]
A quantum superintegrable system is an integrable $n$-dimensional Hamiltonian system on a Riemannian manifold with potential: $H=\Delta_n+V$ that admits $2n-1$ algebraically independent partial differential operators commuting with the Hamiltonian, the maximum number possible. A system is of order $L$ if the maximum order of the symmetry operators, other than $H$, is $L$. For $n=2$, $L=2$ all systems are known. There are about 50 types but they divide into 12 equivalence classes with representatives on flat space and the 2-sphere. The symmetry operators of each system close to generate a quadratic algebra, and the irreducible representations of this algebra determine the eigenvalues of $H$ and their multiplicity. All the 2nd order superintegrable systems are limiting cases of a single system: the generic 3-parameter potential on the 2-sphere, $S9$ in our listing. Analogously all of the quadratic symmetry algebras of these systems are contractions of $S9$. The irreducible representations of $S9$ have a realization in terms of difference operators in 1 variable. It is exactly the structure algebra of the Wilson and Racah polynomials! By contracting these representations to obtain the representations of the quadratic symmetry algebras of the other less generic superintegrable systems, we obtain the full Askey scheme of orthogonal hypergeometric polynomials. This relationship provides great insight into the structure of special function theory and directly ties the structure equations to physical phenomena.
Joint work with Ernie Kalnins and Sarah Post
ROMAN POPOVYCH, Brock University
Potential symmetries in dimension three [PDF]
Potential symmetries of partial differential equations with more than two independent variables are considered. Possible strategies for gauging potentials are discussed. Special attention is paid to the case of three independent variables. As illustrating examples, we present gauges of potentials and nontrivial potential symmetries for the (1+2)-dimensional linear heat, Schrödinger and wave equations, the three-dimensional Laplace equation and generalizations of these equations.
SARAH POST, U. Hawaii
Contractions of superintegrable systems and limits of orthogonal polynomials [PDF]
In two dimensions, all second-order superintegrable systems are limits of a generic system on the sphere. These limits in the physical systems correspond to contractions of the symmetry algebras generated by the integrals of the motion, as well as of their function space representations. The action of these limits on the representations of the models gives the well known Askey tableau of hypergeometric polynomials.
In this talk, we focus on the top of the tableau. That is, we will discuss in depth the contractions of the generic system on the sphere to the singular isotropic oscillator of Smorodinsky and Winternitz. These limits give the limits of Wilson polynomials to Hahn, dual Hahn and Jacobi polynomials. The physical limit gives a deeper understanding of the connection between the Hahn and dual Hahn polynomials. The general theory and outline of the tableau will be discussed in a later talk of W. Miller Jr.
This is joint work with W. Miller Jr. and E. Kalnins
RAPHAËL REBELO, Université de Montréal
Symmetry preserving discretization of partial differential equations [PDF]
A definition of discrete partial derivatives on non-orthogonal and non-uniform meshes will be given. This definition permits the application of moving frames to partial difference equations and will be used to generate invariant numerical schemes for a heat equation with source and for the spherical Burgers' equation. The numerical precision of those schemes will be displayed for two particular solutions.
DANILO RIGLIONI, CENTRE DE RECHERCHES MATHÉMATIQUES
Superintegrable systems on non Euclidean spaces [PDF]
A Maximally Superintegrable (M.S.) system is an integrable n-dimensional Hamiltonian system which has 2n-1 integrals of motion. M.S. systems share nice properties such as periodic trajectories for classical systems and a degenerate spectrum for quantum mechanical systems. The aim of the talk is to provide a complete classification of classical and quantum M.S. systems characterized by a radial symmetry and defined on an n-dimensional non-Euclidean manifold. We will achieve this result by considering only the systems which are eligible to be M.S., namely all the classical radial systems which admit stable closed orbits and whose classification is given by the non-Euclidean generalization of the well known Bertrand's theorem. As in the Euclidean case, the generalized Bertrand theorem still gives us two families of exactly solvable M.S. systems but, in contrast with the flat case, they exhibit extra integrals of motion which have the remarkable property of being of higher order in the momenta.
ZORA THOMOVA, SUNY Institute of Technology
Contact transformations for difference equations [PDF]
Contact transformations for ordinary differential equations are transformations in which the new variables $(\tilde x, \tilde y)$ depend not only on the old variables $(x, y)$ but also on the first derivative of $y$. The Lie algebra of contact transformations can be integrated to a Lie group. The purpose of this talk is to extend the definition of contact transformations to ordinary difference equations. We will provide an example showing that these transformations do exist. This is a joint work with D. Levi and P. Winternitz.
SASHA TURBINER, Nuclear Science Institute, UNAM
$BC_2$ Lame polynomials [PDF]
The $BC_2$ elliptic Hamiltonian is a two-dimensional Schroedinger operator with a double-periodic potential of a special form which does not admit separation of variables. In the space of orbits of the double-affine $BC_2$ Weyl group, the similarity-transformed Hamiltonian takes the algebraic form of a second-order differential operator with polynomial coefficients. This operator has a finite-dimensional invariant subspace in polynomials which is a finite-dimensional representation space of the algebra gl(3). This space is invariant wrt $2D$ projective transformations. $BC_2$ Lame polynomials are the eigenfunctions of this operator; supposedly, their eigenvalues define the edges of the Brillouin zones (bands).
FRANCIS VALIQUETTE, Dalhousie University
Group foliation of differential equations using moving frames [PDF]
We incorporate the new theory of equivariant moving frames for Lie pseudo-groups into Vessiot’s method of group foliation of differential equations. The automorphic system is replaced by a set of reconstruction equations on the pseudo-group jets. The result is a completely algorithmic and symbolic procedure for finding invariant and non-invariant solutions of differential equations admitting a symmetry group.
Joint work with Robert Thompson.
PAVEL WINTERNITZ, Universite de Montreal
Symmetry preserving discretization of ordinary differential equations [PDF]
We show how one can approximate an Ordinary Differential Equation by a Difference System that has the same Lie point symmetry group as the original ODE. Such a discretization has many advantages over standard discretizations. In particular it provides numerical solutions that are qualitatively better, especially in the neighborhood of singularities.
THOMAS WOLF, Brock
ZHENGZHENG YANG, UBC
http://math.stackexchange.com/questions/124222/partial-derivatives-and-orthogonality-with-polar-coordinates
# Partial derivatives and orthogonality with polar-coordinates
We are stuck with this question here because I cannot understand the following results. I find it hard to visualize this, let alone deduce from that. How to do it?
Objective: to attack the closely related problems with orthogonal bases and dot products in polar coordinates.
1. $\left(\hat{e}_{r}\partial_{r}\right) \cdot \left(\frac{1}{r}\hat{e}_{\theta}\partial_{\theta}\right)= 0$
2. $\left(\frac{1}{r}\hat{e}_{\theta}\partial_{\theta}\right) \cdot \left(\hat{e}_{r}\partial_{r}\right) = \frac{1}{r} \partial_r$
3. $\partial_\theta \hat e_r = \hat e_\theta$
4. $\partial_\theta \hat e_\theta = -\hat e_r$
Trials
1. I have some errors there, related to 3-4 apparently.
$$\left(\hat{e}_{r}\partial_{r}\right) \cdot \left(\frac{1}{r}\hat{e}_{\theta}\partial_{\theta}\right) = \left(\hat{e}_{r}\partial_{r}\right) \cdot \frac{1}{r}+ \left(\hat{e}_{r}\partial_{r}\right) \cdot \left( \hat{e}_{\theta}\partial_{\theta} \right) \not = \frac{-\hat{e}_r}{r^2}+ \left(\hat{e}_{r}\cdot\hat{e}_\theta \right) \partial_{r} \partial_{\theta}$$
-
## 1 Answer
Some hints: (not a complete solution)
• I guess once you think about it you find it clear that $$\hat{e}_{r}\cdot\hat{e}_\theta=0$$ or in words the unit vector along the radial direction is orthogonal to the unit vector along the angular direction.
• I guess the confusion you have originates from the following fact: the unit vectors $\hat{e}_r$ and $\hat{e}_\theta$ themselves depend on the coordinate $(r,\theta)$. This dependence is usually not made explicit but you should always keep that in mind. If you think about it then I guess it is clear. For $\theta=0$ the radial unit vector points along the $x$-axis whereas for $\theta=\pi/2$ the unit vector points along the $y$-axis.
• The last two points hold for an arbitrary rectangular coordinate system. What is special for the polar coordinate system is that even though $\hat{e}_{r,\theta}$ depend on $\theta$ they do not depend on $r$, i.e., $$\hat{e}_{r,\theta} = \hat{e}_{r,\theta} (\theta).$$
• The important relations $\partial_\theta \hat e_r(\theta) = \hat e_\theta (\theta)$ and $\partial_\theta \hat e_\theta(\theta) = -\hat e_r(\theta)$, you can check by taking the explicit expressions $$\begin{array}\ \hat{e}_r(\theta)&= \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix} & \hat{e}_\theta(\theta)&= \begin{pmatrix} -\sin \theta \\ \cos \theta \end{pmatrix} \end{array}$$
With these remarks, it is easy to show that (here, I make the dependence of the unit vectors on the coordinates explicit) $$\left(\hat{e}_{r}(\theta)\partial_{r}\right) \cdot \left(\frac{1}{r}\hat{e}_{\theta}(\theta) \partial_{\theta}\right) = \underbrace{\hat{e}_{r}(\theta) \cdot \hat{e}_{\theta}(\theta)}_{=0} \,\left(\partial_{r} \frac{1}{r} \partial_{\theta} \right) =0 .$$ The other results follow similarly (but I will leave the proof up to you).
-
Why is there no $r$ in the $e_r(\theta)$? – hhh Mar 25 '12 at 11:38
@hhh: There is a typo. The equation you wrote is correct. Also notice that $$\hat e_r = \frac{{\bf r}}{r} = \hat e_x \frac{x}{r} + \hat e_y \frac{y}{r},$$ but $x/r = \cos\theta$ and $y/r = \sin\theta$, so there is no $r$. – oen Mar 25 '12 at 15:54
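A quick symbolic check of the relations in the answer (SymPy; variable names are mine):

```python
import sympy as sp

theta = sp.symbols('theta')
e_r  = sp.Matrix([sp.cos(theta), sp.sin(theta)])    # radial unit vector
e_th = sp.Matrix([-sp.sin(theta), sp.cos(theta)])   # angular unit vector

print(sp.simplify(e_r.diff(theta) - e_th))    # zero: d/dtheta e_r = e_theta
print(sp.simplify(e_th.diff(theta) + e_r))    # zero: d/dtheta e_theta = -e_r
print(sp.simplify(e_r.dot(e_th)))             # 0: the basis is orthogonal
```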
http://mathhelpforum.com/calculus/43491-contractive-sequence-print.html
# contractive sequence
• July 11th 2008, 08:41 AM
particlejohn
contractive sequence
If $x_n$ is a sequence and there is a $c \geq 1$ such that $|x_{k+1}-x_{k}| > c|x_{k}-x_{k-1}|$ for all $k > 1$, then can $x_n$ converge?
We claim that if $k \in \bold{N}$, then $|x_{k}-x_{k+1}| > c^{k-1}|x_{1}-x_{2}|$. For $k = 1$ this gives $|x_{1}-x_{2}| > |x_{1}-x_{2}|$, which is false. So it cannot be a contractive sequence?
• July 11th 2008, 09:18 AM
particlejohn
although this seems too simple.
• July 11th 2008, 09:40 AM
ThePerfectHacker
Quote:
Originally Posted by particlejohn
If $x_n$ is a sequence and there is a $c \geq 1$ such that $|x_{k+1}-x_{k}| > c|x_{k}-x_{k-1}|$ for all $k > 1$, then can $x_n$ converge?
Note that $x_1 \not = x_2$, because if this were not the case then $0 = |x_2-x_1| > c|x_1 - x_0|$, which is impossible. But as you showed, $|x_{n+1} - x_n| > c^{n-1}|x_1-x_2| \geq |x_1-x_2|$. Let $0<\epsilon < |x_1-x_2|$. Then $|x_{n+1} - x_n| > \epsilon$ for $n\geq 2$. Therefore the sequence $\{x_n\}$ is not Cauchy and therefore not convergent.
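A concrete toy instance of this argument (numbers mine): take $x_n = 2^n$ with $c = 3/2$; the gaps double at each step, so they can never shrink below $|x_2 - x_1|$:

```python
c = 1.5
x = [2**n for n in range(1, 12)]            # x_n = 2**n
gaps = [b - a for a, b in zip(x, x[1:])]    # gap_k = |x_{k+1} - x_k| = 2**k
print(all(g2 > c * g1 for g1, g2 in zip(gaps, gaps[1:])))  # True: expansive
print(min(gaps))   # 2 = |x_2 - x_1|: the gaps are bounded away from 0
```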
http://mathoverflow.net/questions/119678/sorting-two-paired-lists-of-real-numbers-to-minimize-consecutive-absolute-differe/119697
## sorting two paired lists of real numbers to minimize consecutive absolute differences
Consider a set of $n$ real-valued number pairs: $(x_1,y_1), (x_2,y_2), \dots, (x_n,y_n)$. I want to find a permutation $p$ of the indices which minimizes the sum of consecutive absolute differences:
$$\sum_{j=1}^{n-1} |x_{p(j+1)} - x_{p(j)}| + \sum_{j=1}^{n-1} |y_{p(j+1)} - y_{p(j)}|.$$
I suspect this is reducible to a well known problem, so I'm looking for pointers to literature mainly, but would be happy to see a clever algorithm for doing this from scratch.
Intuitively, I want to shuffle the observations so that the graph of $x$ elements against the index is smooth looking and the same graph of the $y$ elements is also smooth looking. If I cared only about one or the other, I could simply sort with respect to those elements. I want to shuffle in such a way that I compromise between the two coordinates.
My motivation is a statistical problem of estimating a smooth curve in the plane by assuming that the coordinate dimensions are each smooth functions of an unrecorded "time index". The above problem is maximizing the smoothness of the observed data under the assumption of evenly spaced observations in time.
-
This sounds like you want a Hamiltonian path in the $L^1$ metric on $\mathbb{R}^2$ of shortest length. – Anthony Quas Jan 23 at 18:19
Yeah, I just remembered that the last time I thought about this I realized it could be formulated as a traveling salesman type problem. I need to go look at algorithms tailored to the version in the plane. – R Hahn Jan 23 at 18:21
## 2 Answers
This is known in complexity circles as rectilinear TSP. Technically this is a path version TSP rather than the more common tour. In any of these specific settings, this is known to be NP-complete since the 70s (see Papadimitriou's paper).
However, unlike some NP-hard problems, Euclidean and rectilinear TSP admit a polynomial-time approximation scheme: one can obtain a tour of cost at most $(1+\epsilon)$ times optimal in time $O(n (\log n)^{O(1/\epsilon)})$. See Arora's web page for some of the key papers on this subject. You'll also find there a nice survey on approximation results (postscript link), outlining the key ideas.
The survey mentions a subexponential $2^{O(\sqrt{n})}$ exact algorithm for the Euclidean case by Smith (1988). While I would not be surprised if it adapts readily to $L^1$, I don't know enough about it to say for certain.
-
Here is something to try. Even if it fails, knowing why it fails might be useful.
Consider the Steiner tree joining your set of points. It should have minimal length and can give you a goal. If the Steiner tree has branches at points that are not on your vertex set, consider a tour that does one subtree followed by the other subtree followed by the last subtree in decreasing order of branch length.
Now that I think on it, there is potential for enough triple branch points that you might be able to reduce something like exact three cover to this problem. Locally, though, you might consider Steiner tree suggestions as to order. If not Steiner tree, then whatever tree is suggested by the Manhattan metric.
EDIT: indeed, months in the laboratory can save hours spent in the library. As Gerry Myerson suggests in his comment, there are computationally infeasible problems related to Steiner trees, and I have likely suggested one above. Moreover, the problem of rectilinear Steiner trees has been studied, thus giving more search terms for the original poster to use. Even so, there are good approximations to some Steiner tree problems, and I suspect this problem will be similar. END EDIT.
Gerhard "As Lovely As A Tree" Paseman, 2013.01.23
-
As a wicked test case, take the vertices of some not too small iterate of Sierpinski's triangle, rotated to produce distinct x and y coordinates, and see what you get. Gerhard "Maybe You Get A Paper" Paseman, 2013.01.23 – Gerhard Paseman Jan 23 at 20:53
How do you propose to find the Steiner tree? That's already computationally infeasible, no? – Gerry Myerson Jan 23 at 23:04
I don't know. I believe a greedy algorithm can find a Steiner tree, at least a local minimum, by adding one vertex at a time. At the time of posting the answer, I thought a global minimum could be found in polytime, and was feasible. Gerhard "Others Can Make Trees Too" Paseman, 2013.01.23 – Gerhard Paseman Jan 23 at 23:22
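For small instances the exact optimum can still be checked by brute force; here is a tiny sketch of the $L^1$ path objective from the question (toy data; exponential in $n$, so only useful as a sanity check against heuristics):

```python
import itertools

def l1_path_cost(pts, order):
    # Sum of consecutive |dx| + |dy| along the permutation.
    return sum(abs(pts[a][0] - pts[b][0]) + abs(pts[a][1] - pts[b][1])
               for a, b in zip(order, order[1:]))

def best_order(pts):
    # Exact minimizer by exhaustive search; only viable for n up to ~10.
    return min(itertools.permutations(range(len(pts))),
               key=lambda p: l1_path_cost(pts, p))

pts = [(0.2, 1.3), (1.1, 0.4), (0.9, 2.0), (2.2, 1.8)]
print(best_order(pts))
```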
http://math.stackexchange.com/questions/221319/diagonalization-theorem-and-convergence
# Diagonalization theorem and convergence
Let $\{f_{n}\}$ be a sequence of pointwise bounded continuous functions on a separable metric space $X$. There is a common diagonalization theorem (see Baby Rudin, Theorem 7.23) which states that if $E$ is a countable subset of $X$, then we can find a subsequence $\{f_{n_{k}}\}$ which converges on every point of $E$ as $k\to \infty$.
My question is: if $E$ is also dense, must $\{f_{n_{k}}\}$ converge for every point in $X$? The Arzela theorem states that if equicontinuity and compactness of $X$ are also assumed, then we have uniform convergence. What results are there if one or both of these assumptions are removed?
-
## 1 Answer
Let $f_n(x) = \max\{1-n\operatorname{dist}(x,\mathbb{Z}),0\}$ for $n \geq 4$. The graph of $f_n\colon \mathbb{R} \to [0,1]$ has triangular spikes of height $1$ and with base of length $\frac{2}{n}$ centered around the integers.
With this picture in mind, you can see that $$\lim_{n\to\infty} f_n(x) = \begin{cases} 0, & \text{if }x \notin\mathbb{Z}, \\ 1, & \text{if }x \in \mathbb{Z}. \end{cases}$$ If you put $g_{2n}(x) = f_n(x)$ and $g_{2n+1}(x) = f_{n}(x-\frac{1}{2})$ then you obtain a sequence of continuous and bounded functions $g_n$ that converges if and only if $x \notin \frac{1}{2}\mathbb{Z}$.
Nothing prevents you from obtaining a sequence like $(g_n)_{n\in\mathbb{N}}$ from Rudin's theorem you describe (for example taking the sequence $g_n$ and taking a dense set $E$ in the complement of $\frac{1}{2}\mathbb{Z}$).
Restricting the $g_n$'s to the interval $[-100,100]$ also shows that compactness alone doesn't help.
However, the sequence of functions in this example is not equicontinuous.
You can show that if the $f_n$ are equicontinuous and converge pointwise on a dense subset $E$ of $X$ then their limit $\tilde{f}(e) = \lim_{n\to\infty} f_n(e)$ is uniformly continuous on $E$ and hence extends uniquely to a continuous function $f$ on $X$ (this is proved as in the argument for Ascoli's theorem). Then using equicontinuity one can even show that $f_n|_K \to f|_{K}$ uniformly for each compact $K \subset X$.
-
Thanks for the response. – Alex Lapanowski Oct 26 '12 at 15:03
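A small numerical illustration of the spike functions in the answer (code mine):

```python
import numpy as np

def f(n, x):
    # f_n(x) = max(1 - n * dist(x, Z), 0): unit spikes of base 2/n at integers.
    return np.maximum(1 - n * np.abs(x - np.round(x)), 0.0)

for n in [4, 40, 400]:
    print(n, f(n, 0.0), f(n, 0.3))   # f_n(0) = 1 for every n, f_n(0.3) -> 0
```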
http://dsp.stackexchange.com/questions/7864/dft-as-convolution-question
# DFT as convolution question
I have tried to make this question as readable and consistent as possible. The short of it is that I am trying to ascertain how one gets from the math equation shown (which I understand) to the correct implementation (which I also understand). However, how/why one gets from one to the other seems unclear to me.
Here is my setup:
We have an $M$ length 'original' signal $x[m]$: (Equation 1)
$$x[m] , \space m = 0,1,...M-1$$
We wish to take a 'zoom-DFT' of it, where the DFT length will be $N$, with $N > M$.
We thus now have an $N$ length zero-padded new signal, lets call it $x_p[n]$, where: (Equation 2) $$x_p[n], \space n = 0,1,...N-1$$
Our discrete time variable is now $n = 0,1,...N-1$, and our discrete frequency variable is $k = 0,1,...N-1$.
The $N$-length DFT of a signal is defined as such: (Equation 3)
$$X[k] = \sum_{n=0}^{N-1} x_p[n]\space e^{-j\frac{2\pi nk}{N}}$$
If the $nk$ in the DFT equation is written as $\frac{-(k-n)^2 + n^2 + k^2}{2}$, then the DFT can now be written as: (Equation 4)
$$X[k] = e^{-j\frac{\pi k^2}{N}} \space \sum_{n=0}^{N-1} x_p[n]\space e^{-j\frac{\pi n^2}{N}} \space e^{j\frac{\pi (k-n)^2}{N}}$$
This expresses the full DFT, (i.e, the DFT from 0 to nyquist), as a convolution. To evaluate the 'Zoom-DFT', where we want to interrogate only frequencies between $f_1$ and $f_2$, and so the following minor modification is made: If we let $F_{\Delta} = \frac{(f_2 - f_1)}{f_s}$, and $F_1 = \frac{f_1}{f_s}$, then the zoom-DFT, $X_{z}[k]$ becomes: (Equation 5)
$$X_z[k] = e^{-j\frac{\pi F_{\Delta} k^2}{N}} \space \sum_{n=0}^{N-1} x_p[n]\space e^{-j\frac{\pi n^2}{N}} \space e^{-j 2\pi \space F_1 n}\space e^{j\frac{\pi F_{\Delta} (k-n)^2}{N}}$$
We can then simplify, if we let: (Equation 6)
$$a[u] = x_p[u]\space e^{-j\frac{\pi u^2}{N}} \space e^{-j 2\pi \space F_1 u} \\ b[u] = e^{j\frac{\pi F_{\Delta} u^2}{N}}$$
, where $u$ is just some general dummy variable, (you will see why I picked it later). Anyway, now finally, we simply have: (Equation 7)
$$X_z[k] = b^*[k] \space \sum_{n=0}^{N-1} a[n] \space b[k-n]$$
This, as we are told here, and here, page 183, is how the DFT can be re-written as a convolution.
Great! So far so good. The problem comes in interpretation vs implementation.
The problem:
My interpretation of (Equation 7) is that $a[u]$ is of length $N$, and $b[u]$ is also of length $N$, where we evaluate them for $u = 0,1,...N-1$. Then, we do a circular convolution of the series $a[u]$ and $b[u]$ to obtain a series also of length $N$ (which is our desired DFT length), before we finally perform an element-by-element post-multiply with another series, also of length $N$. I don't so much care about the post-multiply; that is not the point of this question. The point here is that $a[u]$ is length $N$, $b[u]$ is also length $N$, and $u$ for both of them is evaluated for $0 \leq u \leq N-1$, and then we do a circular convolution, modulo-$N$.
If you however now take this interpretation and implement it, you will not get the right answer. In fact, you will only ever get the right answer if the $u$ variable for evaluation of $a[u]$ is taken from $u = 0,1,...N-1$ (same as above), BUT the $u$ variable for evaluation of $b[u]$ is taken from $-(M-1) \leq u \leq N-1$. So now the series $b[u]$ is not of length $N$ anymore; it is of length $N+M-1$. Then, if we do a modulo-($N+M-1$) circular convolution and extract the indices from $M$ to $M+N-1$, we will get the right answer.
...How, from (Equation 7), does one possibly get this 'correct' interpretation though? This, I cannot seem to explain, however I understand everything else.
FWIW, I have sat down and done the computation 'by hand', so I can 'see' why $b[u]$ MUST be taken from $-(M-1) \leq u \leq N-1$. (It would otherwise be impossible to compute the DFT for $k = 0,1,...N-1$.) However I do not understand how one goes from (Equation 7) to this conclusion. Right now, to me, (Equation 7) just looks like a garden variety circular convolution, with no hints as to the details of how exactly $a[u]$ and $b[u]$ must be evaluated.
Put another way - AFAIK, when we see (Equation 7), we simply assume that $a[u]$ and $b[u]$ are evaluated from $0 \leq u \leq N-1$, and then we do a circular convolution, modulo-$N$ - how/why then do we derive the proper evaluations for $a$ and $b$ from this equation??
Thanks!
-
Looking at the text, it states that the convolution is linear, not circular and that to do this, values of bn are required from -(M-1) to (N-1). Then I think the circular convolution and linear convolution become equivalent. – B Z Feb 17 at 5:01
@BruceZenone Well, I know that this is indeed a linear convolution. However, what I do not understand is, how is it obvious from the equation, that b[u] must be evaluated from $-(M-1) \leq u \leq N-1$, and not $0 \leq u \leq M+N-1$? – Mohammad Feb 17 at 21:14
## 1 Answer
Your interpretation that a[u] and b[u] are both N-length sequences is incorrect. a[u] is of length N because it is effectively truncated by the length of your input sequence. The period of b[u] is not necessarily N. Depending on the value of N, b[u] may not even be periodic, so what you have in Equation 7 is a linear convolution of two sequences. One sequence (a[u]) has length N, the other (b[u]) has length unknown. The limits of summation in the convolution equation are strictly set by the length of $a[u]$ and don't imply circular convolution or anything about the length of sequence b[u].
Next you have to consider what range of values of b[u] are required to compute the convolution. Straight from the form of Equation 7, you see that b[u] in general must be defined in the range -(N-1) to N-1. The negative boundary is set by the most negative b[u] value required to calculate X[0] (i.e. you need b[0-(N-1)] for this calculation). The positive boundary is set by the maximum spectral step you want to evaluate in the chirp-Z transform (X[k] max). From the way the question has been posed, X[k] max is best interpreted as $X[N-1]$.
The reason your computations work with a range that only goes back as far as -(M-1) is because you started with an M-length sequence and zero-padded it up to length N; the zero padding makes a[n] vanish for n >= M, so the b[u] values going back further than -(M-1) are never actually used.
There is an interesting write-up of Chirp-Z along with Goertzel's algorithm that I think puts this in proper perspective: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-341-discrete-time-signal-processing-fall-2005/lecture-notes/lec20.pdf
-
Bruce, FYI, I edited the question and put an equation label for every equation, so that discussing them becomes easier. Thanks for your answer. Also, let us just put some numbers to assist. Say $M = 10$, (original length signal), and $N = 20$, (DFT length desired). – Mohammad Feb 19 at 15:35
So, I have struggled to find the words that describe my confusion, while simultaneously keeping it succinct, and there are two parts: Part1) ...How does a person, looking at equation 7,...know before hand, the length of what 'b' is supposed to be? How is a person, supposed to look at eq-7, and say "ah! This is totally a M+N-1 = 29 length vector". Now, if I manage to convince myself of Part1 (and I think I can, because I worked it by hand, and can see that b must be an M+N-1 length vector for the evaluation of the DFT), then the Part2) question is, WHY must b be evaluated from u = -9 to 19? – Mohammad Feb 19 at 15:39
I think what I am trying to get at is, did the 'physics' of the problem, dictate to us what the 1) length of and 2)evaluation indicies for 'b' must be, or was this somehow obvious to a casual observer simply looking at equation-4, or equation-7? This is the root of it. – Mohammad Feb 19 at 15:40
So Bluestein employed a mathematical trick to rearrange the original DFT so an FFT could be exploited in computing a convolution instead of directly computing the DFT. Once the equations are massaged to reveal the convolution, nothing is obvious and a casual observer would be confused without carefully inspecting the convolution to identify what information is actually required (i.e. what values of b[u] are required) to properly carry out the computation. If you try to carry out the convolution, you immediately are hit with a need for values of b[u] going back to -[N-1]. – B Z Feb 19 at 16:15
BTW - I see 3 common methods in use for doing a ZOOM FFT. 1) shift the frequency range of interest down towards DC, down sample and recalculate the FFT. 2) Zero pad the original data to increase frequency resolucion. 3) Use Bluestein's method and calculated only over the range of interest. You seem to be mixing options 2 and 3. – B Z Feb 19 at 16:41
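Since the whole thread is about matching (Equation 7) to an implementation, here is a minimal runnable sketch (names mine) with $F_1 = 0$ and $F_{\Delta} = 1$, so it reduces to the ordinary $N$-point DFT of the zero-padded input and can be checked against an FFT. It makes explicit that $b[u]$ is tabulated on $-(M-1) \leq u \leq N-1$ and that the convolution is linear:

```python
import numpy as np

def czt_dft(x, N):
    # N-point DFT of a length-M input (implicitly zero-padded) via Equation 7.
    M = len(x)
    n = np.arange(M)
    a = x * np.exp(-1j * np.pi * n**2 / N)   # a[n]; zero-padding makes the rest 0
    u = np.arange(-(M - 1), N)               # b[u] needed only on -(M-1)..N-1
    b = np.exp(1j * np.pi * u**2 / N)
    conv = np.convolve(a, b)                 # LINEAR convolution
    k = np.arange(N)
    # b's index u = 0 sits at array position M-1, so
    # sum_n a[n] b[k-n] is conv[k + M - 1].
    return np.exp(-1j * np.pi * k**2 / N) * conv[k + M - 1]

x = np.random.randn(10)                                  # M = 10
print(np.allclose(czt_dft(x, 20), np.fft.fft(x, 20)))    # True for N = 20
```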
http://physics.stackexchange.com/questions/8076/how-does-atp-transfer-energy-to-a-reaction
# How does ATP transfer energy to a reaction?
This is a question for which I've found it surprisingly hard to find a good answer. Biology texts talk mystically about the ATP->ADP reaction providing energy to power other reactions. I'd like to know some more details. Is this following roughly right ?
1. Each reaction in a cell has a specific enzyme.
2. Each enzyme has binding sites for, say, two molecular species AND for an ATP molecule.
3. When a reaction takes places, the two species bind to the enzyme, and a little later, an ATP molecule binds.
4. For some reason (why ?), the ATP->ADP reaction is now energetically favourable, so the high-energy bond breaks.
5. This releases electromagnetic energy at some characteristic frequency.
6. Certain bonds in the enzyme have a resonant frequency that allow them to absorb this electromagnetic energy (the EM energy disturbs molecular dipoles ?).
7. The 3D structure of the enzyme is disturbed (i.e. it bends) in such a way that the 2 molecular species are mechanically forced together, providing sufficient activation energy for the reaction in question.
8. The newly formed species no longer binds nicely to the enzyme (why ?) so it detaches, as does the ADP, which also doesn't bind as nicely as ATP.
9. The End.
So, is that an accurate summary ? Anyone care to add some more physical details ? (like all the thermodynamics and quantum chemistry I have no idea about)
-
My impression is that such things are generally too complicated for us humble physics people to deal with; you might have better luck with a chemistry board. Anyway, it's too complicated for me to tell you much about it! – Mark Eichenlaub Apr 5 '11 at 9:13
That is a pretty nice summary for someone going off of intuition alone. I think physics is written all over the question but as @Mark said this question might get better answers somewhere else. I for one, would be very happy to see any good answers! – user346 Apr 5 '11 at 9:37
@Mark, @Deepak: Vote to close if you feel this is off topic. Comments and no close vote are not useful :-) – Sklivvz♦ Apr 5 '11 at 10:15
@Mark - I suspect that someone on a chemistry board somewhere is redirecting someone with the same question to a physics board :-) – user2925 Apr 5 '11 at 10:17
@englishphysics, "someone with the same question to a physics board" -- I don't think so; someone really in the business of catalysis or enzymes might give you some details of a special process, where a little bit is known, or some analogies from non-biological catalysis. – Georg Apr 5 '11 at 10:30
5 Answers
This is rather a strange model; I'll take a chance to explain it from scratch. Basically, all chemical reactions look like this:
$$S_1 + S_2 + \cdots + S_n \leftrightarrows P_1 + P_2 + \cdots + P_n$$
which means that a bunch of atoms forming a few substrate molecules may rearrange into a few product molecules. Both states have some energies, and one postulates that the reaction involves some intermediate state where everything is mixed up (this is a simple view; there may be several such states, hidden subreactions, and so on -- I will not discuss it). This can be plotted as an energy profile along the reaction coordinate,
where S is the substrate state, with energy $E_S$; then we have an intermediate state of a higher energy $E_I$; and finally the product state of energy $E_P$. Now, the core of chemistry:
• The balance between the substrate state and the product state after infinite time is governed only by the energy difference $E_S-E_P$ (and temperature, pressure, and other stuff that we assume constant)
• The speed of the reaction depends on the intermediate state energy $E_I$, because we assume that the substrates must gather $E_I-E_S$ from the thermal fluctuations and collide to form the intermediate state.
An enzyme forms a proper environment and usually improves spatial organisation of substrates to decrease $E_I$ and greatly improve the reaction speed.
Now, in a cell, we have dozens of different molecules and thus a huge number of possible reactions. Moreover, the cell is under constant matter exchange with its environment, so the situation is dynamic and far from the static equilibrium defined by the $E_S-E_P$ differences. Because of that, the enzymes' ability to change reaction speeds is enough to form a dynamic equilibrium, which we know as metabolism.
So I can finally go back to the ATP role. Let's consider a situation where we have a reaction $S \leftrightarrows P$ but $E_S<E_P$; this reaction will of course rather go the opposite way than we want, and the equilibrium will be shifted toward the substrates. Now, let's say we also have a reaction $S_X \leftrightarrows P_X$ for which $E_{S_X}\gg E_{P_X}$. One can expect that both reactions may co-occur, creating a reaction $S+S_X \leftrightarrows P+P_X$ which would have a positive $E_{S}-E_{P}$ difference; however, in most cases this would be very improbable (corresponding to a large $E_I$). Yet this obstacle can easily be overcome, with the help of an appropriate enzyme.
To sum up, the enzyme's role is to temporarily shift the balance of an endergonic reaction by increasing the probability of its co-occurrence with the exergonic ATP to ADP+P transition.
This is of course a trivial and very wrong picture =) You can start learning more here.
-
This picture is quite right, but the "hard core" here is: how is the energy of the exothermic ATP=>ADP transition transferred to some reaction running "uphill"? There is no real evidence for that, but it is clear that the transfer is "mechanical" because nothing else is possible. (The energy of ATP=>ADP is some kilocalories per mole; nothing electronic or electromagnetic is in that range.) – Georg Apr 5 '11 at 11:30
@Georg Sure, I just wanted to focus OP on general facts rather than on the details of enzyme mechanics (which is not only enzyme-specific but also the place where hard-core quantum chemistry applies). – mbq♦ Apr 5 '11 at 13:00
While this is potentially useful info, it seems to be more about the general theory of equilibria of such reactions, than the rather specific detail I'm asking about. Thanks anyway. – user2925 Apr 5 '11 at 13:48
@englishphysics The problem is that your question is too general; the details about enzyme-substrate-ATP interactions are just enzyme specific. – mbq♦ Apr 5 '11 at 17:48
The double-welled potential diagram mbq posted is an operative diagram for the energetic state of a molecule, call it M, which is phosphorylated: ATP + M --> ADP + M-P. The phosphorus atom binds onto an amino acid residue group in a polypeptide chain, often threonine or tyrosine in the case of kinases. Kinases are enzymes which initiate biochemical pathways. The amino acid residue which is phosphorylated has dihedral bond angles with adjacent amino acid residues in the chain. The unphosphorylated and phosphorylated bond angles are substantially different, reflecting the two minima in the double-welled potential. The result is that the enzyme exhibits two conformational shapes. These shapes can act as binary conditions, literally on and off, for turning a biochemical pathway on and off.
This may be compared to Landauer's principle, which relates a binary information condition to thermodynamics. ATP contains 76 J/mole of energy, which we can identify as the energy E. The work or useful energy extracted is the change in energy from the lower portion of the well to the higher. Also there is energy in the entropy due to a chemical change, which for the two states is $k\,\ln(2)$. Therefore, the free energy per mole, $F = 76\ \mathrm{J/mole}$, must be greater than the entropy term for a change in the configuration or shape of the kinase or other molecule.
-
I always pictured it in the following way, even though this is not what you read on Wikipedia, or in chemistry textbooks:
ATP has three negatively charged phosphate groups, which are highly repulsive, but stuck together by a chemical bond which is too strong for the repulsion to overcome by itself. When ATP binds to protein, the protein breaks one of the phosphates off, and this phosphate pushes hard on the other two as it zooms away, doing mechanical work. This mechanically forces parts of the protein to close or open, providing energy for the reaction. There is nothing more magical than electrostatic forces between phosphate groups, and the forces are mostly electrostatic, not entropic.
The theoretical values for ATP energy content are always given in terms of the maximum energy that could be theoretically extracted, given the temperature and concentrations, if you run a perfect adiabatic heat engine. But I think this is misleading, because the proteins aren't that sophisticated. They waste a lot of the free energy, because they are just using mechanical forces, they let the outgoing phosphate heat up the water, they aren't doing things particularly adiabatically, and I believe the actual amount of useful work done for a protein by ATP is a fraction of the free energy content, essentially only the electrostatic repulsion potential energy between the $\gamma$ phosphate and the other two, and so it doesn't depend on the temperature or on the concentration.
-
It's usually more indirect than that. You will need to know biochemical statistical thermodynamics of irreversible processes rather better. I'll look for some references.
To go to your example of an Enzyme E catalyzing the exothermic reaction summarized by X + Y + E + ATP -> X-Y + E + ADP + P(i), I agree with the answer above that (4) the ATP -> ADP + P(i) reaction in water at standard lab conditions is always energetically favorable, i.e. free energy is always released, and entropy increased as a result of this reaction. It is this increase in entropy which will drive the reaction in the forward direction.
Eventually, the free energy released will be in the form of many far away infrared photons or phonons, which is much more probable (many more such states) than having it all localized in the single bond in ATP, so the reaction will go statistically in one direction.
Your points 5 and 6 are overly specific. The specific mechanism you refer to, resonant energy transfer, can happen in some rather special cases, but it is unusual. Instead, more often enzymes form a temporary labile phosphorylated intermediate E-P, which changes the charge in the active site and therefore the shape of the active site, via (a) E + ATP -> E-P + ADP, (b) H2O + E-P -> E + P(i), as part of the overall reaction summarized above. No resonance is necessary, since charge effects do all the work.
Other enzymes work by forming labile intermediates with one of the substrates as part of the overall reaction summary: E + X -> E-X. Many other mechanisms are also possible.
The main point is that everything does not need to happen at once, since the statistics will determine a net direction for the coupled reactions which comprise the overall reaction to go, based upon the overall free energy change.
Another point is that the enzyme is always changing shape somewhat from the ordinary room temperature fluctuations it undergoes, according to the Boltzmann formula.
To reiterate, reactions can be driven forward overall by statistics even if the early steps are energetically unfavorable. So the whole situation is much more diverse than you described (which is why it is chemistry). A good example of this last mechanism is the synthesis of nucleic acids from nucleotide triphosphates, the overall reaction being driven forward by the much later release of free energy associated with the hydrolytic splitting of the previously released pyrophosphate molecule:
H2O + P-P -> 2 P(i).
-
Thanks. This is a pretty nice answer. However, I have a couple of questions: a) could you expand on why having energy in the bond is a lower entropy state than having radiated energy ? b) I'm not sure what you mean by "changes the charge in the active site" - this seems to be the heart of the energy transfer mechanism, so I'd like to know exactly what's going on here - you're saying that the molecular deformation occurs solely due to coulomb forces that arise when the intermediate is formed ? (And I've noticed that I misused the term "energetically favourable" in the original post) – user2925 Apr 5 '11 at 13:41
a) This is very rough, but it's the same idea as why gas molecules fill out a room, rather than stay, by chance, in one corner. There are many more ways of distributing the free energy when it's divided up into small packets, than when it's all concentrated in the one bond. So it is vastly more likely that the macrostate looks like the energy packets diffused all over. The logarithm of the number of ways (microstates) is the entropy. – sigoldberg1 Apr 5 '11 at 16:29
b) I just meant that there are many different mechanisms of catalysis, even when ATP is involved. You have to find out about each one individually, by searching under "reaction mechanism", or "active site". Different enzymes use fundamentally different mechanisms. c) You seem to be mostly asking about reactions in which the high free energy in ATP is transferred to another high free energy bond in another reaction product. Is that right? – sigoldberg1 Apr 5 '11 at 16:42
Finally, I think you are confusing the free energy changes of the reaction with the free energy of activation. The hydrolysis of the ATP provides the first to make the reaction products more likely than the substrates, the enzyme or catalyst lowers the second to increase the rate. They are different. – sigoldberg1 Apr 5 '11 at 16:59
@sigoldberg: re: entropy - I was wondering how you could measure the entropy of the bond - I'm only familiar with entropy in classical thermodynamics where S = k log W, where W is the volume of phase space occupied - it's not clear to me what an analog of this is for bond energy - it's a side issue though. – user2925 Apr 5 '11 at 18:09
There are some good answers here and I'll agree it is important to understand the thermodynamical aspects of enzyme function, but I sympathize with the OP on this point: down at the lowest level you still want to describe what happens with the bonds and energy, even if the macroscopics of the system (temperature, pressure, configuration) control the reaction speeds. After all, chemistry is low-energy physics, and thermodynamics is big-number physics.
Below is linked a paper (from 2005) which summarizes the then-current knowledge about ATP hydrolysis in the F1-ATPase, one of the most strikingly mechanical enzyme complexes in nature (see the linked YouTube video!). It's basically a stator and a rotor, which can convert between ATP<->ADP+P and mechanical rotation of the rotor. It is one of the crucial proteins that power respiration in the mitochondria of human cells in a very mechanical way - protons fall down a gradient through the mitochondrial membrane, pulling the rotor around, which turns ADP+P into ATP. In bacterial flagella (tails) it is used the other way: ATP powers it as a motor which enables the bacteria to swim.
The 3D-structure of the F1-ATPase has been known for some time and there are some beautiful images of it on the web but the process is not completely understood. The paper discusses results from experiments and atomic simulations (one of my research interests) which give clues to exactly how the ATP bond energy is transferred.
As in most protein functions, you have a very mechanical function (as the OP suggests in pt. 7) in combination with redistribution of Coulombic charge as well as hydrogen and covalent bonds. Some speculate that elastic storage of mechanical energy can participate (and hence the possibility of phonons, I guess) - but I have never heard of any suggestion of intra-protein photon exchange as the OP suggests in pts. 5-6.
I tried posting the link to the researcher's homepage of the paper linked below but this site didn't allow me posting more than 2 links in a post :) Sorry..
Zooming in on ATP hydrolysis in F1
ATP F1 ATPase youtube video
-
Welcome to Physics! Please use the Post answer button only for actual answers. You should modify your original question to add additional information. – Sklivvz♦ Apr 5 '11 at 13:15
Hi, I did add answers to pt 5-7 and the OP does indeed ask for more information in the question. OTOH the other answers with general thermodynamics does not answer how the energy is redistributed.. – Bjorn Wesen Apr 5 '11 at 13:24
What's the down-vote for? This is a good reply with some useful references. Sorry about that @Bjorn. Not everyone here has manners. The downvoter should have left a brief note at the very least. Welcome to physics.SE! – user346 Apr 5 '11 at 13:32
Hello Björn, are those movies (i.e. the evidence behind them) solid? "Molecular biologists" often make theories where in science one would say "working model/hypothesis". This: "After all, chemistry is low-energy physics, and thermodynamics is big-number physics" is simply wrong. – Georg Apr 5 '11 at 13:54
@Georg: the rotation itself of the F1 ATPase has been directly observed and measured, but one of the points here were that the exact mechanism of ATP-hydrolysis work-extraction is not conclusively known generally and I used this enzyme as an illustration. I agree that my remark about thermodynamics is a long-shot, but I figured the OP wanted a more direct explanation than thermodynamics (which in these cases usually implies longer timescales or averages). For example you can say that a reaction goes faster at higher T, but you can still ask what happens at a lower level. Different issues. – Bjorn Wesen Apr 5 '11 at 18:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9482866525650024, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/190532/what-is-the-significance-of-the-graph-isomorphism-problem
|
# What is the significance of the graph isomorphism problem?
It seems that graph isomorphism is an overwhelmingly interesting problem, particularly computationally. Why is that? What are the (theoretical and practical) implications of the existence of an algorithm which decides isomorphism of graphs effectively? (I have assumed that there is still no such algorithm known to humans; am I right?)
-
## 1 Answer
You are mistaken that there is no known algorithm for graph isomorphism. There is an obvious algorithm: given graphs $G$ and $H$ with the same number of vertices, enumerate all possible bijective mappings from the vertices of $G$ to the vertices of $H$, and then check (in $O(v^2)$ time) to see if each mapping is an isomorphism. This algorithm is guaranteed to terminate; it has running time $O(v!v^2)$.
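For concreteness, here is a minimal Python sketch of this obvious algorithm (the adjacency-set encoding of graphs is an assumption here, not part of the answer):

```python
from itertools import permutations

def isomorphic(adj_g, adj_h):
    """Brute-force isomorphism test for graphs on vertices 0..v-1,
    given as lists of neighbor sets."""
    v = len(adj_g)
    if v != len(adj_h):
        return False
    for perm in permutations(range(v)):  # all O(v!) bijections V(G) -> V(H)
        if all((perm[j] in adj_h[perm[i]]) == (j in adj_g[i])
               for i in range(v) for j in range(v)):  # O(v^2) edge check
            return True
    return False

# A 4-cycle and a relabelled 4-cycle:
c4 = [{1, 3}, {0, 2}, {1, 3}, {0, 2}]
c4_relabelled = [{2, 3}, {2, 3}, {0, 1}, {0, 1}]
assert isomorphic(c4, c4_relabelled)
```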
As I understand it, though, the theoretical significance of this problem is that it is suspected to be NP-intermediate (NPI). That is:
1. It is clearly in NP. (The NTM guesses a possible isomorphism and then checks it in polynomial time)
2. It is suspected not to be NP-complete
3. It is suspected not to be in P.
The existence of an NPI problem is equivalent to P≠NP, so NPI problems are interesting for at least this reason.
Garey and Johnson give the following reasons for suspecting that graph isomorphism might be NPI:
Researchers who have attempted to prove that GRAPH ISOMORPHISM is NP-complete have noted that its nature is much more constrained than that of a typical NP-complete problem, such as SUBGRAPH ISOMORPHISM. NP-completeness proofs seem to require a bit of leeway; if the desired structure $X$ (subset, permutation, schedule, etc.) exists, it should still exist even if certain aspects of the instance are locally altered. For example, a function $f$ will be an isomorphism between a graph $H$ and a subgraph of a graph $G$ even if we add edges to $G$ or delete edges not in the image of $f$. However, if $f$ is an isomorphism between $H$ and $G$ itself, then any change in $G$ must be reflected by a corresponding change in $H$, or else $f$ will no longer be an isomorphism. In other words, proofs of NP-completeness seem to require a certain amount of redundancy in the target problem, a redundancy that GRAPH ISOMORPHISM lacks. Unfortunately, this lack of redundancy does not seem to be much of a help in designing a polynomial time algorithm for GRAPH ISOMORPHISM either, so perhaps it belongs to NPI.
(Computers and Intractability, pages 155–156)
In contrast, the subgraph isomorphism problem is known to be NP-complete. This is the problem of deciding, given graphs $G$ and $H$, whether $G$ is isomorphic to some subgraph of $H$. An efficient solution to this problem obviously solves Clique, Maximum Independent Set, Hamiltonian Cycle, and other similar problems.
-
Nice answer! Btw. I think the poster meant that there is no known "effective" (which probably means polynomial) algorithm that decides graph isomorphism. Which is true. – Gabor Csardi Sep 4 '12 at 4:47
@GaborCsardi I wondered that myself. But "effective" is an unusual word to use, and has a precise technical meaning in this context, so I took them at their word. – MJD Sep 4 '12 at 10:05
@MJD: Note that there are some languages where the word for "efficient" is disappointingly close to "effective". For example, "efficient" is effektiv in Danish. – Henning Makholm Sep 4 '12 at 13:44
@Henning Okay, but I'm not sure it would be useful for me to try to second-guess the poster's expressed meaning on what they might possibly have meant had it been written in some other language. I think a better strategy might be to take them at their word and then wait to see if they post clarifications in response. – MJD Sep 4 '12 at 22:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9532471895217896, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/93958-sum-periodic-functions-print.html
|
# Sum of periodic functions
• June 28th 2009, 08:00 PM
Bruno J.
Sum of periodic functions
This theorem was inspired by this thread. I have found a proof of it but I'll post it later; I want to see if somebody can find another one!
Let $m(x), n(x)$ be continuous real-valued functions having periods $p,q$. Show that $m(x)+n(x)$ is periodic if and only if $p,q$ are linearly dependent over $\mathbb{Q}$.
• June 29th 2009, 01:59 AM
malaygoel
Quote:
Originally Posted by Bruno J.
This theorem was inspired by this thread. I have found a proof of it but I'll post it later; I want to see if somebody can find another one!
Let $m(x), n(x)$ be continuous real-valued functions having periods $p,q$. Show that $m(x)+n(x)$ is periodic if and only if $p,q$ are linearly dependent over $\mathbb{Q}$.
If $m(x) + n(x)$ has period T, then
$T=ap$
$T=bq$
where a,b are natural numbers.
hence, $ap=bq$
or,
$p=\frac{b}{a}q$
now, what is meant by linear independence here?
• June 29th 2009, 08:31 AM
Bruno J.
How can you be sure that the period of the sum is a multiple of both $p,q$? You need to show that this is true; it's the hard part of the problem.
If $x_1,...,x_n$ are linearly independent over $K$, then whenever $k_1x_1+...+k_nx_n=0$ with $k_1,...,k_n \in K$ we must have $k_1=...=k_n=0$. If they are linearly dependent then they are not linearly independent. In your post, $p-\frac{b}{a}q=0$ is a linear dependence relation over the rationals.
But you have to show that we must have $T=ap=bq$ for some integers $a,b$.
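For a concrete numerical illustration (my own example, not from the thread): take $p=2$, $q=3$, so that $3p - 2q = 0$ is a linear dependence over $\mathbb{Q}$, and the sum indeed repeats with period $ap = bq = 6$:

```python
import math

def f(x, p=2.0, q=3.0):
    # sin(2*pi*x/p) has period p; sin(2*pi*x/q) has period q
    return math.sin(2 * math.pi * x / p) + math.sin(2 * math.pi * x / q)

T = 6.0  # = 3*p = 2*q
assert all(abs(f(x) - f(x + T)) < 1e-9 for x in [0.1 * k for k in range(100)])
```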
• June 29th 2009, 02:15 PM
pedrosorio
For a,b integers:
m(x + ap) + n(x + bq) = m(x) + n(x) (1)
Therefore ap=bq defines all periods of the sum function.
If the sum is periodic, p = b/a * q, where b/a obviously belongs to Q.
Thus c1*p + c2*q = (c1 * b/a + c2) * q; for c1 = -a/b and c2 = 1 (both ∈ Q) it is 0, that is, if the sum is periodic then p,q are linearly dependent over Q (2)
If p,q are linearly dependent over Q, p = -c2/c1 * q for c1,c2 rational numbers, then expressing c1,c2 as c1N/c1D and c2N/c2D, where c1N,c1D,c2N,c2D are integers, we get c1N*c2D*p = -c2N*c1D*q.
Taking (1) and making a=c1N*c2D and b=-c2N*c1D, ap=bq, so if p,q are linearly dependent over Q, the sum function is periodic (3).
m(x) + n(x) is periodic if (3) and only if (2) p,q are linearly dependent over Q.
(Sorry for the ugly proof) (Thinking)
• June 29th 2009, 02:51 PM
Bruno J.
Quote:
For a,b integers:
m(x + ap) + n(x + bq) = m(x) + n(x) (1)
Ok so far.
Quote:
Therefore ap=bq defines all periods of the sum function.
How so? If $T$ is the period of $m(x)+n(x)$, then $m(x)+n(x)=m(x+T)+n(x+T)$; but that does NOT imply $m(x)=m(x+T)$ and $n(x)=n(x+T)$. You are making a huge leap here.
If you do not use the continuity of $m,n$ your proof is certainly flawed because we can construct non-continuous $m,n$ whose sum is periodic, but whose periods are linearly independent over $\mathbb{Q}$.
• June 29th 2009, 05:12 PM
pedrosorio
Quote:
Originally Posted by Bruno J.
Ok so far.
How so? If $T$ is the period of $m(x)+n(x)$, then $m(x)+n(x)=m(x+T)+n(x+T)$; but that does NOT imply $m(x)=m(x+T)$ and $n(x)=n(x+T)$. You are making a huge leap here.
If you do not use the continuity of $m,n$ your proof is certainly flawed because we can construct non-continuous $m,n$ whose sum is periodic, but whose periods are linearly independent over $\mathbb{Q}$.
Oops (Surprised) I'll try it later...
• September 24th 2009, 11:22 AM
Bruno J.
Bump. Nobody has solved this one yet... last call! I'll give the solution soon.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 40, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9342576861381531, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/1765/can-i-construct-a-zero-knowledge-proof-that-i-solved-a-project-euler-problem
|
# Can I construct a zero-knowledge proof that I solved a Project Euler problem?
Is there a practical method, and if so what is the method, to reveal that I have the following type of answer but conceal the answer itself? The answer is, let's say, the solution to a Project Euler problem.
That is all. However, I'm new here. I'll try to explain why I hope this question will be considered appropriate. The explanation is a bit lengthy, maybe too defensive!
The question concerns implementing a specific information security system. To make that clear, I'll pose a birthday problem and the information to conceal will be the answer to that problem. Project Euler doesn't have any birthday problems, but it's not really so different from typical Project Euler problems. The Project Euler organizers believe that preventing widespread publication of their answers is vital to the goal of their project. Many Project Euler participants, including me, believe that it's valuable to convince people that we have solved the problems that we claim we have. (OK, I suppose we can also refer people to the Project Euler leaderboard, but how does anyone know that that is me?)
The specific problem is, what's the least number of people we must plan to select in order to have at least 95% confidence that five of our selected people will have their birthdate (day of the year) in common? Let the problem presuppose the usual sorts of silly assumptions: that birthdates are uniformly distributed throughout the year, no one is ever born on February 29, the world's population is effectively infinite, and we can do perfect random sampling from the world's population. The answer is just some number, possibly a big one.
Now, once I compute the number, can I publish information that proves that I have it without revealing its identity? I realize that there's no asymmetric computation involved. It's OK that others who know my birthday problem can solve it just as I did. It's not OK if others can get the answer from my information without even knowing my birthday problem. Crucially, I'm not sure what to say about people who do know my birthday problem but want to cheat by deriving the answer from my information instead of solving the birthday problem for themselves.
-
This problem does not seem to require zero knowledge, as it could easily be solved using a trusted third party. Zero knowledge is a solution for problems not amenable to trusted third parties. – this.josh Jan 27 '12 at 22:12
Hey, thanks for accepting my answer. I've recently edited it and was wondering if you can use the method to check that I solved your 5-birthday problem correctly? – dr jimbob Jan 27 '12 at 23:10
@poncho: Just a note: When you add other tags, you should also remove the [untagged] tag. (It appears only when all tags are removed, either by migration or by expiry of seldom used tags.) – Paŭlo Ebermann♦ Jan 28 '12 at 16:46
## 1 Answer
Verifiable to whom? Someone else with the correct answer? Then you may be able to get away with a salted bcrypt hash; e.g., you can easily make a cryptographically strong one-way hash of an answer:
````>>> import bcrypt  # using the py-bcrypt python module
>>> hashed_answer = bcrypt.hashpw('[secret answer to problem 283]', bcrypt.gensalt(log_rounds=16))
>>> hashed_answer
'$2a$16$fPitqoBxyVJbxGsWCecweu2Wfrz8KhgnargfA9F//c0ZJ6E4ONwHK'
# the above hash was generated using the actual answer to problem 283
````
Then if you have the answer you can check that I also have the answer with:
````>>> hashed_answer == bcrypt.hashpw('[secret answer to problem 283]', hashed_answer)
True
````
You could also use an online javascript bcrypt calculator, where you type the answer as the password, use my hash as the salt `$2a$16$fPitqoBxyVJbxGsWCecweu2Wfrz8KhgnargfA9F//c0ZJ6E4ONwHK`, and then press run and check that the hash with your answer equals my hash.
You should make sure that the answer isn't brute-forceable by using a suitably strong number of rounds. (In this case, where the answer is a base-10 number 16-20 digits long and it takes my cpu about 4 seconds to generate a hash, brute-forcing the answer would take a single cpu about a billion years.)
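As a quick sketch of the arithmetic behind that billion-year figure (assuming ~4 s per hash and ~10**16 candidate answers, the low end of a 16-20 digit search space):

```python
# Rough back-of-envelope estimate; the candidate count is an assumption.
seconds = 4 * 10**16
years = seconds / (3600 * 24 * 365)
print(f"{years:.2e} years")  # ~1.3e+09 years
```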
Granted people who don't have the answer to 283 cannot verify that you have an answer.
Also, without solving your five-person birthday problem: by the pigeonhole principle it's known that the answer is at most 365*4+1 = 1461; so you must use a very strong hash or add more information, e.g., one that takes days to calculate, to make bruteforcing impractical.
Now if you want to have someone who doesn't know the answer check that you know an answer, you'll have to appeal to a trusted authority. This could be projecteuler.net where a list of users who have solved any individual problem is available. Users could authenticate their answer is correct with project euler (either via checking a hashed answer or directly submitting the plaintext answer to the server) and then project euler could let the world know.
As for linking minopret on project euler to a real-life identity, project euler could decide to do this in a variety of ways. One scheme would have people type an email address into a form and then show the problems that that user has solved (you don't want to just list email addresses, to mitigate potential spam). So if you were applying for a job and wanted to demonstrate your project euler problem-solving skills, you tell your employer the email you use on PE and they could check that you have control of your email address and then see the number of answers you have done correctly.
Aside: I solved your 5-birthday problem (was at work so couldn't solve earlier). As it's a small integer, it probably wouldn't be that difficult to brute force even with an unreasonably large hash, so I would suggest requiring a less brute-forceable answer. Something like: first find the least number of people (N) where the probability of having at least one set of 5 (or more) people with the same birthday is greater than 0.95, and then give the sum of that number + the probability rounded to six decimal digits. For example, if this was the standard birthday problem (two people sharing a birthday with probability more than 0.95) the answer would be 47 + 0.954774 = 47.954774. Then my bcrypt hashed answer to the at-least-5-birthday problem at greater than 95% confidence would be `'$2a$20$ejlHCtR/G.19k9TCTPtsOOkZm8J9kbvwm6zrcJ0q9NNwcun1eOpoS'` (with an ~ minute long hash), which you could check with:
````'$2a$20$ejlHCtR/G.19k9TCTPtsOOkZm8J9kbvwm6zrcJ0q9NNwcun1eOpoS'==bcrypt.hashpw('____.95____', '$2a$20$ejlHCtR/G.19k9TCTPtsOOkZm8J9kbvwm6zrcJ0q9NNwcun1eOpoS')
````
where you replace `'____.95____'` with the actual number (no leading zeros/spaces).
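For readers curious how one might estimate such probabilities without revealing any exact answer, here is a minimal Monte Carlo sketch (mine, not part of the original answer; it deliberately prints nothing):

```python
import random
from collections import Counter

def p_at_least_five_share(n, trials=20000):
    """Estimate P(some birthday occurs >= 5 times among n people),
    under the question's uniform-365-days assumptions."""
    hits = 0
    for _ in range(trials):
        counts = Counter(random.randrange(365) for _ in range(n))
        if max(counts.values()) >= 5:
            hits += 1
    return hits / trials

# Scan n coarsely until the estimate exceeds 0.95, e.g.:
# for n in range(100, 1461, 100): print(n, p_at_least_five_share(n))
```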
-
Actually, by the pigeonhole principle, the answer is less than $4 \times 365 + 1 = 1461$ – poncho Jan 28 '12 at 5:20
@poncho - yes edited obviously was not thinking clearly. (The answer I got is roughly 1/3 of that so that's a much more reasonable limit). – dr jimbob Jan 28 '12 at 6:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9527429938316345, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/math-topics/85687-how-do-you-tell-cricle-formula.html
|
# Thread:
1. ## How do you tell a circle by the formula?
I'm confused about how to tell what shape something will be by its formula.
for example..
I know that x^2+y^2=1 is a circle
and
sqrt(x^2+y^2)=1 is a semicircle
but how do you know what a shape like
exp(-x^2-y^2) would be?
Also are there other types of formulas which include the circle formula, that are common as integration problems?
2. Originally Posted by dankelly07
I'm confused about how to tell what shape something will be by its formula.
for example..
I know that x^2+y^2=1 is a circle
and
sqrt(x^2+y^2)=1 is a semicircle
but how do you know what a shape like
exp(-x^2-y^2) would be? Mr F says: This defines nothing because there is no = in it.
Also are there other types of formulas which include the circle formula, that are common as integration problems?
If your intention is something like $e^{-x^2-y^2} = a$ where $a$ is a constant then you would do the following:
$e^{-x^2-y^2} = a \Rightarrow -x^2 - y^2 = \ln a \Rightarrow x^2 + y^2 = - \ln a$.
This will have no solution if $a > 1$ (why?) or $a \leq 0$ (why?).
It will obviously be a circle if $0 < a < 1$.
If $a = 1$ it defines the point (0, 0).
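A quick numerical check of this derivation, as an illustrative sketch with an assumed value $a = 0.5$:

```python
import math, random

a = 0.5
r = math.sqrt(-math.log(a))  # radius of the level set x^2 + y^2 = -ln(a)
for _ in range(5):
    theta = random.uniform(0, 2 * math.pi)
    x, y = r * math.cos(theta), r * math.sin(theta)
    assert abs(math.exp(-x**2 - y**2) - a) < 1e-12
```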
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9546806812286377, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/terminology+electromagnetism
|
# Tagged Questions
### Is there a more scientific term for “obstruction of EM waves”?
When EM waves pass through things like rain and hail, they can be "obstructed" and bounced back or absorbed, rather than passing through. I'm conducting an experiment on this effect, and wondered if ...
### What name would you give to the method of approximating an arbitrary magnet with many smaller dipoles?
Let's say I had an arbitrarily shaped permanent magnet, with total magnetic moment $M_{0}$. Ways to calculate the magnetic field of this magnet include an analytic solution (if one exists), as well ...
### Reason for the convention about polarization states
I'd like to know if there is a special reason for limiting convention of polarization state to waves that can be split in just two components of equal frequency.
### What does physics study?
Wikipedia definition: Physics (from Ancient Greek: φύσις physis "nature") is a natural science that involves the study of matter[1] and its motion through spacetime, along with related concepts such ...
### What is the electric field part of an EM wave? Radiation field or the induction field?
Look at this image: I wonder if the electric field is from the induction field from a vibrating electron or the radiation field? If it is from the radiation field, as I suppose, then can someone ...
### Work done by the Magnetic Force
The magnetic part of the Lorentz force acts perpendicular to the charge's velocity, and consequently does zero work on it. Can we extrapolate this statement to say that such a nature of the force ...
### What is the difference between electric potential, voltage and electromotive force?
This is a confused part ever since I started learning electricity. What is the difference between voltage and electromotive force (emf)? Both of them have the same SI unit, right? I would appreciate ...
### Is there a name for the derivative of current with respect to time, or the second derivative of charge with respect to time?
This measurement comes up a lot in my E&M class, in regards to inductance and inductors. Is there really no conventional term for this? If not, is there some historical reason for this omission? ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493419528007507, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/66127?sort=oldest
|
## Number of triples of roots (of a simply-laced root system) which sum to zero
In a paper 1105.5073, the authors took a simply-laced root system $\Delta$ of type $G=A,D,E$, and then counted the number of unordered triples $(\alpha,\beta,\gamma)$ of roots which sum to zero: $\alpha+\beta+\gamma=0$.
They found that there are $rh(h-2)/3$ such triples, where $r$ is the rank of $G$ and $h$ is the Coxeter number of $G$.
Of course we can show this by studying the simply-laced root systems one-by-one (which is what the authors did.)
My question is if there is a nicer way to show this without resorting to a case-by-case analysis, using the general property of the Coxeter element, etc.
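As a sanity check, the count is easy to verify by brute force for small cases. Here is a minimal Python sketch of mine (not from the paper) for type $A_n$, where $r = n$, $h = n + 1$, and the roots can be realized as $e_i - e_j$ in $\mathbb{R}^{n+1}$:

```python
from itertools import combinations

def a_n_roots(n):
    """Roots of A_n in R^{n+1}: e_i - e_j for i != j."""
    roots = []
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                v = [0] * (n + 1)
                v[i], v[j] = 1, -1
                roots.append(tuple(v))
    return roots

def zero_sum_triples(roots):
    """Count unordered triples {a, b, c} of roots with a + b + c = 0."""
    return sum(1 for a, b, c in combinations(roots, 3)
               if all(x + y + z == 0 for x, y, z in zip(a, b, c)))

for n in range(2, 6):
    r, h = n, n + 1
    assert zero_sum_triples(a_n_roots(n)) == r * h * (h - 2) // 3
```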
update
In string theory, there are things called D-branes. If $N$ D-branes are put on top of each other, we have $U(N)$ gauge fields on top of it, and there are of order $\sim N^2$ degrees of freedom on it. There are slightly more involved constructions which give us gauge fields with arbitrary simple group $G$. Then the number of degrees of freedom is $\dim G$.
In M-theory, there are things called M5-branes. If $N$ M5-branes are put on top of each other, we do not know what is on top of it. But there is an indirect way to calculate how many degrees of freedom there should be; and the conclusion is that there should be $\sim N^3$ degrees of freedom (hep-th/9808060). It is one of the big unsolved problem in string/M theory to understand what mathematical structure gives rise to this $N^3$ degrees of freedom.
We can call $N$ M5-branes the $SU(N)$ version of the construction. There are slightly more involved constructions, as in the case of D-branes, but it's believed that there are only simply-laced ones. These are the so-called "6d $\mathcal{N}=(2,0)$" theories, which play an important role in the physical approach to the geometric Langlands correspondence, etc. See Witten's review.
There are various stringy arguments why there are only simply-laced variants in 6d, but I like this one by Henningson the best. Anyway, the number of degrees of freedom for the D and E cases was calculated in this one and this one; it turned out to be given by $h_G \dim G /3$. (As there are only simply-laced ones, there's no distinction between the dual Coxeter number and the Coxeter number.)
This product of the dimension and the (dual) Coxeter number was obtained in a very indirect way, and we'd like to know more. Bolognesi and Lee came up with an interesting numerological idea in the paper cited at the beginning of this question. Their idea is to think of $h_G\dim G$ as
$h_G\dim G/3 = hr + h(h-2)r/3 =$ (number of roots) + (number of unordered triples of roots which sum to zero).
They propose that the first term corresponds to "strings" labeled by roots, and the second term to "junctions of three strings". To consistently connect three strings at a point, the roots labeling them need to sum to zero. (They didn't write it this way in their paper; they use more "physical" language. I "translated" it when I made the original question here.) This makes a little more precise the vague idea that three-string junctions are important for M5-branes.
I found this fact on the simply-laced root systems quite intriguing, and therefore I wondered a way to prove it nicely.
-
I am not sure this works, but I think the number of such triples is the number of $A_2$-subsystems in $\Delta$. If you show that each $\alpha\in\Delta$ lies in $h-2$ such subsystems, then the number is $h-2$ times the number of roots (which is $rh$) divided by $3$ (since each subsytem is counted three times). Fix $\alpha\in\Delta$. We can assume $\Delta>0$. For simply-laced systems, I think $\alpha$, $\beta\in\Delta^+$ span an $A_2$ if and only if $\langle\alpha,\beta\rangle\neq0$. We need to count this number and it should equal $2h-4$ (since we will be counting twice each subsystem). – Claudio Gorodski May 26 2011 at 23:54
I meant we can assume $\alpha>0$ on line 4 above. – Claudio Gorodski May 26 2011 at 23:55
Ah, that's a good observation. Let me think about it... – Yuji Tachikawa May 27 2011 at 8:10
Claudio's suggested approach in the first sentence doesn't at first take account of the assumption that the triples are unordered, though that is brought in later on. Anyway, it still seems necessary to study the Dynkin diagrams one by one. Is there a way to avoid the classification, while keeping the simply-laced assumption? (And then there's the question of treating other root systems.) – Jim Humphreys May 27 2011 at 13:12
@Yuji: To follow up my other comment, I don't believe the authors of the paper you cite are talking about unordered triples. And their counting formula arrives at the product of the dimension and the Coxeter number, divided by 3. Note too that the paper refers to the dual Coxeter number, which suggests a broader question for non-simply-laced types where this differs from the Coxeter number. Early papers by Kostant are probably the best source for the use of the Coxeter number in root system questions like this. – Jim Humphreys May 27 2011 at 15:31
## 3 Answers
Assuming all roots have norm 2, this is essentially the same as showing that the number of roots having inner product 1 with a fixed root $\beta$ is $2h-4$, which in turn follows from the property that $\sum_\alpha \frac{(\alpha,\beta)^2}{(\alpha,\alpha)(\beta,\beta)} = h$. This equality is one of many standard properties of $h$, given in Bourbaki ch. V no. 6.2, corollary to theorem 1.
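A small numerical illustration of both the Bourbaki identity and the resulting count $2h-4$, again a sketch of mine for type $A_n$ with all roots of norm 2:

```python
from fractions import Fraction

def a_n_roots(n):
    """Roots of A_n in R^{n+1}: e_i - e_j for i != j."""
    roots = []
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                v = [0] * (n + 1)
                v[i], v[j] = 1, -1
                roots.append(tuple(v))
    return roots

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

for n in range(2, 6):
    roots, h = a_n_roots(n), n + 1
    beta = roots[0]
    assert sum(Fraction(dot(a, beta) ** 2, dot(a, a) * dot(beta, beta))
               for a in roots) == h
    assert sum(1 for a in roots if dot(a, beta) == 1) == 2 * h - 4
```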
-
To compensate for my unfocused earlier comments it may be useful to supplement Richard's efficient answer based on Bourbaki's treatment of Coxeter elements in finite reflection groups. There is a short and largely self-contained 1999 paper by Weiqiang Wang which deals directly with (crystallographic) root systems and dual Coxeter numbers for all simple types. This was inspired partly by Wang's thesis and his joint work with Victor Kac on vertex operator algebras, but is elementary in nature. The link to the paper is here.
Briefly, for the given irreducible root system `$\Delta$` one first fixes a positive system, with `$\rho$` the half-sum of positive roots and `$\theta$` the highest root having normalized inner product `$(\theta, \theta) =2$`. Following Kac, the dual Coxeter number is defined by `$h^\vee = (\rho,\theta)+1$`. (Standard arguments show that for simply-laced types this agrees with the usual Coxeter number `$h$`.) Now a positive root `$\alpha$` is called special if `$\theta-\alpha \in \Delta$`. Then (Lemma 2) the number of special roots is `$2(h^\vee -2)$`. Each defines an ordered triple `$(-\theta, \alpha, \theta-\alpha)$` summing to 0.
The lemma (combined with `$h = h^\vee$` for simply-laced `$\Delta$`) provides another approach to the question here: All roots have length equal to that of `$\theta$` and are conjugate under the Weyl group (`$\Delta$` being irreducible), so each root plays the role of `$\theta$` for some positive system. But the number of roots is `$rh$` as noted already and thus the total number of sets `$\{\alpha,\beta,\gamma\}$` summing to 0 is `$2(h-2)rh/6 = rh(h-2)/3$` because `$6$` ordered triples yield just one such set.
While only the simply-laced root systems seem to have physical significance here, I wonder whether Wang's formulation in terms of `$h^\vee$` has further combinatorial implications. (He used it to compute the dimension of the minimal nilpotent orbit in an associated simple Lie algebra: namely, `$2h^\vee -2$`.)
-
Thank you very much! It's such an honor to have your response! – Yuji Tachikawa May 29 2011 at 2:27
There is another approach via the strange formula of Freudenthal and de Vries, which states that $h^\vee d = 12 \rho^2$, where the Weyl vector $\rho$ is one half of the sum of the positive roots. The long roots have length squared two. Thus $$\frac13 h^\vee d = \sum_{\alpha>0} \alpha^2+ \sum_{\alpha,\beta>0,\alpha\neq\beta}\alpha\cdot\beta$$ For a simply-laced group, the first term on the RHS is the number of roots $hr=d-r$, and the second term on the RHS should be $rh(h-2)/3$. For the non-simply-laced case, there may still be some interesting interpretation of the above decomposition.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9435293674468994, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/tagged/weak-derivatives
|
# Tagged Questions
For question about weak derivatives, a notion which extends the classical notion of derivative and allows us to consider derivatives of distributions rather than functions.
### Laplace transform of the derivative of the Dirac delta function
$$\int_{0}^{\infty} \delta'(t) e^{-st} \, dt = \delta(t)e^{-st} \Big|_{0}^{\infty} + s \int_{0}^{\infty} \delta(t) e^{-st} \, dt = 0 - \lim_{t \to 0} \delta(t)e^{-st} + s e^{-st}\Big|_{t=0} = \cdots$$ ...
### Finding the weak derivative of order $3$ of $f(x)=\operatorname{sgn} \sin(x)$ where $\operatorname{sgn}$ is the sign function
Let $$f(x)=\operatorname{sgn} \sin(x)$$ where $\operatorname{sgn}$ is sign function. I need to find the weak derivative of order 3 for $f(x)$?
### Use difference quotient with non-uniform bound to approximate weak derivative
Suppose U is an open set,not necessarily bounded or has Lipschitz boundary, $f\in L^p(U)$ ,define the difference as usual: $$D^h_i f=\frac{f(x+he_i)-f(x)}{h},\ \ \forall x\in U'\subset\subset U$$ ...
### Why is the weak limit of the derivatives the derivative of the weak limit here?
In [1, chapter 8.2.1.b, p.466] the author uses the following argument: Let $U \subset \mathbb{R}^N$ be an open, bounded domain with smooth boundary. Given a bounded sequence \$(u_k)_{k \in ...
### Weakly differentiable but classically nowhere differentiable
Is there any example of a function which is weakly differentiable but none of its versions are classically differentiable (or differentiable only on a set of measure 0) ? Thanks
### Weak Differentiability of Hölder functions
Is it true that every Hölder function is weakly differentiable? If not please give a counterexample. Thanks
### about weak derivative ( Sobolev Spaces )
Is the following affirmation true? Consider $\Omega$ a bounded and smooth domain. Let $u \in W^{1,p} ( \Omega)$ ($p>1$). Suppose that $u \geq 0$. Let $\alpha >1$. Then $\nabla u ^{\alpha} = ...
### A weak chainrule [Urbano, Intrinsic Scaling]
Hey, I'm reading the book on intrinsic scaling by Urbano and there is a certain issue I have problems with. Essentially the problem is the following. Let $\Omega\subset \mathbb{R}^n$ be a bounded ...
### Derivative in the sense of distributions
I have a question regarding calculating the derivative in the distribution sense of the following function: $$f(x) = \frac{d^2 }{d x^2}|\cos|x||$$ Maybe someone can point me in the right ...
### Weak convergence in $W^{1,q}, 1<q<\infty$
Let $(x_n)$ be weakly convergent to $x$ in the Sobolev space $W^{1,q},1<q<\infty$. Now I have to show that $(\dot{x_n})$ converges weakly to $\dot{x}$ in $L^q$. (With the point I ...
### Prove that $f$ does not have a weak derivative
Consider a function $f:\mathbb{R} \rightarrow [0,1 ]$ defined by: \$\begin{equation*} f(x)=\left\{ \begin{array}{rl}0 & \text{if } x\leq 0,\\ 1 & \text{if } x\geq 1, \\ 1/2 & \text{if } ...
### Distributional/weak time derivative basic question
Suppose we have $u \in L^2(0,T;H^1(\Omega))$, and $v \in L^2(0,T;H^{-1}(\Omega))$ is the weak time derivative of $u$, so by definition it satisfies $$\int_0^T u(t)\phi'(t) = -\int_0^T v(t)\phi(t)$$ ...
### When is the weak derivative just the strong (or classical) derivative?
When is the weak derivative just the strong (or classical) derivative? For instance, can we prove that a weak derivative $Du\in C^\alpha$ (or $C^0$) implies $u\in C^{1,\alpha}$ (or $C^1$)?
### Weak derivative of $\operatorname{sgn}(x_1)$
Let $x\in \mathbb{R}^{n}, x = (x_1,\ldots,x_n)$, and $f(x) = \operatorname{sgn}(x_{1})$. Is $f$ weakly differentiable on $U = B(0,1)$, i.e. unit ball in $\mathbb{R}^{n}$, and what is the weak ...
### Quick question on definition of derivative in the sense of distribution
Consider a function f such that on $(-\infty,0)$ and $(0,\infty)$, f is differentiable. At 0, there is a point of discontinuity. e.g. $f(x) = 0$ for $x\leq 0$ and $f(x)=x$ for $x>0$ Then if we ...
### Easy question on derivative in the sense of distribution
I would like help proving this elementary result: Let $f\in L^{1}_{loc}(a,b)$. Let $x_0 \in (a,b)$ Let $F(x)=\int^{x}_{x_0} f$. Then $F'=f$ in the sense of distributions. i.e How do I show ...
### Consistency of derivative definitions in Sobolev spaces
Just for the sake of completeness, I begin defining the Sobolev space $H^m(\mathbb{R}^n), \; m \in \mathbb{N}$, as the following set: \$H^m(\mathbb{R}^n) = \{u \in L^2 : P^{\alpha} F u \in L^2,\; ...
### Weak derivatives
How can I prove that $|\nabla u|=|\nabla|u||$ when $u$ is regular enough, for example Lipschitz or $W^{1,1}_{loc}$? Another question is about the pointwise derivative when $f:[0,1]\to R$ is BV: is that ...
### Weak derivative
Let $u \in C(\Omega)$ be a function with weak derivative $Du \in C(\Omega)^n$. How does one prove that $Du$ coincides with the classical derivative? Is the mean value theorem for integration ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8976531624794006, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/74014?sort=votes
|
## What’s a magical theorem in logic?
Some theorems are magical: their hypotheses are easy to meet, and when invoked (as lemmas) in the midst of an otherwise routine proof, they deliver the desired conclusion more or less straightaway—like pulling a rabbit from a hat. Here are five examples. Some are from outside of logic, but each is often useful within logic.
Baire Category Theorem. In any completely metrizable topological space, each nonempty open set is nonmeager.
Gödel's Diagonal Lemma. If a theory $T$ relatively interprets Robinson's arithmetic, then for each first order formula $\varphi(x,v_1,\dots,v_n)$ in the language of $T$, there is a $\psi(v_1,\dots,v_n)$ such that $T$ proves the sentence $\forall v_1 \dots \forall v_n[\psi(v_1,\dots,v_n) \leftrightarrow \varphi(\overline\psi,v_1,\dots,v_n)]$, where $\overline\psi$ is the code of $\psi$.
König's Tree Lemma. Every finitely splitting tree of infinite height has an infinite branch.
Knaster–Tarski Theorem. Every monotone nondecreasing operator on the powerset of a set has a fixed point.
Mostowski Collapsing Lemma. If $E$ is a well founded, set like, and extensional binary class relation on a class $M$, then there is a unique transitive class $N$ such that $(M, E)$ is isomorphic to $(N, \in)$.
Let's list other magical theorems that every logician can wield. Students among us will thereby learn of useful results that might otherwise escape their attention until much later. (There is a related question here, but it and most of its answers don't focus on theorems useful in logic.) Please treat only one theorem per answer, and write as many answers as you like. Don't just link to Wikipedia or whatever; give a pithy statement. If possible, keep it informal. Bonus if the theorem isn't well known, or if you show it in action.
-
Would those who vote to close please comment on their reasons? – Cole Leahy Aug 30 2011 at 20:01
We used to allow a lot of big-list questions, but now I think the community consensus is moving away from them. It would help if the question contained a clear, direct goal (e.g. "I'd like to collect a list of examples of X for a presentation I'm giving / a class I'm teaching"). – Qiaochu Yuan Aug 30 2011 at 22:48
I actually think completely the opposite. Contrary to "what is your favorite this and that," the magic criterion of the first paragraph is very specific. And most answers have an example or a context where the magic trick is used. – François G. Dorais♦ Aug 31 2011 at 17:40
My interpretation of magic was basically: a very handy routine black box. I see that the routine part wasn't exactly in Cole's description, but I thought it was implicit. Otherwise, I agree that the question is basically meaningless. – François G. Dorais♦ Aug 31 2011 at 18:36
The way the answers have panned out, it is as if the goal was to have a list of powerful technical principles in logic. Whatever the merits of the question, the answers have been really great, so in that sense I'm glad the question was asked and kept open! – Todd Trimble Oct 23 2011 at 21:06
## 15 Answers
The Compactness Theorem
Funny that no one mentioned it so far. I find the Compactness Theorem magical, actually:
A first order theory $T$ has a model if and only if every finite subset of $T$ has a model.
This lets you derive the finite form of Ramsey's theorem from the infinite form. That is magic. For most applications of this kind, the Compactness Theorem for propositional calculus is actually enough.
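To spell the trick out, here is the standard compactness argument, sketched in my own wording rather than the original answerer's:

```latex
% Finite Ramsey from infinite Ramsey via propositional compactness.
Fix $k, m$ and suppose finite Ramsey fails for these parameters: for every $n$
there is a $2$-colouring of $[n]^k$ with no monochromatic $m$-element set.
Introduce a propositional variable $p_e$ for each $k$-element set
$e \subseteq \mathbb{N}$ (``$e$ is red''), and let $T$ assert, for each
$m$-element $M \subseteq \mathbb{N}$, that not all $k$-subsets of $M$ receive
the same colour. Every finite subset of $T$ mentions only $k$-sets inside
some $[n]$, so a bad colouring of $[n]^k$ satisfies it. By compactness $T$
has a model, i.e.\ a $2$-colouring of $[\mathbb{N}]^k$ with no monochromatic
$m$-set, contradicting infinite Ramsey.
```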
Compactness for first-order logic and propositional logic are actually equivalent (over ZF). In fact, there is a rather large collection of equivalent results:
• Boolean Prime Ideal Theorem
• The Ultrafilter Lemma
• The Stone Representation Theorem
• The Tychonoff Theorem for compact Hausdorff spaces
The Compactness Theorem is strictly weaker than the Axiom of Choice, but it is not provable in plain ZF. This is often used to show that certain results are weaker than the Axiom of Choice. For example, it can be used to show that the existence of non-measurable sets is weaker than the Axiom of Choice (by an old result of Sierpiński).
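Here is the Ramsey application sketched out (a standard argument, added here for the students mentioned in the question): suppose the finite form of Ramsey's theorem fails for parameters $n, k, m$, so that for every $r$ some $k$-coloring of the $n$-element subsets of $\{1,\dots,r\}$ has no homogeneous set of size $m$. Take a propositional variable $x_{e,c}$ for each $n$-element $e \subseteq \mathbb{N}$ and each color $c < k$, with axioms saying that each $e$ receives exactly one color and that no $m$-element set is homogeneous. Every finite subset of these axioms mentions only finitely many natural numbers, hence is satisfied by a suitable bad coloring of a large enough finite set; by compactness the whole set is satisfiable, giving a $k$-coloring of the $n$-element subsets of $\mathbb{N}$ with no homogeneous set of size $m$ at all, contradicting the infinite form.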
-
The compactness theorem always seemed to me to be "magical" in another sense, in that it allows one to seemingly defy the "laws of nature" by producing nonstandard models. – Adam Bjorndahl Aug 30 2011 at 12:57
I have a fondness for the compactness theorem without being able to immediately give particular reasons for it. I remember seeing the derivation of the finite form of Ramsey's theorem from the infinite one in Boolos & Jeffrey's Computability and Logic, and it did seem remarkable. In the statement of the theorem above, the "only if" part is such a complete triviality that it seems distracting to include it at all. It's almost as if something not worth mentioning is being given equal billing with something that is amazing to behold. – Michael Hardy Aug 30 2011 at 21:41
I think the fact that an elementary topos has finite colimits is magical. The definition of topos only asks for finite limits, but abracadabra! The magician pulls finite colimits from the hat.
(What's that I hear you say? You don't think topos theory is a part of modern logic? Really? Truly? Well, you know where the "down" button is.)
-
Tom, perhaps the fact that the power-set functor is monadic would fit in here? I think that is probably where the real magic is... – François G. Dorais♦ Aug 30 2011 at 11:48
François, I can't disagree with you, though personally my feelings about the monadicity of the power-set functor are more like "mysterious" than "magical". But the proof (by Paré) of the existence of finite colimits via the monadicity of the power-set functor is, to my mind, quite beautiful. – Tom Leinster Aug 30 2011 at 17:04
By the way, is there a proof that a topos has coproducts done entirely in the internal logic of a topos? I have never seen one, but it should be doable (and quite readable). – Andrej Bauer Oct 24 2011 at 17:35
The Shoenfield Absoluteness Theorem
$\Pi^1_2$ and $\Sigma^1_2$ statements are absolute for transitive models of set theory.
Thus, a $\Pi^1_2$ statement is true (in $V$) if and only if it is true in $L$ (the constructible universe). Since $L$ satisfies the Axiom of Choice, every true $\Pi^1_2$ statement is consistent with the Axiom of Choice. This fact is often used to show that assuming the Axiom of Choice is completely harmless in mathematical practice. Examples of $\Pi^1_2$ theorems include
• The (classical) Brouwer Fixed Point Theorem
• The separable Hahn-Banach Theorem
• Ascoli's Theorem
• The existence of algebraic closures for countable fields
• The existence of prime or maximal ideals for countable rings
• König's Tree Theorem
• Ramsey's Theorem
• The Completeness Theorem for countable first-order languages
and many, many more...
The Shoenfield Absoluteness Theorem is often used in conjunction with forcing. Indeed, if a $\Pi^1_2$ or $\Sigma^1_2$ statement can be forced in an extension, then it must have already been true in the ground model. Other absoluteness results are used in this way too. For example, the original proof of the Baumgartner-Hajnal Theorem (see here and here) combines forcing and the absoluteness of well-founded relations.
-
The Low Basis Theorem
You mentioned König's Tree Lemma in the question. There is a very useful refinement that is common in computability theory and related areas:
Every computable infinite subtree of $\{0,1\}^{<\infty}$ has an infinite branch with a low Turing degree.
A set $A$ has low Turing degree if the halting problem relative to $A$ has the same Turing degree as the (unrelativized) halting problem. In particular, a set that has low Turing degree is incomplete (does not compute the halting set). Thus, the Low Basis Theorem is often used to show that a particular problem has a solution which has strictly lower complexity than the halting problem.
Another interesting feature of the Low Basis Theorem is that it is iterable. Indeed, the theorem relativizes very easily. Since being low relative to a low degree is the same as being plainly low, the theorem can be applied multiple times in a row to achieve the same outcome.
Also note that it is very important that the tree be a subtree of $\{0,1\}^{<\infty}$ (or, more generally, that the tree is computably bounded). Indeed, there are computable finitely branching subtrees of $\mathbb{N}^{<\infty}$ all of whose infinite branches compute the halting set. However, the Kreisel Basis Theorem guarantees that every computable finitely branching subtree of $\mathbb{N}^{<\infty}$ with infinite height has an infinite branch which is computable from the halting set.
An interesting application of the Low Basis Theorem is that Peano Arithmetic (PA), Zermelo-Fraenkel set theory (ZF), and all other consistent axiomatizable theories have completions that have low degree, and hence do not compute the halting set.
-
This is just the sort of answer I hoped for. – Cole Leahy Aug 30 2011 at 1:37
Cut elimination shows that if a sentence is provable in first-order logic, it is provable with a particularly nice type of proof in a sequent calculus without the "cut" rule, which is essentially modus ponens in that system. In particular these proofs have the subformula property – every formula in the entire proof is a subformula of the formula being proved.
The cut elimination theorem and its generalizations are key tools in proof theory. Gentzen proved cut elimination in 1934 and used it as part of his consistency proof of Peano arithmetic; there is a nice survey article "The art of ordinal analysis" by Michael Rathjen in Proc. ICM 2006.
The cut elimination theorem can be used to give nice proofs of the Craig interpolation theorem and other theorems from logic; one exposition is Chapter 6 of "Logic for Computer Science" by Jean Gallier.
-
Good example! Cut-elimination is indeed amazingly powerful, and has many applications. Suitably formulated, it can be used to prove coherence theorems in category theory, to give one example. – Todd Trimble Oct 23 2011 at 20:49
Kleene's recursion theorem says, informally, that when we write a program for a computable function we may assume that the program already has access to its own source code.
More formally, the theorem says that if $f\colon \mathbb{N} \to \mathbb{N}$ is a total computable function (which we view as a method that constructs a program $f(e)$ from a program $e$) there is some program $e_0$ such that the computable function with program $e_0$ is the same function as the function with program $f(e_0)$.
A trivial application: if $f(e)$ is a program that simply outputs $e$ and stops, the program $e_0$ outputs its own source code.
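Here is that trivial application made concrete, as a minimal sketch in Python (my addition, not from the original answer):

```python
# A classic quine kernel: when run, the two lines below print themselves
# exactly, which is the self-reproducing program the recursion theorem
# guarantees must exist.
s = 's = %r\nprint(s %% s)'
print(s % s)
```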
One of the magical applications of the recursion theorem is the lemma on effective transfinite recursion in hyperarithmetical theory, which is one of the key tools in that setting.
-
It bears mention that the recursion theorem doesn't yield a fixed point of $f$: the programs $e_0$ and $f(e_0)$ needn't be identical, though they do compute the same function. – Cole Leahy Aug 30 2011 at 19:00
The Truth Lemma
The result says that what's true in a forcing extension $M[G]$ is just what's forced to be true by the path of the generic filter $G$. More precisely:
Suppose $M$ is a countable transitive model of $\text{ZFC}$, $\mathbb{P} \in M$ a forcing poset, $\varphi(x_1, \dots, x_n)$ a set theoretical formula, and $\tau_1, \dots, \tau_n$ a sequence of $\mathbb{P}$-names in $M$. If a filter $G$ on $\mathbb{P}$ meets each dense subset of $\mathbb{P}$ in $M$, then $\varphi(\tau_1^G, \dots, \tau_n^G)$ holds in $M[G]$ iff $G$ hits a $p$ that forces $\varphi(\tau_1, \dots, \tau_n)$.
In fact, $M$ just needs to satisfy a rich enough finite subtheory of $\text{ZFC}$. (How rich depends on $\varphi$.) Therefore the existence of suitable $M$ follows from the Reflection Theorem, which can be proved in $\text{ZFC}$ without assuming that $\text{ZFC}$ has a countable transitive model.
With a result known as the Definability Lemma, the Truth Lemma implies that every partial order forces $\text{ZFC}$. That is, $\text{ZFC}$ holds in every $M[G]$—no matter the $M$, $\mathbb{P}$, or $G$. This is key to showing that $M[G]$ is always the smallest transitive extension of $M$ satisfying $\text{ZFC}$ and containing $G$ as an element. Thus, in diverse circumstances one may build the partial order $\mathbb{P}$ of attempts to construct a desired object, argue that the object exists in any extension of $M$ containing a filter $G$ as in the lemma, and thereby obtain the actual existence of the desired object in a model where the axioms of ZFC still hold. Magic!
-
This is probably a good place to add the Truth Lemma. What do you think? – François G. Dorais♦ Aug 30 2011 at 11:33
I gave it a shot. Feel free to edit or revert. – Cole Leahy Aug 30 2011 at 20:52
Ramsey's Theorem
The theorem has two forms.
Finite form. For all nonzero $n, k, m \in \mathbb{N}$ there is a nonzero $r \in \mathbb{N}$ such that, for each set $R$ of size $r$ and each $k$-coloring of the $n$-element subsets of $R$, there is a homogeneous set of size $m$.
Infinite form. For all nonzero $n, k \in \mathbb{N}$, each infinite set $W$, and each $k$-coloring of the $n$-element subsets of $W$, there is an infinite homogeneous set.
Frank Ramsey's original paper was titled On a problem of formal logic, so it should be no surprise that the famous theorem and its generalizations have magical applications in logic. For example, Ramsey's Theorem, the Erdős-Rado Theorem, and their more esoteric extensions, are often used in model theory to establish the existence of indiscernible elements.
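As a tiny computational illustration of the finite form (my addition; the function name is mine): $R(3,3)=6$, i.e. every 2-coloring of the edges of $K_6$ contains a monochromatic triangle, while the 5-cycle coloring of $K_5$ contains none.

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring maps each edge (a, b) with a < b of K_n to a color 0 or 1
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

# Every one of the 2^15 edge-colorings of K_6 has a monochromatic triangle...
edges6 = list(combinations(range(6), 2))
assert all(has_mono_triangle(6, dict(zip(edges6, bits)))
           for bits in product((0, 1), repeat=len(edges6)))

# ...but coloring edges of K_5 by "adjacent on the pentagon or not" avoids one.
c5 = {(a, b): 0 if (b - a) % 5 in (1, 4) else 1
      for a, b in combinations(range(5), 2)}
assert not has_mono_triangle(5, c5)
```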
-
Birkhoff's HSP Theorem
I liked the preservation theorems personally. Birkhoff's HSP theorem, which identifies the model classes of equational theories as precisely those classes (known in universal algebra as varieties) closed under homomorphic images (H), (isomorphic copies of) subalgebras (S), and (isomorphic copies of) products (P), is the one I remember most easily. There are other versions (e.g. for quasivarieties) as well. I hope this is magical enough for you.
Edit: François asked me to include my comment within Gerhard's answer. I'll try to be brief. Just as algebraic theories can be described à la Lawvere as certain categories $T$ with finite products, and models of $T$ as finite product preserving functors $T \to Set$, it is of interest to consider "left exact theories" (aka "essentially algebraic theories") which are categories $T$ with finite limits, whose models are finitely continuous functors $T \to Set$. The theories of posets and of categories are examples of essentially algebraic theories.
It is known which categories are categories of models of some essentially algebraic theory, and this is the content of Gabriel-Ulmer duality. There is in fact a precise (bicategorical) contravariant equivalence between the bicategory of finitely complete categories, and the bicategory of locally finitely presentable categories. The concept of filtered colimit plays a crucial role in this development.
A sample theorem within this general theory which extrapolates Birkhoff's Variety Theorem is that a class of structures over a multi-sorted functional signature is a finitary quasi-variety (i.e., definable by Horn clauses over equational predicates) if and only if it is closed under products, subobjects, and filtered colimits within the category of all structures. For a reference, see Adámek and Rosicky, Locally Presentable and Accessible Categories, theorem 3.22.
Todd "I'm Happy to Oblige" Trimble
-
A good example might be nice... Groups? – François G. Dorais♦ Aug 30 2011 at 11:53
OK. Omitting some technical detail, here are a few examples of such classes: commutative semigroups, groups of exponent 7, some classes of near-rings, the modules over a given fixed ring R, join-meet lattices, algebras with a single ternary operation representing majority, the one-element algebras of many classes, Heyting algebras, ... (and, yes, groups) . A text I like to recommend for more examples and information is "Algebras, Lattices, Varieties" by R. McKenzie, G. McNulty, and W. Taylor. Gerhard "Ask Me About System Design" Paseman, 2011.08.30 – Gerhard Paseman Aug 30 2011 at 16:57
I think it's magical enough, and it has powerful generalizations, as in Gabriel-Ulmer duality, and the theory of locally presentable categories. – Todd Trimble Oct 23 2011 at 20:51
Todd, could you include that in your edit of this cw post? That would be fantastic... Thanks! – François G. Dorais♦ Oct 24 2011 at 0:32
Thank you Todd! – François G. Dorais♦ Oct 24 2011 at 17:06
The Guaspari–Lindström Interpretability Theorem
For the sake of intelligibility, the statement here isn't as general as it could be.
Suppose $S$ and $T$ are consistent recursive extensions of $\text{ZFC}$. Then $T$ interprets $S$ iff $T$ proves each $\Pi_1$ consequence of $S$.
Set theorists use this to gauge the consistency strength of "natural" extensions of $\text{ZFC}$, which are (as far as people can tell) prewellordered under the relation of interpretability. In other words, the natural extensions form a well ordered hierarchy of degrees of interpretability. This is amazing, since it's easy to construct natural extensions that seem to have nothing in common—say, $\text{ZFC + }$"projective determinacy" and $\text{ZFC + }$"a supercompact cardinal exists."
[Someone with expertise, please edit and expand.]
-
The Montague–Lévy Reflection Theorem
The theorem applies to all cumulative hierarchies, but let's focus on the special case of the $V_\alpha$ hierarchy.
Let $\varphi_1, \dots, \varphi_n$ be set theoretical formulas in at most $k$ variables. $\text{ZF}$ proves that for all ordinals $\alpha$ there is a $\beta > \alpha$ such that, if $b_1, \dots, b_k$ are in the set $V_\beta$, then $V_\beta \vDash \varphi_i[b_1, \dots, b_k]$ just in case $\varphi_i[b_1, \dots, b_k]$ is really true.
In other words, $V_\beta$ is a $(\varphi_1, \dots, \varphi_n)$-elementary substructure of the universe $V$. (By the Second Incompleteness Theorem, if $\text{ZF}$ is consistent, then it can't prove that a full elementary substructure of $V$ exists.) Hence you can't characterize the universe: there are always arbitrarily large rank initial segments that behave just like the universe, with respect to whatever condition you invent.
The Reflection Theorem is very useful, for instance in proving that $\text{ZF}$ isn't finitely axiomatizable unless it's inconsistent. For if $T$ is a finite axiomatization, then $\text{ZF}$ proves by reflection that $T$ has a set model, and hence (since $T$ axiomatizes $\text{ZF}$) so does $T$. By the Second Incompleteness Theorem, $T$ is inconsistent. Hence, so is $\text{ZF}$.
-
Some theorems are magical: their hypotheses are easy to meet, and when invoked (as lemmas) in the midst of an otherwise routine proof, they deliver the desired conclusion more or less straightaway—like pulling a rabbit from a hat.
Perhaps to meet the sheer poetic requirement, the Well-Ordering Theorem, which states that "any set can be well-ordered", remains hands-down any meta-logician's top answer. From a historical perspective it took the logical house by storm (with the seemingly counter-intuitive offshoot of the Banach-Tarski paradox at first), resulting from the Axiom of Choice (AC), as mentioned by John Bell in the SEP chronology:
1904/1908 Zermelo introduces axioms of set theory, explicitly formulates AC and uses it to prove the well-ordering theorem, thereby raising a storm of controversy.
-
The fundamental theorem of Ehrenfeucht-Fraisse games. It seems innocuous (because of the game formulation?) but the applications never stop. It provides a natural way to prove a property is not expressible in a logic. It is one of the tools of classical model theory that is also used in finite model theory. In the related notions of bisimulation and simulation, we have a cornerstone concept of concurrency theory and program verification.
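A standard illustration, added here for concreteness (it is not part of the original answer): Duplicator wins the $n$-round game on any two finite linear orders each of size at least $2^n$, so

$$L \equiv_n L' \quad \text{whenever } |L|, |L'| \geq 2^n,$$

and since sentences of quantifier rank $n$ cannot distinguish $\equiv_n$-equivalent structures, no first-order sentence defines "even size" on the class of finite linear orders.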
-
The Feferman-Vaught theorem is a fairly magical result in logic since it allows one to figure out the truth value of a first order formula in a product of structures in terms of the individual structures and the type of product used. More generally, the Feferman-Vaught technique can be used to analyze complex structures based on their components. There are different versions of the Feferman-Vaught theorem, so I should say that the Feferman-Vaught theorem is a class of theorems or a technique rather than an individual result. This technique was developed by Feferman and Vaught in the well-known paper The First Order Properties of Products of Algebraic Systems. The following result is a version of the Feferman-Vaught theorem for reduced products. For this result, we will need the following definition. If $\psi(x_{1},\dots,x_{n})$ is a formula and $\prod_{i\in I}\mathcal{A}_{i}/Z$ is a reduced product, then for $[f_{1}],\dots,[f_{n}]\in\prod_{i\in I}\mathcal{A}_{i}/Z$, let $\|\psi([f_{1}],\dots,[f_{n}])\|=\{i\in I\mid\mathcal{A}_{i}\models\psi(f_{1}(i),\dots,f_{n}(i))\}/Z$. In other words, $\|\psi([f_{1}],\dots,[f_{n}])\|\in P(I)/Z$ is the Boolean value of $\psi([f_{1}],\dots,[f_{n}])$.
For each formula $\phi(x_{1},...,x_{n})$ there is a sequence of formulas $(\theta;\psi_{1},...,\psi_{m})$ such that
• the formulas $\psi_{i}$ have at most the variables $x_{1},...,x_{n}$ free
• $\theta=\theta(y_{1},...,y_{m})$ is a formula in the language of Boolean algebras and
• If $I$ is a set, $Z$ is a filter on $I$, and $\mathcal{A}_{i}$ is a first order structure for each $i\in I$, then for $[f_{1}],\dots,[f_{n}]\in\prod_{i\in I}\mathcal{A}_{i}/Z$, we have $\prod_{i\in I}\mathcal{A}_{i}/Z\models\phi([f_{1}],\dots,[f_{n}])$ if and only if $P(I)/Z\models\theta(\|\psi_{1}([f_{1}],\dots,[f_{n}])\|,\dots,\|\psi_{m}([f_{1}],\dots,[f_{n}])\|)$.
The above result also holds for the limit reduced power and the Boolean product (see the paper Sheaf Constructions and Their Elementary Properties for more details on this result). An immediate consequence of this result is that reduced products preserve elementary equivalence. In particular, direct products, direct powers, and reduced powers all preserve elementary equivalence and elementary embeddings. I posted this result since I recently used an application of a version of the Feferman-Vaught theorem to produce a result about ultrapowers and limit reduced powers.
-
This example is for me sheer wizardry: given a class of relational structures of the same type, all with finite underlying universes and including one with a one-element universe, such a class has unique cancellation as well as unique $k$th roots ($A\times C \cong B\times C$ implies $A \cong B$, and $A^{k} \cong B^{k}$ implies $A \cong B$). Lovász came up with these results in the 1960s, and for me the only explanation of the magic is that he came up with the proof first and then the statements later. I hope that kind of magic is allowed for an answer.
http://www.physicsforums.com/showthread.php?p=3935727
## Accelerated Expansion from Negative Λ
I think this is one of those papers that will have millions of citations
http://arxiv.org/abs/1205.3807
Accelerated Expansion from Negative Λ
James B. Hartle, S. W. Hawking, Thomas Hertog
(Submitted on 16 May 2012)
Wave functions specifying a quantum state of the universe must satisfy the constraints of general relativity, in particular the Wheeler-DeWitt equation (WDWE). We show for a wide class of models with non-zero cosmological constant that solutions of the WDWE exhibit a universal semiclassical asymptotic structure for large spatial volumes. A consequence of this asymptotic structure is that a wave function in a gravitational theory with a negative cosmological constant can predict an ensemble of asymptotically classical histories which expand with a positive effective cosmological constant. This raises the possibility that even fundamental theories with a negative cosmological constant can be consistent with our low-energy observations of a classical, accelerating universe. We illustrate this general framework with the specific example of the no-boundary wave function in its holographic form. The implications of these results for model building in string cosmology are discussed.
Yes. This might just be the adrenaline needle getting plunged into the heart of string theory.
Quote by bapowell Yes. This might just be the adrenaline needle getting plunged into the heart of string theory.
Good image!
Thanks for posting this, MTd2. This summer at Strings 2012, Andrew Strominger is giving a talk about progress in dS/CFT correspondence. With this, that may not be necessary.
Quote by Mark M Thanks for posting this, MTd2. This summer at Strings 2012, Andrew Strominger is giving a talk about progress in dS/CFT correspondence. With this, that may not be necessary.
in case anyone is interested and hasn't seen it, here is the list of titles of Strings 2012 talks (including Strominger's) that have been announced so far:
http://wwwth.mpp.mpg.de/members/stri...ram/talks.html
Quote by Mark M With this, that may not be necessary.
It seems there is a duality between AdS/CFT and dS/CFT. Maybe it is a huge advance on both fronts.
For those who do not wish to read the entire paper, here is an excerpt from the conclusion in the 'Summary' section:
Given this general framework, the argument proceeds as follows: The universal semiclassical asymptotic wave functions in theories with a negative cosmological constant describe two classes of real asymptotic histories - asymptotically Euclidean AdS for boundary metrics with one signature and Lorentzian de Sitter for metrics with the opposite signature. Assuming boundaries with spherical topology, the classicality condition can be satisfied only for the asymptotically de Sitter histories. Therefore negative $\Lambda$ theories can be consistent with our observations of classical accelerated expansion.
(Bold added for emphasis.)
Interesting, to say the least ... Thanks MTd2 for bringing this up
Quote by MTd2 I think this is one of those papers that will have millions of citations http://arxiv.org/abs/1205.3807 Accelerated Expansion from Negative Λ James B. Hartle, S. W. Hawking, Thomas Hertog (Submitted on 16 May 2012)
For somebody who has had a degenerative "terminal illness" for 40 years Hawking publishes a lot.
I may be able to convey something of how this paper works. It combines AdS/CFT with Hartle and Hawking's "no-boundary proposal" for the wavefunction of the universe.

First, visualize AdS/CFT as a solid standing cylinder and the no-boundary proposal as a solid sphere. The vertical direction in the cylinder is time, the solid interior is the AdS space, the surface of the cylinder is the CFT. The meaning of AdS/CFT is that quantum processes in the interior of the cylinder can be mapped to quantum processes taking place on the surface of the cylinder. As for the no-boundary proposal, that is a way to obtain amplitudes for a state of the universe by summing over Euclidean histories which end in that state and which have no other boundaries. So in the solid sphere, the surface represents the end-state, and the interior represents a typical history contributing to the path integral. In theory, you could say that the time direction runs from the surface into the center, so the sphere consists of previous states of the universe in concentric shells, but in practice, you're in Euclidean signature, so you're using "histories" in which there is no real time direction.

I have described AdS/CFT in terms of a cylinder with time going up or along the cylinder, but you can do Euclidean AdS/CFT too. In that case, rather than a cylinder, you have a sphere, because both on the boundary and in the interior, you don't have space and time, you just have space. So here we see a convergence. To calculate the amplitude for a particular state of the universe, you sum over Euclidean 4-geometries ending on a 3-sphere boundary with specified values of the metric and other fields. And Euclidean AdS/CFT involves an equivalence between such a sum, and a dual sum defined just on the 3-boundary, in CFT language. So AdS/CFT offers a way to do the no-boundary calculation for cosmologies with a negative cosmological constant.

But how do they get solutions with a positive cosmological constant? Basically, by complexifying the variables in the path integral - allowing e.g. the metric to take on complex values away from the final surface. So some quantities which classically were just real can now be pure imaginary, and if you square them, you get a quantity which is real but of the opposite sign to what was classically possible. That's it, more or less, though the details are complicated. Hundreds of AdS/CFT dual pairs have been identified, and certain conditions (described in the paper) have to be met if they are to be cosmologically relevant, so yes, this will certainly lead to yet another line of research in cosmology.

In fact, six months ago I was rather excited about a particular AdS/CFT pair, because I found out that, just before the first superstring revolution, Murray Gell-Mann had tried to obtain the standard model from the AdS side (as a compactification of d=11 supergravity, which we now know as the low-energy limit of M-theory). Many years later, the CFT dual of that AdS theory was constructed, and I thought that if it could be uplifted to de Sitter space, we might get to describe the real world. I'm not so excited about it now because I understand particle physics much better now, and Gell-Mann's construction has only an impressionistic resemblance to the standard model (though Hermann Nicolai, for one, remains intrigued by it).
Nonetheless, I'm finding the details of the current paper (Hartle et al) so comprehensible, that I'll be tempted to go all the way, and at least see whether the "Gell-Mann model" meets the criteria for inflation and late-time acceleration, when evaluated via a holographic no-boundary wavefunction. At the same time, I have major reservations about a lot of the formal manipulations that are carried out in quantum cosmology. Euclidean space, complexified metrics, no time evolution; if this truly is a description of reality, it needs some sort of radical ontological reinterpretation, e.g. as a twistorial Bohmian mechanics (I mention twistors because of the complex variables, and Bohm because the path integral is approximated by Hamilton-Jacobi trajectories).
Mitchell, it seems that the proposal is deeper than what you are saying. For example, that sphere is also identified with a de Sitter space. So, in the end you have a dS/CFT - AdS/CFT correspondence. That is extremely impressive because dS seems too hard to deal with, though I don't know the details of why that is. But one thing I am sure of: the most general holographic model, the one from Bousso, always assumes a dS space as the asymptotic behavior of the model. So, in a way, this new paper is a concrete and direct realization of a holographic string-theoretic model for cosmology.
A commenter at Lubos's blog found a sign error. Lubos thought it might invalidate the paper but wasn't sure and mailed Hartle. Version 2 of the paper was just submitted, Lubos and his reader are acknowledged, and the error is called a typo. So the paper stands. People need to start trying the idea on their favorite AdS/CFT dual pairs - N=4 YM, ABJM, Witten's moonshine dual for AdS3 pure gravity... I want to see it applied to the Vasiliev theory for which a dS/CFT extension of the duality was recently constructed, because then we have two approaches to dS for that theory, which can be compared.
I vaguely remember that Witten found that the moonshine dual for the black hole was wrong. Is this what happened?
Quote by mitchell porter I may be able to convey something of how this paper works. It combines AdS/CFT with Hartle and Hawking's "no-boundary proposal" for the wavefunction of the universe. ... At the same time, I have major reservations about a lot of the formal manipulations that are carried out in quantum cosmology. Euclidean space, complexified metrics, no time evolution; if this truly is a description of reality, it needs some sort of radical ontological reinterpretation, e.g. as a twistorial Bohmian mechanics (I mention twistors because of the complex variables, and Bohm because the path integral is approximated by Hamilton-Jacobi trajectories).
Certainly a pretentious affirmation for someone like me, but I agree with the main idea developed in that paper. Furthermore it can be connected with a toy model giving the ontological reinterpretation you are looking for. Consider a small piece of vacuum-made tubular string extending under the double influence of polarization tensions at the extremities and gravitation everywhere else. Then you get equations like (2.11) in the reference. Details can be read on my home page since 2009. My reservation is that the toy model actually applies to particles with imaginary (complex) energies.
Does anyone know how solid the results from this paper are? I've talked to multiple people who are convinced that it involves mixing up fundamental physics with conventions. One would expect such an important result to get a lot of attention, but somehow I'm missing it.
Quote by haushofer Does anyone know how solid the results from this paper are? I've talked to multiple people who are convinced that it involves mixing up fundamental physics with conventions. One would expect such an important result to get a lot of attention, but somehow I'm missing it.
I don't know the answer to your question. Google (today) affirms that the paper was cited 5 times.
Purely to increase the knowledge on this topic: perhaps some useful basics can be found on the Scholarpedia website (cosmological constant).
http://mathoverflow.net/questions/9523/recursively-dependent-types/10077
## Recursively dependent types?
Is there such a thing as "recursively dependent types"? Specifically, I would like a dependent type theory containing a type $A(x)$ which depends on a variable $x: A(z)$, where $z$ is a particular constant of type $A(z)$.
This may be more "impredicative" than some type-theorists would like, but from the perspective of semantics in locally cartesian closed categories, I can't see any reason it would be a problem: the type $A$ comes with a display map to $A_z$, while $A_z$ itself is the pullback of this display map along a particular morphism $z \colon 1\to A_z$. But I want to know whether a corresponding syntax exists.
-
## 2 Answers
If $z$ is a constant, it's completely unproblematic, but it's troublesome if $z$ is a variable. Here's a simple example: suppose $A$ is a type operator of kind $\mathbb{N} \to \star$, defined as follows:
$\begin{array}{lcl} A(z) & = & \mathbb{N} \\ A(n + 1) & = & \mathbb{N} \times A(n) \end{array}$
Then it's obviously the case that $z : A(z)$.
If, on the other hand, $z$ is a variable, then you'll run into the difficulty that the standard well-formedness rule for contexts says that for $\Gamma, x:A$ to be a well-formed context, we need $\Gamma \vdash A$ -- that is, $A$ should be well-formed with respect to the variables in $\Gamma$.
Now, there's nothing semantically wrong about adding fixed point operators at every kind. That is, have type operators with kinds like $\mu : (\star \to \star) \to \star$, or $\mu' : ((\star \to \star) \to (\star \to \star)) \to \star \to \star$ or $\mu'' : ((\mathbb{N} \to \star) \to (\mathbb{N} \to \star)) \to \mathbb{N} \to \star$. These kinds of dependency, where the dependent index can vary at every level of a data structure, are very useful when programming in type theory. For example, we might define the type of lists of type $A$, indexed by length, in the following way:
$\begin{array}{lcl} nil & : & list(z) \\ cons & : & \forall n:\mathbb{N}.\; A \to list(n) \to list(n+1) \end{array}$
So here, $list$ is the least fixed point of a type operator of kind $(\mathbb{N} \to \star) \to (\mathbb{N} \to \star)$. However, most type theories avoid adding the generic operator (like $\mu''$ above) in favor of only permitting inductive types as primitive definitions.
This is partly for philosophical reasons, and partly for pragmatic reasons involving not wanting to require supplying a well-ordering at each elimination of a recursive type. This is necessary to avoid being able to use a recursive type like $\mu \alpha.\; \alpha \to \alpha$ to introduce general recursion and inconsistency into the type theory. However, this complicates typechecking quite a bit -- if you're not very careful, you can lose decidability of typechecking. (In particular, in an inconsistent context, you can cook up a bogus well-order using the local contradiction, and then use that to tip a conversion rule based on blind $\beta$-reduction into an infinite loop. This is not a problem for consistency, but it can annoy users.)
If you're okay with impredicativity, I don't think there are any semantic issues related to consistency, though.
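For concreteness, here is the length-indexed list example above rendered as an inductive family in Lean 4 (my rendering, with the hypothetical name `Vec`; it is not from the original answer):

```lean
-- Length-indexed lists: the least fixed point of a type operator of kind
-- (Nat → Type) → (Nat → Type), presented as an inductive family.
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0
  | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)
```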
-
Thanks for the very detailed answer! I am still digesting the stuff about fixed-point operators, but I don't think it's quite relevant for what I wanted, since my indexing type $A(z)$ isn't something like $\mathbb{N}$ on which a type could be defined by recursion anyway. – Mike Shulman Dec 22 2009 at 23:25
But I think that your first couple of paragraphs give me the answer I want, because I really do want $z$ to be a constant. I had assumed that even in order for $z$ to be a declared constant of type $B$, the type $B$ would have to be a well-formed type in some context---which I guess $A(z)$ is since $z$ is a constant and all, but I had somehow felt that $B$ ought to be well-formed "before" $z$ is declared. But I guess that sort of "before" is not even applicable here. – Mike Shulman Dec 22 2009 at 23:26
Yeah, the same constants can participate in many types. The easiest example is with sums -- the left injection into a sum $inl(-)$ is a constant of type $A \to A + B$ for all types $A$ and $B$. – Neel Krishnaswami Dec 23 2009 at 12:55
I don't see why that's important---I only want $z$ to have one type, namely $A(z)$. – Mike Shulman Dec 23 2009 at 19:38
Mike, if you consider that locally cartesian closed categories provide the canonical semantics for dependent type theories then you may as well just use sets, for which any function $p:A\to X$ provides a dependent type $A[x]=\{a \mid p(a)=x\}$.
Not only is this a very dull notion of dependent type, but it gives no account of the way in which $A[x]$ might depend "continuously" on $x$, something that we probably need to understand in order to give a meaning to the word "recursive".
(Local, relative or ordinary) cartesian closure is needed to interpret function- or Pi-types, which do not feature in your question. The appropriate arena is a category with some finite limits and something infinitary to capture the recursion.
A class of display maps is a class of morphisms that is closed under (composition with isomorphisms and) pullback against arbitrary maps in the category.
This categorical notion is equivalent to that of a dependent type theory in the basic algebraic sense, ie with types, terms, equations and structural rules. As I believe you are more comfortable with a categorical language, you can solve your problem in that setting and then use the equivalence to reformulate it symbolically.
In particular, the class of display maps includes:
- all isomorphisms iff the type theory includes singleton dependent types;
- composites iff the type theory has Sigma types;
- inclusions of diagonals, and hence all maps, iff the type theory has equality types;
- and relative cartesian closure corresponds to Pi types.
I had originally interpreted your "recursively dependent" types to mean an infinite chain of dependencies, and hence of display maps. For that you would want the class of displays to be closed under cofiltered limits.
Neel, on the other hand, read it as a fixed point equation, which we can interpret categorically as the fixed point of a functor.
Unsurprisingly, domain theory would be a useful setting in which to look for models of these situations. Indeed my PhD thesis introduced classes of display maps in order to study dependent types in domain theory, and you might like to look at the last chapter for investigations of appropriate notions of displays of domains.
For the theory of display maps and their equivalence with dependent types, my thesis was completely superseded by Chapter VIII of my book, "Practical Foundations of Mathematics" (CUP 1999).
For the interpretation of dependent types in domain theory, Martin Hyland and Andrew Pitts gave a comprehensive account in their paper The Theory of Constructions: Categorical Semantics and Topos-Theoretic Models in Categories in Computer Science and Logic edited by John Gray and Andre Scedrov, AMS Contemporary Mathematics 92 (1989).
-
Yes, certainly. I was just using "display map" to mean "the projection from a dependent type to its context," but I can see that that might give the wrong impression. I don't think the point or the question depends on whether the whole category is lcc or not. (Also, just as an MO usage point, I think this would have been more appropriate as a "comment" than as an "answer.") – Mike Shulman Dec 30 2009 at 6:46
http://mathoverflow.net/questions/57616/
## [automatic continuity] measurable homomorphisms of (C,+)-->(C,+) or (C,+)-->(C,*) are continuous and admit an explicit description?
I am interested in generalisations of the following fact [known as automatic continuity, as has been pointed out below]. I am especially looking for references to papers dating back to the 1920s---I feel that questions like these were well studied when people were still interested in set-theoretic aspects of analysis...
(Cauchy) Any measurable automorphism $(\mathbb{R},+)\longrightarrow(\mathbb{R},+)$ is necessarily a linear function, and any measurable homomorphism $(\mathbb{R},+)\longrightarrow(\mathbb{R},\times)$ is necessarily an exponential function $e^{ax}$. Is something similar true for homomorphisms of complex numbers $f:(\mathbb{C},+)\longrightarrow(\mathbb{C},+)$ or $f:(\mathbb{C},+)\longrightarrow(\mathbb{C},\times)$ (the latter assuming $\ker f = \mathbb{Z}$)? (Yes, see answers below.)
That is, I am interested in facts which follow the following rough pattern:
If a map is not set-theoretically wild, e.g. measurable or Borel, and satisfies some identities, then it is in fact continuous, and, further, can be given by an explicit formula.
-
## 2 Answers
The general phenomenon you're looking for is called automatic continuity. This is the line of research that stemmed from results regarding the continuity of homomorphisms from (R,+) to (R,+). The two principal general results illustrating the phenomenon of automatic continuity are the following.
Theorem (S. Banach). Any Baire measurable homomorphism between Polish groups is continuous.
Theorem (A. Weil). Any Haar measurable homomorphism from a locally compact Polish group into a Polish group is continuous.
Both of these theorems apply to the locally compact Polish group (C,+).
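To recover the explicit formulas asked for, a standard closing step (added here for completeness): once an additive $f : (\mathbb{R},+) \to (\mathbb{R},+)$ is known to be continuous, additivity gives $f(qx) = q\,f(x)$ for every rational $q$, so by density of $\mathbb{Q}$

$$f(x) = f(1)\,x \quad \text{for all } x \in \mathbb{R}.$$

The same argument shows that a continuous additive $f : (\mathbb{C},+) \to (\mathbb{C},+)$ is $\mathbb{R}$-linear, hence of the form $f(z) = az + b\bar{z}$, and the multiplicative cases follow by composing with the exponential.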
Automatic continuity is still an active area of research, see this recent survey by Christian Rosendal.
-
I apologize for the significant overlap between your answer and mine. I hadn't seen your post until submitting my answer. – Theo Buehler Mar 7 2011 at 0:54
Thanks for pointing out that the first theorem is due to Banach for Polish groups. Pettis generalized the result to a larger class of groups in his classic 1950 paper On continuity and openness of homomorphisms in topological groups. – François G. Dorais♦ Mar 7 2011 at 1:02
Thank you for your answer. The theorem of Weil does solve my particular question about homomorphisms $(\mathbb{C},+)\longrightarrow(\mathbb{C},\times)$. Although it would be nice to have a reference to the original paper, perhaps found in the book by Dales on automatic continuity. – mmm Mar 7 2011 at 1:13
@mmm: I'm pretty sure Weil's theorem is in his 1940 book L'intégration dans les groupes topologiques et ses applications, Actualités Scientifiques et Industrielles, 869, Paris: Hermann. – Theo Buehler Mar 7 2011 at 1:23
On page 23 of his 1932 book Sur la théorie des opérations linéaires Banach proves:
Theorem. A Baire measurable homomorphism between Polish groups is continuous.
Note that Banach writes $+$ for the group operation, but he neither assumes nor uses that the group is abelian. He proves the result first for Borel measurable homomorphisms and remarks immediately afterwards that the same argument even shows that a Baire measurable homomorphism between Polish groups is continuous, a fact usually attributed to Pettis for reasons that are unclear to me.
Recall that a topological group is Polish if its topology is second countable and metrizable with respect to a complete metric. The Baire $\sigma$-algebra is the $\sigma$-algebra generated by the Borel sets and the meager sets (beware that there are other uses of the term "Baire measurable" in the literature -- there are even published papers whose authors fell for this trap).
In particular, the specific question you ask about the real and complex numbers has a positive answer in all cases.
There is an entire industry, called automatic continuity, asking the question whether homomorphisms/derivations etc are continuous only due to their algebraic property. The easiest result in that direction states that a $*$-homomorphism between $C^{\ast}$-algebras is a linear contraction. One of the major players of that topic, H.G. Dales, has recently written a voluminous book of the same title, containing many results of that spirit and many historical remarks.
-
Thank you for your answer, especially the reference to H.G.Dales, Banach algebras and automatic continuity, 2000. I shall look whether it has a reference to the particular question about homomorphisms (C,+)-->(C,*). – mmm Mar 7 2011 at 1:12
mmm: Yes, that's the book. I'm not sure whether that specific question is addressed there. I doubt it. – Theo Buehler Mar 7 2011 at 1:16
http://mathhelpforum.com/calculus/116799-tangent-normal-lines-print.html
# Tangent and Normal Lines
• November 25th 2009, 10:49 PM
mj.alawami
Tangent and Normal Lines
Question:
Find the equation of the normal lines to the curve $y=x^4$ which is parallel to the line $2x+y=3$
Thank you
• November 25th 2009, 10:58 PM
Arturo_026
First you need to find the derivative:
$y' = 4x^3$
Then, from the equation of the line you are given, you see that the slope of the normal line is -2; and since you want the slope of the tangent, you take the opposite reciprocal of -2, which is m = 1/2.
Now set $y'$ equal to this slope and solve for $x$.
Then with that, go back to the original equation and find y.
And you have all the ingredients necessary to set a line equation of the normals.
• November 26th 2009, 12:53 AM
mj.alawami
Quote:
Originally Posted by Arturo_026
First you need to find the derivative:
$y' = 4x^3$
Then, from the equation of the line you are given, you see that the slope of the normal line is -2; and since you want the slope of the tangent, you take the opposite reciprocal of -2, which is m = 1/2.
Now set $y'$ equal to this slope and solve for $x$.
Then with that, go back to the original equation and find y.
And you have all the ingredients necessary to set a line equation of the normals.
So From what i understood?
$4x^3=\frac{1}{2}$
$x=\frac{1}{2}$
$y-\frac{1}{16}=\frac{1}{2}\left(x-\frac{1}{2}\right)$
$x-2y+\frac{3}{8}=0$
Is my answer correct??
If not please answer in steps
• November 26th 2009, 02:11 AM
mathaddict
Quote:
Originally Posted by mj.alawami
So From what i understood?
$4x^3=\frac{1}{2}$
$x=\frac{1}{2}$
$y-\frac{1}{16}=\frac{1}{2}\left(x-\frac{1}{2}\right)$
$x-2y+\frac{3}{8}=0$
Is my answer correct??
If not please answer in steps
Your answer is partly correct. You have the right values of $x$ and $y$, but the wrong gradient. The gradient of the normal is $-2$, so the equation is:
$y-\frac{1}{16}=-2\left(x - \frac12\right)$
i.e. $16y-1=-32x + 16$
i.e. $32x+16y-17 =0$
Grandad
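A quick check of this result with SymPy (my addition, not part of the original thread):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**4
# The normal is parallel to 2x + y = 3, so its slope is -2 and the
# tangent slope is the negative reciprocal, 1/2.
roots = sp.solve(sp.Eq(sp.diff(f, x), sp.Rational(1, 2)), x)
x0 = [r for r in roots if r.is_real][0]   # x0 = 1/2
y0 = f.subs(x, x0)                        # y0 = 1/16
# Normal line through (x0, y0) with slope -2, cleared of fractions:
print(sp.expand(16*(y - y0) + 32*(x - x0)))   # 32*x + 16*y - 17
```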
http://www.physicsforums.com/showthread.php?t=386497
## Importance of the first 3 primes
Hey guys, I'm a student engineer and have found that the laws we use are primarily based off of stats. This has led me to an interest in number theory and groupings. I was hoping someone could help me with something: I would like to know the relevance of the first 3 primes (2, 3, 5) to math in general. To be more specific, do the first three primes play a significant role in number theory? I feel like I am being vague, so maybe I should ask: does being a multiple of 2, 3 or 5, or a power of one of these three, give a number any special properties?
When it comes to checking for primes, the usual suggested calculator program says to check for 2 and 3--which is very easy, and then run a program using $$6X\pm1$$. That way 2/3 of the integers are eliminated from further testing.
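A minimal sketch of that recipe in Python (my rendering, not from the thread):

```python
def is_prime(n: int) -> bool:
    """Trial division: handle 2 and 3 directly, then test only divisors 6k +/- 1."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    k = 5                      # 5 = 6*1 - 1 is the first candidate divisor
    while k * k <= n:
        if n % k == 0 or n % (k + 2) == 0:
            return False
        k += 6
    return True
```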
It could be worth mentioning that we have a prime number of hands (2) with a prime number of fingers on each (5) and, therefore, count everything in groups of ten.
I suppose if we're taking such naturalist possibilities one could mention that our bodies are based off of the first 3 primes, in particular the number 3:
-30 bones in each arm and leg ($3 \times 2 \times 5$)
-26 vertebrae (though 2 are composites of 5 and 4 fused bones, so in all there are $33 = (2^3 \times 3) + 3^2$)
-24 ribs ($2^3 \times 3$)
-3 fused pelvic bones
:) These are funny facts to know, but they lack the 'therefore' part: do they influence math in general, or number theory in particular?
I don't know how these relations help math per se, but life reflects math, and given the importance of primes to number theory, it is interesting to see that primes are also found in biology, specifically human biology.
Primes in general are fundamental to number theory, and naturally the first primes come up more often: half of the natural numbers are even, but not nearly so many are divisible by, say, 6547.
Tinyboss: "Primes in general are fundamental to number theory, and naturally the first primes come up more often: half of the natural numbers are even, but not nearly so many are divisible by, say, 6547." But even so there are an infinite number of integers divisible by 6547. And they could be put into 1-1 correspondence with numbers divisible by 2. $$6547n\Longleftrightarrow2n$$
Quote by robert Ihnot Tinyboss: Primes in general are fundamental to number theory, and naturally the first primes come up more often: half of the natural numbers are even, but not nearly so many are divisible by, say, 6547. But even so there are an infinite number of integers divisible by 6547. And they could be put into 1-1 correspondence with numbers divisible by 2. $$6547n\Longleftrightarrow2n$$
Yeah, I was playing a little fast and loose there, but what I meant was small integers, the kind like trini mentions, for instance, or the kind that show up as the order of sporadic groups, and so forth. Not that |Z/nZ| depends on n.
Here's my take on it: in a race to infinity, no triplet of positive integers has more multiples than 2, 3 and 5 (not including 1, of course).
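That claim can be made precise with inclusion-exclusion: the density of integers divisible by 2, 3, or 5 is $\frac12+\frac13+\frac15-\frac16-\frac1{10}-\frac1{15}+\frac1{30}=\frac{11}{15}$. A brute-force check (my addition; it needs Python 3.9+ for multi-argument math.lcm):

```python
from math import lcm
from itertools import combinations

def density(triple):
    # Inclusion-exclusion for the density of multiples of a, b, or c;
    # the lcm terms correct for integers divisible by more than one of them.
    a, b, c = triple
    return (1/a + 1/b + 1/c
            - 1/lcm(a, b) - 1/lcm(a, c) - 1/lcm(b, c)
            + 1/lcm(a, b, c))

best = max(combinations(range(2, 50), 3), key=density)
print(best, density(best))   # (2, 3, 5) 0.7333...
```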
The square of all primes greater than 5 is either (1 mod 30) or (19 mod 30).
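That is because every prime above 5 is congruent to $\pm 1, \pm 7, \pm 11$, or $\pm 13 \pmod{30}$, and squaring each of these residues gives 1 or 19. A quick empirical check with SymPy (my addition):

```python
from sympy import primerange

# Squares mod 30 of all primes strictly between 5 and 10^4:
assert {p * p % 30 for p in primerange(7, 10_000)} == {1, 19}
```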
http://mathoverflow.net/questions/103252?sort=votes
## Why are injective modules more complicated than projective modules?
For beginners in homological algebra, it is a fact of life that injective modules seem more mysterious than projective modules. For example, for finitely generated modules over a noetherian ring, a projective resolution can be taken to be a resolution by free modules of finite rank, but I don't see how one can easily write down injective resolutions.
I'm wondering if there is a deep reason behind this. What makes injective modules so complicated?
-
I don't quite agree. Injective envelopes occur more easily than projective covers, and Grothendieck abelian categories have injectives but need not have projectives. Some people find it easier to think of projectives because they think of free objects, which behave like vector spaces to some extent. But once you're into the abelian category world, injectives are even easier. – Fernando Muro Jul 27 at 0:55
why is Q more complicated than Z? – roy smith Jul 27 at 1:05
@roy Do you mean why is Q/Z more complicated than Z? I imagine you are thinking that every abelian group has a projective resolution by direct sums of Z, and similarly, every abelian group has an injective resolution by products of Q/Z? – name Jul 27 at 1:36
I myself think $\mathbb Q/\mathbb Z$ is more complicated than $\mathbb Z$. – paul garrett Jul 27 at 1:53
There is, in fact, a classification of sorts for injective modules over a commutative noetherian ring; I believe this is discussed in Maclane's book on homological algebra (but probably in many others as well). The basic idea is this: every $R$-module $M$ has an "injective envelope"; this is an injection of modules $M \hookrightarrow I(M)$ such that every nonzero submodule of $I(M)$ intersects $M$. One can show that $I(M)$ is injective, and that every injective module containing $M$ also contains a (non-unique) copy of $I(M)$. Then an injective module is precisely a direct sum of... – Charles Staats Jul 27 at 15:54
## 2 Answers
Injective modules are of course just projective modules in the opposite category, so it seems to me that the question really is "why is the opposite of a module category more complicated than a module category?" Probably this is because the opposite of a module category is almost never itself a module category (see this MO question). It embeds into a module category by Freyd-Mitchell, but this is quite noncanonical.
For the sake of concreteness, by Pontrjagin duality $\text{Ab}^{op}$ itself is equivalent to the category of compact (Hausdorff) abelian groups (which embeds into $\text{Ab}$ itself but this is not too useful of an embedding for our purposes). An injective abelian group dualizes to a projective object in $\text{Ab}^{op}$, and it is not so straightforward as in a module category to find a projective object here. The simplest nontrivial thing that deserves to be called a free object is the Bohr compactification of $\mathbb{Z}$ (which dualizes to $S^1$ with the discrete topology). The injective abelian group $\mathbb{Q}/\mathbb{Z}$, as a filtered colimit of the groups $\mu_n$, dualizes to a cofiltered limit giving the profinite integers $\hat{\mathbb{Z}}$.
This gives one way to find an injective resolution of an abelian group by following a recipe exactly analogous to the free module recipe, but in $\text{Ab}^{op}$: find a projective resolution of its Pontrjagin dual by products of copies of the Bohr compactification of $\mathbb{Z}$, then dualize it!
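For calibration, here is the simplest instance of what such a recipe has to reproduce (a standard fact, not specific to this answer): an abelian group is injective iff it is divisible, so $\mathbb{Z}$ admits the two-step injective resolution
$$0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Q} \longrightarrow \mathbb{Q}/\mathbb{Z} \longrightarrow 0,$$
which under Pontrjagin duality becomes a resolution of $S^1$ by projective objects of $\text{Ab}^{op}$.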
Edit: Steven Landsburg makes the following comment below:
But I thought that another part of your answer was that in the categories we often choose to look at (module categories) projectives might look simpler, though in the opposite categories it's the injectives that look simpler. That leaves the question of why it's the projectives that are simpler in the categories we're naturally led to look at.
My revised revised answer is that free objects are projective, and free objects are simpler in the categories we're naturally led to look at.
Let $C$ be a category and let $U : C \to \text{Set}$ be a faithful functor. If $U$ has a left adjoint $F$, then $U$ preserves monomorphisms, so the monomorphisms in $C$ are precisely the maps which are injective on underlying sets. Thus to find projective objects it suffices to find objects with the lifting property. Now, for a set $S$, $F(S)$ clearly has the lifting property (hence is projective) because $S$ has the lifting property in $\text{Set}$ (in other words, is projective).
The categories we're naturally led to look at, such as module categories, are usually concretely defined so are equipped with a faithful functor to $\text{Set}$, and the corresponding objects $F(S)$ usually exist and provide a plentiful supply of projectives in $C$. (Without the functors $U$ and $F$ it is not clear why $C$ should have any projective objects whatsoever, and it might not: for an abelian category example, take $C = \text{FinAb}$.)
On the other hand, the opposite of a category, even one equipped with a faithful functor to $\text{Set}$, doesn't itself come equipped with a faithful functor to $\text{Set}$ (instead it comes equipped with a faithful functor to $\text{Set}^{op}$, which is harder to understand) and finding one with a left adjoint (as in the Pontrjagin duality example above) might be difficult.
compact abelian topological groups. But I guess finite groups are interesting too... – David Roberts Jul 27 at 1:35
@David: I don't follow. A finite group is also a compact abelian group with the discrete topology. (But thanks for reminding me I forgot to include Hausdorff.) – Qiaochu Yuan Jul 27 at 1:38
A paraphrase: while reversing arrows is abstractly nothing-at-all, for concrete-ish categories (not necessarily referring, as "concrete" sometimes does, to categories whose objects have meaningful underlying sets) the concrete-ish interpretation of the dual category is typically significantly different. The simplest case is that projective $\mathbb Z$-modules are easily given by $\mathbb Z$ and sums thereof, while injective $\mathbb Z$-modules are trickier both to exhibit, and trickier to prove are truly injective. I suppose in a different universe things might be more symmetrical? – paul garrett Jul 27 at 1:56
@Steven: part of my answer is that I disagree with the premise of the question (if interpreted as "why are injective objects more complicated than projective objects?") because injectivity and projectivity are dual. Injective objects in $\text{Ab}^{op}$ are exactly as complicated as projective objects in $\text{Ab}$. – Qiaochu Yuan Jul 27 at 2:09
Every object of $Set$ is projective iff you assume the axiom of choice. See also arxiv.org/abs/1111.5180, "Are There Enough Injective Sets?", where the authors say that every non-empty set is injective in the category of ZF-sets. – David Roberts Jul 27 at 3:20
I'll give you my more pedestrian reason. The proofs of existence of injective resolutions require the axiom of choice, in one form or another. Translation: these proofs are not constructive, so there are no general algorithms for producing such objects. This becomes a painful issue in concrete situations. This has similarities with another famous existence result, the Hahn-Banach theorem which postulates the existence of continuous linear functionals with certain properties. It is particularly useful for existence theorems for PDE's. Unfortunately it gives you no guide for finding those solutions.
A remark about the existence issue of injective resolutions: Everything comes down to prove that $\mathbb{Q}/\mathbb{Z}$ is injective; the rest of the proof is formal. So there is no problem to write down injective resolutions: Rather it requires AC in order to prove that they are indeed injective resolutions. – Martin Brandenburg Jul 27 at 9:02
Does not the existence of projective resolutions also require the axiom of choice? I used to think that one needs AC in order to show that a free module is projective. – Leonid Positselski Jul 27 at 9:26
But one does not need AC if one only needs to know that finitely generated free modules are projective. And the original question mentions Noetherian rings and finitely generated modules. OK. – Leonid Positselski Jul 27 at 9:30
@Martin: Thanks for the clarifying comments. Algebra is really not my thing. In certain concrete topological problems I had to construct extensions of morphisms whose existence was postulated by the injectivity condition and I had to scratch my head. I felt the same when I tried to produce an extension of a linear functional whose existence was postulated by Hahn-Banach. – Liviu Nicolaescu Jul 27 at 10:24
The axiom of choice is also necessary to show that any vector space is free. – Fernando Muro Jul 31 at 16:00
http://unapologetic.wordpress.com/2011/01/21/standard-polytabloids-span-specht-modules/?like=1&source=post_flair&_wpnonce=c589d5bcbb
# The Unapologetic Mathematician
## Standard Polytabloids Span Specht Modules
We defined the Specht module $S^\lambda$ as the subspace of the Young tabloid module $M^\lambda$ spanned by polytabloids of shape $\lambda$. But these polytabloids are not independent. We’ve seen that standard polytabloids are independent, and it turns out that they also span. That is, they provide an explicit basis for the Specht module $S^\lambda$.
Anyway, first off we can take care of all the $e_t$ where the columns of $t$ don’t increase down the column. Indeed, if $\sigma\in C_t$ stabilizes the columns of $t$, then
$\displaystyle e_{\sigma t}=\sigma e_t=\mathrm{sgn}(\sigma)e_t$
where we’ve used the sign lemma. So any two polytabloids coming from tableaux in the same column equivalence class are scalar multiples of each other. We’ve just cut our spanning set down from one element for each tableau to one for each column equivalence class of tableaux.
To deal with column equivalence classes, start with the tableau $t_0$ that we get by filling in the shape with the numbers $1$ to $n$ in order down the first column, then the second, and so on. This is the maximum element in the column dominance order, and it's standard. Given any other tableau $t$, we assume that every tableau $s$ with $[s]\triangleright[t]$ is already in the span of the standard polytabloids. This is an inductive hypothesis, and the base case is taken care of by the maximum tableau $t_0$.
If $t$ is itself standard, we’re done, since it’s obviously in the span of the standard polytabloids. If not, there must be a row descent — we’ve ruled out column descents already — and so we can pick our Garnir element to write $e_t$ as the sum of a bunch of other polytabloids $e_{\pi t}$, where $[\pi t]\triangleright[t]$ in the column dominance order. But by our inductive hypothesis, all these $e_{\pi t}$ are in the span of the standard polytabloids, and thus $e_t$ is as well.
As an immediate consequence, we conclude that $\dim(S^\lambda)=f^\lambda$, where $f^\lambda$ is the number of standard tableaux of shape $\lambda$. Further, since we know from our decomposition of the left regular representation that each irreducible representation of $S_n$ shows up in $\mathbb{C}[S_n]$ with a multiplicity equal to its dimension, we can write
$\displaystyle\mathbb{C}[S_n]=\bigoplus\limits_{\lambda\vdash n}f^\lambda S^\lambda$
Taking dimensions on both sides we find
$\displaystyle \sum\limits_{\lambda\vdash n}\left(f^\lambda\right)^2=\sum\limits_{\lambda\vdash n}f^\lambda\dim(S^\lambda)=\lvert S_n\rvert=n!$
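As a quick sanity check (an illustration, not from the original post), take $n=3$: the standard tableau counts are $f^{(3)}=1$, $f^{(2,1)}=2$, and $f^{(1,1,1)}=1$, and indeed
$\displaystyle\left(f^{(3)}\right)^2+\left(f^{(2,1)}\right)^2+\left(f^{(1,1,1)}\right)^2=1+4+1=6=3!$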
http://www.physicsforums.com/showthread.php?t=622944
## Integral with Cauchy Principal value
Hi everyone,
I would like to calculate the following integral:
$P\int_0^{\pi}\frac{1}{\cos(x)-a}\,dx$, with $|a|\leq 1$.
The 'P' in front stands for the so-called Cauchy principal value.
Whenever $a$ is not in the specified domain, the integrand does not have a pole and one can do the integration immediately (I had Maple compute it for me).
However, when $a$ is in the above domain, the denominator can become zero and one has to integrate through the pole, hence the P.
But, I don't know how to do this in practice.
They say to just cut out a piece of size delta before and after the pole, do the integration, and then take the proper limits.
But can the same rules be applied as if $a$ were not in this domain? Is it allowed to use the same techniques as without any poles in the integrand?
Otherwise, are there any other techniques, tricks ... ?
Many thanks in advance.
For starters, $$\int_0^{\pi} \frac{d\theta}{\cos(\theta)-a}=1/2 \int_{-\pi}^{\pi}\frac{d\theta}{\cos(\theta)-a}$$ and for $|a|\leq 1$ that always has a pole when a is real. Now what happens if we make the normal substitution with these types of integrals: $z=e^{i\theta}$? Do that and you'll get a closed-contour integral for $|z|=1$ with the pole now always on the contour when a is real. But we can indent around the pole rather than going through it. In that case, the integral over this contour is the part not including the indentation, that's the principal value and the contribution from the indentation depending on which direction you go around it. You can calculate the integral over the entire contour (principal value+indentation) by using the Residue Theorem and you can calculate the part over the indentation by letting the radius of that indentation go to zero and using a theorem which describes the value of an integral over such a limited indentation when it's a simple pole. Get them two, bingo-bango, the principal value is left.
When you do the typical substitution one gets $$\frac{2}{i}\int_{C_1}\frac{dz}{z^2-2za+1}$$ which results in 2 complex poles ($a$ is real) $$z_\pm=a\pm i\sqrt{1-a^2}$$ which are indeed on the contour. One could indeed avoid the poles by, for example, going around them in a small circle but with the poles inside the contour. The residue theorem then says that the contour integral is $2 \pi i$ times the sum of the residues. In this case adding the residues gives zero. So, what I have is $$\frac{2}{i}\int_{C_1}\frac{dz}{z^2-2za+1}=0=P\frac{2}{i}\int_{C_1}\frac{dz}{z^2-2za+1}+\frac{2}{i}\int_{\text{small circles}}\frac{dz}{z^2-2za+1}$$ For the small circles (I now have 2 of them because I have 2 complex poles on the contour), I would parametrize them as follows (for the pole $z_+$) $$z=z_++Re^{i\phi},\quad dz=iRe^{i\phi}d\phi$$ and then try to do the integration and take the limit $R\to 0$. Is this the right track, or not at all...? (I have not convinced myself up to this point, really.) Anyway, as a remark, the result should be real. Is this always the case for a principal value integral, or is it just in this case because we are integrating a real function over a real domain?
When it's a simple pole and you let the radius of the indentation, $\rho$, go to zero, you can use the theorem which states:
$$\lim_{\rho\to 0} \mathop\int\limits_{\text{indentation}} fdz=\gamma i r$$
where $\gamma$ is the radian measure of the indentation and $r$ is the residue at the pole. In your case, $\gamma$ is $\pi$, right? Also, principal value integrals can be real or complex. It's just real in this case because the integral is real.
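Spelling that out for the integral at hand (a check along these lines, not in the original thread): the poles $z_\pm=a\pm i\sqrt{1-a^2}$ of $\frac{1}{z^2-2za+1}$ are simple, with
$$\mathrm{Res}_{z=z_\pm}\frac{1}{z^2-2za+1}=\frac{1}{z_\pm-z_\mp}=\pm\frac{1}{2i\sqrt{1-a^2}}$$
The indented contour encloses no poles, so the full integral vanishes, and since the two residues are negatives of each other the two indentation contributions (each with $\gamma=\pi$) cancel as well. The principal value is therefore forced to be zero.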
As mentioned before: I now have 2 complex poles. When I take $\pi \times i \times$ residue at both and add them I get zero, meaning the principal value would be zero ... That is just twice the same calculation, once for the integration along the whole contour and once for the poles on the contour. This seems strange to me ... I do know that poles lying on the contour count for one half of a pole inside the contour. Also, parametrizing such little circles seems fine to me as long as the pole is indeed on the real axis. But now my 2 poles are complex ...?
Quote by M.B. As mentioned before: I now have 2 complex poles. When I take $\pi \times i \times$ residue at both and add them I get zero, meaning the principal value would be zero ...
What's wrong with zero? What happens if you code the original principal-valued integral into Mathematica? For example, using a=9/10:
Code:
```In[17]:=
NIntegrate[1/(Cos[z] - 9/10), {z, 0, ArcCos[9/10],
Pi}, Method -> "PrincipalValue"]
Out[17]=
-1.1746159600534156*^-13```
There is apparently nothing wrong with zero, but to me it is more logical to use small circles on a straight line (e.g. for integrating functions over the whole real axis) than to parametrize them on a circle, which is our contour in this case. Many thanks!
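For what it's worth, the same number can be checked with a plain symmetric cut-off around the pole, without any principal-value machinery; a sketch in R (with $a=9/10$ as above):
Code:
```a <- 9/10
x0 <- acos(a)                      # pole of 1/(cos(x) - a) in (0, pi)
f <- function(x) 1/(cos(x) - a)
for (eps in c(1e-2, 1e-4, 1e-6)) {
  # integrate up to the pole minus eps, then from the pole plus eps onward
  pv <- integrate(f, 0, x0 - eps, subdivisions = 1000L)$value +
        integrate(f, x0 + eps, pi, subdivisions = 1000L)$value
  cat("eps =", eps, " PV estimate =", pv, "\n")
}
# the estimates tend to 0 as eps shrinks, matching the NIntegrate result```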
http://mathoverflow.net/questions/86016?sort=votes
## Alternative characterization of homotopy equivalence
Using the formalism of model categories it is possible to define the concept of homotopy, as done here.
If we take as our model category $\mathbf{Top}$, with homotopy equivalences as weak equivalences and fibrations and cofibrations defined in the standard topological way, these homotopies are just homotopies as defined in basic courses on algebraic topology.
From this point of view it seems that weak equivalences are what really matter, so here's my question:
Is there any way to characterize homotopy equivalence (in $\mathbf{Top}$) without using the concept of homotopy?
I'm wondering if there's a way to discriminate homotopy equivalences without using the concept of homotopy at all, meaning that I'm looking for a criterion which enables one to say that a certain continuous map $f \colon X \to Y$ is a homotopy equivalence without looking for a morphism $g \colon Y \to X$ and continuous maps $F \colon X \times I \to X$ and $G \colon Y \times I \to Y$ which are homotopies of $g \circ f$ with $1_X$ and of $f \circ g$ with $1_Y$, respectively.
Both weak equivalences and homotopy equivalences are important in abstract homotopy theory. A usual way of constructing the homotopy category of a model category is: first, divide out the homotopy relation (perhapes restricting to fibrant or cofibrant objects), second, calculus of fractions with respect to weak equivalences. – Fernando Muro Jan 18 2012 at 22:29
Have you looked at the MO question mathoverflow.net/questions/51091/… ? Answers there may or may not tell you something that you want to know. – Tom Goodwillie Jan 18 2012 at 22:35
It appears that you really mean to consider $Top$ as a model category with the homotopy equivalences (not the weak homotopy equivalences) as the weak equivalences. Is that right? – Tom Goodwillie Jan 19 2012 at 2:49
@TomGoodwillie: yeah, you're right. The point is, because homotopy equivalences, fibrations and cofibrations seem to be the only things you need to rebuild the whole homotopy theory, I was wondering if there is any way to present this model category structure on $\mathbf{Top}$ without using the concept of homotopy. – Giorgio Mossa Jan 19 2012 at 12:41
## 2 Answers
EDIT: Now that the OP has edited his question to make clearer what he wants as an answer, I'm removing speculation about what he wanted. The answer is: yes, you can characterize homotopy equivalences as the maps which become isomorphisms after applying the localization functor to invert the weak equivalences. This answer doesn't require you to "use the notion of homotopy" because it's part of a much more general framework.
Here is a nice article on localization. One of the best reasons for studying model categories is that they let you get your hands on the maps (weak equivalences) which build homotopy equivalences. Many of the axioms for a model category are there to get around set-theoretic issues that arise when you try to localize a category with respect to an arbitrary class of morphisms. It turns out you need to localize at a class of morphisms which admits a calculus of fractions, and the class $W$ of weak equivalences does. If you read the article, you'll see how to construct $C[W^{-1}]$ in general, and you can then specialize to the case where $C=Top_*$ and $W$ is the class of weak equivalences.
Unfortunately, for computation, this highly general approach is not as good as just using the Path and Cylinder objects mentioned in the nLab article the OP links to. That's why most people who study model categories use those instead of this general framework: because a model category is more than just a category with a nice class $W$--it's a category where you can really get your hands on everything and compute!
I'm sorry but what about a map that is a weak equivalence without being a homotopy equivalence. Remember you are not assuming that the spaces are CW complexes. The Warsaw circle has trivial homotopy groups but is by no way contractible. – Tim Porter Jan 19 2012 at 15:59
@Tim Porter: I'm not claiming that all homotopy equivalences come from weak equivalences, just that the process of localization builds the homotopy equivalences directly from the cofibrations, fibrations, and weak equivalences without reference to path objects, cylinder objects, or the word "homotopy." This is because the homotopy equivalences are the isomorphisms in a category which is obtained by this general theory of localization. You need the cofibrations and fibrations in order to get $W$ to admit the calculus of fractions, but you don't need any distinguished objects – David White Jan 19 2012 at 19:01
I'm afraid I still don't understand what the question is about and how this answer addresses it. In any model category a map is a weak equivalence if and only if it becomes an isomorphism in the localization. If we start with the standard model structure on $\mathrm{Top}$ with weak homotopy equivalences as weak equivalences, then we don't recover actual homotopy equivalences this way. – Karol Szumiło Jan 19 2012 at 20:02
One more thing -- it is not true in general that weak equivalences in a model category satisfy the calculus of fractions. It only becomes true after dividing by the homotopy relation, for which you need to use cylinders or path objects. Think about the classical construction of the derived category of an abelian category. You have to divide by the chain homotopy relation before you can prove that quasi-isomorphisms satisfy the calculus of fractions. – Karol Szumiło Jan 19 2012 at 20:15
@Karol. Your second objection is something I wasn't aware of. I guess this means even using the general theory of localization you can't escape from using cylinder objects. I read the question as seeking more information about how to pass from a model category to the homotopy category, and in particular how to obtain the homotopy equivalences in a different way than is done in classical algebraic topology. That's why I phrased my answer the way I did and pitched it at the level I did. – David White Jan 19 2012 at 21:12
EDIT: This is an old question, but I have stumbled upon it by accident and realized that my answer is wrong. It turns out that genuine homotopy equivalences cannot be characterized in terms of their homotopy fibers. Here's a counterexample. Let $X = \{ 1, 2, 3, \ldots, \infty \}$ with the discrete topology and $Y = \{1, \frac{1}{2}, \frac{1}{3}, \ldots, 0 \}$ with the topology inherited from $\mathbb{R}$. Let $f \colon X \to Y$ be the map given by $f(m) = \frac{1}{m}$ (and $f(\infty) = 0$). Then $f$ has contractible homotopy fibers (in fact they are all one-point spaces), but it is not a homotopy equivalence since the only candidate for a homotopy inverse is not continuous.
It is well-known that a map with all homotopy fibers weakly contractible is a weak equivalence and my mistake was to assume that there is a similar result for homotopy equivalences.
I have to say I don't understand the motivation behind your question. Why exactly do you want to characterize homotopy equivalences without using homotopies? Moreover, I'm not sure the question is well-posed; the answers would probably refer to homotopies implicitly, and it would be a matter of taste whether they count as "not mentioning homotopies". To illustrate my point, here's an attempt at an answer.
First, define the space $X$ to be contractible if it is a retract of its cone (more precisely if the canonical inclusion $X \to C X$ admits a retraction). Then define a map $X \to Y$ to be a homotopy equivalence if its homotopy fiber at every point $y \in Y$ is contractible. The homotopy fiber can be defined as a mapping cocylinder i.e. the pullback $Y^I \times_{Y \times Y} (X \times *)$.
Now, I didn't use the word "homotopy" (except in "homotopy fiber", but I explained how to "go around it"). However, for example the retraction $C X \to X$ is nothing else but a homotopy from $\mathrm{id}_X$ to some constant map. I suspect that you won't be satisfied with this kind of hiding the homotopies backstage. If this is the case you should explain precisely what counts as "not mentioning homotopies".
I've edited the question and added some specification, I hope I made myself more clear. Thanks. – Giorgio Mossa Jan 18 2012 at 21:36
http://architects.dzone.com/articles/natura-non-facit-saltus
# Natura Non Facit Saltus
02.09.2013
"Natura non facit saltus" – nature does not make jumps (see John Wilkins' article on the – interesting – history of that phrase: http://scienceblogs.com/evolvingthoughts/…). We will see several smoothing techniques for insurance ratemaking. As a starting point, assume that we do not want to use segmentation techniques: everyone will pay exactly the same price.
• no segmentation of the premium
And that price should be related to the pure premium, which is proportional to the frequency (or the annualized frequency, as discussed previously), since
$$\mathbb{E}_{\mathbb{P}}\left(\sum_{i=1}^N Y_i\right)=\mathbb{E}_{\mathbb{P}}(N) \cdot \mathbb{E}_{\mathbb{P}}(Y_i)$$
The probability measure is mentioned here just to recall that we can use any measure. Even $\mathbb{P}_{\boldsymbol{X}}$ (based on some covariates). Without any covariate, the expected frequency should be
```> regglm0=glm(nbre~1+offset(log(exposition)),data=sinistres,family=poisson)
> summary(regglm0)
Call:
glm(formula = nbre ~ 1 + offset(log(exposition)), family = poisson,
data = sinistres)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.5033 -0.3719 -0.2588 -0.1376 13.2700
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.6201 0.0228 -114.9 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 12680 on 49999 degrees of freedom
Residual deviance: 12680 on 49999 degrees of freedom
AIC: 16353
Number of Fisher Scoring iterations: 6```
```> exp(coefficients(regglm0))
(Intercept)
0.07279295```
Thus, if we do not want to take into account potential heterogeneity, we should assume that $N\sim\mathcal{P}(\lambda)$, where $\lambda$ is close to 7.28%. Yes, as mentioned in class, it is rather common to see $\lambda$ as a percentage, i.e. a probability, since
$$\mathbb{P}(N\neq 0)=1-e^{-\lambda}\approx \lambda$$
i.e. $\lambda$ can be interpreted as the probability of having at least one claim (see also the law of small numbers).
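As a quick numerical check of that approximation (not in the original post), using the fitted value above:
```> lambda=0.07279295
> 1-exp(-lambda)
[1] 0.07020668```
Let us visualize this as a function of the age of the driver,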
```> a=18:100
> yp=predict(regglm0,newdata=data.frame(ageconducteur=a,exposition=1),type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> k=23
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)```
We do predict the same frequency for all drivers, e.g. for some driver aged 40,
```> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07279295 confidence interval 0.07611196 0.06947393```
Let us now consider the case where we try to take into account heterogeneity, e.g. by age,
• The (standard) Poisson regression
The idea of the (log-)Poisson regression is to assume that instead of having $N\sim\mathcal{P}(\lambda)$, we should have $N|\boldsymbol{X}\sim\mathcal{P}(\lambda_{\boldsymbol{X}})$, where
$$\lambda_{\boldsymbol{X}}=\exp(\beta_0+\beta_1 \boldsymbol{X}_1+\cdots+\beta_k\boldsymbol{X}_k)$$
in a very general setting. Here, let us consider only one explanatory variable, i.e.
$$\lambda_{X}=\exp(\beta_0+\beta_1 X)$$
Here, we have
```> # regglm1 (the one-covariate Poisson fit) is not shown in the extract; presumably:
> regglm1=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson)
> yp=predict(regglm1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)```
i.e. the prediction for the annualized claim frequency for our 40 year old driver is now 7.74% (which is slightly higher than what we had before, 7.28%)
```> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07740574 confidence interval 0.08117512 0.07363636```
It is possible to compute not the expected frequency, but the ratio $\mathbb{E}(N|X)/\mathbb{E}(N)$.
Above the horizontal blue line, the premium will be higher than the one obtained without segmentation, and (of course) lower below. Here, drivers younger than 44 will pay more, while drivers older than 44 will pay less. We have discussed, in the introduction, the necessity of segmentation. If we consider two companies, one segmenting while the other one has a flat rate, then older drivers will go to the first company (since insurance is cheaper) while younger ones will go to the second one (again, it is cheaper). The problem is that the second company implicitly hopes that older drivers will compensate the risk. But since they're gone, insurance will be too cheap, and the company will lose money (if it does not go bankrupt). So companies have to use segmentation techniques to survive. Now, the problem is that we cannot be sure that this exponential decay of the premium is the proper way the premium should evolve as a function of the age. An alternative can be to use nonparametric techniques to visualize the true influence of the age on claims frequency.
• A pure nonparametric model
A first model can be to consider a premium per age. This can be done by considering the age of the driver as a factor in the regression,
```> regglm2=glm(nbre~as.factor(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> yp=predict(regglm2,newdata=data.frame(ageconducteur=a0,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a0,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")```
Here, the forecast for our 40 year old driver is slightly lower than the previous one, but the confidence interval is much larger (since we focus on a very small subclass of the portfolio: drivers aged exactly 40)
`Frequency = 0.06686658 confidence interval 0.08750205 0.0462311`
Here, we consider classes that are too small, and the premium is too erratic: the premium will decrease by about 20% from age 40 to 41, and then increase by more than 50% from age 41 to 42,
```> diff(log(yp0[23:25]))
24 25
-0.2330241 0.5223478```
There is no chance that the company will keep the insured with this strategy. This discontinuity of the premium is clearly an important issue here.
• Using age classes
An alternative can be to consider age classes, from very young drivers to senior drivers.
```> level1=seq(15,105,by=5)
> regglmc1=glm(nbre~cut(ageconducteur,level1)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> summary(regglmc1)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.6036 0.1741 -9.212 < 2e-16 ***
cut(ageconducteur, level1)(20,25] -0.4200 0.1948 -2.157 0.0310 *
cut(ageconducteur, level1)(25,30] -0.9378 0.1903 -4.927 8.33e-07 ***
cut(ageconducteur, level1)(30,35] -1.0030 0.1869 -5.367 8.02e-08 ***
cut(ageconducteur, level1)(35,40] -1.0779 0.1866 -5.776 7.65e-09 ***
cut(ageconducteur, level1)(40,45] -1.0264 0.1858 -5.526 3.28e-08 ***
cut(ageconducteur, level1)(45,50] -0.9978 0.1856 -5.377 7.58e-08 ***
cut(ageconducteur, level1)(50,55] -1.0137 0.1855 -5.464 4.65e-08 ***
cut(ageconducteur, level1)(55,60] -1.2036 0.1939 -6.207 5.40e-10 ***
cut(ageconducteur, level1)(60,65] -1.1411 0.2008 -5.684 1.31e-08 ***
cut(ageconducteur, level1)(65,70] -1.2114 0.2085 -5.811 6.22e-09 ***
cut(ageconducteur, level1)(70,75] -1.3285 0.2210 -6.012 1.83e-09 ***
cut(ageconducteur, level1)(75,80] -0.9814 0.2271 -4.321 1.55e-05 ***
cut(ageconducteur, level1)(80,85] -1.4782 0.3371 -4.385 1.16e-05 ***
cut(ageconducteur, level1)(85,90] -1.2120 0.5294 -2.289 0.0221 *
cut(ageconducteur, level1)(90,95] -0.9728 1.0150 -0.958 0.3379
cut(ageconducteur, level1)(95,100] -11.4694 144.2817 -0.079 0.9366
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> yp=predict(regglmc1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,ylim=c(.03,.12),type="s")
> abline(v=40,col="grey")
> lines(a,yp1,lty=2,type="s")
> lines(a,yp2,lty=2,type="s")```
Here we obtain the following predictions,
and for our 40 year old driver, the frequency is now 6.84%.
`Frequency = 0.0684573 confidence interval 0.07766717 0.05924742`
But our classes were defined arbitrarily here. Perhaps we should consider other classes, to see if the prediction is sensitive to the cutting values,
```> level2=level1-2
> regglmc2=glm(nbre~cut(ageconducteur,level2)+offset(log(exposition)),
+ data=sinistres,family=poisson)```
which yields the following values for our 40 year old driver,
`Frequency = 0.07050614 confidence interval 0.07980422 0.06120807`
So here, we did not remove the discontinuity problem. An idea can be to consider moving regions: if the goal is to predict the frequency for a 40 year old driver, perhaps the class should be (somehow) centered around 40. And centered around 35 for drivers aged 35. Etc.
• Moving average
Thus, it is natural to consider some local regressions, where only drivers aged almost 40 are considered. This almost concept is related to the bandwidth. For instance, drivers between 35 and 45 can be considered as being almost 40. In practice we can either consider a subset function, or we can use weights in the regressions
```> value=40
> h=5
> sinistres$omega=(abs(sinistres$ageconducteur-value)<=h)*1
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)```
To see what’s going on, let us consider an animated plot, where the age of interest is changing,
Here, for our 40 year old driver, we get
`Frequency = 0.06913391 confidence interval 0.07535564 0.06291218`
We do obtain a curve that can be interpreted as a local regression. But here, we do not take into account that 35 is not as close to 40 as 39 is. And here, 34 is assumed to be very far away from 40. Clearly, we could improve that technique: kernel functions can be considered, i.e. the closer to 40, the larger the weight.
```> value=40
> h=5
> sinistres$omega=dnorm(abs(sinistres$ageconducteur-value)/h)
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)```
which can be plotted below
Here, our prediction for our 40 year old driver is
`Frequency = 0.07040464 confidence interval 0.07981521 0.06099408`
This is the idea of kernel regression techniques. But as explained in the slides, other non parametric techniques can be considered, like spline functions.
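The animated plot above is produced by refitting this weighted regression at each target age. To trace the whole kernel-smoothed curve in one pass, one can simply loop over the ages; a sketch, not from the original post, reusing the `sinistres` data, the model and the bandwidth $h=5$ from above:
```ages=18:100
h=5
fit=rep(NA,length(ages))
for(j in 1:length(ages)){
  # Gaussian kernel weights centered at the current target age
  sinistres$omega=dnorm(abs(sinistres$ageconducteur-ages[j])/h)
  reg=glm(nbre~ageconducteur+offset(log(exposition)),
          data=sinistres,family=poisson,weights=omega)
  fit[j]=predict(reg,newdata=data.frame(ageconducteur=ages[j],exposition=1),
                 type="response")
}
plot(ages,fit,type="l",ylim=c(.03,.12))```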
• Smoothing with splines
In R, it is simple to use spline functions (somehow much simpler than kernel smoothers)
```> library(splines)
> regglmbs=glm(nbre~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)```
The prediction for our 40 year old driver is now
`Frequency = 0.06928169 confidence interval 0.07397124 0.06459215`
Note that this technique is related to another class of models, the so-called Generalized Additive Models, i.e. GAMs.
```> library(mgcv)
> reggam=gam(nbre~s(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)```
The prediction is extremely close to the one we obtained above (the main differences being observed for very old drivers)
`Frequency = 0.06912683 confidence interval 0.07501663 0.06323702`
• Comparison of the different models
Somehow, one way or another, all those models are valid. So perhaps we should compare them,
On the graph above, we can visualize the upper and the lower bound of the prediction, for the 9 models. The horizontal line is the predicted value without taking into account heterogeneity. It is possible to consider relative values, with respect to this value,
http://math.stackexchange.com/questions/tagged/axioms+abstract-algebra
# Tagged Questions
1answer
52 views
### how to show associativity of multiplication for not just 3 operands but for n operands
I.e., I'd like to show a(bc)=(ab)c but for any n operands, e.g. abcdefg=gfdcabe etc. I can see this is very intuitive, that this should be true for all n operands, but as a logical exercise I would like to ...
2answers
107 views
### How can we know arithmetical axioms are consistent?
If we assume both distributivity and the opposite of the law of signs (ie, that $-1\times-1 = -1$) for the relative integers, then we can derive that two different numbers are actually equal. ...
1answer
320 views
### Proving $1 > 0$ using only the field axioms and order axioms
How do I prove $1 > 0$ using only field axioms and order axioms? I have tried using the cancellation law, with the identities in a field and I cannot get anywhere. Does anybody have any ...
2answers
74 views
### Is using the - symbol with the Associative Law of multiplication invalid?
I was trying to prove that $-(x + y) = -x - y$ and as you can see in the image below, I took the liberty of using the $-$ symbol as a number and applying the associative law with it. Is it kosher in ...
5answers
228 views
### Axiomatization of $\mathbb{Z}$
Though I've seen several cool axiomatizations of $\mathbb{R}$, I've never seen any at all for $\mathbb{Z}$. My initial guess was that $\mathbb{Z}$ would be an ordered ring which is "weakly" well-ordered ...
4answers
388 views
### Axiomatic approach to polynomials?
I only know the "constructive" definition of $\mathbb K [x]$, via the space of finite sequences in $\mathbb K$. It essentially tells a polynomial is its coefficients. Is there a way to define ...
1answer
197 views
### Which algebra satisfies this?
Do you know if the following set of rules has an algebra more general than the usual complex numbers? Maybe someone can also help me state the rules in some mathematically rigorous (fancy :) ) way so ...
5answers
673 views
### Are $x \cdot 0 = 0$, $x \cdot 1 = x$, and $-(-x) = x$ axioms?
Context: Rings. Are $x \cdot 0 = 0$ and $x \cdot 1 = x$ and $-(-x) = x$ axioms? Arguably three questions in one, but since they all are properties of the multiplication, I'll try my luck...
http://www.cfd-online.com/W/index.php?title=Introduction_to_turbulence/Statistical_analysis/Ensemble_average&oldid=5786
# Introduction to turbulence/Statistical analysis/Ensemble average
## The ensemble and Ensemble Average
### The mean or ensemble average
The concept of an ensebmble average is based upon the existence of independent statistical event. For example, consider a number of inviduals who are simultaneously flipping unbiased coins. If a value of one is assigned to a head and the value of zero to a tail, then the average of the numbers generated is defined as
$X_{N}=\frac{1}{N} \sum^{N}_{n=1} x_{n}$ (2.1)
where the $n$th flip is denoted as $x_{n}$ and $N$ is the total number of flips.
Now if all the coins are the same, it doesn't really matter whether we flip one coin $N$ times, or $N$ coins a single time. The key is that they must all be independent events - meaning the probability of achieving a head or tail in a given flip must be completely independent of what happens in all the other flips. Obviously we can't just flip one coin and count it $N$ times; these clearly would not be independent events.
Unless you had a very unusual experimental result, you probably noticed that the value of the $X_{10}$'s was also a random variable and differed from ensemble to ensemble. Also, the greater the number of flips in the ensemble, the closer you got to $X_{N}=1/2$. Obviously, the bigger $N$, the less fluctuation there is in $X_{N}$.
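This is easy to see in a small simulation (an illustration, not part of the original page); in R, averaging $N$ simulated coin flips:
```set.seed(1)
for (N in c(10, 100, 10000)) {
  flips <- rbinom(N, size = 1, prob = 0.5)   # 1 = head, 0 = tail
  cat("N =", N, "  X_N =", mean(flips), "\n")
}```
The estimates cluster ever more tightly around $1/2$ as $N$ grows, exactly as described above.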
Now imagine that we are trying to establish the nature of a random variable $x$. The $n$th realization of $x$ is denoted as $x_{n}$. The ensemble average of $x$ is denoted as $X$ (or $\left\langle x \right\rangle$ ), and is defined as
$X = \left\langle x \right\rangle \equiv \lim_{N \rightarrow \infty} \frac{1}{N} \sum^{N}_{n=1} x_{n}$ (2.2)
Obviously it is impossible to obtain the ensemble average experimentally, since we can never have an infinite number of independent realizations. The most we can ever obtain is the arithmetic mean for the number of realizations we have. For this reason the arithmetic mean can also be referred to as the estimator for the true mean or ensemble average.
Even though the true mean (or ensemble average) is unobtainable, nonetheless, the idea is still very useful. Most importantly, we can almost always be sure the ensemble average exists, even if we can only estimate what it really is. The fact of its existence, however, does not always mean that it is easy to obtain in practice. All the theoretical deductions in this course will use this ensemble average. Obviously this will mean we have to account for these "statistical differences" between true means and estimates when comparing our theoretical results to actual measurements or computations.
In general, the $x_{n}$ could be realizations of any random variable. The $X$ defined by equation 2.2 represents the ensemble average of it. The quantity $X$ is sometimes referred to as the expected value of the random variable $x$, or even simply its mean.
For example, the velocity vector at a given point in space and time, $(\vec{x},t)$, in a given turbulent flow can be considered to be a random variable, say $u_{i} \left( \vec{x},t \right)$. If there were a large number of identical experiments so that the $u^{\left( n \right)}_{i} \left( \vec{x},t \right)$ in each of them were identically distributed, then the ensemble average of $u^{\left( n \right)}_{i} \left( \vec{x},t \right)$ would be given by
$\left\langle u_{i} \left( \vec{x} , t \right) \right\rangle = U_{i} \left( \vec{x} , t \right) \equiv \lim_{N \rightarrow \infty} \frac{1}{N} \sum^{N}_{n=1} u^{ \left( n \right) }_{i} \left( \vec{x} , t \right)$ (2.3)
Note that this ensemble average, $U_{i} \left( \vec{x},t \right)$, will, in general, vary with the independent variables $\vec{x}$ and $t$. It will be seen later that, under certain conditions, the ensemble average is the same as the average which would be generated by averaging in time. Even when a time average is not meaningful, however, the ensemble average can still be defined; e.g., as in non-stationary or periodic flow. Only ensemble averages will be used in the development of the turbulence equations here unless otherwise stated.
### Fluctuations about the mean
It is often important to know how a random variable is distributed about the mean. For example, figure 2.1 illustrates portions of two random functions of time which have identical means, but are obviously members of different ensembles since the amplitudes of their fluctuations are not distributed the same. It is possible to distinguish between them by examining the statistical properties of the fluctuations about the mean (or simply the fluctuations) defined by:
$x' = x - X$ (2.4)
It is easy to see that the average of the fluctuation is zero, i.e.,
$\left\langle x' \right\rangle = 0$ (2.5)
On the other hand, the ensemble average of the square of the fluctuation is not zero. In fact, it is such an important statistical measure that we give it a special name, the variance, and represent it symbolically by either $var \left[ x \right]$ or $\left\langle \left( x' \right) ^{2} \right\rangle$. The variance is defined as
$var \left[ x \right] \equiv \left\langle \left( x' \right) ^{2} \right\rangle = \left\langle \left[ x - X \right]^{2} \right\rangle = \lim_{N\rightarrow \infty} \frac{1}{N} \sum^{N}_{n=1} \left[ x_{n} - X \right]^{2}$ (2.6)
Note that the variance, like the ensemble average itself, can never really be measured, since it would require an infinite number of members of the ensemble.
It is straightforward to show from equation 2.2 that the variance in equation 2.6 can be written as
$var \left[ x \right] = \left\langle x^{2} \right\rangle - X^{2}$ (2.7)
Thus the variance is the second-moment minus the square of the first-moment (or mean). In this naming convention, the ensemble mean is the first moment.
The variance can also be referred to as the second central moment of $x$. The word central implies that the mean has been subtracted off before squaring and averaging. The reasons for this will be clear below. If two random variables are identically distributed, then they must have the same mean and variance.
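The equivalence of the two expressions for the variance is easy to check numerically on a large (finite) ensemble; an illustration in R, not part of the original page:
```set.seed(2)
x <- rnorm(1e6, mean = 3, sd = 2)    # 10^6 realizations, true variance 4
X <- mean(x)                         # estimator of the ensemble average
c(mean((x - X)^2), mean(x^2) - X^2)  # the two estimates of var[x] agree```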
The variance is closely related to another statistical quantity called the standard deviation or root mean square (rms) value of the random variable $x$, which is denoted by the symbol $\sigma_{x}$. Thus,
$\sigma_{x} \equiv \left( var \left[ x \right] \right)^{1/2}$ (2.8)
or
$\sigma^{2}_{x} = var \left[ x \right]$ (2.9)
### Higher moments
Figure 2.2 illustrates two random functions of time which have the same mean and also the same variance, but clearly they are still quite different. It is useful, therefore, to define higher moments of the distribution to assist in distinguishing these differences.
The $m$-th moment of the random variable is defined as
$\left\langle x^{m} \right\rangle = \lim_{N \rightarrow \infty} \frac{1}{N} \sum^{N}_{n=1} x^{m}_{n}$ (2.10)
It is usually more convenient to work with the central moments defined by:
$\left\langle \left( x' \right)^{m} \right\rangle = \left\langle \left( x-X \right)^{m} \right\rangle = \lim_{N \rightarrow \infty} \frac{1}{N} \sum^{N}_{n=1} \left[x_{n} - X \right]^{m}$ (2.11)
The central moments give direct information on the distribution of the values of the random variable about the mean. It is easy to see that the variance is the second central moment (i.e., $m=2$ ).
http://physics.stackexchange.com/questions/tagged/crystals+solid-state-physics
# Tagged Questions
1answer
33 views
### If my lattice has an atomic basis, do I also find the reciprocals of the basis vectors to get the reciprocal crystal structure?
That is what my crystal structure looks like. The blue atoms sit on every lattice point (basis vector of $\{0,0\}$) and the red atoms have basis vector of $\left\{{2\over3},{1\over3}\right\}$. The ...
0answers
33 views
### Is this 2D structure triclinic?
The only rotation axis obvious to me is rotation by 360 degrees, the identity. Vertical mirror planes: I've been dicing and cutting it through several planes and I still see none. Yet, the structure ...
1answer
38 views
### What would be the basis vectors for this 2D crystal structure?
In the above image, I have a 2D crystal structure. The lattice vectors are described by: a = {-1/2, -Sqrt[3]/2}; b = {1, 0}; and the location of atoms A and B ...
3answers
93 views
### What is the difference between lattice vectors and basis vectors?
Google has not been very useful in this regard. It seems no one has clearly defined terms and Kittel has too little on this.
1answer
43 views
### I can't figure out crystal planes with negative intercepts
As seen above, I don't follow how you figure out those planes. It seems they're not using the origin labeled. I'm not really sure I understand spatially what's going on in the left figure so let's ...
0answers
23 views
### What are some applications of crystal fabrication? [closed]
I have heard of some applications here or there in certain papers, but I am looking for a broader scope of examples.
1answer
38 views
### To which real densities do carrier densities in the semi-classical model of a crystal correspond?
In the semi-classical model of a crystal in solid state physics, electrons and holes are assigned effective masses that account for their different mobilities. E.g. in silicon, holes have a bigger ...
1answer
71 views
### Algorithm for identifying planes in a Bravais Lattice
I have a lattice with Lattice Vectors $(\vec{t}_1,\vec{t}_2,\vec{t}_3)$ which are NOT orthogonal in general. How can I identify the atoms/unit cells that belong to a plane - that is normal to a given ...
1answer
57 views
### Estimate the difference between two sets of atoms
I've been working on amorphous structures derived from a crystalline one (using MD) containing $N$ atoms. I want to prove that these structures are different and to quantify their "differentness". One ...
0answers
63 views
### How to find the translation vectors of a primitive cell when primitive cell encloses multiple atoms? [closed]
How to find the translation vectors of a primitive cell when primitive cell encloses multiple atoms? Your help is appreciated! Thank you!
2answers
187 views
### Crystal visualization software for visualizing lattice with reciprocal vectors drawn in same image [closed]
I'm looking for a free crystal visualization program, preferably for Linux, that can visualize the common lattice structures in 3D interactively (rotatable with mouse) and draw in the same picture the ...
0answers
63 views
### Crystal momentum and the vector potential
I noticed that the Aharonov–Bohm effect describes a phase factor given by $e^{\frac{i}{\hbar}\int_{\partial\gamma}q A_\mu dx^\mu}$. I also recognize that electrons in a periodic potential gain a phase ...
0answers
28 views
### Are there any electro-optic crystals that are also pyroelectric but not birefringent?
As the title says, a crystal that is electr-optic and pyroelectric can it be non-birefrigent?
1answer
101 views
### What is crystal field anisotropy or effect ? It forces the magnetic moment to point in particular local direction..
Can you give a basic explanation of what is crystal field anisotropy ? What is the reason to arise ? In spin ice it forces the dipoles to point in the local 111 direction. For partially filled rare ...
0answers
97 views
### Discrete sum over an exponential with imaginary argument, considering only every second lattice site?
Let's say I sum an exponential function $e^{\imath \left(k-k^{\prime}\right) x_{i}}$ over a chain system where every member of the chain is of the same type, e.g., A-A-A-...-A-A (total of N sites) ...
1answer
197 views
### 4 digit Miller Index for a cubic structure?
As the title states, can a Miller index for a cubic structure have 4 digits? If I have a structure with intercepts (2,8,3) on the x-y-z axes respectively, the following Miller index would be (12,3,8), ...
2answers
536 views
### Madelung constant list (for surfaces as well)
Searching for this on google proved to be quite tedious, but I reckon that someone working with crystals a lot might know this off the top of his head: Is there a good source that lists the Madelung ...
1answer
367 views
### Nature of tetragonal distortion in Jahn-Teller effect
I am wondering: If I have a regular octahedron as my starting point, oriented along the x-y-z axis, and now Jahn-Teller suggest I elongate or compress along the $z$-axis, what happens along the other ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9189596772193909, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/97589/textbook-source-for-finite-group-properties-deducible-from-character-table
|
Textbook source for finite group properties deducible from character table?
Various questions have been posted on MO (some answered, some not) involving the character table of a finite group $G$ over a splitting field such as $\mathbb{C}$ of characteristic 0. My basic question is more about references:
1) Are there textbook or other convenient sources summarizing properties of $G$ which can or can't be deduced from a knowledge of its character table?
By itself this is a rather artificial question, since one doesn't usually know a character table without already knowing a lot about the group, but it provides good exercise material. Recently I was going over some standard theory with a graduate student and came across old notes from a course I taught decades ago, but I can't recall which books I consulted at the time.
Two sorts of information are typically deduced from a character table using the orthogonality relations: (a) numerical data, such as the order of $G$ or more generally all orders of centralizers and hence classes; (b) normal subgroup data, starting with the fact that normal subgroups are intersections of the kernels of characters and then deducing orders and inclusions of such subgroups. In particular, one can pinpoint the center and derived group, as well as determine whether or not $G$ is simple. On the other hand, it's well known that nonisomorphic groups can have the same character table (e.g., the two nonabelian groups of order 8); in particular, the character table can fail to predict the orders of the class representatives labelling columns.
One substantive question of this type which I'm unclear about is this:
2) To what extent does the character table determine properties of `$G$` ranging from solvability to nilpotency?
4 Answers
For nilpotency, you can deduce the character table of $G/Z$ from the character table of $G$. First, determine $Z$. Second, throw out all the representations where $Z$ is not in the kernel. Third, merge the conjugacy classes which have the same trace in every representation. (This works because the irreducible representations of $G/Z$ are exactly the irreducible representations of $G$ with kernel containing $Z$, and because the inverse images of the conjugacy classes of $G/Z$ in $G$ are unions of conjugacy classes of $G$.) Then iterate.
You can deduce solvability as well; you can easily find series of normal subgroups, then the order of their quotients needs to be prime. A good source for a lot of questions like this is Isaacs's Character Theory book; he doesn't discuss all these things in one place, but a lot of them are scattered throughout the text. – Steve D May 21 2012 at 19:15
Sorry, I meant "prime power" order above; if you can find all prime order quotients, the group is supersolvable! – Steve D May 21 2012 at 19:16
I think there is a typo in the first line: you mean "from the character table of $G$", I think. Solvability is similar. You can find all the minimal normal subgroups and their orders from the character table, and if $M$ is any one of them, you can find the character table of $G/M$ from the character table of $G$. If $G$ is solvable, then all such $M$ should have prime power order. On the other hand, if all such $M$ have prime power order, you can work inductively: if $G$ is not solvable, you must find a normal subgroup $N$ so that $G/N$ has a minimal normal subgroup not of prime power order. – Geoff Robinson May 21 2012 at 19:17
Thanks for the quick answer and the illuminating comments, which reinforce what I vaguely recalled should be true (and more). Probably I drew my own notes from Isaacs, since at the time few books went that far into character theory, but I was hoping for a better summary in later texts. I'll wait a bit longer before accepting what's here already as an answer to my questions 1 and 2. – Jim Humphreys May 21 2012 at 20:15
@Geoff: Meanwhile I hope you are taking notes for your forthcoming definitive (but concise) text on finite group characters. Ideally all of this material would be set up as an extended exercise, but at the moment it's complicated to locate all details in the literature. – Jim Humphreys May 21 2012 at 22:16
Lots of properties related to solvability can be deduced from the character table of a group, but perhaps it is worth mentioning one property that definitely cannot be so determined: the derived length of a solvable group. Sandro Mattarei constructed such examples, including examples of $p$-groups with identical character tables but different derived lengths. I think that no examples are known where the difference in derived lengths exceeds 1, however.
Although it is not possible in general to determine from the character table the order of the class representative labeling a column, it is nevertheless possible to determine all the prime divisors of this order. A proof of this fact can be found in [Chapter 22, Theorem 1.1 (vi)] of
Karpilovsky, Gregory. Group representations. Vol. 1, Part B: Introduction to group representations and characters. North-Holland Mathematics Studies, 175. North-Holland Publishing Co., Amsterdam, 1992. MR1183468
Another interesting fact is that given a class representative $g$ it is possible to know whether or not $g$ is a commutator. I have no reference for this at hand, but this is well known to my knowledge.
This is slightly off-topic but, to clarify Anvita's comment: In a finite group $G$, an element $g$ is a commutator if and only if $$\sum\limits_{\chi \in Irr(G)} \frac{\chi(g)}{\chi(1)}\neq 0.$$ This result is stated in the paper by Liebeck, O'Brien, Shalev and Tiep which proves the Ore conjecture; they remark that it follows from a result of Frobenius (see Lemma 2.5 of that paper). – Nick Gill May 23 at 12:50
@Anvita: Do you know where the result quoted by Karpilovsky comes from? His books mostly consisted of compilations from research papers by other people, and I suspect one of these would be the original source. – Jim Humphreys May 24 at 15:41
@Jim: In this book, Karpilovsky gives a proof following "an argument due to Isaacs (1976)", referring to the book I.M. Isaacs, Character Theory of Finite Groups, Academic Press, New York-San Francisco-London, 1976. There is indeed such a proof in Isaacs' book (see the chapter on Brauer's theorem there, Theorem (8.21)), attributed to G. Higman, but no explicit reference given. – Anvita Jun 3 at 22:33
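(To make the commutator criterion quoted in the comments concrete, here is a small check of my own on the symmetric group $S_3$, with its character table hard-coded; this is an illustration I am adding, not part of the original thread.)

```python
# My own sketch: Frobenius's criterion on S3. An element g is a
# commutator iff the sum over irreducible chi of chi(g)/chi(1) is nonzero.
from fractions import Fraction

# Columns: conjugacy classes (identity, transpositions, 3-cycles).
# Rows: trivial, sign, and the 2-dimensional standard representation.
character_table = [
    [1,  1,  1],   # trivial
    [1, -1,  1],   # sign
    [2,  0, -1],   # standard
]
classes = ["identity", "transposition", "3-cycle"]

for j, cls in enumerate(classes):
    s = sum(Fraction(row[j], row[0]) for row in character_table)
    verdict = "commutator" if s != 0 else "not a commutator"
    print(f"{cls}: sum = {s} -> {verdict}")
```

Consistent with the derived subgroup of $S_3$ being $A_3$: the identity and the 3-cycles come out as commutators, the transpositions do not.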
For your reference requests, Marty Isaacs' book Character Theory of Finite Groups (Dover Books on Mathematics) is a classic.
See also comment thread on the accepted answer. – Alex Bartel May 22 2012 at 0:14
@Alex: yes, I should have looked at it first :( – Igor Rivin May 22 2012 at 0:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9407880306243896, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/46801/how-do-electrons-jump-orbitals/46807
|
How do electrons jump orbitals?
My question isn't how they receive the energy to jump, but why. When someone views an element's emission spectrum, we see a line spectrum, which proves that they don't exist outside of their orbitals (else we would see a continuous spectrum).

Electrons can be released in the form of beta decay, thus proving that they are capable of traveling outside of orbitals, contrary to my teacher's statement that they stay within orbitals. Then, to add to the confusion, the older model of rings floating around a nucleus has, from what I can tell, been outdated, which would support this model. My teacher's explanation was that the electrons made a quantum jump of some kind.

How do electrons move between orbitals, and do we know how they jump, excluding the fact that energy causes them to jump? And why are positrons sometimes formed instead of electrons in beta decay? When I ask "how do electrons jump", I would like to know how an electron can jump between orbitals: how it moves, and how it "knows" where to jump, since it appears to be a jump where the electron doesn't slow into an orbital position. Specifically, what is this atomic electron transition? I understand that electrons jump by absorbing and releasing energy, but what is an atomic electron transition beyond what is already in the Wikipedia article http://en.wikipedia.org/wiki/Atomic_electron_transition?
"the older model of rings floating around a nucleus has from what i can tell been outdated" Hmmm ... yes. Outdated for roughly 70 years. Basically your instructor may be working for a number of misconceptions about the nature of election orbitals. Mind you, even in that framework the decay electrons are generally unbound and so would not lie on any of the rings in the first place. – dmckee♦ Dec 14 '12 at 3:12
Beta decay is a nuclear process (specifically a weak process), rather than a chemical one. This said, orbital transitions can occur in inverse beta decay (which is to say, electron capture) however this is only a side effect of an unoccupied core orbital being generated. – Richard Terrett Dec 14 '12 at 4:21
Sorry about the mistype in my bounty: at the beginning I said "how would" when I meant "I would", and I said "there appears to be linear motion" when I meant "there appears to be no linear motion". – Bored915 Dec 16 '12 at 23:29
4 Answers
Imagine an electron a great distance from an atom, with nothing else around. The electron doesn't "know" about the atom. We declare it to have zero energy. Nothing interesting is going on. This is our reference point.
If the electron is moving, but still far from the atom, it has kinetic energy. This is always positive. The electron, still not interacting with the atom, may move as it pleases. It has positive energy, and in any amount possible. Its wave function is a simple running plane wave, or some linear combination of them to make, for example, a spherical wave. Its wavelength, relating to the kinetic energy, may be any value.
When the electron is close to the atom, opposite charges attract, and the electron is said to be stuck in a potential well. It is moving, so has positive (always) kinetic energy, but the Coulomb potential energy is negative and in a greater amount. The electron must slow down if it moves away from the atom, to maintain a constant total energy for the system. It reaches zero velocity (zero kinetic energy) at some finite distance away, although quantum mechanics allows a bit of cheating with an exponentially decreasing wavefunction beyond that distance.
The electron is confined to a small space, a spherical region around the nucleus. That being so, the wavelength of its wavefunction must in a sense "fit" into that space - exactly one, or two, or three, or n, nodes must fit radially and circumferentially. We use the familiar quantum numbers n, l, m. There are discrete energy levels and distinct wavefunctions for each quantum state.
Note that the free positive-energy electron has all of space to roam about in, and therefore does not need to fit any particular number of wavelengths into anything, so has a continuous spectrum of energy levels and three real numbers (the wavevector) to describe its state.
When the atom absorbs a photon and the electron jumps from, let's say, the 2s to a 3p orbital, the electron is not in any orbital during that time. Its wave function can be written as a time-varying mix of the normal orbitals. A long time before the absorption, which for an atom is a few femtoseconds or so, this mix is 100% of the 2s state, and a few femtoseconds or so after the absorption, it's 100% the 3p state. In between, during the absorption process, it's a mix of many orbitals with wildly changing coefficients. There was a paper in Physical Review A back around 1980 or 1981, iirc, that shows some plots and pictures and went into this in some detail. Maybe it was Reviews of Modern Physics. Anyway, keep in mind that this mixture is just a mathematical description. What we really have is a wavefunction changing from a steady 2s, to a wildly boinging-about wobblemess, settling to a steady 3p.
A more energetic photon can kick the electron out of the atom, from one of its discrete-state negative energy orbital states, to a free-running positive state - generally an expanding spherical wave - it's the same as before, but instead of settling to a steady 3p, the electron wavefunction ends as a spherical expanding shell.
I wish I could show some pictures, but that would take time to find or make...
Here I will address some misconceptions in the question, not addressed by the answer of DarenW.
My question isn't how they receive the energy to jump, but why. When someone views an element's emission spectrum, we see a line spectrum which proves that they don't exist outside of their orbitals (else we would see a continuous spectrum).
These emission and absorption spectra (the figures, omitted here, showed a continuum, an emission line spectrum, and an absorption spectrum) come from the atomic orbitals, as explained in DarenW's answer. That is, the nucleus with its positive charge, say helium with charge +2, has around it two electrons "orbiting" in "orbits" allowed by the solutions of the quantum mechanical problem, where "orbits" means a spatial location in 3-dimensional space where the probability of finding electrons is high, of spherical shape about the nucleus, with very specific quantum numbers.
Electrons can be released in the form of beta decay, thus proving that they are capable of traveling outside of orbitals contrary to the statement my teacher said that they stay within orbitals.
This is a misconception. Beta decays happen when a neutron turns into a proton, an electron and an antineutrino, and they are phenomena pertaining to the nucleus, not the atom. The atom is described well by electromagnetic interactions; the nucleus is described by strong and weak interactions. Beta decays are a weak interaction. Thus the electron of the beta decay is a free electron once it materializes and is ejected from the nucleus, particularly if all free electron orbital locations are filled. The nucleus then changes into an isotope nucleus of charge Z+1.
Here is how neutron decay is currently visualized (figure omitted).
How do electrons move between orbitals, excluding energy added to excite electrons,
You have to add energy to excite the electrons to higher orbitals, and usually it is with the kick of a photon of the energy of the gap between orbitals.
and why are positrons formed sometimes instead of electrons in Beta decay?
From wikipedia on electron capture
In all the cases where β+ decay is allowed energetically, the electron capture process, when an atomic electron is captured by a nucleus with the emission of a neutrino, is also allowed
It means that a proton in the nucleus turns into a neutron, a positron and a neutrino. This lowers the nuclear Z by one unit, and will induce a cascade of higher orbital electrons falling into the hole left by the captured one.
Of course electrons CAN travel between orbitals, although they do this in a non-conventional (non-classical) way.
The question of electrons traveling between orbitals is the subject of relativistic quantum mechanics or, as it is also called, quantum field theory or quantum electrodynamics.
In words, I can describe the situation the following way.
The orbitals are not PLACES, they are EIGENSTATES of the energy operator. An electron can exist in any state, but any such state is representable as a superposition of eigenstates.
So, an electron traveling from orbital $\psi_1$ to orbital $\psi_2$ is described by the state $a \psi_1 + b \psi_2$, where $a$ and $b$ are complex weights of the components of the superposition. They change over time, having $a=1; b=0$ at the beginning of the process and $a=0; b=1$ at the end.
Also, you know that $|a|^2 + |b|^2=1$ at any instant.
The law of this change is exponential, i.e. $a(t) \sim e^{-\lambda t}$.
The parameters of this exponential depend on the state's lifetime: the shorter the lifetime, the steeper the exponential. Lifetime is also related to the state's width (energy uncertainty): the wider the state, the shorter its lifetime.
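(A toy numerical sketch of that picture, my own addition rather than part of the answer above: I assume a resonantly driven two-level system in the rotating-wave approximation, for which the weights evolve as $a(t)=\cos(\Omega t/2)$ and $b(t)=-i\sin(\Omega t/2)$. The Rabi-type model and all names in the code are illustrative assumptions, not anything from this thread.)

```python
# Toy two-level "orbital transition" at resonance (rotating-wave picture).
# The model, the Rabi frequency value, and all names are my own assumptions.
import numpy as np

omega = 1.0                                  # Rabi frequency, arbitrary units
times = np.linspace(0.0, np.pi / omega, 5)   # from t = 0 to full transfer

for t in times:
    a = np.cos(omega * t / 2)                # weight of the initial orbital
    b = -1j * np.sin(omega * t / 2)          # weight of the final orbital
    print(f"t={t:5.3f}  |a|^2={abs(a)**2:5.3f}  "
          f"|b|^2={abs(b)**2:5.3f}  sum={abs(a)**2 + abs(b)**2:5.3f}")
```

The printout shows the population sliding smoothly from the first orbital to the second while $|a|^2+|b|^2$ stays equal to 1, which is the "time-varying mix" described in the answers above.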
The answers so far seem pretty good, but I'd like to try a slightly different angle.
Before I get to atomic orbitals, what does it mean for an electron to "be" somewhere? Suppose I look at an electron, and see where it is (suppose I have a very sophisticated/sensitive/precise microscope). This sounds straightforward, but what did I do when I 'looked' at the electron? I must have observed some photon that had just interacted with that electron. If I want to get an idea of the motion of the electron (not just its instantaneous momentum, but its position as a function of time), I need to observe it for a period of time. This is a problem, though, because I can only observe the electron every time it interacts with a photon that I can observe. It's actually impossible for me to observe the electron continuously; I can only get snapshots of its position.
So what does the electron do between observations? I don't think anyone can answer that question. All we can say is that at one time the electron was observed at point A, and at a later time it was observed at point B. It got from A to B... somehow. This leads to a different way of thinking about where an electron (or other particle) is.
If I know some of the properties of the electron, I can predict that I'm more likely to observe an electron in some locations than in others. Atomic orbitals are a great example of this. An orbital is described by 4 quantum numbers, which I'll call $n$, $l$, $m$, $s$ (there are several notations; I think this one is reasonably common). $n$ is a description of how much energy the electron has, $l$ describes its total angular momentum, $m$ carries some information about the orientation of its angular momentum and $s$ characterizes its spin (spin is a whole topic on its own, for now let's just say that it's a property that the electron has). If I know these 4 properties of an electron that is bound to an atom, then I can predict where I am most likely to observe the electron. For some combinations of $(n,l,m,s)$ the distribution is simple (e.g. spherically symmetric), but often it can be quite complicated (with lobes or rings where I'm more likely to find the electron). There's always a chance I could observe the electron ANYWHERE, but it's MUCH MORE LIKELY that I'll find it in some particular region. This is usually called the probability distribution for the position of the electron. Illustrations like these are misleading because they draw a hard edge on the probability distribution; what's actually shown is the region where the electron will be found some high percentage of the time.
So the answer to how an electron "jumps" between orbitals is actually the same as how it moves around within a single orbital; it just "does". The difference is that to change orbitals, some property of the electron (one of the ones described by $(n,l,m,s)$) has to change. This is always accompanied by emission or absorption of a photon (even a spin flip involves a (very low energy) photon).
Another way of thinking about this is that the electron doesn't have a precise position but instead occupies all space, and observations of the electron position are just manifestations of the more fundamental "wave function" whose properties dictate, amongst other things, the probability distribution for observations of position.
I think I'm going to reward you with the bounty and mark one of the other answers as the answer, since it covered most of it before you, though you answered the original question which was the bounty question. Then, to add to it, you understood exactly what I was trying to get at that I failed at translating as my main question. – Bored915 Dec 18 '12 at 19:27
Glad I could help. I agree the other answers do a good job of covering some of the formal description of the question in the framework of QM, but unless you know QM (based on what it sounds like you're covering in class, I'm guessing you have at most an intro) it can be a bit hard to follow all the detail. – Kyle Dec 18 '12 at 19:40
I finished freshman physics and am taking IB physics now, and science has been a passion of mine, so I've been trying to learn ahead with the motto: the more annoying the science, the more fun it is. – Bored915 Dec 18 '12 at 20:23
Ah the IB... I remember that physics course. Only the SL version was offered at my school when I took it. I remember reading up on all the optional units I couldn't take. Good luck with your studies :) – Kyle Dec 18 '12 at 21:00
Thanks, I got exams this week (no F's please, crossing fingers) – Bored915 Dec 19 '12 at 0:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9564130902290344, "perplexity_flag": "middle"}
|
http://nebusresearch.wordpress.com/2013/03/12/on-peeking-at-cedar-point/
|
# nebusresearch
Joseph Nebus's work in progress.
## On Peeking At Cedar Point
by Joseph Nebus
I hadn’t intended to leave unanswered my little question about getting the best view of an obstructed attraction at Cedar Point, and apologize for that. Matters got in my way. And I really want to commend people to Geoffrey Brent’s solution, which avoids calculus in favor of geometric reasoning and so has that nice satisfying nature to it. (The part that turns into gibberish is rot13’d, so as not to spoil people: copy it to the box on Rot13.com and hit ‘Cypher’ to read it if you aren’t able to do the rot13 stuff in your head somehow.)
I do want to work out the solution by calculus methods, though, partly because that was actually easier for me, and partly to see whether my audience will put up with such. I’m trying to figure out how to present a more complicated subject which sure looks like it needs calculus to explain, and I’d like to have some sense whether I can write coherently on that topic so.
To set the stage: the problem was about where to stand, behind a tall obscuring fence, so as to see the greatest view of a building hidden behind the fence. To make for simple enough numbers, the viewer is assumed to have eyes six feet off the ground, the fence is eight feet tall, and the building, four feet beyond the fence, is twelve feet tall. Trusting that the ground is level — the reality isn’t quite, as it is at an amusement park — and that you can get as near or as far from the fence as you like, when does the angle between the top of the building and the top of the fence get its biggest?
I’m going to use x to represent how far the viewer is from the fence, and to use θ as the angle between the top of the building and the top of the fence. x is a traditional variable for the unknown quantity, and θ is almost as traditional for the angle. Here, exactly what the viewing angle θ is will depend on what x is, so while we’ll be solving for what x is, it will be through information about how θ depends on x. That sort of problem very often calls out for differential calculus. You can out-think this — Geoffrey Brent did — but thinking is hard work, and the tools are powerful enough that they’ll often work.
When I have a description of how the angle of what’s visible, θ, depends on how far the eye is from the fence, x, then I can find where the angle is biggest (or smallest) by taking the derivative of θ with respect to x. That derivative says how much a tiny change in x would make in θ; at the largest or the smallest angles, a tiny change in x produces no change in θ. If you don’t believe me, think of a marble rolling around in a bowl. If the marble is far away from the base, then, moving the marble a little bit sideways forces it up, or down, pretty far. If the marble is near the middle of the bowl, moving it sideways lets it move up or down not much at all.
We do need to know what the angle θ is. That’ll be the difference between the angle to the top of the building and the angle to the top of the fence. The viewer is, by definition, x feet away from the fence, and the top of the fence is two feet above the eye. Trusting that the fence is vertical, then, there are a couple of ways to describe the angle to the top of the fence, but the probably easiest is that it’s the arc-tangent of two divided by x. The building, meanwhile, is x + 4 feet away, and the top of the building is six feet above the eye. So the angle to that is the arc-tangent of six divided by the quantity x + 4.
This means that from the point x feet away from the fence, the angle of viewable building is:
$\theta = \arctan\left(\frac{6}{x + 4}\right) - \arctan\left(\frac{2}{x}\right)$
Taking the derivative — how much that expression changes as x changes — all at once looks bad, and is the sort of thing that makes me wonder why I don’t have a program which will do it automatically too, but it gets simpler if you break it down into parts. The first thing is that the derivative of the sum of two things is the same as the derivative of the first thing plus the derivative of the second, so we can take the derivative of those arctangent expressions separately. (Yes, that’s a subtraction rather than an addition, but once we reach introductory algebra we start treating subtraction as just being a special case of addition, which is coincidentally about when people really start grumbling about having to take math class.)
The next thing is that the derivative of the arctangent sounds bad, but, it isn’t: the derivative of arctan(x) with respect to x is the almost friendly-looking expression $\frac{1}{1 + x^2}$. That’s easy to deal with. Unfortunately, neither of the expressions we have is the arctangent of x; instead, they’re arctangents of expressions that depend on x.
But here again there’s a rule which makes things simpler; this one’s known as the chain rule. For this, it means the derivative, with respect to x, of $\arctan\left(\frac{2}{x}\right)$ is equal to $\frac{1}{1 + \left(\frac{2}{x}\right)^2}$ — using the formula for the derivative of arctan(x), with the thing in parentheses used in place of x — times the derivative of $\frac{2}{x}$ — which was the thing inside parentheses to start with. That derivative is an easy one, $-\frac{2}{x^2}$. So the derivative, with respect to x, of $\arctan\left(\frac{2}{x}\right)$ is $\frac{1}{1 + \left(\frac{2}{x}\right)^2}\cdot\frac{-2}{x^2}$.
Similarly again using the chain rule, the derivative of $\arctan\left(\frac{6}{x + 4}\right)$ is going to be the not-terribly-appealing-looking $\frac{1}{1 + \left(\frac{6}{x + 4}\right)^2}$ times the derivative of $\frac{6}{x + 4}$ with respect to x, which is $-\frac{6}{\left(x + 4\right)^2}$.
So the derivative of θ with respect to x is:
$\frac{d\theta}{dx} = \frac{1}{1 + \left(\frac{6}{x + 4}\right)^2}\cdot\frac{-6}{\left(x + 4\right)^2} - \frac{1}{1 + \left(\frac{2}{x}\right)^2}\cdot\frac{-2}{x^2}$ which looks horrible. But those denominators work out pretty nicely when you actually try evaluating them, as they turn out to be a lot simpler once you get over the first impression. The derivative is:
$\frac{d\theta}{dx} = \frac{-6}{\left(x + 4\right)^2 + 6^2} + \frac{2}{x^2 + 4}$ and we want to find all the values of x for which this is zero. Again I start to wonder why I don’t have a computer program that will work out symbolic algebra like this — there are many such programs available and they’re wonderful — but it’s actually quicker to do this by hand rather than wait for a package of some open-source symbolic mathematics program to download and to find what hilarious level of deranged semi-functionality is billed as “works for Mac OS”. Relying on pencil and paper, and the general idea that it’s nice to get variables out of denominators as soon as that can be done without being confusing we get:
$\frac{-6}{\left(x + 4\right)^2 + 6^2} + \frac{2}{x^2 + 4} = 0 \\ \frac{-6}{\left(x + 4\right)^2 + 6^2} = - \frac{2}{x^2 + 4} \\ 6\left(x^2 + 4\right) = 2\left(\left(x + 4\right)^2 + 6^2\right) \\ 3\left(x^2 + 4\right) = \left(x + 4\right)^2 + 6^2 \\ 3x^2 + 12 = x^2 + 8x + 16 + 36 \\ 2x^2 - 8x - 40 = 0 \\ x^2 - 4x - 20 = 0$
That last line is one of finding the roots of a quadratic polynomial; we’re good at that. We have the quadratic formula for that, one that gives us two possible solutions:
$x = \frac{-\left(-4\right) \pm \sqrt{(-4)^2 - 4\cdot1\cdot(-20)}}{2\cdot 1} \\ x = \frac{4 \pm \sqrt{96}}{2}$
The square root of 96 has to be pretty close to the square root of 100, which is 10. So, four minus ten is a negative number, putting the viewer somewhere behind the fence, where we can’t get. The only answer that’s useful for us has the positive x, that is, $x = \frac{1}{2}\left(4 + \sqrt{96}\right)$. This is $2 + \sqrt{24}$, or just about 6.9 feet in front of the fence and I am so grateful that it’s exactly what Geoffrey Brent worked out as I like the confirmation that I probably didn’t do it wrong.
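(As it happens, the symbolic-algebra program lamented about a few paragraphs back is easy to come by. Here is a minimal sketch of my own, assuming SymPy is installed, that double-checks the calculus above; the variable names are mine, not the post's.)

```python
# A minimal sketch (my own, assuming SymPy is installed) that
# double-checks the critical point of the viewing angle.
import sympy as sp

x = sp.symbols('x', positive=True)
theta = sp.atan(6 / (x + 4)) - sp.atan(2 / x)

critical = sp.solve(sp.diff(theta, x), x)
print(critical)                       # [2 + 2*sqrt(6)], i.e. 2 + sqrt(24)
print([sp.N(c) for c in critical])    # [6.89897948556636]
```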
Just finding this answer doesn't properly finish our work off, though. The maximum angle has to be found at some x where the derivative of θ with respect to x is zero, or at a boundary where the possible values of x are limited; however, just because the derivative is zero does not mean we have necessarily found a maximum. We might have found a minimum instead, or we might have found an inflection point, where the angle keeps changing in the same direction but the rate at which it changes momentarily levels off.
There are a couple of tests that might be done to see whether it is a maximum or minimum or an inflection point that we've found. The simplest, and practical for this question, is to just test it out: is the angle at the distance $x = 2 + \sqrt{24}$ larger than the angle at other distances? At my claimed maximum the angle separating the top of the building from the top of the fence is about 0.2211 (this happens to be radians; it's about 12.7 degrees). For positive values of x, ones that put us on the public side of the fence, we don't see anything greater.
That’s not exactly rigorous, though we can get away with it in this case. A more convincing proof that we’re right would be to take the derivative with respect to x of this derivative a second time. If this derivative is a negative number when $x = 2 + \sqrt{24}$ then this point is a local maximum. This is known as the “second derivative test”, for the obvious reason that it is a test. As I work it out, this second derivative is $\frac{d^2\theta}{dx^2} = \frac{12\left(x + 4\right)}{\left(\left(x + 4\right)^2 + 6^2\right)^2} - \frac{4x}{\left(x^2 + 4\right)^2}$ which, at $x = 2 + \sqrt{24}$, is roughly -0.005 — not a huge number, but we don’t care about its size. It’s a negative number, and that’s all we needed. The point we found was a maximum. (And by now you see why I so appreciate Brent’s approach: mine took less insight, but paid for it with more drudge work. This is the usual tradeoff.)
For all that, however, we wouldn’t actually get the best peek over the fence by walking just under seven feet back from the fence. We’d get the best peek over it by going to one of the adjacent rides, such as the Ferris wheel, the Wicked Twister roller coaster, the Troika, or the giant Frisbee ride maXair, all of which carry the rider well above the height of the fence where it was clear that — as of late September — the Transport Refreshments buildings were intact and in good-looking shape. (I haven’t any pictures of this because I am not so foolish as to take photographs during a ride. I might have on the Ferris wheel, I admit, but I didn’t happen to go on it.) Of course, that doesn’t say much about what might be around after the new roller coaster is fully built.
Posted on Tuesday, 12 March, 2013 at 11:49 pm in Geometry, Recreational Mathematics
Tags: amusement parks, calculus, Cedar Point, geometry, math, mathematics, maths, roller coasters
### One Comment to “On Peeking At Cedar Point”
1. “You can out-think this — Geoffrey Brent did”
…I should probably mention that I saw a very similar problem in a book, yea these many years ago, so it only took very very minor tweaking.
But I will take credit for hanging onto that knowledge for a couple of decades!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9520764350891113, "perplexity_flag": "middle"}
|
http://nrich.maths.org/1867/index
|
# Pick's Theorem
##### Stage: 3 Challenge Level:
When the dots on square dotty paper are joined by straight lines the enclosed figures have dots on their perimeter ($p$) and often internal ($i$) ones as well.
Figures can be described in this way: $(p, i)$.
For example, the red square has a $(p,i)$ of $(4,0)$, the grey triangle $(3,1)$, the green triangle $(5,0)$ and the blue shape $(6,4)$:
(Interactive diagram not shown.)
Each figure you produce will always enclose an area ($A$) of the square dotty paper.
The examples in the diagram have areas of $1$, $1 {1 \over 2}$, and $6$ sq units.
Do you agree?
Draw more figures; tabulate the information about their perimeter points ($p$), interior points ($i$) and their areas ($A$).
Can you find a relationship between all these three variables ($p$, $i$ and $A$)?
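(If you want to explore this computationally before proving anything, here is a small sketch of my own. The figure data are the four examples above, with both triangles taken to have area $1 {1 \over 2}$, consistent with the list of areas given; the relationship being tested is the classical result known as Pick's theorem, $A = i + \frac{p}{2} - 1$.)

```python
# My own sketch: test the candidate relationship A = i + p/2 - 1
# (Pick's theorem) against the example figures given above.
examples = [
    ("red square",     4, 0, 1.0),
    ("grey triangle",  3, 1, 1.5),
    ("green triangle", 5, 0, 1.5),
    ("blue shape",     6, 4, 6.0),
]

for name, p, i, area in examples:
    print(f"{name}: p={p}, i={i}, A={area}, i + p/2 - 1 = {i + p / 2 - 1}")
```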
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8927410840988159, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/statistics/81602-another-problem-can-t-figure-out.html
|
# Thread:
1. ## Another Problem- Can't figure it out.
A bag contains 5 black and 3 red balls. A ball is taken out of the bag and is not returned to it. If the process is repeated 3 times, then what is the probability of drawing a black ball in the next draw?
a) 0.7
b) 0.625
c) 0.1
d) None of these.
2. Thought process -
I think the three balls can be drawn in 4 ways:
1) All three balls are black = 5/8 · 4/7 · 3/6; then the probability that the next ball drawn is black is 2/5.
2) 2 black and 1 red = 5/8 · 4/7 · 3/6; then the probability that the next ball drawn is black is 3/5.
3) 1 black and 2 red = 5/8 · 3/7 · 2/6; then the probability that the next ball drawn is black is 4/5.
4) All red = 3/8 · 2/7 · 1/6; then the probability that the next ball drawn is black is 1.
Probability = (option 1 + option 2 + option 3 + option 4)/choosing one out of four
= (2/5 + 3/5 + 4/5 + 1)/4C1 = 14/20 = 0.7
Please let me know if the way I am thinking is right.
Anshu.
3. Originally Posted by Curious_eager
A bag contains 5 black and 3 red balls. A ball is taken out of the bag and is not returned to it. If the process is repeated 3 times, then what is the probability of drawing a black ball in the next draw?
a) 0.7
b) 0.625
c) 0.1
d) None of these.
One way of doing this would be to draw a tree diagram.
Thanks Mr Fantastic. Just want to know if the approach I followed is right or wrong.
5. Originally Posted by Curious_eager
1) All three balls are black = 5/8 · 4/7 · 3/6; then the probability that the next ball drawn is black is 2/5.
2) 2 black and 1 red = 5/8 · 4/7 · 3/6; then the probability that the next ball drawn is black is 3/5.
3) 1 black and 2 red = 5/8 · 3/7 · 2/6; then the probability that the next ball drawn is black is 4/5.
4) All red = 3/8 · 2/7 · 1/6; then the probability that the next ball drawn is black is 1.
You have correctly identified the four possibilities.
But what you have not done correctly is to calculate the probabilities of each option.
For example, #2 should be: ${3 \choose 1}\frac{5\cdot 4\cdot 3}{8\cdot 7\cdot 6} \frac {3}{5}$.
If done correctly by adding we get $0.625$.
It is curious to note that this is exactly the probability of getting a black ball on the first draw.
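(A brute-force confirmation, my own addition to the thread: enumerate all equally likely orderings of the 8 balls and count how often the fourth ball drawn is black.)

```python
# My own sketch: exact enumeration over orderings of 5 black (B) and
# 3 red (R) balls; the fourth draw should be black with probability 5/8.
from itertools import permutations
from fractions import Fraction

balls = "BBBBBRRR"
orderings = set(permutations(balls))            # 8!/(5!3!) = 56 distinct
hits = sum(1 for o in orderings if o[3] == "B")
print(Fraction(hits, len(orderings)))           # 5/8 = 0.625
```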
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.890586256980896, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/45808/why-the-hamiltonian-and-the-lagrangian-are-used-interchangeably-in-qft-perturbat
|
# Why the Hamiltonian and the Lagrangian are used interchangeably in QFT perturbation calculations
Whenever one needs to calculate correlation functions in QFT using perturbations one encounters the following expression:
$\langle 0|\ \text{some operators} \times \exp(iS(t))\, |0\rangle$
where, depending on the textbook, S is either (up to a sign)
1. $\int \mathcal{L}dt$ where $\mathcal{L}$ is the interaction Lagrangian
or
2. $\int \mathcal{H}dt$ where $\mathcal{H}$ is the interaction Hamiltonian.
It is straightforward to prove that if you do not have time derivatives in the interaction terms, these two expressions are equivalent. However, these expressions are derived through different approaches, and I cannot explain from first principles why (and when) they give the same answer.
Result 1 comes from the path-integral approach where we start with a Lagrangian and do perturbation with respect to the action which is the integral of the Lagrangian. Roughly, the exponential is the probability amplitude of the trajectory.
Result 2 comes from the approach taught in QFT 101: Starting from the Schrödinger equation, we guess relativistic generalizations (Dirac and Klein-Gordon) and we guess the commutation relations to be used for second quantization. Then we proceed to the usual perturbation theory in the interaction picture. Roughly, the exponential is the time evolution operator.
Why and when are the results the same? Why and when the probability amplitude from the path integral approach is roughly the same thing as the time evolution operator?
Or restated one more time: Why the point of view where the exponential is a probability amplitude and the point of view where the exponential is the evolution operator give the same results?
There is a change in sign either in 1 or in 2. – juanrga Dec 4 '12 at 21:17
## 4 Answers
Starting from the Hamiltonian formulation of QM one can derive the path-integral formalism (see chapter 9 in Weinberg's QFT volume 1), where the Hamiltonian action is found to be proportional to $\int \mathrm{d}t (pv - H)$.
For a subclass of theories with "a Hamiltonian that is quadratic in the momenta" (see section "9.3 Lagrangian Version of the Path-Integral formula" in above textbook), the term $(pv - H)$ can be transformed into a Lagrangian $L_H = (pv - H)$. Then the Lagrangian action is proportional to $\int \mathrm{d}t L_H$. Both actions give the same results because one is exactly equivalent to (and derived from) the other.
$$\int \mathrm{d}t (pv - H) = \int \mathrm{d}t L_H$$
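To make the quadratic case concrete, here is a standard single-particle example (my own illustration of the textbook fact, not anything specific to this answer): take $H = \frac{p^2}{2m} + V(q)$, so that $v = \partial H/\partial p = p/m$. Then

$$pv - H = \frac{p^2}{m} - \frac{p^2}{2m} - V(q) = \frac{1}{2}mv^2 - V(q)$$

which is the familiar $L = T - V$.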
Moreover, when working in the interaction representation you do not use the total Hamiltonian but only the interaction. The derivation of the Hamiltonian action is the same, except that now the total Hamiltonian is substituted by the interaction Hamiltonian $V$. Again you have two equivalent forms of write the action either in Hamiltonian or Lagrangian form.
If you consider Hamiltonians whose interaction $V$ does not depend on the momenta, then the $pv$ term vanishes and the above equivalence between the actions reduces to
$$- \int \mathrm{d}t V = \int \mathrm{d}t L_V$$
where, evidently, the interaction Lagrangian is $L_V = -V$
This is what happens for instance in QED, where the interaction $V$ depends on both position and Dirac $\alpha$ but not on momenta.
Note: There is a sign mistake in your post. I cannot edit because the change is less than 10 characters, and I have noticed the mistake in a comment to you above, but it remains.
From $L=H_0-V$ and $H=H_0+V$ one sees that (for simple enough theories) the Lagrangian interaction and the Hamiltonian interaction differ only by a sign.
what conditions must hold a theory to be considered simple enough for both formulations be equivalent? – lurscher Dec 3 '12 at 20:25
I think that you restated the naive reason that I already gave for why all this works. However I do not see why it is true from first principles. Sorry for the vagueness of the comment. – Krastanov Dec 3 '12 at 20:55
@Krastanov: there are no firster principles for this. For a general Lagrangian not of the form $L=T-V$, the equivalence is not true. – Arnold Neumaier Dec 4 '12 at 7:57
Well, another reason why this may be the case is that the action is the same thing in both cases. You either start with the action $S = \int L$, or you start with the so-called phase space action of the Hamiltonian $S = \int p{\dot q} - H$. Given the definition of the Hamiltonian, it should be clear that these two expressions are formally identical if you have an invertible mapping from the set $(p, q) \rightarrow (q, {\dot q})$. Since interaction terms typically don't involve the time derivatives of the configuration variables (though there are exceptions), it's hardly surprising that the part of the formalism that involves only interactions will come out nearly identically, formally.
I do not get the part about starting from the "phase space action". I do not think that I have ever done something like that (I am a novice.). – Krastanov Dec 3 '12 at 20:57
@Krastanov: Start with the "phase space action" I put up above, and vary it with respect to $p$ and $q$. The result will be that you get Hamilton's equations of motion out. It's just a cute way of making the Hamiltonian formalism look more "principle of least action"-y. – Jerry Schirmer Dec 3 '12 at 22:17
Isn't the exponential in form 1) also an operator? Then it is not really the probability amplitude of a trajectory, as you claim. Both forms seem to be derived from the same origin, namely the Hamiltonian formalism, and both exponentials have the status of the time evolution operator in the interaction picture. In the path-integral approach the expression would be the formally quite different beast: $$\langle 0|\; \text{some operators}\;e^{i S[\hat{\phi}]} |0\rangle = \int D \phi \; \text{some functions (fields)} \; e^{i S[\phi]},$$ where now the exponential on the right hand side is a c-number that indeed can be interpreted as the amplitude of a field configuration $\phi(x)$. The integral is a sum over all such weighted field configurations.
If you want to see quickly why this should be the same as your expression, then remember that the ground state $|0\rangle$ is not an eigenstate of the evolution operator or the field operators, in general. It can, however, be expanded as a sum of field eigenstates (coherent states). You can then see immediately that the expansion will give a sum of c-number contributions, which also represent field configurations weighted by an exponential function. If you want to actually do this properly you will have to do some other stuff like discretise space-time and insert complete sets of field eigenstates at each point. This is covered in most field theory texts, e.g. Peskin & Schroeder, or Altland & Simons.
Thanks for the corrections; however, they do not change my question: why do the point of view where the exponential is a probability amplitude and the point of view where the exponential is the evolution operator give the same results? – Krastanov Dec 4 '12 at 20:15
Well, if you really want to understand why they give the same results, you have to just go through the derivation, as with anything else in mathematical physics. What I was trying to say is that both exponentials ultimately represent amplitudes weighting possible field configurations. In order to calculate transition probabilities, expectation values or whatever, you always sum over the amplitudes for indistinguishable paths between initial and final states. – Mark Mitchison Dec 4 '12 at 21:53
(contd.) In the path integral formalism, this sum is explicit, and we generally speak of a dominant contribution coming from the classical stationary path $\delta S/\delta \phi = 0$, with quantum fluctuations around the stationary path suppressed by a factor $e^{i S/\hbar}$. In the Hamiltonian formalism the sum is kind of 'hidden' inside the operators, and appears because non-zero commutation relations between operators mean that the interesting states (e.g. the ground state, particle number eigenstates...) will not be eigenstates of the interaction Hamiltonian/Lagrangian. – Mark Mitchison Dec 4 '12 at 22:01
(contd.) So in the operator formalism, it is instead commutation relations that imply fluctuations. But the physical meaning is the same. I like to think of operators as algebraic book-keeping devices that reproduce results of the path-integral formalism. But this is all horribly vague and imprecise, and will probably irritate some other user of this site who has a different personal take on the meaning of the formalism. If you want to understand for yourself, you really need to go through the derivation from one formalism to the other in, say, one of the references I gave. – Mark Mitchison Dec 4 '12 at 22:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9231372475624084, "perplexity_flag": "head"}
|
http://www.physicsforums.com/library.php?do=view_item&itemid=31
|
moment of inertia
Definition/Summary
Moment of Inertia is a property of rigid bodies. It relates rotational force (torque) to rotational acceleration in the same way that mass relates ordinary (linear) force to ordinary acceleration. Moment of Inertia has dimensions of distance squared times mass ($ML^2$). Moment of Inertia is always relative to a given axis. The same rigid body will usually have different Moments of Inertia for different axes. Moment of Inertia is additive: the Moment of Inertia of a composite body is the sum of the Moments of Inertia of its parts (relative to the same axis).
Equations
Moment of inertia of a rigid body about an axis, where $r$ is the distance from that axis and $\rho$ is the density:
$$I\ \ =\ \int dm\ r^2\ =\ \int \rho\, dx\,dy\,dz\ r^2$$
(for comprehensive lists of moments of inertia of specific bodies, see See Also)

Moment of momentum (angular momentum) about the centre of mass, C:
$\ \ \ \ \boldsymbol{L}_C = I_C\,\boldsymbol{\omega}$ (principal axis case)
$\ \ \ \ \boldsymbol{L}_C = \tilde{I}_C\,\boldsymbol{\omega}$ (general case)
where $\boldsymbol{\omega}$ is parallel to a principal axis with principal moment of inertia $I_C$, and $\tilde{I}_C$ is the moment of inertia tensor.

About a general point, P:
$\ \ \ \ \boldsymbol{L}_P = I_C\,\boldsymbol{\omega}\ +\ \vec{PC}\times m\,\boldsymbol{v}_C$ (principal axis case)
$\ \ \ \ \boldsymbol{L}_P = \tilde{I}_C\,\boldsymbol{\omega}\ +\ \vec{PC}\times m\,\boldsymbol{v}_C$ (general case)

Kinetic energy: $KE\ =\ \frac{1}{2}\,m\boldsymbol{v}_C^2\ +\ KE_{rot}\ =\ KE_{trans}\ +\ KE_{rot}$, where:
$\ \ \ \ KE_{rot}\,=\,\frac{1}{2}\ \boldsymbol{\omega}\cdot\boldsymbol{L}_C\,=\,\frac{1}{2}\ I_C \omega^2$ (principal axis case)
$\ \ \ \ KE_{rot}\,=\,\frac{1}{2}\ \boldsymbol{\omega}\cdot\boldsymbol{L}_C\,=\,\frac{1}{2}\ \boldsymbol{\omega}^T\boldsymbol{L}_C\,=\,\frac{1}{2}\ \boldsymbol{\omega}^T\tilde{I}_C\boldsymbol{\omega}$ (general case)

Parallel axis theorem (distance $d$ between the axes, one of them through the centre of mass):
$$I\ =\ I_C\ +\ md^2$$
Tensor version (displacement $\boldsymbol{d}$ between points):
$$\tilde{I}\ =\ \tilde{I}_C\ -\ m(\boldsymbol{d}\times(\boldsymbol{d}\ \times))$$

Period of a pendulum (with small oscillations), where $h$ is the distance from the axis to the centre of mass:
$$T = 2 \pi \sqrt{\frac{I}{mgh}}$$

Radius of gyration of a body about an axis (the distance, from the axis, at which a point mass with the same mass, rotating about the same axis, would have the same moment of inertia):
$$\sqrt{\frac{I}{m}}$$
Scientists
Isaac Newton (1643-1727)
Leonhard Euler (1707-1783): Theoria motus corporum solidorum seu rigidorum (1765)
Jakob Steiner (1796-1863)
Breakdown
Physics > Classical Mechanics >> Newtonian Dynamics
See Also
angular momentum · MofI list @ wiki · MofI tensor list @ wiki · MofI tensor @ john baez · moment of area
Extended explanation
"about a point": WARNING: there is no such thing as moment of inertia about a point. In two-dimensional problems, we often say "moment of inertia about a point", but we really mean "moment of inertia about the axis, through that point, perpendicularly out of the page" "about a point" (for moment of inertia) is perfectly acceptable in two dimensions (even in exam questions ), but it does not work at all in three dimensions. (The angular momentum vector, by comparison, is defined either about an axis or about a point: its magnitude about an axis is the component along that axis of the complete angular momentum vector about any point on that axis: however, there is no moment of inertia vector.) (The moment of inertia tensor, by comparison, is about a point: it generates the moment of inertia about any axis through that point.) Axis of rotation: If one point of a rigid body is fixed, then a whole line of points in the body is stationary. That line (which may change, both in the body itself, and in space) is the (instantaneous) axis of rotation. For a freely-moving rigid body, the axis of rotation is usually (but not always) taken to be through its centre of mass. Any parallel axis will also do. General principles: For a rigid body moving without rotation, every part of it has the same speed, and so the kinetic energy of the body is given by the general expression: $KE\ =\ (1/2)mv_C^2$ where $v_C$ is the speed of its center of mass. But if it is rotating, the speed of each part depends on its distance from the (instantaneous) axis of rotation. For example, if that axis goes through the center of mass, then $v^2\ =\ v_C^2\ +\ \omega^2r^2$, where $\omega$ is the angular velocity, and $r$ is the perpendicular distance from the axis. Imagine an object, with a fixed axis, being divided into infinitesimally small pieces, each at distance $r_i$ from the axis, and of mass $\delta m_i$, so that its total mass is $M\ =\ \sum_i \delta m_i$ The kinetic energy now becomes the sum of the KEs of each of these tiny pieces:$KE\ =\ \sum_i KE_i\ =\ (1/2)\sum_i \delta m_i v_i^2$But $v_i\ =\ \omega r_i$, so $KE\ =\ (1/2)\sum_i \delta m_i \omega ^2 r_i^2\ =\ (1/2) \omega ^2 \sum _i \delta m_i r_i^2$ We can write this as: $KE\ =\ (1/2)I\omega ^2$ where I is given by: $I\ =\ \sum _i \delta m_i r_i^2$ In the limit where the the object is broken up into infinitely many pieces, the sum is replaced by an integral, giving the Moment of Inertia of the object about the specified axis: $I\ =\ \int r^2dm$ There is a similar derivation for the formula for the angular momentum vector, $\boldsymbol{l}\ =\ I \boldsymbol{\omega}$ Four formulas: There are four principal formulas to know for moment of inertia, and they are: 1. of a point mass m about an axis, I = mr², where r is the distance between the point mass and the axis. 2. of a uniform rod of mass m and length 2L about an axis through its midpoint and perpendicular to its length, I = ⅓mL². 3. of a rectangular lamina of mass m, length 2L and any width about an axis which bisects the length, I = ⅓mL². 4. of a circular disc of mass m and radius r about an axis through its centre and perpendicular to the disc, I = ½mr². For other formulas, see the list in wikipedia. Moment of inertia tensor: Surprisingly, angular momentum is not generally aligned with rotation. The angular momentum vector is only aligned with the axis of rotation of a rigid body if that axis is a principal axis of the body. 
Since the angular momentum vector of an unforced rigid body must be constant (in space), the axis of rotation (if not already aligned along it) must move around it: this is precession.

The moment of inertia tensor converts the angular velocity vector of a rigid body into the angular momentum vector: $\tilde{I}\,\boldsymbol{\omega}\ =\ \boldsymbol{L}$. A tensor converts one vector to a different vector.

The eigenvectors of the moment of inertia tensor of a rigid body are its principal axes, and the eigenvalue of each principal axis is the (ordinary) moment of inertia about that axis. Every rigid body has either:
i] three perpendicular principal axes, or
ii] principal axes in every direction in a particular plane (all with the same moment of inertia), and a perpendicular principal axis, or
iii] principal axes in every direction (all with the same moment of inertia).
In particular, any axis of rotational symmetry of a rigid body is a principal axis.

Angular momentum:

The rate of change of the angular momentum vector of a rigid body about any point equals the net torque vector about that point (the moment of all the external forces acting on that body):
$$\boldsymbol{\tau}_{net}\ =\ \frac{d\boldsymbol{L}}{dt}\ =\ \frac{d}{dt}\left(\tilde{I}\,\boldsymbol{\omega} \right)\ =\ \tilde{I}\,\frac{d\boldsymbol{\omega}}{dt}\ +\ \frac{d\tilde{I}}{dt}\,\boldsymbol{\omega}$$
Here the point must be stationary, or must have a velocity equal (or parallel) to the velocity of the centre of mass: that includes the centre of mass itself, and the (moving) instantaneous point of contact with a fixed surface (whether rolling or skidding) where the normal to the surface always passes through the centre of mass.

Although $\tilde{I}$ is fixed in the body, it is not fixed in space, and so $d\tilde{I}/dt$ is not zero, unless $\boldsymbol{\omega}$ lies along a principal axis. If $\boldsymbol{\omega}$ does lie along a principal axis, with eigenvalue $I_A$, then the angular momentum vector is:
$$\boldsymbol{L}\ =\ \tilde{I}\,\boldsymbol{\omega}\ =\ I_A\,\boldsymbol{\omega}$$
and so, taking components in that direction, if $\boldsymbol{\omega}$ is fixed in that direction, then the torque about that axis equals the (ordinary) moment of inertia about that axis times the angular acceleration:
$$\tau_{net}\ =\ \frac{dL}{dt}\ =\ I_A\,\frac{d\omega}{dt}$$
In lists of (ordinary) moments of inertia about an axis, that axis is always a principal axis.
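A small numerical sketch of the eigenvector characterization of principal axes above (the inertia tensor values here are invented purely for illustration):

```python
# Sketch: recover the principal axes and principal moments of a rigid body
# by diagonalizing a made-up symmetric inertia tensor (units: kg m^2).
import numpy as np

I_tensor = np.array([[ 3.0, -1.0, 0.0],
                     [-1.0,  3.0, 0.0],
                     [ 0.0,  0.0, 5.0]])

# eigh handles real symmetric matrices: real eigenvalues, orthonormal eigenvectors.
moments, axes = np.linalg.eigh(I_tensor)

# Eigenvalues are the principal moments of inertia; the columns of `axes`
# are the corresponding principal axes.
for I_k, axis in zip(moments, axes.T):
    print(f"principal moment {I_k:.1f} about axis {np.round(axis, 3)}")
```

For this tensor the principal moments come out as 2, 4 and 5, with the first two principal axes along $(1,1,0)/\sqrt{2}$ and $(1,-1,0)/\sqrt{2}$.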
Rotating frame and Euler's equations:

However, if we change to a frame of reference fixed in the body, and therefore rotating with it, then $d\tilde{I}/dt$ is zero, but there is an added "cross" term:
$$\boldsymbol{\tau}_{net}\ =\ \frac{d\boldsymbol{L}}{dt}\ +\ \boldsymbol{\omega}\times \boldsymbol{L}\ =\ \frac{d}{dt}\left(\tilde{I}\,\boldsymbol{\omega} \right)\ +\ \boldsymbol{\omega}\times \left(\tilde{I}\,\boldsymbol{\omega}\right)\ =\ \tilde{I}\,\frac{d\boldsymbol{\omega}}{dt}\ +\ \boldsymbol{\omega}\times \left(\tilde{I}\,\boldsymbol{\omega}\right)$$
which, expressed relative to three perpendicular axes fixed in the body along principal axes with moments of inertia $I_1,\ I_2\ \text{and}\ I_3$, gives the three Euler's equations:
$$\tau_1\ =\ I_1\,\frac{d\omega_1}{dt} + (I_3\ -\ I_2)\omega_2\omega_3$$
$$\tau_2\ =\ I_2\,\frac{d\omega_2}{dt} + (I_1\ -\ I_3)\omega_3\omega_1$$
$$\tau_3\ =\ I_3\,\frac{d\omega_3}{dt} + (I_2\ -\ I_1)\omega_1\omega_2$$

Composite body … the parallel axis theorem:

(Sometimes called "Steiner's theorem", or even the "Huygens-Steiner theorem".)

The simple formulas ($\boldsymbol{L}\ =\ I\boldsymbol{\omega}\ +\ \boldsymbol{r}\times m\,\boldsymbol{v}$ etc) only work for $I$ about an axis through, or for $\tilde{I}$ at, the centre of mass. For a composite body, we must use $I$ or $\tilde{I}$ relative to the combined centre of mass, which we obtain by adding $I$ or $\tilde{I}$ of each individual part, not of course relative to its own centre of mass, but to the combined centre of mass. This "off-centre" moment of inertia of each individual part is:
$$I = I_C\ +\ md^2$$
where $m$ is the mass of the part, $I_C$ is its moment of inertia about the parallel axis through its own centre of mass, and $d$ is the (perpendicular) distance of the combined centre of mass from that axis. The parallel axis theorem has no other useful purpose.

The perpendicular axis theorem:

The Moment of Inertia of a lamina (an ideal two-dimensional body) about any axis perpendicular to the lamina is the sum of its Moments of Inertia about any pair of axes in the plane of the lamina which are perpendicular to the first axis and to each other: $I = I_x\,+\,I_y$.

Comparison with Inertia:

The name "Moment of Inertia" suggests rotational inertia, the rotational analogue of mass (inertia) for linear motion. This is not a good name, since Moment of Inertia has dimensions of mass times distance squared, while Moment usually has dimensions of distance, and inertia has dimensions of mass (or arguably mass times distance over time), so "Moment" times "Inertia" should have dimensions of mass times distance (or arguably mass times distance squared over time).

Isaac Newton, incidentally, described inertia as follows: "The vis insita, or innate force of matter is a power of resisting, by which every body, as much as in it lies, endeavors to preserve in its present state, whether it be of rest, or of moving uniformly forward in a right line."

Also called mass moment of inertia:

Structural engineers sometimes use the name "moment of inertia" for the second moment of area, and (so as to avoid confusion) call the (usual) moment of inertia the mass moment of inertia, yet denote both by the same letter ($I$). Of course, there is no connection: second moment of area has units of length to the fourth power ($L^4$), and measures a body's resistance to bending stress perpendicular to an axis, while (mass) moment of inertia has units of mass times length squared ($ML^2$), and relates angular acceleration to torque ($\tau\ =\ I\alpha$).
(Similarly, sometimes the name "polar moment of inertia" is used for the polar moment of area)
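As a quick worked check of the parallel axis theorem stated above (an example added here, not part of the original entry): take the uniform rod of mass $m$ and length $2L$, with $I_C\ =\ \tfrac{1}{3}mL^2$ about its midpoint. About the parallel axis through one end, $d = L$, so
$$I\ =\ I_C\ +\ md^2\ =\ \tfrac{1}{3}mL^2\ +\ mL^2\ =\ \tfrac{4}{3}mL^2,$$
which matches direct integration: $\int_0^{2L} r^2\,\frac{m}{2L}\,dr\ =\ \frac{m}{2L}\cdot\frac{8L^3}{3}\ =\ \tfrac{4}{3}mL^2$.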
Commentary
Qasim Balti @ 04:47 AM Nov9-12
Best example of inertia: a bus is moving with some people sitting in it; when the bus suddenly applies the brakes, the people who are sitting feel some kind of force which pulls them forward. That force is the force of inertia. ~EDIT(tiny-tim): that's inertia, an old word for momentum (or mass), which has almost nothing to do with moment of inertia; see the section Comparison with Inertia.
Mubina @ 02:32 AM Jul17-12
It will be better if you add some pictures, to explain.
nazarmowfaq @ 12:17 AM May7-12
very good
moses obinna @ 02:14 PM Apr18-12
i love this forum
Hitendra @ 06:58 AM Mar23-12
Why can't I see the maths formulas on this site? ~EDIT(tiny-tim): you need javascript enabled
tiny-tim @ 03:37 AM Mar7-12
Removed inertia tensor, and products of inertia, from "Equations" (only).
tiny-tim @ 05:51 AM Mar6-12
Split opening paragraph into two paragraphs, and made it more concise.
jambaugh @ 09:33 PM Mar5-12
I removed objection to moment of inertia about a point. Very technically the moment of inertia tensor is always defined "about a point" that point being the center of all axes of rotation for which the tensor is valid. One then has the translation formula. I also removed the statements about moment of inertia being a scalar. It is as a whole, a tensor. I rephrased things to reference components. This article needs much more editing, I'll look at it some more later. Also: "being scalar" has nothing to do with additivity, I changed the claim to a direct statement. Note that in its full generality, the moment of inertia is a linear operator mapping a rank 2 antisymmetric tensor (rotation tensor [omega cross ] ) to another rank 2 antisymmetric tensor (angular momentum tensor [L cross] ) where [v cross] is the matrix mapping u to v cross u. That may seem a bit of extreme generalization but it relates to the fact that it is perfectly reasonable to speak of rotations about a center in terms of the plane of rotation rather than the axis of rotation. a la "moment of inertia about a point".
photon rush @ 09:51 PM Oct8-11
tiny tim you are the man, great explanation!
photon rush @ 08:50 PM Oct7-11
If we changed the Earth's moment of inertia to an equal opposite by reversing polarity, would we get younger instead of older?
photon rush @ 06:58 PM Oct7-11
moment of inertia is scalar reference: top of page.
tiny-tim @ 07:17 AM Aug30-11
Changed \mathbf to \boldsymbol, since (with the new mathjax) that gives letters the correct slope and works with \omega.
arildno @ 01:31 PM Jan1-11
Tiny-Tim: Concerning angular momentum with respect to a point, you should emphasize that the equality between the torques of the external forces and the rate of change of angular momentum ABOUT that point only holds in the general case if we regard the point as being at rest with respect to our inertial frame. ~EDIT(tiny-tim): (or "comoving") Done it. Thanks.
manoj mishra @ 11:34 AM Oct9-10
moment of inertia is a tensor quantity
gullapalli @ 09:41 AM May31-10
The correct Physical meaning of Moment of Inertia is - The opposition of a body to any external force because of its internal structure.
http://math.stackexchange.com/questions/206279/when-the-injective-hull-is-indecomposable
# When the injective hull is indecomposable
Let $R$ be a ring and $M$ an $R$-module. Denote by $E(M)$ the injective hull of $M$. I was trying to prove that the following conditions are equivalent:
1) $(0)$ is meet-irreducible in $M$;
2) $E(M)$ is directly indecomposable.
I was able to prove that 1 implies 2 but I'm having some difficulties in proving 2 implies 1, any suggestions?
-
What does "0 inrreducible in M" mean? – rschwieb Oct 3 '12 at 0:46
@rschwieb: “$M$ has uniform dimension 1”, $M$ contains no direct sum of two nonzero submodules. – Jack Schmidt Oct 3 '12 at 0:53
@rschwieb: that was a typo, it's irreducible and it means if $(0)=N_1\cap N_2$ with $N_1,N_2\subset M$ then $N_1=0$ or $N_2=0$. – Chris Oct 3 '12 at 1:03
See page 84 of Lam's Lectures on Modules and Rings for a proof – Jack Schmidt Oct 3 '12 at 1:04
– Jack Schmidt Oct 3 '12 at 1:11
## 1 Answer
It's elementary if you know the basic properties: "$E(M)$ is a maximal essential extension of $M$", and "injective submodules are direct summands".
Suppose $E(M)$ is indecomposable. Let $A$ and $B$ be nonzero submodules of $M$ such that $A\cap B=0$. Then $E(A)$ is an injective submodule of $E(M)$, and hence it is a direct summand. The only possibilities are $E(M)$ and $0$. Since $A$ is nonzero, it is not the latter, so $E(A)=E(M)$. But this means that $A$ is essential in $E(M)$, and so $A\cap B\neq 0$, a contradiction.
Now suppose $0$ is irreducible. Let $C\oplus D=E(M)$ with $C\neq 0$. Now $(C\cap M)\cap (D\cap M)=0$, and since $M$ is essential in $E(M)$, $M\cap C\neq 0$. By irreducibility of $0$, $M\cap D=0$, but again because $M$ is essential, this amounts to $D=0$. Thus, $E(M)$ is indecomposable.
-
As you can see, commutativity does not come into play. – rschwieb Oct 3 '12 at 1:19
http://physics.stackexchange.com/questions/tagged/singularity?sort=active&pagesize=15
# Tagged Questions
The singularity tag has no wiki summary.
3answers
105 views
### Is it possible to have a singularity with zero mass?
A singularity, by the definition I know, is a point in space with an infinite value of some property, such as density. Density is mass/volume. Since the volume of a singularity is 0, the density will thus ...
2answers
104 views
### What is a sudden singularity?
I've seen references to some sort of black hole (or something) referred to as a sudden singularity, but I haven't seen a short clear definition of what this is for the layman.
1answer
280 views
### Superposition of Ricci scalars [closed]
Suppose I have two point/line singularities in spacetime (what is important to me is that they are localized). Also suppose I have some fields in spacetime and that the two singularities interact with ...
0answers
49 views
### Singularities in Schwarzschild space-time
Can anyone explain when a co-ordinate and geometric singularity arise in Schwarzschild space-time with the element ...
5answers
325 views
### What happens to light and mass in the center of a black hole?
I know that black holes are "black" because nothing can escape it due to the massive gravity, but I am wondering if there are any theories as to what happens to the light or mass that enters a black ...
2answers
90 views
### What happens to things when things get crushed in a blackhole [duplicate]
When a black hole destroys things until they are smaller than molecules, where does it go and what happens when it clogs up?
2answers
398 views
### Can we have a black hole without a singularity?
Assuming we have a sufficiently small and massive object such that it's escape velocity is greater than the speed of light, isn't this a black hole? It has an event horizon that light cannot escape, ...
1answer
227 views
### How to thoroughly distinguish a coordinate singularity and a physical singularity
In a course on general relativity I am following at the moment, it was shown that the singularity $r=2M$ in the Schwarzschild solution is a consequence of the choice of coordinates. Introducing ...
0answers
45 views
### Naked singularity and null coordinates
I'm trying to understand the notion of a naked singularity on a more mathematical level (intuitively, it's a singularity "one can see and poke with a stick", but I'm having troubles on how to actually ...
0answers
48 views
### Naked singularity and extendable geodesics
I'm currently trying to understand the notion of a naked singularity. After consulting books by Wald and Choquet-Bruhat, it seems that for a naked singularity one must have that the causal curves can ...
2answers
183 views
### Fighting a black hole: Could a strong spherical shell inside an event horizon resist falling in to the singularity?
As a thought experiment imagine an incredibly strong spherical shell with a diameter a bit smaller than the event horizon of a particular large black hole. The shell is split into two hemispheres, ...
2answers
171 views
### What is the definition of a timelike and spacelike singularity?
What is the definition of a timelike and a spacelike singularity? I have been trying to find the definitions but haven't yet.
6answers
133 views
### Is a black hole bright at the center?
As we know, light photons cannot escape the gravity of a black hole, so I was thinking that, if that is so, the surface of the black hole would be bright, as all the photons would be there. Am I right ...
4answers
860 views
### If an anti-matter singularity and a normal matter singularity, of equal masses, collided would we (outside the event horizon) see an explosion?
If an anti-matter singularity and a normal matter singularity, of equal masses, collided would we (outside the event horizon) see an explosion?
4answers
864 views
### Why does a black hole have a finite mass?
I mean besides the obvious "it has to have finite mass or it would suck up the universe." A singularity is a dimensionless point in space with infinite density, if I'm not mistaken. If something is ...
2answers
106 views
### Future light cones inside black hole
In Carroll's Spacetime and Geometry, page 227, he says that from the Schwarzschild metric you can see that from inside a black hole future events all lead to the singularity. He says you can see this ...
1answer
381 views
### What happens to a photon in a black hole?
Assume a photon enters the event horizon of a black hole. The gravity of the black hole will draw the photon into the singularity eventually. Doesn't the photon come to rest and therefore lose its ...
3answers
118 views
### Is singularity at the exact centre of a black hole?
I've read that all paths inside the EH lead to singularity. ALL paths. Even the ones pointing away from it, right? Because there's NO pointing away from singularity, since ALL paths point to it. So ...
2answers
90 views
### about the 1D singularity of black hole
I saw some responses here saying that the singularity inside the black hole is a one-dimensional object, so my question is: is it possible that the singularity is simply a merger of the 4 dimensions of the ...
1answer
128 views
### Electric field singularities
Is this the list of all possible singularities in the electrostatic field $E$?
near the point charge: $\frac{1}{r^2}$
near the line of charge: $\frac{1}{r}$
near the edge (not surface) of uniform ...
0answers
43 views
### Is sonoluminescence relevant to the behaviour of Navier-Stokes (or converse)?
More precisely, could sonoluminescence be a singularity of Navier-Stokes (NS)? Is there some other connection that might be interesting, or is it completely irrelevant? The wiki page mentions NS, but says ...
1answer
43 views
### How does a geodesic equation on an n-manifold deal with singularities?
My general premise is that I want to investigate the transformations between two distinct sets of vertices on n-dimensional manifolds and then find applications to theoretical physics by: ...
4answers
220 views
### Gravity from a singularity as distance approaches zero
If you had a singularity (that had mass but took up no space), what would happen to the acceleration of an object as it approached this singularity? I would assume that it would be infinite, since as ...
2answers
100 views
### Was the singularity at Big Bang perfectly uniform and if so, why did the universe lose its uniformity?
Am I right in understanding that current theory states that the Big Bang originated from a single point of singularity? If so, would this mean that this was a uniform point? If so, as the universe ...
1answer
45 views
### Relationship between Weak Cosmic Censorship and Topological Censorship
The weak cosmic censorship states that any singularity cannot be in the causual past of null infinity (reference). The topological censorship hypothesis states that in a globally hyperbolic, ...
0answers
25 views
### What is the state-of-the-art on spacelike singularities in string theory?
What lessons do we have from string theory regarding the fate of singularities in general relativity? What happens to black hole singularities? What happens to cosmological singularities? Which ...
5answers
895 views
### Why singularity in a black hole, and not just “very dense”?
Why does there have to be a singularity in a black hole, and not just a very dense lump of matter of finite size? If there's any such thing as granularity of space, couldn't the "singularity" be just ...
0answers
401 views
### Was the Big Bang a result of a decayed white hole singularity? [closed]
From my understanding, the Big Bang is theorized to have been a result of matter ejecting from a decayed white hole space/time singularity. ...
3answers
306 views
### An electron falling into a black hole
If an electron falls into a black hole, how can the Heisenberg uncertainty principle hold? The electron has fallen into the singularity now, so it has a well defined position, which means that it ...
1answer
140 views
### Why are black hole singularities stable?
The Friedmann equations says that huge matter densities lead to huge expansion rates. In Newtonian gravity, two massive point particles separated by an infinitesimal distance will experience an ...
3answers
341 views
### What happens to matter in extremely high gravity?
Though I am a software engineer, I have a bit of interest in the sciences as well. I was reading about black holes and I wondered if there are any existing research results on how matter gets affected because of ...
7answers
503 views
### Do all black holes have a singularity?
If a large star goes supernova, but not enough mass collapses to form a black hole, it often forms a neutron star. My understanding is that this is the densest object that can exist because of the ...
2answers
1k views
### Tension in a curved charged wire (electrostatic force) - does wire thickness matter?
Consider a conducting wire bent in a circle (alternatively, a perfectly smooth metal ring) with a positive (or negative) electric charge on it. Technically, this shape constitutes a torus. Assume ...
1answer
182 views
### Event horizons without singularities
Someone answered this question by saying that black hole entropy conditions and no-hair theorems are asymptotic in nature -- the equations give an ideal solution which is approached quickly but never ...
2answers
104 views
### Why is matter drawn into a black hole condensed into a single point within the singularity? [duplicate]
Possible Duplicate: Why is matter drawn into a black hole not condensed into a single point within the singularity? When we speak of black holes and their associated singularity, why is ...
1answer
226 views
### Why is matter drawn into a black hole not condensed into a single point within the singularity?
When we speak of black holes and their associated singularity, why is matter drawn into a black hole not condensed into a single point within the singularity?
1answer
958 views
### How can something finite become infinite?
How can the universe become infinite in spatial extent if it started as a singularity? Wouldn't it take infinite time to expand into an infinite universe?
1answer
367 views
### What happens to light that falls into a black hole?
When light enters a black hole, what happens to it? I imagine the photons will either fall into the singularity, or the light will orbit just inside the event horizon indefinitely. (Some background ...
2answers
98 views
### Moving “virtual” singularity?
Imagine two close, really big black holes rapidly spinning around each other. That setup would emit a terrible amount of gravitational waves. My question is, could those gravitational waves, if big ...
3answers
54 views
### What would a rotating black hole look like to a "geo-stationary" observer orbiting the black hole?
A rotating black hole is believed to contain a ring singularity rather than a point. However, if an astronaut is orbiting the black hole at exactly the same angular velocity as the black hole (in ...
2answers
193 views
### A question about singularities in gravity and physics in general
This doubt about singularities in physics has been bugging my mind for a long time. I heard that R. Penrose and S. Hawking have proposed that there could be singularities at black holes and at the time of ...
http://math.stackexchange.com/questions?page=1&sort=newest
# All Questions
0answers
4 views
### Fast way to calculate eigenvectors of a 2x2 matrix using a formula
I found this site: http://www.math.harvard.edu/archive/21b_fall_04/exhibits/2dmatrices/index.html which shows a very fast and simple way to get eigenvectors for a 2x2 matrix. While Harvard is quite ...
0answers
3 views
### Question on a third-order boundary value problems
This is Corollary 2.1 from the article "Positive solutions of third order semipositone boundary value problems": if $$u'''=\lambda \left(\sum_{i=1}^{m} c_i(t)u^{\mu_i}-d(t)\right)+e(t), \quad t\in (0,1)$$ and ...
2answers
6 views
### quadratic equation precalculus
from Stewart, Precalculus, 5th, p56, Q. 79 Find all real solutions of the equation $$\dfrac{x+5}{x-2}=\dfrac{5}{x+2}+\dfrac{28}{x^2-4}$$ my solution ...
0answers
8 views
### Is a Relationship Quadratic?
I have a relationship $y=f(x)$ for which I can obtain data through simulation. I have good reason to suspect that this relationship is quadratic (rather than, say, exponential), and would like to ...
3answers
15 views
### Finding the Fourier Series of $\sin(x)^2\cos(x)^3$
I'm currently struggling to calculate the Fourier series of the given function $$\sin(x)^2 \cos(x)^3$$ Given Euler's identity, I thought that using the exponential approach would be the easiest ...
0answers
24 views
### question on summation?
Please, I need to know the proof that \left(\sum_{k=0}^{\infty }\frac{n^{k+1}}{k+1}\frac{x^k}{k!}\right)\left(\sum_{\ell=0}^{\infty }B_\ell\frac{x^\ell}{\ell!}\right)=\sum_{k=0}^{\infty ...
1answer
36 views
### Last non-zero digit of $n!$
What is the last non-zero digit of $100!$? Is there a method to do the same for $n!$? All I know is that we can find the number of zeros at the end using a certain formula. However, I guess that's of ...
1answer
17 views
### Euler lagrange equation is a constant
I'm working through exercises which require me to find the Euler-Lagrange equation for different functionals. I've just come across a case where the Euler Lagrange equation simplifies to $$1=0.$$ ...
0answers
3 views
### Characteristic equation/expression for addition of n lognormal distributions
I have to find the expression for the addition of n lognormal distributions (lognorm1+lognorm2+lognorm3+.....+lognorm nth) with mean values and error factors known for each lognormal distribution. Please ...
3answers
16 views
### Finding sequence in a set $A$ that tends to $\sup A$
I have been reading the book at http://www.neunhaeuserer.de/short.pdf, and have noticed that in the proof of the intermediate value theorem (Theorem 5.8 in the book), it seems to be quietly assumed ...
0answers
9 views
### Linearizing a series expansion
In the context of statistical mechanics the "classical trace" is defined as $Tr(A e^{-\beta H}) = \int dr^N dp^N A e^{-\beta H}$ where $r^N$ and $p^N$ are phase space variables. So if $\Delta H$ is a ...
0answers
11 views
### Topology of the Segre product vs. the product topology
In general, the product topology on two (quasiprojective) varieties is not the same as the topology of the product variety given by the Segre embedding. This is something I've often seen asserted is ...
0answers
11 views
### A special random subset of uniformly distributed numbers is still uniformly distributed?
Assume that I have a value range [1,1000]. Goal: I want to have 10 numbers randomly sampled from [1,1000]. case1: ...
1answer
21 views
### Combinations, Expected Values and Random Variables
A community consists of $100$ married couples ($200$ people). If during a given year, $50$ of the members of the community die, what is the expected number of marriages that remain intact? Assume ...
0answers
34 views
### What do Hilbert-style systems actually do? What is deduction? Why do we need it? Amazing stuff!
So, the problem is here: We can use the following axioms: \begin{align} &A\to(B\to A)&\tag{A1}\\ &[A\to(B\to C)]\to[(A\to B)\to(A\to C)]&\tag{A2}\\ &(\lnot A\to\lnot B)\to(B\to ...
0answers
11 views
### Showing it is a joint probability density function
I have two random variables $X,Y$ with a joint density function $f_{X,Y}(x,y)=x+y$ if $(x,y)\in[0,1]\times [0,1]$ and otherwise $f_{X,Y}(x,y)=0$ I want to analyze this case in different cases, first ...
1answer
24 views
### calculate kernels of matrices with angles
So my professor gave me this question: I have to find the basis of the eigenvalues of this matrix \begin{pmatrix} \cos(q) & \sin(q)\\ \sin(q) & -\cos(q)\\ \end{pmatrix} so I calculate ...
3answers
39 views
### If $x \leq g(x) \leq x^2-x+1$ where $x \in [0,2]$, can we say that $g(x)$ is continuous at $x=1$?
If $x \leq g(x) \leq x^2-x+1$ where $x \in [0,2]$, can we say that $g(x)$ is continuous at $x=1$? Is $g(x)$ continuous on $[0,2]$?
2answers
38 views
### For every prime of the form $2^{4n}+1$, 7 is a primitive root.
What I want to show is the following statement. For every prime of the form $2^{4n}+1$, 7 is a primitive root. What I get is that $$7^{2^{k}}\equiv1\pmod{p}$$ ...
0answers
44 views
### For what $p$ is $x^p$ Lebesgue Integrable?
I am revising for an exam on Monday, and any help with the following question would be greatly appreciated. If $f$ is a function on $(0, \infty)$ taking values in $\mathbb R$, defined by $f(x)=x^p$ ($p$ is a real ...
0answers
6 views
### Polynomials, integrals convergence
Let $P_n(x)= \frac{x^n(bx -a)^n}{n!}$, with $a,b,n \in \mathbb{N}$. Prove that $\int_0^{\pi}P_n(x) \sin x\,dx \rightarrow 0$ and $\int_0^r P_n(x)e^x\,dx \rightarrow 0$ $(n ...
0answers
21 views
### The smallest nontrivial conjugacy class in $S_n$
Find the smallest nontrivial conjugacy class in $S_n$. For small $n$, the answer is not hard to find: \begin{array}{cc} n & \text{smallest nontrivial class(es)} \\ 1 & \text{none} \\ 2 ...
0answers
13 views
### Coons patches using Matlab
I have studied Bilinearly Blended Coons patches, practically they are Coons Patches with blending funcions which are linear. I want to represent the following situation using matlab: 2 patches given ...
1answer
25 views
### Way to have intuition about the shape of a curve?
Past the usual memorized curves like $y=\sin(x), y=|x|, y=1/x, y=\ln(x), \ldots,$ is there a way to have an intuition about the shape of a curve from looking at an arbitrary function/term? (that is, ...
0answers
13 views
### Abelian Elliptic Surfaces
By abelian surface we mean a 2-dimensional algebraic complex torus. Thus $$S=\Bbb{C}^2/\Gamma$$ where $\Gamma$ is a rank $4$ lattice in $\Bbb{C}^2$ and such that $S$ is algebraic. It has trivial ...
1answer
15 views
### Sequence of continuous functions, integral, series convergence
Let $f_k$ be a sequence of continuous functions on $[0,1]$ such that $\int _0 ^1 f_k(x)x^ndx = \int _0^1 x^{n+k} dx$ for all $n \in \mathbb{N}$. Is $\sum _{k=1} ^{\infty}f_k(x)$ convergent? Could ...
2answers
22 views
### What is the difference between a term, constant and variable in first-order logic languages?
In the text, the author says that the language contains parentheses, sentential connectives, n-place functions, n-place predicates, the equality sign =, terms, constants and variables. I have two ...
0answers
42 views
### I want help with $4\times 4$ symmetric matrix
I have a $4\times 4$ matrix $$A=\left(\begin{array}{cccc}8 & 11 & 4 & 3\\11 & 12 & 4 & 7\\4 & 4 & 7 & 12\\3 & 7 & 12 & 17\end{array}\right).$$ I want to ...
0answers
13 views
### Generalized eigenspaces of a compact operator are finite dimensional
Let $T : H\rightarrow H$ be a compact operator on a Hilbert space $H$. Say that $\lambda \in \mathbb C$ is a generalized eigenvalue of $T$ if there is some $n \geq 1$ such that $(\lambda - T)^n$ is ...
0answers
23 views
### Right Triangles and Lagrange Multipliers
Suppose that you have a right triangle $a^2+b^2=c^2$ with integral sides. Given a perimeter $p=a+b+c$, how can you use Lagrange multipliers to determine the maximum length of $a$?
1answer
19 views
### Is the number of generators of a group the number of different generators that one finds if one counts over every generating set of the group?
Consider the additive group of integers as an example, as mentioned at the bottom of the Wikipedia article. There are two generating sets that are mentioned; the set consisting of the number 1, {1}, ...
5answers
68 views
### Limit as $x$ approaches $1$ from the right of $\frac{1}{\ln x}-\frac{1}{x-1}$
$$\lim_{x\rightarrow 1^+}\;\frac{1}{\ln x}-\frac{1}{x-1}$$ So I would just like to know how to begin to solve this limit, or what topic does this problem fall under so that I can search for ...
0answers
27 views
### What is the limit of the multidimensional integral?
What is the limit of the integral $$\int_{[0,1]^n}\frac{x_1^5+x_2^5 + \cdots +x_n^5}{x_1^4+x_2^4 + \cdots +x_n^4} \, dx_1 \, dx_2 \cdots dx_n$$ as $n \to \infty ?$
0answers
13 views
### Information about a particular polynomial
This question is related to this post Consider the following polynomial $$\alpha^3 xyzw + \alpha^2(1-\alpha) yzw + \alpha(1-\alpha) zw + (1-\alpha)w.$$ One can of course generalize this to $n$ ...
0answers
16 views
### Is the Cauchy principal value “invariant” under change of variables?
Let $f \in C^{\gamma}_c(\mathbb{R})$. Let $K:\mathbb{R}^n \backslash \{\vec{0}\} \rightarrow \mathbb{R}^n$ be a singular integral kernel with the following properties: 1) K smooth everywhere except ...
0answers
18 views
### What is the broader name for fibonacci and lucas sequences?
Fibonacci and Lucas sequences are very similar in their definition. However, I could just as easily make another series with a similar definition; an example would be: $$x_0 = 53$$ $$x_1 = 62$$ x_n ...
0answers
59 views
### High school contest question
Some work on it reveals the possibility of using the gamma function. Is there any easy way to compute it? $$\lim_{n\to\infty}\left(\frac{1}{n!} \int_0^e \log^n x \ dx\right)^n$$
2answers
30 views
### Expected number of pieces of a chessboard
If $n$ squares are randomly removed from a $p \times q$ chessboard, what will be the expected number of pieces the chessboard is divided into? Can anybody please suggest how I can approach the ...
1answer
35 views
### A binary quadratic form: $nx^2-y^2=2$
For which $n\in\mathbb{N}$ are there $(x, y)\in\mathbb{N}^2$ such that $nx^2-y^2=2$ ?
0answers
12 views
### count the number of connected induced subgraphs in a graph with bounded degree
Let $G=(V,E)$ be a graph where the maximum degree of a vertex is 4. I've been asked to find an efficient algorithm for counting how many connected induced subgraphs are in $G$. What I have so far is a ...
1answer
20 views
### Differentiation problem of power to infinity by using log property
Problem: Find $\frac{dy}{dx}$ if $y =\left(\sqrt{x}\right)^{x^{x^{x^{\dots}}}}$ Let ${x^{x^{x^{\dots}}}} =t. (i)$ Taking $\log$ on both sides $\implies {x^{x^{x^{\dots}}}}\log x = \log t$ This ...
0answers
26 views
### Use Fourier Series Techniques..
$x'' + 3x= 7$ given conditions $x'(0)=x'(5)=0$. I checked the list and I went through three books. I am doing intro to differential equations. I just don't know how to get the extensions... I was ...
2answers
26 views
### Is there any specific terminology to refer to an initial sequence of a sequence?
Lets say you have a sequence $S = (0, 1, 2, 3, 4, 5, 6, 7, 8)$ And another sequence $T = (0, 1, 2, 3)$ Is there any specific mathematical term that defines the relationship between $S$ and $T$, ...
2answers
49 views
### How does a calculator calculate the sine, cosine, tangent using just a number?
Sine Θ = opposite/hypotenuse Cosine Θ = adjacent/hypotenuse Tangent Θ = opposite/adjacent So in order to calculate the sine or the cosine or the tangent I need to ...
2answers
38 views
### Find Aut$(G)$, Inn$(G)$ and $\dfrac{\text{Aut}(G)}{\text{Inn}(G)}$ for $G = \mathbb{Z}_2 \times \mathbb{Z}_2$
Find Aut$(G)$, Inn$(G)$ and $\dfrac{\text{Aut}(G)}{\text{Inn}(G)}$ for $G = \mathbb{Z}_2 \times \mathbb{Z}_2$. Here is what I have here: Aut$(G)$ consists of 6 bijective functions, which maps $G$ to ...
1answer
29 views
### Flow of $rot \overrightarrow{F}$
We've got vector field: $\overrightarrow{F} = \begin{bmatrix} yz\\x^3z\\e^z\end{bmatrix}$. I want to compute flow of $rot\overrightarrow{F}$($=curl \overrightarrow{F}$) through the area of the side ...
0answers
8 views
### Asymptotic recurrences?
$$T(n) = 2T(n/2) + \Theta(n), n > 1$$ $$T(n) = \Theta (1), n \le 1$$ $$G(n) = G(\lfloor n/2 \rfloor) + G (\lceil n/2 \rceil) + \Theta(n), n > 1$$ $$G(n) = \Theta (1), n \le 1$$ Prove $T(n)$ ...
0answers
15 views
### Bernstein type inequalities. Is there a standard list?
Suppose I have a sequence of iid random variables $X_i\geq 0$ with mean $\mu$ and $\mathbb E \left(e^{tX_i}\right) = G(t)$. Set $$S_n = \sum_{i=1}^{n} X_i.$$ For the purpose of this question the ...
1answer
26 views
### Proving integrability in integration by parts in Rudin's text
Integration by parts, as stated in W. Rudin's Principles of Mathematical Analysis, Theorem 6.22, goes as follows: Suppose F and G are differentiable functions in $[a,b]$, $F'=f\in \mathcal{R}$, ...
3answers
67 views
### Definition(s) of rational numbers
The definitions of rational numbers are somewhat confusing for me. The definition of rational numbers on wikipedia and most other sites is: In mathematics, a rational number is any number that ...
http://quant.stackexchange.com/questions/3492/is-there-a-closed-form-solution-for-the-partial-autocorrelation-function-of-a-ma/3493
# Is there a closed-form solution for the partial autocorrelation function of a Markov regime-switching process?
Consider a Markov Regime-switching process $X_{t}$ with $k$ regimes represented by $s_{t}$ such that
$$X_{t}=\mu\left(s_{t}\right)+\epsilon_{t}$$
and
$$\epsilon_{t}\sim N\left(0,\sigma^{2}\left(s_{t}\right)\right)$$
with the probability of being in state $s_{t}$ represented by $p_{t}=Qp_{t-1}$, where $p_{t}$ is a $k \times 1$ vector containing the state probabilities and $Q$ is a $k \times k$ transition matrix.
Each state separately would be considered i.i.d. normal, but the regime-switching process exhibits autocorrelation. Is it possible to derive a closed-form solution for the partial autocorrelation function of $X_{t}$? If so, what is it?
-
## 1 Answer
Apparently yes (I haven't verified the math but have no reason to doubt it). For this simple case you can find a closed form in the following paper:
The closed form is given in Section 4.4 of the paper, but the whole thing is worth reading as it clearly shows the main properties of these models.
You can also note that, contrary to your definition, the observations in each state don't need to be IID (you can have other structures such as ARMA). The book by Kim and Nelson (State-Space Models with Regime-Switching) provides a lot of information on this class of models.
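To see the induced serial dependence numerically, here is a minimal simulation sketch of a two-state special case (my own illustration, not taken from the paper or from Kim and Nelson; the parameter values are arbitrary, and the code uses a row-stochastic convention for the transition matrix, i.e. the transpose of the question's $Q$):

```python
# Minimal sketch: simulate X_t = mu(s_t) + eps_t with a 2-state Markov
# chain s_t, then estimate the sample autocorrelation of X_t.
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[0.95, 0.05],      # Q[i, j] = P(s_{t+1} = j | s_t = i)
              [0.10, 0.90]])
mu    = np.array([-1.0, 2.0])    # state-dependent means
sigma = np.array([ 0.5, 1.5])    # state-dependent volatilities

T, s = 100_000, 0
X = np.empty(T)
for t in range(T):
    X[t] = mu[s] + sigma[s] * rng.standard_normal()
    s = rng.choice(2, p=Q[s])    # draw the next regime

def sample_acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for lag in (1, 2, 5, 10):
    print(f"lag {lag:2d}: sample ACF ~ {sample_acf(X, lag):+.3f}")
```

Each state on its own is i.i.d. normal, yet the mixture shows positive autocorrelation decaying roughly geometrically in the lag (at a rate given by the second eigenvalue of $Q$, here $0.85$); a sample PACF (e.g. statsmodels' `pacf`) can be estimated from the same simulated series.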
-
I'm aware of the point about ARMA models, but wanted to get to a simpler case just to understand it better. Let me take a look at that paper. – John May 14 '12 at 21:01
+1 for simplicity. Are you planning on applying this model to financial time series prediction? – Zarbouzou May 14 '12 at 21:13
I've played around with MRS models a little in econometric and financial applications, but I wasn't positive on this property. – John May 14 '12 at 21:26
@John: Any out-of-sample predictive ability? I haven't found them very useful in the past. – Zarbouzou May 15 '12 at 9:57
– John May 15 '12 at 15:45
http://math.stackexchange.com/questions/241502/find-inverse-function
# Find inverse function
Is it possible to get the inverse of all functions? For example, can we calculate the inverse of $y=x^3+x$?
-
– Dennis Gulko Nov 20 '12 at 19:00
## 2 Answers
In general: no!
You need a function to be one-to-one for there to be any hope of an inverse. Take for example the function $f(x) = x^2.$ This function is two-to-one: $f(\pm k) = k^2.$ Let's assume we're working over the real numbers. What is the inverse of $-1$? Well, there isn't one. What is the inverse of $4$? Well, it could be either $-2$ or $+2$ because $(-2)^2 = 2^2 = 4.$ (The inverse is not well defined.)
In general: if a function is one-to-one then there is a well defined inverse from the range of the function back to the domain.
In your example, you have $f(x) = x^3 + x.$ This function is one-to-one because $f'(x) = 3x^2 + 1 > 0$ for all $x \in \mathbb{R}$, so $f$ is strictly increasing. The range of $f$ is the whole of the real numbers, so there will be a well defined inverse $f^{-1} : \mathbb{R} \to \mathbb{R}.$ However, it is difficult to actually write the inverse down. It will be some complicated expression, although we can say for certain that it exists.
-
The function $f$ with $f(x)=x^3+x$ has an inverse that can be stated with an explicit formula that only uses elementary arithmetic and radicals. Here is a formula for that example's inverse function. The markup isn't working because of the caret in the url. Copy and paste the whole thing:
http://www.wolframalpha.com/input/?i=solve+x%3Dy%2By^3+for+y%2C+y+real
But that is special to the case where you have an invertible cubic polynomial. In general, when $f$ is an invertible function defined on some subset of $\mathbb{R}$, the only "formula" that you will have for its inverse is notational: $f^{-1}(x)$.
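For the record, here is that closed form written out via Cardano's formula (my own transcription, so worth checking against the Wolfram Alpha output above): $y^3 + y - x = 0$ is a depressed cubic with $\frac{x^2}{4}+\frac{1}{27} > 0$, so it has exactly one real root,
$$f^{-1}(x)\ =\ \sqrt[3]{\frac{x}{2} + \sqrt{\frac{x^2}{4} + \frac{1}{27}}}\ +\ \sqrt[3]{\frac{x}{2} - \sqrt{\frac{x^2}{4} + \frac{1}{27}}}.$$
(Sanity check: $x = 2$ gives $f^{-1}(2) = 1$, and indeed $1^3 + 1 = 2$.)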
-
What is the inverse of $f(x)$ in this example? – varahram Nov 20 '12 at 19:05
http://mathoverflow.net/questions/12892?sort=newest
# Why can’t subvarieties separate?
I'm posting my answer to this question as its own question:
Let $V$ be an irreducible projective variety over $\mathbb{C}$. Let $U$ be a Zariski open set in $V$. I'll use $V(\mathbb{C})$ and $U(\mathbb{C})$ to mean $V$ and $U$ equipped with their Euclidean topologies, respectively.
What is the easiest proof that $U(\mathbb{C})$ is connected?
Here's the proof I know: Suppose that $U(\mathbb{C})$ can be written as a disjoint union of two open sets $A$ and $B$. Since the complement of $U$ in $V$ is a variety of smaller dimension than $V$, a theorem of Remmert and Stein implies that the closures $\overline{A}$ and $\overline{B}$ of $A$ and $B$ in $V(\mathbb{C})$ are projective analytic sets. By Chow's theorem that projective analytic sets are algebraic, $\overline{A}$ and $\overline{B}$ are subvarieties of $V$. Since they're proper, $V$ is not irreducible, and we have a contradiction.
I guess I'm really asking for the most elementary argument, as I think the above argument is nice intuitively. A reference would be fine.
(To avoid going through the same discussion in the comments that happened at the other question, let me point out that I am aware that irreducible varieties are connected and that $U$ is itself a variety in the sense that it is locally affine. It is just not obvious to me that it is irreducible (without appealing to the above argument).)
-
It seems to me that your question is equivalent to asking why a variety over the complex numbers is connected in the Zariski topology iff its complex points are connected in the analytic topology: am I missing a nuance here? For that, Shafarevich's Basic Algebraic Geometry has a proof, and in general proofs in this book tend to be both intuitive and elementary (albeit sometimes with details left out). – Pete L. Clark Jan 25 2010 at 3:23
I don't think that's the same question. The intuition you get from thinking about manifolds is supposed to be that subvarieties have real codimension two and so shouldn't separate. The issue is that the singular locus of $V$ could separate. – Richard Kent Jan 25 2010 at 3:33
(Also, you can be reducible and connected in the analytic topology, so that direction doesn't go.) – Richard Kent Jan 25 2010 at 3:35
@Richard: a Zariski-open subset of an irreducible topological space is irreducible (or empty!), so I think my answer below does apply. – Pete L. Clark Jan 25 2010 at 3:38
## 1 Answer
[This has been completely rewritten at the request of Richard Kent.]
Let $X$ be an irreducible topological space and $U$ a non-empty open subset of $X$. Then $U$ is also irreducible -- see e.g. Proposition 141 on page 88 here. (Surely it's also in Hartshorne and lots of other places, but one of the advantages of typing up your own notes is to be able to easily point to a reference because you know exactly where it is.)
Thus the question reduces to the fact that if $X_{/\mathbb{C}}$ is an irreducible complex variety, then $X(\mathbb{C})$ with its "Euclidean topology" is connected. For this, see e.g. Section VII.2 of Shafarevich's Basic Algebraic Geometry II. (Again, there are other places, but I think his discussion is especially good.) He gives two different proofs, one of which is a simple induction on the dimension.
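For readers who want the topological step spelled out, here is the short argument (a standard proof paraphrased by me, not quoted from the cited notes): a space is irreducible if and only if any two nonempty open subsets of it intersect. So if $U \subseteq X$ is nonempty and open, and $V_1, V_2$ are nonempty open subsets of $U$, then
$$V_1,\ V_2\ \text{open in } U\ \text{open in } X\ \Longrightarrow\ V_1,\ V_2\ \text{open in } X\ \Longrightarrow\ V_1 \cap V_2 \neq \emptyset$$
by irreducibility of $X$; hence $U$ is irreducible.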
-
As I mentioned in the question, I am aware of this. The real question is why is the smooth locus connected in the analytic topology. – Richard Kent Jan 25 2010 at 3:37
Sorry, I'm afraid I'm still not understanding you. The smooth locus is Zariski open, so it's irreducible, so it's analytically connected by the theorem referred to above. – Pete L. Clark Jan 25 2010 at 3:41
I'm sorry. I thought it was clear in the question. I do not see why a Zariski open set in an irreducible variety is irreducible. – Richard Kent Jan 25 2010 at 3:43
This is actually a matter of general topology. See e.g. Proposition 141 on page 88 of math.uga.edu/~pete/integral.pdf. It relies on the previous Proposition, whose proof is left as an exercise, but I believe you will find it to be straightforward. – Pete L. Clark Jan 25 2010 at 3:53
Pete, thank you very much for bearing with me and giving me what I was looking for. Would you mind editing your answer to point out this point rather than the Shafarevich reference, which I knew already? Then I'll gladly accept your answer. – Richard Kent Jan 25 2010 at 4:03
http://mathforum.org/mathimages/index.php?title=The_Monty_Hall_Problem&diff=13052&oldid=13051
# The Monty Hall Problem
### From Math Images
(Difference between revisions. The two revisions differ only at line 154; the unchanged context and the edited sentence are shown below.)

Line 154, unchanged context: Marilyn quotes cognitive psychologist Massimo Piattelli-Palmarini in her own book saying "... no other statistical puzzle comes so close to fooling all the people all the time" and "that even Nobel physicists systematically give the wrong answer, and that they insist on it, and they are ready to berate in print those who propose the right answer." When the Monty Hall problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine claiming the published solution was wrong.

Line 154, revised (this revision added "of Georgetown University"): One letter written to vos Savant by Dr. E. Ray Bobo of Georgetown University was especially critical of Marilyn's solution: "You are utterly incorrect about the game show question, and I hope this controversy will call some public attention to the serious national crisis in mathematical education. If you can admit your error, you will have contributed constructively toward the solution to a deplorable situation. How many irate mathematicians are needed to get you to change your mind?"
## Revision as of 11:46, 23 June 2010
Let's Make a Deal
Field: Algebra
The Monty Hall problem is a famous probability puzzle based on the 1960's game show Let's Make a Deal.
The show's host, Monty Hall, asks a contestant to pick one of three doors. One door leads to a brand new car, but the other two lead to goats. Once the contestant has picked a door, Monty opens one of the two remaining doors. He is careful never to open the door hiding the car. After Monty has opened one of these other two doors, he offers the contestant the chance to switch doors. Is it to his advantage to stay with his original choice or switch to the other unopened door?
# Basic Description
We can first imagine the case where the car is behind door 1.
In the diagram below, we can see what prize the contestant will win if he always stays with his initial pick after Monty opens a door. If the contestant uses the strategy of always staying, he will win only if he originally picked door 1.
If the contestant always switches doors when Monty shows him a goat, then he will win if he originally picked door 2 or door 3.
A player who stays with the initial choice wins in only one out of three of these equally likely possibilities, while a player who switches wins in two out of three. Since we know that the car is equally likely to be behind each of the three doors, we can generalize our strategy for the case where the car is behind door 1 to any placement of the car. The probability of winning by staying with the initial choice is 1/3, while the probability of winning by switching is 2/3. The contestant's best strategy is to always switch doors so he can drive home happy and goat-free.
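To make the counting concrete, here is a short Python sketch (my own illustration, not from the original page) that enumerates every combination of car placement and initial pick and tallies wins for the two strategies:

```python
from fractions import Fraction

doors = [1, 2, 3]
stay_wins = switch_wins = total = 0

for car in doors:              # where the car actually is
    for pick in doors:         # the contestant's initial choice
        # Monty opens a door that is neither the pick nor the car,
        # so switching wins exactly when the initial pick was wrong.
        total += 1
        stay_wins += (pick == car)
        switch_wins += (pick != car)

print("P(win | stay)   =", Fraction(stay_wins, total))    # 1/3
print("P(win | switch) =", Fraction(switch_wins, total))  # 2/3
```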
# Aids to Comprehension
The Monty Hall problem has the distinction of being one of the rare math problems that has gained recognition on the front page of the Sunday New York Times. On July 21, 1991, the Times published a story that explained a heated argument between a Parade columnist, Marilyn vos Savant, and numerous angry readers. Many of these readers held distinguished degrees in mathematics, and the problem seemed far too elementary to warrant such difficulty in solving.
Further explanation of the readers' debate with vos Savant can be found in the Why It's Interesting section. However, if you aren't completely convinced that switching doors is the best strategy, be aware that the Monty Hall problem has been called "math's most contentious brain teaser." The following explanations are alternative approaches to the problem that may help clarify that the best strategy is, in fact, switching doors.
#### Why the Probability is not 1/2
The most common misconception is that the odds of winning are 50-50 no matter which door a contestant chooses. Most people assume that each door is equally likely to contain the car since the probability was originally distributed evenly between the three doors. They believe that they have no reason to prefer one door, so it does not matter whether they switch or stick with their original choice.
This reasoning seems logical until we realize that the two doors cannot be equally likely to hide the car. The critical fact is that Monty does not choose a door to open at random, so the contestant does have a reason to prefer one unopened door over the other. Marilyn defended her answer in a subsequent column, addressing this point specifically.
"Suppose we pause after Monty has revealed a goat and a UFO settles down onto the stage and a little green woman emerges. The host asks her to point to one of the two unopened doors. Then the chances that she'll randomly choose the one with the prize are 1/2. But that's because she lacks the advantage the original contestant had: the help of the host."
"When you first choose door #1 from three, there's a 1/3 chance that the prize is behind that one and a 2/3 chance that it's behind one of the others. But then the host steps in and gives you a clue. If the prize is behind #2, the host shows you #3, and if the prize is behind #3, the host shows you #2. So when you switch, you win if the prize is behind #2 or #3. You win either way! But if you don't switch, you win only if the prize is behind door #1," Marilyn explained.
This is true because when Monty opens a door, he reduces the probability that it hides the car to 0. When the contestant makes his initial pick, there is a 1/3 chance that he picked the car and a 2/3 chance that one of the other two doors hides it. When Monty shows him a goat behind one of those two doors, the entire 2/3 chance falls on the one remaining unopened door, because the probability must be 0 for the door the host opened.
#### An Extreme Case of the Problem
Imagine that you are on Let's Make a Deal and there are now 1 million doors. You choose your door, then Monty opens all but one of the remaining doors, showing you that they hide goats. It's clear that your first choice is unlikely to have been the right one out of 1 million doors.
Since you know that the car must be behind one of the two unopened doors and it is very unlikely to be behind yours, it must almost certainly be behind the other door. In fact, 999,999 times out of 1,000,000 the other door will hide the prize, because 999,999 times out of 1,000,000 the player's first pick was a door hiding a goat. Switching to the other door is the best strategy.
#### Simulation
Using a simulation is another useful way to show that the probability of winning by switching is 2/3. A simulation using playing cards allows us to perform multiple rounds of the game easily.
One simulation proposed by vos Savant herself requires only two participants, a player and a host. The host holds three cards: one ace that represents the prize and two lower cards that represent the goats. The host holds up the three cards so only he can see their values. The player picks a card, and it is placed aside so that he still cannot see its value. The host then reveals one of the remaining low cards, which represents a goat. If both low cards remain in his hand, he must choose between them.
If the card remaining in the host's hand is an ace, then the round is recorded as one where the player would have won by switching. Conversely, if the host is holding a low card, the round is recorded as one where staying would have won. Performing this simulation repeatedly will reveal that a player who switches wins the prize approximately 2/3 of the time.
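A computer can also play the part of the cards. The Monte Carlo sketch below is one possible implementation (my own, not part of vos Savant's proposal); with the default `n_doors=3` it mirrors the card game, and passing `n_doors=1_000_000` reproduces the extreme case discussed above:

```python
import random

def play(n_doors=3, switch=True):
    car = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    # Monty opens every other door except one, never revealing the car,
    # so the single unopened alternative holds the car iff the pick was wrong.
    if switch:
        return pick != car
    return pick == car

trials = 100_000
for strategy in (False, True):
    wins = sum(play(switch=strategy) for _ in range(trials))
    name = "switch" if strategy else "stay"
    print(f"{name:6s}: {wins / trials:.3f}")   # ~0.667 vs ~0.333
```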
#### Play the Game
http://www.nytimes.com/2008/04/08/science/08monty.html?_r=1
# A More Mathematical Explanation
The following explanation uses Bayes' Theorem to show how Monty revealing a goat changes the game.
Let the door picked by the contestant be called door $a$ and the other two doors be called $b$ and $c$. Also, let $V_a$, $V_b$, and $V_c$ be the events that the car is actually behind door $a$, $b$, and $c$ respectively. Finally, let $O_b$ be the event that Monty Hall opens door $b$.
Then the problem can be restated as follows: Is $P(V_a|O_b) = P(V_c|O_b)$? This is essentially asking whether the probability that the car is behind one unopened door is the same as the probability that the car is behind the other unopened door.
Using Bayes' Theorem, we know that
$P(V_a|O_b)=\frac{P(V_a)*P(O_b|V_a)}{P(O_b)}$
$P(V_c|O_b)=\frac{P(V_c)*P(O_b|V_c)}{P(O_b)}$
Also, we can assume that the prize is randomly placed behind the doors, so
$P(V_a) = P(V_b) = P(V_c) = \frac{1}{3}$
Then we can calculate the conditional probabilities of the event $O_b$ given each placement of the car, which we can then combine to find the total probability of $O_b$.
First, we can calculate the conditional probability that Monty opens door $b$ given each possible location of the car.
$P(O_b|V_a) = 1/2$ because if the prize is behind a, Monty can open either b or c.
$P(O_b|V_b) = 0$ because if the prize is behind door b, Monty can't open door b.
$P(O_b|V_c) = 1$ because if the prize is behind door c, Monty can only open door b.
Each of these probabilities is conditioned on the prize being hidden behind a specific door, and the events $V_a$, $V_b$, and $V_c$ are mutually exclusive since the car can only be hidden behind one door. As a result, we know that $P(O_b)$ is equal to
$P(O_b) = P(O_b \cap V_a) + P(O_b \cap V_b) + P(O_b \cap V_c)$
Using the equation for the probability of non-independent events, we can say
$P(O_b)= P(V_a)P(O_b|V_a) + P(V_b)P(O_b|V_b) +P(V_c)P(O_b|V_c)$
$= \frac{1}{3} * \frac{1}{2} + \frac{1}{3} * 0 + \frac{1}{3} * 1$
$= \frac{1}{2}$
Then, we can use $P(O_b)$, $P(O_b|V_a)$, and $P(V_a)$ to calculate $P(V_a|O_b)$.
$P(V_a|O_b) = \frac {P(V_a)*P(O_b|V_a)}{P(O_b)}$
$= \frac {\frac{1}{3} * \frac{1}{2}} {\frac{1}{2}}$
$= \frac {1}{3}$
Similarly,
$P(V_c|O_b) = \frac {P(V_c)*P(O_b|V_c)}{P(O_b)}$
$= \frac {\frac{1}{3} * 1} {\frac{1}{2}}$
$= \frac {2}{3}$
The probability of $V_c$ (the event that the car is hidden behind door $c$, the door that Monty hasn't opened and the contestant hasn't selected) is not equal to the probability of $V_a$ (the event that the car is hidden behind the contestant's original choice). The contestant is offered an opportunity to switch to door $c$. We have calculated that the probability of winning by switching to door $c$ is 2/3, while the probability of winning with the contestant's original choice, door $a$, is 1/3. If the contestant switches, he doubles his chance of winning.
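The arithmetic above is easy to check mechanically. A minimal Python sketch, using exact fractions and assuming only the priors and likelihoods stated above:

```python
from fractions import Fraction

prior = Fraction(1, 3)                               # P(V_a) = P(V_b) = P(V_c)
likelihood = {"a": Fraction(1, 2), "b": 0, "c": 1}   # P(O_b | V_x)

p_Ob = sum(prior * likelihood[d] for d in "abc")     # total probability, 1/2
for d in ("a", "c"):
    posterior = prior * likelihood[d] / p_Ob         # Bayes' theorem
    print(f"P(V_{d} | O_b) = {posterior}")           # 1/3 and 2/3
```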
# Why It's Interesting
Variations of the problem have been popular brain teasers since the 19th century, but the Let's Make a Deal version is the most widely known.
#### History of the Problem
The earliest of several probability puzzles related to the Monty Hall problem is Bertrand's box paradox, posed by Joseph Bertrand in 1889. In Bertrand's puzzle there are three boxes: a box containing two gold coins, a box with two silver coins, and a box with one of each. The player chooses a box at random and draws a coin without looking. The coin happens to be gold. What is the probability that the other coin in that box is gold as well? As in the Monty Hall problem, the intuitive answer is 1/2, but the actual probability is 2/3: of the three gold coins the player could have drawn, two sit in the box with two gold coins.
#### Ask Marilyn: A Story of Misguided Hatemail
The question was originally posed in 1990 by a reader of "Ask Marilyn", a column in Parade magazine. Marilyn's correct solution, that switching doors is the best strategy, caused an uproar among mathematicians. While most people responded that switching should not matter, the contestant's chances of winning in fact double if he switches doors. Part of the controversy, however, was caused by the lack of agreement on the statement of the problem itself.
Most statements of the problem, including the one in Marilyn's column, do not match the rules of the actual game show. This was a source of great confusion when the problem was first presented. The main ambiguities in the problem arise from the fact that it does not fully specify the host's behavior.
For example, imagine a host who wasn't required to always reveal a goat. The host's strategy could be to open a door only when the contestant has selected the correct door initially. This way, the host could try to tempt the contestant to switch and lose.
When first presented with the Monty Hall problem, an overwhelming majority of people assume that switching does not change the probability of winning the car, even when the problem is stated in a way that removes all sources of ambiguity. An article by Burns and Wieth cited various studies that document this difficulty specifically. These articles reported 13 studies using standard versions of the Monty Hall dilemma and found that most people do not switch doors: switch rates ranged from 9% to 23% with a mean of 14.5%, even when the problem was stated explicitly.
This consistency is especially remarkable given that these studies include a range of different wordings, methods of presentations, languages, and cultures.
Marilyn quotes cognitive psychologist Massimo Piattelli-Palmarini in her own book saying "... no other statistical puzzle comes so close to fooling all the people all the time" and "that even Nobel physicists systematically give the wrong answer, and that they insist on it, and they are ready to berate in print those who propose the right answer." When the Monty Hall problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine claiming the published solution was wrong.
One letter written to vos Savant by Dr. E. Ray Bobo of Georgetown University was especially critical of Marilyn's solution: "You are utterly incorrect about the game show question, and I hope this controversy will call some public attention to the serious national crisis in mathematical education. If you can admit your error, you will have contributed constructively toward the solution to a deplorable situation. How many irate mathematicians are needed to get you to change your mind?"
#### Monty and Monkeys
A recent article published in The New York Times uncovered an interesting relationship between the Monty Hall problem and a study on cognitive dissonance using monkeys. If the calculations of Yale economist M. Keith Chen are correct, then some of the most famous experiments in psychology might be flawed. Chen believes the researchers drew conclusions based on a natural inclination to evaluate probability incorrectly.
The most famous experiment in question is a 1956 study on rationalizing choices. The researchers studied which M&M colors monkeys preferred. After identifying a few colors of M&Ms that were approximately equally favored by a monkey (say, red, blue, and yellow), the researchers gave the monkey a choice between two of the colors.
In one case, imagine that a monkey chose yellow over blue. Then, the monkey would be offered the choice between blue and red M&Ms. Researchers noted that about two-thirds of the time the monkey would choose red. The 1956 study claimed that these results reinforced the theory of rationalization: once we reject something, we are convinced that we never liked it anyway.
Dr. Chen reexamined the experimental procedure and says that the monkey's rejection of blue might be attributable to statistics alone. Chen notes that there must be some slight difference in preference among the original red, blue, and yellow. If this is the case, then the monkey's choice of yellow over blue wasn't arbitrary. Like Monty Hall's decision to open a door that hid a goat, the monkey's choice between yellow and blue discloses additional information. In fact, when a monkey favors yellow over blue, there is a two-thirds chance that it also started off with a preference for red over blue, which would explain why the monkeys chose red 2/3 of the time in the Yale experiment.
To see why this is true, consider Chen's conjecture that monkeys must have some slight preference among the three colors they are being offered. The table below shows all the possible ways that a monkey could rank its M&Ms.
We can see that among the rankings where the monkey preferred yellow over blue, the monkey preferred red over blue in 2/3 of them.
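Chen's count can be verified by brute force. A minimal sketch, assuming each of the six strict orderings of the three colors is equally likely:

```python
from itertools import permutations

colors = ("red", "blue", "yellow")
rankings = list(permutations(colors))   # all 6 strict orderings,
                                        # most preferred color first

def prefers(rank, x, y):
    return rank.index(x) < rank.index(y)

chose_yellow = [r for r in rankings if prefers(r, "yellow", "blue")]
red_over_blue = [r for r in chose_yellow if prefers(r, "red", "blue")]
print(len(red_over_blue), "/", len(chose_yellow))   # 2 / 3
```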
Chen agrees that the study may still have discovered useful information about preferences, but he doesn't believe it has been measured correctly yet: "The whole literature suffers from this basic problem of acting as if Monty's choice means nothing."
The Monty Hall problem, the monkey study, and other problems involving unequal distributions of probability are notoriously difficult for people to solve correctly. Even academic studies may be littered with mistakes caused by difficulty interpreting statistics.
#### 21
The 2008 movie 21 increased public awareness of the Monty Hall problem. 21 opens with an M.I.T. math professor using the Monty Hall problem to illustrate probability to his students. The problem is included in the movie to show the intelligence of the main character, who is immediately able to solve such a notoriously difficult puzzle.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9590229392051697, "perplexity_flag": "middle"}
|
http://nrich.maths.org/80
|
# It's All about 64
##### Stage: 2 Challenge Level:
I was with a class of children in Bromley near to London, when I suddenly came up with this idea, and I put it to the youngsters at the school. They did a lot of work on it so I thought I'd share it with you.
It's all about $64$!
Lots of you know that $64$ is $8$ times $8$. So if you were asked to write down all the numbers up to $64$ you might decide to do eight lots of $8$. [It's a bit like $100$, in that you may well write ten lots of $10$ to get up to $100$ and produce a $100$ square.]
I suggested to them that they tried writing the numbers up to $64$ in an interesting way so that the shape they made at the end would be interesting, different, more exciting ... than just a square. Here are the ones that some of them came up with to show that the numbers could be arranged in an interesting way.
Most of them, as you see, ended up with shapes that were not squares. Those that did end up with an $8$ by $8$ square put the numbers in an interesting order into the shape.
When they had done that, they were asked to make a tile [or frame] made up of four squares.
Here are some examples:
The idea now was to place one of these tiles/frames somewhere on the table of $64$ so that it covered four numbers. [The tiles were made so that the squares were the same size as the squares on each of the numbers in the $64$ table.]
The numbers underneath the tile/frame were added up and recorded. The tile/frame was then moved around the table of $64$ to different positions and each time the total of the four numbers underneath was recorded.
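If you would like the computer to do the recording for you, here is a minimal sketch (my own illustration, assuming the standard row-by-row $8$ by $8$ layout and a square $2$ by $2$ tile; the other shapes and frames are left for you to try):

```python
# Numbers 1..64 written row by row into an 8x8 square.
N = 8
table = [[r * N + c + 1 for c in range(N)] for r in range(N)]

# Slide a 2x2 tile over every position and record the four-number total.
totals = []
for r in range(N - 1):
    for c in range(N - 1):
        tile_sum = (table[r][c] + table[r][c + 1]
                    + table[r + 1][c] + table[r + 1][c + 1])
        totals.append(tile_sum)

print(sorted(set(totals)))                  # which totals occur?
print(all(t % 2 == 0 for t in totals))      # every total is even
```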
Well that's what you need to do. It's fun creating new $64$ tables in different shapes.
Now comes the investigative part ...
Explore by looking at the totals that you've found and think about any relationships that you notice.
You then need to think about why these sets of answers are occurring. The youngsters at the Bromley school found lots of things out ... now it is your turn to do the same.
Lastly of course you need to ask, "I wonder what would happen if ...?"
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9809923768043518, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/math-challenge-problems/205346-slope-pulley-problem-please-guide-me-thank-you-advance.html
|
# Thread:
1. ## Slope and pulley problem? Please guide me. Thank you in advance
One of two identical masses lies on a smooth plane, which is inclined at $\sin^{-1}(1/14)$ to the horizontal, and is 2 m from the top. A light inextensible string attached to this mass passes along the line of greatest slope, over a smooth pulley fixed at the top of the incline; the other end carries the other mass, hanging freely 1 m above the floor. If the system is released from rest, find the time taken for the hanging mass to reach the floor.
2. ## Re: Slope and pulley problem? Please guide me. Thank you in advance
Your question is a little vague and light on details, but here is a list of assumptions: the string is massless, the plane is frictionless, and the pulley is massless and frictionless. If all of these hold, then you should be able to draw a force diagram for each block.
$F_{\text{plane}}=-mg\sin(\theta)+T$
$F_{\text{hang}}=mg-T$
So the net force is given by
$F_{net}=-mg\sin(\theta)+T+mg-T=(m+m)a_{\text{net}}$
Solving for the net acceleration gives (steps not shown)
$a_{net}=\frac{13}{28}g$
Now with the acceleration above, the initial velocity $v_0=0$, and the initial position $y_0=1$ (the hanging mass starts 1 m above the floor, so it must descend a distance of 1 m),
Use the kinematic equation
$y-y_0=v_0t+\frac{1}{2}at^2$
to solve for the time
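As a quick numeric check of the steps above (my own sketch, not part of the thread), assuming $g = 9.8\,\mathrm{m/s^2}$ and taking the drop of the hanging mass to be the 1 m stated in the problem:

```python
import math

g = 9.8                      # m/s^2, assumed value
sin_theta = 1 / 14           # plane inclined at arcsin(1/14)

# Identical masses, so m cancels: a = g(1 - sin(theta)) / 2 = 13g/28
a = g * (1 - sin_theta) / 2
print(a, 13 * g / 28)        # both ~4.55 m/s^2

s = 1.0                      # hanging mass starts 1 m above the floor
t = math.sqrt(2 * s / a)     # from s = (1/2) a t^2 with v0 = 0
print(round(t, 2), "s")      # ~0.66 s
```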
3. ## Re: Slope and pulley problem? Please guide me. Thank you in advance
Thank you very much. The questions are extracted from "Understanding Mechanics". Applying the kinematic equation I got $t \approx 0.66$ seconds. Thank you so very much!
4. ## Re: Slope and pulley problem? Please guide me. Thank you in advance
May I know what the value of the mass is?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.903933048248291, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/72286/list
|
# Higher Chow groups and singular cohomology theory
Let $X$ be a regular scheme over a field $k$ and let $\Delta^m$ be the algebraic $m$-simplex $\mathrm{Spec}\, k[t_0,\dots,t_m]/(1-\sum_j t_j)$. The group $z^i(X,m)$ is the free abelian group generated by all closed integral subvarieties on $X\times\Delta^m$ of codimension $i$ which intersect all faces $X\times\Delta^j$ properly for all $j < m$. Taking the alternating sum of these face intersections makes $z^i(X,*)$ a chain complex, and Bloch's higher Chow groups are defined as the homology groups of these complexes.
In Jinhyun Park's answer to the question "What do higher Chow groups mean", he elaborates that one can see higher Chow groups as an algebro-geometric version of singular homology theory.
Since higher Chow groups are extensively studied, a natural question is:
Can we define an algebro-geometric version of singular cohomology theory using the above constructions? Is this a good object to study? What about Chow cohomology? (Sorry if this does not make sense, I am new to this stuff.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9135879874229431, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/16091/what-is-the-fundamental-probabilistic-interpretation-of-quantum-fields/16103
|
# What is the fundamental probabilistic interpretation of Quantum Fields?
In quantum mechanics, particles are described by wave functions, which describe probability amplitudes. In quantum field theory, particles are described by excitations of quantum fields. What is the analog of the quantum mechanical wave function? Is it a spectrum of field configurations (in analogy with QM wave functions' spectrum of particle observables), where each field configuration can be associated with a probability amplitude? Or is the field just essentially a superposition of infinitely many wave functions for each point along the field (as if you quantized a continuous mattress of infinitesimal coupled particles)?
-
## 4 Answers
(Lubos just posted an answer, but I think this is sufficiently orthogonal to post too). The usual wavefunction for a bosonic field is a complex number for each field configuration at all points of space:
$$\Psi(\phi(x))$$
This wavefunction(al) obeys the Schrodinger equation with the field Hamiltonian, where the field momentum acts as a variational derivative operator on $\Psi$. This formulation is fine in principle, but it is not useful to work with this object directly under usual circumstances, for the following reasons:
• You need to regulate the field theory for this wavefunctional to make mathematical sense. If you try to set up the theory in the continuum right from the beginning, to specify a wavefunction over each field configuration you need to work just as hard as to do a rigorous definition for the field theory. For example, just to normalize the wavefunction over all constant time slice field values, you need to do a path integral over all the constant time field configurations. This is a path integral in one dimension less, but the thing you are integrating is no longer a local action, so there is no gain in simplicity. Even after you normalize, the expectation value of operators in the wavefunctional is a field theory problem in itself, in one dimension less, but with a nonlocal action.
• Once you regulate on a lattice, the field wavefunctional is just an ordinary wavefunction of all the field values at all positions. But even when you put it on an infinite-volume lattice, a typical wavefunctional will have a divergent energy, because a non-vacuum wavefunctional carries a finite energy density at each point. These finite-energy-density (hence infinite total energy) configurations of the field theory are very complicated, because they do not decompose into free particles at asymptotic times, but keep knocking around forever.
• The actual equations of motion for the wavefunctional are not particularly illuminating, and do not have manifest Lorentz symmetry, because you chose a time slice to define the wavefunction relative to.
These problems are overcome by working with the path integral. In the path integral, if you are adamant that you want the wavefunction, you can get it by doing the path integral while imposing a boundary condition on the fields at a certain time. But a path-integral Monte Carlo simulation, or even just a little bit of Wick rotation, will make the wavefunction settle to the vacuum, and insertions will generally only perturb it to finite-energy configurations, so you get the things you care about for scattering problems.
Still, the wavefunction of fields is used in a few places for special purposes, although, with one very notable exception, the papers tend to be on the obscure side. There are 1980s papers which attempted to find the string formulation of gauge theories by working with the field Hamiltonian in the Schrodinger representation, and these were by famous authors, but the names escape me (somebody will know, maybe Lubos knows immediately).
The best example of where this approach bears fruit is when the reduction in dimension gives a field theory which has a relationship with known solvable models. This is the example of the 2+1 gauge vacuum, which was analyzed in the Schrodinger representation by Nair and collaborators in the past decade.
A recent paper which reviews and extends the results is here: http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.6376v1.pdf . This is, by far, the most significant use of Schrodinger wavefunctions in field theory to date.
-
It seems you are asking "of what variables is the wave function in quantum field theory a function of?". However, this question doesn't have a unique answer; in contrast with your implicit assertion, it doesn't have a unique answer in ordinary quantum mechanics of particles, either.
The wave function is a set of complex numbers that determine the complex coefficients in front of basis vectors of a particular basis (or a "continuous basis") of the Hilbert space. So quantum mechanics' wave function may be encoded in $\psi(x)$; but it may also be described in the momentum representation, $\tilde \psi(p)$ which is the Fourier transform of $\psi(x)$.
We may also choose different eigenvectors, a different basis. For example, the harmonic oscillator (and many other systems) has a discrete set of energy eigenstates labeled by $n=0,1,2,3,\dots$. In that case, the coefficients $a_n$ completely encode the state vectors – the wave function – as well.
There is a hugely infinite number of possible bases, and even a very large number of bases that are commonly used.
In quantum field theory, the most natural basis (or the most widely used one) is the basis analogous to the harmonic oscillator energy eigenstate example. For each type of a quantum field, each polarization, and each $\vec k$ (the wavenumber, i.e. the frequency and the direction of motion), we define the contribution of this mode of the quantum field to the non-interacting part of the energy. This turns out to be just a rescaled harmonic oscillator with the spectrum $E=n\hbar\omega$ where $n$ is a non-negative integer that may be interpreted as the number of particles.
In this basis, to specify the wave function, we need to determine the complex amplitude in front of the $n_i=0$ "vacuum state"; the amplitudes in front of the numerous $n_i=(0,0,0,1,0,0,...)$ states, which are equivalent to the non-relativistic wave function (e.g. of positions or, more often, momenta) $\tilde \psi(p_1)$; the complex numbers determining the amplitudes for all two-excitation eigenstates, $\tilde \psi(p_1,p_2)$; and so on, indefinitely (for an increasing number of particles).
However, it's also true that one may choose a "functionally super continuous" basis of all configurations of the quantum fields. In this language, the state vector is a functional $$\Psi [\phi(x,y,z), A_i(x,y,z),\dots]$$ which depends on all the fields at $t=0$ but not their time derivatives. A functional is morally equivalent to a function of infinitely many variables (continuously infinitely many, at least in this case). It's not terribly practical to work with such a description of the wave function and it's hard to "measure" the values of the functional at different points (for different configurations of the fields) but it is possible.
The particle-occupation basis mentioned above is more physical because Nature evolves according to the Hamiltonian so all states (objects and their configurations) that tend to be at least a little bit "lasting" are close to energy eigenstates. That's why bases that are close to bases of energy eigenstates (or eigenstates of a "free" part of the Hamiltonian etc.) are generally more useful and natural to work with, especially when one thinks about applications.
-
In nonrelativistic quantum mechanics you can think (modulo technicalities with rigged Hilbert spaces) of the coordinate representation of the wavefunction as the projection of the state vector $\Psi$ in the "direction" of the position eigenvector, i.e. $\Psi(x)=\langle x|\Psi \rangle$.
Quantum field theory is traditionally (in textbooks) formulated in the interaction picture. It can, however, also be formulated in the Schroedinger picture, in which a natural notion of wavefunction emerges. Instead of being a complex valued function on the space of particle positions, it now becomes a complex valued functional on the space of field configurations. So the wavefunction(al) is the projection of a Schroedinger picture state $\Psi$ in the "direction" of a field configuration $\phi(\mathbf{x})$
$|\Psi(t) \rangle = \int \mathcal{D}\phi\, \Psi[\phi,t]\, |\phi \rangle$
It satisfies the functional Schroedinger equation
$i\frac{\partial}{\partial t} \Psi [\phi ,t] = H(\phi (\mathbf{x}), -i\frac{\delta}{\delta \phi (\mathbf{x})})\Psi [\phi ,t]$
The only textbook reference treating this that I know of is Brian Hatfield's book.
-
D'oh I see some more detailed answers appeared while I was typing mine.... – twistor59 Oct 24 '11 at 7:20
Maybe a less detailed answer, but I think you most directly address my intended question. Thanks! I will check out the Hatfield book. – user1247 Oct 24 '11 at 18:33
Ron and Luboš's Answers, +1. FWIW, however, I have found it worthwhile to take QFT to be a stochastic signal processing formalism in the presence of Lorentz invariant (quantum) noise.
The devil is in the details, and I cannot claim to be able to say much, or even anything, about interacting quantum fields, but it is possible to construct random fields that are empirically equivalent, in a specific sense, to the quantized complex Klein-Gordon field (EPL 87 (2009) 31002, http://arxiv.org/abs/0905.1263v2) and to the quantized electromagnetic field (http://arxiv.org/abs/0908.2439v2, completely rewritten a few weeks ago). Needless to say, the fact that a random field satisfies the trivial commutation relation $[\hat\chi(x),\hat\chi(y)]=0$ instead of the nontrivial commutation relation $[\hat\phi(x),\hat\phi(y)]=\mathrm{i}\!\Delta(x-y)$ plays out in numerous ways, and this is not for anyone who wants to stay in the mainstream.
I take it as a significant ingredient that we treat quantum fields as (linear) functionals from a Schwartz space $\mathcal{S}$ of window functions into a $\star$-algebra $\mathcal{A}$ of operators, $\hat\phi:\mathcal{S}\rightarrow\mathcal{A};f\mapsto\hat\phi_f$, instead of dealing with the operator-valued distribution $\hat\phi(x)$ directly, even though we may construct $\hat\phi_f$ directly from $\hat\phi(x)$, by "smearing", $\hat\phi_f=\int\hat\phi(x)f(x)\mathrm{d}^4x$. In terms of these operators, the algebraic structure of the quantized free Klein-Gordon field algebra is completely given by the commutator $[\hat\phi_f,\hat\phi_g]=(f^*,g)-(g^*,f)$, where $(f,g)=\int f^*(x)\mathrm{i}\!\Delta_+(x-y)g(y)\mathrm{d}^4x\mathrm{d}^4y$ is a Hermitian (positive semi-definite) inner product. A window function formalism is part of the Wightman axiom approach to QFT. Note that what are called "window functions" in signal processing are generally called "test functions" in QFT. The signal processing community works with Fourier and other transforms in a way that has close parallels with quantum theory and implicitly or explicitly uses Hilbert spaces.
In the vacuum state of the quantized free Klein-Gordon field, we can compute a Gaussian probability density using the operator $\hat\phi_f$, $$\rho_f(\lambda)=\left<0\right|\delta(\hat\phi_f-\lambda)\left|0\right>=\frac{e^{-\frac{\lambda^2}{2(f,f)}}}{\sqrt{2\pi(f,f)}},$$ which depends on the inner product $(f,f)$ [take the Fourier transform of the Dirac delta, use Baker-Campbell-Hausdorff, then take the inverse Fourier transform]. The reason it's good to work with smeared operators $\hat\phi_f$ instead of with operator-valued distributions $\hat\phi(x)$ is that the inner product $(f,f)$ is only defined when $f$ is square-integrable, which a delta function at a point is not. We can compute a probability density using an operator $\hat\phi_f$ in all states, but, of course, we can only compute a joint probability density such as $$\rho_{f,g}(\lambda,\mu)=\left<0\right|\delta(\hat\phi_f-\lambda)\delta(\hat\phi_g-\mu)\left|0\right>$$ if $\hat\phi_f$ commutes with $\hat\phi_g$; in other words, whenever, but in general only when, the window functions $f$ and $g$ have space-like separated supports. At space-like separation, the formalism is perfectly set up to generate probability densities, which is why QFT is like stochastic signal processing, but at time-like separation, the nontrivial commutation relations prevent the construction of probability densities.
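As a small sanity check on the formula for $\rho_f(\lambda)$, one can verify numerically that it is normalized and that its variance equals the inner product $(f,f)$. A minimal sketch, treating $(f,f)$ as a given positive number (the value 0.7 is an arbitrary placeholder):

```python
import math

ff = 0.7   # placeholder value for the inner product (f, f)

def rho(lam):
    # Gaussian vacuum probability density with variance (f, f)
    return math.exp(-lam**2 / (2 * ff)) / math.sqrt(2 * math.pi * ff)

# Crude Riemann-sum quadrature over a wide window [-10, 10].
xs = [i * 0.001 - 10.0 for i in range(20001)]
norm = sum(rho(x) for x in xs) * 0.001          # should be ~1
var = sum(x * x * rho(x) for x in xs) * 0.001   # should be ~(f, f)
print(round(norm, 4), round(var, 4))
```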
I take it to be significant that the scale of the quantum field commutator, the imaginary component of the inner product $(f,g)$, is the same as the scale of the fluctuations in the probability density $\rho_f(\lambda)$, which is determined by the diagonal component $(f,f)$. It is possible, indeed, to construct a quantum field state in which the two scales are different (Phys. Lett. A 338, 8-12 (2005), http://arxiv.org/abs/quant-ph/0411156v2), so the equivalence could be thought of as being as surprising as the equivalence of gravitational and inertial mass.
Needless to say, there are many people working away at QFT. The approach I've outlined is only one, with one person working on it, in contrast to string theory, supersymmetry, noncommutative space-time geometry, etc., all of which have had multiple physicist-decades or centuries of effort poured into them. It's probably better to follow the money. Also, please note that I have hacked this out in an hour, which I have done because rehearsal is always good. Did anyone read all of this?
-
This is the description of the vacuum state of a free field theory, which is always given by a Gaussian wavefunctional, and in the case of the electromagnetic field dates back to Schwinger (or Feynman, but I think only Schwinger gave it explicitly). The commutation relation for stochastic fields is not trivial, even though they are classical-looking. I find the most interesting versions are the stochastic 2d fields which reproduce the N=2 SUSY theory by the Nicolai map. This is an active field now. – Ron Maimon Oct 24 '11 at 14:06
's true, Ron. To construct a nontrivial Lorentz invariant vacuum state for a random field that has trivial commutators for the observables $\hat\chi(x)$ in a quantum field formalism requires nontrivial commutators for creation and annihilation operators, which is what makes the connection into QFT. Such an approach effectively puts the dynamical evolution and the description of measurements into the state instead of into the algebra ($[\hat\chi(x),\hat\chi(y)]=0$ gives no information, so information has to come from some other part of the structure). – Peter Morgan Oct 24 '11 at 14:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9222269654273987, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/96649/smooth-quadric-over-p-adic-integers/96653
|
## Smooth quadric over p-adic integers
Let $K$ be a $p$-adic field with ring of integers $\mathcal{O}_K$ and residue field $\mathbb{F}$. Say I have a (projective) quadric $Q$ which is smooth over $\mathcal{O}_K$, such that the reduction $\bar{Q}$ (smooth over $\mathbb{F}$) contains a line $\cong \mathbb{P}^1$ defined over $\mathbb{F}$. Does it follow that $Q$ contains a line $\cong \mathbb{P}^1$ defined over $\mathcal{O}_K$?
If yes, I guess this would somehow follow from Hensel's lemma, but I'm not sure how exactly.
-
## 1 Answer
Yes, for the reason you give. The Hilbert scheme of lines in $Q$ is smooth over $\mathcal O_K$ (the obstruction to smoothness lies in $H^1$ of a normal bundle $\mathcal N$; the homogeneity of $Q$ shows that $\mathcal N$ is generated by $H^0$, so has $H^1=0$ from the classification of bundles on $\mathbb P^1$). Now apply Hensel's lemma to this Hilbert scheme.
-
Ok, great answer! – Wanderer May 11 2012 at 10:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9089299440383911, "perplexity_flag": "head"}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.