http://mathhelpforum.com/discrete-math/184921-equivalence-classes.html
|
# Thread:
1. ## Equivalence Classes
How do you find the distinct equivalence classes of R?
A = {-4, -3, -2, -1, 0, 1, 2, 3, 4, 5}. R is defined on A as follows:
For all x, y elements of A, x R y <=> 3|(x-y).
I have the answer, but I do not understand the work leading up to the answer. I appreciate any help.
I am having the same issue with this one:
A = {-4, -3, -2, -1, 0, 1, 2, 3, 4}. R is defined on A as follows:
For all (m,n) elements of A, m R n <=> 5|(m^2 - n^2).
2. ## Re: Equivalence Classes
Originally Posted by lovesmath
How do you find the distinct equivalence classes of R?
A = {-4, -3, -2, -1, 0, 1, 2, 3, 4, 5}. R is defined on A as follows:
For all x, y elements of A, x R y <=> 3|(x-y).
I have the answer, but I do not understand the work leading up to the answer. I appreciate any help.
I am having the same issue with this one:
A = {-4, -3, -2, -1, 0, 1, 2, 3, 4}. R is defined on A as follows:
For all (m,n) elements of A, m R n <=> 5|(m^2 - n^2).
In the first case you want $x-y$ to be a multiple of 3.
So $-4~\&~2$ are in the same class because $-4-2=-6$, a multiple of 3.
You do the next one: multiples of 5.
3. ## Re: Equivalence Classes
So for 3|(x - y), do you find the combinations of numbers that give you remainders 0, 1 and 2, and each of those combinations will be its own equivalence class?
4. ## Re: Equivalence Classes
Originally Posted by lovesmath
So for 3|(x - y), do you find the combinations of numbers that give you remainders 0, 1 and 2, and each of those combinations will be its own equivalence class?
All you do is look at each pair of numbers.
Is their difference a multiple of 3?
If yes, those two numbers belong to the same class.
If no, they are not in the same class.
So $\{-4,-1,2,5\}$ is one class.
5. ## Re: Equivalence Classes
For the second relation, 5|(m^2 - n^2), I got the following equivalence classes:
{0}, {-4, -1, 1, 4}, {-3, -2, 2, 3}
Are these correct?
6. ## Re: Equivalence Classes
Originally Posted by lovesmath
For the second relation, 5|(m^2 - n^2), I got the following equivalence classes:
{0}, {-4, -1, 1, 4}, {-3, -2, 2, 3} Are these correct?
Yes they are correct. Good for you.
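A quick way to check answers like these is to build the partition directly from the definition of the relation. Here is a minimal Python sketch of my own (the helper name is not from the thread):

```python
def equivalence_classes(A, related):
    """Partition the finite set A into classes of the equivalence relation `related`."""
    classes = []
    for a in A:
        for cls in classes:
            if related(a, next(iter(cls))):   # comparing to one representative suffices
                cls.add(a)
                break
        else:
            classes.append({a})
    return classes

A1 = range(-4, 6)   # {-4, ..., 5}
A2 = range(-4, 5)   # {-4, ..., 4}

for cls in equivalence_classes(A1, lambda x, y: (x - y) % 3 == 0):
    print(sorted(cls))   # [-4, -1, 2, 5], [-3, 0, 3], [-2, 1, 4]
for cls in equivalence_classes(A2, lambda m, n: (m**2 - n**2) % 5 == 0):
    print(sorted(cls))   # [-4, -1, 1, 4], [-3, -2, 2, 3], [0]
```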
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8918694853782654, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/217719/how-do-i-prove-that-a-irreducible-polynomial-in-fx-has-a-root-in-an-extension
|
How do I prove that an irreducible polynomial in F[x] has a root in an extension E of F?
In order to demonstrate the root extension theorem, I need to prove that if an element of F[x]/(p(x)) is represented as $\overline a =a+(p(x))$, where $p(x)=a_0+a_1x+\cdots+a_nx^n$, then $\overline a_0+\overline a_1\overline x+\cdots+\overline a_n\overline {x^n}=0$ implies that $a_0+a_1\overline x+\cdots+a_n\overline {x^n}=0$. It seems easy, but I've tried a lot and I couldn't understand why we remove the overline of the coefficients.
-
1
you seem to have forgotten a "$=0$" before and after the "implies" – Glougloubarbaki Oct 21 '12 at 0:22
@Glougloubarbaki yes, thank you – user42912 Oct 21 '12 at 0:26
I deleted my comments, and posted them as an answer. Sorry. – Rankeya Oct 21 '12 at 3:38
1 Answer
In my opinion this is just notation because you are identifying $F$ as a subfield of $F[x]/(p(x))$. So, elements $\overline{a_i}$ are identified with $a_i$.
Technically, what this proof is showing is that if $\overline{F}$ is the image of $F$ under the natural injective ring map $F \rightarrow F[x]/(p(x))$, then the polynomial $\overline{a_n}T^n + \dots + \overline{a_1}T + \overline{a_0} \in \overline{F}[T]$ has the root $\overline{x}$, where now $F[x]/(p(x))$ is an extension of $\overline{F}$ (in the strict sense of an inclusion of fields). But, $F[x]$ and $\overline{F}[T]$ are isomorphic, and $\overline{a_n}T^n + \dots + \overline{a_1}T + \overline{a_0}$ is the image of $p(x)$ under this isomorphism.
If you look at Lang's Algebra (Proposition 2.3 of chapter V), there he actually constructs an extension of $F$ (if by extension you mean a strict inclusion of fields and not just an injective map of fields) in which $p(x)$ has a root.
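To make the identification concrete, here is a small computational sketch of my own (not from the answer): it works in $\mathbb{F}_2[x]/(p(x))$ with $p(x)=x^2+x+1$, represents cosets by their remainders modulo $p$, and checks that the coset $\overline{x}$ is a root of $p$ once the coefficients are read through the embedding $F \to F[x]/(p(x))$.

```python
# Arithmetic in F_2[x]/(p(x)) with p(x) = 1 + x + x^2 (irreducible over F_2).
# Polynomials are coefficient lists, lowest degree first. Helper names are my own.
P_CHAR = 2
P = (1, 1, 1)

def poly_add(f, g):
    n = max(len(f), len(g))
    f, g = list(f) + [0] * (n - len(f)), list(g) + [0] * (n - len(g))
    return [(a + b) % P_CHAR for a, b in zip(f, g)]

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P_CHAR
    return out

def reduce_mod_p(f):
    """Remainder of f on division by the monic polynomial p: the coset representative."""
    f = [c % P_CHAR for c in f]
    for k in range(len(f) - 1, len(P) - 2, -1):   # cancel all terms of degree >= deg p
        if f[k]:
            shift = k - (len(P) - 1)
            for i, c in enumerate(P):
                f[shift + i] = (f[shift + i] - f[k] * c) % P_CHAR
    return f[: len(P) - 1]

# Evaluate p at the coset xbar = x + (p(x)), doing all arithmetic inside the quotient ring.
xbar = [0, 1]
value, power = [0], [1]
for coeff in P:                     # coefficients a_0, a_1, a_2, identified with their cosets
    value = poly_add(value, poly_mul([coeff], power))
    power = poly_mul(power, xbar)
print(reduce_mod_p(value))          # [0, 0]: p(xbar) = 0, so xbar is a root of p in the quotient
```

So the element $\overline{x}$ really is a root of $p$ in the extension, which is exactly what the identification $\overline{a_i}=a_i$ is meant to express.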
-
Maybe I should add that the edition of Lang's Algebra that I have is the revised third edition. – Rankeya Oct 21 '12 at 3:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9500188827514648, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/27760/advanced-topics-in-string-theory
|
# Advanced topics in string theory
I'm looking for texts about topics in string theory that are "advanced" in the sense that they go beyond perturbative string theory. Specifically I'm interested in
1. String field theory (including superstrings and closed strings)
2. D-branes and other branes (like the NS5)
3. Dualities
4. M-theory
5. AdS/CFT
6. Matrix theory
7. F-theory
8. Topological string theory
9. Little string theory
I'm not interested (at the moment) in string phenomenology and cosmology.
I prefer texts which are (in this order of importance)
• Top-down i.e. would give the conceptual reason for something first instead of presenting it as an ad-hoc construction, even if the conceptual reason is complicated
• Use the most appropriate mathematical language / abstractions (I'm not afraid of highbrow mathematics)
• Up to date with recent developments
-
## 2 Answers
Among normal books, Becker-Becker-Schwarz probably matches your summary most closely. However, you may want to look at a list of string theory books:
http://motls.blogspot.com/2006/11/string-theory-textbooks.html
Don't miss the "resource letter" linked at the bottom which is good for more specialized issues such as string field theory. An OK review of string field theory could be this one
http://arxiv.org/abs/hep-th/0102085
but it was written before many recent advances, such as Martin Schnabl's analytic solution for the closed string vacuum and its followups.
I must correct your comment that you're interested in "nonperturbative" issues such as string field theory. It's been established that despite some expectations, string field theory is just another way to formulate perturbative string theory. It is not useful to learn anything about the strong coupling, not even in principle. And it becomes a mess in the superstring case. There are no functional string field theory descriptions with closed string physical states seen in the physical spectrum at all; it has various reasons. For example, a description that is ultimately "a form of field theory" can never produce the modular invariance $SL(2,{\mathbb Z})$ (needed to get rid of the multiple counting of the 1-loop diagrams). String theory is extremely close to a field theory but it is not really a field theory in spacetime in this strict sense and this fact becomes much more apparent for closed strings (which include gravity at low energies) than in the case of open strings (that may be largely emulated by point-particle fields – related to Yang-Mills being in the low-energy limit of open strings).
For a review of topological string theory, see e.g.
http://motls.blogspot.com/2004/10/topological-string-theory.html
Quite generally, when you study the literature (or reviews), you may find out that the amount of knowledge people actually have about various subtopics, i.e. their "relative importance in the current picture", is different from what you expected a priori. Without knowing the actual content, one can't sensibly "allocate" the number of pages to various subtopics as you did.
-
Thx a dozen for the answer! Regarding string field theory, I still have a feeling I need to learn because many string theory texts allude to it even if not using it. At the least I want to understand why it ultimately failed. Also it might be that some ideas from SFT can be recycled in some ultimately more successful way. Besides that, my understanding is that SFT did have at least 1 success namely Sen's work about D-brane annihilation – Squark Jan 7 '12 at 10:19
Regarding BBS, I have it (made it to page 189) but I got the feeling it glosses over too many things. I found D'Hoker's text in "Quantum Fields and Strings: A Course for Mathematicians vol. 2" much more readable, even though it might be partly an illusion because my mind was prepared by other texts. Unfortunately, though, D'Hoker doesn't go beyond basic perturbation theory – Squark Jan 7 '12 at 10:35
Dear @Squark, thanks for your interest. SFT didn't really "fail". It is a totally consistent and from many viewpoints uniquely useful - "explicitly local, off-shell" - formulation of perturbative open string dynamics and the open-string-related solutions such as D-branes solutions (as tachyon condensation from other D-brane starting points). The most explicit and rigorous framework to discuss tachyon condensation etc. It's very manageable in the bosonic string case. People could have expected some other miraculous things from SFT but there have never been "rational justifications" for them. – Luboš Motl Jan 7 '12 at 10:48
Otherwise I skipped F-theory, Matrix theory, and little string theory. There are specialized reviews of those because they're really special. To understand F-theory including the applications that give it the "juice", one must understand lots of algebraic geometry, bundles, and so on. A great arena for people who love (advanced) geometry. The physical essence is really simple: it's type IIB where the axion-dilaton $\tau$ is interpreted as the complex structure of a $T^2$ fiber of two new dimensions attached to each point. – Luboš Motl Jan 7 '12 at 10:58
Matrix string theory was reviewed in various papers e.g. Susskind-Bigatti and Taylor, see the resource letter by Marolf at the bottom of my 2006 page. Little string theory is extremely special. One should read the original papers and their most important followups. This is a domain intensely studied by a few people in the world so it's obviously not terribly efficient to write "textbooks" of it. – Luboš Motl Jan 7 '12 at 11:00
You can consult this website.
-
1
This URL was (for years) on the top of my page, too. ;-) – Luboš Motl Jan 7 '12 at 10:46
Sorry, I did not check your link. But let this post be as it is. – Satoshi Nawata Jan 9 '12 at 20:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371156096458435, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/120601-help-dy-dx-d-2y-dx-2-problem.html
|
Thread:
1. Help with a dy/dx and d^2y/dx^2 problem!
Hey guys,
Just wondering if someone can please answer this question! Could really use the explanation guys.
Cheers!
Hey guys,
Just wondering if someone can please answer this question! Could really use the explanation guys.
Cheers!
The first and second derivatives require a function, not just individual values, so taken literally the data are not sufficient. I presume that you are really asked to estimate or approximate the derivatives.
My first thought would be to assume a linear function, but then the second derivative would automatically be 0. The next simplest thing to do is to assume a quadratic function. I would be inclined to write the function as $a(x-2)^2+ b(x-2)+ c$, so when x= 0, this is just $a(-2)^2+ b(-2)+ c= 4a- 2b+ c= 7$, when x= 2, this is just $a(0)^2+ b(0)+ c= c= 13$, and when x= 4, this is just $a(2)^2+ b(2)+ c= 4a+ 2b+ c= 43$.
That is, you have 4a- 2b+ c= 7, c= 13, and 4a+ 2b+ c= 43. It should be easy to solve those for a, b, and c and so write the quadratic function.
Of course, if $f(x)= ax^2+ bx+ c$, then $f'(x)= 2ax+ b$ and $f''(x)= 2a$. Evaluate those at x= 0 and x= 1.
Now, that does not use the values at x= 6, or 8 at all. You could, if you wished, write a $4^{th}$ degree polynomial passing through those 5 points.
Again, the question, as you posed it, is impossible. There exist an infinite number of twice-differentiable functions passing through those 5 points. Unless there is more to this problem than you have told us, what I am suggesting is the simplest way to get an answer.
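The original image did not come through, but assuming the values quoted above, namely $f(0)=7$, $f(2)=13$, $f(4)=43$, the quadratic fit described here is easy to carry out with NumPy; this is a sketch of my own and the variable names are not from the thread.

```python
import numpy as np

# Fit f(x) = a x^2 + b x + c exactly through (0, 7), (2, 13), (4, 43).
x = np.array([0.0, 2.0, 4.0])
y = np.array([7.0, 13.0, 43.0])
a, b, c = np.polyfit(x, y, deg=2)     # three points, three unknowns: the fit is exact
print(a, b, c)                        # 3.0 -3.0 7.0, i.e. f(x) = 3x^2 - 3x + 7

f_prime = lambda t: 2 * a * t + b     # f'(x)  = 2ax + b
f_second = 2 * a                      # f''(x) = 2a, constant for a quadratic
print(f_prime(0.0), f_second)         # -3.0 6.0
```

Interpolating a higher-degree polynomial through all five points would give different (and equally defensible) estimates, which is exactly the non-uniqueness pointed out above.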
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9283613562583923, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/220605/connectedness-of-two-particular-set-of-matrices/223422
|
# connectedness of two particular set of matrices
I need to know whether $1)$ the set of all symmetric positive definite matrices is connected or not.
Well, I guess this set is a convex set. Let $M$ be symmetric positive definite, so $X^TMX>0$ for $X\in \mathbb{R}^n$; now for any two such matrices $A,B$ we have $X^T[tA+(1-t)B]X=tX^TAX+(1-t)X^TBX>0$ for $t\in[0,1]$. Hence this set is path connected.
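A quick numerical sanity check of this convexity argument (a sketch of my own, not part of the question): generate two random symmetric positive definite matrices and verify that every convex combination stays positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive definite matrix: Q Q^T is PSD, adding I makes it PD."""
    Q = rng.standard_normal((n, n))
    return Q @ Q.T + np.eye(n)

A, B = random_spd(4), random_spd(4)
for t in np.linspace(0.0, 1.0, 11):
    M = t * A + (1 - t) * B
    assert np.all(np.linalg.eigvalsh(M) > 0)   # symmetric with positive eigenvalues => SPD
print("every sampled convex combination is symmetric positive definite")
```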
-
1
Correct ${}{}{}$ – Will Jagy Oct 25 '12 at 5:37
## 1 Answer
It's correct (you just have to specify you take $X\neq 0$).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8572873473167419, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2008/12/30/dimensions-of-symmetric-and-antisymmetric-tensor-spaces/?like=1&_wpnonce=67418a1937
|
# The Unapologetic Mathematician
## Dimensions of Symmetric and Antisymmetric Tensor Spaces
We’ve laid out the spaces of symmetric and antisymmetric tensors. We even showed that if $V$ has dimension $d$ and a basis $\{e_i\}$ we can set up bases for $S^n(V)$ and $A^n(V)$. Now let’s count how many vectors are in these bases and determine the dimensions of these spaces.
The easy one will be the antisymmetric case. Every basic antisymmetric tensor is given by antisymmetrizing an $n$-tuple $e_{i_1}\otimes...\otimes e_{i_n}$ of basis vectors of $V$. We may as well start out with this collection in order by their indices: $i_1\leq...\leq i_n$. But we also know that we can’t have any repeated vectors or else the whole thing collapses. So the basis for $A^n(V)$ consists of subsets of the basis for $V$. There are $d$ basis vectors overall, and we must pick $n$ of them. But we know how to count these. This is a number of combinations:
$\displaystyle\dim\left(A^n(V)\right)=\binom{d}{n}=\frac{d!}{n!(d-n)!}$
Now what about symmetric tensors? We can't do quite the same thing, since now we can allow repetitions in our lists. Instead, what we'll do is this: instead of just a list of basis vectors of $V$, consider writing the indices out in a line and drawing dividers between different indices. For example, consider the basic tensor of $\left(\mathbb{F}^5\right)^{\otimes 4}$: $e_1\otimes e_3\otimes e_3\otimes e_4$. First, it becomes the list of indices
$\displaystyle1,3,3,4$
Now we divide $1$ from $2$, $2$ from $3$, $3$ from $4$, and $4$ from $5$.
$\displaystyle1,|,|,3,3,|,4,|$
Since there are five choices of an index, there will always be four dividers. And we’ll always have four indices since we’re considering the fourth tensor power. That is, a basic symmetric tensor corresponds to a choice of which of these eight slots to put the four dividers in. More generally if $V$ has dimension $d$ then a basic tensor in $S^n(V)$ has $n$ indices separated by $d-1$ dividers. Then the dimension is again given by a number of combinations:
$\displaystyle\dim\left(S^n(V)\right)=\binom{n+d-1}{d-1}=\frac{(n+d-1)!}{n!(d-1)!}$
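Both counts are easy to sanity-check by brute force: strictly increasing index lists for the antisymmetric case, weakly increasing ones for the symmetric case. A small sketch of my own, using the $d=5$, $n=4$ example from above:

```python
from itertools import combinations, combinations_with_replacement
from math import comb

d, n = 5, 4   # dim V = 5, tensor degree n = 4

antisym_basis = list(combinations(range(d), n))               # index lists i_1 < ... < i_n
sym_basis = list(combinations_with_replacement(range(d), n))  # index lists i_1 <= ... <= i_n

print(len(antisym_basis), comb(d, n))           # 5 5:   dim A^n(V) = C(d, n)
print(len(sym_basis), comb(n + d - 1, d - 1))   # 70 70: dim S^n(V) = C(n+d-1, d-1)
```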
## 13 Comments »
1. [...] Let’s look at the dimensions of antisymmetric tensor spaces. We worked out that if has dimension , then the space of antisymmetric tensors with tensorands [...]
Pingback by | December 31, 2008 | Reply
2. I cannot help but wonder if it makes any sense to change the factorials by Euler gamma functions and talk about spaces non-integer dimensions. Is this a field (of study) in pure mathematics?
Comment by Melvin | January 21, 2009 | Reply
3. Melvin, it all depends on what you mean by “dimension”. Remember that we’ve defined the dimension of a vector space as the cardinality of a basis, and cardinalities of (finite) sets are always natural numbers.
Comment by | January 21, 2009 | Reply
4. @Melvin: I guess maybe you’ve heard of fractals and fractal dimensions?
Perhaps more relevant to gamma functions per se are various analytic expressions which come up under the rubric of “dimensional regularization” in quantum field theory, where so-called “D-dimensional integrals” come into play for complex values of D; gamma factors seem to arise frequently there for example. This reminds me too that there is a notion of fractional differentiation that people sometimes talk about, where the Cauchy formula for the $n^{th}$ derivative of an analytic function,
$\displaystyle f^{(n)}(z_0) = \frac{n!}{2\pi i} \int_{|z - z_0| = r} \frac{f(z)}{(z - z_0)^{n+1}} dz$
is extended to non-integer values of $n$, with an obvious substitution of the Gamma function for $n!$.
Tangential but fun fact: the $n$-dimensional volume of the unit ball in $\mathbb{R}^n$ is $\pi^{n/2}/(n/2)!$, interpreted as appropriate for both odd and even $n$. This I feel deserves a non-factorial exclamation mark!
Comment by | January 23, 2009 | Reply
5. John: I agree that cardinality is a natural number. The point is if one can make any sense of vector spaces with positive non-integer dimension. You can ignore me, I am sure this daydream is getting me a lot of points in the crackpot index.
Todd: Dimensional regularization was what triggered my daydream and then I found this nice post on vector spaces. More precisely, in QFT (at least for QED) one does not usually bother with changing the dimensionality of spinors and one keeps it appropriate to four spacetime dimensions. The momentum integrals are lifted into arbitrary dimensions and we physicists do what we do. In class I heard a remark about how Dirac spinors do not generalize in a straightforward way to an arbitrary number of dimensions; even for the D = 2,3,4,5,6,7,8,9 modulo 8 cases each one is very different.
My daydream was something like what if one could use some framework where all the objects have a nice generalization to arbitrary dimensions and some physical requirement (like what happens in string theory) fixes the dimensionality to be a positive but real number. It would be like the ultimate joke from Nature. (Again, crackpot index running high.)
I am sure all I am saying reduces to taking the expression "analytic continuation" too seriously. After all, dimensional regularization is an intermediate step in the calculation and is just one particular pick of a regularization scheme. Good physics does not depend on the choice of this scheme.
Fractional calculus and fractals sound interesting and I certainly know nothing about that. At some point I will check it out.
Comment by | January 23, 2009 | Reply
6. Just one more random thought then. I don’t know whether you (Melvin) follow the writings of John Baez, for example his This Week’s Finds, but speaking of cardinalities, there is an interesting idea he’s written about here and there where one considers the cardinality not just of sets but more generally of groupoids, which can be fractional. I’ll curb the impulse to say much about it in a compressed comment, but check it out. I think you’d find a lot worth ruminating on, coming as you do from a physics perspective.
Comment by | January 23, 2009 | Reply
7. Thanks for the pointer!
Comment by | January 23, 2009 | Reply
8. I’ll point out this as a direction to ponder: what would it mean to have a groupoid as a basis, rather than just a set?
To be even heavier-handed: how is a set a groupoid? If you've been reading since I was doing category theory you should be able to tell.
Comment by | January 23, 2009 | Reply
9. It’s a rather mysterious (at least, to me) fact that sometimes goes by the name of “combinatorial reciprocity” that the dimension of the symmetric tensor space can also be written $(-1)^n {-d \choose n}$. James Propp and others have done some interesting work exploring a way to make “negative cardinality” rigorous, and it’s interesting to think about how to combine these ideas with groupoid ideas to get both positive and negative real cardinalities.
Comment by Qiaochu Yuan | June 7, 2009 | Reply
10. My gut feeling, Qiaochu (and you may have considered this yourself) is that this type of thing can be understood formally via superalgebra (i.e., tensor calculus in $\mathbb{Z}_2$-vector spaces). The rough idea here is that exterior powers (with dimension d choose n) are the “supersymmetric counterparts” of symmetric powers. In fact, my preferred stopgap technique for dealing with negative cardinalities is via superalgebra.
If you’d like, we can discuss this a little: I have an email account [email protected].
Comment by | June 8, 2009 | Reply
11. Argh, I meant $\mathbb{Z}_2$-graded vector spaces.
Comment by | June 8, 2009 | Reply
12. [...] is represented by an antisymmetric tensor with the sides as its tensorands. But we’ve calculated the dimension of the space of such tensors: . That is, once we represent these parallelepipeds by antisymmetric [...]
Pingback by | November 3, 2009 | Reply
13. [...] just keep going and let our methods apply to such more general tensors. Anyhow, we also know how to count the dimension of the space of such [...]
Pingback by | November 9, 2009 | Reply
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 43, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933600127696991, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/115590/one-problem-in-bounds-and-constructions-for-unconditionally-secure-distributed
|
## One problem in “Bounds and constructions for unconditionally secure distributed key distribution schemes for general access structures”
In this paper "Bounds and constructions for unconditionally secure distributed key distribution schemes for general access structures" on page 286 it's written "The independence of the mappings $\lbrace \phi _j \rbrace _{j \in H_G}$, implies that $ker\varphi _h^0 + (\cap _{j \in H_G} ker \varphi _j^0 )=E_0^l$." May I ask for more explanation of this sentence please?
-
I haven't seen many cryptography questions on this forum, which I take as evidence that we don't draw much interest from that community. That doesn't mean we can't answer your question, but you might have to make it a bit easier on us. Can you write down a description of these maps $\phi$? Of $E$? Of $H_G$? Who knows? Maybe writing a longer question and trying to pinpoint exactly where your confusion is will lead you to the answer yourself. If you have no luck here, try the theoretical computer science overflow site (but add a link here to any post you make there). – David White Dec 6 at 14:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9498419165611267, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/625/why-is-the-derivative-of-a-circles-area-its-perimeter-and-similarly-for-sphere?answertab=active
|
# Why is the derivative of a circle's area its perimeter (and similarly for spheres)?
When differentiated with respect to $r$, the derivative of $\pi r^2$ is $2 \pi r$, which is the circumference of a circle.
Similarly, when the formula for a sphere's volume $\frac{4}{3} \pi r^3$ is differentiated with respect to $r$, we get $4 \pi r^2$.
Is this just a coincidence, or is there some deep explanation for why we should expect this?
-
1
(I realise that it might not be clear what the $n$-dimensional generalisation is of this, but perhaps this would happen even in different geometries or metric spaces?). – bryn Jul 24 '10 at 3:01
3
– Jonathan Fischoff Jul 24 '10 at 3:23
1
And next explain why it fails for the square... or the ellipse... – GEdgar Dec 6 '11 at 14:22
1
You mentioned that it's true for the 2-sphere, and for the 3-sphere, but it should be noted that it is also true for the 1-sphere, which is the interval from -r to r, which has 1-volume of 2r. The derivative of 2r wrt r is 2, which is the measure of its "surface", measure for 0-dimensional items being the same as cardinality. – Hexagon Tiling Jan 7 '12 at 11:47
1
@GEdgar : I make out the area of a square of 'radius' $r$ as $4r^2$ and the perimeter as $8r$; the idea continues to work there (for essentially the same uniformity reason that it does on the sphere). Of course, it doesn't work on rectangles for the same reason it doesn't work on ellipses... – Steven Stadnicki Apr 8 '12 at 18:11
## 6 Answers
There is an article on the web that deals, in depth, with this question. Here is a quote from it:
“We were intrigued by the students' work, and this paper is the result of our attempt to answer the question, 'When is surface area equal to the derivative of volume?'”
www.math.byu.edu/~mdorff/docs/DorffPaper07.pdf
-
Consider increasing the radius of a circle by an infinitesimally small amount, $dr$. This increases the area by an annulus (or ring) with inner circumference $2 \pi r$ and outer circumference $2\pi(r+dr)$. As this ring is extremely thin, we can imagine cutting the ring and then flattening it out to form a rectangle with width $2\pi r$ and height $dr$ (the side of length $2\pi(r+dr)$ is close enough to $2\pi r$ that we can ignore it). So the area gained is $2\pi r\cdot dr$, and to determine the rate of change with respect to $r$, we divide by $dr$ and get $2\pi r$. This is, of course, just an informative, intuitive explanation and not a formal proof. The same reasoning works with a sphere; we just flatten it out to a rectangular prism instead.
-
I know I'm necroing, but. . . +1 for annulus – 000 Jan 30 '12 at 5:25
The circle (and sphere) is not really that special. It also works for the square if you measure it using not the side length $s$, but half that, $h=s/2$. Then its area is $A=(2h)^2=4h^2$ with derivative $dA/dh=8h$ which is its perimeter.
-
How does one set up the integral to find the area of a circle? Area was defined for a square or rectangle as the width times the length, and the same idea carries over to other geometries. For a circle, working in polar coordinates, the differential length is $dr$ while the differential width is $r \,d\theta$.
So... $$dA = r\, d \theta\, dr.$$ Here $r \,d\theta$ is the differential arc (width) times the differential length $dr$. If we divide through by $dr$, inspecting the form of this expression shows that the fundamental form for finding the area of a circle already contains what we know to be the circumference of a circle. So the connection is implicit in the basic geometry, because we are working in a polar system.
-
1
You can use LaTeX pretty much as usual. Just enclose your formulas in dollar signs. For example, `$\theta$` gives $\theta$. – t.b. Dec 6 '11 at 14:52
The explanation is very simple. Take a sphere of radius $r$, volume $V$, and surface area $A$. Now paint it, with a layer of thickness $\delta r$. The volume of paint required is (to first order in $\delta r$) $A\delta r$, which gives you straight away: $$\delta V = A \delta r$$ Hence, in the limit:
$$\frac{dV}{dr} = A$$
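The paint argument is easy to check numerically: a finite-difference quotient of the volume with respect to the radius converges to the surface area, and likewise for the disc's area and its circumference. A small sketch of my own:

```python
import numpy as np

r, h = 2.0, 1e-6

area = lambda r: np.pi * r**2                  # disc of radius r
volume = lambda r: 4.0 / 3.0 * np.pi * r**3    # ball of radius r

print((area(r + h) - area(r)) / h, 2 * np.pi * r)         # ~12.566 vs 12.566 (circumference)
print((volume(r + h) - volume(r)) / h, 4 * np.pi * r**2)  # ~50.265 vs 50.265 (surface area)
```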
-
Because you use the integral (read: anti-derivative) to find the area under the curve - even a curve in polar coordinates.
-
This doesn't explain why the coefficients match up. – Ben Alpert Jul 24 '10 at 3:35
@Ben: Yes it does. Try reading 'integral' as 'anti-derivative'. – BlueRaja - Danny Pflughoeft Jul 24 '10 at 4:06
1
This answer is right, but I understand Ben Alpert's confusion. It's slightly unclear when I read it over again. – Justin L. Jul 24 '10 at 11:10
I hadn't thought of this. Does this also explain why it works in 3 dimensions? – bryn Jul 27 '10 at 4:59
2
I agree it doesn't really explain why the coefficients match up. If this answers the question, then how does this work with a square? Is the derivative of $x^2$ equal to $4x$? Your explanation works for one specific example. How about an explanation that really works in general? At the least, you need more details to explain it. – Graphth Dec 6 '11 at 13:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942854106426239, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/73719/injective-function-on-a-dense-set
|
## Injective Function on a Dense Set
This is a topological question that came up tangentially to some material I was working on. Suppose $X$ and $Y$ are complete metric spaces and $D$ is a dense subset of $X$. Let $f:D\mapsto Y$ be a continuous injection. Extend $f$ to a function $g:X\mapsto Y$ by continuity. Must $g$ be injective? It seems to me that the answer should be yes, but I haven't been able to prove it.
-
1
You may state the question starting from g. Can it happen that a continuous non-injective $g:X\to Y$ between complete metric spaces restricts to an injective function on a dense subset $D$? Of course it can; another example is e.g. $x\mapsto x^2$ on $X=Y=\mathbb{R}$, restricted to $D$:={positive rationals and negative irrationals}. – Pietro Majer Aug 26 2011 at 13:13
## 2 Answers
Map the open unit interval to a circle minus a point, and then extend it to the closed interval.
-
Joel has completely answered the question, but let me add another example, with a bigger failure of injectivity of $g$. Let $D$ be the set of points in the plane of the form $(\frac1n,\sin n)$ for positive integers $n$. Its closure consists of $D$ plus the segment $S=\{0\}\times[-1,1]$ on the $y$-axis. The projection $(x,y)\mapsto x$ is one-to-one on $D$ but its continuous extension to the closure $D\cup S$, given by the same projection formula, is constant on $S$, i.e., on a bigger set than the set $D$ on which it's one-to-one.
There are analogous examples with the segment replaced by any complete, separable, metric space, for example, Hilbert space. And the only reason for needing the word "metric" there is because it was in the question; otherwise, there would be a Stone-Čech compactification example here.
-
To answer your query, which I've now deleted, about getting the accent on Čech's name: on my computer (UK keyboard, Linux) it's done by typing AltGr-@ followed by C. – Tom Leinster Aug 26 2011 at 3:56
@Tom: Thanks. – Andreas Blass Aug 26 2011 at 4:50
Another similar example to this would be to use a dense subset of the plane that contains at most one point from each column. (One can even make it countable by using only rational points.) Thus, the projection map of this set to the $x$-axis is injective, but extends by continuity to the full projection map of the plane to the $x$-axis. – Joel David Hamkins Aug 26 2011 at 11:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9386581182479858, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/33366/the-unprecedented-success-of-the-intersection-operator/33373
|
## The unprecedented success of the “intersection” operator
You might think that the title is an overstatement of a well-known fact but it is the best title I can come up with for the wonders the intersection operator does in some fields of math.
Recently (on summer vacation) I was studying one subject after another, and after changing about three subjects I began to notice that in all of them the set-theoretic intersection operator always carried over some property of the parent sets to the set obtained after the intersection.
To summarize briefly:
Let $A$ and $B$ be two sets, say with property P. Then, $A \cap B$ has property P.
Evidence, some of it trivial:
## Topology
• The intersection of two open sets is open.
• The intersection of two closed sets is closed.
• The nonempty intersection of two subspaces of a metric space is a metric space.
......and so forth.
## Algebra
• The intersection of two subspaces of a vector space is a vector space.
• The intersection of two subgroups of a group is a group(w.r.t the same binary operation and clearly the intersection is between the underlying sets).
• The intersection of two sub-fields of a field is a field.
......the list continues.
The third subject was Graph Theory, but I haven't yet come across the notion of intersection.
Now I would like to ask whether this trend always holds or whether there is some underlying principle each discipline abides by when using the notion of intersection. Is there any property deviating from this trend? What are the reasons for the ubiquity of the quoted property?
-
3
Well, the intersection of two subsets of a group which are not subgroups does not necessarily have that same property... – Mariano Suárez-Alvarez Jul 26 2010 at 7:59
1
@Mariano,in your case P $=$ being a subset. So, taking two subsets of a group, their intersection is also a subset. – To be cont'd Jul 26 2010 at 8:16
1
No: in my case P = being a subset which is not a subgroup. – Mariano Suárez-Alvarez Jul 26 2010 at 8:17
1
But I will be more pleased if I see counterexamples that are not 2nd derivatives. That is when P is an atomic property and not a combination of two. – To be cont'd Jul 26 2010 at 8:41
1
The property of being a non-atomic property is well-defined. There is then a subjective question of which decompositions of a non-atomic property are nontrivial, and a related question of which properties (when presented without a given decomposition) also have a decomposition rendering them non-atomic. The desire for counterexamples that are minimal as possible is a reasonable one. – T. Jul 27 2010 at 18:42
## 5 Answers
When property P is universal ($\forall ...$) it is likely to correspond to closed sets, and thus be preserved under intersection. Examples: axioms of a group, ring, field, directed graph; having symmetry under a given group.
However, if P is existential ($\exists ...$) it corresponds to open sets and is more likely to be preserved by unions (or products), not intersections. Examples: being algebraically closed, having at least 53 elements. (Well, algebraic closure is $\forall \exists$ so of course it is even more complicated. But falling out of the pure $\forall$ class it fails the intersection property.)
The first situation is possibly more common because we want structures to satisfy some, well, structural properties. Properties expressed by equations usually correspond to closed sets.
To some extent this is formalized in Birkhoff's theorem on equational presentations. Any book on Universal Algebra will discuss it.
Also, the sample of concepts is biased, because definitions that become standard are often selected for their useful formal properties. Concepts not having stability under intersection (or union, or inheritance by sub- or super-structures) are less likely to be used.
-
There is a general sense in which any property that is closed under arbitrary intersection is exactly a closure property.
To explain what I mean, suppose that $X$ has property $P$ and that the collection of subsets of $X$ with property $P$ is closed under arbitrary intersection. Then I claim that there is a function $cl$, a closure operator, defined on subsets of $X$ such that $A\subset cl(A)=cl(cl(A))$ and $A\subset B\to cl(A)\subset cl(B)$, for which the sets with property $P$ are exactly the sets $A$ that are closed with respect to $cl$, meaning that $A=cl(A)$.
To see this, simply let $cl(A)$ be the intersection of all $B$ with property $P$ such that $A\subset B$.
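As a concrete instance of my own (not part of the answer): in $\mathbb{Z}_{12}$ under addition, the subgroups are closed under intersection, and the closure operator `cl` below sends a subset to the subgroup it generates, i.e. to the intersection of all subgroups containing it.

```python
N = 12
ZN = set(range(N))

def is_subgroup(S):
    """S is a subgroup of (Z_N, +) iff it contains 0 and is closed under addition mod N."""
    return 0 in S and all((a + b) % N in S for a in S for b in S)

# The subgroups of Z_N are generated by the divisors of N.
subgroups = [set(range(0, N, d)) for d in range(1, N + 1) if N % d == 0]

def cl(A):
    """Closure operator: intersection of all subgroups containing A."""
    result = ZN.copy()
    for H in subgroups:
        if A <= H:
            result &= H
    return result

A, B = set(range(0, N, 2)), set(range(0, N, 3))   # the subgroups generated by 2 and by 3
print(sorted(A & B), is_subgroup(A & B))   # [0, 6] True: the intersection is again a subgroup
print(is_subgroup(A | B))                  # False: the union is not (2 + 3 = 5 is missing)
print(sorted(cl(A | B)))                   # [0, 1, ..., 11]: the subgroup generated by the union
```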
Apart from this, there are many examples of natural properties that are not closed under intersection.
• The intersection of two groups is not necessarily a group. (e.g. when they are not subgroups of a larger group.)
• Same for almost any other type of algebraic structure.
• The intersection of two nonempty sets may not be nonempty.
• The intersection of two ultrafilters on a set is not necessarily an ultrafilter.
• The intersection of two maximal ideals in a ring is not necessarily a maximal ideal.
• The intersection of two unbounded subsets of the plane may not be unbounded.
• etc.
-
In Introduction to Topology by Bert Mendelson, around pp50's, it actually asks to prove the above notion of closure for closed sets. – To be cont'd Jul 26 2010 at 11:39
Here is an example of a $\forall$ property that is not closed under intersection: the intersection of two linear orders on a set is not necessarily a linear order. Rather, it is the partial order that the two linear orders have in common. But linearity is $\forall$-expressible, as $\forall p,q\, (p\leq q\vee q\leq p)$. – Joel David Hamkins Jul 26 2010 at 15:07
Joel, the $\forall$ property of being a linear order is closed in the usual sense, since it is expressed by equations on the $Z/2$-valued function on pairs that records the relation, such as $f(p,q) + f(q,p) = f(p,p)=1$. It satisfies the same intersection property as closed sets in topology: if $A$ and $B$ are substructures of $X$ having property $P$ (as defined in $X$ or inherited from it, e.g., both are closed sets in a given topology on $X$, or both are subsets of a linearly ordered set $X$) then so is $A \cap B$. Closed sets in different topologies need not have closed intersection. – T. Jul 27 2010 at 3:02
I agree with that, although I would describe it differently. First order universal formulas used to define sets of points are of course closed under arbitrary intersection (or even just subsets). I was in contrast using a universal formula as a second order definition, to define a property of the relation appearing in it (linearity), and these are not necessarily closed under intersection. – Joel David Hamkins Jul 27 2010 at 3:27
Also, in all six of the examples of intersection-closed properties given in the question, the intersection was of substructures of a given structure. Under this interpretation it would be very interesting to see an $\forall$ property not closed under intersection. Being a field is an interesting example, since existence of inverses is a priori a $\forall \exists$ statement. However, direct sums of fields do have an equational presentation (by Birkhoff's theorem), and any subset of a field that is of this direct-sum form is a field, so the inverses axiom doesn't stop intersection-closure. – T. Jul 27 2010 at 3:33
So it's pretty clear, I think, that this trend doesn't always hold. For example, the intersection of two simply connected subspaces of a space does not have to be simply connected. (For example, the intersection of hemispheres on the sphere is a circle).
I think that the reason it holds in all the cases you mention, except the case of open sets, is that the property is somehow a "closure" property. Closed sets are closed under limits (or nets, more correctly), subobjects of algebraic objects are closed under some operations, etc. The pervasiveness you see is due to the fact that "closure" type properties behave well under intersection. The open set example is, I think, a red herring. There, notice that only finite intersections maintain the property in question, and that this behavior with respect to intersections is part of the definition, probably to avoid having too few open sets.
Anyway, I'm not sure what else there is to say about this question- it's rather vague, so I don't even know if this is the kind of answer you wanted.
-
Thanks, Dylan. But can you come up with two sets having just one "big" property and not following the trend? In your case, though acceptable as an answer, I get the feeling that simple-connectedness and being a subspace are two properties. See my comment above. – To be cont'd Jul 26 2010 at 8:45
Well. I think the fact that simple connectedness and being a subspace are two properties is just a notational issue. Moreover, if you consider an ambient space, what should be simply connected if not a subspace? Being a subfield of a field also means to be closed under various operations, and hence could be regarded as not just one property. – Stefan Geschke Jul 26 2010 at 9:19
Mulling this over, I thought: "Compared to what?". So another question is "Why are more properties closed under intersection than under union?" While that may be arguable, there's something to be said for looking at it (e.g. a union of two non-trivial subgroups of the same group is generally not a subgroup).
My intuition is that it has to do with the fact that, constructively, $(P \rightarrow X \wedge Y) \Leftrightarrow (P \rightarrow X) \wedge (P \rightarrow Y)$ but $(P \rightarrow X \vee Y) \nLeftrightarrow (P \rightarrow X) \vee (P \rightarrow Y)$. Here $P$ restricts the properties $X$ and $Y$ to an interesting 'sub-universe'.
I'm sure someone with more knowledge of lattice or even category theory can restate this more succinctly.
-
1
The second paragraph of yatima2975's answer implicitly involves quantification. In (classical) propositional logic, $P\to(X\vee Y)$ is equivalent to $(P\to X)\vee(P\to Y)$; to see this, just check the truth tables. What yatima2975's intuition really seems to depend on is that universal quantification distributes over "and" but not over "or". (Since existential quantification distributes over "or" but not over "and", I don't see much clarification here.) – Andreas Blass Jul 26 2010 at 15:05
Andreas, your parenthetical addition of 'classical' is indeed right, and relevant! I've been doing functional progamming and constructive logic for far too long. I have edited my post accordingly; thank you for catching my mistake. – yatima2975 Jul 26 2010 at 15:50
You might want to check the definition of a filter: http://en.wikipedia.org/wiki/Filter_%28mathematics%29
I think the definition of a filter exemplifies the "intersection property" you talk about.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9500307440757751, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/19610/help-with-using-some-trig-identities?answertab=oldest
|
# Help with using some trig identities
Need some help with the steps in converting the derivatives of the following functions.
1. derivative of $\cos(\tan(x))$ to $\frac{-\sin(\tan (x))}{\cos^2(x)}$
I can get $-\sec^2(x) \cdot \sin(\tan(x))$ using the chain rule, but then I am stuck. I guess I just need help on understanding how $\sec^2(x) = \frac{1}{\cos^2(x)}$
2. derivative of $\sin(x)\tan(x)$ to $\sin(x) + \tan(x) \cdot \sec(x)$
I can get $\cos(x) \cdot \sec^2(x)$ but then I am unsure what to do. Thanks for any help!
-
## 2 Answers
For your first question, remember that $\sec$ is the reciprocal of $\cos$ by definition.
For the second, you should use the product rule to find the derivative, don't simply take the derivative of each factor. Then use the fact that $\tan{x}=\frac{\sin{x}}{\cos{x}}$ to simplify your expression and get the desired form. Try rewriting every function in terms of $\sin$ and $\cos$ to most easily simplify.
-
Thanks, I see using the product rule I get sinx(1/cos^2(x)) + (sinx/cosx)(cosx), which simplifies to sinx + tanxsecx – Finzz Jan 30 '11 at 22:31
@Finzz, exactly, nice job. – yunone Jan 30 '11 at 23:04
1. $\sec(x)$ is defined to be $\frac{1}{\cos(x)}$, that is the ratio of the length of the hypotenuse and the length of the adjacent side.
2. Note that $\tan(x)\cdot \sec(x) = \frac{\sin(x)}{\cos(x)} \cdot \frac{1}{\cos(x)} = \sin(x) \cdot \frac{1}{\cos^2(x)}$.
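Both identities can be confirmed with SymPy; this is a sketch of my own, not part of the answer.

```python
import sympy as sp

x = sp.symbols('x')

d1 = sp.diff(sp.cos(sp.tan(x)), x)
target1 = -sp.sin(sp.tan(x)) / sp.cos(x)**2
print(sp.simplify(d1 - target1))      # expected: 0, i.e. d/dx cos(tan x) = -sin(tan x)/cos^2 x

d2 = sp.diff(sp.sin(x) * sp.tan(x), x)
target2 = sp.sin(x) + sp.tan(x) * sp.sec(x)
print(sp.simplify(d2 - target2))      # expected: 0, i.e. d/dx [sin x tan x] = sin x + tan x sec x
```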
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9121978878974915, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/279976/show-that-the-distrubtion-function-f-z-of-z-suffices-f-zz-fz2?answertab=oldest
|
# Show that the distribution function $F_Z$ of $Z$ satisfies $F_Z(z)=F(z)^2$
Let $X$ and $Y$ be two stochastically independent, identically distributed random variables with distribution function $F$. Define $Z = \max (X, Y)$.
1) Show that the distribution function $F_Z$ of $Z$ satisfies $F_Z(z)=F(z)^2$.
I got this:
$F_Z(z)=P(\{Z\leq z\})=P(\{\max (X, Y) \leq z\})=...$
A hint would be appreciated.
-
2
try to write $\max{(X,Y)}\le z$ in a nice way and use independence. – n.c. Jan 16 at 11:20
@n.c. This is the nicest that I get: $F_Z(z)=P(\{Z\leq z\})=P(\{\omega:Z(\omega)\leq z\}) =P(\{\omega:\max (X, Y)(\omega)\leq z\}) =P(\{\omega:X(\omega)\leq z\ \lor Y(\omega)\leq z\})$ I'm not sure about the last step. – Kasper Jan 16 at 11:21
3
your last equality is not true. it should be a $\wedge$. Otherwise the sets are not equal. Then just apply independence and identically distributed – n.c. Jan 16 at 11:32
## 1 Answer
Just for completeness:
$$\{\omega\in\Omega:\max{\{X(\omega),Y(\omega)\}}\le z\}=\{\omega\in\Omega:X(\omega)\le z\wedge Y(\omega)\le z\}$$
$"\supset"$: If both, $X(\omega)$ and $Y(\omega)$ are smaller or equal $z$, so is the maximum of both.
$"\subset"$:If the $\max{\{X(\omega),Y(\omega)\}}$ is smaller or equal $z$, then by definition both, $X(\omega)$ $\textbf{and}$ $Y(\omega)$ have to be smaller or equal $z$.
Therefore
$$F_Z(z)=P[Z\le z]=P[\max{\{X,Y\}}\le z]=P[X\le z\wedge Y\le z]=P[X\le z]P[Y\le z]=P[X\le z]^2=F(z)^2$$ where we have used independence and the identical distribution in the last two equalities.
Note that in general, if you have $n\in \mathbb{N}$ and $X_1,\dots,X_n$ iid random variables with distribution function $F(z)$, then the random variable $Z:=\max{\{X_1,\dots,X_n\}}$ has the distribution function $$F_Z(z)=F(z)^n.$$
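A quick Monte Carlo check of $F_Z=F^2$ (a sketch of my own, using standard uniform variables so that $F(z)=z$ on $[0,1]$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X, Y = rng.random(n), rng.random(n)   # iid Uniform(0, 1), so F(z) = z on [0, 1]
Z = np.maximum(X, Y)

for z in (0.25, 0.5, 0.9):
    print(z, np.mean(Z <= z), z**2)   # empirical F_Z(z) is close to F(z)^2 = z^2
```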
-
thanks for this excellent answer ! +1 – Kasper Jan 16 at 13:34
@Kasper You are welcome – n.c. Jan 16 at 13:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8316826224327087, "perplexity_flag": "head"}
|
http://nrich.maths.org/5647/note
|
### Clock Hands
This investigation explores using different shapes as the hands of the clock. What happens as the hands move?
### Watch the Clock
During the third hour after midnight the hands on a clock point in the same direction (so one hand is over the top of the other). At what time, to the nearest second, does this happen?
### Take the Right Angle
How many times in twelve hours do the hands of a clock form a right angle? Use the interactivity to check your answers.
# How Safe Are You?
## How Safe Are You?
We're going to look at opening safes!
Many have dials on them, and you turn the dials - as you see in these two pictures - to open them.
So I'll show you a simple dial first with just the numbers $0 - 5$ on it.
To open the safe, the dial has to have $2$ next to the arrow, like this;
The question for you is, how much turning of the dial did we have to do to get the $2$ at the top?
The next safe we get to has a different dial: it has the numbers $0 - 7$.
How much turning to get the number $5$ at the top? [Shown in the second picture.]
The number that you have to get at the top is often called the "combination" of the safe. These three different safes all start with $0$ [zero] at the top. You have to find the amount of turning to get to the combination, shown in each second picture:
Now have a go at these different safes. Remember they would all start with $0$ [zero] at the top, so, how much turning to get the required combinations shown below?
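If we measure the turning in degrees and assume the dial is rotated so that each number in turn passes under the arrow (the direction question is taken up in the notes below), the amount of turning is just the combination's share of a full $360^\circ$. A small sketch of my own, using the dials shown above:

```python
def turn_degrees(positions, combination):
    """Degrees of turning needed to bring `combination` to the arrow, starting from 0."""
    return 360 * combination / positions

print(turn_degrees(6, 2))   # 120.0 degrees for the 0-5 dial opened at 2
print(turn_degrees(8, 5))   # 225.0 degrees for the 0-7 dial opened at 5
```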
### Why do this problem?
This activity is a rather different way of giving pupils some experience of turning and measuring angles in degrees. It may particularly appeal to those pupils who like visualising something real.
### Possible approach
You might want to make a version of one of the dials out of two pieces of card, fixed in the middle with a paper fastener. In this way, you could ask the class to visualise the turning and offer their solutions with explanations, before checking their thoughts using the card model.
How this is approached and pupils' thoughts will vary a lot according to their age and experience. It might be alright for the youngest learners to give an answer in the anticlockwise direction while perceiving it as clockwise, just because it is the numbers that are rotating. For older pupils it would be a good discussion point to consider in which direction the turning occurs. Are their answers for anticlockwise or clockwise turning?
You could print out this sheet of the dials for children to work on in pairs.
### Key questions
Which way are you rotating/turning your hand?
Is it more than half a full turn each time? Can you be more exact?
How many degrees are there in a circle?
### Possible extension
Pupils with some knowledge of the $360^\circ$ complete turn will probably be able to have a go at the later questions. More advanced pupils would be able to create their own problems for other pupils to answer. They might like to use these images as printable dials for those harder questions they may want to set.
### Possible support
A circular protractor would be useful for some pupils. Some preliminary discussion could be had about the turning involved in an analogue clock.
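As a worked example of the kind of calculation involved (my own illustrative numbers, not from the NRICH page): the $0$-$5$ dial divides a full turn into $6$ equal steps, so each step is $360^\circ \div 6 = 60^\circ$; turning the dial until the $2$ sits at the top therefore takes $2 \times 60^\circ = 120^\circ$ one way, or $360^\circ - 120^\circ = 240^\circ$ the other way.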
http://mathoverflow.net/revisions/7832/list
As long as the form is positive definite and the unit ball is convex, you get a perfectly good Banach space using any symmetric $n$-linear form on a real vector space $V$. The degree $n$ is necessarily even. It is equivalent to defining the norm as the $n$th root of a homogeneous degree $n$ polynomial. $\ell^p$ is an example for any even integer $p$. There are many other examples. I found a paper, Banach spaces with polynomial norms, by Bruce Reznick, that studies these norms. He obtains various results; the most appealing one to me at a glance is that these Banach spaces are all reflexive.
Off-hand I can't think of any simple way to recover positive definiteness starting with odd polynomials. The cube of the norm on $\ell^3$ is a polynomial in the absolute values of the coordinates rather than the coordinates themselves.
Addendum: To address Darsh's comment, what you would look at in the complex case is self-conjugate polynomials of degree $(n,n)$. Equivalently, as with all complex Banach norms, the realification is a real Banach norm which is invariant under complex scalar rotation.
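A small numerical sketch of the real case (my own illustration, using $p=4$ as the even degree; the function name and test vectors are arbitrary): the $\ell^4$ norm is the $4$th root of the homogeneous quartic $\sum_i x_i^4$, and one can spot-check the triangle inequality on random vectors.

```python
import numpy as np

def poly_norm(x, p=4):
    # p-th root of the homogeneous degree-p polynomial sum(x_i**p), with p even
    return float(np.sum(x**p)) ** (1.0 / p)

rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.normal(size=10), rng.normal(size=10)
    print(poly_norm(x + y) <= poly_norm(x) + poly_norm(y) + 1e-12)  # expect True
```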
http://mathoverflow.net/questions/116026/calculating-the-lebesgue-decomposition-of-a-measure
## Calculating the Lebesgue decomposition of a measure [closed]
How should we calculate the Lebesgue decomposition of a measure? Please explain it with an example so that I can get the whole idea behind it.
-
Have a look at the FAQ. This question does not belong here. – Anthony Quas Dec 10 at 23:49
This is a wrong place for this sort of questions. Please, read FAQ. – Alexandre Eremenko Dec 10 at 23:52
## 1 Answer
Let e.g. $\nu$ be a finite Borel measure on $[a,b]$ and $m$ the Lebesgue measure. The function $[a,b]\ni x\mapsto \nu\big([a,x)\big)$ is then a BV function. It is therefore differentiable $m$-a.e., and its derivative $\rho(x)$ coincides with the Radon-Nikodym derivative of the absolutely continuous part of $\nu$. Knowing $\rho$ you can deduce the Lebesgue decomposition of $\nu$ with respect to the Lebesgue measure: $\nu=\nu_a+\nu_s$ with $\nu_a\perp\nu_s$, where $\nu_a(E)=\int_E\rho(x)\,dm(x)$.
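For a concrete illustration (my own example, not part of the answer above): on $[0,1]$ take $\nu = m + \delta_{1/2}$. The function $x\mapsto \nu\big([0,x)\big)$ equals $x$ for $x\le 1/2$ and $x+1$ for $x>1/2$, so its derivative is $1$ at $m$-a.e. point, i.e. $\rho\equiv 1$. Hence $$\nu_a(E)=\int_E 1\,dm=m(E), \qquad \nu_s=\delta_{1/2},$$ which is exactly the Lebesgue decomposition $\nu=\nu_a+\nu_s$ with $\nu_a\ll m$ and $\nu_s\perp m$.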
-
"Pietro Majer", thsnk you a lot. – Omid Saba Dec 11 at 4:34
http://mathoverflow.net/questions/29300?sort=newest
## What’s wrong with the surreals?
Of all the constructions of the reals, the construction of the surreals seems the most elegant to me.
It seems to immediately capture the total ordering and precision of Dedekind cuts at a fundamental level since the definition of a number is based entirely on how things are ordered. It avoids, or at least simplifies, the convergence question of Cauchy sequences. And it naturally transcends finiteness without sacrificing awareness of it.
The one "rumor" I've consistently heard is that it is hard to naturally define integrals and derivatives in the surreals, although I have yet to see a solid technical justification of that.
Are there known results that suggest we should avoid further study of this construction, or that show limitations of it?
-
Regarding the difficulties wtih integrals and derivatives, see the appendix to the second edition of On Numbers and Games. I don't think that this is really relevant to your question, though. Once you have the reals you can just develop analysis as usual; you don't have to carry the surreals around as excess baggage everywhere if you don't want to. – Timothy Chow Jun 24 2010 at 1:58
In asking a big-picture question I was looking for an equally vague-but-intuitive answer, although of course technical evidence is extremely useful! Your reference to the end of On Numbers and Games is probably the single best answer so far, or rather, Conway's epilogue itself. Thanks, Timothy. (I think I'll rephrase this as another answer). – tylern Jun 24 2010 at 22:06
## 5 Answers
At a recent conference in Paris on Philosophy and Model Theory (at which I also spoke), Philip Ehrlich gave a fascinating talk on the surreal numbers and new developments, showcasing them as unifying many disparate paths in mathematics. The abstract is available here, on page 8, and here is his draft article on the Absolute Arithmetic Continuum. The principal new technical development is a focus on the underlying tree.
Philip expressed his frustration that Conway often treated his creation of surreal numbers as a kind of game or just-for-fun project---an attitude reinforced by the excellent Knuth book---whereas they are in fact a profound mathematical development unifying disparate threads of mathematical investigation into a single unifying structure. And he made a very strong case for this position at the conference.
Meanwhile, perhaps exhibiting Philip's point, at a conference on logic and games here at CUNY, I once heard Conway describe the surreal numbers as one of the great disappointments of his life, that they did not seem after all to have the profound unifying nature that he (and many others) thought they might. Philip Ehrlich strove to make the case that Conway was his own worst enemy in promoting the surreals, and that they actually do have the unifying nature Conway thought they did, but that Conway scared people away from this perspective by treating them as a toy. I encourage you to read Philip's articles.
So my answer, supporting Philip, is that nothing is wrong with the surreals---please have at them! Of course they have their own issues, which will need to be surmounted, but we shall all benefit from a greater investigation of them.
-
Unifying nature in what sense? That is, what are examples of mathematical (as opposed to philosophical) concepts from outside of pure set theory were once seen as unconnected but now are seen as unified by means of this work? And what does the tree allow one to do which couldn't be done before? Or is the interest internal to logic and set theory? A skim of the draft copy did not clarify this for me. – Boyarsky Jun 24 2010 at 4:45
Apparently the Continental philosopher Alain Badiou regards the surreals as being of substantial philosophical importance, since they supply a way of talking about the infinite which still satisfy the ordered field axioms (in contrast with, say ordinal arithmetic). (I can't honestly claim to understand Badiou, but I do enjoy reading him: it's like reading about the foundations of mathematics from some parallel universe where Russell became a disciple of Husserl instead of Frege..!) – Neel Krishnaswami Jun 24 2010 at 8:29
It seems the main thing that would be needed to broaden the interest of surreals would be to provide some tools from transferring results back from the surreals back to more classical number systems (analogous to the transfer principle in nonstandard analysis). The situation here reminds me of that of generalised functions in analysis. There are many, many ways to generalise the concept of a function, but only distributions have really been successful, because there are ways to get from distributions back to classical functions, e.g. by convolving with a test function. – Terry Tao Jun 25 2010 at 0:04
For the first 50 years after Hensel, $p$-adic analysis was championed by a few but viewed as esoteric by most; intrinsic beauty wasn't enough. Then Dwork used it to prove a Weil Conjecture and Tate invented rigid-analytic spaces to make analytic continuation possible over totally disconnected fields (with applications in number theory), and now it's a huge industry. Surreals are in need of a Dwork and Tate. (Although the set theory they require is elementary to set theorists, I conjecture it is a real obstacle for many without such expertise. Look at the history of non-standard analysis.) – Boyarsky Jun 25 2010 at 1:21
A Boyarsky turning to Dwork for an example. Am I the only one amused by this? – KConrad Jun 25 2010 at 3:52
This is not a major issue, but there was a remark made in the master's thesis that can be found at the following address http://www.mamane.lu/concoq/ that there is a small gap in the proof of the transitivity of the order relation in Conway's original book. See the report, pages 49-53.
-
To me one of the more fascinating aspects of the surreals is the application by Kruskal and others to construct higher order asymptotic expansions. For example, if you want to understand the asymptotics of the function $$f(x)= {1\over 1-x}+e^{-1/x}$$ on $(0,\epsilon)$ and distinguish it from $g(x)={1\over 1-x}$, you look at the "series" $$1+x+x^2+x^3+\dots+e^{-1/x}.$$ Kruskal and his co-authors have used surreal numbers to give an approach to these expansions and applications.
This type of expansion can also be dealt with using the transseries of Ecalle or the logarithmic-exponential series developed in model theory.
-
I just read the epilogue of On Numbers and Games as suggested by Timothy Chow, and here I see very concrete references to some technical difficulties that may demystify some research in this area, although it is apparent by other answers that there is still much optimism.
Some remarks based on the epilogue (written in 2000 by Conway):
There is a nice definition of the surreals which does not require equality as a defined relation. It is not formally given, but I'm guessing we can think of it as a mapping from an ordinal set to signs {-,+}, each sign being a direction we take in the surreal tree. Then identity is equality.
However, Conway remarks that this has two problems:
1. It forsakes the "genetic" (his word, also in quotes at first) approach of the L,R definition. I don't fully understand this, but I'm guessing he means that we're building everything on the intuition of a total ordering (and maybe a "time-of-creation" idea), and the surreals will always be identifiable with L,R sets, so why not just define them that way?
2. The sign-sequence definition requires that the ordinals are defined first.
Conway goes on to discuss work by Simon Norton (a proposed definition of an integral) and Martin Kruskal.
The general direction here is to define things in terms of (L,R) sets (classes??) in such a way that equal numbers (in the defined equality) give equal answers; and that classical analysis remains intact.
Conway gives Norton's integral definition, which has some good properties, but fails to integrate the surreal exponential function in accordance with classical analysis (we get $e^x$ instead of $e^x-1$ when integrating over $[0,x]$).
In summary, I'm choosing to interpret all of these comments and answers together (thanks to all) as: the surreals are indeed a worthwhile construction, although there is a noted lack of progress on extending calculus to work equally elegantly in a surreal-general setting.
In case others are curious, here are some references (I have read none of them, yet) Conway gives in this epilogue:
• The Theory of Surreal Numbers by Harry Gonshor
• Foundations of Analysis over Surreal Number Fields by Norman Alling
• Real Numbers, Generalizations of the Reals, and Theories of Continua by Philip Ehrlich
-
Gonshor's book is quite elegant and starts defining surreal numbers right away as sequences on {-,+}. Of course you need to define ordinals first, but for me that's not a disadvantage (maybe for Conway it is, since he seems to try to avoid "restricting" himself to a particular axiomatic system such as ZFC, assuming that it's possible to even make sense of such a viewpoint). – David FernandezBreton Jun 24 2011 at 1:40
Conway himself lists a few disadvantages in On Numbers and Games, Chapter 2.
One that can be dealt with quickly is that it is quite tricky to make the process stop after constructing the reals! We can cure this by adding to the construction the proviso that if $L$ is non-empty but with no greatest member, then $R$ is non-empty with no least member, and vice versa. This happily restricts us exactly to the reals. The remaining disadvantages are that the dyadic rationals receive a curiously special treatment, and that the inductive definitions are of an unusual character. From a purely logical point of view these are unimportant quibbles (we discuss the induction problems later in more detail), but they would predispose me against teaching this to undergraduates as "the" theory of real numbers.
[Edit: I interpreted the question as, what is wrong with the construction of the real numbers via surreals? This might have been a misinterpretation. The question does, after all, literally ask what is wrong with the surreals. Obviously there is nothing wrong with the surreals. I thought this was obvious and so I assumed the question must have been something else.]
-
I actually remember one more disadvantage mentioned by Conway: equality is a defined relation, and a pretty subtle one at that. It's not too easy to figure out when two expressions for surreal numbers are actually equal. – Alon Amit Jun 24 2010 at 15:14
Equality is also a "defined relation" if you define the real numbers as Cauchy sequences of rationals, or even if you define the rational numbers as ordered pairs of integers, although in most cases we sweep it under the rug by talking about "equivalence classes." So I don't really see why the fact that equality of surreals is a "defined relation" should be regarded as a disadvantage. – Mike Shulman Jun 25 2010 at 5:19
@Timothy: I don't want to blindly go ahead and pretend; I want clear definitions first. Notions like "set of all maps" and "set of all subsets" underlie many constructions. The way of thinking about many things is within a set-theoretic framework. How do you define and construct algebraic closures of fields and topological absolute Galois groups, or define what a "field" is, without set theory? There are reasons I think the analogy with category theory is not so apt, but this isn't the place to get into it. Saying "classes behave just like sets" sounds like word games; it can't be that easy. – Boyarsky Jun 26 2010 at 7:08
@Boyarsky: It really is that easy. Every set-theoretic construction you would want to do can be done with classes. You just have to make sure not to run afoul of something like Russell's paradox, and that's something you've had to be careful of in "ordinary" set theory anyway. If you're not happy flying by the seat of your pants (which 99% of mathematicians do in the case of set theory; how many mathematicians can even list the axioms of ZFC?), then take a look at the axioms for Morse-Kelley or NBG. Don't assume that classes are difficult just because they're unfamiliar. – Timothy Chow Jun 27 2010 at 17:46
The ordinals and cardinals also form proper classes, but these number concepts have proved enormously fruitful. For that matter, the collection of all sets is also a proper class, but we still work with sets. So why not surreals? Of course, one must respect the set/class issues, but this is neither difficult nor mysterious. – Joel David Hamkins Jun 28 2010 at 18:07
http://math.stackexchange.com/questions/166596/when-does-non-negativity-of-the-integral-of-a-function-imply-that-the-function-i
# When does non-negativity of the integral of a function imply that the function itself is non-negative?
Let $(\Omega,\Sigma)$ be a measurable space and $(\omega_k)_{k\in\mathbb{N}}$ a sequence of elements of $\Omega$. Let $$\mathcal{M}:=\left\{\sum_{k=1}^\infty a_k\cdot\delta_{\omega_k}: \quad(a_k)_{k\in\mathbb{N}}\subset(0,+\infty),\quad \sum_{k=1}^\infty a_k=1\right\},$$ where $\delta_{\omega_k}$ denotes the point mass at $\omega_k$. Let $f:\Omega\rightarrow\mathbb{R}$ be a measurable function and denote by $\mathcal{N}$ the set of measures $\mu\in\mathcal{M}$ with respect to which $f$ is $\mu$-integrable. Assume $\mathcal{N}\neq\emptyset$. If $\vert f\vert$ is bounded, then we have the following implication: $$\forall \mu\in\mathcal{N}\qquad\int_{\Omega} fd\mu\geq 0\Rightarrow \forall \omega\in\Omega:f(\omega)\geq 0.$$ But is it also necessary to have $\vert f\vert$ bounded, or can one weaken this condition?
-
## 1 Answer
If $|f|$ is not bounded, then $\int_{\Omega} f d \mu$ might not be well defined. Keep in mind that if $\mu= \sum_{k=1}^\infty a_k\cdot\delta_{\omega_k}$ then
$$\int_{\Omega} f d \mu= \sum_{k=1}^\infty a_k\cdot f(\omega_k) \,.$$
To see that you cannot drop the boundedness: if $f$ is unbounded, for each $k$ pick some $\omega_k$ so that $|f(\omega_k)| > 2^k$, and set $\mu = \sum_{k=1}^\infty \frac{1}{2^{k}}\cdot\delta_{\omega_k}$.
Then $\int_{\Omega} f d \mu$ is an infinite series, each term being in absolute value at least one, and they can be positive and negative....
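To make this concrete (my own instance of the construction above, with explicit choices): take $\Omega=\mathbb{N}$, $\omega_k=k$, $f(k)=(-2)^k$ and $a_k=\frac{1}{2^k}$, so that $\sum_k a_k =1$. Then $a_k f(\omega_k)=(-1)^k$ for every $k$, and the would-be integral $\sum_{k=1}^\infty a_k f(\omega_k)$ is neither absolutely convergent nor divergent to $\pm\infty$, so $\int_\Omega f\,d\mu$ has no meaning for this $\mu\in\mathcal{M}$.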
*P.S.* The above idea actually can be used to show that $f \in L^1(\mu) \ \forall \mu \in {\mathcal M} \Leftrightarrow |f|$ is bounded. Also, keep in mind that when you calculate the sum $\sum_{k=1}^\infty a_k\cdot f(\omega_k)$ you must have absolute convergence, or "absolute" divergence to infinity; you cannot speak of conditional convergence, because for the measure $\sum_{k=1}^\infty a_k\cdot\delta_{\omega_k}$ the order in which you list the terms doesn't matter.
The only way you could make sense of this by dropping the boundedness condition is by asking that for all $\mu \in {\mathcal M}$ you have either $f \in L^1(\mu)$ and $\int f d \mu \geq 0$, or $\int f d \mu = + \infty$. But this condition is equivalent to the negative part $f^-$ of $f$ being bounded....
-
Thanks a lot for the comment. I have changed the question in that I am assuming that f is $\mu_0$-integrable for some $\mu_0\in\mathcal{M}$. I think under this assumption, the implication still holds for $\vert f\vert$ bounded, since one can construct a measure $\mu\in\mathcal{M}$, based on the measure $\mu_0$, which would give too much weight to the negative values of $f$ and make the integral negative... – Andy Teich Jul 4 '12 at 15:41
http://math.stackexchange.com/questions/91178/splitting-fields-of-polynomials-over-finite-fields
Splitting fields of polynomials over finite fields
I can't follow a statement in my notes:
"Let $K$ be a finite field, with $f \in K[X]$ an irreducible polynomial of degree $d$. Then any finite extension $L/K$ is normal, and so if $L$ contains one root of $f$ then it contains all the roots of $f$. Therefore, the splitting field $L$ of $f$ is of the form $K(\alpha)$, where $f$ is the minimal polynomial for $\alpha$."
I can see why $L$ must be normal (any finite extension of a finite field is Galois), and so by definition if $L$ contains one root of $f$ then it contains all the roots of $f$. I don't follow the next sentence at all:
i) Why must $L$ be the splitting field of $f$? EDIT: Is this $L$ now a 'new' $L$?
ii) If $L$ is the splitting field of $f$, why must it be of the form $K(\alpha)$?
-
For ii), I can see why it would be true if $f$ were separable (primitive element theorem), but not why it's true for inseparable $f$ – Jonathan Dec 13 '11 at 18:15
Another way of explaining the problem in your comment would be to observe that over a finite field all the irreducible polynomials are separable. An irreducible inseparable polynomial is always a $p$th power of a polynomial with coefficients in $K^{1/p}$. But when $K$ is finite, the Frobenius automorphism is onto, and hence $K^{1/p}=K$. Thus the polynomial would be a $p$th power of a polynomial in $K[x]$, and hence couldn't be irreducible after all. – Jyrki Lahtonen Dec 17 '11 at 8:29
1 Answer
i) Your edit is correct. The statement is telling you that any finite extension of a finite field is normal. Since the splitting field of $f$ is necessarily finite (with degree at most $d!$), it must also be normal.
ii) $K(\alpha)$ is a finite extension of $K$, and so (by the previous sentence) is a normal extension. It contains a root of $f$, so must therefore contain all the roots of $f$ by normality, and therefore contains the splitting field of $f$. To see that $K(\alpha)$ is precisely the splitting field of $f$, observe that a splitting field of $f$ must contain $K$ and $\alpha$ and so contains $K(\alpha)$.
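A concrete example (mine, not from the notes): take $K=\mathbb{F}_3$ and $f=x^2+1$, which is irreducible over $\mathbb{F}_3$ because $-1$ is not a square modulo $3$. If $\alpha$ is a root of $f$, then $K(\alpha)\cong\mathbb{F}_9$ already contains the other root $-\alpha$, so $K(\alpha)$ is precisely the splitting field of $f$, exactly as the statement in the notes predicts.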
-
http://mathhelpforum.com/algebra/22366-radical-question.html
Thread:
1. Radical Question
I got a 3 and a 10 in the radical, but I don't know what to do next.
2. Originally Posted by fluffy_penguin
I got a 3 and a 10 in the radical, but I don't know what to do next.
$270=3^3 \times 10$
so:
$\sqrt[3]{270}=3 \sqrt[3]{10}$
RonL
http://mathhelpforum.com/algebra/52446-easy-complex-equation.html
# Thread:
1. ## easy complex equation
I know this is an easy equation from how the rest of the worksheet is; I just don't know how to solve it :s Only just started the topic.
solve for x and y
x + iy = 4 - 2i
2. Originally Posted by djmccabie
I know this is an easy equation from how the rest of the worksheet is; I just don't know how to solve it :s Only just started the topic.
solve for x and y
x + iy = 4 - 2i
Just compare real and imaginary parts on each side: $x=4$ and $y=-2$.
3. Ahh I see, I knew it would be simple, thanks
http://math.stackexchange.com/questions/58107/fast-algorithm-for-adding-an-equation-to-a-system?answertab=oldest
# Fast Algorithm For Adding An Equation To A System?
Assume an $N \times N$ matrix $A$ and a length $N$ vector $b$. I've already solved the system $Ax = b$ for $x$ using standard methods. (If you want you can assume that I have the inverse of $A$ as well.)
Now, I have a block matrix $$A' = \begin{pmatrix} A & V \\ U & 0 \\ \end{pmatrix}$$
and a block vector $b' = (b, b^*)^T$. $U$ is a $1 \times N$ matrix, $V$ is an $N \times 1$ matrix, and the zero in the lower right corner is a scalar (or $1 \times 1$ matrix if you prefer).
In other words $A'$ is a matrix created by adding one row and one column onto $A$ and $b'$ is a vector created by adding one element onto $b$. I want to solve the system $A'x' = b'$ for $x'$. This is the same system as $Ax=b$ except that it has one extra equation and one extra variable to be solved for. Is there an efficient (less than $O(N^3)$) algorithm for this given that I've already computed the solution to $Ax=b$?
-
## 1 Answer
Look up the Sherman-Morrison-Woodbury formula; that precisely answers your question. You will need to solve $Ay = b$ though, or, if you have the inverse of $A$, the solution can be updated in $\mathcal{O}(N^2)$ steps. A nice thing about Sherman-Morrison-Woodbury is that it works even if $V \in \mathbb{R}^{N \times p}$ and $U \in \mathbb{R}^{p \times N}$; it will cost $\mathcal{O}(pN^2)$. The special case with $p=1$ is also called the Sherman-Morrison formula.
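A minimal numerical sketch of the rank-one case the answer mentions (my own illustration of the Sherman-Morrison update, not of the exact bordered system in the question; the helper name and test sizes are mine): given $A^{-1}$, solving $(A+uv^T)x=b$ needs only matrix-vector products, hence $\mathcal{O}(N^2)$ work.

```python
import numpy as np

def sherman_morrison_solve(A_inv, u, v, b):
    """Solve (A + u v^T) x = b in O(N^2) given A^{-1} (Sherman-Morrison)."""
    Ainv_b = A_inv @ b            # O(N^2)
    Ainv_u = A_inv @ u            # O(N^2)
    denom = 1.0 + v @ Ainv_u      # scalar 1 + v^T A^{-1} u
    return Ainv_b - Ainv_u * (v @ Ainv_b) / denom

# quick check against a direct solve
rng = np.random.default_rng(0)
N = 5
A = rng.normal(size=(N, N)) + N * np.eye(N)
u, v, b = rng.normal(size=N), rng.normal(size=N), rng.normal(size=N)
x_fast = sherman_morrison_solve(np.linalg.inv(A), u, v, b)
print(np.allclose(x_fast, np.linalg.solve(A + np.outer(u, v), b)))  # expect True
```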
-
I saw that before I posted here. The problem is that the matrix multiplies involved in applying it make it also O(N^3). You have to multiply two NxN matrices at some point, which is an O(N^3) operation. – dsimcha Aug 17 '11 at 18:07
@dsimcha: If you have the inverse of $A$, all you will be doing is matrix vector products and hence the additional cost will only be $\mathcal{O}(N^2)$ – user17762 Aug 17 '11 at 18:09
@Sirvarm: Look at equation 1 in the Wikipedia article you linked to. IIUC, A^-1 * U * (C - V * A^-1 * U) * V is an NxN matrix. You then multiply this monstrosity by A^-1, hence O(N^3). – dsimcha Aug 17 '11 at 18:58
@dsimcha: You multiply a vector by a matrix. Hence the cost is $\mathcal{O}(N^2)$. Note that $U$ and $V$ are vectors. – user17762 Aug 17 '11 at 19:03
Right, but (C - V * A^-1 * U) is a scalar, A^-1 ^ U is a column vector and V is a row vector. Thus this multiplication produces a matrix. – dsimcha Aug 17 '11 at 19:13
http://mathoverflow.net/questions/99984?sort=votes
## Line bundles and rational singularities
Hi, I have some problem to understand the proof of lemma 3.2 of this article: http://www.ams.org/journals/jams/2001-14-03/S0894-0347-01-00368-X/.
The lemma states the following: Let $X$ be a variety and $f: Y \rightarrow X$ a resolution of singularities. Assume that $X$ has rational singularities. Then a line bundle $L$ on $Y$ is the pullback $f^*M$ of some line bundle $M$ on $X$ if and only if the restriction of $L$ to each formal fibre of $f$ is trivial. Moreover, when this holds, $M=f_*L$.
For the proof of the "if" part, suppose that the restriction of $L$ to each formal fibre is trivial. The theorem on formal functions shows that the completions of the stalks of the sheaves $R^if_* \mathcal{O}_Y$ and $R^if_*L$ at any point $x \in X$ are isomorphic for each $i$. Since $X$ has rational singularities, $R^if_*L=0$ for all $i>0$ and $M=f_*L$ is a line bundle on $X$.
Since $f^*M$ is torsion free, the natural adjunction map $\eta: f^*f_*L \rightarrow L$ is injective, so there is a short exact sequence $$ 0 \rightarrow f^*f_*L \stackrel{\eta}{\rightarrow} L \rightarrow Q \rightarrow 0.$$ By the projection formula and the fact that $X$ has rational singularities, $R^if_*(f^*M)=M \otimes R^if_* \mathcal{O}_Y=0$ for all $i>0$. The fact that $\eta$ is the unit of adjunction implies that $f_* \eta$ has a left inverse, and in particular is surjective. Applying $f_*$ to the exact triple we conclude that $f_*Q=0$, and, by the theorem on formal functions, $f_*(Q \otimes L^{-1})=0$; in particular $Q \otimes L^{-1}$ has no nonzero global sections. Tensoring the exact triple with $L^{-1}$ gives a contradiction, unless $Q=0$. Hence $\eta$ is an isomorphism and we are done.
I did not understand this last step. Tensoring the short exact sequence with $L^{-1}$ and then taking global sections, we get $\Gamma(Y, f^*M \otimes L^{-1}) \cong \Gamma(Y, \mathcal{O}_Y)$, because the last term is zero. How can I deduce from this that $f^*M \otimes L^{-1} \simeq \mathcal{O}_Y$ and then $f^*M \cong L$? Where is the contradiction? Thank you
-
## 2 Answers
For any line bundle $F$, giving a map of sheaves $\mathcal O_X \to F$ is equivalent to giving a global section of $F$.
In your case, take $F=Q \otimes L^{-1}$, with the map given by your exact sequence tensored by $L^{-1}$. As $F$ has no non-zero global section, the aforesaid map is trivial, so $\eta \otimes Id_{L^{-1}}$ is an isomorphism, and therefore so is $\eta$.
-
Since $X$ has rational singularities, it is normal. Then the singular locus of $X$ has codimension at least $2$ and $f \colon Y \to X$ is an isomorphism in codimension $1$.
This implies that the support of $Q$ has codimension at least $2$, hence $c_1(Q)=0$.
Therefore your exact sequence gives $c_1(f^*M)=c_1(L)$, that is $c_1(f^*M \otimes L^{-1})=0$, that is $f^*M \otimes L^{-1} \in \textrm{Pic}^0(Y)$.
Now $H^0(Y, f^*M \otimes L^{-1})=H^0(Y, \mathcal{O}_Y)=\mathbb{C}$ implies that $f^*M \otimes L^{-1}$ is a line bundle of degree $0$ with a non-zero global section, hence it must be isomorphic to $\mathcal{O}_Y$.
This implies $Q=0$: in fact, we obtain $$\textrm{Hom}(f^*M, L)=H^0(Y, f^*M^{-1} \otimes L)=H^0(Y, \mathcal{O}_Y)=\mathbb{C},$$ so $\eta\colon f^*M \longrightarrow L$ is necessarily an isomorphism.
-
Thank you, Francesco. But I don't know how to use Chern classes and Picard groups. – emmy Jun 19 at 15:35
http://mathhelpforum.com/calculus/22398-geometric-interpetation-addition-two-dot-products.html
# Thread:
1. ## Geometric interpretation of the addition of two dot products???
Hi,
Let's assume that one needs to calculate the dot product between two 3-dimensional unit vectors $u_1$ and $u_2$. This can be interpreted as finding the cosine of the angle between them, i.e. $\cos\psi_m$.
Further assume that another pair of 3-dimensional unit vectors exists, $u_3$ and $u_2$. Note that one of the unit vectors is the same in both cases. The dot product between these two can also be expressed through the angle between them as $\cos\psi_n$.
What will be the result if I would like to add/subtract these two quantities, $\cos\psi_m \pm \cos\psi_n$?
I would appreciate it if someone could also explain to me geometrically the implications of the addition/subtraction process. Is there any other way of expressing the result of the addition/subtraction as a single term?
Thanks a lot
Regards
Alex
2. First, you need to be very sure that you understand that the dot product of two vectors is a number and not an angle. For unit vectors that number is the cosine of the angle between the vectors. If it is positive the angle is acute; if it is negative the angle is obtuse; if it is zero the angle is a right angle. As to your actual question, what does adding the cosines of any two angles with a common ray mean? Surely there are many different possible meanings.
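A small numerical illustration of that point (my own sketch; the vectors are arbitrary): the dot product of two unit vectors is just a number, and its sign tells you whether the angle is acute, right, or obtuse.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

u1, u2, u3 = unit([1, 0, 0]), unit([1, 1, 0]), unit([-1, 2, 0])
for a, b in [(u1, u2), (u1, u3)]:
    c = float(a @ b)                                   # cosine of the angle
    angle = np.degrees(np.arccos(np.clip(c, -1, 1)))   # the angle itself
    kind = "acute" if c > 0 else ("obtuse" if c < 0 else "right")
    print(round(c, 3), round(angle, 1), kind)
```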
3. Yeap, I do know that the dot product between two vectors is a scalar. Let's say that the vectors are not orthogonal, so that the cosine is not zero. So does the addition of $\cos\psi_n$ with $\cos\psi_m$ have a geometrical meaning?
Can you please list some of those meanings?
Thanks again
Best Regards
4. Originally Posted by tecne
So does the addition of $\cos\psi_n$ with $\cos\psi_m$ have a geometrical meaning?
I have no idea.
What does 0.5 + 0.75 mean geometrically?
What does 0.25 - 0.5 mean geometrically?
What does cos(A)+cos(B) mean geometrically?
http://mathoverflow.net/questions/58688/subspace-of-mathbbrn-spanned-by-the-image-of-convex-n-1-polyhedra-under
## Subspace of $\mathbb{R}^n$ spanned by the image of convex $(n-1)$-polyhedra under the face-counting map
Fix $n \in \mathbb{N}$. A convex polyhedron $C$ in $\mathbb{R}^n$ is the convex hull of finitely many points with nonempty interior. For $H$ a supporting hyperplane, ie $C$ is contained in one of the two closed half-spaces bounded by $H$, we call $H \cap C$ a $j$-face of $C$, where $j$ is the affine dimension of $H \cap C$. By convention, $\varnothing$ is called a $-1$-face of $C$ and $C$ an $n$-face of itself.
Define a function $F$ from the set of convex polyhedra to $\mathbb{R}^{n+2}$ by coordinates, so that $F(C) = (a^C_{-1}, ..., a^C_n)$, where $a^C_j$ is the number of $j$-faces of $C$ for $j=-1,...,n$. Let $W$ be the affine subspace of $\mathbb{R}^{n+2}$ generated by $\operatorname{im} F$.
It's clear that $a^C_{-1}=1$ and $a^C_n=1$. Euler's formula $\displaystyle \sum_{j=-1}^n (-1)^j a^C_j = 0$ (which may be more familiar as the Euler characteristic $V+E-F=2$ in the case of $n=3$) is a third affine relation between the $a^C_j$'s. Hence, $\operatorname{dim}W \le n-1$.
Is it always true for any n that $\operatorname{dim}W = n-1$? Put differently, for any $n$, are the three equations above the only affine relationships that must be satisfied by $a^C_j$'s for all convex polyhedra $C \subset \mathbb{R}^n$, or is there some $n$ in which there is another relation?
I seem to recall an affirmative answer to this, but I can't remember how it was solved or where I found it.
-
## 1 Answer
Yes this is always true. One just needs to exhibit enough polyhedra so that the span of their f-vectors is $n-1$ dimensional. Such a family is given by the polyhedra $\Delta^k\times I^{n-k}$, where $\Delta^k$ is the $k$-simplex and $I^{k}$ is the $k$-cube.
The same argument can be used to show that the corresponding dimension for simplicial polyhedra is $\lfloor \frac{n}{2}\rfloor +1$, and so the only affine relations are the Dehn-Sommerville relations. One looks at the family $\Delta^k\times \Delta^{n-k}$, $k=0,1,\dots,\lfloor\frac{n}{2}\rfloor$.
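As a concrete check of the first claim in the case $n=3$ (with polytopes of my own choosing, not the family above): the tetrahedron, the cube and the octahedron have $$F=(1,4,6,4,1),\qquad (1,8,12,6,1),\qquad (1,6,12,8,1),$$ and the differences $(0,4,6,2,0)$ and $(0,2,6,4,0)$ are linearly independent, so these three $f$-vectors are affinely independent and $\dim W\ge 2=n-1$.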
-
Excellent! The answer is both easy to understand and to verify. I'm a bit upset I couldn't think of it myself. – The Cheese Stands Alone Mar 17 2011 at 12:06
http://math.stackexchange.com/questions/37363/prime-numbers-stretch-to-infinity-but-what-about-the-distance-between-them
# Prime numbers stretch to infinity, but what about the distance between them?
That is, let $p_n$ be the nth positive prime number. Does $$L = \lim\limits_{n \to \infty} \left( p_{n+1} - p_n \right)$$ equal infinity?
-
I doubt whether the behavior of that sequence is completely known, since it is still unsolved whether there are infinitely many twin primes. – Tobias Kildetoft May 6 '11 at 7:47
## 3 Answers
No, the limit (probably) does not exist.
The sequence $p_{n+1} - p_n$ has a name: it is called the sequence of prime gaps. Define $g_n = p_{n+1} - p_n$, then you are interested in the sequence $g_1, g_2, \dots$. The following facts are known:
• For any $k$, it is easy to see that the $(k-1)$ numbers $k!+2, k!+3, \dots, k!+k$ are all non-prime. Thus there exist arbitrarily long sequences of composite numbers; in other words, $g_n$ can get arbitrarily large: for any $k$ there exists $n$ such that $g_n \ge k$. Equivalently, $$\limsup_{n \to \infty} (p_{n+1} - p_n) = \infty.$$
• There is the (believed) twin-prime conjecture which states that there exist infinitely many primes that differ by 2; this means that $g_n$ takes the value $2$ for infinitely many $n$, or $$\liminf_{n \to \infty} (p_{n+1} - p_n) = 2.$$ Even if it turns out to be false for 2, there is Polignac's conjecture that for any even integer $k$, there exist infinitely many primes that differ by $k$; this has not been proved or disproved for any $k$. Even better, it has been proved in 2005, assuming a certain conjecture (which I think is weaker than the Riemann hypothesis) that there are infinitely many $n$ for which $g_n$ is at most $16$, thus $$\liminf_{n \to \infty} (p_{n+1} - p_n) \le 16.$$
• The "average" gap between the $n$th prime and the next increases logarithmically: $g_n \approx \ln p_n$. So you can say that the distance between primes does become infinite "on average"; though infinitely often it touches very small numbers.
-
I accepted your answer essentially because of the last topic. I should have thought in terms of average too, quite interesting. – Luke May 7 '11 at 21:37
– ShreevatsaR May 15 at 6:06
Assuming some fairly reasonable conjectures, $(p_{n+1}-p_n)$ is a divergent sequence (i.e. a limit does not exist). For example:
Conjecture: There exists an infinite number of twin primes. (where $p_{n+1}-p_n=2$ an infinite number of times)
Conjecture: There exists an infinite number of prime pairs that differ by 4. (where $p_{n+1}-p_n=4$ an infinite number of times)
However, we can observe that $\lim\sup (p_{n+1}-p_n)=\infty$, since $n!+k$ is composite (properly divisible by k) for all $n \geq 2$ and $2 \leq k \leq n$. Therefore there are arbitrarily large prime gaps.
-
If the limit is not infinity, then there can be found infinitely many primes differing by less than some bound M. Answering whether that is the case seems to be an open problem still.
The closest (contingent) answer after some quick searching seems to be the one noted at the bottom of this page about the Elliott–Halberstam conjecture stating that there are infinitely many pairs of primes differing by at most 16 (they used the linked conjecture in their proof).
If that were true, then the answer to your question would be no.
Though, maybe it is of interest to you that there are arbitrarily large gaps between primes, which is a related question. The proof is pretty simple: consider the numbers following $n!$...
-
The numbers from $n!+2$ through to $n!+n$ are clearly all composite, though these sized sequences of $n-1$ consecutive composite numbers in fact appear a lot earlier for $n>3$. – Henry May 11 '11 at 1:17
http://mathhelpforum.com/algebra/1144-number-sequences-drawing-diagram.html
# Thread:
1. ## Number sequences - drawing a diagram
From the book: "Draw a diagram that shows how the sequence of odd numbers can be used to produce the sequence of squares. The diagram can contain numbers, addition symbols, equal signs, and arrows, but it should not contain any words. Your goal is to draw the diagram so that anyone who knows how to add whole numbers can look at it and see the relationship between the numbers in the two sequences."
1, 3, 5, 7,... The sequence of odd numbers
1, 4, 9, 16,... The sequence of squares
Here is my answer. What do you think? Is this a clear diagram? Would you have done it differently?
Attached Thumbnails
2. ## Good job!
I think that the diagram is very clear. I looked at the diagram and understood it before I read the post.
3. Thanks!
4. I think the question is looking for something like:
Attached Thumbnails
5. Bear with me because that diagram looks like it should make more sense to me than it does. If I'm interpreting it right...well, examine the flow of my arrows as they go from point A to the solution. In the first block, 1+3=4, makes sense. In the next block, the odd number we need is on the outside bottom left, but in the third block, it's in the inside bottom right.
Wait, I'm just now seeing how it's all lined up on the top row in the third block.
I'll attach a second image.
That second block looks a little confusing, but just follow the arrow from the 4 to the 5 to the answer (9), to the 7 to the answer (16, which appears in the expanded third block). I just don't think I'd be able to figure it out at a glance, especially without the key on the bottom. It seems to act like more of a math puzzle (a fun one, but still)?
Attached Thumbnails
6. Originally Posted by Euclid Alexandria
Bear with me because that diagram looks like it should make more sense to me than it does. If I'm interpreting it right...well, examine the flow of my arrows as they go from point A to the solution. In the first block, 1+3=4, makes sense. In the next block, the odd number we need is on the outside bottom left, but in the third block, it's in the inside bottom right.
Wait, I'm just now seeing how it's all lined up on the top row in the third block.
I'll attach a second image.
That second block looks a little confusing, but just follow the arrow from the 4 to the 5 to the answer (9), to the 7 to the answer (16, which appears in the expanded third block). I just don't think I'd be able to figure it out at a glance, especially without the key on the bottom. It seems to act like more of a math puzzle (a fun one, but still)?
Think of the blocks of numbers in the diagram as having blobs in place of the numbers; then the corresponding number in the diagram is the count of blobs around the diagram to that point. The blobs all form square arrays and so have a total number of blobs which is a square. And we see that at each stage we are adding the next odd number of blobs as a border to the previous diagram.
The diagram might have worked better without the numbers?
RonL
7. From your explanation, I now understand your diagram as a process of steps toward each sum, rather than a series of calculations around the matrix, which I knew wasn't its intention beforehand (even at first glance, I could tell 1 + 2 + 3 + 4 was not the diagram's intended message). I couldn't see at first glance that a process of steps was involved.
Yes, this diagram would make more sense without the numbers -- which was tempting; I could make some cool blobs with Photoshop. Except the book only wanted the diagram to contain numbers, addition symbols, equal signs, and arrows. So we're stuck with the numbers. The diagram makes sense after your explanation, but since no explanations can be used, we want a diagram that bodes well for first glances...not just for myself, but for a very broad audience: "anyone who knows how to add whole numbers," who "can look at it and see the relationship between the numbers in the two sequences."
8. Hello,
I'm not quite sure if you are looking for the kind of diagram which I've attached below. I didn't invent it - I just found the right book.
Attached Thumbnails
9. ## Second part makes sense, but first part less clear
Those are interesting diagrams. I also think it is a good example of what CaptainBlack was trying to describe with his diagram. Is there a page of text accompanying your diagrams? I understand some parts of it, but some parts of it are difficult to understand -- particularly the equations surrounding the first diagram, such as 1/2n(n+1), 2n+1, etc. I understand that n=5, but for instance, if 2n+1=11, I don't understand how that answer relates to the rest of the diagram.
Unfortunately, I don't think I could have used symbols and squares such as these to illustrate my diagram. The book only wanted the diagram to contain numbers, addition symbols, equal signs, and arrows. Furthermore, the diagram needed to be understandable for a very general audience, "anyone who knows how to add whole numbers" according to the book, and "anyone who can look at it and see the relationship between the numbers in the two sequences." If more symbols were allowed, perhaps my attached diagram would be clearer?
Attached Thumbnails
10. hello, here I am again:
The first diagram shows a rectangle. The area contains exactly three times the sum of squares. The area of a rectangle is calculated as
$A=length \times width$
The long side contains the sum of integers:
$1+2+3+...+n=\frac{1}{2}\cdot n\cdot(n+1)$
the width measures 2n+1
So you get:
$3\cdot (1^2+2^2+3^2+...+n^2)=\frac{1}{2}\cdot n\cdot(n+1)\ \cdot \ (2n+1)$
that means:
$(1^2+2^2+3^2+...+n^2)=\frac{1}{6}\cdot n\cdot(n+1)\ \cdot \ (2n+1)$
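As a quick numerical check of these formulas, and of the odd-number/square relationship the thread started with (a sketch of my own, not part of the original post):

```python
# Check 1 + 3 + ... + (2n-1) = n^2  and  1^2 + ... + n^2 = n(n+1)(2n+1)/6
for n in range(1, 11):
    odds = sum(2*k - 1 for k in range(1, n + 1))
    squares = sum(k*k for k in range(1, n + 1))
    assert odds == n**2
    assert 6 * squares == n * (n + 1) * (2*n + 1)
print("both identities hold for n = 1..10")
```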
And to the text which explains those diagrams: it exists, OK, but it's in German and I don't know if that will help you further on.
I hope I was a little bit of assistance.
Bye
http://mathoverflow.net/revisions/11363/list
Regarding a question which arose in comments to another answer: the lambda ring structure on $RG$ is not enough to reconstruct the group. Dade has given examples (MathSciNet review here; the paper does not appear to be available online) of pairs of groups which have the same character table with the same power maps, and from this it follows that the whole lambda ring structure is the same.
http://mathoverflow.net/questions/7775?sort=votes
## Do DG-algebras have any sensible notion of integral closure?
Suppose R → S is a map of commutative differential graded algebras over a field of characteristic zero. Under what conditions can we say that there is a factorization R → R' → S through an "integral closure" that extends the notion of integral closure in degree zero for connective objects, and respects quasi-isomorphism of the map R to S?
I'm willing to accept answers requiring R → S may need to have some special properties. The motivation for this question is that I'd actually like a generalization of this construction to ring spectra; the hope is that if there is a canonical enough construction on the chain complex level, it has a sensible extension to contexts where it's usually very difficult to construct "ring" objects directly.
-
## 1 Answer
I like this question a lot. It deserves an answer, and I really wish I had a good one. Instead, I offer the following idea. Maybe it has some merit?
### Background
Let me fix some terminology. Suppose $f:R\to S$ is a homomorphism of (classical, commutative) rings.
1. An element $s\in S$ is said to be integral over $R$ if there is a monic polynomial $p\in (R/\ker f)[x]$ such that $s$ is a root of $p$; this is equivalent to saying that the subring $(R/\ker f)[s]\subset S$ is finite over $(R/\ker f)$.
2. We say that $f$ is integrally closed if it is a monomorphism and if every element of $S$ that is integral over $R$ is in $R$.
3. At the opposite extreme, we say that $f$ is integrally surjective if every element of $S$ is integral over $R$. (This turns out to be equivalent to being a colimit of proper homomorphisms of finite presentation.)
4. Among the integrally surjective homomorphisms are the elementary integrally surjective homomorphisms, i.e., homomorphisms of the form $R\to (R/\mathfrak{a})[x]/(p)$, where $R$ is of finite presentation, $\mathfrak{a}\subset R$ is a finitely generated ideal, and $p$ is any monic polynomial.
The classical integral closure construction can now be described as a unique factorization of every homomorphism $f:R\to S$ into an integrally surjective homomorphism $R\to\overline{R}$ followed by an integrally closed monomorphism $\overline{R}\to S$.
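A classical example to keep in mind: for the inclusion $\mathbb{Z}\to\mathbb{Q}(\sqrt{2})$ this factorization is $\mathbb{Z}\to\mathbb{Z}[\sqrt{2}]\to\mathbb{Q}(\sqrt{2})$. Every element of $\mathbb{Z}[\sqrt{2}]$ is integral over $\mathbb{Z}$ (for instance $\sqrt{2}$ is a root of the monic polynomial $x^2-2$), so the first map is integrally surjective, while $\mathbb{Z}[\sqrt{2}]$ is the full ring of integers of $\mathbb{Q}(\sqrt{2})$, so the second map is an integrally closed monomorphism.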
In §3.6 of Mathieu Anel's (really cool!) paper, he describes this factorization system and the "proper topology" constructed from it. In particular, he observes that integrally closed monomorphisms are precisely those morphisms satisfying the unique right lifting property with respect to all elementary integrally surjective homomorphisms.
### Making this work for $E_{\infty}$ ring spectra
Since you expressed interest in getting integral closure off the ground for $E_{\infty}$ ring spectra, I'll work in that context.
We can use Andre Joyal's theory of factorization systems in ∞-categories (see §5.2.8 of Jacob Lurie's Higher Topos Theory) to try to play this same game in the ∞-category $\mathcal{C}$ of connective $E_{\infty}$ ring spectra. (For reasons that will become clear, I'm worried about making this fly for nonconnective $E_{\infty}$ ring spectra.) Once a set $I$ of elementary integrally surjective morphisms is selected, integrally closed monomorphisms are determined as the class of maps that are right orthogonal to $I$, and the integral closure construction is a factorization system on $\mathcal{C}$.
So what should $I$ be? I think there might be some flexibility here, depending on your aims, but here's a proposal:
1. Start with the coherent connective $E_{\infty}$ ring spectra that are of finite presentation (over the sphere spectrum). (Concretely, these are the connective $E_{\infty}$ ring spectra $A$ with the following properties: (1) $\pi_0A$ is of finite presentation, (2) for every integer $n$, $\pi_nA$ is a finitely presented module over $\pi_0A$, and (3) the absolute cotangent complex $L_A$ is a perfect $A$-module.) We'll only need these kinds of $E_{\infty}$ ring spectra in our construction of $I$.
2. Among these $E_{\infty}$ ring spectra, consider the set $I'$ of all morphisms $A\to B$ of finite presentation that induce a surjection on $\pi_0$. Let's call the maps of $I'$ quotients.
3. Now we need to enlarge $I'$ to allow ourselves morphisms that act as though they are of the form $A\to A[x]/(p)$ for $p$ monic. For any of our connective $E_{\infty}$ ring spectra $A$, we can consider any finitely generated and free $E_{\infty}$-$A$-algebra $A[X]$ (i.e., the symmetric algebra on some free and finitely generated $A$-module), and we can consider quotients (in the sense above) $A[X]\to B$ where $B$ is almost perfect as an $A$-module (equivalently, $\pi_nB$ is finitely presented as an $\pi_0A$-module for every integer $n$); let us add the resulting composites $A\to A[X]\to B$ to our set $I'$ to obtain the set $I$.
Now the morphisms that are right orthogonal to $I$ can be called integrally closed monomorphisms of $E_{\infty}$ ring spectra; call the set of them $S_R$. The morphisms that are left orthogonal to that can be called the integrally surjective morphisms of $E_{\infty}$ ring spectra. The integral closure would be a factorization system $(S_L, S_R)$. One shows the existence of a factorization (via a presentation argument), and it follows from general nonsense (more precisely, 5.2.8.17 of HTT) that it is unique.
### Three observations
1. It's not clear that $I$ is big enough for all purposes. One might want to allow shifts of free modules to generate our finitely generated and free $E_{\infty}$-$A$-algebras in the description above. I haven't thought carefully about this.
2. It's not so obvious how to talk about quotients $A\to B$ when dealing with nonconnective guys. One wants to say that the fiber (in the category of $A$-modules) is "not any more nonconnective than $A$." I'm not quite sure how to formulate this. In any case, that's why I restricted attention to the connective guys above.
3. Predictably, this is definitely not compatible with the usual integral closure: if I take two classical rings $R$ and $S$ and a ring homomorphism $R\to S$, the integral closure of $HR$ in $HS$ is not in general an Eilenberg-Mac Lane spectrum. If $R$ is a $\mathbf{Q}$-algebra, the two notions are compatible, however.
### Making any computation
... seems really hard. But maybe that's not such a big surprise. After all, integral closures are hard to compute classically as well.
-
Hey, Clark, thanks for your answer. (Sorry, on looking back I was a little unclear about properties I wanted from a "new" notion of integral closure.) This kind of model-category-esque factorization was exactly the kind of thing I was hoping for. – Tyler Lawson Jan 8 2010 at 13:48
What I seem to be finding is that the pi_0-focus under this definition (with connectivity and Noetherian hypotheses), any map R -> R' which is obtained by attaching an E_∞-cell to kill off a homotopy element qualifies as a quotient. As a result, the "integral closure" of a map R -> S seems to be an object R' whose pi_0 is the integral closure of the map on pi_0, and whose higher homotopy groups map isomorphically to those of S. – Tyler Lawson Jan 8 2010 at 13:50
I'd be curious if there's a more restrictive set of maps you could place to capture things that look more like an integral closure on higher homotopy (which is related to the nonconnectivity issue as well). E.g., maps R -> R' where R' admits a faithful action on a perfect R-module. – Tyler Lawson Jan 8 2010 at 13:58
Hey Tyler - OK, so the structure of what I'm proposing seems to fit the bill, but the details are off. Re your 2nd question: I was worried about this too, at first, but the finiteness rescues you: if $A$ is coherent and of finite presentation over the sphere (more generally, I guess you could try to work Bousfield-locally if you wanted), then $H\pi_0A$ may not be. In particular, if you have to add on infinitely many cells to kill the higher homotopy, then your cotangent complex gets too big. – Clark Barwick Jan 11 2010 at 0:50
Re your 3d comment: Interesting. As far as I can see, the real issue is that I didn't have an easy way to characterize "closed embeddings" in spectral algebraic geometry, apart from the clumsy one I chose. (In other words, I don't have a notion of surjection of $E_{\infty}$ ring spectra, apart from my stupid one.) Are you suggesting a way to do better? (That would be awesome!) – Clark Barwick Jan 11 2010 at 0:58
http://math.stackexchange.com/questions/2120/does-there-exist-a-bijective-f-mathbbn-to-mathbbn-such-that-sum-fn/2122
# Does there exist a bijective $f:\mathbb{N} \to \mathbb{N}$ such that $\sum f(n)/n^2$ converges?
We know that $\displaystyle\zeta(2)=\sum\limits_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$ and it converges.
• Does there exist a bijective map $f:\mathbb{N} \to \mathbb{N}$ such that the sum $$\sum\limits_{n=1}^{\infty} \frac{f(n)}{n^2}$$ converges?
If the exponent $s=2$ is not fixed, can we have such a function for which $\displaystyle \sum\limits_{n=1}^{\infty} \frac{f(n)}{n^s}$ converges?
-
Now that this question is on the front page again, could someone edit the title so it says "Does there exist a bijective $f:\mathbb{N}\rightarrow\mathbb{N}$..."? – Rahul Narain Oct 7 '10 at 18:39
@Chandru1: Please do not post others' problems and solutions as your own. – Jonas Meyer Nov 2 '10 at 1:09
## 2 Answers
For $s>2$ you can take $f(n)=n$.
For $s=2$ if you have $m < n$ and $a=f(m) > b=f(n)$ then $$\frac{a}{m^2}+\frac{b}{n^2}>\frac{b}{m^2}+\frac{a}{n^2}$$ (proved either naively or as a case of the "rearrangement inequality") so the sum $\sum f(n)/n^2$ can be reduced by swapping the values $f(m)$ and $f(n)$. Hence $$\sum\frac{f(n)}{n^2}\ge\sum\frac{n}{n^2}$$ which is divergent.
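The displayed inequality can be checked directly: since $a>b$ and $m<n$,
$$\left(\frac{a}{m^2}+\frac{b}{n^2}\right)-\left(\frac{b}{m^2}+\frac{a}{n^2}\right)=(a-b)\left(\frac{1}{m^2}-\frac{1}{n^2}\right)>0.$$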
-
For $s=2$ the answer is negative. This series doesn't converge.
To prove this we can use Abel transformation.
$$\sum_{n=1}^{N} \frac{f(n)}{n^2} = \sum_{n=1}^{N} \left(\sum_{k=1}^{n} f(k)\right) \left(\frac{1}{n^2} - \frac{1}{(n + 1)^2}\right) + \left(\sum_{n=1}^{N} f(n)\right)\frac{1}{(N+1)^2}$$
Since $f$ is a bijection, $f(1),\dots,f(n)$ are $n$ distinct positive integers, so $\sum_{k=1}^{n} f(k) \ge 1+2+\cdots+n \ge \frac{n^2}{2}$; hence the first sum is greater than $\sum_{n=1}^{N}\frac{c}{n}$ for some $c > 0$, which diverges as $N \to \infty$.
-
http://unapologetic.wordpress.com/2010/06/16/fatous-lemma/?like=1&source=post_flair&_wpnonce=9a71b83933
# The Unapologetic Mathematician
## Fatou’s Lemma
Today we prove Fatou’s Lemma, which is a precursor to the Fatou-Lebesgue theorem, and an important result in its own right.
If $\{f_n\}$ is a sequence of non-negative integrable functions with $\liminf\limits_{n\to\infty}\int f_n\,d\mu<\infty$, then the function defined pointwise as
$\displaystyle f_*(x)=\liminf\limits_{n\to\infty} f_n(x)$
is also integrable, and we have the inequality
$\displaystyle\int f_*\,d\mu\leq\liminf\limits_{n\to\infty}\int f_n\,d\mu$
In fact, the lemma is often stated for a sequence of measurable functions and concludes that $f_*$ is measurable (along with the inequality), but we already know that the limit inferior of a sequence of measurable functions is measurable, and so the integrable case is the most interesting part for us.
So, we define the functions
$\displaystyle g_n(x)=\inf\limits_{i\geq n}f_i(x)$
so that each $g_n$ is integrable, each $g_n\leq f_n$ and the sequence $\{g_n\}$ is pointwise increasing. Monotonicity tells us that for each $n$ we have
$\displaystyle\int g_n\,d\mu\leq\int f_n\,d\mu$
and it follows that
$\displaystyle\lim\limits_{n\to\infty}\int g_n\,d\mu\leq\liminf\limits_{n\to\infty}\int f_n\,d\mu<\infty$
We also know that
$\displaystyle\lim\limits_{n\to\infty}g_n(x)=\liminf\limits_{n\to\infty}f_n(x)=f_*(x)$
which means we can bring the monotone convergence theorem to bear. This tells us that
$\displaystyle\int f_*\,d\mu=\lim\limits_{n\to\infty}\int g_n\,d\mu\leq\liminf\limits_{n\to\infty}\int f_n\,d\mu$
as asserted.
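For a concrete illustration that the inequality can be strict: on the real line with Lebesgue measure, take $f_n=\chi_{[n,n+1]}$. Each $f_n$ is integrable with $\int f_n\,d\mu=1$, but $f_*$ is identically zero, so the left-hand side is $0$ while the right-hand side is $1$.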
What if $f_*$ fails to be integrable, say because the integrals $\int f_n\,d\mu$ blow up, or because we only assume the $f_n$ to be non-negative and measurable? The inequality survives: the monotone convergence theorem still tells us that $\int g_n\,d\mu$ increases to $\int f_*\,d\mu$, whether or not that limit is finite, and since $\int g_n\,d\mu\leq\int f_n\,d\mu$ for every $n$, the limit inferior on the right-hand side is at least as large. So in this case both sides of the asserted inequality are $\infty$, and it still holds.
Posted by John Armstrong | Analysis, Measure Theory
## Comments
4. Would you mind explaining why each g_n is integrable? Many thanks!
Comment by rich | December 7, 2012 | Reply
5. Sorry, rich, I don’t have a solid answer off the top of my head (I was never really an analyst). My intuition is that the infimum of any (countable?) collection of integrable functions is integrable.
Comment by | December 9, 2012 | Reply
• Thanks for the reply! I think your intuition is correct – I’ll poke around in my copy of Apostol to see if I can find a substantiating theorem.
Comment by rich | December 10, 2012 | Reply
http://blog.aggregateknowledge.com/tag/agile-analytics/
AK Tech Blog
Everything You Need to Know
## HLL Intersections
December 17, 2012 By
### Why?
The intersection of two streams (of user ids) is a particularly important business need in the advertising industry. For instance, if you want to reach suburban moms but the cost of targeting those women on a particular inventory provider is too high, you might want to know about a cheaper inventory provider whose audience overlaps with the first provider. You may also want to know how heavily your car-purchaser audience overlaps with a certain metro area or a particular income range. These types of operations are critical for understanding where and how well you’re spending your advertising budget.
As we’ve seen before, HyperLogLog provides a time- and memory-efficient algorithm for estimating the number of distinct values in a stream. For the past two years, we’ve been using HLL at AK to do just that: count the number of unique users in a stream of ad impressions. Conveniently, HLL also supports the union operator ( $\cup$ ) allowing us to trivially estimate the distinct value count of any composition of streams without sacrificing precision or accuracy. This piqued our interest because if we can “losslessly” compute the union of two streams and produce low-error cardinality estimates, then there’s a chance we can use that estimate along with the inclusion-exclusion principle to produce “directionally correct” cardinality estimates of the intersection of two streams. (To be clear, when I say “directionally correct” my criteria is “can an advertiser make a decision off of this number?”, or “can it point them in the right direction for further research?”. This often means that we can tolerate relative errors of up to 50%.)
The goals were:
1. Get a grasp on the theoretical error bounds of intersections done with HLLs, and
2. Come up with heuristic bounds around $m$, $overlap$, and the set cardinalities that could inform our usage of HLL intersections in the AK product.
Quick terminology review:
• If I have set of integers $A$, I’m going to call the HLL representing it $H_{A}$.
• If I have HLLs $H_{A}, H_{B}$ and their union $H_{A \cup B}$, then I’m going to call the intersection cardinality estimate produced $|H_{A \cap B}|$.
• Define the $overlap$ between two sets as $overlap(A, B) := \frac{|A \cap B|}{min(|A|, |B|)}$.
• Define the cardinality ratio $\frac{max(|A|, |B|)}{min(|A|, |B|)}$ as a shorthand for the relative cardinality of the two sets.
• We’ll represent the absolute error of an observation $|H_{A}|$ as $\Delta |H_{A}|$.
That should be enough for those following our recent posts, but for those just jumping in, check out Appendices A and B at the bottom of the post for a more thorough review of the set terminology and error terminology.
### Experiment
We fixed 16 $overlap$ values (0.0001, 0.001, 0.01, 0.02, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0) and 12 set cardinalities (100M, 50M, 10M, 5M, 1M, 500K, 100K, 50K, 10K, 5K, 1K, 300) and did 100 runs of each permutation of $(overlap, |A|, |B|)$. A random stream of 64-bit integers hashed with Murmur3 was used to create the two sets such that they shared exactly $min(|A|,|B|) \cdot overlap = |A \cap B|$ elements. We then built the corresponding HLLs $H_{A}$ and $H_{B}$ for those sets and calculated the intersection cardinality estimate $|H_{A} \cap H_{B}|$ and computed its relative error.
Given that we could only generate and insert about 2M elements/second per core, doing runs with set cardinalities greater than 100M was quickly ruled out for this blog post. However, I can assure you the results hold for much larger sets (up to the multiple billion-element range) as long as you heed the advice below.
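As a rough sketch of that set-construction step (this is not the actual test harness; the function name and the use of Python's `random.getrandbits` are just illustrative assumptions):

```python
import random

def make_sets(size_a, size_b, overlap):
    """Build two sets of random 64-bit ints sharing exactly
    min(size_a, size_b) * overlap elements."""
    n_shared = int(min(size_a, size_b) * overlap)
    pool = set()
    while len(pool) < size_a + size_b - n_shared:
        pool.add(random.getrandbits(64))
    pool = list(pool)
    shared = pool[:n_shared]
    a_only = pool[n_shared:size_a]
    b_only = pool[size_a:]
    return set(shared + a_only), set(shared + b_only)

A, B = make_sets(1_000_000, 100_000, 0.05)
assert len(A & B) == int(100_000 * 0.05)
# A and B would then be fed into two HLLs so that |H_A ∩ H_B|
# can be compared against the known |A ∩ B|.
```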
### Results
This first group of plots has a lot going on, so I’ll preface it by saying that it’s just here to give you a general feeling for what’s going on. First note that within each group of boxplots $overlap$ increases from left to right (orange to green to pink), and within each plot cardinality ratio increases from left to right. Also note that the y-axis (the relative error of the observation) is log-base-10 scale. You can clearly see that as the set sizes diverge, the error skyrockets for all but the most similar (in both cardinality and composition) sets. I’ve drawn in a horizontal line at 50% relative error to make it easier to see what falls under the “directionally correct” criteria. You can (and should) click for a full-sized image.
Note: the error bars narrow as we progress further to the right because there are fewer observations with very large cardinality ratios. This is an artifact of the experimental design.
A few things jump out immediately:
• For cardinality ratio > 500, the majority of observations have many thousands of percent error.
• When cardinality ratio is smaller than that and $overlap > 0.4$, register count has little effect on error, which stays very low.
• When $overlap \le 0.01$, register count has little effect on error, which stays very high.
Just eyeballing this, the lesson I get is that computing intersection cardinality with small error (relative to the true value) is difficult and only works within certain constraints. Specifically,
1. $\frac{|A|}{|B|} < 100$, and
2. $overlap(A, B) = \frac{|A \cap B|}{min(|A|, |B|)} \ge 0.05$.
The intuition behind this is very simple: if the error of any one of the terms in your calculation is roughly as large as the true value of the result then you’re not going to estimate that result well. Let’s look back at the intersection cardinality formula. The left-hand side (that we are trying to estimate) is a “small” value, usually. The three terms on the right tend to be “large” (or at least “larger”) values. If any of the “large” terms has error as large as the left-hand side we’re out of luck.
So, let’s say you can compute the cardinality of an HLL with relative error of a few percent. If $|H_{A}|$ is two orders of magnitude smaller than $|H_{B}|$, then the error alone of $|H_{B}|$ is roughly as large as $|A|$.
$|A \cap B| \le |A|$ by definition, so
$|A \cap B| \le |A| \approx |H_{A}| \approx \Delta |H_{B}|$.
In the best scenario, where $A \cap B = A$, the errors of $|H_{B}|$ and $|H_{A \cup B}| \approx |H_{B}|$ are both roughly the same size as what you’re trying to measure. Furthermore, even if $|A| \approx |B|$ but the overlap is very small, then $|A \cap B|$ will be roughly as large as the error of all three right-hand terms.
### On the bubble
Let’s throw out the permutations whose error bounds clearly don’t support “directionally correct” answers ($overlap < 0.01$ and $\frac{|A|}{|B|} > 500$) and those that trivially do ($overlap > 0.4$) so we can focus more closely on the observations that are “on the bubble”. Sadly, these plots exhibit a good deal of variance inherent in their smaller sample size. Ideally we’d have tens of thousands of runs of each combination, but for now this rough sketch will hopefully be useful. (Apologies for the inconsistent colors between the two plots. It’s a real bear to coordinate these things in R.) Again, please click through for a larger, clearer image.
By doubling the number of registers, the variance of the relative error falls by about a quarter and moves the median relative error down (closer to zero) by 10-20 points.
In general, we’ve seen that the following cutoffs perform pretty well, practically. Note that some of these aren’t reflected too clearly in the plots because of the smaller sample sizes.
| Register Count | Data Structure Size | Overlap Cutoff | Cardinality Ratio Cutoff |
|---|---|---|---|
| 8192 | 5kB | 0.05 | 10 |
| 16384 | 10kB | 0.05 | 20 |
| 32768 | 20kB | 0.05 | 30 |
| 65536 | 41kB | 0.05 | 100 |
### Error Estimation
To get a theoretical formulation of the error envelope for intersection, in terms of the two set sizes and their overlap, I tried the first and simplest error propagation technique I learned. For variables $Y, Z, ...$, and $X$ a linear combination of those (independent) variables, we have
$\Delta X = \sqrt{ (\Delta Y)^2 + (\Delta Z)^2 + ...}$
Applied to the inclusion-exclusion formula:
$\begin{array}{ll} \displaystyle \Delta |H_{A \cap B}| &= \sqrt{ (\Delta |H_{A}|)^2 + (\Delta |H_{B}|)^2 + (\Delta |H_{A \cup B}|)^2} \\ &= \sqrt{ (\sigma\cdot |A|)^2 + (\sigma\cdot |B|)^2 + (\sigma\cdot |A \cup B|)^2} \end{array}$
where
$\sigma = \frac{1.04}{\sqrt{m}}$ as in section 4 (“Discussion”) of the HLL paper.
Aside: Clearly $|H_{A \cup B}|$ is not independent of $|H_{A}| + |H_{B}|$, though $|H_{A}|$ is likely independent of $|H_{B}|$. However, I do not know how to a priori calculate the covariance in order to use an error propagation model for dependent variables. If you do, please pipe up in the comments!
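In code the envelope is only a few lines; a minimal Python sketch (the function name is just for illustration):

```python
from math import sqrt

def intersection_error_bound(est_a, est_b, est_union, m):
    """Propagated absolute-error bound for the inclusion-exclusion
    intersection estimate, using sigma = 1.04 / sqrt(m)."""
    sigma = 1.04 / sqrt(m)
    return sqrt((sigma * est_a) ** 2 +
                (sigma * est_b) ** 2 +
                (sigma * est_union) ** 2)

# e.g. intersection_error_bound(1e6, 5e5, 1.3e6, 8192)
```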
I’ve plotted this error envelope against the relative error of the observations from HLLs with 8192 registers (approximately 5kB data structure).
Despite the crudeness of the method, it provided a 95% error envelope for the data without significant differences across cardinality ratio or $overlap$. Specifically, at least 95% of observations satisfied
$(|H_{A \cap B}| - |A \cap B|) < \Delta |H_{A \cap B}|$
However, it’s really only useful in the ranges shown in the table above. Past those cutoffs the bound becomes too loose and isn’t very useful.
This is particularly convenient because you can tune the number of registers you need to allocate based on the most common intersection sizes/overlaps you see in your application. Obviously, I’d recommend everyone run these tests and do this analysis on their business data, and not on some contrived setup for a blog post. We’ve definitely seen that we can get away with far less memory usage than expected to successfully power our features, simply because we tuned and experimented with our most common use cases.
### Next Steps
We hope to improve the intersection cardinality result by finding alternatives to the inclusion-exclusion formula. We’ve tried a few different approaches, mostly centered around the idea of treating the register collections themselves as sets, and in my next post we’ll dive into those and other failed attempts!
### Appendix A: A Review Of Sets
Let’s say we have two streams of user ids, $S_{A}$ and $S_{B}$. Take a snapshot of the unique elements in those streams as sets and call them $A$ and $B$. In the standard notation, we’ll represent the cardinality, or number of elements, of each set as $|A|$ and $|B|$.
Example: If $A = \{1,2,10\}$ then $|A| = 3$.
If I wanted to represent the unique elements in both of those sets combined as another set I would be performing the union, which is represented by $A \cup B$.
Example: If $A = \{1,2,3\}, B=\{2,3,4\}$ then $A \cup B = \{1,2,3,4\}$.
If I wanted to represent the unique elements that appear in both $A$ and $B$ I would be performing the intersection, which is represented by $A \cap B$.
Example: With $A, B$ as above, $A \cap B = \{2,3\}$.
The relationship between the union’s cardinality and the intersection’s cardinality is given by the inclusion-exclusion principle. (We’ll only be looking at the two-set version in this post.) For reference, the two-way inclusion-exclusion formula is $|A \cap B| = |A| + |B| - |A \cup B|$.
Example: With $A, B$ as above, we see that $|A \cap B| = 2$ and $|A| + |B| - |A \cup B| = 3 + 3 - 4 = 2$.
For convenience we’ll define the $overlap$ between two sets as $overlap(A, B) := \frac{|A \cap B|}{min(|A|, |B|)}$.
Example: With $A, B$ as above, $overlap(A,B) = \frac{|A \cap B|}{min(|A|, |B|)} = \frac{2}{min(3,3)} = \frac{2}{3}$.
Similarly, for convenience, we’ll define the cardinality ratio $\frac{max(|A|, |B|)}{min(|A|, |B|)}$ as a shorthand for the relative cardinality of the two sets.
The examples and operators shown above are all relevant for exact, true values. However, HLLs do not provide exact answers to the set cardinality question. They offer estimates of the cardinality along with certain error guarantees about those estimates. In order to differentiate between the two, we introduce HLL-specific operators.
Consider a set $A$. Call the HLL constructed from this set’s elements $H_{A}$. The cardinality estimate given by the HLL algorithm for $H_{A}$ is $|H_{A}|$.
Define the union of two HLLs $H_{A} \cup H_{B} := H_{A \cup B}$, which is also the same as the HLL created by taking the pairwise max of $H_{A}$‘s and $H_{B}$‘s registers.
Finally, define the intersection cardinality of two HLLs in the obvious way: $|H_{A} \cap H_{B}| := |H_{A}| + |H_{B}| - |H_{A \cup B}|$. (This is simply the inclusion-exclusion formula for two sets with the cardinality estimates instead of the true values.)
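To make those definitions concrete, here is a minimal Python sketch; `estimate(registers)` stands in for whatever cardinality estimator your HLL implementation exposes (this is not any particular library's API):

```python
def union_registers(regs_a, regs_b):
    # H_A ∪ H_B: the pairwise max of the two register arrays.
    return [max(a, b) for a, b in zip(regs_a, regs_b)]

def intersection_estimate(regs_a, regs_b, estimate):
    # |H_A ∩ H_B| via inclusion-exclusion on the three estimates.
    est_a = estimate(regs_a)
    est_b = estimate(regs_b)
    est_union = estimate(union_registers(regs_a, regs_b))
    return est_a + est_b - est_union
```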
### Appendix B: A (Very Brief) Review of Error
The simplest way of understanding the error of an estimate is simply “how far is it from the truth?”. That is, what is the difference in value between the estimate and the exact value, also known as the absolute error.
However, that’s only useful if you’re only measuring a single thing over and over again. The primary criteria for judging the utility of HLL intersections is relative error because we are trying to measure intersections of many different sizes. In order to get an apples-to-apples comparison of the efficacy of our method, we normalize the absolute error by the true size of the intersection. So, for some observation $\hat{x}$ whose exact value is non-zero $x$, we say that the relative error of the observation is $\frac{x-\hat{x}}{x}$. That is, “by what percentage off the true value is the observation off?”
Example: If $|A| = 100, |H_{A}| = 90$ then the relative error is $\frac{100 - 90}{100} = \frac{10}{100} = 10\%$.
## On Accuracy and Precision
October 29, 2011 By
A joint post from Matt and Ben
Believe it or not, we’ve been getting inspired by MP3′s lately, and not by turning on music in the office. Instead, we drew a little bit of inspiration from the way MP3 encoding works. From wikipedia:
“The compression works by reducing accuracy of certain parts of sound that are considered to be beyond the auditory resolution ability of most people. This method is commonly referred to as perceptual coding. It uses psychoacoustic models to discard or reduce precision of components less audible to human hearing, and then records the remaining information in an efficient manner.”
Very similarly, in online advertising there are signals that go "beyond the resolution of advertisers to action". Rather than tackling the problem of clickstream analysis in the standard way, we've employed an MP3-like philosophy to storage. Instead of storing absolutely everything and counting it, we've taken a probabilistic, streaming approach to measurement. This lets us give clients real-time measurements of how many users and impressions a campaign has seen at excruciating levels of detail. The downside is that our reports tend to include numbers like "301M unique users last month" as opposed to "301,123,098 unique users last month", but we believe that the benefits of this approach far outweigh the cost of limiting precision.
### Give a little, get a lot
The precision of our approach does not depend on the size of the thing we’re counting. When we set our precision to +/-1%, we can tell the difference between 1000 and 990 as easily as we can tell the difference between 30 billion and 29.7 billion users. For example when we count the numbers of users a campaign reached in Wernersville, PA (Matt’s hometown) we can guarantee that we saw 1000 +/- 10 unique cookies, as well as saying the campaign reached 1 Billion +/- 10M unique cookies overall.
Our storage size is fixed once we choose our level of precision. This means that we can accurately predict the amount of storage needed and our system has no problem coping with increases in data volume and scales preposterously well. Just to reiterate, it takes exactly as much space to count the number of users you reach in Wernersville as it does to count the total number of users you reach in North America. Contrast this with sampling, where to maintain a fixed precision when capturing long-tail features (things that don’t show up a lot relative to the rest of the data-set, like Wernersville) you need to drastically increase the size of your storage.
The benefits of not having unexpected storage spikes, and scaling well are pretty obvious – fewer technical limits, fewer surprises, and lower costs for us, which directly translates to better value for our users and a more reliable product. A little bit of precision seems like a fair trade here.
The technique we chose supports set-operations. This lets us ask questions like, “how many unique users did I see from small towns in Pennsylvania today” and get an answer instantaneously by composing multiple data structures. Traditionally, the answers to questions like this have to be pre-computed, leaving you waiting for a long job to run every time you ask a question you haven’t prepared for. Fortunately, we can do these computations nearly instantaneously, so you can focus on digging into your data. You can try that small-town PA query again, but this time including Newton, MA (Ben’s hometown), and not worry that no one has prepared an answer.
Unfortunately, not all of these operations are subject to the same “nice” error bounds. However, we’ve put the time in to detect these errors, and make sure that the functionality our clients see degrades gracefully. And since our precision is tunable, we can always dial the precision up as necessary.
### Getting insight from data
Combined with our awesome streaming architecture this allows us to stop thinking about storage infrastructure as the limiting factor in analytics, similar to the way MP3 compression allows you to fit more and more music on your phone or MP3-player. When you throw the ability to have ad-hoc queries execute nearly instantly into the mix, we have no regrets about getting a little bit lossy. We’ve already had our fair share of internal revelations, and enabled clients to have quite a few of their own, just because it’s now just so easy to work with our data.
Filed Under: Data Science, General
## Building a Big Analytics Infrastructure
September 8, 2011 By
There is much buzz about “big data” and “big analytics” but precious little information exists about the struggle of building an infrastructure to tackle these problems. Some notable exceptions are Facebook, Twitter’s Rainbird and MetaMarket’s Druid. In this post we provide an overview of how we built Aggregate Knowledge’s “big analytics” infrastructure. It will cover how we mix rsyslog, 0MQ and our in-house streaming key-value store to route hundreds of thousands of events per second and efficiently answer reporting queries over billions of events per day.
### Overview and Goal
Recording and reporting on advertising events (impressions, interactions, conversions) is the core of what we do at Aggregate Knowledge. We capture information about:
• Who: audience, user attributes, browser
• What: impression, interaction, conversion
• Where: placement, ad size, context
• When: global time, user’s time, day part
just to name a few. We call these or any combination of these our keys. The types of metrics (or values) that we support but aren’t limited to:
• Counts: number of impressions, number of conversions, number of unique users (unique cookies)
• Revenue: total inventory cost, data cost
• Derived: click-through rate (CTR), cost per action (CPA)
Our reports support drill-downs and roll-ups all of the available dimensions — we support many of the standard OLAP functions.
We are architecting for a sustained event ingest rate of 500k events per second over a 14-hour "internet day", yielding around 30 billion events per day (or around 1 trillion events a month). Our daily reports, which run over billions of events, should take seconds, and our monthly or lifetime reports, which run over hundreds of billions of events, should take at most minutes.
Over the past few years we have taken a few different paths to produce our reports with varying degrees of success.
### First Attempt: Warehouses, Map-Reduce and Batching
When I first started at Aggregate Knowledge we had a multi-terabyte distributed warehouse that used Map-Reduce to process queries. The events were gathered from the edge where they were recorded and batch loaded into the warehouse on regular intervals. It stored hundreds of millions of facts (events) and took hours to generate reports. Some reports on unique users would take longer than a day to run. We had a team dedicated to maintaining and tuning the warehouse.
At the time our event recorders were placed on many high-volume news sites and it was quite common for us to see large spikes in the number of recorded events when a hot news story hit the wires. It was common for a 5 minute batch of events from a spike to take longer than 5 minutes to transfer, process and load which caused many headaches. Since the time it took to run a report was dependent on the number of events being processed, whenever a query would hit one of these spikes, reporting performance would suffer. Because we provided 30-, 60- and 90-day reports, a spike would cause us grief for a long time.
After suffering this pain for a while, this traditional approach of storing and aggregating facts seemed inappropriate for our use. Because our data is immutable once written, it seemed clear that we needed to pre-compute and store aggregated summaries. Why walk over hundreds of millions of facts summing along some dimension more than once if the answer is always a constant — simply store that constant. The summaries are bounded in size by the cardinality of the set of dimensions rather than the number of events. Our worries would move from something we could not control — the number of incoming events — to something that we could control — the dimensionality and number of our keys.
### Second Attempt: Streaming Databases and Better Batching
Having previously worked on a financial trading platform, I had learned much about streaming databases and Complex Event Processing (e.g. Coral8, StreamBase, Truviso). Our second approach would compute our daily summaries in much the same way that a financial exchange keeps track and tally of trades. The event ingest of the streaming database would be the only part of our infrastructure affected by spikes in the number of events since everything downstream worked against the summaries. Our reporting times went from hours to seconds or sub-seconds. If we were a retail shop that had well-known dimensionality then we would likely still be using a streaming database today. It allowed us to focus on immediate insights and actionable reports rather than the warehouse falling over or an M-R query taking 12 hours.
Once worrying about individual events was a thing of the past, we started to look at the dimensionality of our data. We knew from our old warehouse data that the hypercube of dimensional data was very sparse but we didn’t know much else. The initial analysis of the distribution of keys yielded interesting results:
Zipf: Key frequency
Keys are seen with frequencies that tend to follow Zipf’s Law:
the frequency of any key is inversely proportional to its rank in the frequency table
Put simply: there are a large number of things that we see very infrequently and a small number of things that we see very often. Decile (where the black line is 50%) and CDF plots of the key frequency provide additional insights:
60% of our keys have been seen hundreds of times or less and around 15% of our keys had been seen only once. (The graph only covers one set of our dimensions. As we add more dimensions to the graph the CDF curve gets steeper.) This told us that not only is the hypercube very sparse but the values tend to be quite small and are updated infrequently. If these facts could be exploited then the storage of the hypercube could be highly compressed even for many dimensions with high cardinality and stored very efficiently.
We improved our transfer, transform and loading batch processes to better cope with event volume spikes which resulted in less headaches but it still felt wrong. The phrase “batching into a streaming database” reveals the oxymoron. We didn’t progress much in computing unique user counts. Some of the streaming databases provided custom support for unique user counting but not at the volume and rate that we required. Another solution was needed.
### Third Attempt: Custom Key-Value Store and Streaming Events
From our work with streaming databases we knew a few things:
• Out-of-order data was annoying (this is something that I will cover in future blog posts);
• Counting unique sets (unique users, unique keys) was hard;
• There was much efficiency to be gained in our distribution of keys and key-counts;
• Structured (or semi-structured) data suited us well;
• Batching data to a streaming database is silly;
Unfortunately none of the existing NoSQL solutions covered all of our cases. We built a Redis prototype and found that the majority of our code was in ingesting our events from our event routers, doing key management and exporting the summaries to our reporting tier. Building the storage in-house provided us the opportunity to create custom types for aggregation and sketches for the cardinality of large sets (e.g. unique user counting). Once we had these custom types it was a small leap to go from the Redis prototype to a full-featured in-house key-value store. We call this beast “The Summarizer”. (“The Aggregator” was already taken by our Ops team for event aggregation and routing.)
The summarizer simply maps events into summaries by key and updates the aggregates. Streaming algorithms are a natural fit in this type of data store. Many O(n^m) algorithms have streaming O(n) counterparts that provide sufficiently accurate results. (We’ll be covering streaming algorithms in future posts.) It provides us with a succinct (unsampled) summary that can be accessed in O(1). Currently we can aggregate more than 200,000 events per second per core (saturating just shy of 1 million events per second) where some events are aggregated into more than ten summaries.
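A toy sketch of that idea, with a plain in-memory dictionary standing in for the real custom store and made-up field names:

```python
from collections import defaultdict

# key -> [impressions, clicks, cost]; keys are (campaign, placement, day) tuples
summaries = defaultdict(lambda: [0, 0, 0.0])

def ingest(event):
    key = (event["campaign"], event["placement"], event["day"])
    row = summaries[key]
    row[0] += event.get("impressions", 0)
    row[1] += event.get("clicks", 0)
    row[2] += event.get("cost", 0.0)

def dump_csv(path):
    # Summaries stay small enough to write out as plain CSV.
    with open(path, "w") as f:
        for key, (imps, clicks, cost) in sorted(summaries.items()):
            f.write(",".join(map(str, key)) + f",{imps},{clicks},{cost}\n")
```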
Our summaries are computed per day. (Future blog posts will provide more information about how we treat time.) They are designed such that they rarely contain more than 10M rows and are stored in CSV format. Initially we used CSV simply because we already had all of the code written for ingesting 3rd party CSV data. We quickly found other uses for them: our analysts gobbled them up and use them in Excel, our data scientists use them directly in R, and even our engineers use them for back-of-the-envelope calculations. Having manageable summaries and/or sketches enabled agile analytics.
To get the events into our summarizer we completely rethought how events move through our infrastructure. Instead of batching events, we wanted to stream them. To deal with spikes and to simplify maintenance, we wanted to allow the events to be queued if downstream components became unavailable or unable to meet the current demand. We needed to be able to handle our desired ingest rate of 500k events per second. The answer was right under our noses: rsyslog and 0MQ. (See the “Real-Time Streaming for Data Analytics” and “Real-Time Streaming with Rsyslog and ZeroMQ” posts for more information.)
### Wrap Up
Our challenge was to be able to produce reports on demand over billions of events in seconds and over hundreds of billions in minutes while ingesting at most 500,000 events per second. Choosing the defacto technology de jour caused us to focus on fixing and maintaining the technology rather than solving business problems and providing value to our customers. We could have stayed with our first approach, immediately scaling to hundreds of nodes and taking on the challenges that solution presents. Instead, we looked at the types of answers we wanted and worked backwards until we could provide them easily on minimal hardware and little maintenance cost. Opting for the small, agile solution allowed us to solve business problems and provide value to our customers much more quickly.
#### Footnote
Astute readers may have noticed the point on the far-left of the CDF graph and wondered how it was possible for a key to have been seen zero times or wondered why we would store keys that have no counts associated with them. We only summarize what is recorded in an event. The graph shows the frequency of keys as defined by impressions and it doesn’t include the contribution of any clicks or conversions. In other words, these “zero count keys” mean that for a given day there are clicks and/or conversions but no impressions. (This is common when a campaign ends.) In hindsight we should have summed the count of impressions, clicks and conversions and used that total in the graph but this provided the opportunity to show a feature of the summarizer — we can easily find days for which clicks and conversions have no impressions without running a nasty join.
Tagged With: Aggregation, Agile Analytics, key-value store, MapReduce, rsyslog, Sketch, zeromq, Zipf
http://math.stackexchange.com/questions/89750/showing-two-matrices-are-similar?answertab=oldest
# Showing two matrices are similar
I have to show that each of the following matrices
$$\frac{1}{\sqrt{2}} \begin{pmatrix} 0&1&0\\ 1&0&1\\ 0&1&0 \end{pmatrix}\quad , \frac{1}{\sqrt{2}} \begin{pmatrix} 0&-i&0\\ i&0&-i\\ 0&i&0 \end{pmatrix} , \begin{pmatrix} 1&0&0\\ 0&0&0\\ 0&0&-1 \end{pmatrix}$$
are equivalent to one of the following
$$\begin{pmatrix}0&0&0\\0&0&1\\0&-1&0\end{pmatrix},\begin{pmatrix}0&0&-1\\0&0&0\\1&0&0\end{pmatrix},\begin{pmatrix}0&1&0\\-1&0&0\\0&0&0\end{pmatrix}$$
It is a relatively simple but time consuming problem to construct a similarity transformation. I have nine real independent variables and 9 equations to check for each matrix. I want to know if there is any process or software that could help me do this calculation?
-
Which kind of "equivalent" are you using? – Henning Makholm Dec 8 '11 at 23:24
@Henning Sorry. I am following a physics book and these matrices are actually representations. Two representations were defined to be equivalent if their matrices are related by a similarity transformation. This was what I had in my mind. – kuch nahi Dec 9 '11 at 0:36
## 2 Answers
Since you mentioned a "similarity transform", I presume your "equivalent" means "similar"; also I presume this is over $\mathbb C$. Well, the first thing I would do is find the eigenvalues of each matrix; the second (if necessary) is the Jordan canonical form. As for software, Maple will handle this quite easily; I imagine most other CAS's will also.
EDIT: Oops, all of your first three matrices are similar to each other, and none is similar to any of the last three. Are you sure you quoted the question right?
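For instance, the eigenvalue check is quick to do in SymPy rather than Maple (a sketch; the matrix names are just labels):

```python
import sympy as sp

A1 = sp.Matrix([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / sp.sqrt(2)
B1 = sp.Matrix([[0, 0, 0], [0, 0, 1], [0, -1, 0]])

print(A1.eigenvals())  # eigenvalues -1, 0, 1
print(B1.eigenvals())  # eigenvalues -I, 0, I
```

Since similar matrices must have the same eigenvalues, the real spectrum of the first matrix and the purely imaginary spectrum of the skew-symmetric one already rule out similarity, in line with the edit above and the comments below.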
-
Yes, I am sure. The first three matrices are what physicists call spin 1 representation. The final three matrices are the adjoint representation of SU(2) (excluding a phase factor of -i). The question is to show that the two representations are equivalent (a result which is used later to construct roots). I phrased only the computational part in my question. – kuch nahi Dec 9 '11 at 0:28
@kuchnahi The top matrices are Hermitian hence their eigenvalues are real. Bottom left two are skew-symmetric and hence have eigenvalues pure imaginary. This means they can't be similar. The only option is the bottom right but that also has complex eigenvalues. – user13838 Dec 9 '11 at 1:26
The bottom right doesn't work already because it is invertible and none of the other 5 are. – Henning Makholm Dec 9 '11 at 12:27
Sorry,you all were right. I made a type in the third matrix – kuch nahi Dec 9 '11 at 15:50
In fact, if the top 3 matrices are $A_j$, consider $U A_j U^{-1}$ where $U = \pmatrix{-1/\sqrt{2} & 0 & 1/\sqrt{2}\cr -i/\sqrt{2} & 0 & -i/\sqrt{2}\cr 0 & 1 & 0\cr}$ – Robert Israel Dec 9 '11 at 18:10
Two matrices are similar if their traces are equal.
-
So every traceless matrix is the zero matrix? – EuYu Nov 4 '12 at 4:21
This is true only for $1\times 1$ matrices. – Jason DeVito Nov 4 '12 at 4:22
http://physics.stackexchange.com/questions/tagged/relative-motion+inertial-frames
# Tagged Questions
### Inertial Frames of Reference - Inertial vs. Accelerated Frames
According to Robert Resnick's book "Introduction to Special Relativity", a line states the following as the definition of an inertial frame of reference: "We define an inertial system as a frame of ...
### Reality error and relative velocity
Suppose a person is walking in rain carrying an umbrella. He is tilting his umbrella at some angle with the vertical so as to protect himself from the rain. But a neutral observer who is standing ...
### Galilean relativity in projectile motion
Consider a reference frame $S^'$ moving in the initial direction of motion of a projectile launched at time, $t=0$. In the frame $S$ the projectile motion is: $$x=u(cos\theta)t$$ ...
### Galileo's dictum and how light cannot violate it
Okay. So I've been told that the speed of light is constant and cannot violate Galileo's dictum, but even if it weren't constant (in a vacuum), how would it violate it anyway? Say you are on a train ...
### Is acceleration relative?
A while back in my Dynamics & Relativity lectures my lecturer mentioned that an object need not be accelerating relative to anything - he said it makes sense for an object to just be accelerating. ...
### Galilean transformation in relativity
Assume flat spacetime in a general relativistic framework (or special relativity for that matter) and two observers $A$ and $B$, with non-vanishing velocity relative to each other. We know that they ...
http://mathhelpforum.com/math-topics/24300-four-4-s-urgent-help-needed.html
# Thread:
1. ## Four 4's: Urgent help needed
Hey all,
I could really use some help with this one:
Using four 4's and any combination of math operations, represent as many of the whole numbers 1-100 as you can.
I found operations that result in: 1, 2, 3, 5, 6, 8, 9, 10, 11, 12, 13, 16, 20, 24, 36, 60, 64, 65, 68, and 72.
I am having a tough time finding the others from 1 through 100. Any help will be greatly appreciated.
Thanks,
cheyanne
2. $\frac {4^4}{4+4} = 32$
$4*4 + \frac {4}{4} = 17$
$4*4 - \frac {4}{4} = 15$
$4*(4+4) - 4 = 28$
$4 + 4 - \frac {4}{4} = 7$
$4! + 4 + 4 - 4 = 28$
$4! + 4 + \frac {4}{4} = 29$
$4! + 4 - \frac {4}{4} = 27$
${{4+4} \choose {4}} + 4 = 74$
${{4+4} \choose {4}} - 4 = 66$
3. Hello, cheyanne!
Using four 4's and any combination of math operations,
represent as many of the whole numbers 1-100 as you can.
This is a classic (very old) problem . . .
Here are some tricks you may wish to explore . . .
. . $\frac{4}{.4} \:=\:10$
. . $\frac{4!}{.4} \:=\:60$
One of the hardest is: . $\frac{4!+ 4.4}{.4} \;=\;71$
One of my students came up with: . $\frac{\left(\dfrac{4}{.4}\right)!}{(4+4)!} \;=\;90$
4. Originally Posted by cheyanne
Hey all,
I could really use some help with this one:
Using four 4's and any combination of math operations, represent as many of the whole numbers 1-100 as you can.
I found operations that result in: 1, 2, 3, 5, 6, 8, 9, 10, 11, 12, 13, 16, 20, 24, 36, 60, 64, 65, 68, and 72.
Hello,
here are some of the missing numbers:
$14=\left(\sqrt{4}+\sqrt{4}\right)^{\sqrt{4}}-\sqrt{4}$
$18= 4 \cdot 4 + \frac4{\sqrt{4}}$
$19 = 4! - 4 - \frac44$
$21 = 4! - 4 + \frac44$
$23 = 4! - \frac{\sqrt{4} + \sqrt{4}}{4}$
$25 = \left(4 + \frac44 \right)^{\sqrt{4}}$
5. Hello, everyone!
Did you know there is an Ultimate Solution to the "Four four's"?
Check this out . . .
$n \;=\;-\log_{\left(\frac{4}{\sqrt{4}}\right)} \left[\log_4\left(\underbrace{\sqrt{\sqrt{\sqrt{\cdots\sqrt{4}}}}}_{n\text{ radicals}}\right)\right]$
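If you want to hunt for representations by computer, here is a small brute-force sketch in Python; it only combines exactly four 4's with +, -, *, and /, so it finds just a subset of what becomes reachable once factorials, square roots, decimals and concatenation are allowed:

```python
from fractions import Fraction

def combine(a, b):
    results = {a + b, a - b, b - a, a * b}
    if b != 0:
        results.add(a / b)
    if a != 0:
        results.add(b / a)
    return results

def reachable(values):
    """All exact values obtainable by combining the given numbers."""
    if len(values) == 1:
        return {values[0]}
    out = set()
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            rest = tuple(v for k, v in enumerate(values) if k not in (i, j))
            for c in combine(values[i], values[j]):
                out |= reachable(rest + (c,))
    return out

fours = (Fraction(4),) * 4
hits = {int(v) for v in reachable(fours)
        if v.denominator == 1 and 1 <= v <= 100}
print(sorted(hits))
```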
http://mathhelpforum.com/calculus/45702-area-region-between-two-curves-print.html
# Area of the Region Between Two Curves
• August 10th 2008, 08:45 PM
MathGeek06
Area of the Region Between Two Curves
Find the area of the region between y = x^2 and y = 4-3x from x = -1 to x = 4/3
• August 10th 2008, 08:52 PM
Chris L T521
Quote:
Originally Posted by MathGeek06
Find the area of the region between y = x^2 and y = 4-3x from x = -1 to x = 4/3
Did you graph the bounded region? This will help you set up the integral.
http://img.photobucket.com/albums/v4...tuff/parab.jpg
Recall that the area between two curves is $A=\int_a^b \left(f(x)-g(x)\right) \,dx$
I hope this helps!
--Chris
• August 10th 2008, 09:00 PM
MathGeek06
Yes, I graphed the bounded region but when I set up my integrals, I got a negative answer, which clearly was incorrect.
I had an integral of ((4-3x) - (x^2)) from x = -1 to x = 1 + the integral ((x^2) - (4-3x)) from x = 1 to x = 4/3.
• August 10th 2008, 09:24 PM
Chris L T521
Quote:
Originally Posted by MathGeek06
Yes, I graphed the bounded region but when I set up my integrals, I got a negative answer, which clearly was incorrect.
I had an integral of ((4-3x) - (x^2)) from x = -1 to x = 1 + the integral ((x^2) - (4-3x)) from x = 1 to x = 4/3.
$\int_{-1}^1 (4-3x-x^2)\,dx+\int_1^{\frac{4}{3}}(x^2+3x-4)\,dx$
$\implies\left.\bigg[4x-\tfrac{3}{2}x^2-\tfrac{1}{3}x^3\bigg]\right|_{-1}^1+\left.\bigg[\tfrac{1}{3}x^3+\tfrac{3}{2}x^2-4x\bigg]\right|_1^{\frac{4}{3}}$
$\implies\bigg[\left(4-\tfrac{3}{2}-\tfrac{1}{3}\right)-\left(-4-\tfrac{3}{2}+\tfrac{1}{3}\right)\bigg]+\bigg[\left(\tfrac{64}{81}+\tfrac{8}{3}-\tfrac{16}{3}\right)-\left(\tfrac{1}{3}+\tfrac{3}{2}-4\right)\bigg]=\color{red}\boxed{\frac{1235}{162}}$
Does this make sense?
--Chris
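Not from the original thread, but if you want to sanity-check the result numerically, a short quadrature (a Python sketch, with scipy assumed available) reproduces $\frac{1235}{162}\approx 7.6235$:

```python
from scipy.integrate import quad

f = lambda x: 4 - 3*x   # the line, on top over [-1, 1]
g = lambda x: x**2      # the parabola, on top over [1, 4/3]

area1, _ = quad(lambda x: f(x) - g(x), -1, 1)
area2, _ = quad(lambda x: g(x) - f(x), 1, 4/3)
print(area1 + area2, 1235/162)   # both print about 7.6235
```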
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9390056133270264, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/126028/distance-between-parametric-function-and-a-point/126032
|
# Distance between parametric function and a point
Given a parametric function $$\begin{align*} x &= x_o + v_x t + \frac{1}{2} a_x t^2\\ y &= y_o + v_y t + \frac{1}{2} a_y t^2\end{align*}$$
and point $P$ at $(c, d)$ , find all point(s) on the function that are distance $R$ from point $P$, assuming that $R > 0$ and none of the constants equal $0$.
This is part of a programming project (that I'm doing on my own, not for homework), so the answer needs to be obtainable using only expressions. Graphically solving the problem won't be possible.
Thanks in advance.
-
Welcome to math.SE. If this is a homework, then please add `(homework)` tag. – user2468 Mar 29 '12 at 19:24
Not homework. Personal project, for fun. – Jon W Mar 30 '12 at 18:52
## 1 Answer
I'm not sure if `(homework)`.
Hint:
I'm assuming that we're given $R, c, d, x_0, v_x, a_x, y_0, v_y, a_y.$
The squared distance between $P = (c,d)$ and $(x,y)$ is given by $$R^2 = (x - c)^2 + (y - d)^2 \tag{1}$$ Substitute the definition of $$x = x_o + v_x * t + \frac{1}{2} * a_x * t^2 \tag{2} \\ y = y_o + v_y * t + \frac{1}{2} * a_y * t^2$$ in $(1)$
You're left with a polynomial in $t.$ Solve for $t,$ this will give you different values of $t = \{t_1, \ldots, t_4\}.$ Substitute each $t_i$ in $(2)$ to get different $(x,y)$ points.
Update # 1:
I'm too lazy to LaTeX the following equation:
````
(0.25*ay^2 + 0.25*ax^2)*t^4 + (vy*ay + vx*ax)*t^3
+ (vx^2 + vy^2 + ax*(xo - c) + ay*(yo - d))*t^2
+ (2*vx*(xo - c) + 2*vy*(yo - d))*t
+ (xo - c)^2 + (yo - d)^2 - R^2 = 0
````
Anyways, that's the degree $4$ polynomial in $t.$ To find closed-form expressions for the roots $$\{t_1 = \cdots, t_2 = \cdots, t_3 = \cdots, t_4 = \cdots\},$$ you will need to do a lot of algebra on this polynomial. For example this.
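If a numerical answer is acceptable for the program instead of the closed-form quartic formulas, one option (a sketch, not part of the original answer) is to hand the coefficients above to a polynomial root finder and keep the real roots:

```python
import numpy as np

def points_at_distance(R, c, d, xo, vx, ax, yo, vy, ay):
    """(x, y) points on the trajectory at distance R from P = (c, d)."""
    # Quartic coefficients in t, highest degree first (same expansion as above)
    coeffs = [
        0.25 * (ax**2 + ay**2),
        vx*ax + vy*ay,
        vx**2 + vy**2 + ax*(xo - c) + ay*(yo - d),
        2*vx*(xo - c) + 2*vy*(yo - d),
        (xo - c)**2 + (yo - d)**2 - R**2,
    ]
    points = []
    for t in np.roots(coeffs):
        if abs(t.imag) < 1e-9:               # keep only (numerically) real roots
            t = t.real
            x = xo + vx*t + 0.5*ax*t**2
            y = yo + vy*t + 0.5*ay*t**2
            points.append((x, y))
    return points

# Hypothetical example values, just to show the call
print(points_at_distance(R=5, c=0, d=0, xo=1, vx=2, ax=1, yo=1, vy=-1, ay=0.5))
```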
-
The problem is that because this is for a program, each possible value of $t$ must be defined as a single expression ($t1 = ..., t2 = ...$) using $R,c,d,x_0,v_x,a_x,y_0,v_y,a_y$. The numerical values for these constants are unknown, since the values will vary when this is used in the program. The problem that I'm facing is that when I take the expanded polynomial and solve for $t$ using my graphing calculator, it is unable to find a solution. – Jon W Mar 30 '12 at 18:46
@JonW I added some more comments to my answer,. – user2468 Mar 30 '12 at 19:00
– Jon W Mar 30 '12 at 19:09
@JonW Indeed. The big expression above gives you $A, B, C, D, E.$ You're on the right path. – user2468 Mar 30 '12 at 19:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8416869640350342, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/189307-linear-equation-matrices.html
|
Thread:
1. Linear Equation and matrices.
It's been ten years since I learned these equations. Recently my boss gave me a question and asked for help, and I don't have any idea how to work through these equations any more; I guess all that knowledge has gone back to my teacher.
Therefore, I hope someone here can give me a little help with it. Thank you.
Part 1
Writing those in linear equation,
a) x-y+z = 3
b) 4x-3y-z = 6
c) 3x +y+2z = 23
in the form of Ax=b. State the size of matrix A and its main diagonal elements
Thanks to Prove It Part 1 Solved
Part 2
Using the elementary row operations (ERO) on the augmented matrix A|b, solve the system of linear equations above.
<--- Tried to solve it but couldn't get it right, need help
2. Re: Linear Equation
Originally Posted by gilagila
It's been ten years since I learned these equations. Recently my boss gave me a question and asked for help, and I don't have any idea how to work through these equations any more; I guess all that knowledge has gone back to my teacher.
Therefore, I hope someone here can give me a little help with it. Thank you.
Writing those in linear equation,
a) x-y+z = 3
b) 4x-3y-z = 6
c) 3x +y+2z = 23
in the form of Ax=b. State the size of matrix A and its main diagonal elements
$\displaystyle \mathbf{A}$ is the matrix which consists of all the coefficients of your variables, $\displaystyle \mathbf{x}$ is a column vector containing your variables, and $\displaystyle \mathbf{b}$ is a column vector containing the values on the right-hand-side of the equation.
So do you think you can write down what values are in the matrices?
3. Re: Linear Equation
Originally Posted by Prove It
$\displaystyle \mathbf{A}$ is the matrix which consists of all the coefficients of your variables, $\displaystyle \mathbf{x}$ is a column vector containing your variables, and $\displaystyle \mathbf{b}$ is a column vector containing the values on the right-hand-side of the equation.
So do you think you can write down what values are in the matrices?
Not sure if it is correct or not, let me try something.
a) x-y+z = 3
b) 4x-3y-z = 6
c) 3x +y+2z = 23
if A has rows (1, -1, 1), (4, -3, -1), (3, 1, 2), with x = (x, y, z)^T and b = (3, 6, 23)^T,
so the size of matrix A is 3 x 3 (a square matrix) and the main diagonal elements are 1, -3, 2
am i right ?
4. Re: Linear Equation
Originally Posted by gilagila
Not sure if it is correct or not, let me try something.
a) x-y+z = 3
b) 4x-3y-z = 6
c) 3x +y+2z = 23
if A has rows (1, -1, 1), (4, -3, -1), (3, 1, 2), with x = (x, y, z)^T and b = (3, 6, 23)^T,
so the size of matrix A is 3 x 3 (a square matrix) and the main diagonal elements are 1, -3, 2
am i right ?
I know what you have done, just to make it more readable...
$\displaystyle \mathbf{A} = \left[\begin{matrix}1 & -1 & \phantom{-}1 \\ 4 & -3 & -1 \\ 3 & \phantom{-}1 & \phantom{-}2\end{matrix}\right]$
$\displaystyle \mathbf{x} = \left[\begin{matrix} x \\ y \\ z \end{matrix}\right]$
$\displaystyle \mathbf{b} = \left[\begin{matrix} 3 \\ 6 \\ 23 \end{matrix}\right]$
And you are correct that the elements in the main diagonal are 1, -3 and 2
5. Re: Linear Equation
Originally Posted by Prove It
I know what you have done, just to make it more readable...
$\displaystyle \mathbf{A} = \left[\begin{matrix}1 & -1 & \phantom{-}1 \\ 4 & -3 & -1 \\ 3 & \phantom{-}1 & \phantom{-}2\end{matrix}\right]$
$\displaystyle \mathbf{x} = \left[\begin{matrix} x \\ y \\ z \end{matrix}\right]$
$\displaystyle \mathbf{b} = \left[\begin{matrix} 3 \\ 6 \\ 23 \end{matrix}\right]$
And you are correct that the elements in the main diagonal are 1, -3 and 2
I'm new to this forum; just want to ask how you type those matrices? Using what function / icon in the edit area?
6. Re: Linear Equation
it's called latex, there is a tutorial here. to use it, you use the "tex" tags. for example:
"[tex ]x^2 + x + \sqrt{2}[ /tex]" produces $x^2 + x + \sqrt{2}$
(don't put the spaces after the tex or before the /tex)
when row-reducing, there are 3 operations we can perform:
1) switch two rows,
2) multiply one row by a non-zero number,
3) multiply one row by a non-zero number and add/subtract it from another row.
the idea is to try to get a leading 1 in each row, with each leading 1 to the right of the leading 1's in the rows above, and we want all 0's below each leading 1 (you can also try to have 0's above each leading 1 as well, which brings the matrix into reduced row echelon form). we may wind up with 0 rows at the bottom.
in your matrix, since we already have a leading 1 in the first row, the first step is to eliminate the rest of the first column. subtracting 4 times the 1st row from the second, and then subtracting 3 times the first row from the third, we get:
$\begin{bmatrix}1&-1&1&3\\0&1&-5&-6\\0&4&-1&14 \end{bmatrix}$
(this is the augmented matrix) can you continue?
7. Re: Linear Equation
Originally Posted by Deveno
it's called latex, there is a tutorial here. to use it, you use the "tex" tags. for example:
"[tex ]x^2 + x + \sqrt{2}[ /tex]" produces $x^2 + x + \sqrt{2}$
(don't put the spaces after the tex or before the /tex)
when row-reducing, there are 3 operations we can perform:
1) switch two rows,
2) multiply one row by a non-zero number,
3) multiply one row by a non-zero number and add/subtract it from another row.
the idea is to try to get a leading 1 in each row, with each leading 1 to the right of the leading 1's in the rows above, and we want all 0's below each leading 1 (you can also try to have 0's above each leading 1 as well, which brings the matrix into reduced row echelon form). we may wind up with 0 rows at the bottom.
in your matrix, since we already have a leading 1 in the first row, the first step is to eliminate the rest of the first column. subtracting 4 times the 1st row from the second, and then subtracting 3 times the first row from the third, we get:
$\begin{bmatrix}1&-1&1&3\\0&1&-5&-6\\0&4&-1&14 \end{bmatrix}$
(this is the augmented matrix) can you continue?
thanks for the help. I just did it in MS Word and made it into a picture; somehow I don't know where it is wrong, I cannot get the answer right. Please advise. TQ
8. Re: Linear Equation
Originally Posted by Deveno
in your matrix, since we already have a leading 1 in the first row, the first step is to eliminate the rest of the first column. subtracting 4 times the 1st row from the second, and then subtracting 3 times the first row from the third, we get:
$\begin{bmatrix}1&-1&1&3\\0&1&-5&-6\\0&4&-1&14 \end{bmatrix}$
(this is the augmented matrix) can you continue?
when doing 4R1 - R2, row 2 should be 4(1)-4, 4(-1)-(-3), 4(1)-(-1), 4(3)-6 = 0, -1, 5, 6, so why is your no. 2 figure +1 and not -1? and why is your no. 4 figure -6 and not +6?
9. Re: Linear Equation
there are different ways to row-reduce, you used slightly different operations than i did, but you can see that you (eventually) wound up with the same 2nd row as me, and the negative of the 3rd row.
the error you made in your row-reduction is in the very last step, instead of adding 8 to -3 (in the 4th column), you added 8 to 3, getting 11, instead of the 5 you should have gotten.
you may check that x = 5, y = 4, z = 2 is indeed a solution to your original equations.
10. Re: Linear Equation
Originally Posted by Deveno
there are different ways to row-reduce, you used slightly different operations than i did, but you can see that you (eventually) wound up with the same 2nd row as me, and the negative of the 3rd row.
the error you made in your row-reduction is in the very last step, instead of adding 8 to -3 (in the 4th column), you added 8 to 3, getting 11, instead of the 5 you should have gotten.
you may check that x = 5, y = 4, z = 2 is indeed a solution to your original equations.
oops..ya..finally i got the answer. Thanks for the tip on the mistake on last row deduction.
This problem has been solved. Thank you all.
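For anyone reading this later: the whole elimination can also be checked in a few lines of code (a sketch using Python/NumPy, not something the original question asks for):

```python
import numpy as np

A = np.array([[1, -1,  1],
              [4, -3, -1],
              [3,  1,  2]], dtype=float)
b = np.array([3, 6, 23], dtype=float)

print(np.linalg.solve(A, b))   # [5. 4. 2.]  i.e. x = 5, y = 4, z = 2
```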
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271416068077087, "perplexity_flag": "middle"}
|
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Laws_of_Kepler
|
# Kepler's laws of planetary motion
(Redirected from Laws of Kepler)
Johannes Kepler's primary contributions to astronomy/astrophysics were the three laws of planetary motion. Kepler derived these laws, in part, by studying the observations of Brahe. Isaac Newton would later design his laws of motion and universal gravitation and verify that Kepler's laws could be derived from them. The generic term for an orbiting object is "satellite".
## Kepler's laws of planetary motion
• Kepler's first law (1609): The orbit of a planet about a star is an ellipse with the star at one focus.
• Kepler's second law (1609): A line joining a planet and its star sweeps out equal areas during equal intervals of time.
• Kepler's third law (1618): The square of the sidereal period of an orbiting planet is directly proportional to the cube of the orbit's semimajor axis.
## Kepler's first law
The orbit of a planet about a star is an ellipse with the star at one focus.
There is no object at the other focus of a planet's orbit. The semimajor axis, a, is half the major axis of the ellipse. In some sense it can be regarded as the average distance between the planet and its star, but it is not the time average in a strict sense, as more time is spent near apocentre than near pericentre.
### Connection with Newton's laws
Newton proposed that "every object in the universe attracts every other object along a line of the centers of the objects proportional to each object's mass, and inversely proportional to the square of the distance between the objects."
This section proves that Kepler's first law is consistent with Newton's laws of motion. We begin with Newton's law F=ma:
$m\frac{d^2\mathbf{r}}{dt^2} = f(r)\widehat{\mathbf{r}}$
Here we express F as the product of its magnitude and its direction. Recall that in polar coordinates:
$\frac{d\mathbf{r}}{dt} = \dot r\widehat{\mathbf{r}} + r\dot\theta\widehat{\theta}$
$\frac{d^2\mathbf{r}}{dt^2} = (\ddot r - r\dot\theta^2)\widehat{\mathbf{r}} + (r\ddot\theta + 2\dot r \dot\theta)\widehat{\theta}$
In component form we have:
$m(\ddot r - r\dot\theta^2) = f(r)$
$m(r\ddot\theta + 2\dot r\dot\theta) = 0$
Now consider the angular momentum:
$\mathbf{L} = \left|\mathbf{r} \times m\frac{d\mathbf{r}}{dt}\right| = \left|mr^2\dot\theta\right|$
So:
$r^2\dot\theta = \ell$
where $\ell=L/m$ is the angular momentum per unit mass. Now we substitute. Let:
$r = \frac{1}{u}$
$\dot r = -\frac{1}{u^2}\dot u = -\frac{1}{u^2}\frac{d\theta}{dt}\frac{du}{d\theta}= -\ell\frac{du}{d\theta}$
$\ddot r = -\ell\frac{d}{dt}\frac{du}{d\theta} = -\ell\dot\theta\frac{d^2u}{d\theta^2}= -\ell^2u^2\frac{d^2u}{d\theta^2}$
The equation of motion in the $\hat{\mathbf{r}}$ direction becomes:
$\frac{d^2u}{d\theta^2} + u = - \frac{1}{m\ell^2u^2}f\left(\frac{1}{u}\right)$
Newton's law of gravitation states that the central force is inversely proportional to the square of the distance so we have:
$\frac{d^2u}{d\theta^2} + u = \frac{k}{m\ell^2}$
where k is our proportionality constant.
This differential equation has the general solution:
$u = A\cos(\theta-\theta_0) + \frac{k}{m\ell^2}.$
Rewriting in terms of $r = 1/u$ and letting $\theta_0=0$:
$r = \frac{1}{A\cos\theta + \frac{k}{m\ell^2}}$.
This is indeed the equation of a conic section with the origin at one focus. Q.E.D.
## Kepler's second law
A line joining a planet and its star sweeps out equal areas during equal intervals of time.
This is also known as the law of equal areas. Suppose a planet takes 1 day to travel from points A to B. During this time, an imaginary line, from the Sun to the planet, will sweep out a roughly triangular area. This same amount of area will be swept every day.
As a planet travels in its elliptical orbit, its distance from the Sun will vary. As an equal area is swept during any period of time and since the distance from a planet to its orbiting star varies, one can conclude that in order for the area being swept to remain constant, a planet must vary in velocity. Planets move fastest when at perihelion and slowest when at aphelion.
This law was developed, in part, from the observations of Brahe that indicated that the velocity of planets was not constant.
This law corresponds to the angular momentum conservation law in the given situation.
### Proof of Kepler's second law:
Assuming Newton's laws of motion, we can show that Kepler's second law is consistent. By definition, the angular momentum $\mathbf{L}$ of a point mass with mass m and velocity $\mathbf{v}$ is :
$\mathbf{L} \equiv m \mathbf{r} \times \mathbf{v}$.
where $\mathbf{r}$ is the position vector of the particle.
Since $\mathbf{v} = \frac{d\mathbf{r}}{dt}$, we have:
$\mathbf{L} = \mathbf{r} \times m\frac{d\mathbf{r}}{dt}$
taking the time derivative of both sides:
$\frac{d\mathbf{L}}{dt} = \frac{d\mathbf{r}}{dt} \times m\frac{d\mathbf{r}}{dt} + \mathbf{r} \times m\frac{d^2\mathbf{r}}{dt^2} = \mathbf{r} \times \mathbf{F} = 0$
since the cross product of parallel vectors is 0: the velocity is parallel to itself, and the central force $\mathbf{F}$ is parallel to $\mathbf{r}$. We can now say that $|\mathbf{L}|$ is constant.
The area swept out by the line joining the planet and the sun, is half the area of the parallelogram formed by $\mathbf{r}$ and $d\mathbf{r}$.
$dA = \begin{matrix}\frac{1}{2}\end{matrix} |\mathbf{r} \times d\mathbf{r}| = \begin{matrix}\frac{1}{2}\end{matrix} \left|\mathbf{r} \times \frac{d\mathbf{r}}{dt}dt\right| = \frac{\mathbf{|L|}}{2m}dt$
Since $|\mathbf{L}|$ is constant, the rate at which area is swept out is also constant. Q.E.D.
## Kepler's third law (harmonic law)
The square of the sidereal period of an orbiting planet is directly proportional to the cube of the orbit's semimajor axis.
$P^2 \propto a^3$
P = object's sidereal period in years
a = object's semimajor axis, in AU
Thus, not only does the length of the orbit increase with distance, also the orbital speed decreases, so that the increase of the sidereal period is more than proportional.
See the actual figures: attributes of major planets.
Newton would modify this third law, noting that the period is also affected by the orbiting body's mass, however typically the central body is so much more massive that the orbiting body's mass may be ignored. (See below.)
## Applicability
The laws are applicable whenever a comparatively light object revolves around a much heavier one because of gravitational attraction. It is assumed that the gravitational effect of the lighter object on the heavier one is negligible. An example is the case of a satellite revolving around Earth.
## Application
Assume an orbit with semimajor axis a, semiminor axis b, and eccentricity ε. To convert the laws into predictions, Kepler began by adding the orbit's auxiliary circle (that with the major axis as a diameter) and defined these points:
• c center of auxiliary circle and ellipse
• s sun (at one focus of ellipse); $\mbox{length }cs=a\varepsilon$
• p the planet
• z perihelion
• x is the projection of the planet to the auxiliary circle; then $\mbox{area }sxz=\frac ab\mbox{area }spz$
• y is a point on the circle such that $\mbox{area }cyz=\mbox{area }sxz=\frac ab\mbox{area }spz$
and three angles measured from perihelion:
• true anomaly $T=\angle zsp$, the planet as seen from the sun
• eccentric anomaly $E=\angle zcx$, x as seen from the centre
• mean anomaly $M=\angle zcy$, y as seen from the centre
Then
area cxz = area cxs + area sxz = area cxs + area cyz
$\frac{a^2}2E=a\varepsilon\frac a2\sin E+\frac{a^2}2M$
giving Kepler's equation
$M=E-\varepsilon\sin E$.
To connect E and T, assume r = length sp then
$a\varepsilon+r\cos T=a\cos E$ and $r\sin T = b\sin E$
$r=\frac{a\cos E-a\varepsilon}{\cos T}=\frac{b\sin E}{\sin T}$
$\tan T=\frac{\sin T}{\cos T}=\frac ba\frac{\sin E}{\cos E-\varepsilon}=\frac{\sqrt{1-\varepsilon^2}\sin E}{\cos E-\varepsilon}$
which is ambiguous but useable. A better form follows by some trickery with trigonometric identities:
$\tan\frac T2=\sqrt\frac{1+\varepsilon}{1-\varepsilon}\tan\frac E2$
(So far only laws of geometry have been used.)
Note that area spz is the area swept since perihelion; by the second law, that is proportional to time since perihelion. But we defined $\mbox{area }spz=\frac ba\mbox{area }cyz=\frac ba\frac{a^2}2M$ and so M is also proportional to time since perihelion—this is why it was introduced.
We now have a connection between time and position in the orbit. The catch is that Kepler's equation cannot be rearranged to isolate E; going in the time-to-position direction requires an iteration (such as Newton's method) or an approximate expression, such as
$E\approx M+\left(\varepsilon-\frac18\varepsilon^3\right)\sin M+\frac12\varepsilon^2\sin 2M+\frac38\varepsilon^3\sin 3M$
via the Lagrange reversion theorem. For the small ε typical of the planets (except Pluto) such series are quite accurate with only a few terms; one could even develop a series computing T directly from M.[1]
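As an illustration of the iteration mentioned above (a Python sketch; the starting guess and tolerance are arbitrary choices), Newton's method applied to Kepler's equation converges in a few steps for planetary eccentricities:

```python
import math

def eccentric_anomaly(M, eps, tol=1e-12):
    """Solve Kepler's equation M = E - eps*sin(E) for E by Newton's method."""
    E = M if eps < 0.8 else math.pi      # simple starting guess
    while True:
        dE = (E - eps * math.sin(E) - M) / (1.0 - eps * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

M, eps = 1.0, 0.2                        # mean anomaly (radians), eccentricity
E = eccentric_anomaly(M, eps)
print(E, E - eps * math.sin(E))          # the second number reproduces M
```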
## Kepler's understanding of the laws
Kepler did not understand why his laws were correct; it was Isaac Newton who discovered the answer to this more than fifty years later. Newton, understanding that his third law of motion was related to Kepler's third law of planetary motion, devised the following:
$P^2 = \frac{4\pi^2}{G(m_1 + m_2)} \cdot a^3$
where:
• P = object's sidereal period
• a = object's semimajor axis
• G = 6.67 × 10⁻¹¹ N·m²/kg² = the gravitational constant
• m1 = mass of object 1
• m2 = mass of object 2
• π = mathematical constant pi
Astronomers doing celestial mechanics often use units of years, AU, G=1, and solar masses, and with m2<<m1, this reduces to Kepler's form. SI units may also be used directly in this formula.
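For example, plugging in rounded values for Earth's semimajor axis and the Sun's mass (a sketch in SI units, with the orbiting body's mass neglected) recovers a period of about one year:

```python
import math

G     = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
a     = 1.496e11      # m, Earth's semimajor axis (1 AU)

P = math.sqrt(4 * math.pi**2 * a**3 / (G * M_sun))
print(P / 86400, "days")   # about 365 days
```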
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 43, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276720285415649, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/5937/what-happens-if-an-rsa-key-pair-has-identical-public-and-private-exponents/5938
|
# What happens if an RSA key pair has identical public and private exponents?
Rather, is it possible for big prime numbers?
Classroom examples usually involve smaller primes, so for example if you are given a prime number pair $p = 3$, $q = 13$ you would get $n = 39$ and $e = d = 5$, making encryption and decryption the same for a message $x$ or ciphertext $x$.
This can be a big problem, e.g. if a server sends you cipher $x$, you can decrypt it and get the message, then you encrypt the cipher $x$ itself and if you get the message again then you know the public key is the same as the private key.
Is this possible in industry?
-
@JanDvorak Just comparing the modulus would do it. And searches on the internet have been performed. – owlstead Jan 6 at 22:22
## 1 Answer
No, the public and private exponents will never be the same for real (that is, not toy) RSA keys.
The public exponent is almost always deliberately chosen as a small value (with 65537, 3 and 17 being the most popular choices). In contrast, the private exponent will always be a huge value; always at least $(p-1)/e$ (where $p$ is the larger prime factor of the modulus, and $e$ is the public exponent), and will generally (almost always) be much larger than that. This implies that if you have a 1024-bit RSA key, and a public exponent of 65537, then the private exponent will be at least 495 bits long.
Even if you (for some unknown reason) select a random large public exponent, then it is still extremely unlikely. If we look into the conditions that allow $d = e$, we see that both of the following must hold:
$e^2 \bmod (p-1) = 1$
$e^2 \bmod (q-1) = 1$
Having even one of $e^2 \bmod (p-1)$ and $e^2 \bmod (q-1)$ to be 1 is extremely unlikely; having both conditions happen is not something to worry about.
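A quick toy check of these conditions (a Python sketch with the classroom-sized primes from the question, not a realistic key) shows how $e = d$ goes hand in hand with $e^2 \equiv 1$ modulo both $p-1$ and $q-1$:

```python
from math import gcd

def check(p, q, e):
    phi = (p - 1) * (q - 1)
    if gcd(e, phi) != 1:
        return None
    d = pow(e, -1, phi)                    # modular inverse (Python 3.8+)
    return d, (e * e) % (p - 1), (e * e) % (q - 1)

print(check(3, 13, 5))   # (5, 1, 1): d equals e, and e^2 = 1 mod (p-1) and mod (q-1)
```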
-
Thanks for your answer :) As a curious addendum, has there been a case where the public and private key ended up the same? If so, what happened? – Schwit Janwityanujit Jan 7 at 21:28
1
@SchwitJanwityanujit: I have never heard about such a case for a real RSA key. – poncho Jan 7 at 21:38
Could one create such a key pair intentionally? – Paŭlo Ebermann♦ Jan 12 at 21:11
1
@PaŭloEbermann: sure, if one wanted to. The obvious way would involve selecting $p$ and $q$ such that $p-1$ and $q-1$ have known factorization; that'd make selecting the value $e$ straightforward ($e = 1, -1 \bmod r$ for every prime power factor of $p-1$ or $q-1$). – poncho Jan 12 at 21:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9078013896942139, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/109603/list
|
## Return to Answer
3 edited body
I think that the explanation
"Because the Cartan classification of isomorphism classes of semisimples is discrete (no continuous families), connected components of the space of semisimples are always contained within isomorphism classes"
is a bit simplistic. The real reason, as many people have already mentioned, is Weyl's theorem on complete reducibility, which of course fails in characteristic $p$. And it shouldn't come as a surprise that over an algebraically closed field $K$ of characteristic $p>3$ one encounters situations where there are finitely many isoclasses of simple Lie algebras of dimension $N$ and, at the same time, there exist algebraic families of simple $N$-dimensional Lie algebras {$\mathfrak{g}_t|\ t\in K$} over $K$ such that $\mathfrak{g}_t\cong L$ for all $t\ne 0$ and $\mathfrak{g}_0\not\cong L$ for some simple Lie algebra $L$.
Indeed, let $N=p^2-2$. Then it follows from the classification theory that there are finitely many isoclasses of simple $N$-dimensional Lie algebras over $K$. Now consider the associative $K$-algebra $A$ generated by two elements $x,y$ such that $x^p=y^p=0$ and $[x,y]=1$. This is a fake modular version of the first Weyl algebra, and it is easy to see that it is simple and has dimension $p^2$. It has a finite increasing algebra filtration (with $x,y$ living in degree $1$) such that the corresponding graded algebra $P:={\rm gr}(A)$ is the truncated polynomial ring in $x,y$ with induced Poisson bracket satisfying {$x,y$} $=1.$ Then the Lie algebra $[A,A]/K1$ is isomorphic to $\mathfrak{psl}_p(K)$ whilst the Lie algebra {$P,P$}$/K1$ is nothing but the simple Cartan type Lie algebra $H(2;\underline{1})^{(2)}$. Both Lie algebras are simple of dimension $N$ and the latter is a contraction of the former. Moreover, they are not isomorphic when $p>3$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 99, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374114274978638, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/dipole+time-reversal
|
# Tagged Questions
### Power due to dipole radiation and time reversal symmetry in classical E&M
The dipole formula for the power loss emitted by a time varying electric dipole is (in natural units) $P = \frac{\dot d_i^2}{6 \pi}$. This is clearly even under time reversal symmetry $T$, but a ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9048359394073486, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/library.php?do=view_item&itemid=287
|
Physics Forums
virtual particles
Definition/Summary
Virtual particles are a mathematical device used in perturbation expansions of the S-operator (transition matrix) of an interaction in quantum field theory. No virtual particle physically appears in the interaction: all possible virtual particles, and their antiparticles, occur equally and together in the mathematics, and must be removed by integration over the values of their momenta. In the coordinate-space representation of a Feynman diagram, the virtual particles are on-mass-shell (realistic), but only 3-momentum is conserved at each vertex, not 4-momentum, so there is no immediate way of obtaining 4-momentum-conserving delta functions. In the momentum-space representation, the virtual particles are both on- and off-mass-shell (unrealistic), but 4-momentum is conserved at each vertex, and also round each loop (as shown by a delta function for each). In the coordinate-space representation, each virtual particle appears "as itself", but in the momentum-space representation, it is represented by a "propagator" (a function of its 4-momentum).
Equations
Calculation for an "H"-shaped Feynman diagram for the interaction between an electron and a photon with given incoming and outgoing 4-momentums, and with "exchanged" 4-momentum $Q = (\boldsymbol{Q},E)$: The "centre part" of the transition probability is: $$\frac{1}{(2\pi)^3}\ \int\frac{d^3\boldsymbol{q}}{2\sqrt{\boldsymbol{q}^2\ +\ m^2}}\ \int\ d^3(\boldsymbol{x}_1-\boldsymbol{x}_2)\ \int\ d(t_1-t_2)\ \ e^{i(E(t_1-t_2)-\boldsymbol{Q}\cdot(\boldsymbol{x}_1-\boldsymbol{x}_2))}\ e^{i\boldsymbol{q}\cdot(\boldsymbol{x}_1-\boldsymbol{x}_2)}$$ $$\times\ \left(\theta(t_1-t_2)\ e^{-i\sqrt{\boldsymbol{q}^2\ +\ m^2}(t_1-t_2)}\ (\gamma_{i}\boldsymbol{q}^{i}+ \gamma_{0}\sqrt{\boldsymbol{q}^2\ +\ m^2}+im)\right.$$ $$\left. +\ \theta (t_2-t_1)\ e^{i\sqrt{\boldsymbol{q}^2 \ +\ m^2}(t_1-t_2)}\ (\gamma_i\boldsymbol{q}^i+ \gamma_{0}\sqrt{\boldsymbol{q}^2\ +\ m^2}-im)\right)$$ (the integral is over all virtual electrons with 3-momentum $\boldsymbol{q}$ created on the left of the "H", and all virtual positrons with 3-momentum $\boldsymbol{q}$ created on the right, and $\theta(t)\ =\ 1\text{ if }t > 0\ \text{ but }=\ 0\text{ if }t < 0$) $$=\ \frac{1}{(2\pi)^3}\ \int\frac{d^3\boldsymbol{q}}{2\sqrt{\boldsymbol{q}^2\ +\ m^2}}\ \int\ d^3(\boldsymbol{x}_1-\boldsymbol{x}_2)\ e^{i((\boldsymbol{q}-\boldsymbol{Q})\cdot(\boldsymbol{x}_1-\boldsymbol{x}_2))}\ \int\ d(t_1-t_2)$$ $$\times\ \left(\theta(t_1-t_2)\ e^{i(E-\sqrt{\boldsymbol{q}^2\ +\ m^2})(t_1-t_2)}\ (\gamma_{i}\boldsymbol{q}^{i}+ \gamma_{0}\sqrt{\boldsymbol{q}^2\ +\ m^2}+im)\right.$$ $$\left.+\ \theta(t_2-t_1)\ e^{i(E+\sqrt{\boldsymbol{q}^2\ +\ m^2})(t_1-t_2)}\ (\gamma_{i}\boldsymbol{q}^{i}- \gamma_{0}\sqrt{\boldsymbol{q}^2\ +\ m^2}+im)\right)$$ (where in the terms with $\theta(t_2-t_1)$ we have replaced $\boldsymbol{q}\text{ and }d^3\boldsymbol{q}$ by $-\boldsymbol{q}\text{ and }-d^3\boldsymbol{q}$) $$=\ \lim_{\varepsilon\rightarrow 0+}\frac{1}{(2\pi)^3}\ \int\frac{d^3\boldsymbol{q}}{2\sqrt{\boldsymbol{q}^2\ +\ m^2}}\ \delta^3(\boldsymbol{q},\boldsymbol{Q})\ \int\ d(t_1-t_2)$$ $$\times\ \frac{-1}{2\pi i}\ \left(\ (\gamma_i\boldsymbol{q}^i+ \gamma_0\sqrt{\boldsymbol{q}^2\ +\ m^2}+im)\ \int\frac{e^{i(E\ -\ \sqrt{\boldsymbol{q}^2\ +\ m^2}\ -\ s)(t_1-t_2)}}{s\ +\ i\varepsilon}\,ds\right.$$ $$\left.+\ \ (\gamma_i\boldsymbol{q}^i-\gamma_0\sqrt{\boldsymbol{q}^2\ +\ m^2}+im)\ \int\frac{e^{i(E\ +\ \sqrt{\boldsymbol{q}^2\ +\ m^2}\ +\ s)(t_1-t_2)}}{s\ +\ i\varepsilon}\,ds\right)$$ (here, a fictional energy variable, $s$, has been introduced, enabling $\theta(t)$ to be replaced by $\lim_{\varepsilon\rightarrow 0+}(-1/2\pi i)\int e^{-ist}ds/(s+i\varepsilon)$) $$=\ \lim_{\varepsilon\rightarrow 0+}\frac{1}{(2\pi)^3}\ \frac{1}{2\sqrt{\boldsymbol{Q}^2\ +\ m^2}}$$ $$\times\ \frac{-1}{2\pi i}\ \left(\ (\gamma_i\boldsymbol{Q}^i+ \gamma_0\sqrt{\boldsymbol{Q}^2\ +\ m^2}\ +\ im)\ \int\frac{\delta(s,E- \sqrt{\boldsymbol{Q}^2\ +\ m^2})}{s\ +\ i\varepsilon}\,ds\right.$$ $$\left.+\ \ (\gamma_i\boldsymbol{Q}^i-\gamma_0\sqrt{\boldsymbol{Q}^2\ +\ m^2}\ +\ im)\ \int\frac{\delta(s,-E-\sqrt{\boldsymbol{Q}^2\ +\ m^2})}{s\ +\ i\varepsilon}\,ds\right)$$ $$=\ \lim_{\varepsilon\rightarrow 0+}\frac{-1}{(2\pi)^4\,i}\ \frac{\gamma_i\boldsymbol{Q}^i\ +\ im}{2\sqrt{\boldsymbol{Q}^2\ +\ m^2}}\left(\frac{1}{E-\sqrt{\boldsymbol{Q}^2\ +\ m^2}\ +\ i\varepsilon}\ +\ \frac{1}{-E-\sqrt{\boldsymbol{Q}^2\ +\ m^2}\ -\ i\varepsilon}\right)$$ $$+\ \frac{-1}{(2\pi)^4\,i}\ \frac{\gamma_0}{2}\left(\frac{1}{E-\sqrt{\boldsymbol{Q}^2\ +\ m^2}\ +\ i\varepsilon}\ -\ \frac{1}{-E-\sqrt{\boldsymbol{Q}^2\ +\ m^2}\ -\ 
i\varepsilon}\right)$$ $$=\ \frac{1}{(2\pi)^4\,i}\ \frac{\gamma_i\boldsymbol{Q}^i\ +\ \gamma_0E\ +\ im}{\boldsymbol{Q}^2\ -\ E^2\ +\ m^2}\ =\ \frac{1}{(2\pi)^4}\ \frac{-i\,Q\hspace{-1.0ex}/\ +\ m}{2mk_0}$$ where $k_0$ is the (non-zero!) final energy of the photon, measured in the reference frame in which the initial electron is stationary, and $Q\hspace{-1.0ex}/\ =\ \gamma_{\mu}Q^{\mu}\ =\ \gamma_i\boldsymbol{Q}^i\ +\ \gamma_0E$.
Scientists
Richard Feynman (1918-1988) Gian-Carlo Wick (1909-1992) Freeman Dyson (1923-)
Breakdown
Physics > Quantum >> Relativistic Waves & Fields
See Also
Feynman diagram
Extended explanation
Dyson (perturbation) expansion: The nth order of the Dyson expansion of the S-operator includes a time-ordered product of n copies of the Hamiltonian, evaluated at n different 4-positions (events): $T\{H(x_1)\cdots H(x_N)\}$ "Time-ordered" means that the copies are re-arranged in order of their t-components, with the earliest on the right. For example, if $t_3>t_1>t_2$, then $T\{H(x_1)H(x_2)H(x_3)\}\ =\ H(x_3)H(x_1)H(x_2)$ Although time-ordering is generally not Lorentz-invariant, it is for non-spacelike pairs of events, and therefore is for Hamiltonians, provided that they commute at spacelike pairs: $H(x_1)H(x_2)\ =\ H(x_2)H(x_1)\text{ if }(x_1-x_2)^2>0$: see Weinberg, (3.5.13-14) Each copy is the sum (integral) of products of (usually) three operators: they are the creation or annihilation operator for three particles of fixed type (for example, two electrons or positrons, and a photon). These types must be one particle which is not its own anti-particle (with two operators), such as an electron, and one particle which is (with one operator), such as a photon, or three particles which are. This sum is over every possible 3-momentum for each particle and its anti-particle, each evaluated at the same 4-position: $H(x_n)\ =\ \int \boldsymbol{a}(m,\boldsymbol{p},x_n) \boldsymbol{a}(m',\boldsymbol{p}',x_n) \boldsymbol{a}(m'',\boldsymbol{p}'',x_n) \,d\boldsymbol{p}\,d \boldsymbol{p}'\,d \boldsymbol{p}''$ These particles are known as virtual particles. They are not created or destroyed in the actual interaction: they appear only in the mathematics. On-mass-shell virtual particles: These virtual particles are realistic: in other words, they can exist: after all, only particles which can exist can have creation or annihilation operators! Being realistic, each such particle is on-mass-shell (or "on-shell"): the energy is fixed by the 3-momentum: $E^2\ =\ \boldsymbol{p}^2\ +\ m^2$, where $m$ is the standard mass for that particle. Same on-mass-shell virtual particle at each end of every line: The two virtual particles (on-mass-shell), one with a creation operator on one side of an internal line in a Feynman diagram, and one with an annihilation operator on the other side, must be the same. This is because a product $H(x_1)\cdots H(x_N)$ is zero unless every creation operator at one 4-position is "matched" by its own annihilation operator at another 4-position: these matches are represented by internal lines joining each such pair of points. By definition, therefore, a Feynman diagram must have each internal line "matched". So if the 3-momentum is $\boldsymbol{q}$ on one side, it must be $-\boldsymbol{q}$ on the other side. "Phase" at each vertex: Each particle at a vertex $x_n$ is associated with a "phase" $e^{ip\cdot x_n}$, where $p$ is its 4-momentum. Each vertex is associated with a different value of $x_n$ (a 4-vector), and the three particles (literally) connected to that vertex all share that value, so that all three produce a combined "phase" such as $e^{i(p\ -\ k'\ -\ q)\cdot x_n}$. Dirac Delta functions: The importance of the combined "phase" at each vertex is that it may be replaced by a Dirac delta function, provided that it is integrated over all possible values of $x_n$. This is because the "phase" oscillates if the factor $(p\ -\ k'\ -\ q)$ is non-zero, but is a constant ($1$) if the factor is zero (for all values of x). 
The oscillations make the integral zero if $(p\ -\ k'\ -\ q)$ is non-zero, and the constant, $1$, makes the integral the same as if the "phase" was not there if $(p\ -\ k'\ -\ q)$ is zero, apart from a factor of $2\pi$. Therefore, the "phase", and the integral over $x_n$, can both be replaced by the symbol $\delta(p\ -\ k'\ -\ q)$ (a Dirac delta function), which has the effect of eliminating all versions of the diagram except those in which the total momentum at the vertex is zero. This is another way of saying that 4-momentum is conserved at the vertex. Unfortunately , in the coordinate-space representation, the diagram cannot be integrated over all possible values of x, because values of the t-component of x which are less than the t-component of y (the value for the other vertex) are not allowed for virtual particles (and values that are greater are not allowed for virtual anti-particles). For each allowed value of the t-component, all values of the other three ("spatial") components are allowed, and so the 3-momentum (only) is conserved, but this is of no immediate use. Only when this time-ordering restriction is removed (by the mathematical trick of including unrealistic particles) can the diagram be integrated over all possible values of x and y, thereby conserving 4-momentum. Dirac delta function is not a function The Dirac delta function is not a function but a distribution. It only makes sense in the middle of an integral: it reduces the number of variables to be integrated over, while imposing constraints on the eliminated variables. For details, see this thread. Coordinate-space representation: Each internal line $x_jx_i$ in the coordinate-space representation represents the creation and annihilation operators of every possible particle and anti-particle. These particles and anti-particles are realistic (on-mass-shell): each obeys the self-energy-momentum equation $E^2\ =\ \boldsymbol{p}^2\ +\ m^2$. The particles are created at $x_i$ and annihilated, later, at $x_j$. The anti-particles are created at $x_j$ and annihilated, earlier, at $x_i$. It is arbitrary which we call a particle and which an anti-particle: we usually call the positron an anti-particle, but we can call the electron an anti-particle instead, if we adjust the Hamiltonian accordingly. A photon, of course, is its own anti-particle. They are all virtual in the sense that none of them is actually created and annihilated: the diagram is a mathematical device, and must be integrated (summed) over the 3-momentum of every possible realistic particle and anti-particle, and also over all time-ordered values of $x_i$ and $x_j$. Momentum-space representation: The trick which removes the time-ordering restriction is the introduction of a new phase, combining the time coordinate with a new energy variable $s$: together with the original 3-momentum variable(s), this gives a new four-component variable $q \ =\ (\boldsymbol{q},s)$ which behaves as a 4-momentum, and appears in a combined phase of the form $e^{i(q-Q)\cdot(x_j-x_i)}$, which disappears when integrated over $x_j-x_i$, to be replaced by a delta function $\delta(q,Q)$, multiplied by a propagator (a function of $q$). For a simple example, see Equations above, in which the propagator is $(-i\,q\hspace{-0.8ex}/\ +\ m)/(q^2\ +\ m^2\ -\ i\varepsilon)$. This case is simpler than usual, since the delta functions in this case eliminate the need to integrate over $q$. 
An "H"-diagram has been chosen, rather than the similar "stick-man" diagram, since it involves a virtual electron rather than a virtual photon: this is partly to avoid the complications of gauge theory, and partly to emphasise that the electromagnetic interaction is not mediated solely by virtual photons. A different variable 4-momentum $q$ is assigned to each internal line. Since they do not have the standard mass appropriate to their line (they have every possible mass), they are called off-mass-shell, and may also be considered the 4-momentums of a virtual particle. Again, these are virtual in the sense that none of them is actually created and annihilated: indeed, most of them (unlike the coordinate-space virtual particles) could not exist, since they do not have the mass of an actual particle. The diagram must be now integrated (summed) over the 4-momentum of every realistic and unrealistic particle and anti-particle, but not over the coordinate values, $x_1,x_2,\cdots x_n$, since these have been replaced by delta functions. This elimination of the coordinates, and of the need to integrate over them, justifies the change of name from "coordinate-space representation" to "momentum-space representation". Casimir effect: Would someone like to contribute a comment on the Casimir effect? Avoiding off-mass-shell virtual particles: In principle, the mathematical device of off-mass-shell virtual particles (and the whole momentum-space representation) can always be avoided, but in practice the calculations will usually be horrendous. In the simple example above, however, we could easily have avoided them by using the substitutions $t = (E-\sqrt{\boldsymbol{q}^2\ +\ m^2})(t_1-t_2)$ and $t = (E+\sqrt{\boldsymbol{q}^2\ +\ m^2})(t_1-t_2)$ at the beginning, giving: $$\int\ dt\ \times\ \left(\theta(t)\ e^{it}\ (\gamma_{i}\boldsymbol{q}^{i}+ \gamma_{0}\sqrt{\boldsymbol{q}^2\ +\ m^2}+im)\left/(E-\sqrt{\boldsymbol{q}^2\ +\ m^2})\right.\right.$$ $$\left.+\ \theta(-t)\ e^{it}\ (\gamma_{i}\boldsymbol{q}^{i}- \gamma_{0}\sqrt{\boldsymbol{q}^2\ +\ m^2}+im)\left/(E+\sqrt{\boldsymbol{q}^2\ +\ m^2})\right.\right)$$ and then using $\theta(t)+\theta(-t) = 1$ and so $∫(Ae^{it}\theta(t)+Be^{it}\theta(-t))dt = (A+B)/2$, thus achieving the same final result.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 17, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9047924876213074, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/45055/hamiltonian-in-position-basis
|
# Hamiltonian in position basis
Let $H = \frac{-h^2}{2m}\frac{\partial^2 }{\partial x^2}$. I want to find the matrix elements of $H$ in position basis. It is written like this:
$\langle x \mid H \mid x' \rangle = \frac{-h^2}{2m}\frac{\partial^2}{\partial x^2} \delta(x -x')$.
How do we get this? are we allowed to do $\langle x | \frac{\partial^2}{\partial x^2} \mid x' \rangle = \frac{\partial^2}{\partial x^2} \langle x \mid x' \rangle$? Why? It seems some thing similar is done above.
-
That's exactly what happens. Except since it's a continuous variable, the delta function appears as $\delta (x - x')$ and not $\delta_{x,x'}$ – Kitchi Nov 25 '12 at 11:17
1
@Kitchi But how can we take the differential operator out? – DurgaDatta Nov 25 '12 at 12:29
3
The partial derivative is taken with respect to $x$, but you're acting it on $x'$, which is independent of $x$. So you can take it out, and the result will be your answer. – Kitchi Nov 25 '12 at 12:36
@Kitchi How would we argue if the LHS was complex conjugate i.e <x'|H|x>, of the original one? – DurgaDatta Nov 25 '12 at 12:52
## 4 Answers
You're given $$H=-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}$$ This is an operator, so it acts on functions of x $$H\psi(x) = -\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2}$$
The LHS is just the inner product of $\langle x|$ with the new state $H|\psi \rangle$, and on the RHS, $\psi(x)$ is just the inner product of $\langle x|$ with $|\psi \rangle$, so $$\langle x|H|\psi \rangle = -\frac{\hbar^2}{2m}\frac{d^2\langle x|\psi \rangle}{dx^2}$$ Substitute the position eigenstate $|x' \rangle$ for $|\psi \rangle$ and the result follows.
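One way to make this concrete (a numerical sketch, assuming $\hbar = m = 1$ and a finite grid, so the delta function becomes its finite-difference analogue) is to build the free-particle $H$ explicitly in a discretized position basis:

```python
import numpy as np

N, L = 200, 10.0
dx = L / N
x = np.linspace(-L/2, L/2, N)

# <x|H|x'> for the free particle: -(1/2) d^2/dx^2 as a tridiagonal matrix
H = -0.5 * (np.diag(np.full(N, -2.0)) +
            np.diag(np.ones(N - 1), 1) +
            np.diag(np.ones(N - 1), -1)) / dx**2

psi = np.exp(-x**2)          # a sample wavefunction on the grid
print((H @ psi)[:5])         # approximates -(1/2) psi''(x) at the first grid points
```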
-
Indeed, the operator H is the sum of the kinetic energy operator and the potential energy operator. The kinetic energy operator is $$T =\frac{P^2}{2m}$$ and the matrix elements of $P^2$ in the x basis are $$\langle x|P^2|x' \rangle = -\hbar^2 \frac{d^2}{dx^2}\,\delta(x-x')$$
-
+1. If you do not already know the Hamiltonian's representation in the position basis, this is the way to do it. Break down the kinetic energy operator into momentum, and then, if you must, break down the position eigenstates into the momentum basis. When you have nothing left but $\langle x | p \rangle$ and such terms, you do have to know that, at least, but they're Fourier duals, so you should know that by now. – Muphrid Dec 26 '12 at 20:07
The key here is that you can think of $\frac{\partial^2}{\partial x^2}$ as an operator acting on the $x$ variable - which, happily, does not occur in $| x^\prime \rangle$. Hence it is possible to move the operator out of the bracket (otherwise, there would be a pesky multiplication rule kicking in).
A different approach would be to calculate the eigenvalues of $H$ in the position basis rather than the matrix elements, since $H$ is diagonal in this basis, this works and the two results are equal. We then have:
$$\langle x | H | x^\prime \rangle = E_x \Psi(x) \delta(x-x^\prime) = E_x \langle x | x^\prime \rangle = H \langle x | x^\prime \rangle$$ where:
• the first equality is due to the diagonality of $H$ in this basis.
• the second equality is true as we can represent the wave function of a particle by $\Psi(r) = \langle r | x \rangle$.
• the third equality is the eigenvalue equation for $H$.
-
1
H is diagonal in the basis of energy eigenstates. How do we know that H is also diagonal in position basis? – DurgaDatta Nov 25 '12 at 12:31
Good point, sorry, I was under the assumption that $H$, as it only depends on $\partial_x^2$, was also automatically diagonal in the position eigenbases (and the fact that a $\delta(x-x^\prime)$ appears supports said assumption). However, I currently don't have a conclusive argument as to why that would be necessarily true. – Claudius Nov 25 '12 at 12:37
If the matrix elements of your Hamiltonian are given by $\langle x|H|x'\rangle$.
This can be written as $\langle x|Hx'\rangle$, so if you take a complex conjugate, you end up with $\langle Hx'|x\rangle = \langle x'|H|x\rangle$, since $H$ is Hermitian. Therefore, the expression you have obtained is valid in both cases.
-
Warning: I'm still not 100% convinced that this is the right method. If I can confirm that this is right/wrong, I'll post back here. Till then, please treat this with caution. – Kitchi Nov 25 '12 at 14:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9560257792472839, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?t=613582
|
Physics Forums
## Why is quantum entanglement special?
Clearly, I am a newb at this. However:
I was reading a bit on qubits and quantum entanglement and -- though I know that QM has no analog to classical mechanics -- the general concept seems to be: two particles interact and become quantum entangled, and then have non-local ties to each other. To me, this sounds kind of like "An immensely tiny billiard ball spinning one direction along an arbitrary axis comes into contact with another stationary ball in a deserted place. Nobody has any idea which direction the balls are spinning in. The two balls are then given to two separate researchers (the researchers have special ideal-environment cubes to contain the balls during transport so they have ZERO effect on the balls). When a researcher determines which axis and direction one of the two billiard balls is spinning along, the other researcher is guaranteed to find that the other billiard ball is spinning along the same axis in the opposite direction.
But of COURSE the researcher was guaranteed to find that the other ball is tied to the first because we know that the first ball will change the state of the second in a known way (in this case, friction between the balls causes the originally-stationary one to move on the same axis as the first in an opposite direction). Seemingly, if we know how the balls came in contact and know that they were not affected in any way since the initial contact (the analog to quantum decoherence??), we can accurately predict the entire system based on one measurement.
I'm sure it's not that simple.... what did I miss? Why is quantum entanglement so surprising and magical?
Best,
mieubrisse
Recognitions: Homework Help That's what everybody thinks at first but the stats work out differently. Look up "Bell's Inequality". It is the degree of the correlation between entangled particles over repeated measurements that is special.
For a clear exposition of the entanglement weirdness I always recommend the "Quantum Mechanics in your Face" lecture by the inimitable Sid Coleman. It describes a simple thought experiment which nicely highlights the surprise quantum mechanical correlation.
## Why is quantum entanglement special?
mieubrisse, when most people hear about quantum entanglement, this is exactly what they believe the explanation to be. Einstein himself believed that this kind of "local hidden variables" explanation was enough to account for entanglement. But then JS Bell proved his famous theorem, which says that entanglement possesses certain properties that are incompatible with such explanation. Bell's Theorem and the reasoning behind it are absolutely fascinating. You can read about it in this excellent article:
http://quantumtantra.com/bell2.html
Quote by mieubrisse I'm sure it's not that simple.... what did I miss? Why is quantum entanglement so surprising and magical? Best, mieubrisse
Welcome to PhysicsForums, mieubrisse!
As already mentioned, this is the EPR view. Bell came later, and that is where the magical part comes in.
For the angle settings analogous to your example, you are correct. You might expect such results. But at other angle settings, the relationships don't hold up. Read about Bell, and then come back with some more questions!
Recognitions: Homework Help This comes up so often I feel I need to work out some example that does not lend itself to the classical idea... and make it explicit. Coleman's lecture looks like it could contain one ... I'll have to look more closely. Its tricky - to see what I mean: Herbert's description of twin light in his article Each photon's polarization depends on the other's polarization but that polarization in turn depends on the first. This mutual dependence renders each photon polarization completely indefinite subject to the system-wide rule that should a situation ever arise (say in a measurement) that one photon acquires a definite polarization, then the other photon--no matter how far away--must instantly take on that same value for its polarization. ... leads itself to the counter-argument that the two photons just decided which polarization to have while they were still close together - that way, completely classically, it wouldn't matter how far away they were, a measurement of one allows us to deduce the state of the other. This is about where most popular description leave off ... but it is the statistics in the resulting experiment that is important. The kicker is the "code mismatch" observation at the end - which nicely ties the experiment with Bell's inequality. Even so - I'm not sure of his math - after all, in his example #4 the detectors are aligned 60deg to each other - wouldn't you expect a 75% mismatch in that situation - classically? (By his argument - aligning the detectors at 90deg to each other would be 3x30deg for 3x25% = 75% mismatch or less - but reality give 100% mismatch - but we know that crossed detectors means they never agree!) I've misread it or there's a missing assumption. How do we close off this line of reasoning without going all hand-wavey? Coleman seems to have one of the keys - most description make some statement about faster than light communication (eg. Herbert - above) when this is actually the opposite of what is meant: FTL is only needed for the classical description (he says). But I only went through the lecture once so far: it has a high information density for this sort of lecture, he is a messy speaker (pretty much normal for an academic), the transparencies are hard to read, and the audio cuts out at annoying times. It is amazing the lecture is as understandable as it is. He also has randomly aligning detectors too, and relies on the observed vs predicted statistics - but makes the nature of the separation part clear. refs (repeated so you don't have to go back and hunt for the links) Coleman: "QM IYF" Herbert: "Simple Proof"
Quote by Simon Bridge But I only went through the lecture once so far: it has a high information density for this sort of lecture, he is a messy speaker (pretty much normal for an academic), the transparencies are hard to read, and the audio cuts out at annoying times. It is amazing the lecture is as understandable as it is.
You're right - the audiovisual quality isn't great. If it helps, there's an exposition of Coleman's thought experiment here.
Quote by mieubrisse Clearly, I am a newb at this. However: I was reading a bit on qubits and quantum entanglement and -- though I know that QM has no analog to classical mechanics -- the general concept seems to be: two particles interact and become quantum entangled, and then have non-local ties to each other. To me, this sounds kind of like "An immensely tiny billiard ball spinning one direction along an arbitrary axis comes into contact with another stationary ball in a deserted place. Nobody has any idea which direction the balls are spinning in. The two balls are then given to two separate researchers (the researchers have special ideal-environment cubes to contain the balls during transport so they have ZERO effect on the balls). When a researcher determines which axis and direction one of the two billiard balls is spinning along, the other researcher is guaranteed to find that the other billiard ball is spinning along the same axis in the opposite direction. But of COURSE the researcher was guaranteed to find that the other ball is tied to the first because we know that the first ball will change the state of the second in a known way (in this case, friction between the balls causes the originally-stationary one to move on the same axis as the first in an opposite direction). Seemingly, if we know how the balls came in contact and know that they were not affected in any way since the initial contact (the analog to quantum decoherence??), we can accurately predict the entire system based on one measurement. I'm sure it's not that simple.... what did I miss? Why is quantum entanglement so surprising and magical? Best, mieubrisse
Quantum entanglement is a lot like what you imagine it to be, I think. Underlying disturbances interact or are emitted via the same atomic process, then they exhibit relationships which are understandable via classical principles (eg., conservation of angular momentum). The correlations in optical Bell tests are quite expected vis the historically observed behavior of light wrt crossed polarizers. That is, there's nothing surprising or magical about the correlations. But you will not be able to fashion a local hidden variable model of quantum entanglement in line with Bell's formulation. What this means has been a matter of conjecture for over 50 years and is still unresolved in terms of the consensus of scientific opinion.
But one thing is certain, I think, and that is that quantum entanglement correlations are not surprising, and certainly not weird or magical.
Wow, thank you all for such quick and thorough responses! I was able to get time enough today to read the Herbert article (I'll get to the Coleman video tomorrow), and with all seriousness when the revelation about the angles came out, I got shivers. Actual physical shivers. Because that just SHOULDN'T happen! But it does! What in the whole wide world is going on with this reality? But that leads me to my next question, and one that Herbert leads the reader to ask. If I understood the article correctly, Bell's proof is untouchable. What he has proved is complete and 100% solid, and will never die. Quantum theory (a somewhat abstract term in the article's way of putting it) on the other hand ISN'T as absolute. Now, quantum theory predicts that we will never see non-local facts, only non-local theory due to our underlying non-local reality. So my question is thus: WHY is reality non-local when everything we see is local? Have I followed the article correctly? I'll undoubtedly need time to fully digest what I read, but I think I have the gist of it even though a complete understanding isn't there yet. On a related note, what does the community think of this article (the one that got me started on this quantum adventure): http://www.sfgate.com/cgi-bin/articl...BF1.DTL&ao=all
Quote by mieubrisse So my question is thus: WHY is reality non-local when everything we see is local?
Great question!
There are interpretations in which there is nothing non-local in the Bohmian sense of non-local. For example, the Time Symmetric class of interpretations allow a future context to be part of the equation. In these, locality is respected but classical causality is not.
But really, the answer to your question is: we don't know.
Another noob here and my original question was the same as mieubrisse's, so thanks to you guys for some excellent references. I ordered my calcite today! Non-local reality: would this not imply that all particles are entangled with each other - the difference being a matter of degree?
Quote by FieryJack Non-local reality: would this not imply that all particles are entangled with each other - the difference being a matter of degree?
Welcome to PhysicsForums, FieryJack!
I cannot answer that particular question, but perhaps our Demystifier will tackle it. He always has a good take on the non-local side of things.
Quote by mieubrisse So my question is thus: WHY is reality non-local when everything we see is local?
It isn't known that reality is nonlocal. Some physicists and philosophers infer that it is, and some don't.
Just because you can make an explicitly nonlocal theory doesn't mean that nature is nonlocal, and just because you can't make an explicitly local theory (a la Bell) doesn't mean that nature isn't exlusively local.
Quote by mieubrisse I'll undoubtedly need time to fully digest what I read, but I think I have the gist of it even though a complete understanding isn't there yet.
The gist of it is that interpreting arguments like Bell's and Herbert's is somewhat complicated and still a matter of debate, and the bottom line is that there's no physical evidence for the existence of nonlocal propagations in nature.
Quote by mieubrisse On a related note, what does the community think of this article (the one that got me started on this quantum adventure): http://www.sfgate.com/cgi-bin/articl...BF1.DTL&ao=all
Retrocausality ...
But keep in mind that I'm just a layperson also (albeit one who's studied enough of this stuff to have certain opinions about it), and my take on these considerations might change as I learn more. It's best to pay most attention to what people like DrC, Demystifier, et al. have to say about these issues, imo.
Recognitions: Homework Help Retrocausality is one of the fun results from attempts to reconcile quantum mechanics with relativity. The trick is coming up with an experiment ... afaik: it is purely theoretical right now. eg. http://sydney.edu.au/time/conference...al_models.html ... I think you'll need at least senior undergrad quantum mechanics to see what they are going on about... but the abstracts will give you useful starting points for your own searches. The news article looks like it is confusing things ... as usual. As a rule of thumb - treat all press pronouncements about science as highly suspect. eg, it is unhelpful to think of entanglement experiments as involving FTL communication.
Something that will help is to stop thinking like a materialist. Accept that there are more things going on than your main senses can detect. "Imagination is more important than knowledge." - Einstein
@ mieubrisse,
Quantum entanglement is mysterious in that, as DrC and others mentioned, nobody has, and QM doesn't provide, a precise qualitative understanding of it. But I don't think there's any reason to call it magical or weird. It more or less obviously has to do with relationships between the measureable motional properties of entangled disturbances, because that's what entanglement experiments are designed to produce.
Anyway, by way of clarification, the term quantum nonlocality doesn't mean the same thing as classical nonlocality. Classical nonlocality refers to either action at a distance, which is just a contradiction in terms, or faster than light propagations. Quantum nonlocality refers to (note the quote following the citations):
Experimental study of a subsystem in an entangled two-photon state
Dmitry V. Strekalov, Yoon-Ho Kim, and Yanhua Shih
Phys. Rev. A 60, 2685–2688 (1999)
http://arxiv.org/abs/quant-ph/9811060
Following the creation of the pair, the signal and idler may propagate to different directions and be separated by a considerably large distance. If it is a free propagation, the state will remain unchanged except for the gain of a phase, so that the precise momentum (energy) correlation of the pair still holds. The conservation laws guarantee the precise value of an observable with respect to the pair (not to the individual subsystems). It is in this sense, we say that the entangled two-photon state of SPDC is nonlocal. Quantum theory does allow a complete description of the precise correlation for the spatially separated subsystems, but no complete description for the physical reality of the subsystems defined by EPR. It is in this sense, we say that quantum mechanical description (theory) of the entangled system is nonlocal.
Recognitions: Science Advisor I do not know, what you mean by "qualitative" understanding. Perhaps you mean intuition, and of course, we don't have an intuition about entanglement, because in everyday life we don't have experience with such phenomena. The reason is that we are surrounded by macroscopic systems, with an overwhelming number of coupled microscopic degrees of freedom. Fortunately, our senses coarse-grain over all the many unimportant microscopic details, and the relevant macroscopic observables behave to a utmost high accuracy according to the laws of classical physics. That's why we have found the classical description of nature before we knew about the quantum nature behind. The only way we can understand quantum theory is through mathematics and the "mapping" between mathematical abstract structures (Hilbert-space vectors, statistical operators, and operators representing observables, etc.) to the real world (Born's probability rule for the interpretation of quantum mechanical states, spectral theory of operators). If it comes to non-locality one has to be a bit careful, what one means. The most comprehensive quantum theory we have today is relativistic quantum field theory, which is by construction a theory of local interactions. All causal actions are thus described by local interactions. "Local" means here that we describe systems of elementary particle with a set of field operators and a Hamiltonian that is derived from the spatial integral over polynomials of field operators at the same space-time point. The field operators also fulfil local transformation laws under proper orthochronous Lorentz transformations. Using such a description of a quantum system implies the socalled "linked-cluster principle", which states that experiments on systems that are very far away from each other are stochastically independent, i.e., the probabilities for different local subsystems at far distances factorize. On the other hand, quantum states can describe non-local correlations, and that's what's commonly discussed when it comes to entanglement. E.g., nowadays quantum opticians can easily produce entangled two-photon states. The important point is that these are real two-photon states, i.e., a Fock state with a precise photon number of two. The entanglement of the photons in such a state, usually produced with help of parametric down conversion by shooting a laser through a birefrigerent crystal, are entangled with respect to their polarization state. This we cannot describe with everyday language, and we have to go to the level of mathematics. The state, I have in mind is of the form $$|\psi \rangle=\frac{1}{\sqrt{2}} [|HV \rangle-|V H \rangle].$$ I've noted only the polarization part of the single-photon states, and I use the usual shorthand for tensor products $|H V \rangle:=|H \rangle \otimes |V \rangle$. In principle, for the following discussion, I'd have to also note the spatial part of these states, but that's cumbersome, and I hope I can make clear my point of view in this somewhat simplifying notation. The point is that this photon state is prepared at the very beginning using a local device, namely the crystal for the parametric down conversion. Then the two photons propagate without further interactions, and after some time the spatial probability distribution for measuring one of the photons is peaked at very far distant positions (note that I don't talk about positions of photons, which cannot be defined in a simple way, but that's not so much an issue here). 
So Alice and Bob put far distant photo detectors with polarization filters in the direction of these two spots of high probability to detect a photon. Each of them measures single photons. First of all we may ask about the probability that, say, Alice detects a photon with a certain polarization, if her polarization filter is directed in horizontal direction. This is described by the so-called reduced statistical operator for the one-photon subsystem. With the above given state, Alice thus describes the state of her one photon as $$\hat{\rho}_{A}=\text{Tr}_2 |\psi \rangle \langle \psi| =\frac{1}{2} \hat{1}=\frac{1}{2} (|H \rangle \langle H | + |V \rangle \langle V|).$$ This means she has a totally unknown polarization state. If she samples very many photons, using all angles of her polarization filter, she'll come to the conclusion that she simply has an unpolarized photon source. The same holds true for Bob. Neither can conclude that their photons come from a single source of entangled photon pairs with their local measurements alone.

Now they can also measure the time of their photon registration. Now suppose that Alice points her polarizer in H direction and Bob his in V direction. Then our entangled state is such that whenever Alice registers a photon (which is, of course, only in 50% of all produced photon pairs) also Bob must register his photon (assuming detectors with 100% efficiencies). That means there is a 100% correlation between Alice's and Bob's polarization states of their photons, although the polarization state of each of their photons is maximally random (unpolarized photons!).

Now, this experiment has been done such that the registration events were spacelike. According to relativity there thus cannot be any causal effect of Alice's photon detection on Bob's and vice versa. Thus, the non-local correlation cannot have been caused by Alice's or Bob's measurement, and the above description of the quantum theoretical analysis should make it very clear that such an assumption is not necessary to make at all. In this minimal interpretation, there is no necessity for a "collapse of state" or any other mysterious "spooky action at a distance". This is only due to the collapse interpretation of the Copenhagen and related schools of quantum theory. If one sticks to the minimal necessary assumptions, i.e., Born's probability rule as the interpretation of the quantum-theoretical states, no such assumptions have to be made, and there is no contradiction between quantum dynamics and Einstein causality, according to which no signals can propagate faster than the speed of light. Thus also the criticism by Einstein, Podolsky, and Rosen against quantum theory becomes immaterial.

According to the Minimal Interpretation, of course, nature is inherently probabilistic, i.e., indeterministic. An observable has only a determined value if the system has been prepared in an eigenstate of the corresponding operator describing this observable. Otherwise this observable doesn't have a determined value, and you can only give probabilities for the outcome of measurements of this observable. That's it. Whether you consider this a "complete description" of nature or not is your own belief. It has nothing to do with nature, which behaves as it behaves. Physics states as precisely as it can facts about phenomena in nature and tries to describe this behavior with mathematical theories. In this way you can make predictions about the probabilities of measurements, given a previous preparation of the system.
Experiments have then to use ensembles of independently prepared systems and get, within the statistics of the experiment, the probabilities for measuring the values of the observables of such prepared ensembles and compare it with the the quantum-theoretically predicted ones. So far, quantum theory has survived all tests to a very high precision. Also the non-classical features of entanglement with their non-local correlations that cannot be described by classical local hidden-variable theories (Bell's and related inequalities), have been confirmed with a huge significance. In this sense we must come to the conclusion that quantum theory is indeed a complete description of nature and from this we must conclude that nature is inherently non-deterministic.
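As a small numerical footnote to the state written above (a sketch assuming NumPy; it is not part of the quoted post): projecting $(|HV\rangle-|VH\rangle)/\sqrt{2}$ onto the polarizations transmitted by two linear polarizers reproduces both the 100% correlation for crossed H/V settings and the $\tfrac{1}{2}\sin^2(\alpha-\beta)$ angle dependence that enters Bell-type arguments.

```python
import numpy as np

# Sketch: coincidence statistics of the polarization state (|HV> - |VH>)/sqrt(2)
# behind two linear polarizers at angles alpha (Alice) and beta (Bob).
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2.0)

def transmitted(theta):
    """Polarization state passed by a linear polarizer at angle theta."""
    return np.cos(theta) * H + np.sin(theta) * V

def p_both_pass(alpha, beta):
    """Probability that both photons pass their polarizers."""
    amplitude = np.kron(transmitted(alpha), transmitted(beta)) @ psi
    return amplitude**2

print(p_both_pass(0.0, 0.0))       # 0.0   -> parallel settings never coincide
print(p_both_pass(0.0, np.pi/2))   # 0.5   -> Alice at H, Bob at V: 100% correlation
print(p_both_pass(0.0, np.pi/6))   # 0.125 -> (1/2) sin^2(30 deg); this sin^2 law is the
                                   #          angle dependence used in Bell-type arguments
```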
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9518103003501892, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/51553-3x3-matrix-eigenvalues.html
|
# Thread:
1. ## 3x3 Matrix - Eigenvectors
Okay, I'm trying to find the eigenvectors of a 3x3 matrix.
(I have found the eigenvalues)
I'm going to use an example so I can solve my own matrix myself;
In this example the eigenvectors are found, but how?
So my question is: How are the calculations done to obtain the values of x, y, and z (the eigenvectors)?
Eg. take example 1 with the eigenvalue of 0
Any help appreciated!
2. in order to find the eigenvector for the eigenvalue r , you first subtract r from the diagonal of the matrix and then find the kernel of that matrix. this is because the eigenvector needs to solve the equation Av=rv, or 0 = Av-rv=Av-rIv=(A-rI)v. the best way to do it is to find its reduced row echelon form and then solve the system.
for example with the 0 eigenvalue you get
$\begin{bmatrix} 3-0 & 0 & 0 \\ -4 & 6-0 & 2 \\ 16 & -15 & -5-0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 0 \\ -2 & 3 & 1 \\ 3.2 & -3 & -1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 0 \end{bmatrix}$ (almost reduced form...)
now, a vector is in the kernel of that matrix only if it has the form (0,-x,3x), so the base for the kernel is the vector (0,-1,3) which is the eigenvector. if the base is of higher dimension then you have more than 1 eigenvector for that specified eigenvalue
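Here is a quick way to check that recipe numerically (a minimal sketch assuming SymPy is available); it reproduces the (0, -1, 3) direction found above.

```python
import sympy as sp

# Sketch: for each eigenvalue r of A, the eigenvectors span the kernel of A - r*I.
A = sp.Matrix([[ 3,   0,  0],
               [-4,   6,  2],
               [16, -15, -5]])

for r in A.eigenvals():                     # eigenvalues of this example: 0, 1, 3
    kernel = (A - r*sp.eye(3)).nullspace()  # basis of the kernel of A - r*I
    print(r, [list(v) for v in kernel])     # r = 0 gives the direction (0, -1, 3)
```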
3. Thanks for your answer, but I didn't get a whole lot wiser.
What do you mean by "kernel". And why/how do you get to the form (0, -x, 3x) from the reduced matrix? :-P
Can it as well be solved by finding the equation = 0? (Or something similar?)
eg.
3Xn = 0 = Xn
-4Xn + 6Yn + 2Zn = 0 = Yn
16Xn - 15Yn - 5Zn = 0 = Zn
?
Thanks yet again!
Here is the matrix im trying to find the eigenvectors for;
I have found that the eigenvalues have to be (-0.8, -0.2, 1) (Is this correct?)
Could you show me how you would find the eigenvectors?
4. The kernel for matrix A is the subspace of vectors that solves the equation Av=0
in the example I gave, you have
$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$
or in other words you have the equations 1*x=0 from the first row, and 3y+z=0 from the second row, so x must be zero and the relation between y and z is -3y=z, so the vectors we're looking for are (0,y,-3y)
you can try to solve these equations without changing to the reduced echelon form but then you get a more difficult set of equations to solve. actually, the process of solving them is exactly the process of reducing the matrix to its echelon form, but I think that using the matrix notation is easier (specially, if the matrix is bigger than 3x3)
for example, with your matrix and the eigenvalue -0.8 you get
$\begin{bmatrix} 0-(-0.8) & 1.05 & 0.25 \\ 0.8 & 0-(-0.8) & 0 \\ 0 & 0.8 & 0-(-0.8) \end{bmatrix} = \begin{bmatrix} 0.8 & 1.05 & 0.25 \\ 0.8 & 0.8 & 0 \\ 0 & 0.8 & 0.8 \end{bmatrix}$ $\rightarrow \begin{bmatrix} 0 & 0 & 0 \\ 0.8 & 0.8 & 0 \\ 0 & 0.8 & 0.8 \end{bmatrix} \rightarrow \begin{bmatrix} 0 & 0 & 0 \\ 0.8 & 0 & -0.8 \\ 0 & 0.8 & 0.8 \end{bmatrix} \rightarrow \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 1 \end{bmatrix}$
now to find the kernel you solve
$\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$
and you get x-z=0 and y+z=0 so x=z and y=-z, so the vectors are of the form (z,-z,z) and the base is the vector (1,-1,1). and just to check that this is the correct answer
$\begin{bmatrix} 0 & 1.05 & 0.25 \\ 0.8 & 0 & 0 \\ 0 & 0.8 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1.05+0.25 \\ 0.8 \\ -0.8 \end{bmatrix} = -0.8 \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}$
5. I can see how reduced echelon form helps. Thanks!
I'll take a deeper look on how to form a matrix on reduced echelon, if you dont mind explaining that bit a tiny more? (Since I have your attention... )
Or point me in a direction where its some "easy reading".
This part: In particular how you managed to get row 1 to 0 0 0 (Im assuming you have used Gauss-elimination)
6. after subtracting the second row from the first you get (0, 0.25, 0.25)
now multiply it by 0.8/0.25 you get (0, 0.8, 0.8) which is the same as the third row, so you can delete one of them.
finding the row reduced form of a matrix is just the result of Gaussian elimination (if that's what you meant by that question).
7. I see! Surely nice of you to explain it to me so thoroughly! I appreciate it!
So as I can se you have
R1 - R2
R1 * 0.8/0.25 (Which becomes the same as R3
Delete R1 (R1 - R3? )
Then R2 - R3
Correct?
Thanks yet again! You are very helpful indeed!
8. I'm trying to find the eigenvectors for the eigenvalues -0.2 and 1.
I can't seem to find any method on how to reduce those to find an answer.
I understand if you are getting sick of me asking, but if you could give me a hint/explanation on how to solve them, I sure would appreciate it...
9. well, the technique is simple. first go to the left most column that is not all zeros (in this case its the first column). multiply all the rows that have nonzero scalar on that column, so they will all have the same scalar.
For example, with the eigenvalue -0.2 the first column has two nonzero scalars, 0.2 and 0.8, so you multiply the first row by 4 (or the second by 1/4, or any other way you like).
$\begin{bmatrix} 0.2 & 1.05 & 0.25 \\ 0.8 & 0.2 & 0 \\ 0 & 0.8 & 0.2 \end{bmatrix} \rightarrow \begin{bmatrix} 0.8 & 4.2 & 1 \\ 0.8 & 0.2 & 0 \\ 0 & 0.8 & 0.2 \end{bmatrix}$
now you choose a row with nonzero scalar on the column and switch it with the first row. because in this case you already have a nonzero scalar on the first row you don't need to do the switch. now subtract the first row from the other rows so they'll all have zero (except for the first row) on that column.
$\begin{bmatrix} 0.8 & 4.2 & 1 \\ 0 & -4 & -1 \\ 0 & 0.8 & 0.2 \end{bmatrix}$
now you do the same thing with the bottom left 2x2 sub matrix
$\rightarrow \begin{bmatrix} 0.8 & 4.2 & 1 \\ 0 & -4 & -1 \\ 0 & 4 & 1 \end{bmatrix} \rightarrow \begin{bmatrix} 0.8 & 4.2 & 1 \\ 0 & -4 & -1 \\ 0 & 0 & 0 \end{bmatrix}$
now use the leading coefficients (the left most nonzero numbers in each row) to zero all the numbers above them. in this case you only have the -4 and you want to zero the 4.2 above it.
$\rightarrow \begin{bmatrix} 4 & 21 & 5 \\ 0 & -4 & -1 \\ 0 & 0 & 0 \end{bmatrix}\rightarrow \begin{bmatrix} 4 & 0 & -0.25 \\ 0 & -4 & -1 \\ 0 & 0 & 0 \end{bmatrix}$
you can multiply by scalars so that the leading coefficient would be 1's but this is not necessary. finally you get the 4x-z/4=0 and -4y-z=0 which give you the eigenvector (1/16,-1/4,1) or (1,-4,16)
this works the same for matrices of different size. anyway, I advise you to find an introductory book (or a web page) for linear algebra to see how it works in the general case.
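If you want a quick cross-check of the worked example above, something like this (a sketch assuming NumPy) reproduces both the eigenvalues and the (1, -4, 16) direction:

```python
import numpy as np

# Sketch: numerical cross-check of the worked example.
A = np.array([[0.0, 1.05, 0.25],
              [0.8, 0.0,  0.0 ],
              [0.0, 0.8,  0.0 ]])
vals, vecs = np.linalg.eig(A)
print(np.round(np.sort(vals.real), 6))          # [-0.8, -0.2, 1.]

# The eigenvector belonging to -0.2, rescaled so its first entry is 1:
v = np.real(vecs[:, np.argmin(np.abs(vals + 0.2))])
print(np.round(v / v[0], 6))                    # [ 1., -4., 16.], as derived by hand
```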
10. Man! You saved my day!
I really appreciate the effort, thanks yet again! I understand a great deal now on how the process is done at least!
11. A quick question if you know how to explain it;
If all of the eigenvalues of a dynamic population matrix (linear equation) are less than 1, the population would die out, and if all of the eigenvalues are bigger than 1 it would grow. Why is this?
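Not a full answer, but a quick numerical illustration of the claim (a sketch assuming NumPy; the scaling factors are made up): repeatedly applying the matrix shrinks every starting vector when all eigenvalue magnitudes are below 1, and blows it up when the dominant one exceeds 1.

```python
import numpy as np

# Sketch: iterate x_{k+1} = A x_k and watch the total population.
def after_n_steps(A, x0, steps=50):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = A @ x
    return x

A = np.array([[0.0, 1.05, 0.25],   # eigenvalues -0.8, -0.2, 1 (dominant one equals 1)
              [0.8, 0.0,  0.0 ],
              [0.0, 0.8,  0.0 ]])
x0 = [10.0, 10.0, 10.0]

print(after_n_steps(A,       x0).sum())   # settles near a fixed level (eigenvalue 1)
print(after_n_steps(0.9 * A, x0).sum())   # all |eigenvalues| < 1: tends to 0, dies out
print(after_n_steps(1.2 * A, x0).sum())   # dominant |eigenvalue| > 1: grows without bound
```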
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9329162240028381, "perplexity_flag": "head"}
|
http://theoryclass.wordpress.com/2012/09/29/walrasian-with-indivisible-goods/?like=1&_wpnonce=84af3d0b4c
|
# Walrasian with Indivisible Goods
September 29, 2012 in Auctions, Mechanism design
A paper by Azevedo, Weyl and White in a recent issue of Theoretical Economics caught my eye. It establishes existence of Walrasian prices in an economy with indivisible goods, a continuum of agents and quasilinear utility. The proof uses Kakutani’s theorem. Here is an argument based on an observation about extreme points of linear programs. It shows that there is a way to scale up the number of agents and goods, so that in the scaled up economy a Walrasian equilibrium exists.
First, the observation. Consider ${\max \{cx: Ax = b, x \geq 0\}}$. The matrix ${A}$ and the RHS vector ${b}$ are all rational. Let ${x^*}$ be an optimal extreme point solution and ${\Delta}$ the absolute value of the determinant of the optimal basis. Then, ${\Delta x^*}$ must be an integral vector. Equivalently, if in our original linear program we scale the constraints by ${\Delta}$, the new linear program has an optimal solution that is integral.
Now, apply this to the existence question. Let ${N}$ be a set of agents, ${G}$ a set of distinct goods and ${u_i(S)}$the utility that agent ${i}$ enjoys from consuming the bundle ${S \subseteq G}$. Note, no restrictions on ${u}$ beyond non-negativity and quasi-linearity.
As utilities are quasi-linear we can formulate the problem of finding the efficient allocation of goods to agents as an integer program. Let ${x_i(S) = 1}$ if the bundle ${S}$ is assigned to agent ${i}$ and 0 otherwise. The program is
$\displaystyle \max \sum_{i \in N}\sum_{S \subseteq G}u_i(S)x_i(S)$
subject to
$\displaystyle \sum_{S \subseteq G}x_i(S) \leq 1\,\, \forall i \in N$
$\displaystyle \sum_{i \in N}\sum_{S \ni g} x_i(S) \leq 1 \forall g \in G$
$\displaystyle x_i(S) \in \{0,1\}\,\, \forall i \in N, S \subseteq G$
If we drop the integer constraints we have an LP. Let ${x^*}$ be an optimal solution to that LP. Complementary slackness allows us to interpret the dual variables associated with the second constraint as Walrasian prices for the goods. Also, any bundle ${S}$ such that ${x_i^*(S) > 0}$ must be in agent ${i}$‘s demand correspondence.
Let ${\Delta}$ be the absolute value of the determinant of the optimal basis. We can write ${x_i^*(S) = \frac{z_i^*(S)}{\Delta}}$ for all ${i \in N}$ and ${S \subseteq G}$ where ${z_i^*(S)}$ is an integer. Now construct an enlarged economy as follows.
Scale up the supply of each ${g \in G}$ by a factor of ${\Delta}$. Replace each agent ${i \in N}$ by ${N_i = \sum_{S \subseteq G}z_i^*(S)}$ clones. It should be clear now where this is going, but let’s dot the i’s. To formulate the problem of finding an efficient allocation in this enlarged economy let ${y_{ij}(S) = 1}$ if bundle ${S}$ is allocated to the ${j^{th}}$ clone of agent ${i}$ and zero otherwise. Let ${u_{ij}(S)}$ be the utility function of the ${j^{th}}$ clone of agent ${i}$. Here is the corresponding integer program.
$\displaystyle \max \sum_{i \in N}\sum_{j \in N_i}\sum_{S \subseteq G}u_{ij}(S)y_{ij}(S)$
subject to
$\displaystyle \sum_{S \subseteq G}y_{ij}(S) \leq 1\,\, \forall i \in N, j \in N_i$
$\displaystyle \sum_{i \in N}\sum_{j \in N_i}\sum_{S \ni g} y_{ij}(S) \leq \Delta \forall g \in G$
$\displaystyle y_{ij}(S) \in \{0,1\}\,\, \forall i \in N, j \in N_i, S \subseteq G$
It’s easy to see that a feasible solution is to give, for each ${i}$ and ${S}$ such that ${z_i^*(S) > 0}$, each of the ${z_i^*(S)}$ clones in ${N_i}$ the bundle ${S}$. The optimal dual variables from the relaxation of the first program complement this solution, which verifies optimality. Thus, Walrasian prices that support the efficient allocation in the augmented economy exist.
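To see the complementary-slackness reading of the dual variables in action, here is a small sketch with a made-up two-agent, two-good economy (assuming a recent SciPy, whose HiGHS backend exposes the constraint duals via `res.ineqlin.marginals`); the duals on the goods constraints are the candidate Walrasian prices.

```python
import numpy as np
from scipy.optimize import linprog

# Toy economy: agents {1, 2}, goods {a, b}; bundles ordered as
# [{a}, {b}, {a,b}] for agent 1 and then the same three for agent 2.
u = np.array([4.0, 3.0, 9.0,    # agent 1's utilities (made up)
              5.0, 1.0, 6.0])   # agent 2's utilities (made up)

A_ub = np.array([
    [1, 1, 1, 0, 0, 0],   # agent 1 gets at most one bundle
    [0, 0, 0, 1, 1, 1],   # agent 2 gets at most one bundle
    [1, 0, 1, 1, 0, 1],   # unit supply of good a
    [0, 1, 1, 0, 1, 1],   # unit supply of good b
])
b_ub = np.ones(4)

res = linprog(-u, A_ub=A_ub, b_ub=b_ub, method="highs")   # LP relaxation
print(res.x)                           # the efficient allocation (integral here)
print(-res.fun)                        # maximized total utility
prices = -res.ineqlin.marginals[2:]    # duals of the goods constraints
print(prices)                          # candidate Walrasian prices for goods a and b
```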
## 2 comments
October 11, 2012 at 4:35 pm
Eduardo
Hi Rakesh,
beautiful argument, it's nice to see things from a new angle, and this linear programming approach may be useful to have in the quiver for the future. We had a result on finite economies that was cut from the paper, but we never considered this linear programming route.
Thank you, Eduardo. I suspect the same approach can be used for matching problems with couples or non-substitutable preferences, for example. The difficulty will be to show the existence of a feasible fractional solution to the assignment problem with blocking constraints.
Rakesh
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 50, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8910019993782043, "perplexity_flag": "head"}
|
http://scicomp.stackexchange.com/questions/2710/a-sufficient-number-of-distances-to-recover-relative-positions-of-n-points
|
# A sufficient number of distances to recover relative positions of n points
On several places I found different claims on a sufficient number of distances to recover relative positions of $n$ points in $d$-dimensional space.
For instance, work from
http://www.dimitris-agrafiotis.com/Papers/jcc20078.pdf
(page 5, right column) claims that only $\left(\frac{n-d/2-1}{d+1}\right)$ distances are sufficient.
In another paper I found that $n(d+1) -(d+1)(d+2)/2$ distances are sufficient. Plugging in $n=3$ and $d=2$ yields different results (what about the result with the first paper?) What is the correct answer? A reference containing a more elaborate account would be appreciated.
-
The paper you specify seems to only state the second of your two claims about the number of distances. Where did you find the first claim $\left(\frac{n-d/2-1}{d+1}\right)$ ? – Paul♦ Jul 5 '12 at 20:27
I guess my statement was ambiguous. Another paper makes the different claim. Note that the first claim from your comment is in the paper, at p.5, right column. – usero Jul 5 '12 at 21:32
Could you provide a link to the other paper? – Paul♦ Jul 6 '12 at 0:04
– usero Jul 6 '12 at 8:11
## 2 Answers
For $d>1$, you need to determine all distances of a simplex consisting of $d+1$ points, and then the distances from each other point to these points. ($d$ points and distances already reduce the possibilities to a finite number, but then one needs additional disambiguation information).
This makes a total of $d(d+1)/2+(n-d-1)(d+1) = n(d+1)-(d+2)(d+1)/2$ distances.
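A quick symbolic sanity check (a sketch assuming SymPy) confirms that this count equals the $n(d+1)-(d+1)(d+2)/2$ expression from the question, and that the formula as printed in the first reference gives a fractional value (1/3 for $n=3$, $d=2$):

```python
import sympy as sp

# Sketch: compare the count above with the closed form and with the
# expression as printed in the first reference.
n, d = sp.symbols('n d', positive=True)

construction = d*(d + 1)/2 + (n - d - 1)*(d + 1)   # simplex, then d+1 distances per extra point
closed_form  = n*(d + 1) - (d + 2)*(d + 1)/2
as_printed   = (n - d/2 - 1)/(d + 1)               # first reference, as typed

print(sp.simplify(construction - closed_form))     # 0: the two counts agree
print(closed_form.subs({n: 3, d: 2}))              # 3 distances for a triangle in the plane
print(as_printed.subs({n: 3, d: 2}))               # 1/3, the fractional value discussed below
```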
-
Thanks. You'd be surprised on how many places I've found the incorrect statement from the first reference cited in my question. – usero Jul 6 '12 at 13:55
The formula from the paper is a factor of $(d+1)^2$ times smaller than the second formula you mention. Since the paper is only citing the equation as a well known fact I am going to have to argue that he simply mis-typed it, replacing the multiplication by $d+1$ with a division by $d+1$. Also, the fact that it says $1/3$ of a distance is required for the 3 point, 2-D case doesn't make any sense at all.
-
– usero Jul 6 '12 at 8:32
I see that you already got an answer so I don't know how helpful this comment will be. I was simply making an argument based on the fact that they were different, so only one could be correct. I was ruling out the first equation based on the fact that it gave fractional answers to a problem in which only integer values made any sense at all. I didn't have any reference for supporting the other equation as correct. – Godric Seer Jul 6 '12 at 16:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9469262957572937, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/52739/setting-up-a-local-coordinate-system-in-space-time-using-only-a-single-clock-and?answertab=votes
|
# Setting up a local-coordinate system in space-time using only a single clock and light beams
I have a question to ask about the operationalist view of space-time. I am a mathematician who happens to be interested in physics, so if anyone thinks that my question is a silly or vague one, please feel free to close it.
Many physics textbooks that discuss Special Relativity mention that one way of setting up a coordinate system in flat Minkowski space is to imagine that there is an infinite grid of meter rules that pervades the universe, with synchronized clocks located at each node of the grid. In order to obtain more precise measurements of space-time events, one simply divides the grid into more nodes. I feel that this presents problems, because what does it mean to say that ‘all the clocks are synchronized’? Any act of moving a clock from a starting point to its designated node is sure to involve some acceleration, and this is going to upset any synchronization attempt. Also, such a description is not satisfactory because it is impossible to deal with an infinite number of meter rules and clocks.
Things are made worse in curved space-time because of the further effect of gravitational time-dilation. Hence, I believe that the only logically consistent way to perform measurements of space-time events is to use a single atomic clock carried by a single observer and to use the world-lines of photons emitted by a light source carried by the observer to ‘carve out’ a local-coordinate system. The exact description of how this can be done is beyond me, so I would like to gather some useful information from anyone in the community. Thank you very much!
-
Hi, and welcome to Physics Stack Exchange! Are you asking specifically whether it is possible to set up a global coordinate system using a single clock, or would you be interested in a description of how clocks at separate locations can be synchronized? – David Zaslavsky♦ Feb 1 at 5:43
Answers to both questions would be greatly appreciated, but I am more interested in the first question. Also, as I am interested in curved space-time, I suppose that we can only talk about local-coordinate systems. I know that many philosophical questions can be answered using light beams, such as what it means for a curve to be ‘straight’, but I would like to see how one can use light beams and a single clock to set up, in a logically consistent way, a local coordinate system in order to perform measurements, all in a manner that conforms to the operationalist view of space-time. – Haskell Curry Feb 1 at 5:55
## 3 Answers
The radar method is a general approach that works for non-inertial observers and curved spacetime.
Two co-ordinates of an event are given by your clock time at which the event intersects your future and past light cone, called retarded time and advanced time, ($\tau^+,\tau^-$, resp.). Or use a diagonal combination thereof: $\tau^\star = \frac{\tau^+ + \tau^-}{2}$, called radar time, and $\rho = c\frac{\tau^+ - \tau^-}{2}$, called radar distance.
This diagonal combination has the property that, in the case of an unaccelerated observer in flat spacetime, $\tau^\star$ and $\rho$ are equal to the usual measures (the "infinite grid of rulers and clocks" business).
Two other co-ordinates can be given by the incoming angles ($\Omega^+$) of the null geodesic from the event to you. This is the reception or retarded co-ordinate system. The dual system, the trasmission or advanced co-ordinate system, would use the outgoing angles ($\Omega^-$) of the null geodesic from you to the event.
For a non-rotating unaccelerated observer in flat spacetime, the two pairs of angles are equal to each other and to the usual polar co-ordinate angles.
In flat spacetime this will assign a unique co-ordinate to every reachable event, that is, every event in the observer's causal diamond. In curved spacetime it will assign at least one co-ordinate to every reachable event, however there may be duplicates. One can restrict to the boundary of the causal past and future, as described in answer I linked to above. Then, under certain causality assumptions, every reachable event gets a unique $\tau^\star$ and $\rho$. The surfaces of constant $\tau^\star$ and $\rho$ are then 2-D globally spacelike surfaces, but not always topologically $\mathcal{S}^2$, rather, they will be some subquotient of $\mathcal{S}^2$. That is, for a given $\tau^\star$ and $\rho$ some angle pairs $\Omega^+$ will not be valid (corresponding to parts of the light cone that have "fallen behind"), and some events on the boundary of validity will have more than one angle pair.
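For an unaccelerated observer in flat spacetime the claim above is easy to verify numerically (a minimal sketch in 1+1 dimensions with $c=1$; the sample events are arbitrary): the light-cone intersection times of an event $(t,x)$ are $t\mp|x|$, so radar time and radar distance come out as $t$ and $|x|$.

```python
import numpy as np

# Sketch: radar coordinates of an event (t, x) for an inertial observer
# sitting at x = 0 in 1+1 dimensional flat spacetime, with c = 1.
def radar_coords(t, x):
    tau_emit    = t - abs(x)   # observer's clock time when the outgoing light ray leaves
    tau_receive = t + abs(x)   # observer's clock time when the reflected ray returns
    radar_time     = 0.5 * (tau_receive + tau_emit)
    radar_distance = 0.5 * (tau_receive - tau_emit)
    return radar_time, radar_distance

for event in [(2.0, 1.5), (5.0, -3.0), (0.0, 0.25)]:
    print(event, radar_coords(*event))   # recovers (t, |x|) exactly
```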
-
Thank you, Retarded Potential! The link that you’ve provided contains another link to just the paper that I’m looking for. – Haskell Curry Feb 3 at 1:30
(Note: Please see addendum at bottom for paper references relevant to the question.)
The question of how to synchronized clocks was first addressed by Einstein himself in his famous special relativity paper, and it occupies much of the discussion in that paper prior to him getting into electromagnetics.
What Einstein suggested was this: Both observers have identically constructed clocks, and from that they assume that the clocks keep equal time. But how do they make sure they have the same time, not just the same tick rate?
Einstein suggested what would now be called a handshake: A sends a time stamp to B via a light beam, and B immediately sends his own time stamp back to A. When A receives that "handshake" back from B, she knows both how long it took to get the information and what time B had (by symmetry) halfway through that time period.
That's enough information for A to update her clock to that of B, plus half of the delay in the response to take care of how long it took B's time stamp to arrive.
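A minimal sketch of that handshake (the distance, offset and timestamps below are made-up numbers, and the out-and-back travel times are assumed equal):

```python
# Sketch of the handshake: clocks tick at the same rate and B's clock is
# offset from A's by an amount A does not know.
c = 1.0
distance = 4.0          # light-seconds between A and B
true_offset = 2.7       # B's clock minus A's clock (unknown to A)
one_way = distance / c

t_send_A  = 10.0                               # A's clock when she sends her timestamp
t_stamp_B = t_send_A + one_way + true_offset   # B's clock when he stamps and echoes it
t_recv_A  = t_send_A + 2.0 * one_way           # A's clock when the echo comes back

# A's inference: B's stamp was taken halfway through the round trip.
inferred_delay  = (t_recv_A - t_send_A) / 2.0
inferred_offset = t_stamp_B - (t_send_A + inferred_delay)
print(inferred_delay, inferred_offset)         # recovers the one-way delay and B's offset
```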
There are other more subtle issues such as whether B might be moving relative to A, but those too can be taken care of using only light beams by ensuring there is no frequency shift (Doppler effect) when viewing the returning beam.
Not only is this procedure pretty straightforward, it's useful. For example, you would not be reading this message if the electric company you use didn't use the same kind of synchronization procedure to ensure that distant parts of an electrical network are all very precisely in sync with each other. If they did not do that, the generators would get out of phase and start destroying each other. Meaningful time synchronization thus is not some abstract concept, but something real and very much needed for anything networked. The important point for getting this type of synchronization in time is that the various parts must not be moving relative to each other. That's where special relativity kicks in, not in synchronization itself.
There is another technique that you touched on, which is this: If you move a clock very slowly from one location to another, the relativistic effects can be made very, very small -- so small in fact that you really can always make them small enough to ignore them. It's not nearly as practical as Einstein synchronization, but it comes in handy at times.
Now, as for dividing up the rulers, yes, you cannot get infinitely small. But you can actually do very well at it simply by creating networks of very small clocks. Many computer networks are actually pretty decent examples of that. However, my favorite image is that of an entire region of space filled with tiny, tiny clocks, say a millimeter across each, with each clock constantly synchronizing with its nearest neighbors.
As Einstein himself pointed out, synchronization is a transitive operation, so having all those clocks talking to each other in that fashion eventually leads to an entire region of space filled with meaningfully synchronized particle-like clocks. Add a bit of data recording, and you can also use that network of particle-like clocks to keep that region of space both highly synchronized and capable of collecting data about both itself and other objects.
Even more interesting, if another object goes sailing through such a region at high speed, it can in principle extract a very exact time from each of the particle-like clocks with which it collides. The time is exact because the contact is "proximate" or touching, which eliminates the usual spacetime ambiguities when exchanging data between frames. The fast-moving object thus can "read" exactly what time the network of particle-like clocks thinks it is at each point of its journey, and vice-versa.
Surprisingly, and not very obviously, even this concept is in Einstein's paper after a fashion, for this reason: He invokes the idea of a rod moving through another frame and "reading" the time of that frame at each end of the rod. As it turns out, the only way you can do that is by creating a synchronized network of very tiny clocks that can "touch" each end of the rod as it passes. Any scheme that uses light (versus proximity) to pass the same information immediately becomes ambiguous, since different frames would interpret the message passing as different "mixes" of space and time. Proximate contact removes that, and enables Einstein's original moving-rod thought experiment to be realizable in real space with real equipment.
I have never seen a name for the idea of asymptotically shrinking, particle-like, data-collecting, fully networked and fully synchronized nano-clocks occupying a volume of space, but it's a straightforward and definitely doable extension of things that modern communications systems do all the time. I like to call this idea of nano-sized clocks a synchronized network of observing particles, for which the acronym is, well... snoop.
So, if you want to do Einstein's moving rod experiment in real life, you will pretty much have to create some form of snoop first. There are actually much easier ways to do it than building real nano-clocks -- cloud chambers with carefully timed imaging certainly come to mind -- but the concept of a snoop helps analyze how you would go about doing many somewhat obscure-sounding special relativity experiments without getting lost in ambiguous data.
Please notice that I've intentionally gone in a somewhat different direction from what I think your suggestion is, which is to use a single clock with photon world lines extending out from it. The single central clock works great for defining a single cell of space and time, but I think you would find that you would end up with just one large-granularity pendulum clock by the time you work out all the details of the idea. You can do a lot with one well-defined cell of that type, but the fact that even Einstein's very first special relativity thought experiment (the moving rod) requires multiple synchronized clock cells is a pretty good argument that synchronization and multiple cells (multiple identical clocks) will also be needed to collect enough data to test even special relativity in detail, let alone general relativity.
If on the other hand you accept the idea of a "snoop limit" at which you can approximate synchronized clocks at point in a region of space to whatever level of detail is sufficient for your particular experiment, I think you'll end up with a much more satisfying (and experimentally unambiguous) result. In terms of those light pendulums I mentioned:
... you network a large number of them together by having them touch on their side corners. If the central clocks can also collect data, that's actually the more precise way to define a snoop.
Addendum 2013-02-02: Several relevant online references
As a result of the excellent information added via @RetardedPotential's answer, I now know the "standard" name for rho cells within the curved-space community: causal diamonds. The beautiful causal diamond diagram in the link I just gave is from this 2009 blog entry by theoretical physicists Sabine Hossenfelder and Stefan Scherer. From the variety I'm seeing in searching for examples, there does not seem to be any single highly standardized way of labeling the backward and forward light geodesics that form the sides of causal diamonds. I labeled them ${\pi_\phi}^-$ and ${\pi_\phi}^+$ for the backwards ($-$) and forwards ($+$) photon coordinates ($\pi$) of frame $\phi$, but I may relabel them if something more standard exists.
More importantly, the idea of using cells small enough to ensure that space is locally flat -- the same smoothness assumption that underlies calculus -- has also been explored! I call such networks snoops, but the 2007 G.W. Gibbons and S.N. Solodukhin paper The geometry of small causal diamonds (Physics Letters B, Vol. 649, Issue 4, 7 June 2007, p.317–324) clearly recognizes this same idea and explores it from the perspective of analyzing curved space.
And wow! Here's what looks to be a very relevant to fine-grained causal diamond analysis from just a couple of months ago:
The Discrete Geometry of a Small Causal Diamond, by Mriganko Roy, Debdeep Sinha, and Sumati Surya. Submitted to arXiv on on 4 Dec 2012
So, Haskell Curry, you may want to check out the above references as starting points if you are interested in fine-grained mapping of curved spaces.
(My own SR interests in snoops remain a bit tangential to curved spaces... um, did I just make a pun?)
2013-02-03
Back in 2007, @lubosmotl summarized a talk by Raphael Bousso in which Bousso mentions causal diamonds in the context of his holographic universe idea. The phrase is clearly older, e.g. see this 1999 mention of causal diamonds by George Svetlichny. Most likely it was a common catch-phrase in the 1990s for the straightforward relativity concept of intersecting forward and backward light cones.
My rho cells necessarily have the same geometry as causal diamonds, but that's about as far as the resemblance goes. By definition a rho cell has an unaccelerated rest-mass clock as its spine, and uses that clock to define $\tau_\rho$, $t_\rho$, and $l_\rho$, the last being isotropic only within clock frame. The clock could be as simple as a muon, and the reflectors could be replaced by photon exchanges between clocks. However, without a rest-mass spine, I don't readily see how a causal diamond can have a cause to be causal about, so to speak.
Terry, thank you for a really informative explanation. I still have some questions left. Your description of clock synchronization fits neatly into the operational view of space-time, as it can be carried out physically (I like the idea of a time-stamp). However, this makes sense only in flat space-time. In curved space-time, however, how does one carry out measurements in a valid manner? Any region of curved space-time, no matter how small, is ultimately still curved. Can a huge collection of clocks in a volume of curved space-time yield direct information about the curvature itself? – Haskell Curry Feb 1 at 9:13
For example, can a concentrated collection of nano-clocks allow us to compute the metric tensor of curved space-time? Also, it seems that clocks are doing most of the work, and the role of light beams is simply to relay information between clocks. This sounds absolutely fine to me, as long as it gives us a meaningful way to make measurements, even in regions of space-time with high curvature, such as near a black hole. – Haskell Curry Feb 1 at 9:17
Haskell Curry, sorry for the delays, was on travel. The volume-of-clock methods use the same smoothness assumptions as calculus: It is assumed that for any given situation you can make the clock lattice sufficiently small for the local curvature to become vanishingly small. Curvature then becomes an explicit metric of how you perform the synchronization process, with non-trivially curved space requiring e.g. that clocks in gravity wells really will be slower than others. So the point was just that: snoops allow detailed analysis of geodesics, at least in principle. – Terry Bollinger Feb 3 at 2:00
Also, I should note: Einstein's original synchronization method of course assumes flat space, since he wrote it a decade before his general relativity paper! But it's more general than that, if you make the clocks explicit and require that the network as a whole settle into a single non-oscillatory solution. A rather interesting problem, that. Also: At a quick read, Retarded Potential (I still chuckle every time I read that name) has very nicely captured the curvature issues for a large cells -- causal diamonds? is that what they are called? Cool! -- while I was going for small granularity. – Terry Bollinger Feb 3 at 2:10
The idea of the coordinate system in Minkowski space is a generalization of, for example, a Cartesian coordinate system, in which at any point you have a set of numbers that give your position within that coordinate system. The clocks are added to be able to visualize the coordinate system in 3 dimensions, but you can also think of time as being a perpendicular axis to the other 3.
For example imagine in 1+1 dimensions, that's 1 space dimension and 1 time dimension. You extend an infinite one dimensional rod and place clocks on it, with a spacing that can be as small as you want. Then if you are on the rod at some space point and time, you get a number from the rod which represents your space coordinate, and a number from the clock which is your time coordinate. The clocks are synchronized in the sense that if you send a beam of light from a point $(x_0,t_0)$, when the beam reaches a space position of an observer at $x_0+\Delta x$, the clock beside the observer will measure a time $t_0+\Delta x/c$.
Another way of looking at this construction is embedding it in a 2D Euclidean space, in which some straight infinite line represents the space coordinate and a perpendicular straight line represents the time coordinate. In this representation, it is clearer what we mean by having synchronized clocks: it is just that the path along which light travels in this 2D representation has a slope of 45° (assuming $c=1$). Dividing the grid into more nodes in the clock-rod picture corresponds, in this 2D picture, to adding more cuts on the perpendicular axes. This shouldn't cause more trouble than constructing, for example, the real line (which for some pure mathematicians has some logical flaws).
On the second note, about curved space-time: the word curved means that in the clock-rod picture you cannot extend your rods forever, because of the intrinsic curvature of space-time. What you can do then is extend them only to a certain extent, which you consider as a "local" coordinate system. Thus this coordinate system is only valid in a neighborhood of some space-time point you chose. This construction is realized naturally on manifolds, as in those mathematical structures you have a notion of a local coordinate system associated to every element of your topological space.
http://en.wikibooks.org/wiki/Haskell/YAHT/Type_basics
# Haskell/YAHT/Type basics
Haskell uses a system of static type checking. This means that every expression in Haskell is assigned a type. For instance `'a'` would have type `Char`, for "character." Then, if you have a function which expects an argument of a certain type and you give it the wrong type, a compile-time error will be generated (that is, you will not be able to compile the program). This vastly reduces the number of bugs that can creep into your program.
Furthermore, Haskell uses a system of type inference. This means that you don't even need to specify the type of expressions. For comparison, in C, when you define a variable, you need to specify its type (for instance, int, char, etc.). In Haskell, you needn't do this -- the type will be inferred from context.
Note
If you want, you certainly are allowed to explicitly specify the type of an expression; this often helps debugging. In fact, it is sometimes considered good style to explicitly specify the types of outermost functions.
Both Hugs and GHCi allow you to apply type inference to an expression to find its type. This is done by using the `:t` command. For instance, start up your favorite shell and try the following:
Example:
```Prelude> :t 'c'
'c' :: Char
```
This tells us that the expression `'c'` has type `Char` (the double colon `::` is used throughout Haskell to specify types).
## Simple Types
There are a slew of built-in types, including `Int` (for integers, both positive and negative), `Double` (for floating point numbers), `Char` (for single characters), `String` (for strings), and others. We have already seen an expression of type `Char`; let's examine one of type `String`:
Example:
```Prelude> :t "Hello"
"Hello" :: String
```
You can also enter more complicated expressions, for instance, a test of equality:
Example:
```Prelude> :t 'a' == 'b'
'a' == 'b' :: Bool
```
You should note that even though this expression is false, it still has a type, namely the type `Bool`.
Note
`Bool` is short for Boolean (pronounced "boo-lee-uhn") and has two possible values: `True` and `False`.
You can observe the process of type checking and type inference by trying to get the shell to give you the type of an ill-typed expression. For instance, the equality operator requires that the type of both of its arguments are of the same type. We can see that `Char` and `String` are of different types by trying to compare a character to a string:
Example:
```Prelude> :t 'a' == "a"
ERROR - Type error in application
*** Expression : 'a' == "a"
*** Term : 'a'
*** Type : Char
*** Does not match : [Char]
```
The first line of the error (the line containing "`Expression`") tells us the expression in which the type error occurred. The second line tells us which part of this expression is ill-typed. The third line tells us the inferred type of this term and the fourth line tells us what it needs to have matched. In this case, it says that type `Char` doesn't match the type `[Char]` (a list of characters -- a string in Haskell is represented as a list of characters).
As mentioned before, you can explicitly specify the type of an expression using the :: operator. For instance, instead of `"a"` in the previous example, we could have written `("a"::String)`. In this case, this has no effect since there's only one possible interpretation of `"a"`. However, consider the case of numbers. You can try:
Example:
```Prelude> :t 5 :: Int
5 :: Int
Prelude> :t 5 :: Double
5 :: Double
```
Here, we can see that the number 5 can be instantiated as either an `Int` or a `Double`. What if we don't specify the type?
Example:
```Prelude> :t 5
5 :: Num a => a
```
Not quite what you expected? What this means, briefly, is that if some type `a` is an instance of the `Num` class, then the type of the expression `5` can be of type `a`. If that made no sense, that's okay for now. In Section Classes we talk extensively about type classes (which is what this is). The way to read this, though, is to say "a being an instance of Num implies a."
Exercises
Figure out for yourself, and then verify the types of the following expressions, if they have a type. Also note if the expression is a type error:
1. `'h':'e':'l':'l':'o':[]`
2. `[5,'a']`
3. `(5,'a')`
4. `(5::Int) + 10`
5. `(5::Int) + (10::Double)`
## Polymorphic Types
Haskell employs a polymorphic type system. This essentially means that you can have type variables, which we have alluded to before. For instance, note that a function like `tail` doesn't care what the elements in the list are:
Example:
```Prelude> tail [5,6,7,8,9]
[6,7,8,9]
Prelude> tail "hello"
"ello"
Prelude> tail ["the","man","is","happy"]
["man","is","happy"]
```
This is possible because `tail` has a polymorphic type: $[\alpha] \rightarrow [\alpha]$. That means it can take as an argument any list and return a value which is a list of the same type.
The same analysis can explain the type of `fst`:
Example:
```Prelude> :t fst
fst :: (a,b) -> a
```
Here, GHCi has made explicit the universal quantification of the type values. That is, it is saying that for all types `a` and `b`, `fst` is a function from `(a,b)` to `a`.
Exercises
Figure out for yourself, and then verify the types of the following expressions, if they have a type. Also note if the expression is a type error:
1. `snd`
2. `head`
3. `null`
4. `head . tail`
5. `head . head`
## Type Classes
We saw last section some strange typing having to do with the number five. Before we delve too deeply into the subject of type classes, let's take a step back and see some of the motivation.
### Motivation
In many languages (C++, Java, etc.), there exists a system of overloading. That is, a function can be written that takes parameters of differing types. For instance, the canonical example is the equality function. If we want to compare two integers, we should use an integer comparison; if we want to compare two floating point numbers, we should use a floating point comparison; if we want to compare two characters, we should use a character comparison. In general, if we want to compare two things which have type $\alpha$, we want to use an $\alpha$-compare. We call $\alpha$ a type variable since it is a variable whose value is a type.
Note
In general, type variables will be written using the first part of the Greek alphabet: $\alpha, \beta, \gamma, \delta, ...$.
Unfortunately, this presents some problems for static type checking, since the type checker doesn't know which types a certain operation (for instance, equality testing) will be defined for. There are as many solutions to this problem as there are statically typed languages (perhaps a slight exaggeration, but not so much so). The one chosen in Haskell is the system of type classes. Whether this is the "correct" solution or the "best" solution of course depends on your application domain. It is, however, the one we have, so you should learn to love it.
### Equality Testing
Returning to the issue of equality testing, what we want to be able to do is define a function `==` (the equality operator) which takes two parameters, each of the same type (call it $\alpha$), and returns a boolean. But this function may not be defined for every type; just for some. Thus, we associate this function `==` with a type class, which we call Eq. If a specific type $\alpha$ belongs to a certain type class (that is, all functions associated with that class are implemented for $\alpha$), we say that $\alpha$ is an instance of that class. For instance, Int is an instance of Eq since equality is defined over integers.
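Jumping ahead a little (both `data` declarations and `instance` declarations are covered properly later in the book), here is a minimal sketch of what it looks like to make a type an instance of Eq; the type `Direction` is made up purely for illustration:

```data Direction = North | South | East | West

-- Make Direction an instance of Eq by saying what (==) means for it.
instance Eq Direction where
    North == North = True
    South == South = True
    East  == East  = True
    West  == West  = True
    _     == _     = False
```

With this instance in scope, an expression like `North == South` evaluates to `False`, just as `==` does for the built-in instances.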
### The Num Class
In addition to overloading operators like `==`, Haskell has overloaded numeric constants (i.e., 1, 2, 3, etc.). This was done so that when you type in a number like 5, the compiler is free to say 5 is an integer or floating point number as it sees fit. It defines the Num class to contain all of these numbers and certain minimal operations over them (addition, for instance). The basic numeric types (Int, Double) are defined to be instances of Num.
We have only skimmed the surface of the power (and complexity) of type classes here. There will be much more discussion of them in Section Classes, but we need some more background before we can get there. Before we do that, we need to talk a little more about functions.
### The Show Class
Another of the standard classes in Haskell is the `Show` class. Types which are members of the `Show` class have functions which convert values of that type to a string. This function is called `show`. For instance `show` applied to the integer 5 is the string "5"; `show` applied to the character 'a' is the three-character string "'a'" (the first and last characters are apostrophes). `show` applied to a string simply puts quotes around it. You can test this in the interpreter:
Example:
```Prelude> show 5
"5"
Prelude> show 'a'
"'a'"
Prelude> show "Hello World"
"\"Hello World\""
```
Note
The reason the backslashes appear in the last line is because the interior quotes are "escaped", meaning that they are part of the string, not part of the interpreter printing the value. The actual string doesn't contain the backslashes.
Some types are not instances of Show; functions for example. If you try to show a function (like `sqrt`), the compiler or interpreter will give you some cryptic error message, complaining about a missing instance declaration or an illegal class constraint.
## Function Types
In Haskell, functions are first class values, meaning that just as `1` or `'c'` are values which have a type, so are functions like `square` or `++`. Before we talk too much about functions, we need to make a short diversion into very theoretical computer science (don't worry, it won't be too painful) and talk about the lambda calculus.
### Lambda Calculus
The name "Lambda Calculus", while perhaps daunting, describes a fairly simple system for representing functions. The way we would write a squaring function in lambda calculus is: $\lambda x . x*x$, which means that we take a value, which we will call $x$ (that's what $\lambda x .$ means) and then multiply it by itself. The $\lambda$ is called "lambda abstraction." In general, lambdas can only have one parameter. If we want to write a function that takes two numbers, doubles the first and adds it to the second, we would write: $\lambda x \lambda y . 2*x+y$. When we apply a value to a lambda expression, we remove the outermost $\lambda$ and replace every occurrence of the lambda variable with the value. For instance, if we evaluate $(\lambda x . x*x) 5$, we remove the lambda and replace every occurrence of $x$ with $5$, yielding $(5*5)$ which is $25$.
In fact, Haskell is largely based on an extension of the lambda calculus, and these two expressions can be written directly in Haskell (we simply replace the $\lambda$ with a backslash(\) and the $.$ with (->); also we don't need to repeat the lambdas; and, of course, in Haskell we have to give them names if we're defining functions):
```square = \x -> x*x
f = \x y -> 2*x + y
```
You can also evaluate lambda expressions in your interactive shell:
Example:
```Prelude> (\x -> x*x) 5
25
Prelude> (\x y -> 2*x + y) 5 4
14
```
We can see in the second example that we need to give the lambda abstraction two arguments, one corresponding to `x` and the other corresponding to `y`.
### Higher-Order Types
"Higher-Order Types" is the name given to those types whose elements are functions. The type given to functions mimicks the lambda calculus representation of the functions. For instance, the definition of square gives $\lambda x . x*x$. To get the type of this, we first ask ourselves what the type of `x` is. Say we decide `x` is an `Int`. Then, we notice that the function `square` takes an `Int` and produces a value `x*x`. We know that when we multiply two `Int`s together, we get another `Int`, so the type of the results of `square` is also an `Int`. Thus, we say the type of `square` is `Int -> Int`.
We can apply a similar analysis to the function `f` above (`\x y -> 2*x + y`). The value of this function (remember, functions are values) is something which takes a value `x` and produces a new value, which takes a value `y` and produces `2*x+y`. For instance, if we take `f` and apply only one number to it, we get $(\lambda x \lambda y . 2x+y) 5$ which becomes our new value $\lambda y . 2(5)+y$, where all occurrences of $x$ have been replaced with the applied value, $5$.
So we know that `f` takes an `Int` and produces a value of some type, of which we're not sure. But we know the type of this value is the type of $\lambda y . 2(5)+y$. We apply the above analysis and find out that this expression has type `Int -> Int`. Thus, `f` takes an `Int` and produces something which has type `Int -> Int`. So the type of `f` is `Int -> (Int -> Int)`.
Note
The parentheses are not necessary; in function types, if you have $\alpha \rightarrow \beta \rightarrow \gamma$ it is assumed that $\beta \rightarrow \gamma$ is grouped. If you want the other way, with $\alpha \rightarrow \beta$ grouped, you need to put parentheses around them.
This isn't entirely accurate. As we saw before, numbers like `5` aren't really of type `Int`, they are of type `Num a => a`.
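To see this currying in action, we can bind the result of applying `f` (the earlier definition `f = \x y -> 2*x + y`) to a single argument and then apply the resulting function separately. This is only a sketch; the names `g`, `fourteen` and `twenty` are made up for illustration:

```g = f 5           -- partially apply f; g is the function \y -> 2*5 + y
fourteen = g 4    -- 2*5 + 4  == 14
twenty   = g 10   -- 2*5 + 10 == 20
```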
We can easily find the type of Prelude functions using ":t" as before:
Example:
```Prelude> :t head
head :: [a] -> a
Prelude> :t tail
tail :: [a] -> [a]
Prelude> :t null
null :: [a] -> Bool
Prelude> :t fst
fst :: (a,b) -> a
Prelude> :t snd
snd :: (a,b) -> b
```
We read this as: "head" is a function that takes a list containing values of type "a" and gives back a value of type "a"; "tail" takes a list of "a"s and gives back another list of "a"s; "null" takes a list of "a"s and gives back a boolean; "fst" takes a pair of type "(a,b)" and gives back something of type "a", and so on.
Note
Saying that the type of `fst` is `(a,b) -> a` does not necessarily mean that it simply gives back the first element; it only means that it gives back something with the same type as the first element.
We can also get the type of operators like `+` and `*` and `++` and `:`; however, in order to do this we need to put them in parentheses. In general, any function which is used infix (meaning in the middle of two arguments rather than before them) must be put in parentheses when getting its type.
Example:
```Prelude> :t (+)
(+) :: Num a => a -> a -> a
Prelude> :t (*)
(*) :: Num a => a -> a -> a
Prelude> :t (++)
(++) :: [a] -> [a] -> [a]
Prelude> :t (:)
(:) :: a -> [a] -> [a]
```
The types of `+` and `*` are the same, and mean that `+` is a function which, for some type `a` which is an instance of `Num`, takes a value of type `a` and produces another function which takes a value of type `a` and produces a value of type `a`. In short hand, we might say that `+` takes two values of type `a` and produces a value of type `a`, but this is less precise.
The type of `++` means, in shorthand, that, for a given type `a`, `++` takes two lists of `a`s and produces a new list of `a`s. Similarly, `:` takes a value of type `a` and another value of type `[a]` (list of `a`s) and produces another value of type `[a]`.
### That Pesky IO Type
You might be tempted to try getting the type of a function like `putStrLn`:
Example:
```Prelude> :t putStrLn
putStrLn :: String -> IO ()
Prelude> :t readFile
readFile :: FilePath -> IO String
```
What in the world is that `IO` thing? It's basically Haskell's way of representing that these functions aren't really functions. They're called "IO Actions" (hence the `IO`). The immediate question which arises is: okay, so how do I get rid of the `IO`. In brief, you can't directly remove it. That is, you cannot write a function with type `IO String -> String`. The only way to use things with an `IO` type is to combine them with other functions using (for example), the do notation.
For example, if you're reading a file using `readFile`, presumably you want to do something with the string it returns (otherwise, why would you read the file in the first place). Suppose you have a function `f` which takes a `String` and produces an `Int`. You can't directly apply `f` to the result of `readFile` since the input to `f` is `String` and the output of `readFile` is `IO String` and these don't match. However, you can combine these as:
```main = do
s <- readFile "somefile"
let i = f s
putStrLn (show i)
```
Here, we use the arrow convention to "get the string out of the IO action" and then apply `f` to the string (called `s`). We then, for example, print `i` to the screen. Note that the let here doesn't have a corresponding in. This is because we are in a do block. Also note that we don't write `i <- f s` because `f` is just a normal function, not an IO action. Note: `putStrLn (show i)` can be simplified to `print i` if you want.
### Explicit Type Declarations
It is sometimes desirable to explicitly specify the types of some elements or functions, for one (or more) of the following reasons:
• Clarity
• Speed
• Debugging
Some people consider it good software engineering to specify the types of all top-level functions. If nothing else, if you're trying to compile a program and you get type errors that you cannot understand, if you declare the types of some of your functions explicitly, it may be easier to figure out where the error is.
Type declarations are written separately from the function definition. For instance, we could explicitly type the function `square` as in the following code (an explicitly declared type is called a type signature):
```square :: Num a => a -> a
square x = x*x
```
These two lines do not even have to be next to each other. However, the type that you specify must match the inferred type of the function definition (or be more specific). In this definition, you could apply `square` to anything which is an instance of `Num`: `Int`, `Double`, etc. However, if you knew a priori that `square` were only going to be applied to values of type `Int`, you could refine its type as:
```square :: Int -> Int
square x = x*x
```
Now, you could only apply `square` to values of type `Int`. Moreover, with this definition, the compiler doesn't have to generate the general code specified in the original function definition since it knows you will only apply `square` to `Int`s, so it may be able to generate faster code.
If you have extensions turned on ("-98" in Hugs or "-fglasgow-exts" in GHC(i)), you can also add a type signature to expressions and not just functions. For instance, you could write:
```square (x :: Int) = x*x
```
which tells the compiler that `x` is an `Int`; however, it leaves the compiler alone to infer the type of the rest of the expression. What is the type of `square` in this example? Make your guess then you can check it either by entering this code into a file and loading it into your interpreter or by asking for the type of the expression:
Example:
```Prelude> :t (\(x :: Int) -> x*x)
```
since this lambda abstraction is equivalent to the above function declaration.
### Functional Arguments
In the section on Lists we saw examples of functions taking other functions as arguments. For instance, `map` took a function to apply to each element in a list, `filter` took a function that told it which elements of a list to keep, and `foldl` took a function which told it how to combine list elements together. As with every other function in Haskell, these are well-typed.
Let's first think about the `map` function. Its job is to take a list of elements and produce another list of elements. These two lists don't necessarily have to have the same types of elements. So `map` will take a value of type `[a]` and produce a value of type `[b]`. How does it do this? It uses the user-supplied function to convert. In order to convert an `a` to a `b`, this function must have type `a -> b`. Thus, the type of `map` is `(a -> b) -> [a] -> [b]`, which you can verify in your interpreter with ":t".
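For instance, asking the interpreter should confirm this (the exact formatting of the output may differ slightly between Hugs and GHCi):

Example:
```Prelude> :t map
map :: (a -> b) -> [a] -> [b]
```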
We can apply the same sort of analysis to `filter` and discern that it has type `(a -> Bool) -> [a] -> [a]`. As we presented the `foldr` function, you might be tempted to give it type `(a -> a -> a) -> a -> [a] -> a`, meaning that you take a function which combines two `a`s into another one, an initial value of type `a`, and a list of `a`s, to produce a final value of type `a`. In fact, `foldr` has a more general type: `(a -> b -> b) -> b -> [a] -> b`. So it takes a function which turns an `a` and a `b` into a `b`, an initial value of type `b` and a list of `a`s. It produces a `b`.
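As a quick illustration of this more general type, here is a fold whose accumulated value is a `String` even though the list elements are numbers (a sketch you can try in the interpreter):

Example:
```Prelude> foldr (\x acc -> show x ++ "," ++ acc) "" [1,2,3]
"1,2,3,"
```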
To see this, we can write a function `count` which counts how many members of a list satisfy a given constraint. You can of course use `filter` and `length` to do this, but we will also do it using `foldr`:
```module Count
where
import Char
count1 p l = length (filter p l)
count2 p l = foldr (\x c -> if p x then c+1 else c) 0 l
```
The functioning of `count1` is simple. It filters the list `l` according to the predicate `p`, then takes the length of the resulting list. On the other hand, `count2` uses the initial value (which is an integer) to hold the current count. For each element in the list `l`, it applies the lambda expression shown. This takes two arguments, `c` which holds the current count and `x` which is the current element in the list that we're looking at. It checks to see if `p` holds about `x`. If it does, it returns the new value `c+1`, increasing the count of elements for which the predicate holds. If it doesn't, it just returns `c`, the old count.
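Either version can be used in exactly the same way; for instance, counting the even numbers in a list (a sketch of a session, assuming the `Count` module above has been loaded):

Example:
```Count> count1 even [1,2,3,4,5]
2
Count> count2 even [1,2,3,4,5]
2
```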
Exercises
Figure out for yourself, and then verify the types of the following expressions, if they have a type. Also note if the expression is a type error:
1. `\x -> [x]`
2. `\x y z -> (x,y:z:[])`
3. `\x -> x + 5`
4. `\x -> "hello, world"`
5. `\x -> x 'a'`
6. `\x -> x x`
7. `\x -> x + x`
## Data Types
Tuples and lists are nice, common ways to define structured values. However, it is often desirable to be able to define our own data structures and functions over them. So-called "datatypes" are defined using the data keyword.
### Pairs
For instance, a definition of a pair of elements (much like the standard, built-in pair type) could be:
```data Pair a b = Pair a b
```
Let's walk through this code one word at a time. First we say "data", meaning that we're defining a datatype. We then give the name of the datatype, in this case "Pair". The "a" and "b" that follow "Pair" are type parameters, just like the "a" in the type of the function `map`. So up until this point, we've said that we're going to define a data structure called "Pair" which is parameterized over two types, `a` and `b`. Note that you can't write `Pair a a = Pair a a` (the type variables in the head must be distinct) — in this case write `Pair a = Pair a a`.
After the equals sign, we specify the constructors of this data type. In this case, there is a single constructor, "Pair" (this doesn't necessarily have to have the same name as the type, but in the case of a single constructor it seems to make more sense). After this pair, we again write "a b", which means that in order to construct a `Pair` we need two values, one of type `a` and one of type `b`.
This definition introduces a function, `Pair :: a -> b -> Pair a b` that you can use to construct `Pair`s. If you enter this code into a file and load it, you can see how these are constructed:
Example:
```Datatypes> :t Pair
Pair :: a -> b -> Pair a b
Datatypes> :t Pair 'a'
Pair 'a' :: a -> Pair Char a
Datatypes> :t Pair 'a' "Hello"
Pair 'a' "Hello" :: Pair Char [Char]
```
So, by giving `Pair` two values, we have completely constructed a value of type `Pair`. We can write functions involving pairs as:
```pairFst (Pair x y) = x
pairSnd (Pair x y) = y
```
In this, we've used the pattern matching capabilities of Haskell to look at a pair and extract values from it. In the definition of `pairFst` we take an entire `Pair` and extract the first element; similarly for `pairSnd`. We'll discuss pattern matching in much more detail in the section on Pattern matching.
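Once defined, these are used just like `fst` and `snd`. A sketch of a session, assuming the `Pair` datatype and the two functions above have been loaded:

Example:
```Datatypes> pairFst (Pair 'a' "Hello")
'a'
Datatypes> pairSnd (Pair 'a' "Hello")
"Hello"
```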
Exercises
1. Write a data type declaration for `Triple`, a type which contains three elements, all of different types. Write functions `tripleFst`, `tripleSnd` and `tripleThr` to extract respectively the first, second and third.
2. Write a datatype `Quadruple` which holds four elements. However, the first two elements must be the same type and the last two elements must be the same type. Write a function `firstTwo` which returns a list containing the first two elements and a function `lastTwo` which returns a list containing the last two elements. Write type signatures for these functions.
### Multiple Constructors
We have seen an example of the data type with one constructor: `Pair`. It is also possible (and extremely useful) to have multiple constructors.
Let us consider a simple function which searches through a list for an element satisfying a given predicate and then returns the first element satisfying that predicate. What should we do if none of the elements in the list satisfy the predicate? A few options are listed below:
• Raise an error
• Loop indefinitely
• Write a check function
• Return the first element
• $...$
Raising an error is certainly an option (see the section on Exceptions to see how to do this). The problem is that it is difficult/impossible to recover from such errors. Looping indefinitely is possible, but not terribly useful. We could write a sister function which checks to see if the list contains an element satisfying a predicate and leave it up to the user to always use this function first. We could return the first element, but this is very ad-hoc and difficult to remember; and what if the list itself is empty?
The fact that there is no basic option to solve this problem simply means we have to think about it a little more. What are we trying to do? We're trying to write a function which might succeed and might not. Furthermore, if it does succeed, it returns some sort of value. Let's write a datatype:
```data Maybe a = Nothing
| Just a
```
This is one of the most common datatypes in Haskell and is defined in the Prelude.
Here, we're saying that there are two possible ways to create something of type `Maybe a`. The first is to use the nullary constructor `Nothing`, which takes no arguments (this is what "nullary" means). The second is to use the constructor `Just`, together with a value of type `a`.
The `Maybe` type is useful in all sorts of circumstances. For instance, suppose we want to write a function (like `head`) which returns the first element of a given list. However, we don't want the program to die if the given list is empty. We can accomplish this with a function like:
```firstElement :: [a] -> Maybe a
firstElement [] = Nothing
firstElement (x:xs) = Just x
```
The type signature here says that `firstElement` takes a list of `a`s and produces something with type `Maybe a`. In the first line of code, we match against the empty list `[]`. If this match succeeds (i.e., the list is, in fact, empty), we return `Nothing`. If the first match fails, then we try to match against `x:xs` which must succeed. In this case, we return `Just x`.
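For example (a sketch of a session, assuming `firstElement` has been loaded; the type annotation on the empty list is only there so the interpreter knows how to print the result):

Example:
```Datatypes> firstElement ([] :: [Int])
Nothing
Datatypes> firstElement [1,2,3]
Just 1
```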
For our `findElement` function, we represent failure by the value `Nothing` and success with value `a` by `Just a`. Our function might look something like this:
```findElement :: (a -> Bool) -> [a] -> Maybe a
findElement p [] = Nothing
findElement p (x:xs) =
if p x then Just x
else findElement p xs
```
The first line here gives the type of the function. In this case, our first argument is the predicate (and takes an element of type `a` and returns `True` if and only if the element satisfies the predicate); the second argument is a list of `a`s. Our return value is maybe an `a`. That is, if the function succeeds, we will return `Just a` and if not, `Nothing`.
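Trying it out (another sketch of a session, assuming `findElement` has been loaded):

Example:
```Datatypes> findElement even [1,3,5,6,7]
Just 6
Datatypes> findElement even [1,3,5]
Nothing
```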
Another useful datatype is the `Either` type, defined as:
```data Either a b = Left a
| Right b
```
This is a way of expressing alternation. That is, something of type `Either a b` is either a value of type `a` (using the `Left` constructor) or a value of type `b` (using the `Right` constructor).
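As a small sketch of how `Either` can be used, here is a division function that reports failure with a message on the `Left` and success on the `Right`; the name `safeDiv` is made up for illustration:

```safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)
```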
Exercises
1. Write a datatype `Tuple` which can hold one, two, three or four elements, depending on the constructor (that is, there should be four constructors, one for each number of arguments). Also provide functions `tuple1` through `tuple4` which take a tuple and return `Just` the value in that position, or `Nothing` if the number is invalid (i.e., you ask for the `tuple4` on a tuple holding only two elements).
2. Based on our definition of `Tuple` from the previous exercise, write a function which takes a `Tuple` and returns either the value (if it's a one-tuple), a Haskell-pair (i.e., `('a',5)`) if it's a two-tuple, a Haskell-triple if it's a three-tuple or a Haskell-quadruple if it's a four-tuple. You will need to use the `Either` type to represent this.
### Recursive Datatypes
We can also define recursive datatypes. These are datatypes whose definitions are based on themselves. For instance, we could define a list datatype as:
```data List a = Nil
| Cons a (List a)
```
In this definition, we have defined what it means to be of type `List a`. We say that a list is either empty (`Nil`) or it's the `Cons` of a value of type `a` and another value of type `List a`. This is almost identical to the actual definition of the list datatype in Haskell, except that Haskell uses special syntax where `[]` corresponds to `Nil` and `:` corresponds to `Cons`. We can write our own `length` function for our lists as:
```listLength Nil = 0
listLength (Cons x xs) = 1 + listLength xs
```
This function is slightly more complicated and uses recursion to calculate the length of a `List`. The first line says that the length of an empty list (a `Nil`) is $0$. This much is obvious. The second line tells us how to calculate the length of a non-empty list. A non-empty list must be of the form `Cons x xs` for some values of `x` and `xs`. We know that `xs` is another list and we know that whatever the length of the current list is, it's the length of its tail (the value of `xs`) plus one (to account for `x`). Thus, we apply the `listLength` function to `xs` and add one to the result. This gives us the length of the entire list.
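For example, the list `1:2:3:[]` is written `Cons 1 (Cons 2 (Cons 3 Nil))` using our constructors, and (a sketch of a session):

Example:
```Datatypes> listLength (Cons 1 (Cons 2 (Cons 3 Nil)))
3
```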
Exercises
Write functions `listHead`, `listTail`, `listFoldl` and `listFoldr` which are equivalent to their Prelude twins, but
function on our `List` datatype. Don't worry about exceptional conditions on the first two.
### Binary Trees
We can define datatypes that are more complicated than lists. Suppose we want to define a structure that looks like a binary tree. A binary tree is a structure that has a single root node; each node in the tree is either a "leaf" or a "branch." If it's a leaf, it holds a value; if it's a branch, it holds a value and a left child and a right child. Each of these children is another node. We can define such a data type as:
```data BinaryTree a
= Leaf a
| Branch (BinaryTree a) a (BinaryTree a)
```
In this datatype declaration we say that a `BinaryTree` of `a`s is either a `Leaf` which holds an `a`, or it's a branch with a left child (which is a `BinaryTree` of `a`s), a node value (which is an `a`), and a right child (which is also a `BinaryTree` of `a`s). It is simple to modify the `listLength` function so that instead of calculating the length of lists, it calculates the number of nodes in a `BinaryTree`. Can you figure out how? We can call this function `treeSize`. The solution is given below:
```treeSize (Leaf x) = 1
treeSize (Branch left x right) =
1 + treeSize left + treeSize right
```
Here, we say that the size of a leaf is $1$ and the size of a branch is the size of its left child, plus the size of its right child, plus one.
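For instance, a tree consisting of one branch node with two leaves has three nodes in total (a sketch of a session):

Example:
```Datatypes> treeSize (Branch (Leaf 'a') 'b' (Leaf 'c'))
3
```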
Exercises
1. Write a function `elements` which returns the elements in a `BinaryTree` in a bottom-up, left-to-right manner (i.e., the first element returned is the left-most leaf, followed by its parent's value, followed by the other child's value, and so on). The result type should be a normal Haskell list.
2. Write a foldr function `treeFoldr` for `BinaryTree`s and rewrite `elements` in terms of it (call the new one `elements2`).
3. Write a foldl function `treeFoldl` for `BinaryTree`s and rewrite `elements` in terms of it (call the new one `elements3`).
### Enumerated Sets
You can also use datatypes to define things like enumerated sets, for instance, a type which can only have a constrained number of values. We could define a color type:
```data Color
= Red
| Orange
| Yellow
| Green
| Blue
| Purple
| White
| Black
```
This would be sufficient to deal with simple colors. Suppose we were using this to write a drawing program, we could then write a function to convert between a `Color` and a RGB triple. We can write a `colorToRGB` function, as:
```colorToRGB Red = (255,0,0)
colorToRGB Orange = (255,128,0)
colorToRGB Yellow = (255,255,0)
colorToRGB Green = (0,255,0)
colorToRGB Blue = (0,0,255)
colorToRGB Purple = (255,0,255)
colorToRGB White = (255,255,255)
colorToRGB Black = (0,0,0)
```
If we wanted also to allow the user to define his own custom colors, we could change the `Color` datatype to something like:
```data Color
= Red
| Orange
| Yellow
| Green
| Blue
| Purple
| White
| Black
| Custom Int Int Int -- R G B components
```
And add a final definition for `colorToRGB`:
```colorToRGB (Custom r g b) = (r,g,b)
```
### The Unit type
A final useful datatype defined in Haskell (from the Prelude) is the unit type. Its definition is:
```data () = ()
```
The only true value of this type is `()`. This is essentially the same as a void type in a language like C or Java and will be useful when we talk about IO in the chapter IO.
We'll dwell much more on data types in the sections on Pattern matching and Datatypes.
## Continuation Passing Style
There is a style of functional programming called "Continuation Passing Style" (also simply "CPS"). The idea behind CPS is to pass around as a function argument what to do next. I will handwave through an example which is too complex to write out at this point and then give a real example, though one with less motivation.
Consider the problem of parsing. The idea here is that we have a sequence of tokens (words, letters, whatever) and we want to ascribe structure to them. The task of converting a string of Java tokens to a Java abstract syntax tree is an example of a parsing problem. So is the task of parsing English sentences (though the latter is extremely difficult, even for native English speakers parsing sentences from the real world).
Suppose we're parsing something like C or Java where functions take arguments in parentheses. But for simplicity, assume they are not separated by commas. That is, a function call looks like myFunction(x y z). We want to convert this into something like a pair containing first the string "myFunction" and then a list with three string elements: "x", "y" and "z".
The general approach to solving this would be to write a function which parses function calls like this one. First it would look for an identifier ("myFunction"), then for an open parenthesis, then for zero or more identifiers, then for a close parenthesis.
One way to do this would be to have two functions:
```parseFunction ::
[Token] -> Maybe ((String, [String]), [Token])
parseIdentifier ::
[Token] -> Maybe (String, [Token])
```
The idea would be that if we call `parseFunction`, if it doesn't return `Nothing`, then it returns the pair described earlier, together with whatever is left after parsing the function. Similarly, `parseIdentifier` will parse one of the arguments. If it returns `Nothing`, then it's not an argument; if it returns `Just` something, then that something is the argument paired with the rest of the tokens.
What the `parseFunction` function would do is to parse an identifier. If this fails, it fails itself. Otherwise, it continues and tries to parse an open parenthesis. If that succeeds, it repeatedly calls `parseIdentifier` until that fails. It then tries to parse a close parenthesis. If that succeeds, then it's done. Otherwise, it fails.
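To make this concrete, here is a minimal sketch of what `parseIdentifier` might look like under two simplifying assumptions of mine: that a token is just a `String`, and that any token other than a parenthesis counts as an identifier. A real parser would of course be more careful:

```type Token = String

-- A sketch only: treat any token that is not a parenthesis as an identifier.
parseIdentifier :: [Token] -> Maybe (String, [Token])
parseIdentifier (t:ts)
    | t /= "(" && t /= ")" = Just (t, ts)
parseIdentifier _          = Nothing
```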
There is, however, another way to think about this problem. The advantage to this solution is that functions no longer need to return the remaining tokens (which tends to get ugly). Instead of the above, we write functions:
```parseFunction ::
[Token] -> ((String, [String]) -> [Token] -> a) ->
([Token] -> a) -> a
parseIdentifier ::
[Token] -> (String -> [Token] -> a) ->
([Token] -> a) -> a
```
Let's consider `parseIdentifier`. This takes three arguments: a list of tokens and two continuations. The first continuation is what to do when you succeed. The second continuation is what to do if you fail. What `parseIdentifier` does, then, is try to read an identifier. If this succeeds, it calls the first continuation with that identifier and the remaining tokens as arguments. If reading the identifier fails, it calls the second continuation with all the tokens.
Now consider `parseFunction`. Recall that it wants to read an identifier, an open parenthesis, zero or more identifiers and a close parenthesis. Thus, the first thing it does is call `parseIdentifier`. The first argument it gives is the list of tokens. The first continuation (which is what `parseIdentifier` should do if it succeeds) is in turn a function which will look for an open parenthesis, zero or more arguments and a close parenthesis. The second continuation (the failure argument) is just going to be the failure function given to `parseFunction`.
Now, we simply need to define this function which looks for an open parenthesis, zero or more arguments and a close parenthesis. This is easy. We write a function which looks for the open parenthesis and then calls `parseIdentifier` with a success continuation that looks for more identifiers, and a "failure" continuation which looks for the close parenthesis (note that this failure doesn't really mean failure -- it just means there are no more arguments left).
I realize this discussion has been quite abstract. I would willingly give code for all this parsing, but it is perhaps too complex at the moment. Instead, consider the problem of folding across a list. We can write a CPS fold as:
```cfold' f z [] = z
cfold' f z (x:xs) = f x z (\y -> cfold' f y xs)
```
In this code, `cfold'` takes a function `f` which takes three arguments, slightly different from the standard folds. The first is the current list element, `x`, the second is the accumulated element, `z`, and the third is the continuation: basically, what to do next.
We can write a wrapper function for `cfold'` that will make it behave more like a normal fold:
```cfold f z l = cfold' (\x t g -> f x (g t)) z l
```
We can test that this function behaves as we desire:
Example:
```CPS> cfold (+) 0 [1,2,3,4]
10
CPS> cfold (:) [] [1,2,3]
[1,2,3]
```
One thing that's nice about formulating `cfold` in terms of the helper function `cfold'` is that we can use the helper function directly. This enables us to change, for instance, the evaluation order of the fold very easily:
Example:
```CPS> cfold' (\x t g -> (x : g t)) [] [1..10]
[1,2,3,4,5,6,7,8,9,10]
CPS> cfold' (\x t g -> g (x : t)) [] [1..10]
[10,9,8,7,6,5,4,3,2,1]
```
The only difference between these calls to `cfold'` is whether we call the continuation before or after constructing the list. As it turns out, this slight difference changes the behavior from being like `foldr` to being like `foldl`. We can evaluate both of these calls as follows (let `f` be the folding function):
``` cfold' (\x t g -> (x : g t)) [] [1,2,3]
==> cfold' f [] [1,2,3]
==> f 1 [] (\y -> cfold' f y [2,3])
==> 1 : ((\y -> cfold' f y [2,3]) [])
==> 1 : (cfold' f [] [2,3])
==> 1 : (f 2 [] (\y -> cfold' f y [3]))
==> 1 : (2 : ((\y -> cfold' f y [3]) []))
==> 1 : (2 : (cfold' f [] [3]))
==> 1 : (2 : (f 3 [] (\y -> cfold' f y [])))
==> 1 : (2 : (3 : (cfold' f [] [])))
==> 1 : (2 : (3 : []))
==> [1,2,3]
cfold' (\x t g -> g (x:t)) [] [1,2,3]
==> cfold' f [] [1,2,3]
==> (\x t g -> g (x:t)) 1 [] (\y -> cfold' f y [2,3])
==> (\g -> g [1]) (\y -> cfold' f y [2,3])
==> (\y -> cfold' f y [2,3]) [1]
==> cfold' f [1] [2,3]
==> (\x t g -> g (x:t)) 2 [1] (\y -> cfold' f y [3])
==> cfold' f (2:[1]) [3]
==> cfold' f [2,1] [3]
==> (\x t g -> g (x:t)) 3 [2,1] (\y -> cfold' f y [])
==> cfold' f (3:[2,1]) []
==> [3,2,1]
```
In general, continuation passing style is a very powerful abstraction, though it can be difficult to master. We will revisit the topic more thoroughly later in the book.
Exercises
1. Test whether the CPS-style fold mimics either of `foldr` and `foldl`. If not, where is the difference?
2. Write `map` and `filter` using continuation passing style.
http://physics.stackexchange.com/questions/15805/convert-acceleration-as-a-function-of-position-to-acceleration-as-a-function-of
# Convert acceleration as a function of position to acceleration as a function of time?
Suppose I have acceleration defined as a function of position, "a(x)". How do I convert it into a function of time, "a(t)"? Please give an example for the case a(x) = x/s².
It's not a homework but OK. – WindScar Oct 17 '11 at 0:50
The homework tag doesn't just apply to actual homework problems here, it applies to any questions of an educational nature. That being said, I think this might be general enough that it doesn't actually need the homework tag - especially if you were to remove the request for an example. – David Zaslavsky♦ Oct 17 '11 at 1:20
## 3 Answers
The way to do this is to express position as a function of time, then for any time you can calculate the corresponding position and thus the acceleration.
$$a(t) = a(x(t))$$
So basically, you need to find the function $x(t)$. To do so, you need to solve the differential equation
$$a(x) = \ddot{x}$$
where $\ddot{x}$ denotes the second derivative of $x$ with respect to time.
In general, the solution to this equation is quite complicated. But there are certain special cases that are easily solvable. One that comes up very often in the solution of gravitationally bound systems (such as orbital motion) is $$a(x) = -C x^{-2}$$ ($C$ is a positive constant) which is discussed in this other answer I posted a while back. Another common case - in fact, probably even more common - is the simple harmonic oscillator, $$a(x) = -C x$$ which has the solution $$x(t) = A\cos(\omega t) + B\sin(\omega t)$$ where $\omega = \sqrt{C}$. You can also do the trivial case of constant acceleration, $$a(x) = -C$$ which corresponds to such things as straight-line projectile motion.
Rewrite the given relation as a differential equation: $a(x) = \frac{x}{s^2} = \ddot{x}$,
which can be rearranged as:
$\ddot{x} - \frac{x}{s^2} = 0$
Interestingly, we need not go much further, since the second derivative of $x$ must equal $x$ itself up to the positive constant $1/s^2$. The general solution is a linear combination of hyperbolic functions,
$x(t) = A\sinh(t/s) + B\cosh(t/s)$, and therefore $a(t) = \frac{x(t)}{s^2} = \frac{A}{s^2}\sinh(t/s) + \frac{B}{s^2}\cosh(t/s)$,
with $A$ and $B$ fixed by the initial conditions.
If we treat $t$ as the independent variable, I think this is the only correct solution; note that this is not a SHO.
this is the explicit solution... i guess David Zaslavsky didn't want to do your hw for you and gave you the answers to other similar problems – Timtam Oct 17 '11 at 4:26
Concerning the general case, there is a way to use potential and kinetic energy (every acceleration corresponds to a force on a point mass, and every force in 1 dimension has a potential energy V such that $F=\frac{-dV}{dx}$), but there is a more mathematical solution:
You know $a(x)$ (and, thus, $F(x)$), $v_0(x_0)$ and $t_0(x_0)$ as boundary conditions (otherwise the answer is ambiguous). You need to find $a(t)$.
The general definition:
$$a(x)=\frac{d v(x)}{d t}=\frac{d v(x)}{d x}\frac{d x}{d t}=\frac{d v(x)}{d x}v(x)$$
Recombine the differentials:
$$a(x)dx=v(x)dv(x)$$
Integrate both parts:
$$\int_{x_0}^x a(x)dx=\int_{v_0}^v v(x)dv(x) = \frac{v^2-v_0^2}{2}$$ Solve for $v(x)$:
$$v(x)=\sqrt{v_0^2+2\int_{x_0}^x a(x)dx}$$
Now, let's find $t(x)$. We will first find its derivative:
$$\frac{d t(x)}{d x}=\frac{1}{v(x)}$$
Again, split the differentials and integrate:
$$\int_{t_0}^t dt=\int_{x_0}^x \frac{dx}{v(x)}=\int_{x_0}^x \frac{dx}{\sqrt{v_0^2+2\int_{x_0}^x a(x)dx}}$$ Evaluate the leftmost integral:
$$t=t_0+\int_{x_0}^x \frac{dx}{\sqrt{v_0^2+2\int_{x_0}^x a(x)dx}}$$
Solving for $x$ allows you to substitute it and get $a(t)$.
This works perfectly when $x(t)$ is a 1-to-1 correspondence, but fails (to give the right answer), when the motion changes direction. In your case it seems that 1/s^2 is meant to be a positive real, so it should work fine.
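Applying this recipe to the case asked about, $a(x) = x/s^2$, and choosing for simplicity the boundary conditions $x_0 = 0$, $t_0 = 0$ and $v_0 > 0$ (my choice, just to keep the algebra short):
$$v(x)=\sqrt{v_0^2+\frac{2}{s^2}\int_0^x x'\,dx'}=\sqrt{v_0^2+\frac{x^2}{s^2}},\qquad t=\int_0^x \frac{dx'}{\sqrt{v_0^2+x'^2/s^2}}=s\,\operatorname{arsinh}\!\left(\frac{x}{s\,v_0}\right).$$
Inverting gives $x(t)=s\,v_0\sinh(t/s)$, and therefore $a(t)=\frac{x(t)}{s^2}=\frac{v_0}{s}\sinh(t/s)$, in agreement with the hyperbolic-function solution in the other answer.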
http://math.stackexchange.com/questions/69918/projective-plane-and-some-curves
# Projective plane and some curves
We define a line in the projective plane as a set of the form $$L_{a,b,c} = \left\{ {\left[ {x,y,z} \right] \in P_R^2 :ax + by + cz = 0} \right\}\text{ or just }L.$$ Let $$\left\{ {L_i } \right\}_{i = 1}^n$$ be a finite collection of lines such that $$\bigcap\limits_{i = 1}^n {L_i }$$ is empty. One last definition: a point $$p \in \bigcup\limits_{i = 1}^n L_i$$ is said to be a $k$-point if it is contained in exactly $k$ lines of $$\left\{ {L_i } \right\}_{i = 1}^n.$$ Prove that $$t_2 \geqslant 3 + \sum\limits_{k \geqslant 4} {\left( {k - 3} \right)t_k }$$ where $t_k$ is the number of $k$-points, and that equality holds iff the corresponding paving is by triangles (every such collection of lines defines a natural paving of the projective plane by polygons).
This problem looks very difficult and I have no idea how to attack it. I would appreciate any help.
If you remove the «homework difficult problem?» from the title nothing is lost... – Mariano Suárez-Alvarez♦ Dec 4 '11 at 7:00
## 1 Answer
Look into what is known as the Sylvester-Gallai Theorem, and combinatorial approaches to proving it.
http://en.wikipedia.org/wiki/Thurston's_geometrization_conjecture
# Geometrization conjecture
In mathematics, Thurston's geometrization conjecture states that compact 3-manifolds can be decomposed canonically into submanifolds that have geometric structures. The geometrization conjecture is an analogue for 3-manifolds of the uniformization theorem for surfaces. It was proposed by William Thurston (1982), and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture.
Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s and since then several complete proofs have appeared in print.
Grigori Perelman sketched a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery. There are now four different manuscripts (see below) with details of the proof. The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture.
## The conjecture
A 3-manifold is called closed if it is compact and has no boundary.
Every closed 3-manifold has a prime decomposition: this means it is the connected sum of prime three-manifolds (this decomposition is essentially unique except for a small problem in the case of non-orientable manifolds). This reduces much of the study of 3-manifolds to the case of prime 3-manifolds: those that cannot be written as a non-trivial connected sum.
Here is a statement of Thurston's conjecture:
Every oriented prime closed 3-manifold can be cut along tori, so that the interior of each of the resulting manifolds has a geometric structure with finite volume.
There are 8 possible geometric structures in 3 dimensions, described in the next section. There is a unique minimal way of cutting an irreducible oriented 3-manifold along tori into pieces that are Seifert manifolds or atoroidal called the JSJ decomposition, which is not quite the same as the decomposition in the geometrization conjecture, because some of the pieces in the JSJ decomposition might not have finite volume geometric structures. (For example, the mapping torus of an Anosov map of a torus has a finite volume sol structure, but its JSJ decomposition cuts it open along one torus to produce a product of a torus and a unit interval, and the interior of this has no finite volume geometric structure.)
For non-oriented manifolds the easiest way to state a geometrization conjecture is to first take the oriented double cover. It is also possible to work directly with non-orientable manifolds, but this gives some extra complications: it may be necessary to cut along projective planes and Klein bottles as well as spheres and tori, and manifolds with a projective plane boundary component usually have no geometric structure so this gives a minor extra complication.
In 2 dimensions the analogous statement says that every surface (without boundary) has a geometric structure consisting of a metric with constant curvature; it is not necessary to cut the manifold up first.
## The eight Thurston geometries
A model geometry is a simply connected smooth manifold X together with a transitive action of a Lie group G on X with compact stabilizers.
A model geometry is called maximal if G is maximal among groups acting smoothly and transitively on X with compact stabilizers. Sometimes this condition is included in the definition of a model geometry.
A geometric structure on a manifold M is a diffeomorphism from M to X/Γ for some model geometry X, where Γ is a discrete subgroup of G acting freely on X. If a given manifold admits a geometric structure, then it admits one whose model is maximal.
A 3-dimensional model geometry X is relevant to the geometrization conjecture if it is maximal and if there is at least one compact manifold with a geometric structure modelled on X. Thurston classified the 8 model geometries satisfying these conditions; they are listed below and are sometimes called Thurston geometries. (There are also uncountably many model geometries without compact quotients.)
There is some connection with the Bianchi groups: the 3-dimensional Lie groups. Most Thurston geometries can be realized as a left invariant metric on a Bianchi group. However S2 × R cannot be, Euclidean space corresponds to two different Bianchi groups, and there are an uncountable number of solvable non-unimodular Bianchi groups, most of which give model geometries with no compact representatives.
### Spherical geometry S3
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O(4, R), with 2 components. The corresponding manifolds are exactly the closed 3-manifolds with finite fundamental group. Examples include the 3-sphere, the Poincaré homology sphere, Lens spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type IX. Manifolds with this geometry are all compact, orientable, and have the structure of a Seifert fiber space (often in several ways). The complete list of such manifolds is given in the article on Spherical 3-manifolds. Under Ricci flow manifolds with this geometry collapse to a point in finite time.
### Euclidean geometry E3
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group R3 × O(3, R), with 2 components. Examples are the 3-torus, and more generally the mapping torus of a finite order automorphism of the 2-torus; see torus bundle. There are exactly 10 finite closed 3-manifolds with this geometry, 6 orientable and 4 non-orientable. This geometry can be modeled as a left invariant metric on the Bianchi groups of type I or VII0. Finite volume manifolds with this geometry are all compact, and have the structure of a Seifert fiber space (sometimes in two ways). The complete list of such manifolds is given in the article on Seifert fiber spaces. Under Ricci flow manifolds with Euclidean geometry remain invariant.
### Hyperbolic geometry H3
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O+(1, 3, R), with 2 components. There are enormous numbers of examples of these, and their classification is not completely understood. The example with smallest volume is the Weeks manifold. Other examples are given by the Seifert–Weber space, or "sufficiently complicated" Dehn surgeries on links, or most Haken manifolds. The geometrization conjecture implies that a closed 3-manifold is hyperbolic if and only if it is irreducible, atoroidal, and has infinite fundamental group. This geometry can be modeled as a left invariant metric on the Bianchi group of type V. Under Ricci flow manifolds with hyperbolic geometry expand.
### The geometry of S2 × R
The point stabilizer is O(2, R) × Z/2Z, and the group G is O(3, R) × R × Z/2Z, with 4 components. The four finite volume manifolds with this geometry are: S2 × S1, the mapping torus of the antipode map of S2, the connected sum of two copies of 3 dimensional projective space, and the product of S1 with two-dimensional projective space. The first two are mapping tori of the identity map and antipode map of the 2-sphere, and are the only examples of 3-manifolds that are prime but not irreducible. The third is the only example of a non-trivial connected sum with a geometric structure. This is the only model geometry that cannot be realized as a left invariant metric on a 3-dimensional Lie group. Finite volume manifolds with this geometry are all compact and have the structure of a Seifert fiber space (often in several ways). Under normalized Ricci flow manifolds with this geometry converge to a 1-dimensional manifold.
### The geometry of H2 × R
The point stabilizer is O(2, R) × Z/2Z, and the group G is O+(1, 2, R) × R × Z/2Z, with 4 components. Examples include the product of a hyperbolic surface with a circle, or more generally the mapping torus of an isometry of a hyperbolic surface. Finite volume manifolds with this geometry have the structure of a Seifert fiber space if they are orientable. (If they are not orientable the natural fibration by circles is not necessarily a Seifert fibration: the problem is that some fibers may "reverse orientation"; in other words their neighborhoods look like fibered solid Klein bottles rather than solid tori.[1]) The classification of such (oriented) manifolds is given in the article on Seifert fiber spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type III. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
### The geometry of the universal cover of SL(2, R)
${\tilde{\rm{SL}}}(2, \mathbf{R})$ is the universal cover of SL(2, R), which fibers over H2. The point stabilizer is O(2, R). The group G has 2 components. Its identity component has the structure $(\mathbf{R}\times\tilde{\rm{SL}}_2 (\mathbf{R}))/\mathbf{Z}$. Examples of these manifolds include: the manifold of unit vectors of the tangent bundle of a hyperbolic surface, and more generally the Brieskorn homology spheres (excepting the 3-sphere and the Poincare dodecahedral space). This geometry can be modeled as a left invariant metric on the Bianchi group of type VIII. Finite volume manifolds with this geometry are orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
### Nil geometry
See also: Nilmanifold
This fibers over E2, and is the geometry of the Heisenberg group. The point stabilizer is O(2, R). The group G has 2 components, and is a semidirect product of the 3-dimensional Heisenberg group by the group O(2, R) of isometries of a circle. Compact manifolds with this geometry include the mapping torus of a Dehn twist of a 2-torus, or the quotient of the Heisenberg group by the "integral Heisenberg group". This geometry can be modeled as a left invariant metric on the Bianchi group of type II. Finite volume manifolds with this geometry are compact and orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow compact manifolds with this geometry converge to R2 with the flat metric.
### Sol geometry
See also: Solvmanifold
This geometry fibers over the line with fiber the plane, and is the geometry of the identity component of the group G. The point stabilizer is the dihedral group of order 8. The group G has 8 components, and is the group of maps from 2-dimensional Minkowski space to itself that are either isometries or multiply the metric by −1. The identity component has a normal subgroup R2 with quotient R, where R acts on R2 with 2 (real) eigenspaces, with distinct real eigenvalues of product 1. This is the Bianchi group of type VI0 and the geometry can be modeled as a left invariant metric on this group. All finite volume manifolds with sol geometry are compact. The compact manifolds with sol geometry are either the mapping torus of an Anosov map of the 2-torus (an automorphism of the 2-torus given by an invertible 2 by 2 matrix whose eigenvalues are real and distinct, such as $\left( {\begin{array}{*{20}c} 2 & 1 \\ 1 & 1 \\ \end{array}} \right)$), or quotients of these by groups of order at most 8. The eigenvalues of the automorphism of the torus generate an order of a real quadratic field, and the sol manifolds could in principle be classified in terms of the units and ideal classes of this order, though the details do not seem to be written down anywhere. Under normalized Ricci flow compact manifolds with this geometry converge (rather slowly) to R1.
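As a quick illustration of the eigenvalue condition on the Anosov matrix, the example quoted above is easy to check numerically; this is only a minimal sketch using NumPy, not part of the classification itself.

```python
import numpy as np

# The example automorphism of the 2-torus quoted above.
A = np.array([[2, 1],
              [1, 1]])

eig = np.linalg.eigvals(A)
print(eig)                      # approx [2.618, 0.382], i.e. (3 +/- sqrt(5)) / 2
print(np.prod(eig))             # approx 1.0, since det(A) = 1
print(np.all(np.isreal(eig)))   # True: real and distinct, as required
```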
## Uniqueness
A closed 3-manifold has a geometric structure of at most one of the 8 types above, but finite volume non-compact 3-manifolds can occasionally have more than one type of geometric structure. (However a manifold can have many different geometric structures of the same type; for example, a surface of genus at least 2 has a continuum of different hyperbolic metrics.) More precisely, if M is a manifold with a finite volume geometric structure, then the type of geometric structure is almost determined as follows, in terms of the fundamental group π1(M):
• If π1(M) is finite then the geometric structure on M is spherical, and M is compact.
• If π1(M) is virtually cyclic but not finite then the geometric structure on M is S2×R, and M is compact.
• If π1(M) is virtually abelian but not virtually cyclic then the geometric structure on M is Euclidean, and M is compact.
• If π1(M) is virtually nilpotent but not virtually abelian then the geometric structure on M is nil geometry, and M is compact.
• If π1(M) is virtually solvable but not virtually nilpotent then the geometric structure on M is sol geometry, and M is compact.
• If π1(M) has an infinite normal cyclic subgroup but is not virtually solvable then the geometric structure on M is either H2×R or the universal cover of SL(2, R). The manifold M may be either compact or non-compact. If it is compact, then the 2 geometries can be distinguished by whether or not π1(M) has a finite index subgroup that splits as a semidirect product of the normal cyclic subgroup and something else. If the manifold is non-compact, then the fundamental group cannot distinguish the two geometries, and there are examples (such as the complement of a trefoil knot) where a manifold may have a finite volume geometric structure of either type.
• If π1(M) has no infinite normal cyclic subgroup and is not virtually solvable then the geometric structure on M is hyperbolic, and M may be either compact or non-compact.
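The case analysis above can be summarized as a small decision procedure. The sketch below is only a schematic restatement: the predicate methods on `pi1` are hypothetical placeholders, since deciding these group-theoretic properties is the genuinely hard part.

```python
def finite_volume_geometry(pi1):
    """Schematic restatement of the case analysis above for a finite-volume
    geometric structure, keyed to properties of the fundamental group pi1.
    The predicates are assumed helper methods, not an actual algorithm."""
    if pi1.is_finite():
        return "spherical"
    if pi1.is_virtually_cyclic():
        return "S^2 x R"
    if pi1.is_virtually_abelian():
        return "Euclidean"
    if pi1.is_virtually_nilpotent():
        return "Nil"
    if pi1.is_virtually_solvable():
        return "Sol"
    if pi1.has_infinite_normal_cyclic_subgroup():
        # H^2 x R or the universal cover of SL(2, R); for compact manifolds the
        # two are distinguished by whether a finite-index subgroup splits as a
        # semidirect product over the normal cyclic subgroup.
        return "H^2 x R or universal cover of SL(2, R)"
    return "hyperbolic"
```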
Infinite volume manifolds can have many different types of geometric structure: for example, R3 can have 6 of the different geometric structures listed above, as 6 of the 8 model geometries are homeomorphic to it. Moreover if the volume does not have to be finite there are an infinite number of new geometric structures with no compact models; for example, the geometry of almost any non-unimodular 3-dimensional Lie group.
There can be more than one way to decompose a closed 3-manifold into pieces with geometric structures. For example:
• Taking connected sums with several copies of S3 does not change a manifold.
• The connected sum of two projective 3-spaces has a S2×R geometry, and is also the connected sum of two pieces with S3 geometry.
• The product of a surface of negative curvature and a circle has a geometric structure, but can also be cut along tori to produce smaller pieces that also have geometric structures. There are many similar examples for Seifert fiber spaces.
It is possible to choose a "canonical" decomposition into pieces with geometric structure, for example by first cutting the manifold into prime pieces in a minimal way, then cutting these up using the smallest possible number of tori. However, this minimal decomposition is not necessarily the one produced by Ricci flow; in fact, the Ricci flow can cut up a manifold into geometric pieces in many inequivalent ways, depending on the choice of initial metric.
## History
The Fields Medal was awarded to Thurston in 1982 partially for his proof of the geometrization conjecture for Haken manifolds.
The case of 3-manifolds that should be spherical has been slower, but provided the spark needed for Richard Hamilton to develop his Ricci flow. In 1982, Hamilton showed that given a closed 3-manifold with a metric of positive Ricci curvature, the Ricci flow would collapse the manifold to a point in finite time, which proves the geometrization conjecture for this case as the metric becomes "almost round" just before the collapse. He later developed a program to prove the geometrization conjecture by Ricci flow with surgery. The idea is that the Ricci flow will in general produce singularities, but one may be able to continue the Ricci flow past the singularity by using surgery to change the topology of the manifold. Roughly speaking, the Ricci flow contracts positive curvature regions and expands negative curvature regions, so it should kill off the pieces of the manifold with the "positive curvature" geometries S3 and S2 × R, while what is left at large times should have a thick-thin decomposition into a "thick" piece with hyperbolic geometry and a "thin" graph manifold.
In 2003 Grigori Perelman sketched a proof of the geometrization conjecture by showing that the Ricci flow can indeed be continued past the singularities, and has the behavior described above. The main difficulty in verifying Perelman's proof of the Geometrization conjecture was a critical use of his Theorem 7.4 in the preprint 'Ricci Flow with surgery on three-manifolds'. This theorem was stated by Perelman without proof. There are now several different proofs of Perelman's Theorem 7.4, or variants of it which are sufficient to prove geometrization. There is the paper of Shioya and Yamaguchi [2] that uses Perelman's stability theorem [3] and a fibration theorem for Alexandrov spaces.[4] This method, with full details leading to the proof of Geometrization, can be found in the exposition by B. Kleiner and J. Lott in 'Notes on Perelman's papers' in the journal Geometry & Topology.[5]
A second route to Geometrization is the method of Bessières et al.,[6][7] which uses Thurston's hyperbolization theorem for Haken manifolds [8] and Gromov's norm for 3-manifolds.[9] A book by the same authors with complete details of their version of the proof has been published by the European Mathematical Society.[10]
Also containing proofs of Perelman's Theorem 7.4, there is a paper of Morgan and Tian,[11] another paper of Kleiner and Lott,[12] and a paper by Cao and Ge.[13]
## Notes
1. Ronald Fintushel, Local S1 actions on 3-manifolds, Pacific J. Math. 66, no. 1 (1976), 111-118, http://projecteuclid.org/...
2. T. Shioya and T. Yamaguchi, 'Volume collapsed three-manifolds with a lower curvature bound,' Math. Ann. 333 (2005), no. 1, 131-155.
3. T. Yamaguchi. A convergence theorem in the geometry of Alexandrov spaces. In Actes de la Table Ronde de Geometrie Differentielle (Luminy, 1992), volume 1 of Semin. Congr., pages 601-642. Soc. math. France, Paris, 1996.
4. Bessieres, L.; Besson, G.; Boileau, M.; Maillot, S.; Porti, J. (2007). "Weak collapsing and geometrization of aspherical 3-manifolds". arXiv:0706.2065 [math.GT].
5. Bessieres, L.; Besson, G.; Boileau, M.; Maillot, S.; Porti, J. (2010). "Collapsing irreducible 3-manifolds with nontrivial fundamental group". Invent. Math. 179 (2): 435–460. doi:10.1007/s00222-009-0222-6.
6. J.-P. Otal, 'Thurston's hyperbolization of Haken manifolds', Surveys in differential geometry, Vol. III, Cambridge, MA, 77-194, Int. Press, Boston, MA, 1998.
7. M. Gromov, 'Volume and bounded cohomology', Inst. Hautes Études Sci. Publ. Math. (56): 5-99 (1983), 1982.
8. B. Kleiner and J. Lott, 'Locally collapsed 3-manifolds', arXiv:1005.5106, 2010
9. J. Cao and J. Ge, 'A proof of Perelman's collapsing theorem for 3-manifolds', arXiv:0908.3229
## References
• L. Bessieres, G. Besson, M. Boileau, S. Maillot, J. Porti, 'Geometrisation of 3-manifolds', EMS Tracts in Mathematics, volume 13. European Mathematical Society, Zurich, 2010.
• M. Boileau Geometrization of 3-manifolds with symmetries
• F. Bonahon Geometric structures on 3-manifolds Handbook of Geometric Topology (2002) Elsevier.
• Allen Hatcher: Notes on Basic 3-Manifold Topology 2000
• J. Isenberg, M. Jackson, Ricci flow of locally homogeneous geometries on a Riemannian manifold, J. Diff. Geom. 35 (1992) no. 3 723-741.
• G. Perelman, The entropy formula for the Ricci flow and its geometric applications, 2002
• G. Perelman,Ricci flow with surgery on three-manifolds, 2003
• G. Perelman, Finite extinction time for the solutions to the Ricci flow on certain three-manifolds, 2003
• Bruce Kleiner and John Lott, Notes on Perelman's Papers (May 2006) (fills in the details of Perelman's proof of the geometrization conjecture).
• Cao, Huai-Dong; Zhu, Xi-Ping (June 2006). "A Complete Proof of the Poincaré and Geometrization Conjectures: Application of the Hamilton-Perelman theory of the Ricci flow" (PDF). Asian Journal of Mathematics 10 (2): 165–498. Retrieved 2006-07-31. Revised version (December 2006): Hamilton-Perelman's Proof of the Poincaré Conjecture and the Geometrization Conjecture
• John W. Morgan. Recent progress on the Poincaré conjecture and the classification of 3-manifolds. Bulletin Amer. Math. Soc. 42 (2005) no. 1, 57-78 (expository article explains the eight geometries and geometrization conjecture briefly, and gives an outline of Perelman's proof of the Poincaré conjecture)
• Morgan, John W.; Fong, Frederick Tsz-Ho (2010). Ricci Flow and Geometrization of 3-Manifolds. University Lecture Series. ISBN 978-0-8218-4963-7. Retrieved 2010-09-26.
• Scott, Peter The geometries of 3-manifolds. (errata) Bull. London Math. Soc. 15 (1983), no. 5, 401-487.
• Thurston, William P. (1982). "Three-dimensional manifolds, Kleinian groups and hyperbolic geometry". American Mathematical Society. Bulletin. New Series 6 (3): 357–381. doi:10.1090/S0273-0979-1982-15003-0. ISSN 0002-9904. MR 648524 This gives the original statement of the conjecture.
• William Thurston. Three-dimensional geometry and topology. Vol. 1. Edited by Silvio Levy. Princeton Mathematical Series, 35. Princeton University Press, Princeton, NJ, 1997. x+311 pp. ISBN 0-691-08304-5 (in depth explanation of the eight geometries and the proof that there are only eight)
• William Thurston. The Geometry and Topology of Three-Manifolds, 1980 Princeton lecture notes on geometric structures on 3-manifolds.
|
http://mathhelpforum.com/business-math/148082-minimum-variance-profit-rate.html
|
# Thread:
1. ## the minimum variance of the profit rate
There is one riskless bond and a number of risky stocks in the market. For an expected profit r ≥ 1, the minimum variance of the profit rate for the risky securities is
σ^2(r) = r^2 - 2r + 2.
Let's assume that we have a portfolio with both the riskless bond and risky stocks. When the expected profit is r = 8, the variance of the profit rate is 45. What is the minimum variance of the profit rate when r = 2?
Is there anyone who could help me with this? I have tried to solve this in many ways, but I never get the answer.
2. Is that the entire question, as printed, with no changes or omissions?
If it is, I am stumped. You are presumably expected to use the first scenario to deduce the return on the bonds; this will let you work out the proportion of the portfolio in bonds and stocks in the second scenario; then you can just plug numbers in to get the variance.
However, I don't see how to deduce the return on the bonds from the information you posted.
3. Yes, it is unfortunately all the information I have
4. Is this the proper way to approach this problem:
1) Given $\sigma^2_{min} = 45$ for $r=8$, this implies 7.633 of the return is from stocks and 0.367 is from bonds (i.e. riskless, so they contribute zero to the variance).
2) To have portfolio returning r=2, with bonds only able to contribute 0.367, the stock portion must be 2 - 0.367 = 1.633, thus r=1.633, so $\sigma^2_{min}=1.4$
?
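As a quick numerical check of the arithmetic in the approach above (only a sketch of that reasoning, using the given frontier formula; whether this is the model the exam intended is exactly what is in doubt):

```python
from math import sqrt

def frontier_variance(r):
    # Given minimum variance of the risky-only portfolio at expected profit r.
    return r**2 - 2*r + 2

# Scenario 1: total expected profit 8, variance 45.
# Solve r_s**2 - 2*r_s + 2 = 45 for the risky part's profit (take the root with r_s > 1).
r_s = 1 + sqrt(44)        # about 7.633
bond_part = 8 - r_s       # about 0.367, contributed by the riskless bond

# Scenario 2: total expected profit 2, same riskless contribution.
print(frontier_variance(2 - bond_part))   # about 1.40
```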
5. 1.4 was my answer on the exam and I got 0 points. I calculated it differently though.
|
http://www.physicsforums.com/showthread.php?p=2845564
|
## 2nd order nonlinear differential equation
Hello everybody,
could you please direct me how to solve this nonlinear differential equation analytically, or with Mathematica or MATLAB? I really need to solve it for my research project, so please help me
du/dx=d/dx[a*u^(-1/2)*du/dx]-n*u^(3/2)*(u-c)/b
boundary conditions are:
u(0)=b+c
u(-infinity)=c
where a, n, c, and b are constants
by the way, I'm a master's student in mechanical engineering
thank you
If I understand your text correctly, the equation is: $$\frac{du}{dx} = \frac{d}{dx}\left[a \, u^{-\frac{1}{2}} \, \frac{du}{dx}\right] - \frac{n}{b} \, u^{\frac{3}{2}}(u - c)$$ If this is so, then, notice that the equation does not contain the independent variable x explicitly. Therefore, a parametric substitution of the form: $$p \equiv \frac{du}{dx}$$ which transforms the derivatives to: $$\frac{d}{dx} = \frac{du}{dx} \, \frac{d}{du} = p \, \frac{d}{du}$$ should decrease the order of the ODE by one order. Notice that now $p = p(u)$ or $u = u(p)$, whichever is more convenient. Please show what is the form of the new ODE after this substitution is made.
Hi, Thanks for your help, Dickfore. my equation after that substitution transforms to: P(u)= P(u)*[p'(u)*u^(-1/2)-P(u)*u^(-3/2)/2]-n/b*u^(3/2)*(u-c) Where: a, n,c &b are constants like you said: P=du/dx, P=P(u) So how can I keep solving that? please help me... Thank you again
## 2nd order nonlinear differential equation
I think you have forgotten the coefficient $a$ in front of the first term on the right hand side. Other than that, the equation is correct.
Do some algebra to simplify the equation so that the coefficient in fron of $p'(u)$ is equal to 1. Post your simplified equation!
I did what you said, Dickfore. the simplified equation is: P'(u)=u^(1/2)/a+P(u)*u^(-1)/2+n*u^2*(u-c)/(P(u)*a*b) what should I do after that? actually I don't know! Thank you
Yes, this is what I got too. Just rewriting it for further reference: $$\frac{dp}{du} - \frac{p}{2 u} - \frac{u^{\frac{1}{2}}}{a} - \frac{n u^{2} \, (u - c)}{a \, b \, p} = 0$$
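A symbolic sanity check of this reduction (only a verification sketch, not a step toward a closed form):

```python
import sympy as sp

u = sp.symbols('u', positive=True)
a, b, n, c = sp.symbols('a b n c', positive=True)
p = sp.Function('p')(u)   # p(u) = du/dx, so d/dx acts as p * d/du

# Original equation with du/dx -> p and d/dx -> p * d/du, moved to one side:
original = p * sp.diff(a * p / sp.sqrt(u), u) - n / b * u**sp.Rational(3, 2) * (u - c) - p

# The first-order equation quoted above:
reduced = sp.diff(p, u) - p / (2 * u) - sp.sqrt(u) / a - n * u**2 * (u - c) / (a * b * p)

# They agree up to the nonzero factor a*p/sqrt(u):
print(sp.simplify(original / (a * p / sp.sqrt(u)) - reduced))   # prints 0
```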
What do you mean? don't you help me anymore? :(
Quote by Dickfore Yes, this is what I got too. Just rewriting it for further reference: $$\frac{dp}{du} - \frac{p}{2 u} - \frac{u^{\frac{1}{2}}}{a} - \frac{n u^{2} \, (u - c)}{a \, b \, p} = 0$$
We can't go any further with this can we? Guess if I had to work with it, I'd first get rid of a, b, c, and n:
$$\frac{dp}{du}-\frac{p}{2u}-\sqrt{u}-\frac{u^3}{p}=0$$
and even that, Mathematica can't solve not to mention you'd have to integrate the solution again to get the final answer.
Perhaps it would have been better not to expand the derivative using the product rule in the first place. If you multiply the equation by $u^{-1/2}$, you will get: $$\left(u^{-1/2}\frac{dp}{du} + (-\frac{1}{2}) \, u^{-3/2} \, p\right) - \frac{1}{a} - \frac{n u \, (u - c)}{a \, b} u^{1/2} \, p^{-1} = 0$$ and introducing a new $p$: $$u^{1/2} \, p \rightarrow p \Rightarrow p \rightarrow u^{1/2} \, p$$ then the equation has one term less: $$\frac{dp}{du} = \frac{1}{a} + \frac{n \, u \, (u - c)}{a \, b \, p} = \frac{b \, p + n \, u \, (u - c)}{a \, b \, p}$$ Notice that now the derivative of $x$ with respect to $u$ becomes (because of the redefinition of $p$): $$\frac{d x}{d u} = \frac{1}{p} \rightarrow \frac{d x}{d u} = u^{-1/2} \, p^{-1}$$ I cannot think of any way to find a solution of this equation in a closed analytic form. (*** MY SUGGESTION ***) The only thing I thought of doing is introducing a new parameter on which u, p and x depend. This was because in the equation for $d p/d u$ we have a term $u (u - c)$ which is non-monotonic and, thus, has no unique inverse. Also, the equation for $d x/d u$ contains an irrational function which has ill behavior. Therefore, we can rewrite everything in the form: $$u^{1/2} \frac{d}{d u} \equiv A \frac{d}{d v} \Leftrightarrow d v = A \, u^{-1/2} \, du \Rightarrow v = 2 \, A \, u^{1/2}, \; A = \frac{1}{2}, \; v = u^{1/2} \Rightarrow u = v^{2}$$ $$\frac{d p}{d u} = \frac{1}{2 v} \, \frac{d p}{d v} = \frac{b \, p + n \, v^{2} \, (v^{2} - c)}{a \, b \, p} \Rightarrow \frac{d p}{d v} = \frac{2 \, v \left[b \, p + n \, v^{2} \, (v^{2} - c)\right]}{a \, b \, p}$$ $$\frac{d x}{d u} = \frac{1}{2 \, v} \, \frac{d x}{d v} = \frac{1}{v \, p} \Rightarrow \frac{d x}{d v} = \frac{2}{p}$$ $$\left\{\begin{array}{rcl} \frac{d p}{d t} & = & 2 \, v \, \left[b \, p + n \, v^{2} \, (v^{2} - c)\right] \\ \frac{d v}{d t} & = & a \, b \, p \\ \frac{d x}{d t} & = & \frac{d v}{d t} \, \frac{d x}{d v} = a \, b \, p \, 2 \, p^{-1} = 2 \, a \, b \end{array}\right.$$ In the process of rewriting the last equation, we see that $x$ is a linear function of $t$. Therefore, this introduction of a new parameter, if anything, has led us to reinterpret $x$ as a good parameter. As a final transformation, we have the following autonomous system of 2 first order ODEs: $$\left\{\begin{array}{rcl} \frac{d v}{d x} & = & p \\ \frac{d p}{d x} & = & \frac{v}{a} \, \left[p + \frac{n}{2 \, b} \, v^{2} \, (v^{2} - c) \right] \end{array}\right.$$ where we also made the transformation $p/2 \rightarrow p$ again. The boundary conditions that you gave translate to the following: $$\begin{array}{l} x \rightarrow -\infty \Rightarrow v = \sqrt{c} \\ x = 0 \Rightarrow v = \sqrt{b + c} \end{array}$$
Hi, Dickfore Thank you very much for your help again. you've introduced a new p, If I understand correctly, that is: u^(1/2)*p~p but didn't you mean, u^(-1/2)*p~p? ( Actually I mentioned the power of u)
Quote by fatimajan Hi, Dickfore Thank you very much for your help again. you've introduced a new p, If I understand correctly, that is: u^(1/2)*p~p but didn't you mean, u^(-1/2)*p~p? ( Actually I mentioned the power of u)
Yes. You are right. However, everything else is correct (I made p -> u^{1/2} p as it should be in the continuation of that line).
Hi, Dickfore but I don't think everything else is correct: Like I said I think you mean u^(-1/2)*P ->P NOT u^(1/2)*P ->P otherwise the equation doesn't have one term less. so, dx/du=u^(1/2)*P^(-1) then like you said dx/du contains an irrational function. Therefore, we can rewrite in the form: u^(1/2)*d/du=A*d/dv but according to what I said we'll have: dx/dv=2*v^2/P so I think we should write: u^(-1/2)d/du=Ad/dv =>v=2/3*A*u^(3/2) ,A=3/2, =>u=v^(2/3) so dP/dv=2/3*v^(-1/3)*[bP+nv^(2/3)*(v^(2/3)-c)]/abP Then dx/dv=2/(3P) finally like the way you suggested: dx/dt=dv/dt*dx/dv=abP*2/(3P)=(2/3)ab the autonomous system of 2 first order ODEs transforms to: dv/dx=P (3P/2 -> P) and dP/dx=v^(-1/3)/a*[P+(n3/2b)v^(2/3)*(v^(2/3)-c)] what do you think? was I right? by the way, I don't know how I can type better like you, you really write clearly. excuse me!
Hi, Dickfore don't you help me anymore? I was hoping to solve my problem with the help of you! you really helped me, would you mind helping me again, please? thank you
Well, I can't seem to find an error in my derivation and I can't decipher your writing, so I will assume the derivation I posted is correct. As someone already mentioned in this thread, it seems impossible that the equation can be solved in a closed form. So, I'm afraid you are left with solving it numerically.
Last time we ended at this step:
Quote by Dickfore $$\left\{\begin{array}{rcl} \frac{d v}{d x} & = & p \\ \frac{d p}{d x} & = & \frac{v}{a} \, \left[p + \frac{n}{2 \, b} \, v^{2} \, (v^{2} - c) \right] \end{array}\right.$$ where we also made the transformation $p/2 \rightarrow p$ again. The boundary conditions that you gave translate to the following: $$\begin{array}{l} x \rightarrow -\infty \Rightarrow v = \sqrt{c} \\ x = 0 \Rightarrow v = \sqrt{b + c} \end{array}$$
The connection with the old variables is given by:
[tex]
u(x) = [v(x)]^{2}
[/tex]
When working with numerics, it is best to get rid of as many parameters as possible. Let's scale everything:
[tex]
\begin{array}{l}
x = x_{0} \, \bar{x} \\
v = v_{0} \, \bar{v} \\
p = p_{0} \, \bar{p}
\end{array}
[/tex]
Then the equations become:
[tex]
\begin{array}{rcl}
\frac{v_{0}}{x_{0}} \, \frac{d \bar{v}}{d \bar{x}} & = & p_{0} \, \bar{p} \\
\frac{p_{0}}{x_{0}} \, \frac{d \bar{p}}{d \bar{x}} & = & \frac{v_{0}}{a} \, \left[ p_{0} \, \bar{p} + \frac{n}{2 b} \, v^{2}_{0} \, \bar{v}^{2} \left(v^{2}_{0} \, \bar{v}^{2} - c \right) \right]
\end{array}
[/tex]
I think it is most convenient to make this choice:
[tex]
v^{2}_{0} = c, \; p_{0} = \frac{n}{2 b} \,v^{4}_{0}. \; \frac{p_{0}}{x_{0}} = \frac{v_{0} \, p_{0}}{a}
[/tex]
[tex]
x_{0} = \frac{a}{\sqrt{c}}, p_{0} = \frac{n \, c^{2}}{2 b}, \; v_{0} = \sqrt{c} \Rightarrow \frac{x_{0} \, p_{0}}{v_{0}} = \frac{a n c}{2 b} \equiv k
[/tex]
Also, let's get rid of the bars above the symbols again:
[tex]
\begin{array}{rcl}
\frac{d v}{d x} & = & k \, p \\
\frac{d p}{d x} & = & v \, \left[ p + v^{2} \, (v^{2} - 1) \right]
\end{array}
[/tex]
With the boundary conditions being:
[tex]
\left\{\begin{array}{l}
\sqrt{c} \, v = \sqrt{c}, \; x \rightarrow -\infty \\
\sqrt{c} \, v = \sqrt{b + c}, \; x = 0
\end{array}\right. \Rightarrow \left\{\begin{array}{l}
v = 1, \; x \rightarrow -\infty \\
v = \sqrt{1 + \frac{b}{c}} = M, \; x = 0
\end{array}\right.
[/tex]
Instead of having this limit as $x \rightarrow -\infty$, let us make the simultaneous substitution $x \rightarrow -x, p \rightarrow -p$. The equations become:
[tex]
\begin{array}{rcl}
\frac{d v}{d x} & = & k \, p \\
\frac{d p}{d x} & = & v \, \left[ p + v^{2} \, (v^{2} - 1) \right]
\end{array}
[/tex]
with the boundary conditions:
[tex]
\left\{\begin{array}{l}
v = \sqrt{1 + \frac{b}{c}} = M, \; x = 0 \\
v \rightarrow 1, \; x \rightarrow \infty \\
\end{array}\right.
[/tex]
Can you find the stationary points for this autonomous non-linear system? What is their type?
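For what it's worth, here is one way to inspect those stationary points with SymPy; k = 1 below is only a placeholder value for the combination a*n*c/(2*b).

```python
import sympy as sp

v, p = sp.symbols('v p', real=True)
k = 1   # placeholder for k = a*n*c/(2*b); any positive value gives the same picture

f1 = k * p                           # dv/dx
f2 = v * (p + v**2 * (v**2 - 1))     # dp/dx

points = sp.solve([f1, f2], [v, p], dict=True)   # (0, 0), (1, 0), (-1, 0)
J = sp.Matrix([f1, f2]).jacobian([v, p])

for pt in points:
    print(pt, J.subs(pt).eigenvals())
# (0, 0) is degenerate (both eigenvalues vanish at linear order), while
# (1, 0) and (-1, 0) have real eigenvalues of opposite sign for k > 0, i.e. saddles.
```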
Dear Dickfore I'm working on your guidance, I think it's so helpful. thank you very much
Dear Dickfore, actually I didn't understand what you meant by "stationary points", so I tried to find out. Today I found an article, "Stationary points iteration method for periodic solution to nonlinear system"; is that the kind of thing you mean? I couldn't download it yet, but I'll do that soon so I can respond to what you asked. If you know an easier and faster way, though, please direct me to it. Thank you
I am talking about this: http://mathworld.wolfr...int.html in relation to the paragraphs on autonomous systems of ODEs.
|
http://mathoverflow.net/questions/34782/does-frac-mboxlcm1-2-dots-n1-mboxlcm1-2-dots-n-to-infty/34786
|
## Does $\frac{\mbox{lcm}(1,2,\dots,n+1)}{\mbox{lcm}(1,2,\dots,n)}\to\infty$?
I feel that the answer to the title question is "yes". However, I tried using different bounds on such least common multiples to prove this with no luck. Any input on this is highly appreciated.
-
Have you tried $n+1=2p$, where $p$ is a prime? – damiano Aug 6 2010 at 16:00
Well, it's equal to 1 infinitely often, so... – Qiaochu Yuan Aug 6 2010 at 16:00
Very helpful hints. Thanks a lot. – Abdulaziz Aug 6 2010 at 17:19
There exists a "simple" formula for the sequence: $\frac{\operatorname{lcm}(1,2,3,\dots,n+1)}{\operatorname{lcm}(1,2,3,\dots,n)}$ is $p$ if $n+1 = p^{\alpha}$ for some prime $p$, and $1$ otherwise. – Nick S Oct 28 2010 at 22:59
## 2 Answers
The answer is "no", the limit does not exist, because (now I'm just collecting the comments already made...) we can consider the sequence $(2p_n-1)_n$, where $p_1,p_2,\dots$ denote the odd primes. We have $$\operatorname{lcm}(1,\dots,2p_n) = \operatorname{lcm}(1,\dots,2p_n-1),$$ since both $2$ and $p_n$ already divide $\operatorname{lcm}(1,\dots,2p_n-1)$ and $2p_n$ is not a prime power. So along this sequence the quotient of the lcm's is $1$.
If you take the series $(p_n-1)_n$, then there is a "new prime" in the numerator, so $$\operatorname{lcm}(1,\dots,p_n) = p_n \operatorname{lcm}(1,\dots,p_n-1).$$ Therefore our series of lcm-quotients goes to $\infty$, since there are infinitely many primes.
This means the limit does not exist: the limit inferior is $1$ (since $\operatorname{lcm}(1,\dots,n)$ divides $\operatorname{lcm}(1,\dots,n+1)$), and the limit superior is $\infty$.
-
Thanks to you and to those who commented earlier. This was helpful. – Abdulaziz Aug 6 2010 at 17:14
The limit doesn't exist, as has been pointed out.
However, I'm inclined to interpret the question a bit more loosely. Let $f(n) = lcm(1, 2, \ldots n)$; then it makes sense to ask for some sort of "average" value of $f(n+1)/f(n)$.
Now, $f(n+1)/f(n) = p$ if $n+1$ is a power of a prime $p$, and 1 otherwise. So the non-1 values of $f(n+1)/f(n)$ get larger and larger, but also sparser and sparser, giving hope that there is some kind of average. So we first look at the mean of the first $n$ such quotients, as $n \to \infty$, $$\lim_{n \to \infty} {1 \over n} \sum_{k=1}^n {f(k+1)/f(k)}.$$ But the sum $\sum_{k=1}^n f(k+1)/f(k)$ is at least the sum of all the primes less than $n$, which grows faster than linearly with $n$, so this limit doesn't exist.
But it seems kind of silly to take the mean of quotients anyway. The more natural limit, I think, is $$\lim_{n \to \infty} f(n)^{1/n}$$ and this in fact does exist, and has value $e$. Of course this doesn't mean that the answer to your original question is $e$. But it means that $f(n)$ is "about" $e^n$; more specifically it follows from the prime number theorem that $\lim_{n \to \infty} (\log f(n))/n = 1$. (I'm quoting this from the encyclopedia of integer sequences and don't remember the proof.) This gives some sense of how quickly $f(n)$ grows.
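A quick numerical illustration of both points (only a sketch: the non-trivial quotients occur exactly when $n+1$ is a prime power, and $\log f(n)/n$ creeps toward $1$ very slowly):

```python
from math import gcd, log

def f(n):
    """lcm(1, 2, ..., n), computed incrementally."""
    out = 1
    for k in range(1, n + 1):
        out = out * k // gcd(out, k)
    return out

# Non-trivial quotients f(n+1)/f(n): exactly at prime powers n+1.
print([(n + 1, f(n + 1) // f(n)) for n in range(1, 20) if f(n + 1) != f(n)])
# [(2, 2), (3, 3), (4, 2), (5, 5), (7, 7), (8, 2), (9, 3), (11, 11), (13, 13), (16, 2), (17, 17), (19, 19)]

# Growth rate: log f(n) / n tends to 1, slowly.
for n in (10, 100, 1000, 10000):
    print(n, round(log(f(n)) / n, 3))
```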
-
|
http://mathoverflow.net/revisions/114500/list
|
If you are happy with Frechet bundles here is an alternative approach. Let $\pi \colon {\mathcal G} \times M \to M$ be the projection and consider $\pi^{-1}(TM) \to {\mathcal G} \times M$ a real vector bundle of rank $n$. Following the notation in the paper let $P_{GL^+} \to M$ be the $GL^+(n, {\mathbb R})$ bundle of oriented frames of $TM$. The bundle of oriented frames of $\pi^{-1}(TM)$ is $\pi^{-1}(P_{GL^+})$. As in the paper pick a lift $P_{\widetilde{GL}^+} \to M$ of $P_{GL^+}$ to $\widetilde{GL}^+(n, {\mathbb R}) \to M$ of bundles over $M$. This exists because we assume $M$ is spin. Then $\pi^{-1}(P_{\widetilde{GL}^+})$ is a lift of $\pi^{-1}(P_{GL^+})$ to $\widetilde{GL}^+(n, {\mathbb R})$.
If $g \in {\mathcal G}$ and $m \in M$ then $\pi^{-1}(TM)_{(g, m)} = T_m M$ so has on it an inner product defined by $g(m)$. Denote this "universal" inner product on $\pi^{-1}(TM)$ by $g$. It will be smooth for the usual reason with Frechet manifolds which is because if $M$ and $N$ are finite-dimensional bundles then the evaluation map $$M \times C^\infty(M, N) \to N$$ is a smooth map of Frechet manifolds [1]. Again following the approach in the paper we let $P_{SO} \subset \pi^{-1}(P_{GL^+})$ be the subbundle of oriented orthonormal frames for the metric $g$. Taking the pre-image of this in $\pi^{-1}(P_{\widetilde{GL}^+})$ gives us a $Spin(r, s)$ bundle over $\mathcal{G} \times M$. The associated vector bundle to this using the spin representation gives us $E$ as a smooth, finite rank, Frechet vector bundle.
Finally you want a theorem that says that when you "push-down" $E$ with $\pi$ the result is a smooth Frechet vector bundle on ${\mathcal G}$. This seems reasonable, but I'm not sure where to find it. I can't see it in [1].
Sorry this is a bit sketchy but that reflects the sketchiness of my knowledge of Frechet manifolds.
[1] Richard Hamilton -- The Inverse Function Theorem of Nash Moser. http://dx.doi.org/10.1090%2FS0273-0979-1982-15004-2
|
http://math.stackexchange.com/questions/69033/help-me-understand-limits
|
# Help me understand limits
Good day
I'm currently doing some math homework (don't worry I won't ask anyone to solve anything) and I don't think I'm understanding limits correctly. More precisely how the l'Hôpital rule works.
I know I can/should be able to apply it if it is either $\infty/\infty$ or $0/0$ but I was wondering does it have anything to do with $0/\infty$ or $\infty/0$?
Anything you think might help me better understand limits would be appreciated.
P.S. I'm sorry if this is a simple question, math is not my strongest point.
-
No, you don't use l'Hôpital on anything you can't reduce to the indeterminate $0/0$ or $\infty/\infty$ cases. Usually other techniques will be much more transparent than l'Hôpital anyway. – J. M. Oct 1 '11 at 14:47
Can you name the other techniques? And can I assume that if I have a limit of a function f(x) that is 0/infinity that it is just 0? – Siemsen Oct 1 '11 at 15:01
The "other techniques" come out on a case-to-case basis... – J. M. Oct 1 '11 at 15:16
## 1 Answer
I was wondering does it have anything to do with 0/infinity or infinity/0?
No, because $0/\infty$ and $\infty/0$ are not indeterminate cases. Symbolically one can write $0/\infty=0\times 0=0$ and $\infty/0=\infty\times \infty=\infty$ (without sign).
For instance, the function $f(x)=1/x\to 0$ (as $x\to \infty$) and the function $g(x)=x\to \infty$, (as $x\to \infty$). And we have, as $x\to \infty$, $$f(x)/g(x)=(1/x)/x=1/x^2\to 0$$ and $$g(x)/f(x)=x/(1/x)=x^2\to \infty.$$
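These two limits are also easy to confirm with a computer algebra system; a minimal SymPy check:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = 1 / x   # tends to 0 as x -> oo
g = x       # tends to oo as x -> oo

print(sp.limit(f / g, x, sp.oo))   # 0:  a "0/infinity" form just gives 0
print(sp.limit(g / f, x, sp.oo))   # oo: an "infinity/0+" form blows up
```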
-
@J.M. Do we write "indeterminate case" (and not "undetermined case")? – Américo Tavares Oct 1 '11 at 15:18
Indeterminate is the English term of art I'm accustomed to. (Maybe somebody else uses "undetermined", but I haven't seen 'em.) – J. M. Oct 1 '11 at 15:19
@J.M. Thanks! (I have corrected it). – Américo Tavares Oct 1 '11 at 15:21
|
http://en.wikipedia.org/wiki/Atomic_physics
|
# Atomic physics
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. It is primarily concerned with the arrangement of electrons around the nucleus and the processes by which these arrangements change. This includes ions as well as neutral atoms and, unless otherwise stated, for the purposes of this discussion it should be assumed that the term atom includes ions.
The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
## Isolated atoms
Atomic physics always considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms.
## Electronic configuration
Electrons form notional shells around the nucleus. These are naturally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
In the event the electron absorbs a quantity of energy less than the binding energy, it will transition to an excited state. After a statistically sufficient quantity of time, an electron in an excited state will undergo a transition to a lower state. The change in energy between the two energy levels must be accounted for (conservation of energy). In a neutral atom, the system will emit a photon of the difference in energy. However, if the excited atom has been previously ionized, in particular if one of its inner shell electrons has been removed, a phenomenon known as the Auger effect may take place where the quantity of energy is transferred to one of the bound electrons causing it to go into the continuum. This allows one to multiply ionize an atom with a single photon.
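As a concrete illustration of "a photon of the difference in energy" (a numerical sketch using the textbook hydrogen levels E_n = -13.6 eV / n^2, which are assumptions brought in here rather than something stated in this article):

```python
E0_EV = 13.6          # hydrogen ground-state binding energy in eV (Bohr model)
HC_EV_NM = 1239.84    # h*c expressed in eV*nm

def level(n):
    """Energy of the n-th hydrogen level, in eV."""
    return -E0_EV / n**2

def emitted_photon(n_upper, n_lower):
    """Photon energy (eV) and wavelength (nm) for a downward transition."""
    energy = level(n_upper) - level(n_lower)
    return energy, HC_EV_NM / energy

print(emitted_photon(2, 1))   # about (10.2 eV, 121.6 nm): the Lyman-alpha line
```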
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light—however there are no such rules for excitation by collision processes.
## History and developments
Main article: Atomic theory
The majority of fields in physics can be divided between theoretical work and experimental work, and atomic physics is no exception. It is usually the case, but not always, that progress goes in alternate cycles from an experimental observation, through to a theoretical explanation followed by some predictions that may or may not be confirmed by experiment, and so on. Of course, the current state of technology at any given time can put limitations on what can be achieved experimentally and theoretically so it may take considerable time for theory to be refined.
One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms, in the modern sense of the basic unit of a chemical element. This theory was developed by the British chemist and physicist John Dalton in the 18th century. At this stage, it wasn't clear what atoms were although they could be described and classified by their properties (in bulk) in a periodic table.
The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy.
Since the Second World War, both theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work.
|
http://mathhelpforum.com/calculus/113479-integration-dr-evil-my-teacher.html
|
# Thread:
1. ## Integration - Dr. Evil is my teacher
Right I won't bore you with a sob story but this is a regular grade 12 class and we've been posed with this problem:
Find the arc length of this function across 0 -> 5:
$s(t) = t^3 - 6t^2 + 9t + 5, {t >= 0}$
I learned the formula $s = \int \sqrt{1+f'(x)^2}\,dx$.
So I began using the Riemann sums until I get here, I'm not quite sure how to proceed.
If need be I will show how I got up to there, it just takes forever to use LaTeX.
PS: That should be x->infinity
Thanks!
2. Originally Posted by Silent Soliloquy
Right I won't bore you with a sob story but this is a regular grade 12 class and we've been posed with this problem:
Find the arc length of this function across 0 -> 5:
$s(t) = t^3 - 6t^2 + 9t + 5, {t >= 0}$
I learned the formula of s = Integral of $squareroot(1+f'(x)^2) * dx$
So I began using the Riemann sums until I get here, I'm not quite sure how to proceed.
If need be I will show how I got up to there, it just takes forever to use LaTeX.
PS: That should be x->infinity
Thanks!
I don't see why you're trying to calculate the limit of the Riemann sum when you have the formula. That will be difficult. Why don't you just use the formula?
3. Originally Posted by adkinsjr
I don't see why you're trying to calculate the limit of the Riemann sum when you have the formula. That will be difficult. Why don't you just use the formula?
Hm, I thought the Riemann sum was a way to find the integral. Oddly enough we never did integrals so I had to scour what I could from textbooks and the internet.
Is the integral
$\frac{2}{3} (x+3(x^{2}-4x+3)^3)^\frac{3}{2}$
for the function
$(1+9(x^{2}-4x+3)^2)^\frac{1}{2}$
Thanks, sorry about all the trivial trouble.
4. Originally Posted by Silent Soliloquy
Hm, I thought the Riemann sum was a way to find the integral. It is one way, but Riemann sums can be very difficult to evaluate. It's much like using first principles to find a derivative: you can use the limit formula if you want, but it's just not something people often do, since it makes things too difficult. However, in your case it's really difficult to use the formula too. Oddly enough we never did integrals so I had to scour what I could from textbooks and the internet. That's pretty weird; this topic isn't discussed until page 563 of my University Calculus textbook. I think this is a Calc II topic.
Is the integral
$\frac{2}{3} (x+3(x^{2}-4x+3)^3)^\frac{3}{2}$
for the function
$(1+9(x^{2}-4x+3)^2)^\frac{1}{2}$
Thanks, sorry about all the trivial trouble.
No, this integral seems difficult to solve. I will start a new thread to see if anyone else can help. I'm afraid people may ignore your post since I responded to it. The integral you have to evaluate looks like this:
$\int_0^5\sqrt{1+(3t^2-12t+9)^2}dt$
I'm not sure if this is something that is practical to solve by hand. I will start a thread with this integral to see what
others say.
Thread: http://www.mathhelpforum.com/math-he...-integral.html
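For readers following along, the arc length can at least be checked numerically even if the antiderivative is unpleasant. A minimal sketch (assuming Python with SciPy, which is not part of the original thread):

```python
import numpy as np
from scipy.integrate import quad

# Arc length of s(t) = t^3 - 6t^2 + 9t + 5 on [0, 5]:
# L = integral of sqrt(1 + s'(t)^2) dt, with s'(t) = 3t^2 - 12t + 9.
def integrand(t):
    ds = 3 * t**2 - 12 * t + 9           # derivative of s(t)
    return np.sqrt(1.0 + ds**2)

length, abs_err = quad(integrand, 0.0, 5.0)
print(f"arc length over [0, 5] is about {length:.6f} (quadrature error ~ {abs_err:.1e})")
```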
5. Originally Posted by adkinsjr
No, this integral seems difficult to solve. I will start a new thread to see if anyone else can help. I'm afraid people may ignore your post since I responded to it. The integral you have to evaluate looks like this:
$\int_0^5\sqrt{1+(3t^2-12t+9)^2}dt$
I'm not sure if this is something that is practical to solve by hand. I will start a thread with this integral to see what
others say.
Thread: http://www.mathhelpforum.com/math-he...-integral.html
I believe the integral is
http://mathoverflow.net/questions/100936/composition-of-two-formal-series
## Composition of two formal series
There are two formal semi-infinite Laurent series
$$f_+(z)=z+\sum_{k=2}^{\infty} a_k z^k$$
and
$$f_-(z)=z+\sum_{k=0}^{\infty} b_k z^{-k}$$
Their composition $f_+(f_-(z))$ is a formal series infinite in both directions. Is there any way to construct two other semi-infinite series
$$g_+(z)=z+\sum_{k=2}^{\infty} c_k z^k$$
and
$$g_-(z)=z+\sum_{k=0}^{\infty} d_k z^{-k}$$
such that
$$f_+(f_-(z))=g_-(g_+(z))$$
What should I call this problem? Are there any non-trivial examples where it can be done?
-
It's not clear that the composition is well defined since each coefficient will be a sum of infinitely many terms. – Felipe Voloch Jun 29 at 17:13
That's true. Actually, in the "physical" example I have in my mind, coefficients are divergent. But if we assume that coefficients of the composition are well defined, what can be done? – Sasha Jun 29 at 17:23
I'm sorry. But I am not even sure how to compute $g_-(g_+(z))$ where the $-$ part is on the outside. For example, if $g_-(z)$ has a term $z^{-1}$ and $g_+(z)=z+z^2$, we have $$\begin{align} \left(z+z^2\right)^{-1} &= z^{-1}-1+z-z^2+z^3-z^4+\dots\quad\text{for $z$ near $0$, but} \cr \left(z+z^2\right)^{-1} &= z^{-2}-z^{-3}+z^{-4}-z^{-5}+\dots\quad\text{for $z$ near $\infty$.} \end{align}$$ – Gerald Edgar Jul 5 at 14:54
http://mathoverflow.net/questions/29982/has-this-notion-of-product-of-graphs-been-studied/30012
## Has this notion of product of graphs been studied?
Let $n\geq 2$ be a positive integer. For the purposes of this definition, let a colored graph be a finite undirected graph in which each edge is colored with one of $n$ colors so that no vertex is incident with two edges of the same color. (Without loss of generality, we suppose that every vertex is incident with exactly one edge of each color; add loops wherever necessary.) If $G_1$ and $G_2$ are two colored graphs, we define a product $H=G_1\times G_2$ as follows.
• The vertices of $H$ are the ordered pairs of vertices $(v_1,v_2)$, where $v_i$ is a vertex of $G_i$.
• If $(v_1,w_1)$ and $(v_2,w_2)$ are edges of the same color in $G_1$ and $G_2$, respectively, then there is an edge (also of the same color) between $(v_1,v_2)$ and $(w_1,w_2)$ in $H$.
Examples. Consider $n=3$, the simplest interesting case (and the one that interests me most). Let $K'_3$ be the complete graph on three vertices with an extra loop at each vertex, and let $K_4$ be the complete graph on four vertices. (There is essentially only one way to color each of these graphs.)
• `$K'_3\times K'_3$` has two components: one is a copy of `$K'_3$` and the other has six vertices.
• `$K'_3\times K_4$` is a connected graph with twelve vertices.
• `$K_4\times K_4$` has four components, each of which is a copy of `$K_4$`.
Does this have a name? Has it been studied? It seems plausible enough to me that this may be a well-studied thing. At the moment, I'm especially interested in necessary/sufficient criteria for the product of connected colored graphs to be connected, or more generally an efficient way to count (or at least estimate) the number of components of the product, but any information that exists is reasonably likely to be useful to me.
My motivation here comes from some problems I'm working on in combinatorial number theory. I have various (colored) graphs that I associate to a positive integer $N$, and in many cases the graph corresponding to $MN$ is the product in this sense of the graphs corresponding to $M$ and $N$ whenever $\gcd(M,N)=1$. The graphs at prime powers are much simpler than the general case.
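To make the definition concrete, here is a small computational sketch. It encodes each colour class as an involution on the vertex set (an encoding chosen here for illustration, not taken from the question) and counts components of the product by a graph search; with the colourings below it reproduces the component counts stated in the examples. Plain Python:

```python
def components(involutions, vertices):
    """Count connected components of the coloured graph whose colour-c edges
    join v to involutions[c][v]."""
    seen, count = set(), 0
    for start in vertices:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(inv[v] for inv in involutions)
    return count

def graph_product(invs_g, invs_h):
    """Colour-c edge between (v1, v2) and (w1, w2) iff colour c sends
    v1 -> w1 in G and v2 -> w2 in H."""
    return [{(v, w): (inv_g[v], inv_h[w]) for v in inv_g for w in inv_h}
            for inv_g, inv_h in zip(invs_g, invs_h)]

# K'_3: loop of colour c at vertex c, colour-c edge joining the other two vertices.
K3p = [{0: 0, 1: 2, 2: 1}, {0: 2, 1: 1, 2: 0}, {0: 1, 1: 0, 2: 2}]
# K_4: a proper 3-edge-colouring; each colour class is a perfect matching.
K4 = [{0: 1, 1: 0, 2: 3, 3: 2}, {0: 2, 2: 0, 1: 3, 3: 1}, {0: 3, 3: 0, 1: 2, 2: 1}]

for name, G, H in [("K'_3 x K'_3", K3p, K3p), ("K'_3 x K_4", K3p, K4), ("K_4 x K_4", K4, K4)]:
    prod = graph_product(G, H)
    print(name, "->", components(prod, list(prod[0])), "component(s)")
```

The expected output is 2, 1 and 4 components respectively, matching the three examples above.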
-
This isn't quite what you want, but it seems worth mentioning. Ignoring the coloring data, certain subgraphs of the product of two graphs as you defined it are important in combinatorial group theory. See Stallings's paper "The topology of finite graphs". – Andy Putman Jun 30 2010 at 0:36
## 6 Answers
If you wanted this for directed graphs, I would say this:
As Andy Putman suggests, look at Stallings Inventiones paper "The topology of finite graphs."
A directed graph colored with $n$ colors admits an obvious map to a colored oriented wedge of $n$ circles, $X$ say.
Given two directed colored graphs, the fiber product of the two maps to $X$ may be defined the same way you are defining your product, but with the condition that there is an edge going from $(a,b)$ to $(c,d)$ colored $m$ if there is an edge from $a$ to $c$ colored $m$ and an edge from $b$ to $d$ colored $m$ (so the orientation is taken into account).
This graph is smaller than yours, but has the advantage of being the pullback of a simple diagram.
I think that you should be able to take care of the undirected case by thinking of each edge of your graph as two edges, one for each orientation, or some such device. There's a nice way to think about this in Gersten's paper "Intersection of finitely generated subgroups of free groups and resolutions of graphs" in the same issue of Inventiones---he talks about "ghost edges."
-
I should say that the smaller fiber product I talk about here has been studied quite a bit in order to think about the Hanna Neumann Conjecture. – Richard Kent Jun 30 2010 at 23:33
This is exactly what I was looking for. Thanks very much! – Cap Khoury Jul 1 2010 at 0:44
Great! Glad to help. – Richard Kent Jul 1 2010 at 0:49
In a similar spirit to David Eppstein's answer, one can relate this construction to the tensor product (a.k.a. the direct or categorical product) of graphs and digraphs. If it is a standard construction, I'm not aware of it.
We may represent edge-colourings of the sort you describe by introducing a new node for each edge-colour, subdividing each edge of the original graph, and linking the central vertex of each edge to the appropriate colour-node. (This requires a multi-graph construction if you have loops in the original graph; you could fix this with additional subdivisions if you prefer simple-ish graphs.) If you use directed arcs to the colour nodes, you could use the asymmetry to make this construction reversible, up to permutation of the colours. Let us call this an edge subdivision model of a proper edge-colouring.
The tensor product of graphs G and H is a graph T such that V(T) = V(G) × V(H), and edge-relations { (g1,h1), (g2, h2) } ∈ E(T) such that {g1, g2} ∈ E(G) and {h1, h2} ∈ E(H). For digraphs, replace unordered pairs with ordered pairs. In terms of relations on the sets V(G) and V(H), this is the logical conjunction. It is easy to see that in digraphs, only those vertex-pairs (g1,h1), (g2, h2) which have consistent arc directions for each co-ordinate in the digraphs G and H will have arcs in the tensor product.
The consistency of arc-directions in tensor products of digraphs is what David remarks on in his response: if each node has one inbound arc and one outbound arc, in both graphs, the tensor product will have the same property. But it may be difficult to obtain a well-defined mapping from edge-colours to arc directions, because edge-colours don't have any asymmetry in them. This motivates finding a different way of encoding structural information in an edge than asymmetry --- such as the subdivision models I describe above.
If you take the tensor product of two coloured-subdivision graphs as I describe above, you get a first approximation to (an edge-subdivision model of) the graph construction you describe. There are two defects: (a) it has too many colours, one colour (a,b) for each pair of colours in the original colouring; and (b) it has vertices (v,e) which correspond to a vertex v ∈ V(G) in one co-ordinate and a subdivided edge e ∈ E(H) in another co-ordinate. However, from this tensor product we may easily obtain an induced subgraph, which is an edge-subdivision model of the graph construction you consider.
1. Eliminating excess colours: the colour-pairs (a,b) will be adjacent to nodes corresponding to the edge-pairs (eG, eH) where eG ∈ E(G) has colour a and eH ∈ E(H) has colour b. You wish to have only consistent edge-colourings; to do this, simply remove the "mismatched colour" nodes (a,b) for a ≠ b, and any vertex adjacent to them which correspond to edges with mismatched colours. Any remaining node (c,c) for some colour c may be identified as the 'colour node' for c in a new edge-subdivision model for a graph.
2. Eliminating vertex/edge type-mismatched pairs: in an edge-subdivision model for your construction, we obviously would only want pairs (eG, eH) for eG ∈ E(G) and eH ∈ E(H) or pairs (vG, vH) for vG ∈ V(G) and vH ∈ V(H), and no mismatched-type vertices (v,e). Fortunately, the vertex-nodes in the subdivision models are not adjacent to any colour; and so neither are mismatched-type nodes in the tensor product. In fact, mismatched-type vertices are only adjacent to other mismatched-type vertices. So we may restrict to the connected component(s) of the product graph which contains the colour nodes.
So, your construction can be 'simulated' by taking an induced subgraph (in which a pre-determined vertex subset is to be removed) of a tensor product of 'uncoloured' graphs and digraphs. However, I haven't heard of this construction being used before.
[Edit: added the remarks about mismatched-type vertices.]
-
If these were digraphs with one outgoing edge of each color at each vertex rather than undirected graphs with one incident edge of each color at each vertex, then this construction would be the Cartesian product of deterministic finite automata. It's useful for showing that regular languages are closed under union and intersection.
-
Interesting... I hadn't been thinking along those lines at all. Of course, I can just replace each one of my undirected edges by a pair of directed ones. – Cap Khoury Jun 30 2010 at 15:22
An observation I made since posting, which may or may not be on the right track.
Let ${\cal A}_n$ be the "free group on $n$ involutions'', that is $\langle x_1,x_2,\ldots,x_n\rangle/\langle x_1^2,x_2^2,\ldots,x_n^2\rangle$. Then a colored graph as described in the question is just a (finite) set with an ${\cal A}_n$ action. (The vertices are the objects, and the generator $x_i$ sends a vertex $v$ to the vertex connected to $v$ by an $i$-colored edge.)
Then this product operation on graphs corresponds to the usual Cartesian product of ${\cal A}_n$-sets.
However, unless there is some developed theory of the actions of the group ${\cal A}_n$ over and above the general theory of group actions, this is probably not the answer I want.
-
But there is --- your group A_n is a Coxeter group, which means there are lots of natural things on which it acts. Someone else than me is surely more qualified to tell you where to look, although Björner/Brenti might be useful since you seem interested in combinatorial aspects... – Dan Petersen Jul 1 2010 at 2:26
The obvious action (to me) is on the Davis--Moussong complex - look at Davis, M.W., The Geometry and Topology of Coxeter Groups. In this case, the D--M complex is the regular n-valent tree. Fix some base vertex $v_0$; then each generator $x_n$ acts as a 'reflection' in an edge adjacent to $v_0$. – HW Jul 1 2010 at 5:42
As you say, your coloured graphs correspond to actions of $\mathcal{A}_n$ on a finite set, in other words, to homomorphisms $\mathcal{A}_n\to S_m$. Roughly, the quotient of the D--M complex by the kernel of this map should recover your coloured graph. (There's something a little odd going on, to do with the undirectedness of the edges, but I think that's basically right.) – HW Jul 1 2010 at 6:32
Finally, let me make a conjecture about the connectivity properties of your product (by analogy with the free group case, which corresponds to coloured graphs with directed edges). As I said above, each graph $\Gamma$ should correspond to a subgroup $H_\Gamma\subseteq \mathcal{A}_n$. Now, I'm fairly certain that the number of components of the 'product' of graphs $\Gamma$ and $\Delta$ is equal to the number of double cosets $H_\Gamma\backslash\mathcal{A}_n/H_\Delta$. – HW Jul 1 2010 at 6:35
Anyway, from this point of view, it's clear why $K_4\times K_4$ has four components. That $K_4$ is homogeneous translates to the fact that $H=H_{K_4}$ is normal in $\mathcal{A}_n$, so $H\backslash\mathcal{A}_n/H=\mathcal{A}_n/H$, which equals the number of vertices of $K_4$. – HW Jul 1 2010 at 6:55
Section 6.3 in "Algebraic Graph Theory" by Chris Godsil and Gordon Royle covers products of graphs; 6.6 covers colorings of these products. I also vaguely remember a reference to it in Douglas West's "Introduction to Graph Theory" when he was constructing a counterexample to something involving colorings, but it's been too long since I took my Graph Theory course out of that book and I couldn't find it.
-
Since there is no mention of it, I will make one.
The direct product of graphs $G$ and $H$ is defined like this: $V(G \times H) = V(G) \times V(H)$ and $E(G\times H)$ contains only $((g_1, h_1),(g_2, h_2))$ such that $(g_1, g_2) \in E(G)$ and $(h_1, h_2) \in E(H)$.
In your properly edge colored graphs, if one considers all the pairs of monochromatic subgraphs (i.e., maximal subgraphs all of whose edges are the same color)---one in $G$ and the other in $H$---and takes their direct product, then one gets the same result as your product.
The direct product is well-studied---see the book by Imrich and Klavzar. There are also other products such as the normal (strong) product, lex product, cartesian product, etc. that might also be of interest.
-
http://mathoverflow.net/questions/99169?sort=oldest
## On Pseudo-finite topological spaces
We recall that a topological space $(X,\tau)$ is pseudo-finite if each compact subset of $X$ is finite.
A classical example of a pseudo-finite topological space is an uncountable set $X$ with the co-countable topology (i.e. each subset with countable complement is open).
The above topology has no isolated points, but it fails to be even Hausdorff. To the best of my knowledge, there are two examples of Tychonoff pseudo-finite topological spaces, as follows:
A. All discrete spaces are trivial examples of these spaces.
B. Consider the set $\Sigma=\mathbb{N}\cup\{p\}$, and topologize it as follows:
• Consider a free ultrafilter $\mathcal{U}$ on $\mathbb{N}$.
• All points of $\mathbb{N}$ are isolated.
• The neighborhoods of $p$ are of the form $U\cup\{p\}$, where $U \in \mathcal{U}$.
We must recall that Case "B" is a special example of a maximal Hausdorff topology on a set.
But I think there is no example of a pseudo-finite Tychonoff space without isolated points, and I conjecture the following statement:
Statement: Every pseudo-finite Tychonoff space has an isolated point.
Is there a counterexample to the above statement?
-
Yet another spelling of Tychonoff. – Martin Brandenburg Jun 9 at 10:58
I am sorry about my wrong spelling. – AliReza Olfati Jun 9 at 11:02
## 4 Answers
Every $P$-space (a $P$-space is a completely regular space where every $G_{\delta}$-set is open) is pseudofinite since one can easily show that every subspace of a $P$-space is a $P$-space and every compact $P$-space is finite. However there are many $P$-spaces with no isolated points. For instance, take the $P$-space coreflection of an uncountable product of the space $\{0,1\}$ and you will get a pseudofinite space with no isolated points.
-
Hello dear Joseph. Thank you very much for your nice description. Yes, the fact that you mentioned is well known, and I must point out that it may be useful. When I saw your answer, another question came to my mind: is there a non-$P$-space without isolated points which is also pseudo-finite? Such spaces exist as well: it suffices to consider the topological space $X$ which you described above and multiply it with the space $\Sigma$ (i.e. $Y=X \times \Sigma$). (Best wishes) – AliReza Olfati Jun 11 at 7:09
I believe you can grow your example B into a counterexample.
Stage 0 is $\emptyset$
Stage 1 is `$\{p\}$`.
Stage 2 is your $B$: you've added a copy of ${\Bbb N}$ for each point newly added in the previous stage (all one of them) with an ultrafilter to define neighborhoods of the old point(s).
And the recipe for Stage $n+1$ adds a copy of ${\Bbb N}$ for each point newly added in Stage $n$ with an ultrafilter to define neighborhoods of the old points.
Natural numbers suffice to index the stages (no transfinite induction necessary).
Clearly we kill all the isolated points in the limit. I believe you get pseudo-finiteness much as you get it for $B$, but there are details.
-
Details: Let $\mathcal{U}$ be a free ultrafilter on $\omega$; let $X$ be the tree $\omega^{<\omega}$ with the topology generated by the set $\mathcal{B}$ consisting of all subsets $Y$ such that $Y$ has a unique root and $\forall y\in Y\ \forall^{\mathcal{U}}n\ \ y^\frown n\in Y$. Check that $X$ is $T_1$ and $\mathcal{B}$ is a clopen base, implying $X$ is $T_{3.5}$. By König's Lemma, if $A\subset X$ is infinite, then $A$ contains an infinite chain or an infinite set of the form $\{s^\frown n : n\in I\}$. In either case, check that $A$ is not compact. – David Milovich Jun 10 at 19:31
The rationals are pseudo-finite.
-
The set $\{1/n : n\in\mathbb{N}\}\cup\{0\}$ is compact but not finite. – Jim Conant Jun 10 at 20:27
I think Warren confused the concept of pseudo-finiteness with the concept of "pseudo-discreteness". The topological space $X$ is pseudo-discrete if every compact subset of $X$ has finite interior. Every pseudo-finite space is a pseudo-discrete space, but not conversely. The space $\mathbb{Q}$ is a pseudo-discrete space which is not pseudo-finite. – AliReza Olfati Jun 11 at 8:51
Yup, thanks for fixing that. How about the set of weak P-points in $\beta\mathbb{N}\setminus\mathbb{N}$? Or any other infinite crowded weak P-space. – Warren McG Jun 11 at 13:08
It might be best if you edited your answer to fix the problem. – S. Carnahan♦ Jun 12 at 7:32
The counterexamples so far depend on AC, but one can have such spaces just from ZF.
In particular, instead of an ultrafilter on ${\Bbb N}$ as in your example $B$, one can use the filter of subsets with asymptotic density 1. The rest follows along the lines of my previous answer (and David Milovich's details).
-
http://www.nag.com/numeric/cl/nagdoc_cl23/html/S/s11acc.html
NAG Library Function Document: nag_arccosh (s11acc)
1 Purpose
nag_arccosh (s11acc) returns the value of the inverse hyperbolic cosine, $\mathrm{arccosh}x$. The result is in the principal positive branch.
2 Specification
#include <nag.h>
#include <nags.h>
double nag_arccosh (double x, NagError *fail)
3 Description
nag_arccosh (s11acc) calculates an approximate value for the inverse hyperbolic cosine, $\mathrm{arccosh}x$. It is based on the relation
$\mathrm{arccosh}\,x = \ln\left(x + \sqrt{x^{2}-1}\right).$
This form is used directly for $1<x<{10}^{k}$, where $k=n/2+1$, and the machine uses approximately $n$ decimal place arithmetic.
For $x\ge {10}^{k}$, $\sqrt{x^{2}-1}$ is equal to $\sqrt{x^{2}}=x$ to within the accuracy of the machine and hence we can guard against premature overflow and, without loss of accuracy, calculate
$\mathrm{arccosh}\,x = \ln 2 + \ln x.$
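For readers who want to experiment, the two-branch evaluation described above is easy to sketch. The following illustrates only the formulas in this section (written in Python rather than C, and not the NAG implementation), with the threshold ${10}^{k}$ taken for roughly 16 decimal digits:

```python
import math

def arccosh(x, n_digits=16):
    """Inverse hyperbolic cosine via the relations of Section 3."""
    if x < 1.0:
        raise ValueError("arccosh(x) is undefined for x < 1")
    threshold = 10.0 ** (n_digits // 2 + 1)      # 10^k with k = n/2 + 1
    if x < threshold:
        return math.log(x + math.sqrt(x * x - 1.0))
    # For very large x, sqrt(x^2 - 1) equals x to machine accuracy, so avoid
    # squaring x (premature overflow) and use ln 2 + ln x instead.
    return math.log(2.0) + math.log(x)

print(arccosh(1.5), math.acosh(1.5))   # should agree to within machine precision
```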
4 References
Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
5 Arguments
1: x – double Input
On entry: the argument $x$ of the function.
Constraint: ${\mathbf{x}}\ge 1.0$.
2: fail – NagError * Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
6 Error Indicators and Warnings
NE_REAL_ARG_LT
On entry, x must not be less than 1.0: ${\mathbf{x}}=〈\mathit{\text{value}}〉$.
$\mathrm{arccosh}x$ is not defined and the result returned is zero.
7 Accuracy
If $\delta $ and $\epsilon $ are the relative errors in the argument and result respectively, then in principle
$\epsilon \simeq \frac{x}{\sqrt{x^{2}-1}\,\mathrm{arccosh}\,x}\,\delta .$
That is the relative error in the argument is amplified by a factor at least
$\frac{x}{\sqrt{x^{2}-1}\,\mathrm{arccosh}\,x}$
in the result. The equality should apply if $\delta $ is greater than the machine precision ($\delta $ due to data error etc.), but if $\delta $ is simply a result of round-off in the machine representation, it is possible that an extra figure may be lost in internal calculation and round-off.
It should be noted that for $x>2$ the factor is always less than 1.0. For large $x$ we have the absolute error $E$ in the result, in principle, given by
$E ∼ δ .$
This means that eventually accuracy is limited by machine precision. More significantly for $x$ close to 1, $x-1\sim \delta $, the above analysis becomes inapplicable due to the fact that both function and argument are bounded, $x\ge 1$, $\mathrm{arccosh}x\ge 0$. In this region we have
$E \sim \sqrt{\delta} .$
That is, there will be approximately half as many decimal places correct in the result as there were correct figures in the argument.
8 Further Comments
None.
9 Example
The following program reads values of the argument $x$ from a file, evaluates the function at each value of $x$ and prints the results.
9.1 Program Text
Program Text (s11acce.c)
9.2 Program Data
Program Data (s11acce.d)
9.3 Program Results
Program Results (s11acce.r)
http://crypto.stackexchange.com/questions/1538/inverses-in-truncated-polynomial-rings/2588
# Inverses in Truncated Polynomial Rings
I've been trying a long time to understand a thing which is obviously extremely simple, but I just can't get it. Read this, please:
The NTRUEncrypt PKCS uses the ring of truncated polynomials $R$ combined with the modular arithmetic described in Section 1. These are combined by reducing the coefficients of a polynomial a modulo an integer $q$. Thus the expression $$a \pmod q$$ means to reduce the coefficients of $a$ modulo $q$. That is, divide each coefficient by $q$ and take the remainder. Similarly, the relation $$a \equiv b \pmod q$$ means that every coefficient of the difference $a-b$ is a multiple of $q$.
This is taken from NTRU tutorial and that's quite understandable. But. Take a glance at the next excerpt from the same tutorial:
The inverse modulo $q$ of a polynomial $a$ is a polynomial $A$ with the property that $$a * A \equiv 1 \pmod q.$$
Not every polynomial has an inverse modulo $q$, but it is easy to determine if $a$ has an inverse, and to compute the inverse if it exists. A fast algorithm for computing the inverse is described in NTRU Technical Note 014, and a theoretical discussion of inverses in truncated polynomial rings is given in NTRU Technical Note 009. These notes may be downloaded from the Technical Center.
Example. Take $N=7$, $q=11$, $a=3+2X^2-3X^4+X^6$. The inverse of $a$ modulo 11 is $$A=-2+4X+2X^2+4X^3-4X^4+2X^5-2X^6,$$ since $$(3+2X^2-3X^4+X^6)*(-2+4X+2X^2+4X^3-4X^4+2X^5-2X^6) \\ = -10+22X+22X^3-22X^6 \equiv 1 \pmod{11}.$$
I do not understand how $-10+22X+22X^3-22X^6$ may be 1 (modulo 11). Why???????
The first excerpt quoted says that each coefficient of the polynomial minus 1 must be a multiple of 11. But it's not. $-10$? That's not a problem: $-10 - 1 = -11$, and $-11 \bmod 11$ is 0, yes, it works, I agree. But how can it work with 22? $22 - 1 = 21$, and $21 \bmod 11 = 10$, not 0. It also doesn't work with $-22$: $-22 - 1 = -23$, and $-23 \bmod 11 = -1$. Can anyone please explain this example to me?
-
22x = 0 mod 11 because 22 is a multiple of 11, the same for 22x³ and -22x⁶, then you have only -10 which is congruent to 1 mod 11. – Vicfred Dec 26 '11 at 22:36
Exactly! That is the problem! 22x = 0 mod 11. I understand it. So how -10+22X+22X3-22X6 may be 1 (modulo 11) if 22 = 0 mod 11? This is what i can not understand – Andrey Chernukha Dec 27 '11 at 10:08
because -10 = 1 (mod 11) – Vicfred Dec 27 '11 at 21:19
## 2 Answers
$$(-10+22x+22x^3-22x^6) - 1 = -11+22x+22x^3-22x^6 \equiv 0 \mod 11.$$
When subtracting a constant from a polynomial, you do not subtract it from every term, only from the constant term.
If you need a refresher, see addition and subtraction of polynomials.
-
"how $-10+22X+22X^3-22X^6$ may be 1 (modulo 11) if 22 = 0 mod 11?"
Because when you reduce this mod 11 you get
$$1 + 0 X + 0 X^3 + 0 X^6 = 1.$$
You seem to think that saying a polynomial is 1 mod 11 means that all its terms are 1 mod 11. What it actually means is that the constant term is 1 mod 11, and all the other terms are 0.
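A quick numerical check of the tutorial's example may also help. The sketch below (plain Python, written here for illustration rather than taken from any NTRU code) multiplies the two polynomials in the truncated ring, i.e. as a cyclic convolution modulo $X^N-1$ with $N=7$, and then reduces each coefficient modulo $q=11$:

```python
N, q = 7, 11

def ring_mult(a, b):
    """Multiply a and b in Z[X]/(X^N - 1): a cyclic convolution of coefficients."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % N] += ai * bj
    return c

# a = 3 + 2X^2 - 3X^4 + X^6   and   A = -2 + 4X + 2X^2 + 4X^3 - 4X^4 + 2X^5 - 2X^6
a = [3, 0, 2, 0, -3, 0, 1]
A = [-2, 4, 2, 4, -4, 2, -2]

prod = ring_mult(a, A)
print(prod)                    # [-10, 22, 0, 22, 0, 0, -22]
print([c % q for c in prod])   # [1, 0, 0, 0, 0, 0, 0], i.e. a * A = 1 (mod 11)
```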
-
http://math.stackexchange.com/questions/tagged/stochastic-integrals+martingales
# Tagged Questions
### Supermartingale Lemma + related problems
Given the following Lemma: Let $A_{t}=\int_{0}^{t}a_{s}dB_{s}$ where $a$ is an adapted process satisfying $\mathbb{P}\Big(\int_{0}^{T}a^{2}_{u}du < \infty\Big) = 1$ and $B$ is a standard Brownian ...
### Martingale inequality
Let $f: \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}$ be a deterministic function, as nice as you want, $W$ a Brownian motion and define $$Y^r_t := \int_0^t f(r,s) dW_s$$ For each fixed $r$, ...
### Show that this continuous local martingale is a martingale
We are given the following SDE: $$dX_t=X_tdt+\sqrt{2}X_tdB_t, \quad X_0=1,$$ and $$F(x,t)=e^{-t}x,\quad t\geq0,\; x\in\mathbb{R}.$$ We are asked to apply Ito's formula to $F(t,X_t)$ for $t\geq0$ ...
### Local martingale iff each component is a local martingale?
This is probably an easy question: A local martingale is an adapted, cadlag process for which there is an increasing sequence of stopping times (going to $\infty$) such that the stopped process is a ...
### $\mathcal{F_t}$-martingales with Itô's formula?
I need a little help with a problem. I am given some stochastic processes and supposed to show that they are $\mathcal{F_t}-$martingales. The first one is this, and they all look similar: ...
### Is the solution to a driftless SDE with Lipschitz variation a martingale?
If $\sigma$ is Lipschitz, with Lipschitz constant $K$, and $(X_t)_{t\geq 0}$ solves $$dX_t=\sigma(X_t)dB_t,$$ where $B$ is a Brownian motion, then is $X$ a martingale? I'm having difficulty getting ...
### If $X$ is a martingale, $X(0)=0$; $f$ left continuous, is $\int f X$ dt also a martingale?
If $X(t)$ is a martingale, and $X(0) = 0$. $f(t)$ is a left continuous function, $$g(t) = \int_0^t f(s) X(s) ds$$ is $g(t)$ also a martingale? I guess it shall be, but don't know how to prove ...
### $d$-Dimensional Brownian Motion Martingales
Let $d > 1$ and let $W_t$ denote a standard $d$-dimensional Brownian motion starting at $x\neq 0$. Let $M_t = \log|W_t|$ for $d = 2$, and $M_t= |W_t|^{2-d}$ for $d > 2$. Show that $M_t$ is a ...
### Martingale problem
If $X_t$ is an $\mathbb{R}$- valued stochastic process with continuous paths, show that the following two conditions are equivalent: (i) for all $f\in C^2(\mathbb{R})$ the process f(X_t) − f(X_0) ...
### Is this a martingale?
Let $W_t$ be a standard Brownian motion with $W_0 = 0$ and let $Z_t$ solve the stochastic differential equation $dZ_t = 2 Z_t W_t \mathrm{d}W_t$. This has solution ...
http://mathoverflow.net/questions/10496/an-inequality-relating-the-factorial-to-the-primorial/10573
## An inequality relating the factorial to the primorial.
Let [a,b] = {k integer | a < k <= b}. Further let
• Comp[a,b] = product_{c in [a,b]} c composite;
• Fact[a,b] = product_{k in [a,b]} k integer;
• Prim[a,b] = product_{p in [a,b]} p prime.
Question: For n > 2 and n not in {10,15,27,39} is it true that
$$\text{Comp}[{\left\lfloor n /2 \right\rfloor}, n] < \text{Fact}[1, {\left\lfloor n /2 \right\rfloor}] \ \text{Prim}[{\left\lfloor n /2 \right\rfloor}, n] \ ?$$
Update: The state of affairs: Gjergji Zaimi showed that for large enough n the inequality is true. In my answer I affirm that the inequality is true in the range 40 <= n <= 10^5. It remains open whether 10^5 is 'large enough' in the sense of Gjergji's analysis.
-
This problem is as much "localized" as Bertrand's postulate is as the latter is an immediate consequence thereof. So at least this would lead to a proof of Bertrand's postulate which, concluding from your answer, is not widely known. "Motivate the rather odd looking exceptional set." I cannot. However I can point to the fact that most number theoretic inequalities have a lower bound with regard to their validity (see for example the formulas in Rosser, Schoenfeld on prime numbers). What you call 'odd looking' is an ubiquitous phenomenon in formulas which involve prime numbers. – Bruce Arnold Jan 2 2010 at 18:32
Yes Jose: this seems to me to be a perfectly reasonable question. It says "does the product of the primes in some region beat the product of the composites by some given factor, at least for n sufficiently large". Maybe you would have liked it better if he had written "for all n sufficiently large" rather than given the exceptional set? The exceptional set is just noise at the beginning (and almost certainly has nothing to do with your Sloane link). – Kevin Buzzard Jan 2 2010 at 20:03
@Bruce: I'm not asking for a defense, I agree it is interesting. What I'm asking for is an edit of the question. Instead of having me (and future readers) trying to figure out why the question is interesting, please edit the question to inform us why you find it interesting--what led you to it, applications, similar problems, etc. Thank you. – Ben Weiss Jan 2 2010 at 21:09
This seems to be a rather unattractive formulation of the question "Is the product of the primes between n and 2n bigger than sqrt(2n choose n)?" – Reid Barton Jan 2 2010 at 23:01
I'm just suggesting that the question be put into a "normal form" so that people who know about this stuff can see how strong a bound it is. – Reid Barton Jan 2 2010 at 23:28
## 2 Answers
This answer is just to point out that the result is true for large enough $n$. Let's rewrite it as $$\prod_{n\le p\le 2n}p > \sqrt{\binom{2n}{n}}$$ Since $\binom{2n}{n}\approx \frac{4^n}{n}$ introducing Chebyshev's functions $$\theta(x)=\sum_{p\le x}\text{log}p\quad,\quad \psi(x)=\sum_{p^{\alpha}\le x}\text{log}p$$ They satisfy $$\psi(x)=\theta(x)+O(\sqrt{x}\text{log}^2x)$$ What we want to prove is $$\theta(2n)-\theta(n) > n\text{log}2$$ It is a well-known asymptotics that $$\psi(x)=x+O(x\text{exp}(-c\sqrt{\text{log}x}))$$ for some positive $c$. In fact under the Riemann Hypothesis it is even true that $$\psi(x)=x+O(\sqrt{x}\text{log}^2x)$$ but we don't need this refinement. Now $$\theta(2n)-\theta(n)=n+O(n\text{exp}(-c\sqrt{\text{log}n}))$$ and this proves your assertion for large enough $n$. A good reference where all these results are proven is for example "Problems in Analytic Number Theory" by M.Ram Murty. I hope this helps, even though I did not mention any thing about possible small counterexamples. To find the smallest $n$ for which this argument works you'd have to look up each of these equations individually and look for specific bounds.
-
Thanks Gjergjj. IMO a virtue of the down voted question is that it links the parlance as seen on popular sites like 'The Prime Glossary' (but also on Math. of Comp.) using the terms 'compositorial', 'factorial' and 'primorial' with the Chebyshev's arithmetic functions. The part of the question which is concerned with 'possible small counterexamples' remains open. An effective lower bound of the validity of your asymptotic analysis is still appreciated. – Bruce Arnold Jan 3 2010 at 13:44
A computational approach to the Compositorial-Factorial-Primorial Inequality (CFPI).
Let $u_{0}=1,u_{1}=1,u_{2}=1/2$ and for $n>2$ define $u_{n}$ by $${\text{if}\ n\ \ \text{odd} \ \text{then }\text{if}\ n\ \ prime\ \ \text{then } \ u_{n}=1/n\text{ else }u_{n}=n\ \text{fi}\ \text{fi};}$$ $${\text{if}\ n\ \text{even}\ \text{then } \text{if}\ n/2\ prime\ \text{then}\ u_{n} =n\text{ else }u_{n}=4/n\ \text{fi}\ \text{fi}.}$$ Let the sequence of partial products of $u_{n}$ given by $U_{0}=1$ and $$U_{n}=U_{n-1}u_{n}\quad\left( n>0\right) .$$ The CFPI as stated in the question is equivalent to the statement $$\text{numerator }U_{n}<\text{denominator }U_{n}\quad\left(n\geq40\right) .$$
Using this algorithm I checked the CFPI in the range $40 \leq n \leq 10^5$ and found no counterexamples.
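For readers who want to reproduce the check, a direct implementation of the inequality (rather than of the $U_n$ recursion above) takes only a few lines. The sketch below assumes plain Python with a naive trial-division primality test, which is adequate for small ranges:

```python
def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def cfpi_holds(n):
    """Comp[n//2, n] < Fact[1, n//2] * Prim[n//2, n], with [a,b] = {a+1, ..., b}."""
    m = n // 2
    comp = prim = 1
    for k in range(m + 1, n + 1):
        if is_prime(k):
            prim *= k
        else:
            comp *= k
    fact = 1
    for k in range(2, m + 1):
        fact *= k
    return comp < fact * prim

print([n for n in range(3, 200) if not cfpi_holds(n)])
# expected, per the question: [10, 15, 27, 39]
```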
-
http://en.wikipedia.org/wiki/Noise_shaping
# Noise shaping
Noise shaping is a technique typically used in digital audio, image, and video processing, usually in combination with dithering, as part of the process of quantization or bit-depth reduction of a digital signal. Its purpose is to increase the apparent signal to noise ratio of the resultant signal. It does this by altering the spectral shape of the error that is introduced by dithering and quantization; such that the noise power is at a lower level in frequency bands at which noise is perceived to be more undesirable and at a correspondingly higher level in bands where it is perceived to be less undesirable. A popular noise shaping algorithm used in image processing is known as ‘Floyd Steinberg dithering’; and many noise shaping algorithms used in audio processing are based on an ‘Absolute threshold of hearing’ model.
## How noise shaping works
Noise shaping works by putting the quantization error in a feedback loop. Any feedback loop functions as a filter, so by creating a feedback loop for the error itself, the error can be filtered as desired. The simplest example would be
$\ y[n] = x[n] + e[n-1],$
where y is the output sample value that is to be quantized, x is the input sample value, n is the sample number, and e is the quantization error made at sample n (error when quantizing y[n]). This formula can also be read: The output sample is equal to the input sample plus the quantization error on the previous sample.
Essentially, when any sample's bit depth is reduced, the quantization error between the rounded (truncated) value and the original value is measured and stored. That "error value" is then added to the next sample prior to its quantization. The effect here is that the quantization error itself (and not the valid signal) is put into a feedback loop. This simple example gives a single-pole filter (a first-order Butterworth filter), or a filter that rolls off 6 dB per octave. The cutoff frequency of the filter can be controlled by the amount of the error from the previous sample that is fed back. For example, changing the value for A1 in the formula
$\ y[n] = x[n] + A_1 e[n-1]$
will change the frequency at which the feedback loop is centered.
More complex algorithms can be used which use more samples' errors' worth of feedback in order to create more complex curves. The formula
$\ y[n] = x[n] + \sum_{i=1}^{9} A_i e[n-i]$
is that of a ninth order noise shaper, and can allow very complex noise shaping.
Noise shaping must also always involve an appropriate amount of dither within the process itself so as to prevent determinable and correlated errors to the signal itself. If dither is not used then noise shaping effectively functions merely as distortion shaping — pushing the distortion energy around to different frequency bands, but it is still distortion. If dither is added to the process as
$\ y[n] = x[n] + A_1 e[n-1] + \mathrm{dither},$
then the quantization error truly becomes noise, and the process indeed yields noise shaping.
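As a concrete illustration of the formulas above, here is a small first-order noise-shaping quantizer (an illustrative Python sketch with an assumed TPDF dither and A1 = 1; the parameter names are chosen for this example and are not from any particular audio library):

```python
import numpy as np

def noise_shape(x, step, a1=1.0, seed=0):
    """First-order noise shaping: y[n] = x[n] + a1*e[n-1] + dither, then
    quantize y[n] to a multiple of `step`; e[n] is the error made at sample n."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(x)
    e_prev = 0.0
    for n, xn in enumerate(x):
        dither = (rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)) * step  # TPDF
        y = xn + a1 * e_prev + dither
        q = step * np.round(y / step)   # bit-depth reduction
        e_prev = y - q                  # quantization error, fed back next sample
        out[n] = q
    return out

# Quantize a quiet 1 kHz sine with a crude 8-bit-style step size; the spectrum
# of (out - x) shows the quantization noise pushed toward higher frequencies.
t = np.arange(4096) / 44100.0
x = 0.25 * np.sin(2 * np.pi * 1000.0 * t)
out = noise_shape(x, step=2.0 / 2**8)
error_spectrum = np.abs(np.fft.rfft(out - x))
```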
## Noise shaping in digital audio
Noise shaping in audio is most commonly done as a bit-reduction scheme. The quantization error from straight dither is flat, white noise.[citation needed] The ear, however, is less sensitive to certain frequencies than others at low levels (see Fletcher-Munson curves). By using noise shaping we can effectively spread the quantization error around so that more of it is focused on frequencies that we can't hear as well and less of it is focused on frequencies that we can hear. The result is that where the ear is most critical the quantization error can be reduced greatly and where our ears are less sensitive the noise is much greater. This can give a perceived noise reduction of 4 bits compared to straight dither.[1]
### Noise shaping and 1-bit converters
Since around 1989, 1 bit delta-sigma modulators have been used in analog to digital converters. This involves sampling the audio at a very high rate (2.8224 million samples per second, for example) but only using a single bit. Because only 1 bit is used, this converter only has 6.02 dB of dynamic range. The noise floor, however, is spread throughout the entire "legal" frequency range below the Nyquist frequency of 1.4112 MHz. Noise shaping is used to lower the noise present in the audible range (20 Hz to 20 kHz) and increase the noise above the audible range. This results in a broadband dynamic range of only 7.78 dB, but it is not consistent among frequency bands, and in the lowest frequencies (the audible range) the dynamic range is much greater — over 100 dB. Noise Shaping is inherently built into the delta-sigma modulators.
The 1 bit converter is the basis of the DSD format by Sony. One criticism of the 1 bit converter (and thus the DSD system) is that because only 1 bit is used in both the signal and the feedback loop, adequate amounts of dither cannot be used in the feedback loop and distortion can be heard under some conditions.[2][3] Most A/D converters made since 2000 use multi-bit or multi-level delta sigma modulators that yield more than 1 bit output so that proper dither can be added in the feedback loop. For traditional PCM sampling the signal is then decimated to 44.1 ks/s or other appropriate sample rates.
## References
1. Gerzon, Michael; Peter Craven, Robert Stuart, and Rhonda Wilson (16–19 March 1993). "Psychoacoustic Noise Shaped Improvements in CD and Other Linear Digital Media". 94th Convention of the Audio Engineering Society, Berlin. AES. Preprint 3501.
http://math.stackexchange.com/questions/279980/difference-between-fourier-transform-and-wavelets
Difference between Fourier transform and Wavelets
While understanding difference between wavelets and Fourier transform I came across this point in Wikipedia.
The main difference is that wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency.
I did not understand what is meant here by "localized in time and frequency."
Can someone please explain what does this mean?
-
4 Answers
Very roughly speaking: you can think of the difference in terms of the Heisenberg Uncertainty Principle, one version of which says that "bandwidth" (frequency spread) and "duration" (temporal spread) cannot be both made arbitrarily small.
The classical Fourier transform of a function allows you to make a measurement with 0 bandwidth: the evaluation $\hat{f}(k)$ tells us precisely the size of the component of frequency $k$. But by doing so you lose all control on spatial duration: you do not know when in time the signal is sounded. This is the limiting case of the Uncertainty Principle: absolute precision on frequency and zero control on temporal spread. (Whereas the original signal, when measured at a fixed time, gives you only absolute precision on the amplitude at that fixed time, but zero information about the frequency spectrum of the signal, and represents the other extreme of the Uncertainty Principle.)
The wavelet transform take advantage of the intermediate cases of the Uncertainty Principle. Each wavelet measurement (the wavelet transform corresponding to a fixed parameter) tells you something about the temporal extent of the signal, as well as something about the frequency spectrum of the signal. That is to say, from the parameter $w$ (which is the analogue of the frequency parameter $k$ for the Fourier transform), we can derive a characteristic frequency $k(w)$ and a characteristic time $t(w)$, and say that our initial function includes a signal of "roughly frequency $k(w)$" that happened at "roughly time $t(w)$".
How is this helpful? Let us say we are looking at the signal of the light emitted from a traffic light. So for some time it will be red, and for some time it will be green (ignore the yellow for now). If we take the Fourier transform of the observed frequency, we can say that
• At some time the traffic light shows red. (We know frequency to infinite precision, and that the red part of the signal is non-zero.)
• At some time the traffic light shows green.
But a functioning traffic light would have either red or green shown at a time, and not both. And if the traffic light malfunctions and shows both lights at the same time, we would still see from the Fourier transform
• At some time the traffic light shows red.
• At some time the traffic light shows green.
But if we take the wavelet transform we can sacrifice frequency precision to gain temporal information. So with the wavelet transform done on the working traffic light we may see
• At parameter $w$ which corresponds roughly to $t(w)$ being 1 o'clock sharp and $k(w)$ corresponding to red, the wavelet transform is large and non-zero. This can be taken to mean that sometime around 1 o'clock sharp (could be exactly 1 o'clock, could be 1 minute past, could be 30 second before) the light showed a color that is more or less red (could be a little bit purple, or maybe a little bit amber).
• At parameter $w$ which corresponds roughly to $t(w)$ being 1 o'clock sharp and $k(w)$ corresponding to green, the wavelet transform is almost zero. This can be taken to mean that at all the times around 1 o'clock (say plus or minus 2 minutes) the traffic light does not show any hint of green.
• At parameter $w$ which corresponds roughly to $t(w)$ being five minutes past 1 and $k(w)$ corresponding to green, the wavelet transform is large and non-zero. This would indicate that around 1:05 (maybe 1:06, or 1:04) the light shined greenish (could have a tinge of teal or a bit of yellow in it).
This would tell us that not only can the traffic light show both red and green lights, that at least at around 1 o'clock the light is working properly and only showing one light.
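The same point can be seen numerically. In the sketch below (plain NumPy; all names and parameters are chosen just for this illustration), the signal is a 5 Hz tone during its first second and a 40 Hz tone during its second second. The global Fourier transform shows both frequencies but not when they occur, while correlating against short windowed oscillations, crude Gabor/wavelet-style atoms, localizes each tone in time:

```python
import numpy as np

fs = 1000.0                         # sample rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)     # two seconds of samples
x = np.where(t < 1.0, np.sin(2 * np.pi * 5.0 * t), np.sin(2 * np.pi * 40.0 * t))

# Global Fourier transform: peaks near 5 Hz and 40 Hz, but no time information.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
print("strongest FFT bin:", freqs[np.argmax(spectrum)], "Hz")

def atom_response(center_time, freq, width=0.2):
    """Correlate x with a short windowed oscillation centred at `center_time`
    with characteristic frequency `freq` (roughly one wavelet coefficient)."""
    window = np.exp(-0.5 * ((t - center_time) / width) ** 2)
    atom = window * np.exp(2j * np.pi * freq * t)
    return abs(np.sum(x * atom.conj())) / np.sum(window)

for when in (0.5, 1.5):
    for f in (5.0, 40.0):
        print(f"t ~ {when} s, f ~ {f} Hz: {atom_response(when, f):.3f}")
# Only (t ~ 0.5 s, 5 Hz) and (t ~ 1.5 s, 40 Hz) give large responses.
```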
-
Thanks a lot Willie Wong. Traffic signal example was very useful to understand physical meaning – chatur Jan 16 at 17:29
Interesting question.
My guess would be that a wavelet can be defined within a specific time span. As the first sentence in the wikipedia page states:
A wavelet is a wave-like oscillation with an amplitude that starts out at zero(0), increases, and then decreases back to zero.
Thus a wavelet can be defined within a certain time span, starting with $f(t_0) = 0$ and ending with $f(t_{\mathrm{end}}) = 0$. This then gives us time localization.
On the other hand, a Fourier transform is an integral from $t = -\infty$ to $t = +\infty$, so there is no time localization. Even if you transform a wavelet to its frequency domain, the relative phase relation of the different contributing frequencies still determines the position in time of the transformed wavelet.
Edit: Of course a Fourier transform can be performed on a certain time interval t, but keep in mind that, when transforming back to time domain, the transformed signal will repeat itself every time interval t. This again will give no localization in time for a Fourier transform.
NB: It seems I cannot comment on my own answer since I wasn't logged in, so instead I edited my own answer. Sorry for the inconvenience.
-
Thanks a lot Splinter for the answer. The point that the wavelet's integral is between two finite specified points while the Fourier transform runs from $-\infty$ to $+\infty$ seems good, but in actual practice the Fourier transform is also used on a finite periodic interval. So is the difference that wavelets do not require the interval to be periodic? – chatur Jan 16 at 10:26
@user19905: it appears you have this account which is active on both Math and Physics, and separately the Splinter account? If so, please ask the Physics mods to merge the two over there. – Willie Wong♦ Jan 16 at 12:17
An audio signal $t\mapsto f(t)$ as recorded on an LP record is localized in time: It tells you the exact air pressure difference $\Delta p(t)$ at each time $t$. But the exact mixture of frequencies audible at (or near, see below) time $t$ is not available.
If you process this LP (running in time $t$ from $-\infty$ to $\infty$) through a Fourier analyzer you get the Fourier transform $\omega\mapsto \hat f(\omega)$. This new function (or plot) tells you which frequencies were present with which intensities in the music recorded on the LP, but it doesn't tell you at which times, e.g., the note $C''$ was played.
The Heisenberg uncertainty relation tells us that it is impossible to have the best of both: There is no such thing as a definite pitch $\omega$ played during an arbitrarily small time interval around a given time $t$.
(A musical score comes near to this dream: It tells us at which time $t$ a particular note, e.g., C'', should be played during a certain small time interval of given length $\Delta t$.)
Now wavelets are special "sounds" that are restricted to time intervals of length $2^{-r}$, $\ r\in{\mathbb Z}$, and perform, say, $3$ full oscillations in such an interval. So in a way they are approximately localized as well in time as in frequency. But there is much more to it, e.g., concerning the numerical handling of the "wavelet analysis" of a time signal $f$. You should consult one of many primers on wavelet theory, e.g. the Introduction to wavelets and wavelet transforms by C.S. Burrus et al.
-
In layman's terms: A Fourier transform (FT) will tell you what frequencies are present in your signal. A wavelet transform (WT) will tell you what frequencies are present and where (or at what scale). If you had a signal that was changing in time, the FT wouldn't tell you when (time) this occurred. You can also think of replacing the time variable with a space variable, with a similar analogy.
-
http://physics.stackexchange.com/questions/2786/visualization-of-protons-wavefunction?answertab=votes
# Visualization of proton's wavefunction
Visualizations of hydrogen's wavefunctions / electron orbitals are abound.
I could not however locate a visualization of the wavefunction of a proton. The reason I was looking for one is to see whether the three quarks that make it up "occupy" disjoint regions of space (i.e. the maximum probability locations are separated).
Can someone please point to one such visualization, or explain why they can't be created (or why my question is nonsensical, which is always a possibility...)
-
Although not a specialist in particle physics, I have a strong feeling that this is, indeed, not possible, because I'd treat quarks in a quantum field approach i.e. in 2nd quantization. Don't really know how to visualize those, as, for example, particle number isn't something that's fixed – Lagerbaer Jan 13 '11 at 18:50
1
@Lagerbaer: I don't think just 2nd quantization will help you here if by that you mean just perturbative approach to particles. Low energy QCD is quite non-perturbative because of the very strong coupling. If there is a way to answer this question it must be through lattice-QCD or AdS/QCD and it won't be easy. I am looking forward to answers. – Marek Jan 13 '11 at 18:58
Like Marek, I strongly suspect that the only way you could perhaps get a meaningful answer out of this is through lattice QCD. That's not really my specialty so I can't give that answer, but later on perhaps I can try to at least post an explanation of how the proton is described in QCD, which might help clarify why this is such a difficult question. – David Zaslavsky♦ Jan 13 '11 at 19:09
1
@romkyns: yes and no. Quarks are the same as electrons in that they are both elementary particles described by few numbers (mass, spin, charges). But they differ in important way: quarks carry a strong charge and interact by strong force. This force is (sorry for playing Captain obvious) strong at low interaction energies (which are the energies the quarks have inside the proton) and hard to investigate by usual means; in particular, the simple quantum mechanical picture that is used to analyze an atom. You need the full quantum field theory here. – Marek Jan 13 '11 at 22:57
1
The problem is that the force between the quarks is so strong that the binding energy is itself sufficiently high to create new particles. This is the reason why, for example, single quarks are never observed: When pulling them apart, the binding energy is converted to new quarks. – Lagerbaer Jan 14 '11 at 4:12
## 3 Answers
For the electron in the Hydrogen atom, the orbital motion doesn't interact with the electron's spin, so "the wave function" pretty much means just a complex $\psi(x,y,z)$. You may choose the electron's spin to be up or down independently of that.
However, you must realize that a wave function of three quarks has many more components. First of all, there are three particles in relative motion rather than two. It means that even if you decouple the center-of-mass coordinates, the relative wave function is a function of six coordinates, $x_1,y_1,z_1,x_2,y_2,z_2$.
So you can't really visualize the full wave function in a simple way because it is a function of six variables rather than three and of course, it doesn't factorize into a product of functions of three variables (something that is strictly speaking true even for all atoms more complicated than the Hydrogen atom).
Moreover, you must understand that the quarks have many discrete degrees of freedom that are not decoupled from the orbital motion in this case. Each of the three quarks has 1 of 3 colors and 1 of 2 possible values of the spin. The combinations of the color really force you to consider 27 combinations of the quark colors - 27 different wave functions (only 6 of them are nonzero and equal to each other, up to a sign) - and 8 combinations of the three quarks' spin. The spins are correlated as well.
So the right wave function for 3 quarks is really a set of $27 \times 8$ complex functions of six variables. Of course, you may visualize various aspects of these functions by integrating it in space and so on.
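Spelled out, the counting in the last two paragraphs is just $$\underbrace{3\cdot 3\cdot 3}_{\text{colors}}\times\underbrace{2\cdot 2\cdot 2}_{\text{spins}} = 27\times 8 = 216$$ component functions, each depending on the six relative coordinates $x_1,y_1,z_1,x_2,y_2,z_2$.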
It's important that the strong force between the non-relativistic quarks is spin-dependent. That's why the proton's spin is not arbitrary - it equals $1/2$ instead. So one quark's spin differs from the remaining two.
Even if you managed to invent ways to visualize the $27\times 8$ wave functions of 6 variables describing the relative positions of the three quarks, with the best correlations between the colors, spins, and motion, it would still be a hugely oversimplified model of the proton. In reality, it is not true that the proton only contains 3 quarks. Those 3 quarks we normally talk about are analogous to "valence electrons" in an atom but there is also a large sea of equally real gluons and quark-antiquark pairs - additional partons whose total color vanishes.
To really describe the wave function of the proton, you need to talk about the right theory of the proton's structure - quantum chromodynamics (QCD) - which is an example of a quantum field theory - one discovered in the early 1970s. It has infinitely many degrees of freedom, like any field theory, so the right "wave function" is really a "wave functional" or a function of infinitely many variables.
You must realize that many approximations valid in atomic physics break down. For example, the speed of electrons in the atoms is very small - effectively controlled by the fine-structure constant $1/137$ that tells you the speed in the units of the speed of light. That allows you to use non-relativistic quantum mechanics. However, the proton's "strong" fine-structure constant relevant for the "three quarks only" is close to one, so the speed of quarks in the proton is always comparable to the speed of light. Consequently, the extra relativistic energy/mass is comparable to the rest mass as well as the interaction energy, and the proton always has enough energy to create quark-antiquark pairs and so on. It's a mess where relativity is needed much like particle creation and annihilation.
In some sense, it's true that "the 3 quarks", if you select them from the sea of the infinitely many partons, like to occupy three different regions in the ground state. But to be more accurate about what this statement actually means for the wave function(al) of the actual proton, you would have to go through a series of so many approximations and idealizations that it's not worth it. The goal of quantum field theory is not to visualize the structure of something; the goal is to calculate the results of the experiments and one can't really design good experiments that would probe the detailed shape of the wave function of the proton.
The parton distribution functions are the closest observables to this information. However, their very assumption is that the proton has infinitely many rather than 3 partons (quarks, antiquarks, or gluons). To summarize, don't expect anything that is as valid yet as useful as the pictures for the atomic physics. Proton's analogy to an atom has severe limitations.
-
– romkyns Jan 14 '11 at 12:06
Actually, Luboš, if you don't mind I have a quick follow-up question: is this set of functions completely spherically symmetric? It's just that I have seen mention that the "helium nucleus is spherically symmetric", and I am now wondering whether this statement is as much an approximation as the proton structure diagram. – romkyns Jan 14 '11 at 12:11
@romkyns: that picture is indeed misleading if interpreted literally. But it captures few essential points: quark content of the proton (determining e.g. the total charge of proton) and the interaction modeled by springs. Springs are indeed the best way to imagine strong interaction at low energies. As you pull the quarks apart, the energy increases and they want to go back together. The more you pull them, the stronger the force. This is in contrast with Coulomb force which gets weaker at long distances (which is one of the reasons why you can model an atom with just quantum mechanics). – Marek Jan 15 '11 at 12:44
Dear Romkyns, if I had to draw a scheme of the proton on paper, without equations, I would probably draw the same thing haha. Again, the proton's wave functions - even as a function of the non-relativistic quarks in an approximation - has many components given by the quarks' spins. They are different and their shape in space also depends on the 3 spins. Finally, the whole proton - including all the components of the wave functions in space - also fails to be spherically symmetric. After all, it has a nonzero spin so it can't be symmetric under all rotations. It also has a mag. dipole moment. – Luboš Motl Jan 17 '11 at 20:13
At a very naive level your question amounts to asking what the bound state for three bodies should look like. I would suggest looking up "Efimov trimers". These are many-body bound states predicted by Efimov some 40 years ago and recently experimentally observed. I'm not suggesting that there is a one-to-one correspondence between Efimov states and bound states of nucleons, but they should at the very least provide some sort of a picture.
-
And I propose to look at my own pictures here. There is no proton wave function in an atom, only a wave function of the relative motion. It is a quasi-particle wave function. The proton is in a mixed state when in an atom.
Concerning the always-bound quarks, it is a non-linear problem with a strong coupling.
And apart from the wave function with many arguments, one can think of the "charge density" $\rho(r)$ determined by the charge form factor(s) $F(q)$. This is more tangible.
-
If you need to link, please provide a direct link. Otherwise it looks just like an advertisement to your blog. – Marek Jan 19 '11 at 18:13
OK, I corrected the link. – Vladimir Kalitvianski Jan 19 '11 at 19:22
Please give the reason for your negative score. – Vladimir Kalitvianski Jan 19 '11 at 22:49
Dear downvoters, tell me where I am wrong, please. I would like to learn. – Vladimir Kalitvianski Jan 31 '11 at 16:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439480304718018, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/166579/deducing-results-in-linear-algebra-from-results-in-commutative-algebra/166685
|
# Deducing results in linear algebra from results in commutative algebra
Here are two examples of results which can be deduced from commutative algebra:
• Any $n\times n$ complex matrix is conjugate to a Jordan canonical matrix (can be proven using the structure theorem for modules over a PID, in this case $\mathbf{C}[T]$ - see for example these course notes).
• Commuting matrices have a common eigenvector (this can be seen as a consequence of Hilbert's Nullstellensatz, according to Wikipedia).
My question is, does anyone know of other examples of results in linear algebra which can be deduced (non-trivially*) from results in commutative algebra?
More precisely: Which results about modules over fields can be obtained via modules over more general commutative rings? (Thanks to Martin's comment below for suggesting this precision of the question).
.* by "non-trivially," I mean you have to go deeper than simply applying module theory to modules over a field.
-
I like your question, but I feel like your asterisk rules out all possible answers to your question :( You want to apply commutative algebra results to linear algebra, but you won't allow the application of module results to vector spaces? – rschwieb Jul 4 '12 at 18:25
1
@rschwieb Note that the first example I give uses a result in module theory. E.g. "linear transformations correspond to matrices, because this is true for free modules" is a trivial deduction which isn't worth mentioning. I only meant to avoid getting responses like "but every $k$-vector space is a $k$-module, so everything comes from commutative algebra." – vgty6h7uij Jul 4 '12 at 18:41
1
Often it is the other way round: Algebraic geometry comes down to commutative algebra via charts, and in the residue fields this comes down to linear algebra. A friend of mine, who taught algebraic geometry, once said: "It's all just Linear Algebra + Nakayama." – Martin Brandenburg Jul 4 '12 at 20:07
1
Somehow I like your question, but on the other hand it is hard to make it precise. After all, all parts in pure mathematics are connected with each other, and linear algebra is just a part of commutative algebra (namely, over a field). But perhaps the following makes the question more precise: Which results about modules and their homomorphisms over fields can be obtained via modules over more general commutative rings? – Martin Brandenburg Jul 4 '12 at 20:13
@MartinBrandenburg Thanks for clarifying the question; it seems I had some trouble stating the question precisely. Maybe I should add it to the original post. – vgty6h7uij Jul 4 '12 at 20:16
## 3 Answers
I think this is an interesting question, and I have several responses to it.
1) You say (correctly, of course) that Jordan form is proved using the structure theory of finitely generated modules over a PID. But to me this latter theory is part of linear algebra rather than commutative algebra. It may be "graduate level" linear algebra, but it is generally found in general-purpose algebra texts rather than commutative algebra texts and often in the context of linear algebra.
1$'$) To go deeper, how "linear algebraic" you feel this structure theorem is probably depends upon which proof you give. If you follow the route which first establishes Smith normal form for matrices over a PID, this is strongly reminiscent of undergraduate linear algebra. It seems though that the fashion in many contemporary texts is to give a less matrix-oriented approach. Very recently I realized that I had never really absorbed (and maybe was never taught) the linear algebra approach to structure theory and also that I needed it in some work I am doing in lattices and the geometry of numbers. Maybe a serious undergraduate algebra sequence should include treatment of the Hermite and Smith normal forms and applications to module theory rather than more abstract stuff which will surely be covered later on.
1$''$) One of the few commutative algebra texts I know that gives a proof of the structure theorem for finitely generated modules over a PID is mine. The proof I give is (I think) not one of the standard ones, so to me this is an example of a linear algebra result proved by commutative algebraic methods. At the moment the argument is scattered through various sections:
$\bullet$ In $\S 3.9.2$ it is shown that a finitely generated torsionfree module over a PID is free. Since free modules are projective, it follows that every finitely generated module over a PID is the direct sum of a free module and a torsion module, so it remains to classify finitely generated torsion modules.
$\bullet$ In $\S 17.5.3$ I give the structure theorem for finitely generated torsion modules over a discrete valuation ring, which takes advantage of the fact that its quotients are "self-injective rings".
$\bullet$ In $\S 20.6$ I explain how an easy localization argument reduces the case of finitely generated torsion modules over an arbitrary PID -- in fact over an arbitrary Dedekind domain -- to the already proved case of a DVR.
2) Although it may be interesting to know that the Nullstellensatz can be used to prove that commuting square matrices $A,B$ over an algebraically closed field have a common eigenvector, it seems to me that this is significant overkill. What I imagine to be the standard proof is simple and elementary:
Let $v$ be an eigenvector for $A$ -- say $Av = \alpha v$ -- and choose $k \in \mathbb{Z}^+$ minimal so that the set $\{v,Bv,B^2v,\ldots,B^k v\}$ is linearly dependent. Then $W = \operatorname{span}(v,Bv,\ldots,B^{k-1} v)$ is $B$-stable, so it has an eigenvector for $B$: say $Bw = \beta w$. On the other hand, since for all $0 \leq j \leq k-1$, $AB^j v = B^j A v = B^j \alpha v = \alpha B^j v$, every vector in $W$ is an eigenvector for $A$, and thus $w$ is an eigenvector for both $A$ and $B$.
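For readers who like to see numbers, here is a quick R sanity check of the statement (not of the argument above); it uses the degenerate case where $B$ is a polynomial in $A$, which automatically commutes with $A$:
```
set.seed(1)
M <- matrix(rnorm(9), 3, 3)
A <- M + t(M)                        # symmetric, so real eigenvectors
B <- A %*% A - 2 * A + diag(3)       # a polynomial in A, hence commutes with A
max(abs(A %*% B - B %*% A))          # ~0: A and B commute
v <- eigen(A)$vectors[, 1]           # an eigenvector of A ...
(B %*% v) / v                        # ... entries all (nearly) equal: v is also an eigenvector of B
```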
I am tempted to say that it often works like this: you can sometimes bring in commutative algebraic methods to prove linear algebraic results in nontrivial and interesting ways, but in most cases I know the standard linear algebraic methods are simpler and more efficient. One exception is that in proving polynomial identities (like the Cayley-Hamilton theorem) in linear algebra it is often useful to note that it is sufficient to show that the identity holds "generically", or even on a Zariski-dense subset of the appropriate vector space.
-
Thanks for sharing these interesting and detailed pedagogical remarks. – vgty6h7uij Jul 4 '12 at 19:52
This answer doesn't contain anything new and it was already alluded via the JNF in the question, but let me elaborate this point of view:
Given a vector space $V$ over a field $K$, then an endomorphism $f : V \to V$ is the same as giving $V$ the structure of a left $K[T]$-module (which restricts to the given $K$-module structure). The multiplication with $T$ on the left is just $f$. This is just the most natural gadget when you think about polynomials $f^n + a_{n-1} f^{n-1} + \dotsc + a_0$ and their action on $V$. Many notions of linear algebra can be understood in more concise terms with the help of this module (and actually this can be used as a motivation for an introduction to modules in a linear algebra class):
• A $f$-invariant subspace of $V$ is just a $K[T]$-submodule of $V$
• That $V$ is $f$-cyclic just means that $V$ is cyclic as a $K[T]$-module, i.e. generated by one element, i.e. isomorphic to $K[T]/(p)$ for some $p$. We have that $V$ is finite-dimensional iff $p \neq 0$.
• It immediately follows that $f$-invariant subspaces of $f$-cyclic spaces are also $f$-cyclic. Try to prove this without the language of modules.
• That $V$ is $f$-indecomposable just means that it is indecomposable as a $K[T]$-module
• More general and concisely: The arrow category of $\mathsf{Vect}_K$ is isomorphic to the category of $K[T]$-modules.
• The minimal polynomial $p$ of $f$ is just the unique normed generator of the kernel of the ring homomorphism $K[T] \to \mathsf{End}_K(V)$ given by the module.
• When you think about the endomorphism of $K[T]/(p)$ which multiplies with $T$ in terms of the canonical basis, you will arrive automatically at the companion matrix of $p$ and the relation with $f$-cyclic vector spaces. You don't have to "learn" this, it is just there.
• The structure theorem for finitely generated modules over principal ideal domains implies the general normal form (which holds over every field $K$): Every endomorphism of a finite-dimensional vector space is similar to a direct sum of cyclic endomorphisms. In coordinate language: Every matrix is similar to a block matrix of companion matrices.
• If $K$ is algebraically closed, you may reduce to polynomials of the form $(T-\lambda)^n$, but $K[T]/(T-\lambda)^n \cong K[T]/T^n$ has a very simple matrix, namely the Jordan block. Hence, the general normal form implies the Jordan normal form: Every matrix is similar to a block matrix of Jordan blocks.
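To make the companion-matrix and minimal-polynomial points above concrete, here is a small R sketch; the polynomial $p(T)=(T-2)^2(T-3)=T^3-7T^2+16T-12$ is an arbitrary example:
```
# Multiplication by T on K[T]/(p) in the basis 1, T, T^2: the companion matrix of p.
C <- matrix(c(0, 1, 0,
              0, 0, 1,
              12, -16, 7), nrow = 3)  # columns are the images of 1, T, T^2
eigen(C)$values                       # roughly 3, 2, 2: the roots of p
                                      # (2 is a defective eigenvalue, so expect tiny numerical error)
p_of_C <- C %*% C %*% C - 7 * (C %*% C) + 16 * C - 12 * diag(3)
max(abs(p_of_C))                      # 0: C satisfies its own minimal polynomial p
```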
-
Criteria for unitary similarity between two matrices were given by Specht in 1940 in a paper titled “Zur Theorie der Matrizen” (“Toward Matrix Theory”).
-
2
Zur Theorie der Matrizen is On Matrix Theory in English (in this context zur does not mean towards). – Lennart Jul 4 '12 at 18:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 70, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9243160486221313, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/82510?sort=newest
|
The Correlation of the Mobius Function and Dirichlet Characters.
Let $\chi$ be a Dirichlet character, and define $\phi_\chi (n)$ so that it satisfies $$\sum_{n=1}^\infty \phi_\chi (n)n^{-s}=\frac{\zeta(s-1)}{L(s,\chi)}.$$
In other words
$$\phi_{\chi}(n)=\sum_{d|n}\mu\left(\frac{n}{d}\right)\chi\left(\frac{n}{d}\right)d=\left(\text{Id}*\mu\chi\right)(n).$$
My question is, how large can $\phi_\chi(n)$ be? More precisely, what is the smallest function $f$ such that for all $n$ and $\chi$ $$\frac{\phi_{\chi}(n)}{n}=\sum_{d|n}\frac{\mu(d)\chi(d)}{d}\ll f(n).$$
It is not hard to see that $\frac{\phi_{\chi}(n)}{n}\ll \log n$ for all $n$ so $f\ll \log n$. For Euler's $\phi$ function, the first term is the main contributor, as $\phi(n)\leq n$ we know that $\frac{\phi(n)}{n}\leq 1$ for all $n$. Since $\chi$ has norm $1$, and the sums $\sum_{n\leq x}\mu(n)\chi(n)$ are small, we might conjecture that $\frac{\phi_\chi (n)}{n}\leq 1$.
However this is not so. We can find a character such that $\mu$ and $\chi$ function have a lot of correlation, enough to make the sum of size $\sqrt{\log\log n}$. This is outlined by the following construction:
Let $n$ be the product of all primes $p=3+4k$ where $p\leq M$, and let $\chi$ be the unique quadratic character modulo $4$. Then choose whether or not to remove the prime $3$ from this product so as to force the congruence $n\equiv 1 \pmod{4}$. For each divisor $d$ of $n$, if $\omega(d)$ is even, then $d\equiv 1\pmod{4}$ so that $\chi(d)\mu(d)=1$, and if $\omega(d)$ is odd, then $d\equiv 3\pmod{4}$ so that $\chi(d)\mu(d)=1$ yet again. This means that $$\frac{\phi_\chi (n)}{n}=\sum_{d|n} \frac{1}{d}\gg \sqrt{\log M}\gg\sqrt{\log \log n}.$$ The second last $\gg$ follows from the fact that if $A$ is the set of integers composed only of primes congruent to $3$ modulo $4$, then $\sum_{n\leq M,\ n\in A} \frac{1}{n}\asymp\sqrt{\log M}$, and the last $\gg$ follows from the fact that $\log n =\theta(M;4,3)$.
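The construction is easy to check numerically. Here is a small R sketch using the first few primes $\equiv 3\pmod 4$ (which primes, and how many, is an arbitrary choice); $\chi$ is the nonprincipal character mod $4$, every divisor $d$ of $n$ satisfies $\mu(d)\chi(d)=1$, and the sum collapses to $\sum_{d\mid n}1/d$:
```
ps  <- c(3, 7, 11, 19, 23)                      # primes congruent to 3 mod 4; n = prod(ps)
chi <- function(d) if (d %% 2 == 0) 0 else if (d %% 4 == 1) 1 else -1

divisors <- 1; mu <- 1                          # start with d = 1, mu(1) = 1
for (k in 1:length(ps)) {
  for (S in combn(length(ps), k, simplify = FALSE)) {
    divisors <- c(divisors, prod(ps[S]))        # squarefree divisor built from a subset of ps
    mu       <- c(mu, (-1)^k)
  }
}

sum(mu * sapply(divisors, chi) / divisors)      # about 1.83 ...
sum(1 / divisors)                               # ... the same as sum_{d|n} 1/d, and in particular > 1
```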
Any references to papers which might deal with this sort of sum is greatly appreciated,
Thanks,
-
1 Answer
Up to a multiplicative constant the answer is what you discovered, $f(n)=\sqrt{\log\log n}$. Obviously you can assume that $\chi$ is not a principal character since in that case you get something less than $1$. Then after writing $$\sum_{d|n}\frac{\mu (d)\chi(d)}{d}=\prod_{p|n}\left(1-\frac{\chi (p)}{p}\right)$$and taking the logarithm of the RHS the problem boils down to finding an upper bound for the real part of the sum $$\sum_{p|n}\frac{-\chi (p)}{p}.$$ Note that only the real part of this expression is important to us, since we are going to exponentiate in the end to get back to the original problem (i.e. we are only interested in bounding the modulus of the resulting complex number).
Since $\chi$ is not principal, the worst thing that could happen in the previous sum is to have half of the values equal to $-1$ (e.g. use the Legendre symbol modulo some prime $q$) and then suppose that $n$ is a product of all primes up to some point which fall in the residue classes $d$ for which $\chi (d)=-1$. Keep in mind that you can't have more than $1/2$ of the values equal to $-1$, because of the orthogonality relations.
By Dirichlet's Theorem this accounts for half of the primes, evenly distributed in the various residue classes, and translating everything back to the original formulation gives the upper bound $\sqrt{\log\log n}$. If you want to see the details of how to carry out this final calculation then I can provide them, but from your example it looks like you already understand how that part of the argument works.
Final note: As Greg pointed out below this bound is not uniform in $\chi$. The best bound in general is $\log\log n$, which is achieved by letting $n$ run through the sequence of primorials and simultaneously letting $\chi_n$ run through a sequence of characters which take the value $-1$ at all primes dividing $n$.
-
4
One needs to be careful here. I think this is a valid proof that $\phi_\chi(n)/n \ll_\chi \sqrt{\log\log n}$; however, the implicit constant will depend upon $\chi$. Given $n$, one can find a Dirichlet character $\chi$ such that all the small primes $p$ have $\chi(p)=-1$. The resulting construction shows that no bound better than $\phi_\chi(n)/n \ll \log\log n$ can hold uniformly over $n$ and $\chi$. – Greg Martin Dec 3 2011 at 8:18
(Also I don't understand what you mean by "the values of $\chi(d)$ are symmetric about the imaginary axis". If $\chi$ is a cubic character then its values are $-\frac12\pm\frac{\sqrt{3}}2i$ and $1$, which does not have that symmetry; the same is true of any odd-order character.) – Greg Martin Dec 3 2011 at 8:20
I agree with your first comment- I thought about that before I went to sleep last night and I think you are correct. I believe what you claim, but would you mind explaining how to construct a Dirichlet character with the property you mention? – Alan Haynes Dec 3 2011 at 9:19
@Greg Your second comment is relevant but not as crucial. Certainly if you have an odd character then what I said about the symmetry is true. Otherwise you can still argue the same way- the worst case scenario is to have as many −1's as possible, but by orthogonality this can be at most 1/2 of the residue classes. I'll edit this to make it (hopefully) correct. Thanks for your comments, it is a fun problem. – Alan Haynes Dec 3 2011 at 9:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 73, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483940005302429, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/105397?sort=oldest
|
## There are two slightly different notions of ultraproduct. Why is one said to be better than the other?
Let $I$ be a set and $\mathcal{U}$ an ultrafilter on $I$. Let $(X_i)_{i \in I}$ be an $I$-indexed family of sets. The ultraproduct of the family $(X_i)$ with respect to $\mathcal{U}$ is, everyone agrees, another set. But which set is it? There are two different definitions, and they sometimes give different results.
For the sake of discussion, I'll call them "Type 1" and "Type 2" ultraproducts.
Type 1: The type 1 ultraproduct of $(X_i)_{i \in I}$ with respect to $\mathcal{U}$ is $$\Bigl( \prod_{i \in I} X_i \Bigr) \Bigl/ \sim$$ where $$(x_i)_{i \in I} \sim (x'_i)_{i \in I} \iff \{ i \in I: x_i = x'_i \} \in \mathcal{U}.$$
Type 2: View the poset $(\mathcal{U}, \subseteq)$ as a category. The type 2 ultraproduct of $(X_i)_{i \in I}$ with respect to $\mathcal{U}$ is the colimit of the functor $(\mathcal{U}, \subseteq)^{\text{op}} \to \mathbf{Set}$ defined on objects by $$J \mapsto \prod_{j \in J} X_j$$ and on maps by projection. Explicitly, then, the Type 2 ultraproduct is $$\Bigl( \coprod_{J \in \mathcal{U}} \prod_{j \in J} X_j \Bigr) \Bigl/ \approx$$ where $$(x_j)_{j \in J} \approx (x'_k)_{k \in K} \iff \{ i \in J \cap K: x_i = x'_i \} \in \mathcal{U}.$$
The difference: The two types of ultraproduct are the same if either none of the sets $X_i$ are empty or almost all of them are empty. But in the remaining case, where at least one $X_i$ is empty but the set of such $i$ is not large enough to belong to $\mathcal{U}$, they're different: the Type 1 ultraproduct is empty but the Type 2 ultraproduct is not.
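For a minimal concrete instance of the difference: take $I=\{1,2\}$, let $\mathcal{U}$ be the principal ultrafilter generated by $\{1\}$, and put $X_1=\{a\}$, $X_2=\emptyset$. Then $\prod_{i\in I}X_i=\emptyset$, so the Type 1 ultraproduct is empty; but $\{1\}\in\mathcal{U}$ and $\prod_{j\in\{1\}}X_j=\{a\}$, and the colimit in the Type 2 definition is $\{a\}$.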
The question: I've read in a couple of texts (both coming from the point of view of categorical logic) that the Type 2 ultraproduct is really the right one. But why? On what criteria is Type 2 judged to be better than Type 1?
A vague guess at an answer: I think I can guess very roughly what's going on. There's been a tradition in logic — maybe dying out now? — of taking all structures to be nonempty by definition. But when you move to the more general setting of categorical logic, that's no longer a satisfactory approach. Although in the category of sets, there's just a single object with no elements, in many other categories, there are lots of interesting objects with no (global) elements: e.g. there are lots of interesting sheaves with no global sections.
So categorical logic sometimes involves a recasting of classical, set-based logic, in order to handle empty sets/types satisfactorily. I imagine that something of the sort is going on here. (I only defined ultraproducts of sets, but you could of course define ultraproducts of objects of any other sufficiently complete category.) But still, I don't see clearly why Type 2 is the right choice.
See also This question of Joel David Hamkins, and its responses.
-
## 4 Answers
The main factor in choosing between Type 1 and Type 2 ultraproducts is whether or not empty domains make sense in the given context.
A majority of logical systems for first-order logic do not allow empty domains. This avoids a lot of technical difficulties. Since empty structures are arguably not that interesting, there is not much loss in doing that. This simplifying assumption brings a multitude of technical advantages that occur throughout the development of logic. For example, when laying the groundwork for structures and models, it is handy to always have at least one variable assignment. However, such a simplifying assumption is not at all necessary and it is possible to develop first-order logic in a way that allows empty domains.
Another routine simplifying assumption is that there is only one sort of object. Indeed, multiple sorts can always be simulated using unary predicates to distinguish different domains. However, there are many contexts where multiple sorts make a lot of sense and this unary predicate "hack" is undesirable. Categorical logic is such a context.
The use of multiple sorts amplifies the empty domain problem. Indeed, each sort has its own domain and a structure is not necessarily uninteresting when some of these domains are empty. When all domains are nonempty, there is no difference between Type 1 and Type 2 ultraproducts. However, if only one of the domains is empty in the structures involved in a Type 1 ultraproduct, then the corresponding domain of the ultraproduct will also be empty. This violates the basic idea that the ultraproduct captures what happens "almost always" in the collection of structures. This problem is fixed with Type 2 ultraproducts where the ultraproduct domain will be nonempty precisely when "almost all" of the corresponding domains in the structures involved in the ultraproduct are nonempty.
(Note that if unary predicates are used to distinguish sorts using the "hack" described above, then the Type 1 ultraproduct does do the right thing and the end result is the "hacked" version of what would be the Type 2 ultraproduct of the structures.)
-
3
Related: See the Appendix (pdf p. 14/16) in archive.numdam.org/ARCHIVE/CTGDC/CTGDC_1986__27_2/… – Benjamin Dickman Aug 24 at 17:49
3
You could of course get a modified version 1' by allowing partial functions, as long as their domain is in the ultrafilter. This type 1' ultraproduct is then canonically isomorphic to the type 1 ultraproduct iff all structures are non-empty, but it will never be empty (unless most of the base structures are empty, of course). – Goldstern Aug 24 at 18:56
3
Yes, this is exactly what the Type 2 ultraproduct is. (Tom's 'explicit' definition is the usual one, I think.) – François G. Dorais♦ Aug 24 at 19:13
4
<flame>I strongly disagree with "avoidance of the empty domain is a good thing". It is a very bad thing, it promotes ad-hoc hacks and definitions, it makes it harder to generalize constructions and proofs. Along the same lines, requiring non-empty metric spaces, non-empty topological spaces, non-trivial rings, etc., is a big mistake. It builds in classical logic, so people have to then rework everything when they pass to sheaves, computability and more generally intuitionistic logic. Bad idea.</flame> – Andrej Bauer Aug 25 at 8:34
3
The criterion is very simple: the correct definition is the one that generalizes to other settings (filter-quotient construction in toposes comes to mind), and it is the one that enjoys a universal property which does not involve ad-hoc side conditions and special cases. – Andrej Bauer Aug 25 at 8:38
Since the question mentions texts with a categorical-logic point of view, which tends to connect with constructive logic, and since Francois's answer (with which I completely agree) adopts the viewpoint of classical logic, it seems worthwhile to add the following. As Francois said, empty domains are not very interesting, so it's natural to ignore them. But in constructive logic, it may not be decidable whether a domain is empty. That is, given a set $D$, one cannot generally assert that either $D$ is empty (and therefore safe to ignore) or $D$ has a member. To prove that the two definitions of ultraproduct give the same (i.e., canonically isomorphic) results, one needs that the domains of the structures have members (and in fact, one needs a choice function for these domains). If one is working in constructive model theory (but nevertheless managed somehow to get hold of an ultrafilter), then it's a bad idea to try to ignore empty structures, and so one must pay attention to the difference between the two definitions of ultraproduct.
-
Thanks, that's helpful too. I suppose I'd been subconsciously assuming that since we're working with ultrafilters, we're in a world where choice is assumed, but now that I think about it, that doesn't hold any kind of water. – Tom Leinster Aug 24 at 22:44
5
+1 for raising the idea of a black market for ultrafilters that a constructivist might furtively purchase. – Terry Tao Aug 25 at 0:40
I once asked a logician why he insisted that algebras be non-empty. In Graetzer's book on universal algebra, he has a theorem that the intersection of any family of subalgebras of an algebra is either empty or a subalgebra. The answer the unnamed logician gave me was that it was because ultraproducts didn't work right otherwise. Specifically, it should be the case (as it is indeed with the second definition) that an ultraproduct be empty iff the set of empty factors is in the ultrafilter. The property of being empty is, I guess a first order property (is it, actually?) and should react like any other first order property. Anyway, the second definition obviously does have the desired property. So any categorist will insist on the second definition.
-
Being empty is first order. It means the statement there exists x is false. – Benjamin Steinberg Aug 25 at 16:37
Michael Barr pointed out one thing that goes wrong if you try to use the type 1 definition when some of the sets involved are empty. Now that I understand this issue better, I'll point out a couple of other things that go wrong. They're both of the form "this theorem holds cleanly for type 2 ultraproducts, but for type 1 you have to make exceptions".
I'll use the standard notation for the ultraproduct of $(X_i)_{i \in I}$ with respect to an ultrafilter $\mathcal{U}$ on $I$: $$\Bigl( \prod_{i \in I} X_i \Bigr) \bigl/ \mathcal{U}$$ (for either of the two definitions). The notation makes more sense for type 1 than type 2, but never mind.
(1) Ultraproduct with respect to a principal ultrafilter is projection. That is, if $k \in I$ and $\mathcal{U}$ is the principal ultrafilter on $k$ then $(\prod X_i)/\mathcal{U} = X_k$. This is true without exception for type 2 ultraproducts. It is almost true for type 1, but fails if $X_k \neq \emptyset$ and $X_i = \emptyset$ for some $i \neq k$.
(2) Ultraproducts preserve finite coproducts. That is, writing $+$ for the coproduct (disjoint union) of sets, $$\Bigl( \prod_{i \in I} (X_i + Y_i) \Bigr) \bigl/ \mathcal{U} \cong \Bigl( \prod_{i \in I} X_i \Bigr) \bigl/ \mathcal{U} + \Bigl( \prod_{i \in I} Y_i \Bigr) \bigl/ \mathcal{U}$$ for any families of sets $(X_i)$ and $(Y_i)$. This is true without exception for type 2 ultraproducts (using the fact that these are ultrafilters). But again, it isn't quite true for type 1: it can fail when some of the sets are empty. For example, let $I$ be a set, choose any nonempty proper subset $J$ of $I$, and put $$X_i = \begin{cases} 1 &\text{if } i \in J\\ \emptyset &\text{otherwise} \end{cases} \qquad\ \qquad Y_i = \begin{cases} \emptyset &\text{if } i \in J\\ 1 &\text{otherwise,} \end{cases}$$ where $1$ denotes a one-element set. Then according to the type 1 definition, $(\prod(X_i + Y_i))/\mathcal{U} = 1$ but $(\prod X_i)/\mathcal{U} + (\prod Y_i)/\mathcal{U} = \emptyset$.
I guess all of these things that go wrong are intimately related to Łoś's theorem, which Michael alluded to.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9214537739753723, "perplexity_flag": "head"}
|
http://mathhelpforum.com/differential-equations/69845-family-functions-differential-equations.html
|
# Thread:
1. ## Family of functions, differential equations
Verify that the indicated family of functions is a solution to the given differential equation.
dP/dt=P(1-P); P=(c1*e^t)/(1+c1*e^t)
Do I take the derivative of P first? If so do I just ignore the constants? Thanks for the help.
2. Originally Posted by cowboys111
Verify that the indicated family of functions is a solution to the given differential equation.
dP/dt=P(1-P); P=(c1*e^t)/(1+c1*e^t)
Do I take the derivative of P first? If so do I just ignore the constants? Thanks for the help.
No, leave the constants alone. Differentiate P (LHS), substitute P into the RHS and show they are equal.
3. I'm still kind of confused. Is there any way you can show me how you would solve it? My class doesn't start until tomorrow so I haven't had a lecture; I'm just trying to read ahead a little bit, but my book has horrible examples.
4. Originally Posted by cowboys111
Verify that the indicated family of functions is a solution to the given differential equation.
dP/dt=P(1-P); P=(c1*e^t)/(1+c1*e^t)
Do I take the derivative of P first? If so do I just ignore the constants? Thanks for the help.
Sure.
$\frac{dP}{dt} = \frac{c e^t}{\left(1 + c e^t \right)^2}$
$P(1-P) = \frac{c e^t}{1 + c e^t} \left( 1 - \frac{c e^t}{1 + c e^t}\right) = \frac{c e^t}{1 + c e^t} \frac{1}{1 + c e^t} = \frac{c e^t}{\left(1 + c e^t \right)^2}$
see - the same.
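If a numerical double-check helps, here is a short base-R sketch (the values of $t$ and $c_1$ below are arbitrary):
```
P    <- expression(c1 * exp(t) / (1 + c1 * exp(t)))
dPdt <- D(P, "t")                     # symbolic derivative with respect to t

t <- 0.7; c1 <- 2.3                   # arbitrary test values
lhs <- eval(dPdt)
rhs <- eval(P) * (1 - eval(P))
all.equal(lhs, rhs)                   # TRUE: dP/dt = P(1 - P)
```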
5. Thanks, I appreciate it.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9020435810089111, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/tags/homomorphic-encryption/info
|
# Tag info
## About homomorphic-encryption
Cryptosystems which support computation on encrypted data. They might be partially homomorphic (support for one operation such as + or *) or they might be fully homomorphic (+ and * at the same time).
Homomorphic cryptosystems are cryptosystems in which, given $c_1=\mathcal{E}(m_1)$ and $c_2=\mathcal{E}(m_2)$, another party can compute some function of $c_1$ and $c_2$. For example, if the other party wants to compute $m_1\cdot m_2$, the cryptosystem allows them to compute $c_1\odot c_2$ such that $\mathcal{D}(c_1\odot c_2) = m_1\cdot m_2$. Note that $\odot$ is not necessarily multiplication (and in fact often is not).
A system is said to be homomorphic with respect to addition if $m_1+m_2$ can be computed from $\mathcal{E}(m_1)$ and $\mathcal{E}(m_2)$. Homomorphic with respect to multiplication is similarly defined. A fully homomorphic cryptosystem would support both addition and multiplication at the same time.
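As a toy illustration of the multiplicative case (only a sketch; the parameters are tiny textbook values and completely insecure): unpadded "textbook" RSA is homomorphic with respect to multiplication, since $\mathcal{E}(m_1)\cdot\mathcal{E}(m_2) \equiv (m_1 m_2)^e \pmod n$.
```
p <- 61; q <- 53; n <- p * q                 # n = 3233
e <- 17; d <- 413                            # e*d = 1 (mod lcm(p-1, q-1) = 780)

powmod <- function(base, expo, m) {          # square-and-multiply; small integers only
  r <- 1; base <- base %% m
  while (expo > 0) {
    if (expo %% 2 == 1) r <- (r * base) %% m
    base <- (base * base) %% m
    expo <- expo %/% 2
  }
  r
}
Enc <- function(m)  powmod(m,  e, n)
Dec <- function(ct) powmod(ct, d, n)

m1 <- 7; m2 <- 11
Dec((Enc(m1) * Enc(m2)) %% n) == (m1 * m2) %% n   # TRUE
```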
Until 2009, no fully homomorphic cryptosystem existed. This changed with Craig Gentry's PhD Thesis. Much development has taken place since to make fully homomorphic encryption practical.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9409925937652588, "perplexity_flag": "middle"}
|
http://www.r-bloggers.com/getting-data-from-an-image-introductory-post/
|
# Getting data from an image (introductory post)
March 5, 2010
By Timothée
(This article was first published on Data visualization (in R), and kindly contributed to R-bloggers)
Hi there!
This blog will be dedicated to data visualization in R. Why? Two reasons. First, when it comes to statistics, I always start with some exploratory analyses, mostly with plots. And when I handle large quantities of data, it’s nice to make some graphs to get a grasp of what is going on. Second, I have been a teacher as part of my PhD, and I was quite appalled to see that even Masters students have very bad visualization practices.
My goal with this blog is to share ideas/code with the R community, and more broadly, with anybody with an interest in data visualization. Updates will not be regular. This first post will be dedicated to the building of a plot digitizer in R, i.e. a small function to get the data from a plot in graphic format.
I have recently been using programs such as GraphClick and PlotDigitizer to gather data from graphs, in order to include them in future analyses (in R). While both programs are truly excellent and highly intuitive (with a special mention to GraphClick), I found myself wondering if it was not possible to digitize a plot directly in R.
And yes, we can. Let’s think about the steps to digitize a plot. The first step is obviously to load the image in the background of the plot. The second is to set calibration points. The third step is boring as hell, as we need to click the points we want to get the data from. Finally, we just need to transform the coordinates into values, with the help of very simple maths. And this is it!
OK, let’s get this started. We will try to get the data from this graph:
### Setting the plot
First, we will be needing the ReadImages library, which we can install by typing:
```
install.packages('ReadImages')
```
This package provides the `read.jpeg` function, which we will use to read a jpeg file containing our graph:
```
mygraph <- read.jpeg('plot.jpg')
plot(mygraph)
```
I strongly recommend that before that step, you start by creating a new window (`dev.new()`), and expand it to full size, as it will be far easier to click the points later on.
### Calibration
So far, so good. The next step is to calibrate the graphic, by adding four calibration points of known coordinates. Because it is not always easy to know both coordinates of a point, we will use four calibration points. For the first pair, we will know the x position, and for the second pair, the y position. That allows us to place the points directly on the axis.
```
calpoints <- locator(n=4,type='p',pch=4,col='blue',lwd=2)
```
We can see the current value of the calibration points :
```
as.data.frame(calpoints)
          x        y
1 139.66429  73.8975
2 336.38388  73.8975
3  58.72237 167.0254
4  58.72237 328.1680
```
### Point'n'click
The third step is to click each point, individually, in order to get the data.
```
data <- locator(type='p',pch=1,col='red',lwd=1.2,cex=1.2)
```
After clicking all the points, you should have the following graph :
Our data are, so far :
```
as.data.frame(data)
          x         y
1  104.8285  78.08303
2  138.6397 114.70636
3  171.4263 119.93826
4  205.2375 266.43158
5  238.0241 267.47796
6  270.8107 275.84901
7  302.5727 282.12729
8  336.3839 298.86939
9  370.1951 306.19405
10 401.9571 352.23481
```
OK, this is nearly what we want. What is left is just to write a function that will convert our data into the true coordinates.
### Conversion
It seems straightforward that the relationship between the actual scale and the scale measured on the graphic is linear, so that
$S = M\cdot a + b$
and as such, both a and b can be simply obtained by a linear regression.
We can write the very simple function `calibrate` :
```
calibrate = function(calpoints,data,x1,x2,y1,y2)
{
  # The first two clicked points carry the known x values (x1, x2),
  # the last two carry the known y values (y1, y2).
  x <- calpoints$x[c(1,2)]
  y <- calpoints$y[c(3,4)]
  # Fit the pixel -> data coordinate relationship as a straight line (two points each).
  cx <- lm(formula = c(x1,x2) ~ c(x))$coeff
  cy <- lm(formula = c(y1,y2) ~ c(y))$coeff
  data$x <- data$x*cx[2]+cx[1]
  data$y <- data$y*cy[2]+cy[1]
  return(as.data.frame(data))
}
```
And apply it to our data :
```
true.data <- calibrate(calpoints,data,2,8,-1,1)
```
Which gives us:
```
true.data
          x          y
1  1.010309 -2.0909091
2  2.000000 -1.6363636
3  3.051546 -1.5714286
4  3.979381  0.2337662
5  5.000000  0.2467532
6  5.958763  0.3506494
7  6.979381  0.4285714
8  8.000000  0.6493506
9  8.989691  0.7272727
10 9.979381  1.3116883
```
And we can plot the data :
```
plot(true.data,type='b',pch=1,col='blue',lwd=1.1,bty='l')
```
Not so bad!
### Conclusion
With the simple use of R, we were able to construct a “poor man’s data extraction system” (PMDES, ©), based on the incorporation of graphics in the plot zone, and the `locator` capacity of R.
We can wrap up everything in functions for better usability:
```
library(ReadImages)

# Read the graph image, plot it, and collect the four calibration clicks
# (two known x positions first, then two known y positions).
ReadAndCal = function(fname)
{
  img <- read.jpeg(fname)
  plot(img)
  calpoints <- locator(n=4,type='p',pch=4,col='blue',lwd=2)
  return(calpoints)
}

# Click the data points themselves; stop with a right-click (or Esc, depending on the device).
DigitData = function(color='red') locator(type='p',pch=1,col=color,lwd=1.2,cex=1.2)

# Convert the clicked pixel coordinates into data coordinates,
# using the known axis values x1, x2, y1, y2.
Calibrate = function(calpoints,data,x1,x2,y1,y2)
{
  x <- calpoints$x[c(1,2)]
  y <- calpoints$y[c(3,4)]
  cx <- lm(formula = c(x1,x2) ~ c(x))$coeff
  cy <- lm(formula = c(y1,y2) ~ c(y))$coeff
  data$x <- data$x*cx[2]+cx[1]
  data$y <- data$y*cy[2]+cy[1]
  return(as.data.frame(data))
}
```
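A typical interactive session with these wrappers might look like this (the file name and axis values are just placeholders; each step waits for mouse clicks on the plot window):
```
cal  <- ReadAndCal('plot.jpg')      # click x1, x2 on the x-axis, then y1, y2 on the y-axis
pts  <- DigitData()                 # click each data point; right-click/Esc to finish
real <- Calibrate(cal, pts, x1=2, x2=8, y1=-1, y2=1)
plot(real, type='b')
```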
Do you have any ideas to improve these functions? Let’s discuss them in the comments!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.937701940536499, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/101323?sort=votes
|
## Repeated Second Eigenvalue of the Adjacency Matrix of a Graph
This question is motivated by a talk I went to earlier today.
Suppose we have a $d$-regular graph $G$ with $n$ vertices, with adjacency matrix $A$.
Let $$\lambda_1\geq \lambda_2 \geq\dots \geq \lambda_n$$ be the eigenvalues of $A$, so in particular $\lambda_1=d$. If the first two eigenvalues are the same, that is $\lambda_2=\lambda_1$, then it tells us a lot about the structure of the graph. In particular, the graph must be disconnected. (This is an if and only if condition)
What if the second and third eigenvalues are equal? That is, suppose that $\lambda_1>\lambda_2=\lambda_3$. What does that tell us (if anything) about the structure of the graph?
Additional questions: If $\lambda_1=\lambda_2=\cdots=\lambda_k>\lambda_{k+1}$, then the graph will have exactly $k$ connected components. What can we say about $G$ if $\lambda_1>\lambda_2=\cdots=\lambda_{k+1}>\lambda_{k+2}$? That is, the second eigenvalue has multiplicity $k$.
What if the $n^{th}$ eigenvalue has multiplicity $k$?
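As a quick numerical illustration of the multiplicity-equals-number-of-components statement, here is an R computation for two disjoint triangles (a disconnected $2$-regular graph on $6$ vertices):
```
tri <- matrix(c(0,1,1, 1,0,1, 1,1,0), nrow = 3)        # adjacency matrix of a triangle
A   <- rbind(cbind(tri, matrix(0,3,3)),
             cbind(matrix(0,3,3), tri))                # two disjoint copies
eigen(A, symmetric = TRUE)$values                      # 2 2 -1 -1 -1 -1
# lambda_1 = d = 2 appears with multiplicity 2, the number of components
```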
-
1
The $d$-eigenspace is rather special. It corresponds to the kernel of the Laplacian which can be naturally identified with the space of locally constant functions on $G$ ("harmonic functions"). A similar interpretation doesn't exist for smaller eigenvalues so I don't expect their multiplicities to have a similar direct meaning. However, large multiplicities of eigenvalues in general suggest (but don't imply) that $G$ has nontrivial symmetries. – Qiaochu Yuan Jul 4 at 19:52
@Qiaochu: Here is one example which may count. If $\lambda_2$ has multiplicity $n-1$, that is all other eigenvalues are the same, then I believe that $G$ must be the complete graph. – Eric Naslund Jul 4 at 20:55
## 1 Answer
If you allow weighted adjacency matrices and if you insist (among other things) that the eigenspace associated to $\lambda_2$ satisfies the "strong Arnold condition", then you are dealing with the Colin de Verdiere invariant. For this, the best I can do now is to refer you to the Wikipedia article on this invariant.
But your question is what can be said about connected graphs where $\lambda_2$ has multiplicity greater than one. The short answer is: very little. One reason for this is that if our graph were associated with some physical or chemical system, then having $\lambda_2$ very close to $\lambda_3$ would have much the same effect as having $\lambda_2=\lambda_3$. There are certainly no results on graph spectra that relate properties of a graph to whether or not $\lambda_2$ is simple.
It is true that the existence of non-trivial automorphisms can prevent all eigenvalues from being simple. The classic result of this type is that if the automorphism group of a graph is vertex transitive, then its eigenvalues can not all be simple. Stronger assumptions may give more, since each eigenspace provides a representation of the automorphism group, and if one of these representations is faithful, then the multiplicity is at least the minimum degree of an irreducible representation of the group.
On the other hand though, having eigenvalues of large multiplicity tells us nothing about the automorphism group. Strongly regular graphs have large multiplicities, but in general no non-trivial automorphisms.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276834726333618, "perplexity_flag": "head"}
|
http://gilkalai.wordpress.com/2009/04/25/a-problem-on-planar-percolation/?like=1&source=post_flair&_wpnonce=a082172354
|
Gil Kalai’s blog
## A Problem on Planar Percolation
Posted on April 25, 2009 by
Conjecture (Gady Kozma): Prove that the critical probability for planar percolation on a Cayley graph of the group $Z^2$ is always an algebraic number.
Gady mentioned this conjecture in his talk here about percolation on infinite Cayley graphs. (Update April 30: Today Gady mentioned to me an even bolder question: to show that for every group $\Gamma$, either all critical probabilities of its Cayley graphs are algebraic, or none is!!; I recall a similarly bold conjecture regarding the property of being an expander which turned out to be false but was fruitful nevertheless.)
Many problems on percolation beyond $Z^d$ were posed by Benjamini and Schramm, (See here (the paper) and here (update web page 1999)). In the talk the beautiful proof by Itai, Russ, Yuval, and Oded of the “dying percolation property” for nonamenable groups was outlined. And the Burton-Keane beautiful result on the unique infinite connected component for the amenable case was mentioned.
Brief explanations: You have an infinite graph $G$. Consider a random subgraph $H$ where every edge of $G$ is taken with probability $p$. (The edges taken are called open, and the edges not taken are called closed.) We ask:
What is the probability $\Theta(p)$ that $H$ has an infinite connected component? (Kolmogorov’s 0-1 law asserts that it is zero or one.)
Now, $\Theta(p)$ is also monotone and the critical probability is the supremum of the set of all $p$ so that $\Theta(p)$ is 0.
The dying percolation property asserts that $\Theta(p_c)=0$. It is known to hold for the standard planar percolation and is a notorious open problem for standard percolation in three dimensions.
Under very very general conditions the number of infinite components is, with probability 1, either zero, one or infinity. There is a beautiful proof of Burton and Keane that for percolation on Cayley graphs of amenable groups, above the critical probability $H$ has, with probability one, a unique infinite component. It is conjectured that for the nonamenable case there is always an interval above the critical probability where $H$ has infinitely many components!
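For readers who want to see the definitions in action, here is a rough Monte Carlo sketch in R. It uses site (not bond) percolation on a finite $N\times N$ piece of $Z^2$, and a left-to-right open crossing as a finite-size stand-in for the infinite cluster; the grid size, the values of $p$, and the number of repetitions are arbitrary choices.
```
spans <- function(N, p) {
  open    <- matrix(runif(N * N) < p, N, N)     # each site open with probability p
  reached <- open & col(open) == 1              # seed: open sites in the first column
  repeat {                                      # grow the open cluster of the left edge
    nb <- reached
    nb[-1, ] <- nb[-1, ] | reached[-N, ]        # propagate down the columns
    nb[-N, ] <- nb[-N, ] | reached[-1, ]        # propagate up the columns
    nb[, -1] <- nb[, -1] | reached[, -N]        # propagate right along the rows
    nb[, -N] <- nb[, -N] | reached[, -1]        # propagate left along the rows
    new <- nb & open
    if (all(new == reached)) break
    reached <- new
  }
  any(reached[, N])                             # is the right edge reached?
}

set.seed(1)
sapply(c(0.45, 0.593, 0.75), function(p) mean(replicate(50, spans(40, p))))
# crossing probability: near 0, intermediate, near 1 -- it jumps around p_c(site) ~ 0.593
```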
### One Response to A Problem on Planar Percolation
1. Gábor Pete says:
Hi Gil, and implicitly Gady,
The conjecture didn’t specify this, but probably Gady meant bond percolation, as opposed to site percolation.
p_c(Z^2,bond)=1/2 is known, while p_c(Z^2,site)=0.5927460… according to simulations. I don’t know if anyone has a guess if this is an algebraic number or not. Furthermore, p_c(triangular lattice,site)=1/2, while the triangular lattice is in fact a Cayley graph of Z^2, so it may very well be that for site percolation the invariance conjecture is simply false.
I think that all known critical values are algebraic, and almost all of them are for bond percolation. Two large sets of critical values calculated are by Iva Kozakova for free-like groups http://front.math.ucdavis.edu/0801.4153, and by Wierman and Ziff for planar lattices http://arxiv.org/abs/0903.3135.
There are these sporadic examples, but I don’t know any philosophical reason why bond p_c’s should be easier to compute, for what groups we should have non-algebraic bond p_c’s, or why the invariance principle suggested by Gady should hold only for bond percolation. Gady, or anyone else?
Gabor
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8975878953933716, "perplexity_flag": "middle"}
|
http://stats.stackexchange.com/questions/6715/can-someone-help-me-understand-what-type-of-problem-i-am-looking-at-not-sure-if
|
# Can someone help me understand what type of problem I am looking at? Not sure if this classifies as hypothesis-testing
Please pardon me if this question is not clear. I am not sure if I am using the right terminologies.
I have conducted an experiment in different environments multiple times. So my data looks something like this:
````Environment1 1.2 2.1 1.1 1.5 1.6
Environment2 4.2 2.6 3.5 2.5 2.9
Environment3 7.2 4.6 5.3 4.5 1.6
Environment4 0.0 0.0 1.2 15.0 0.0
Environment5 3.2 2.4 7.2 5.5 6.6
Environment6 23.2 32.1 18.1 1.5 19.6
````
I can clearly see (or maybe my intuition says) that the experiment was not conducted properly in Environment4 (too low and fluctuating a lot) and Environment6 (way too high) but am not sure how to prove this. Am I supposed to rely on hypothesis-testing with the hypothesis:
The experiment was not conducted properly in Environments 4 and 6.
and then use some procedure to prove this? Or is there a standard way of showing this? Can someone please help me how to approach this kind of problems? I am using R.
-
Nice question, it's a good example to expose to different procedures, because we basically know without any maths or formality that Environment 4 and 6 are different to the rest (and Environment 1 is a little different from 2, 3, and 5). Thus any good procedure should be able to produce the obvious result, the only difference coming from quantifying how different in a mathematical sense. The obvious question is "is there any other way the experiment could have actually produced these results, besides an error?" – probabilityislogic Jan 30 '11 at 6:28
@probabilityislogic: Thank You. What you say is useful: if I can somehow quantify the effectiveness of the experiment in each environment, then maybe I can say something, but am still not sure what to say or how to say it. Ah.. (...feeling stupid typing in puzzles) :) Regarding your question: the experiment was quite controlled in the sense that it was made sure that the environment did not change. However, the procedure could have gone wrong. Maybe the procedure was not executed properly according to the guidelines (perhaps?) – Legend Jan 30 '11 at 6:35
I'm talking more along the lines of "is $32.1$ a physically meaningful quantity? What would happen in the real world if this were correct." It may also be useful to speak to someone who actually did the experiment 4 or 6 (preferably the person who recorded the data). – probabilityislogic Jan 30 '11 at 14:31
@probabilityislogic: I see. I get your point. The data in question is a response time variable. My take on your question would be that the value does make sense in a physical world but it's just unusual enough to be called a rare case. The person I talked to said he did not do anything differently. Actually, the data that I put here is just a sample from the entire data and there are some cases like this spread here and there. – Legend Feb 1 '11 at 0:17
so it would appear that the most likely result is an error, but interesting discoveries can be made if you "dig deeper" so to speak. Could possibly be a new finding of some sort! but don't get too excited, it probably is nothing, but it may be worthwhile to entertain the possibility, and see where it leads you. – probabilityislogic Feb 1 '11 at 2:29
## 1 Answer
You can do a Student's t-test to see if the mean is different between the group 4,6 and the rest. Even though your sample size is small, you will conclude there is a difference. Note that it will tell you that group 4,6 is significantly different on average from the rest, but it won't tell you that "The experiment was not conducted properly in Environments 4 and 6", which can't be answered without knowledge of what "properly" means in the observations.
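Not part of the original answer: a rough sketch of the comparisons being discussed, written here in Python with SciPy (the OP mentions R, where `t.test` and `kruskal.test` are the analogous calls). It uses the sample data from the question and treats Environments 4 and 6 as the suspect group.

```python
# A rough sketch (not from the original answer) of the suggested comparisons,
# using the sample data from the question.  Requires scipy.
from scipy import stats

env = {
    1: [1.2, 2.1, 1.1, 1.5, 1.6],
    2: [4.2, 2.6, 3.5, 2.5, 2.9],
    3: [7.2, 4.6, 5.3, 4.5, 1.6],
    4: [0.0, 0.0, 1.2, 15.0, 0.0],
    5: [3.2, 2.4, 7.2, 5.5, 6.6],
    6: [23.2, 32.1, 18.1, 1.5, 19.6],
}

suspect = env[4] + env[6]                    # pooled "suspicious" group
rest = env[1] + env[2] + env[3] + env[5]     # everything else

# Welch's t-test on the two pooled groups (does not assume equal variances)
welch = stats.ttest_ind(suspect, rest, equal_var=False)
print("Welch t =", welch.statistic, " p =", welch.pvalue)

# Kruskal-Wallis across all six environments (no normality assumption),
# as suggested in the comments
kw = stats.kruskal(*env.values())
print("Kruskal-Wallis H =", kw.statistic, " p =", kw.pvalue)
```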
-
girard: Actually, this question came up from someone on the testing team. Properly means that they were given a set of instructions to execute to get a final value. The experiment will complete even if one of the instructions is skipped but will result in an incorrect observation. I will check out the `student test` that you mentioned. But if the test relies on the mean, isn't the mean supposed to be a bad measure due to its sensitivity to changes in the data values? Thank you for your time. – Legend Jan 30 '11 at 6:52
@Legend A test of difference of means may be inappropriate, but that is not the fault of @robin, as pointed out in the second half of his response, which is apt: the test to use is determined by which characteristic of a suite of results signals an "improper" experiment. You could conduct an F-test for a difference of standard deviations; you could conduct multiple-outlier tests; you could conduct a Kruskal-Wallis test; etc., depending on what kind of differences you're looking for. – whuber♦ Jan 30 '11 at 18:08
@Legend There is also another difficulty that is shadowed by your question, because here you guessed that 4,6 were the different samples. But what if you don't know in advance... you will have to test all configurations and probably introduce a multiple-hypothesis criterion. In this case this looks like outlier detection and many questions have already dealt with that here. – robin girard Jan 31 '11 at 7:35
@whuber: I did not intend to say it is anyone's fault. I am a novice here so I apologize if I sounded so. @robin girard: That is a very interesting take. Thanks. I was just thinking about outlier detection. Will you be able to point me to some relevant material for this particular case? All I have used before are simple ones like k-means etc. – Legend Feb 1 '11 at 0:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9613550305366516, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/statistics/206245-combination.html
|
# Thread:
1. ## Combination
I need to write this expression with one combination number: C(15,3)-C(16,2)=C(n,k) so I need to find n and k
How can I get one from these two without using factorials?
2. ## Re: Combination
Originally Posted by Serillan
I need to write this expression with one combination number: C(15,3)-C(16,2)=C(n,k) so I need to find n and k
How can i get one from these two without using factorials?
$\binom{N+1}{k}=\binom{N}{k-1}+\binom{N}{k}$.
Now $\binom{15}{3}-\binom{16}{2}=335$.
So I did a rather large computer search on various combinations $\binom{N}{k}$ but found none to equal 335. I thought that finding one might give a clue as to how that identity might apply.
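Not from the original post: a small sketch of the kind of computer search described above (assuming Python 3.8+ for `math.comb`). Apart from the trivial $\binom{335}{1}=335$, no binomial coefficient equals $335=5\cdot 67$; since $\binom{N}{2}>335$ once $N\ge 27$, only small $N$ need to be examined for non-trivial $k$.

```python
# A small sketch (not from the original post) of the computer search described:
# look for binomial coefficients C(N, k) equal to C(15,3) - C(16,2) = 335,
# ignoring the trivial representation C(335, 1) = 335.  Requires Python 3.8+.
from math import comb

target = comb(15, 3) - comb(16, 2)
print("target =", target)                      # 335 = 5 * 67

# For 2 <= k <= N-2 we have C(N, k) >= C(N, 2) > 335 once N >= 27,
# so only small N need to be examined.
hits = [(N, k) for N in range(4, 40)
        for k in range(2, N - 1)
        if comb(N, k) == target]
print("non-trivial solutions:", hits)          # expected: []
```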
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9193564653396606, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/101883/college-euclidean-geometry-textbook-recommendations/101895
|
## College (Euclidean) geometry textbook recommendations
I will be teaching a mid-level undergraduate course in Euclidean geometry this fall. Has anyone taught such a course, who can recommend a good textbook?
My students will mostly be future high school math teachers, who have some exposure to proofs and rigor but not extensively so. I hope to cover material such as constructions, semi-advanced theorems (Ceva's theorem, the nine point circle, etc., etc.), and the axiomatic approach. I won't do any projective geometry, or anything similar, as that is covered by a followup course here.
I am hoping to choose a book which covers a variety of approaches (so something short is unlikely to be suitable) and which is suitable for students with uneven preparation (i.e. whose ability to write proofs is shaky at the beginning).
Thank you!
EDIT: Cross posted a related question to Math.SE.
-
What is the point of your course? My view of the high school geometry course is that only the most elementary geometry is important, and the main point of that course is the axiomatic logic. I have seen attempts to give high school geometry teachers access to advanced and exciting developments in geometry, and while conveying excitement and beauty is good, if advanced geometry displaces the logic in a high school course that is probably counterproductive. – Douglas Zare Jul 10 at 20:37
@Douglas: Needless to say, an interesting and important question. I am just beginning to think about this course (it's not due to start for six weeks yet) and I don't have a strong opinion yet. I figured I would start by looking at a couple books and seeing what it was they are trying to accomplish. – Frank Thorne Jul 10 at 20:52
What background in mathematics do your students have? In particular, have they done any proof-based mathematics courses? You need to be realistic about how much you'll actually be able to teach them. – Brian Borchers Jul 11 at 3:49
I think most of them have written proofs of things like the sum of two odd numbers is even, and the like, but their background is still pretty limited. Most of them won't have had analysis or abstract algebra yet. – Frank Thorne Jul 11 at 17:43
## 7 Answers
Frank, I sympathize with your dilemma which I faced for many years and finally found a solution that I am very happy with. Geometry is a multifaceted subject with many beautiful and fascinating topics to explore. The question is what is right for undergraduate students; particularly preservice teachers. I think there are two important objectives.
(1) I agree wholeheartedly with your first respondent, Douglas, who said that the primary purpose of a geometry course is to immerse students in a logical development of the subject from axioms. Axiomatic geometry was studied for 2000 years by anyone seeking a thorough education because it is an exercise in building facts from given information, something we all need to be able to do. Unfortunately the axiomatic approach was phased out of most of our secondary curricula in the seventies.
(2) Since your students, like most of mine, are future teachers, you want a book that covers the topics they will actually be teaching but at a more advanced level. Brendan and others emphasized this point.
There is a serious problem finding a book that does both (1) and (2).
There are two ways to fulfill requirement (1). You can use a book based on Euclid's axioms. Euclid's work was a great landmark in the history of western thought, but it is severely out of date today because it was written before we really understood axiomatic systems, before we had Dedekind's real number continuum to measure lengths, and before we had Lebesgue's theory of measure as a basis for measuring areas. The alternative is to use a version of Hilbert's axioms (e.g., Moore's or Birkhoff's). These modern approaches are mathematically sound and complete, overcoming the problems of Euclid. But they are not useful for our students. The approach is highly abstract, beginning with very rudimentary axioms about points, lines and betweenness, and building a thorough but tedious foundation before getting into the substance required of (2). If you do this at a pace that students can absorb, you have no chance of getting to most of the requirements of (2) in a single semester.
Frustrated by these two alternatives, I recently developed a new and modern axiom system from which students can develop the standard topics required by (2) in a semester course. The text was refined through feedback from users of early drafts for several years before it was published by the AMS in the MSRI-MCL series, and it has just become available. It is only \$39 for students, \$32 for AMS members, and free for instructors who teach from it. See
www.ams.org/bookstore?fn=20&arg1=mclseries&ikey=MCL-9
or go to the AMS Bookstore and find the Math Circles Library.
-
Thanks so much David. What I feel I'd like to accomplish is slightly difficult, but your motivation is very, very understandable. I ordered a review copy just now. – Frank Thorne Jul 18 at 14:30
You can never go wrong with J. Hadamard's classic Lessons in Geometry, recently translated by AMS. This translation covers only half of Hadamard's book, namely plane geometry.
-
Hartshorne's Geometry: Euclid and Beyond (Springer Undergraduate Texts in Mathematics). I think it's a very instructive book and seems to be suitable for your purposes. He presents various geometrical constructions, Hilbert's Axioms (incidence, betweenness, congruence etc. ), geometry over fields, rigid motions, and so forth.
-
Seconding Liviu's reply about Hadamard's Lecons! This used to be one of my favorite texts (although I, too, didn't care much about the stereometric part).
As far as those "semi-advanced theorems" go, there are lots of sources for them nowadays. In no particular order:
H. S. M. Coxeter, Samuel L. Greitzer, Geometry Revisited
Nathan Altshiller-Court, College Geometry.
Roger A. Johnson, Advanced Euclidean Geometry.
Ross Honsberger, Episodes in Nineteenth and Twentieth Century Euclidean Geometry.
Coxeter/Greitzer is the most well-known of these, I think for good reasons. Altshiller-Court is pretty comprehensive (far more than you'll need unless the whole course is supposed to be about these semi-advanced theorems). Johnson is interesting (it includes some beautiful things that few remember nowadays) but aged (some proofs need a serious amount of work to be made correct by 20th century standards). Honsberger is no more systematic than its name ("Episodes") would suggest, but it is very readable.
-
I really like Isaacs' "Geometry for College Students", which I have taught from several times. It is proof-focussed but not pedantic. It's also pretty cheap (\$62).
http://www.ams.org/bookstore-getitem/item=amstext-8
-
When did USD 62 become "pretty cheap" for a 216-page text? – darij grinberg Jul 11 at 17:00
That's the book that was recommended to me here. The previous instructor really liked the book, but he also said that there was a serious lack of easy exercises, and that the weaker students were unable to get through the course without a lot of hand-holding. – Frank Thorne Jul 11 at 17:41
I would recommend Alfred Posamentier's Advanced Euclidean Geometry (Key College Press, 2002). It covers much of the same topics as Geometry Revisited by Coxeter/Greitzer and Episodes... by Honsberger, and it also presents accompanying technology (namely, Sketchpad applications) that allow the students to play around with the results. That is, it gives the students more opportunity to learn how to think geometrically.
I love all of the texts mentioned (Altshiller-Court, Coxeter, Coxeter/Greitzer, Honsberger,...), but their approach to the material is very different from what undergraduates would be used to. And very different than what they will be teaching.
You might find yourself spending a lot of time helping them process the text material into concepts they would find more natural, particularly, if any one of these were the primary text of the course. This may be more work than you would have originally desired: not only teaching the mathematical content but also how to translate mathematical texts.
Then again... this is a good thing for a high school teacher to know how to do...
Either way, I strongly recommend that you look at the Common Core Standards for geometry (at http://www.corestandards.org/the-standards/) and familiarize yourself with what content these future teachers will expect to be teaching. From there, you can get a good idea of what type of thinking and content knowledge you feel that someone would need to teach this material excellently.
-
I was teaching a similar course 2 years ago and also had a problem finding the right book.
The course I had to teach had to cover the axiomatic approach and give some feeling for Lobachevsky geometry.
There are many books for school students, but they are not exactly suitable for undergraduates. I ended up writing my own lecture notes, based on Birkhoff's axioms (= minimalism with no cheating). (This year I will teach it again; maybe after that the lecture notes will become better.)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.966372013092041, "perplexity_flag": "middle"}
|
http://stats.stackexchange.com/questions/11832/how-do-you-derive-the-conditional-variance-for-s2-the-ols-estimator-of-sig
|
How do you derive the conditional variance for $s^2$, the OLS estimator of $\sigma^2$?
I just need a little bit of a push in the right direction. I'm working my way through Hayashi's Econometrics and hit a snag in section 1.4. Review question 7 asks:
Show that, under Assumptions 1.1-1.5,
$Var(s^2|X)=\frac{2\sigma^4}{n-K}$
Hint: If a random variable is distributed as $\chi^2(m)$, then its mean is $m$ and variance $2m$.
I figure this needs to be broken into two parts – the first showing that $s^2$ follows a $\chi^2$ distribution, and the second part showing that the mean is the expression above sans the 2.
The book gives a couple of hints about the kinds of things that follow $\chi^2$ distributions. Here's a footnote on page 41:
Fact: Let x be an $m$-dimensional random vector. If $x\sim N(\mu,\Sigma)$ with $\Sigma$ nonsingular, then $(x-\mu)'\Sigma^{-1}(x-\mu)\sim\chi^2(m)$.
This doesn't do me much good though. Secondly, there's this bit on page 37:
Fact: If $x\sim N(0,I_n)$ and $A$ is idempotent, then $x'Ax$ has a chi-squared distribution with degrees of freedom equal to the rank of A.
But $\varepsilon$ (measurement error) doesn't follow the standard normal distribution – its variance is $\sigma^2$, so this isn't much use to me either. I'm just starting out and not really sure how to tackle this. Could someone give me a hand?
-
What do you know about how the variance changes when you scale a random variable (or vector!)? For example, if $\mathrm{Var}(Z) = 1$, what is $\mathrm{Var}(\sigma Z)$? – cardinal Jun 11 '11 at 17:07
That's what I was thinking, but there's still the $n-k$ to worry about. Since $Var(s^2|X)=Var(e'e/(n-k)|X)$ and $n-k$ doesn't depend on $X$ you can take it out of the parentheses - but that would give $(n-k)^{-2}Var(e'e|X)$, wouldn't it? Then if the variance of $e$ is $\sigma^2$ the exponents aren't going to match the expression in the answer. I know I'm missing something, but I can't figure out what it is. – jefflovejapan Jun 12 '11 at 2:26
1 Answer
Browsing around in the online Google version of the book it seems to me that Assumption 1.5 is the normality assumption. In that case the proof of Proposition 1.3 says that $q|X \sim \chi^2(n-K)$ where $q = (n-K)s^2/\sigma^2$. Thus $$\begin{array}{rcl} \text{Var}(s^2|X) & = & \text{Var}(\sigma^2 q/(n-K)|X) \\ & = & \frac{\sigma^4}{(n-K)^2} \text{Var}(q|X) \\ & = & \frac{\sigma^4}{(n-K)^2} 2(n-K) \\ & = & \frac{2\sigma^4}{n-K} \end{array}$$ where we used the hint for the third equality.
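Not part of the original answer: a quick numerical sanity check, sketched in Python with NumPy and with arbitrary illustrative values for $n$, $K$, $\sigma$ and the design matrix, that the simulated conditional variance of $s^2$ matches $2\sigma^4/(n-K)$.

```python
# A quick numerical sanity check (not from the original answer): simulate the
# normal linear model y = X b + eps for a fixed design X, compute
# s^2 = e'e/(n-K) over many replications, and compare the sample variance of
# s^2 with the theoretical value 2*sigma^4/(n-K).
import numpy as np

rng = np.random.default_rng(0)
n, K, sigma = 30, 3, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # fixed design
beta = np.array([1.0, -0.5, 2.0])
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T   # residual-maker matrix

s2_samples = []
for _ in range(20000):
    y = X @ beta + sigma * rng.normal(size=n)
    e = M @ y                                      # OLS residuals
    s2_samples.append(e @ e / (n - K))

print("simulated  Var(s^2 | X):", np.var(s2_samples))
print("theoretical 2*sigma^4/(n-K):", 2 * sigma**4 / (n - K))
```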
-
Thanks a million! – jefflovejapan Jun 12 '11 at 7:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445560574531555, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/131807/deriving-the-exponential-distribution-from-a-shift-property-of-its-expectation?answertab=votes
|
# Deriving the exponential distribution from a shift property of its expectation (equivalent to memorylessness).
Suppose $X$ is a continuous, nonnegative random variable with distribution function $F$ and probability density function $f$. If for $a>0,\ E(X|X>a)=a+E(X)$, find the distribution $F$ of $X$.
-
0% accept rate. – Did Apr 14 '12 at 18:49
Is this a homework question? Also, what did you try? Where did you get stuck? – user2468 Apr 14 '12 at 19:04
24 questions, 0 accepted???? please, hadisanji, fix that – leonbloy Apr 14 '12 at 23:11
## 3 Answers
Hopefully there is a more elegant solution, but let us say $\mu=\mathbb{E}[X]$, and start with the definition of conditional expectation: $$\eqalign{ \mathbb{E}[X|X>a] &=& \int_0^{\infty}x\,f_{X\mid X>a}(x)\,dx \\ \mu + a &=& \int_{a}^{\infty}x\,\frac{f(x)}{1-F(a)}\,dx \\ \left(\mu+a\right)\,\left(1-F(a)\right) &=& \int_{a}^{\infty}x\,f(x)\,dx \,. }$$ Differentiating with respect to $a$, we find $$\eqalign{ 1-F(a)-\left(\mu+a\right)f(a) &=& -a\,f(a) \\\\ 1-F(a)-\mu f(a) &=& 0 \\\\ F(a) + \mu F\,'(a) &=& 1 }$$ which is an ordinary differential equation, solvable by standard methods, e.g., by multiplying by the integrating factor:
$$\eqalign{ F(x) + \mu F\,'(x) &=& 1 \qquad\text{for}\qquad x\ge0 \\\\ F\,e^{x/\mu} + \mu F\,'\,e^{x/\mu} &=& e^{x/\mu} \\\\ \left( \mu\,F\,e^{x/\mu} \right)' &=& e^{x/\mu} \\\\ \mu\,F(x)\,e^{x/\mu} &=& \int e^{x/\mu} dx = \mu \, e^{x/\mu} + c \\\\ \mu\,F(x) &=& \mu + c \, e^{-x/\mu} }$$ At $x=0$, since $X$ is continuous and nonnegative, it must be the case that $F(0)=0$, from which it follows that $c=\mu F(0)-\mu=-\mu$, giving us the CDF $$F(x) = 1 - e^{-x/\mu} = 1 - e^{-\lambda x}$$ and the exponential density $$f(x) = \frac1\mu\,e^{-x/\mu} = \lambda \, e^{-\lambda x}$$ where the scale parameter (mean) $\mu$ and the (decay) rate parameter $\lambda$ are reciprocally related, i.e., $\lambda\mu=1$.
EDIT: There is indeed now a more elegant solution, thanks to Didier.
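Not part of the original answers: a quick simulation sketch (Python with NumPy, with an arbitrary choice of mean) checking that exponential samples satisfy the defining shift property $\mathrm E[X\mid X>a]=a+\mathrm E[X]$ from the question.

```python
# A quick simulation check (not part of the original answers): for exponential
# samples, the conditional mean E[X | X > a] should be approximately a + E[X].
import numpy as np

rng = np.random.default_rng(1)
mu = 2.0                                    # mean, so the rate is lambda = 1/mu
x = rng.exponential(scale=mu, size=2_000_000)

for a in (0.5, 1.0, 3.0):
    tail = x[x > a]
    print(f"a = {a}: E[X | X > a] ~ {tail.mean():.3f},  a + E[X] = {a + mu:.3f}")
```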
-
One thing that I can't figure out, the question does not require f to be continuous or anything particular, why is the derivative of $\int_a^\infty xf(x)$ equal to $-af(a)$? – Ivan Apr 26 '12 at 4:08
@Ivan: I think it still follows from the FTOC. Formally, a continuous random variable $X$ must have absolutely continuous CDF $F$, of which its density $f$ is the Radon–Nikodym derivative, with respect to Lebesgue measure $\lambda$. – bgins Apr 26 '12 at 13:48
Thank you for replying, I'd love a clarification for my own self. Correct me if I'm wrong, but I thought that the FTOC can only guarantee that the derivative exists almost everywhere. This implies that the first order differential equation is only almost everywhere satisfied by $F(x)$, and therefore is not uniquely given by the exponential solution any more. – Ivan Apr 26 '12 at 17:12
I don't think $f$ is uniquely determined. It cannot be derived from the differential equation $F(a)+\mu F'(a)=1$ since this equation is not valid on a set of measure zero. At least I don't know how to solve such equations. I agree that the exponential distribution is a solution to the above problem. I have a feeling that it might not be unique and I'd like to have an argument for or against my feeling... – Ivan Apr 27 '12 at 3:36
About the necessary hypotheses (and in relation to a discussion somewhat buried in the comments to @bgins's answer), here is a solution which does not assume that the distribution of $X$ has a density, but only that $X$ is integrable and unbounded (otherwise, the identity in the post makes no sense).
A useful tool here is the complementary CDF (survival function) $G$ of $X$, defined by $G(a)=\mathrm P(X\gt a)$ for every $a\geqslant0$; let $m=\mathrm E(X)$. The identity in the post is equivalent to $\mathrm E(X-a\mid X\gt a)=m$, which is itself equivalent to $\mathrm E((X-a)^+)=m\mathrm P(X\gt a)=mG(a)$. Note that $m\gt0$ by hypothesis. Now, for every $x$ and $a$, $$(x-a)^+=\int_a^{+\infty}[x\gt z]\,\mathrm dz.$$ Integrating this with respect to the distribution of $X$ yields $$\mathrm E((X-a)^+)=\int_a^{+\infty}\mathrm P(X\gt z)\,\mathrm dz,$$ hence, for every $a\gt0$, $$mG(a)=\int_a^{+\infty}G(z)\,\mathrm dz.$$ This proves ${}^{(\ast)}$ that $G$ is infinitely differentiable on $(0,+\infty)$ and that $mG'(a)=-G(a)$ for every $a\gt0$. Since the derivative of the function $a\mapsto G(a)\mathrm e^{a/m}$ is zero on $a\gt0$ and $G$ is continuous from the right on $(0,+\infty)$, one gets $G(a)=G(0)\mathrm e^{-a/m}$ for every $a\geqslant0$.
Two cases arise: either $G(0)=1$, then the distribution of $X$ is exponential with parameter $1/m$; or $G(0)\lt1$, then the distribution of $X$ is a barycenter of a Dirac mass at $0$ and an exponential distribution. If the distribution of $X$ is continuous, the former case occurs.
${}^{(\ast)}$ By the usual seesaw technique: the RHS converges hence the RHS is a continuous function of $a$, hence the LHS is also a continuous function of $a$, hence the RHS integrates a continuous function of $a$, hence the RHS is a $C^1$ function of $a$, hence the LHS is also a $C^1$ function of $a$... and so on.
-
Thank you, @Didier! – bgins Apr 29 '12 at 11:55
(+1) Nice. My hunch is that the restriction to continuous random variables was thrown in there to avoid the geometric distribution, though the memorylessness of the latter only strictly holds for $a \in \mathbb N$. – cardinal Apr 29 '12 at 17:10
@cardinal (Thanks.) Hmmm... Yes, you are probably right about the motivation for the restriction. – Did Apr 29 '12 at 20:49
Yes, the exponential distribution is the ONLY continuous distribution satisfying the property, because the above differential equation has a unique solution. For the proof of this fact you can also have a look at the page on the exponential distribution at Statlect (the rate parameter and its interpretation - proof).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 65, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9409360885620117, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/10594/is-there-any-value-in-studying-divisors-with-coefficients-in-a-ring-r
|
## Is there any value in studying divisors with coefficients in a ring R?
As a rule, the various groups and quotients of the divisor group on a variety have coefficients in $\mathbb{Z}$. That is, you take $\mathbb{Z}$-linear combinations of Weil divisors or Cartier divisors, and then to construct other groups you take quotients.
However, in some cases, people tensor with $\mathbb{Q}$ and $\mathbb{R}$. So my question is:
Are these the only rings that people use as coefficients for divisors on a variety?
My vague intuition is that it probably is, because $\mathbb{Z}$ is initial in commutative rings with identity, $\mathbb{Q}$ is a field of characteristic zero, so we can use it to kill torsion, and $\mathbb{R}$ is complete, so we can guarantee that there is an $\mathbb{R}$-divisor, plus with orbifolds, rational coefficients seem to show up naturally. But is this it? More generally, what about for cycles and cocycles? There's an analogy with cohomology and the Chow ring, and we do sometimes take cohomology with coefficients either in an arbitrary ring or in some other rings (finite fields, for instance, when studying things like nonorientable manifolds), which is why I started wondering about this.
-
If $\mathbb{R}$ is complete, so are the $\mathbb{Q}_p$ for each prime $p$. – Anweshi Jan 3 2010 at 16:20
Well, then I guess I should say "complete ordered field" then, to capture what I really meant. – Charles Siegel Jan 3 2010 at 17:08
## 4 Answers
The answer to the question is yes. For example, if $\omega$ is a meromorphic 1-form on a curve (smooth and projective, say) over a field $k$, then one can naturally form a degree zero divisor with coefficients in $k$, namely the residue divisor of $\omega$.
-
If you view a $\mathbb{Z}$-valued Cartier divisor (on say, an integral separated scheme X) as a $\mathbb{G}_m$-torsor on X with generic trivialization, then for any torus T with character group $X^\ast(T)$, an $X^\ast(T)$-valued divisor on X is a T-torsor with generic trivialization. The analogous construction will work for any group of multiplicative type, if you replace the character group with a suitable fpqc sheaf of abelian groups on the base. This shows up for X a curve in some treatments of geometric class field theory, since the space of these divisors is the affine Grassmannian (in the sense of Beilinson-Drinfeld) for T over the Ran space of X. This space is used in, e.g., Gaitsgory's Twisted Whittaker model paper, where it forms a home (together with some similar objects) for factorizable sheaves.
If R is a number ring, you can demand that $X^\ast(T)$ be a sheaf of R-modules, so in this case, you're looking for torsors under tori with CM, with generic trivializations. I don't know where or if this is used, but it seems interesting enough.
-
I think there could be different definitions of what a divisor is, but if you want to keep the property that divisors are connected to line bundles, you are forced to restrict to combinations of subvarieties with coefficients in $\mathbb Z$.
There is nothing preventing you from considering this group tensored with $\mathbb Q$, $\mathbb R$ (as you mention) or $\mathbb F_q$ (as Emerton's example does), but you won't get anything essentially new by different coefficients here in contrast with the cohomology, where in general $H^i(X, k) \ne H^i(X, \mathbb Z) \otimes k$.
Thus cohomology with the coefficients in sheaves other than $\mathbb Z$ appears more often than group of divisors with coefficients other than $\mathbb Z$, where it doesn't present any new phenomenon.
-
I think that line bundles over gerbes have a notion of associated divisor where the coefficients are not integers.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.941796064376831, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/135553/using-the-definition-of-a-concave-function-prove-that-fx-4-x2-is-concave-d/135650
|
# Using the definition of a concave function prove that $f(x)=4-x^2$ is concave (do not use derivative).
Let $D=[-2,2]$ and $f:D\rightarrow \mathbb{R}$ be $f(x)=4-x^2$. Sketch this function. Using the definition of a concave function prove that it is concave (do not use derivative).
Attempt: $f(x)=4-x^2$ is a down-facing parabola with vertex at $(0,4)$. I know that. But what is $D=[-2,2]$ given for? Is it the domain or a point?
Then, how do I prove that $f(x)$ is concave using the definition of a concave function? I got the inequality which should hold for $f(x)$ to be concave:
For two distinct non-negative values of $x$ ($u$ and $v$)
$f(u)=4-u^2$ and $f(v)=4-v^2$
Condition of a concave function:
$\lambda(4-u^2)+(1-\lambda)(4-v^2)\leq4-[\lambda u+(1-\lambda)v]^2$
I do not know what to do next.
-
$D=[-2,2]$ is a domain consisting of an interval from $-2$ through to $2$ – Henry Apr 22 '12 at 23:20
Your condition for a concave (not convex) function is missing a $-$ and has a $-$ where it should have a $+$. It should be $\lambda(4-u^2)+(1-\lambda)(4-v^2)\leq 4 - [\lambda u+(1-\lambda)v]^2$ and requires $0 \le \lambda \le 1$. – Henry Apr 22 '12 at 23:23
ok I corrected the respective typos. – Dostre Apr 23 '12 at 5:05
## 3 Answers
If you expand your inequality, and fiddle around you can end up with
$$(\lambda u-\lambda v)^2\leq (\sqrt{\lambda}u-\sqrt{\lambda}v)^2.$$
Without loss of generality, you may assume that $u\geq v$. This allows you to drop the squares. Another manipulation gives you something fairly obvious. Now, work your steps backwards to give a valid proof.
-
thanks but I would really appreciate it if you told me what that obvious manipulation was right away because it took a long time to figure it out. – Dostre Apr 23 '12 at 5:06
@Dostre: There are a couple of things you could simplify this to. I just moved all the $v$'s to one side and all the $u$'s to the other and factored. The assumption that $u\geq v$ gives you what you want then. – Joe Johnson 126 Apr 23 '12 at 11:05
To be concave $f(x)=4-x^2$ should satisfy the condition for a concave function:
For two distinct values of x (u and v) such that $f(u)=4-u^2$ and $f(v)=4-v^2$ the following inequality should be true:
$$\lambda f(u)+(1-\lambda)f(v)\leq f(\lambda u+(1-\lambda)v), \;\text{for}\; 0<\lambda<1$$
which turns into the inequality below:
$$\lambda(4-u^2)+(1-\lambda)(4-v^2)\leq4-[\lambda u+(1-\lambda)v]^2$$
To show that the above inequality is true, first, I expanded it and made it look like the expression Joe posted(remember that $0<\lambda<1$ by definition of a concave function). After expanding LHS I got:
$4\lambda -\lambda u^2 +4-4\lambda -v^2+\lambda v^2\leq4-[\lambda u+(1-\lambda)v]^2$
Then, I canceled out the terms $4\lambda , 4$ and subtracted the RHS from the LHS:
$-\lambda u^2-v^2+\lambda v^2+(\lambda u+v-\lambda v)^2\leq0$
After expanding the expression in parentheses, the $v^2$ terms cancel and the $\lambda v^2$ terms combine into $-\lambda v^2$. Rearranging the remaining terms I got:
$\lambda^2 u^2-2\lambda^2 uv+\lambda^2 v^2\leq\lambda u^2-2\lambda uv+\lambda v^2$
Which turns into the expression Joe posted:
$(\lambda u-\lambda v)^2\leq (\sqrt{\lambda}u-\sqrt{\lambda}v)^2$
Then, I factored out $\lambda 's$ and subtracted the RHS from the LHS:
$\lambda ^2(u-v)^2-\lambda (u-v)^2\leq0$
Finally I factored out $\lambda (u-v)^2$ and got:
$\lambda (u-v)^2(\lambda-1)\leq0$
which is definitely true because by definition $0<\lambda<1$ and $u\ne v$, which makes the LHS strictly negative:
$\lambda (u-v)^2(\lambda-1)<0$
I fiddled around with the terms of the inequality that should be proved for the $f(x)=4-x^2$ to be concave and turned it into the form that clearly shows that it is true for $0<\lambda<1$ which is a part of the condition for a concave function.
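Not part of the original answers: a quick numerical sketch (Python with NumPy, over an arbitrary grid) checking the defining inequality $\lambda f(u)+(1-\lambda)f(v)\le f(\lambda u+(1-\lambda)v)$ for $f(x)=4-x^2$ on $[-2,2]$.

```python
# A quick numerical check (not part of the original answers) of the concavity
# inequality lambda*f(u) + (1-lambda)*f(v) <= f(lambda*u + (1-lambda)*v)
# for f(x) = 4 - x^2, over a grid of u, v in [-2, 2] and lambda in (0, 1).
import numpy as np

def f(x):
    return 4 - x**2

u = np.linspace(-2, 2, 41)
v = np.linspace(-2, 2, 41)
lam = np.linspace(0.01, 0.99, 25)
U, V, L = np.meshgrid(u, v, lam)

lhs = L * f(U) + (1 - L) * f(V)
rhs = f(L * U + (1 - L) * V)
print("inequality holds on the whole grid:", bool(np.all(lhs <= rhs + 1e-12)))
```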
-
If you know that for continuous functions an equivalent definition concavity is $$f\left(\frac{x+y}{2}\right) \geq \frac{f(x)+f(y)}{2}$$ and you're allowed to use this fact, the proof is easy. (This fact was mentioned in several questions at this site, perhaps more often in the dual formulation for convex functions, e.g. here.)
The following inequalities are equivalent $$\begin{align*} f\left(\frac{x+y}{2}\right) &\geq \frac{f(x)+f(y)}{2}\\ 4-\left(\frac{x+y}2\right)^2 &\ge 4-\frac{x^2+y^2}2\\ \frac{x^2+y^2}2 &\ge \left(\frac{x+y}2\right)^2\\ \frac{x^2+y^2}2 &\ge \frac{(x+y)^2}4\\ 2(x^2+y^2) &\ge (x+y)^2\\ x^2-2xy+y^2 &\ge 0\\ (x-y)^2 &\ge 0 \end{align*}$$
-
I did not know about that equivalent definition. Makes the whole thing much easier. thanks. – Dostre Apr 23 '12 at 6:35
They are not equivalent for arbitrary functions, but if the function is continuous (which is your case), then they are. (Just wanted to remind you not to forget that the assumption that $f$ is continuous cannot be completely left out.) – Martin Sleziak Apr 23 '12 at 7:47
Thanks very helpful – Dostre Apr 23 '12 at 7:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9389368295669556, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/27029/simple-question-on-the-foundations-of-spin-foam-formalism/27030
|
Simple question on the foundations of spin foam formalism
To make it simple, take the spin foam formalism of ($SU(2)$) 3D gravity. My question is about the choice of the data that will replace the (smoothly defined) fields $e$ (the triad) and $\omega$ (the connection) on the discretized version of space-time $\mathcal{M}$: the 2-complex $\Delta$: why choose to replace $e$ by the assignment of elements $e\in su(2)$ to each 1-cell of $\Delta$, and elements $g_{e}\in SU(2)$ to each edge in the dual 2-complex $\mathcal{J}_{\Delta}$? I mean, these are both $su(2)$-valued 1-forms, thus, roughly speaking, assigning elements of $su(2)$ to vectors of the tangent bundle $\mathcal{TM}$, in other terms assigning elements of $su(2)$ to infinitesimal displacements represented by the 1-cells of $\Delta$. I can understand the choice for $e$, but not for $\omega$: why is this so? Why is it not the inverse choice?
-
1 Answer
Well, I'll take a shot at this - most of what I'm going to say comes from the text of Thiemann or Rovelli. The choice of $g_e\in SU(2)$ is connected to the holonomy, which we choose to use because it is a gauge-invariant. In fact, it's probably more pedagogical to say "the loops come from the holonomy $g_e\in SU(2)$, which we know can be made to be gauge-invariant Wilson loops" or something. Now as far as the choice for the 2-complex, we want something related to the triads so that the original Poisson algebra between the connection $A$ and fields $E$ is preserved. It actually turns out that $E$ is an $SU(2)$-valued vector density - dual to a 2-form represented by the 2-complex. In other words, we don't have two 1-forms, but rather a 1-form and an $SU(2)$-valued vector.
So that's a short answer coming mostly from Thiemann's excellent text. This issue is taken up in depth in the third of the Ashtekar series of papers:
Ashtekar, Corichi, Zapata (1998), Quantum Theory of Geometry: III. Non-commutativity of Riemannian structures, Class. Quan. Grav. 15, 2955-2972.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934738039970398, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/44897/general-relativity-and-the-microscopic-macroscopic-distinction
|
# General relativity and the microscopic/macroscopic distinction
Here is Wikipedia's diagram of the stress-energy tensor in general relativity:
I notice that all of its elements are what would be termed "macroscopic" quantities in thermodynamics. That is, in statistical mechanics we would usually define these quantities in terms of an ensemble of systems, rather than in terms of the microscopic state of a single system. (This doesn't make much difference for large systems, but for small ones it does.) This observation leads me to a number of questions - I hope it's ok to post them all as a single Question, since they're so closely related:
• Am I correct in inferring that general relativity is actually a macroscopic, or phenomenological theory, rather than a theory about the microscopic level?
• Was Einstein explicit about this in deriving it? Or did he simply start by assuming that matter can be modelled as a continuously subdivisible fluid and take it from there?
• If general relativity is a macroscopic theory, what does the microscopic picture look like? (I'd expect that this is actually unknown, hence all the excitement about holography and whatnot, but perhaps I'm being naïve in thinking that.)
• Are there cases in which this continuous fluid approximation breaks down? For example, what if there are two weakly interacting fluids with different pressures?
• If general relativity is a macroscopic theory, does it imply that space and time themselves are macroscopic concepts?
• Is this related to the whole "is gravity an entropic force?" debate from a few years ago?
-
1
While some quantum/stat mech types might want you to believe anything macroscopic is phenomenological curve-fitting and somehow less pure, that's no more true than saying all non-relativistic physics is phenomenological and thus without a theoretical basis. – Chris White Nov 23 '12 at 8:39
@ChrisWhite phenomenological $\ne$ without a theoretical basis. – Nathaniel Nov 23 '12 at 8:40
..and certainly $\ne$ curve fitting! – Nathaniel Nov 23 '12 at 8:40
1
@ChrisWhite in other words, I didn't mean in the slightest to imply anything negative about general relativity by suggesting that it's a macroscopic theory. As a thermodynamicist, I consider macroscopic theories to be a very pure form of physics indeed. – Nathaniel Nov 23 '12 at 8:42
1
Very well - I guess I've just been jaded by seeing too many physicists deride other branches of physics for not using primitive enough principles. – Chris White Nov 23 '12 at 8:45
## 5 Answers
Before answering, I would like to say that the difference between macroscopic and microscopic is not made in terms of ensembles of systems; in fact, quantum mechanics has an ensemble interpretation. About your questions, my answers are the following:
• Yes. General relativity is a pre-quantum theory, which means that it does not account for the discrete particle-like structure of matter. Particularly, I never use the term "phenomenological theory", which I consider a misnomer.
• Yes, Einstein, Grossmann, and Hilbert explicitly ignored the structure of matter when developed general relativity.
• There is no microscopic picture of general relativity, because it is a (geo)metric theory, somewhat as there is no microscopic picture of geometric optics. Of course there is a microscopic picture of physical optics, which we call quantum optics. Quantum gravity is currently under active research. A first step is the quantum field theory of gravitons, whose "microscopic picture" is close to that of quantum electrodynamics.
• There are many cases where the continuous fluid approximation used in general relativity breaks down. E.g. if there are shock waves in your interacting fluids, then they cannot be described by a continuous fluid model. The best that you can do is to describe matter at the mesoscopic level and gravity at the macroscopic level. An example is the Einstein/Vlasov approach. Matter (e.g. a collision-less plasma) is described by the Vlasov kinetic equation, but $g_{\mu\nu}$ is obtained from an approximated energy-momentum tensor $T_{\mu\nu}$ which is computed from averaging over matter with the help of the kinetic $f(x,p,t)$ (see eq. 32 in above link). Both mesoscopic and microscopic descriptions of gravity are entirely outside the scope of GR.
• No. Because the (geo)metric model of general relativity is not fundamental, as Feynman already noted [1]:
It is one of the peculiar aspects of the theory of gravitation, that it has both a field interpretation and a geometrical interpretation. [...] The geometrical interpretation is not really necessary or essential to physics.
The underlying quantum theory of gravity uses, essentially, the same space and time as quantum mechanics.
• No. There are lots of flawed thermodynamic analogies found in the general relativity literature (black hole thermodynamics being the more popular of them).
[1] Feynman Lectures on Gravitation 1995: Addison-Wesley Publishing Company; Massachusetts; John Preskill; Kip S. Thorne (foreword); Brian Hatfield (Editor). Feynman. Richard P.; Morinigo, B. Fernando; Wagner, William G.
-
What would you say does define the microscopic/macroscopic distinction, if not the use of ensembles? In my view, the difference is that macroscopic quantities require the entropy to be defined in order for them to make sense (e.g. pressure can be defined as $-\partial U/\partial V |_{S,{N_i}}$, in which entropy must be held constant when calculating the derivative), and you can't have entropy without an ensemble. – Nathaniel Nov 25 '12 at 8:56
As stated above QM is a theory of ensembles (Rev. Mod. Phys. 1970: 42, 358-381). The amount of atomic-molecular detail is what differentiates each level in the hierarchy of ensembles: thermodynamic level <---> hydrodynamic level <---> Boltzmann level <---> ··· QM level. The two first levels are collectively labelled as macroscopic description, QM is microscopic description and the Boltzmann level is often considered mesoscopic description. – juanrga Nov 26 '12 at 21:03
Sure, you can interpret QM as an ensemble theory. I'm in favour of this, though the vast majority of physicists aren't. But I would say that this interpretation makes QM a macroscopic theory, precisely because it supposes a level below the QM level, which describes the individual members of the ensemble. (By the way, if you write "@Nathaniel" in your replies I'll be notified of them. There are some fairly complicated rules about when you need to do this and when you don't.) – Nathaniel Nov 30 '12 at 2:14
More constructively, you say that the fluid model of GR breaks down in the case of shock waves, which makes sense - but how does one do general relativity in such cases? You can't write down Einstein's equations if there isn't a (single, unique) stress-energy tensor, so what takes its place? – Nathaniel Nov 30 '12 at 3:27
@Nathaniel: The ensemble interpretation of QM is rather agnostic regarding the existence of a sub-quantum level. If this level exists I would say that both the quantum and the sub-quantum are microscopic levels, somehow as both the thermodynamic and the hydrodynamic are macroscopic levels. I am glad to see that you favour the ensemble interpretation. – juanrga Nov 30 '12 at 19:17
show 4 more comments
General relativity is a classical theory, so it makes sense at all levels, though that's different from being correct at all levels (it shouldn't be). The energy-momentum tensor doesn't intrinsically have anything to do with statistical mechanics or fluids at all. Its size just reflects that gravity is a spin-2 field.
For a particle with charge $q$ (measured in its rest frame) whose worldline, as a function of its proper time $\tau$, is $\xi^\mu(\tau)$ with four-velocity $u$, the appropriate source density for the field of spins 0,1,2 (respectively) would be: $$\rho(x^\sigma) = q\int\delta^4(x^\sigma-\xi^\sigma(\tau))\,\mathrm{d}\tau$$ $$J^\mu(x^\sigma) = q\int u^\mu\delta^4(x^\sigma-\xi^\sigma(\tau))\,\mathrm{d}\tau$$ $$T^{\mu\nu}(x^\sigma) = q\int u^\mu u^\nu\delta^4(x^\sigma-\xi^\sigma(\tau))\,\mathrm{d}\tau$$ The electromagnetic field is spin-1, and the electromagnetic charge density is actually a four-current $J^\mu$, and there is nothing conceptually strange about a lone point-like charge. It just means the four-current is described using an appropriate Dirac delta function over its worldline, in a slightly more complicated way than having a Dirac delta function for a charge density $\rho$.
In general, a four-current $J^\mu$ means that an observer with four-velocity $v$ measures a charge density $J^\mu v_\mu.$ Similarly, a 2-tensor $T^{\mu\nu}$ means that such an observer measures a four-current density $T^{\mu\nu}v_\mu$ and charge density $T^{\mu\nu}v_\mu v_\nu$. For GTR, the charge is mass-energy and the four-current is the four-momentum.
Thus, microscopically, it's exactly the same theory.
The problem isn't that we can't get a sensible $T^{\mu\nu}$ for ideal point-masses. In flat spacetime, it's easy, and indeed $T^{\mu\nu}$ is sensible and useful even in STR. The problem is peculiar to GTR rather than the conceptual nature of $T^{\mu\nu}$: the theory says spacetime won't be flat and that you'll get a black holes with a singularity. To try to fix this, Einstein invented the wormhole ("Einstein-Rosen bridge") and attempted to replace point-particles with them. The proposal doesn't actually work for that purpose, though.
-
Thanks, this looks very reasonable. – Nathaniel Nov 23 '12 at 9:05
Was Einstein explicit about this in deriving it? Or did he simply start by assuming that matter can be modelled as a continuously subdivisible fluid and take it from there?
Judge by yourself with this excerpt from the Princeton lectures (1921), published in english as "The Principle of Relativity". When departing from Poisson's equation in his heuristic search for the field equations of GR, he states:
We have seen, indeed, that in a more complete analysis the energy tensor can be regarded only as a provisional means of representing matter. In reality, matter consists of electrically charged particles, and is to be regarded itself as a part, in fact, the principal part, of the electromagnetic field. It is only the circumstance that we have not sufficient knowledge of the electromagnetic field of concentrated charges that compels us, provisionally, to leave undetermined in presenting the theory, the true form of this tensor. From this point of view our problem now is to introduce a tensor, $T_{\mu\nu}$, of the second rank, whose structure we do not know provisionally, and which includes in itself the energy density of the electromagnetic field and of ponderable matter; we shall denote this in the following as the energy tensor of matter.''
-
To quote the Wikipedia article
$T^{ik}$ represent flux of $i^{th}$ component of linear momentum across the $x^k$ surface
so the definition is actually microscopic, in the sense that you can in principle calculate the momentum for every particle in an ensemble. However the momentum flux corresponds to what we mean by shear stress and pressure so this is what we'd use in practice. As we reduce the size of the system our approximation of the momentum fluxes by macroscopic concepts becomes poor and we just put in the momenta explicitly.
There is no sense in which GR is a macroscopic theory, well, not until we get to quantum gravity but this has a much smaller length scale than what we normally mean by microscopic. It's just that we may wish to use macroscopic approximations when we construct the stress-energy tensor.
-
What do you mean by "we just put in the momenta explicitly"? Does this mean it's possible to do GR with ideal point masses instead of a continuous fluid model? – Nathaniel Nov 23 '12 at 7:21
Also, I really don't think a "momentum flux" is a microscopic concept at all. You either have to average over an ensemble or over a nonzero time period in order for it to be meaningful - otherwise it will just be zero (if no particle crossed the surface) or infinite. – Nathaniel Nov 23 '12 at 7:22
@Nathaniel: if you treat your particle as a point mass you will indeed get an infinite momentum flux, but then the particle would be a black hole so it would have infinite density. I'm not sure how you include a black hole in the stress-energy tensor: presumably you'd treat it as being the size of its event horizon. That's finite so you'd get a finite momentum flux. Or maybe you'd need to use a proper (currently non-existent) quantum gravity approach. – John Rennie Nov 23 '12 at 8:53
I don't want to un-accept @juranga's perfectly good answer, but for future visitors it's worth recording that the macroscopic nature of general relativity is made very clear in this 1995 paper, in which Ted Jacobson derives Einstein's field equations from $dS = \delta Q/T$, together with the Bekenstein bound. (Plus a few other assumptions to do with special relativity and the Unruh effect.)
In the last few paragraphs of the paper, Jacobson spells out some circumstances in which the thermodynamic assumptions he makes might break down. In particular, he points out that in his argument the time-reversible nature of space-time evolution arises from a near-equilibrium assumption. This assumption would not hold close to the big bang and black hole singularities, and this might lead to space-time behaving in thermodynamically irreversible ways. It's interesting stuff.
-
http://www.physicsforums.com/showthread.php?p=3984303
Physics Forums
## The meaning of Weyl curvature caused by gravitational waves
In his article The Ricci and Weyl Tensors John Baez states that the tidal stretching and squashing caused by gravitational waves would not change the volume as there is 'only' Weyl- but no Ricci-curvature. No additional meaning is mentioned.
But, not being an expert, I still have no good understanding of what Weyl curvature really means, and I would appreciate any help.
If I think of spacetime curvature I have effects like Shapiro delay, time dilation, the sum of angles in light-ray triangles, etc. in mind. But it seems that the Weyl curvature is not responsible for anything other than the tidal effects happening in the x-y-plane, the transverse plane of the wave. Is that right? Perhaps it is sufficient to say that a plane is not curved. MTW talks about the plane-wave solution.
Otherwise gravitational wave measurements should be obscured by Shapiro-Delay/time dilation to a certain extent.
But on the other side, and this puzzles me, gravitational waves are called "ripples of spacetime curvature". From this I would expect the spacetime curvature to oscillate locally between positive and negative values as the wave passes by, which should be measurable by light-ray triangles, for example. Is that right? And if so, in which plane? In the transverse plane or in the plane in which the wave propagates?
For the pp-wave metric $$ds^2={du}^{2}\,\left( -\left( {z}^{2}+{y}^{2}\right) \,C+2\,y\,z\,B+\left( {y}^{2}-{z}^{2}\right) \,A\right) +{dz}^{2}+{dy}^{2}+2\,du\,dv$$ where A, B, C are functions of u only, the relevant part of the tidal tensor (in the Y,Z plane, with C=0) is $$\left[ \begin{array}{cc} A & B \\ B & -A \end{array} \right]$$ This is a vacuum solution (if C=0) so all the curvature is Weyl. Notice that the trace is zero so volume is preserved. The wave is travelling at c in the u-direction, so this is a transverse wave if A is a wavy function like $a\sin(\omega u)$. I think B is a sign of polarization (rotation around the u-axis?).
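(Editorial note, not part of the original post.) For the $2\times 2$ block above the eigenvalues are $\pm\sqrt{A^2+B^2}$, so a ring of test particles is stretched along one principal axis and squeezed by the same amount along the orthogonal one; the vanishing trace is what makes this volume-preserving to first order. With $B=0$ the principal axes are the $y$ and $z$ axes (the "+" polarization), while with $A=0$ they lie along the diagonals (the "×" polarization), i.e. the same pattern rotated by 45°, which supports reading $B$ as the second polarization state.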
Time dilation is not a curvature effect. Time dilation is an effect you can get in SR, in flat spacetime. It may help to think of curvature generically as an effect in which lines that are initially parallel can later on become non-parallel. On a sphere, lines that are initially parallel can later converge. If two particles attract one another gravitationally, their initially parallel trajectories can later converge. If you release a cloud of test particles in a region of space where there is only Weyl curvature, you get divergence in some plane(s) and convergence in some other plane(s), with the net result that the volume stays the same. This may be helpful: http://www.lightandmatter.com/html_b...tml#Section5.1 http://www.lightandmatter.com/html_b...tml#Section9.2 (subsection 9.2.2)
## The meaning of Weyl curvature caused by gravitational waves
You can say the Riemann tensor has two parts: the Weyl tensor, which describes curvature in regions devoid of matter; and a "source" tensor, based on the stress-energy, which corresponds to curvature from matter at the point in question.
Imagine you were looking at a small differential four-volume: the Weyl tensor would represent curvature originating from outside this four-volume, while the source tensor (which comes from stress energy and, hence, can be contracted to the Ricci tensor) describes what does come from this volume.
An analogous case would be finding the electric field at a given point. If you know all the sources in a given volume and assume no sources exist outside, you can calculate the electric field, sure, but there are other valid solutions to the Maxwell equations based on sources from outside the volume of interest.
Thank you for your comments.
Mentz114 mentioned the tidal tensor. This tensor vanishes in flat space, as there is no energy exchange (absorption, emission). So it is agreed that there is only Weyl curvature.
Quote by bcrowell If you release a cloud of test particles in a region of space where there is only Weyl curvature, you get divergence in some plane(s) and convergence in some other plane(s), with the net result that the volume stays the same.
Yes.
Let us look at the two moments where the metric is stretched in the x-direction (A) and, half a period later, in the y-direction (B). The sum of the angles of a light-ray triangle in the xy-plane shall then be measured at A and at B, assuming that the change of the metric during the measurement is negligible (the period of the wave being long compared to the time needed to complete the respective measurement).
What will these measurements show regarding the sum of the angles?
Will they prove that the spacetime curvature in the x-y-plane of the gravitational wave oscillates between negative and positive values, or not?
Quote by timmdeeg Let us look at the 2 moments where the metric is streched in the x-direction (A) and half a period later in y-direction (B). Now the sum of light ray angles in the xy-plane shall be measured at A and B then, assuming that the change of period of the wave is neglible compared to the time needed to complete the resp. measurement. What will these measurements show regarding the sum of the angles? Will they prove that the spacetime curvature in the x-y-plane of the gravitational wave oscillates between negative and positive values, or not?
I think you're getting tangled up here by an incomplete understanding of spacetime curvature. The sum of the interior angles of a triangle measures spatial curvature, but GR isn't concerned with spatial curvature; it deals with spacetime curvature, i.e., 4 dimensions, not 3. It sounds like what you're visualizing is actually curvature of a 2-dimensional space with both dimensions spacelike. In such a space, there is only a single number, the Gaussian curvature, that measures the curvature at any given point. This is not the case in 3+1 dimensions, where you need the whole Riemann tensor to describe the curvature completely. As an example, standard cosmological models these days typically have zero spatial curvature, but they don't have zero curvature. When you talk about constructing triangles out of light rays, I guess you're imagining that as a measurement of the spatial geometry at one instant, but the light rays actually take time to propagate -- they propagate at exactly the same speed as the gravitational wave. Because the spatial geometry is three-dimensional, not two-dimensional, you can't characterize it by a single number that's positive or negative. (The only reason you can get away with this in cosmological spacetimes is that they have rotational symmetry, which is not present in a gravitational wave.)
Thanks for answering.
Quote by bcrowell When you talk about constructing triangles out of light rays, I guess you're imagining that as a measurement of the spatial geometry at one instant, but the light rays actually take time to propagate -- they propagate at exactly the same speed as the gravitational wave. Because the spatial geometry is three-dimensional, not two-dimensional, you can't characterize it by a single number that's positive or negative.
Yes, gravitational waves propagate at the speed of light. My reasoning was that, for a low-frequency wave, the shape of the distortion of the metric wouldn't change much during the measurement. But nevertheless it isn't in fact an instant measurement, and it happens within two space dimensions.
I was originally puzzled by what is written here, on page 11:
The plane wavefront moves down the page and spacetime curvature, as determined by light ray triangles, oscillates from flat to convex to flat in the first half cycle, and then becomes concave during the second half cycle.
How then should I understand this? Could you kindly comment on that?
I have in mind that even in the scientific world sometimes space curvature is mixed up with spacetime curvature. Does this happen here?
I still wonder if it is correct to say:
A gravitational wave curves spacetime periodically positive and negative.
Or is this a priori a wrong statement, because only the stress-energy tensor can be responsible for any curvature of spacetime? In flat space, far away from masses, there is Weyl curvature only.
However, according to this, page 11:
The plane wavefront moves down the page and spacetime curvature, as determined by light ray triangles, oscillates from flat to convex to flat in the first half cycle, and then becomes concave during the second half cycle.
there is spacetime curvature.
Are the triangles shown on page 11 in a plane parallel to the z-direction, in which the wave propagates, or should they be understood to be in the x-y-plane? I assumed the latter, but maybe wrongly. If the former is right, then the extrema of the wave's amplitude could be measured separately. Any comment to clarify that is very welcome.
Another approach: given parallel null geodesics (perhaps it is better to choose parallel geodesics of test particles), what happens to these geodesics if a gravitational wave propagates (a) parallel and (b) perpendicular to them? I assume that only in case (a) the geodesics are bent periodically inwards and outwards. If so, I would conclude that a gravitational wave curves spacetime periodically positively and negatively. Please correct me if this is wrong.
Supposing this is correct, I further assume that geodesics coming closer together and moving away from each other indicate positive and negative spacetime curvature, respectively.
However, test particles are moving simultaneously in opposite directions along the x- and y-axes as the gravitational wave passes by. So the conclusion would be that while the spacetime curvature is positive in the x-direction, it is negative in the y-direction, and vice versa in the next half period.
I have never heard about this and do not trust this reasoning myself. Any help to improve my understanding is appreciated. Sorry, it must be tiring for you experts.
http://math.stackexchange.com/questions/50034/particle-filter-motion-model?answertab=oldest
# Particle filter motion model
As I understand it, the basic idea of a particle filter is to predict the state of the system by generating N different possible states (particles). After that, each possible state is evaluated by a weighting model (a weight is given to each particle).
So, let's say my particle is (x, y, z), and there are M different possible values for x, N possible values for y and L different possible values for z. Is the number of particles that I need to generate then M*N*L?
Thanks
-
The question lacks context. You may or may not have to generate all possible states depending on the context. You added the Monte Carlo tag; a Monte Carlo simulation typically doesn't generate all possible states. If your question is just whether the number of different possible values for $(x,y,z)$ is $MNL$ when there are $M$ different possible values for $x$, $N$ possible values for $y$ and $L$ different possible values for $z$, the answer is yes. – joriki Jul 7 '11 at 8:14
Thank Joriki. I can not tag "particle filter" or "condensation" so I had to tag "monte carlo" instead. If we dont generate all possible states, we may miss some important states, dont we? – user10128 Jul 7 '11 at 14:24
As I said, there's not enough context to say anything about that. – joriki Jul 7 '11 at 14:33
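(Editorial addition, not part of the original thread.) A minimal sketch of the point made in the comments: a bootstrap particle filter keeps a fixed budget of particles sampled from the motion model and re-weights them against the observation, rather than enumerating all $M\cdot N\cdot L$ joint states. The models, noise levels and names below are made up purely for illustration.

```
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES = 1000            # fixed budget, independent of M*N*L

# toy 3-D state (x, y, z): start from a broad prior
particles = rng.normal(0.0, 1.0, size=(N_PARTICLES, 3))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def predict(particles, motion_noise=0.1):
    """Propagate every particle through a (hypothetical) motion model."""
    return particles + rng.normal(0.0, motion_noise, size=particles.shape)

def update(particles, weights, z, meas_noise=0.2):
    """Re-weight particles by the likelihood of the observation z."""
    d2 = np.sum((particles - z) ** 2, axis=1)
    w = weights * np.exp(-0.5 * d2 / meas_noise ** 2)
    return w / np.sum(w)

def resample(particles, weights):
    """Draw N particles with replacement, proportionally to their weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

z_observed = np.array([0.5, -0.2, 0.1])      # one made-up measurement
particles = predict(particles)
weights = update(particles, weights, z_observed)
particles, weights = resample(particles, weights)
print("posterior mean estimate:", particles.mean(axis=0))
```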
http://physics.stackexchange.com/questions/27038/what-hermitian-operators-can-be-observables/27039
# What Hermitian operators can be observables?
We can construct a Hermitian operator $O$ in the following general way:
1. find a complete set of projectors $P_\lambda$ which commute,
2. assign to each projector a unique real number $\lambda\in\mathbb R$.
By this, each projector defines an eigenspace of the operator $O$, and the corresponding eigenvalues are the real numbers $\lambda$. In the particular case in which the eigenvalues are non-degenerate, the operator $O$ has the form $$O=\sum_\lambda\lambda|\lambda\rangle\langle\lambda|$$
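(Editorial aside, not part of the original question.) A small numerical sketch of this construction may make it concrete; the eigenvalues and the random orthonormal basis below are arbitrary choices, not anything taken from the question:

```
import numpy as np

rng = np.random.default_rng(1)
n = 4
lambdas = np.array([-1.0, 0.5, 2.0, 3.0])        # arbitrary real eigenvalues

# random orthonormal basis -> rank-one projectors P_k = |k><k|
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
O = sum(lam * np.outer(Q[:, k], Q[:, k].conj()) for k, lam in enumerate(lambdas))

assert np.allclose(O, O.conj().T)                                  # O is Hermitian
assert np.allclose(np.sort(np.linalg.eigvalsh(O)), np.sort(lambdas))  # spectrum is {lambda}
print("spectrum:", np.round(np.linalg.eigvalsh(O), 6))
```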
Question: what restrictions which prevent $O$ from being an observable are known?
For example, we can't admit as observables the Hermitian operators having as eigenstates superpositions forbidden by the superselection rules.
a) Where can I find an exhaustive list of the superselection rules?
b) Are there other rules?
Update:
c) Is the particular case when the Hilbert space is the tensor product of two Hilbert spaces (representing two quantum systems), special from this viewpoint?
-
A superselection rule is when no local operator can link between two states. It's exactly like when in statistical mechanics, you have zero probability of making a macroscopic motion. There is no exhaustive list, as any vacuum condensate which breaks an exact symmetry or makes a SUSY modulus is automatically a superselection sector maker. – Ron Maimon May 8 '12 at 15:24
## 2 Answers
I think the fundamental object in quantum mechanics is not the Hilbert space and operators on it but the C*-algebra of observables. In this picture the Hilbert space appears as a representation of the algebra. Different irreducible representations are different superselection sectors. The answer to "which operator is observable" is thus simple: the observable operators are those that come from the algebra. Indeed it's better to think of observables as self-adjoint elements of the algebra rather than as operators.
You might ask where we get the algebra from. Well, this should already be supplied by the particular model. For a quantum mechanical particle moving on a manifold $M$, the C*-algebra consists of all bounded operators on $L^2(\hat{M})$ commuting with $\pi_1(M)$, where $\hat{M}$ is the universal cover of $M$. The superselection sectors correspond to irreducible representations of $\pi_1(M)$. For QFTs the problem of constructing the algebra of observables is in general open, however certain cases (such as free QFT and, I believe, rational CFT as well) were solved. An approach emphasizing the algebra point of view is Haag-Kastler axiomatic QFT.
From the point of view of deformation quantization, the quantum observable algebra is a non-commutative deformation of the algebra of continuous (say) functions on the classical phase space. This point of view is not fundamental, but it's useful. For example, it allows one to understand different values of spin and different quantum statistics as superselection sectors.
-
Thank you, this is very useful. – Cristi Stoica Feb 14 '12 at 19:13
-1: This is useless. It doesn't answer the question, and is just a collection of impressive sounding trivial gibberish. You didn't say which observables could be measured, you defined "superselection sector" in the maximally unenlightening way, and your example of "different spin values" is wrong and the stuff you talk about is not useful. This is just terrible. – Ron Maimon May 8 '12 at 15:19
Without superselection rules to restrict the observables, any Hermitian operator is an admissible observable. The case of multiple identical systems is very important. Indeed, if the systems are really identical, only observables that are symmetric under the exchange of the systems are admissible. In such a case, technically speaking you should only consider observables that commute with all possible permutation operators (i.e., with the elements of the representation of the permutation group on the Hilbert space of the systems).
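(Editorial illustration, not part of the original answer.) For two identical two-level systems this restriction is easy to see numerically: group-averaging over the permutation group projects an arbitrary Hermitian operator onto the admissible (swap-commuting) ones. The dimension and the random operator below are made up for illustration:

```
import numpy as np

rng = np.random.default_rng(2)
d = 2                                   # single-system dimension

# swap operator on C^d (x) C^d: SWAP |i>|j> = |j>|i>
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[j * d + i, i * d + j] = 1.0

A = rng.normal(size=(d * d, d * d))
A = (A + A.T) / 2                       # an arbitrary Hermitian (real symmetric) operator

# average over the permutation group {identity, SWAP}
A_sym = (A + SWAP @ A @ SWAP.T) / 2
print(np.allclose(A_sym @ SWAP, SWAP @ A_sym))   # True: A_sym commutes with SWAP
```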
-
Thank you for the answer. Indeed, for identical particles one takes as the Hilbert space the quotient of the tensor product by the appropriate ideal. Do you know some bibliographic references showing that these are the only restrictions on a Hermitian operator being an observable? – Cristi Stoica Feb 14 '12 at 9:34
– Marco Feb 14 '12 at 18:51
Thank you, this helps. – Cristi Stoica Feb 14 '12 at 19:15
http://mathoverflow.net/questions/78023/genus-and-spinor-genus-of-a-lattice/78025
## Genus and Spinor genus of a lattice
Hi, I'm looking for a motivation for the names genus and spinor genus of a lattice (and spinor norm of an isometry).
Is there any relation between the genus of a lattice and the genus of an algebraic curve?
Is the spinor norm used in number theory related to physics? I mean, if I have an isometry in a quadratic or skew-hermitian space and I calculate its spinor norm... is there a physical interpretation for that?
Thanks a lot!
-
The term spinor just means an element of a certain representation of an orthogonal group (namely a spin representation) (or maybe they have the spin group itself lying around). The group is usually considered as the underlying symmetry of the physics that's going on. Similarly, when a physicist says the word vector, they usually implicitly mean an element of the standard representation of O(3) (or a bigger O(n)) that transforms under the quotient SO(3). This is why they have the term pseudovector: these flip under reflections. – Rob Harron Oct 13 2011 at 14:46
The genus of a lattice developed from the genus of quadratic forms. See Frei, On the development of the genus of quadratic forms, Ann. Sci. Math. Quebec 3, 5-62 (1979). – Franz Lemmermeyer Oct 14 2011 at 18:19
## 3 Answers
The word "genus" means "kind," more or less. So no, there is no relation between the genus of a lattice and the genus of an algebraic curve; in both cases the word "genus" just reflects that we like to bundle together sets of objects which have important features in common.
-
Given a quadratic space $(V, q)$, and an orthogonal transformation $\sigma \in O(V,q)$, the spinor norm of $\sigma$ is nonzero if and only if $\sigma$ is not in the image of the canonical map $Pin(V,q) \to O(V,q)$. If $\sigma \in SO(V,q)$, we can say the same using the map $Spin(V,q) \to SO(V,q)$. It is a refined way to measure the failure of an element to come from the spin group, and you can view it as a boundary map in Galois cohomology (see, e.g., Wikipedia).
I don't know of a deep connection to physics, but it does come up in the following sense: if $(V,q)$ is an indefinite real space of dimension at least 3, like $\mathbb{R}^{3,1}$, then the pin group only has 2 connected components, while the orthogonal group has 4 - we can reflect in vectors of either positive or negative norm to get P or T type discrete symmetries. The pin group only maps to the subgroup of $O(3,1)$ generated by reflections in positive norm vectors, as these are precisely the transformations with positive spinor norm. The spin group is connected, and maps to the connected group $SO_0(3,1)$, while $SO(3,1)$ contains PT symmetries that reverse orientation of both space and time.
The spinor genus is a version of this that uses additional valuations - two lattices have the same spinor genus if, for all completions, you can transform the respective base changes into one another by orthogonal transformations with trivial spinor norm.
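(Editorial example, added for concreteness and not part of the original answer.) For a reflection $\tau_v$ in an anisotropic vector $v$, the spinor norm is the class of $q(v)$ in $F^\times/(F^\times)^2$. Over $\mathbb{Q}$ with $q(x,y)=x^2+y^2$, the reflection in $(1,0)$ has trivial spinor norm, while the reflection in $(1,1)$ has spinor norm $2$, which is nontrivial modulo squares; their product is a rotation in $SO(q)(\mathbb{Q})$ with spinor norm $2$, so it is not in the image of $Spin(q)(\mathbb{Q})$.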
-
Adding to the previous answer: actually, in its original form due to Eichler and Kneser, one uses not $Pin$ but the spin group, which is the twofold simply connected cover of the special orthogonal group and has its name from its use in quantum theory.
The term genus was introduced by Gauss (genus in latin, Geschlecht in german). As already remarked, this is just a notion of grouping things together.
-
http://mathoverflow.net/questions/61031/boundness-of-laplacian-eigenfunctions/61041
## Boundedness of Laplacian eigenfunctions
Let $A$ be a bounded domain in $\mathbb R^d$, $d>1$, and $\{u_k\}$ is the set of all $L^2$-normalized Laplacian eigenfunctions on $A$ with Dirichlet boundary condition (i.e., $\|u_k\|_2 = 1$).
Is it true that these eigenfunctions are uniformly bounded, i.e., $\sup_k \|u_k\|_\infty < \infty$, where $\|\cdot\|_\infty$ is the $L^\infty$-norm (the maximum)? In other words, does there exist a constant $C_A$ such that for any $k$ and any $x\in A$, $|u_k(x)| < C_A$?
If the answer is positive, please provide a reference or a proof.
If the answer is negative, please provide a counter-example. In that case, what are the conditions on the domain A to make this statement true?
-
There is a strongly related question of mine mathoverflow.net/questions/55235/… where Piero gave some arguments why in general it cannot be true. – András Bátkai Apr 8 2011 at 10:46
## 3 Answers
The answer is no. The following reference specifically discusses the case of the two-dimensional disk: http://www.staff.uni-oldenburg.de/daniel.grieser/wwwpapers/diss.pdf
-
Thanks for this reply. But what would be the necessary/sufficient condition on the domain to make the eigenfunctions uniformly bounded? – Denis Grebenkov Apr 9 2011 at 7:18
There is a paper on this subject (containing also further references) by C. D. Sogge, Eigenfunction and Bochner-Riesz estimates on manifolds with boundary. Math. Res. Lett. 9, No.2-3, 205-216 (2002), ArXiv: math/0202032. It is stated that typically there is some growth in $L_\infty$ metric, not uniform boundedness.
-
Thanks for this reply and helpful reference. But what would be the necessary/sufficient condition on the domain to make the eigenfunctions uniformly bounded? – Denis Grebenkov Apr 9 2011 at 7:19
I don't have a general answer (I guess it is yes, they are uniformly bounded, at least when $A$ is a smooth bounded domain). At least, let me mention the case of the torus $\mathbb T^d=\mathbb R^d/\mathbb Z^d$. The eigenfunctions are the exponentials $\exp(2i\pi m\cdot x)$, where $m\in\mathbb Z^d$. They are uniformly bounded, and this fact is crucial in the Riesz-Thorin interpolation argument showing that the Fourier series of an $L^p$-function $f$ belongs to $\ell^{p'}$ whenever $1\le p\le2$ and $p'$ is the conjugate exponent.
-
http://math.stackexchange.com/questions/179868/how-to-transform-a-stochastic-jump-diffusion-equation-to-a-levy-stochastic-diffe?answertab=votes
# How to transform a stochastic jump diffusion equation to a Levy stochastic differential equation?
If I have this type of stochastic differential equation: $$dX(t) = A(X(t),t)\ dt +B(X(t),t)\ dW(t) + C(X(t),t)\ dP(t)$$ With $$\begin{align} dW(t)& : \text{A Wiener process}\\ dP(t)& : \text{A Poisson process with parameter }\Lambda\\ A,B,C& : \text{Smooth functions} \end{align}$$ and I want to transform it to this type of stochastic differential equation: $$dX(t) = F(X(t),t)\ dt + G(X(t),t)\ dL(t)$$ with $$\begin{align} L(t)& : \text{An }\alpha\text{-stable Lévy process} \end{align}$$ I was trying to identify the relations between the coefficients and parameters. Could someone tell me if I can use the decomposition of a Lévy process into a Brownian motion, a drift and a Poisson process to find a relation between those two equations?
I wanted to use it in the case $F$ and $G$ are constants.
Thank you.
Kind regards
-
@Samatix: When possible, I think that the best way to do it is to use the Lévy-Khintchine decomposition of the process and identify terms. Best regards – TheBridge Aug 7 '12 at 15:53
Hi TheBridge. I found the equation using google but it is too complicated to apply. Could you, please, write down the Lévy-Khintchine formula ? Best regards – Samatix Aug 7 '12 at 17:03
Do you know where I can find the formula you talked about ? – Samatix Aug 8 '12 at 10:42
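(Editorial addition, not an answer to the transformation question.) Just to make the first SDE concrete, here is a minimal Euler–Maruyama simulation of it with constant coefficients; all coefficient values below are made up:

```
import numpy as np

rng = np.random.default_rng(3)
T, n_steps = 1.0, 1000
dt = T / n_steps
A_, B_, C_ = 0.05, 0.2, -0.1     # constant drift, diffusion and jump coefficients
lam = 5.0                        # Poisson intensity (Lambda)

X = np.empty(n_steps + 1)
X[0] = 1.0
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))         # Wiener increment
    dP = rng.poisson(lam * dt)                # Poisson increment
    X[k + 1] = X[k] + A_ * dt + B_ * dW + C_ * dP

print("X(T) =", X[-1])
```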
http://mathoverflow.net/questions/89533?sort=oldest
## When a quotient singularity is toric?
Let $G \subset SL(n,\mathbb{C})$ be a cyclic subgroup of finite order. Is it true that $\mathbb{C}^n /G$ is toric? If not, when is it?
-
The "cyclic" condition is redundant, the quotient is toric for any finite Abelian group. – Dmitri Feb 26 2012 at 0:21
I kind of knew that, but I just simplified the problem. I could not remember any proof or procedure. What is the construction? – Mohammad F.Tehrani Feb 26 2012 at 0:34
The action of any finite Abelian group on $\mathbb C^n$ is diagonalisable - this is just linear algebra. So the action commutes with diagonal $(\mathbb C^*)^n$, this makes your quotient toric. – Dmitri Feb 26 2012 at 1:26
## 1 Answer
Yes, it is true. Let $G \cong \mathbb{Z}/m$ act by $\mathrm{diag}(\zeta^{a_1}, \zeta^{a_2}, \ldots, \zeta^{a_n})$, where $\zeta$ is a primitive $m$-th root of unity. Let $S$ be the semigroup $\{ (b_1, \ldots, b_n) \in \mathbb{Z}_{\geq 0}^n : \sum a_i b_i \equiv 0 \mod m \}$. Then the quotient is Spec of the semigroup ring $\mathbb{C}[S]$. Since the semigroup is torsion free, finitely generated and saturated, the corresponding Spec is an affine toric variety.
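(Editorial illustration, not part of the original answer.) For a concrete case, take $m=3$ and weights $(a_1,a_2)=(1,2)$, the $\frac{1}{3}(1,2)$ cyclic quotient surface singularity. A brute-force enumeration of the semigroup $S$ above, with a made-up search bound, recovers the expected generators:

```
from itertools import product

m = 3
a = (1, 2)                      # weights of the cyclic action
bound = 4                       # search box, just for illustration

S = [b for b in product(range(bound + 1), repeat=len(a))
     if sum(ai * bi for ai, bi in zip(a, b)) % m == 0]

# elements that cannot be written as a sum of two nonzero elements of S
nonzero = [b for b in S if any(b)]
sums = {tuple(x + y for x, y in zip(u, v)) for u in nonzero for v in nonzero}
generators = [b for b in nonzero if b not in sums]
print(generators)   # -> [(0, 3), (1, 1), (3, 0)], i.e. the monomials y^3, x*y, x^3
```

The printed generators correspond to the invariant monomials $y^3$, $xy$, $x^3$, so $\mathbb{C}[S]\cong\mathbb{C}[u,v,w]/(uw-v^3)$, the $A_2$ singularity.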
-
http://www.haskell.org/haskellwiki/index.php?title=Free_structure&oldid=33900
# Free structure
### From HaskellWiki
### 1 Introduction
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it.
### 2 Algebra
#### 2.1 What sort of structures are we talking about?
Free structures originate in abstract algebra, so that provides a good place to start. Some common structures in algebra are:
• Monoids
• consisting of
• A set M
• An identity $e \in M$
• A binary operation $* : M \times M \to M$
• And satisfying the equations
• x * (y * z) = (x * y) * z
• e * x = x = x * e
• Groups
• consisting of
• A monoid (M,e, * )
• An additional unary operation $\,^{-1} : M \to M$
• satisfying
• $x * x^{-1} = e = x^{-1} * x$
• Rings
• consisting of
• A set R
• A unary operation $- : R \to R$
• Two binary operations $+, * : R \times R \to R$
• Distinguished elements $0, 1 \in R$
• such that
• (R,0, + , − ) is a group
• (R,1, * ) is a monoid
• x + y = y + x
• (x + y) * z = x * z + y * z
• x * (y + z) = x * y + x * z
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.
#### 2.2 Free algebraic structures
Now, given such a description, we can talk about the free structure over a particular set S (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given S, we want to find some set M, together with appropriate operations to make M the structure in question, along with the following two criteria:
• There is an injection $i : S \to M$
• The structure generated is as 'simple' as possible.
• M should contain only elements that are required to exist by i and the operations of the structure.
• The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation x * y = y * x should not hold unless x = y, x = e or y = e.
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):
```M = [S]
e = []
* = (++)
i : S -> [S]
i x = [x] -- i x = x : []
[] ++ xs = xs = xs ++ []
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs
xs ++ ys = ys ++ xs iff xs == ys || xs == [] || ys == []
-- etc.```
### 3 The category connection
#### 3.1 Free structure functors
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. Category theory gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be initial or terminal, and thus, freeness can be defined in terms of such universal constructions.
In its full categorical generality, freeness isn't necessarily characterized by underlying set structure, either. Instead, one looks at "forgetful" functors from the category of structures to some other category. For our free monoids above, it'd be:
• $U : Mon \to Set$
The functor taking monoids to their underlying set. Then, the relevant universal property is given by finding an adjoint functor:
• $F : Set \to Mon$, F ⊣ U
F being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.
#### 3.2 Algebraic constructions in a category
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary monoidal category. Such categories have a tensor product $\otimes$ of objects, with a unit object I (both of which satisfy various laws).
A monoid object in a monoidal category is then:
• An object M
• A unit 'element' $e : I \to M$
• A multiplication $m : M \otimes M \to M$
such that:
• $m \circ (id_{M} \otimes e) = u_l$
• $m \circ (e \otimes id_M) = u_r$
• $m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha$
Where:
• $u_l : M \otimes I \to M$ and $u_r : I \otimes M \to M$ are the identity isomorphisms for the monoidal category, and
• $\alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M$ is part of the associativity isomorphism of the category.
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.
#### 3.3 Monads
One example of a class of monoid objects happens to be monads. Given a base category C, we have the monoidal category CC:
• Objects are endofunctors $F : C \to C$
• Morphisms are natural transformations between the functors
• The tensor product is composition: $F \otimes G = F \circ G$
• The identity object is the identity functor, I, taking objects and morphisms to themselves
If we then specialize the definition of a monoid object to this situation, we get:
• An endofunctor $M : C \to C$
• A natural transformation $\eta : I \to M$
• A natural transformation $\mu : M \circ M \to M$
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.
#### 3.4 Free Monads
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, $F : C \to C$. We then expect there to be a natural transformation $i : F \to M$, 'injecting' the functor into the monad.
```data Free f a = Return a | Roll (f (Free f a))

instance Functor f => Monad (Free f) where
    return a       = Return a
    Return a >>= f = f a
    Roll ffa >>= f = Roll $ fmap (>>= f) ffa
-- join (Return fa) = fa
-- join (Roll ffa)  = Roll (fmap join ffa)

inj :: Functor f => f a -> Free f a
inj fa = Roll $ fmap Return fa```
This should bear some resemblance to free monoids over lists. `Return` is analogous to `[]`, and `Roll` is analogous to `(:)`. Lists let us create arbitrary length strings of elements from some set, while `Free f` lets us create structures involving `f` composed with itself an arbitrary number of times (recall, functor composition was the tensor product of our category). `Return` gives our type a way to handle the 0-ary composition of `f` (as `[]` is the 0-length string), while `Roll` is the way to extend the nesting level by one (just as `(:)` lets us create (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:
```inj_list x = (:) x []
inj_free fx = Roll (fmap Return fx)```
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.
### 4 Further reading
For those looking for an introduction to the necessary category theory used above, Steve Awodey's Category Theory is a popular, freely available reference.
http://physics.aps.org/articles/v2/67
# Viewpoint: Melting the world’s smallest raindrop
Marivi Fernández-Serra, Physics & Astronomy, State University of New York, Stony Brook, NY 11794-3800, USA
Published August 10, 2009 | Physics 2, 67 (2009) | DOI: 10.1103/Physics.2.67
Experiments on melting of small water clusters open the door to the study of the size-dependent phase diagram of water.
#### Calorimetric Observation of the Melting of Free Water Nanoparticles at Cryogenic Temperatures
C. Hock, M. Schmidt, R. Kuhnen, C. Bartels, L. Ma, H. Haberland, and B. v.Issendorff
Published August 10, 2009 | PDF (free)
Water is the most ubiquitous substance on the surface of earth. Discoveries related to water typically arouse the interest of both scientists and the general public. Much of this interest stems from water’s unusual properties [1, 2], the existence of a density maximum in the liquid phase [1], its negative volume of melting [1], and its increasing diffusivity or molecular mobility under pressure [3] being some of the most well known. The study of the extremely rich phase diagram of water [4] and its anomalous properties [5] is an active field of research, but the number of papers reporting experimental work lags behind those based on computer simulations.
The structure and dynamics of water in any of its condensed forms, and their relation to the aforementioned anomalies, is one of the most prominent open puzzles in science. In a paper published in Physical Review Letters [6], Christian Hock and colleagues at Universität Freiburg in Germany in collaboration with Université Paris Sud in France, study the melting transition of small clusters of water. From the thermodynamic properties of these clusters it is possible to extract information about their structural and bonding characteristics (Fig. 1). This experiment opens a path to the study of the size-dependent phase diagram of water. Such a phase diagram will help us understand many-body effects on the hydrogen bond network of water [2]. It is well known that in water clusters the hydrogen bond strength increases with increasing cluster sizes [2]. The cohesive energy of these clusters cannot be decomposed as the sum of two-body or three-body contributions, an effect known as hydrogen bond cooperativity [2]. The size and temperature dependence of this many-body effect, key to determining the sources of water’s unusual properties, are related to the structure and dynamics of local cluster domains in the liquid phase [4].
The authors use infrared spectroscopy to both heat and characterize size-selected negatively charged clusters of water. The clusters are heated by infrared excitation of the hydrated electron—an additional electron added to an otherwise neutral water cluster. The excited electron decays in less than 200 femtoseconds by coupling to the vibrational modes of the cluster [7]. Using a laser pulse photofragmentation technique [8], the authors perform calorimetric measurements on these clusters. The specific heat of the cluster at a given temperature, $C(T)=\delta U(T)/\delta T$, can be calculated from the temperature dependence of the photofragmentation pattern of size-selected anion water clusters. The size of the fragmented pieces is measured using a mass spectrometer. The cluster itself turns out to be a very sensitive calorimeter. One can increase its internal energy before the fragmentation by either controlling the number ($n$) of photons absorbed ($\delta U=n\hbar\omega$) or increasing the cluster temperature in a temperature-controlled external He bath ($\delta T$). When the fragments are of the same size, independently of how one heats the cluster, the specific heat is given by the previous formula.
This method allows the researchers to determine the onset of the rapid increase of the specific heat ($C$) as a function of the temperature $T$. This sudden change in $C$ is associated with the cluster melting transition. The transition $T$ for two well-defined water anionic clusters of size 48 and 118 molecules is presented in their study. The results are not altogether surprising; the onset of the melting temperature ($T_m$) decreases as a function of the cluster size.
Previous computer simulation studies had reported similar values, but until now no accurate experiment was available to allow comparisons. The quantum nature of the atomic vibrations at these low temperatures means that the theoretical modeling of the melting of water clusters is very complex and computationally expensive, involving the use of path-integral molecular dynamics [9] with semiempirical interatomic potentials [10] whose accuracy is limited by the model of the water-water interactions itself. Even ab initio potential energies and forces are limited in their description of liquid water [11] and therefore experiments like this have been eagerly anticipated.
Many of the anomalies of water are a manifestation of the fine energy balance between configurational entropy and enthalpy in the underlying network of bonds that keep the molecules together, the hydrogen bond ($H$-bond) network. The configurational entropy of water is large; one should think of this as the number of different structural arrangements that the molecules can adopt without paying a large energy cost. The $H$-bond is relatively weak—more than ten times weaker than the covalent $O$-$H$ intramolecular bond. It costs little energy to distort the bond, contributing to the large configurational energy of the system. At the same time the strength of the bond is large enough so that the enthalpy of the system is strongly dependent on the average number of bonds broken in the network. At atmospheric pressure the density of liquid water increases on cooling, reaching a maximum of $0.999972g/cm3$ at $277K$. At this point the density decreases at a much faster rate and continues decreasing if crystallization is avoided at $273K$ and a metastable supercooled regime is entered. At atmospheric pressure, water in liquid phase below the freezing temperature is known as supercooled water. It is a metastable state because at that temperature the free energy of the liquid phase is larger than the free energy of the solid, i.e., ice is the ground-state phase. However, the difference is small enough to allow the system to remain in the phase of higher free energy for a small temperature range. The faster the cooling rate, the easier it is to enter the metastability region, where any small perturbation will cause the system to fall to the ground state and crystallize.
The response of the entropy $S$ and density of any system to changes in temperature $T$ and pressure $P$ are represented by the isothermal compressibility (ratio of density change per applied pressure at constant $T$) and the isobaric heat capacity (ratio of entropy change per temperature change at constant pressure). In liquid water these two quantities present a minimum within the normal liquid range [2] and show a rapid increase upon cooling and entering the supercooled regime—two more of the many anomalies of water. In bulk systems the melting phase transition is always accompanied by a characteristic peak in the heat capacity as a function of the temperature. The area under this peak corresponds to the latent heat of melting, which in bulk water is of the order of $1.4kcal/mol$ or $0.05eV$ per molecule. What happens to the structure of ice when it melts? This energy is mostly used to break about $10%$ of the $H$-bonds in ice; while this might not seem to be such a large number, it is enough to make the hexagonal ice (Ih) structure collapse, setting up the diffusion of the molecules that characterizes the liquid state. Water molecules in ice make four $H$-bonds each, i.e., they are tetracoordinated, forming perfect tetrahedral structures; the 10% of bonds broken at the melting transition only slightly reduce the average coordination to a number in between 3.5 and 4. Water molecules conserve a tetrahedral-like, albeit very distorted, environment. The perfect tetrahedral $H$-bond network with hexagonal symmetry of ice Ih becomes an amorphous tetrahedral network, with coordination defects that allow the molecules to continuously diffuse around exchanging $H$-bonds.
In insulating crystals such as ice, the heat capacity reflects the quantum effects of the atomic vibrations. Both Einstein's and Debye's specific heat models are derived from the quantum mechanical nature of the phonons in a system. They predict that the heat capacity should decrease with temperature, since vibrational quanta are harder to excite by a small amount of external heat. The high values of the three intramolecular, and two of the three intermolecular vibrational frequencies, only allow a maximum (at $T_m$) of four degrees of freedom to be active in ice. This is reflected in the value of the specific heat at a given temperature in units of the Boltzmann constant $k_B$. At the temperatures at which Hock et al. carry out their thermal measurements (of the order of $100\,\mathrm{K}$) only two of these vibrations are excited. However, in liquids the specific heat is not only a consequence of the molecular vibrations of the system. In water, in particular, the continuous exchange and distortion of hydrogen bonds within the amorphous $H$-bond network has an unusually large contribution to the heat capacity [9], which is of the order of $9k_B$ at room temperature. At the melting point of bulk ice the onset of this liquid configurational entropy is discontinuous, as expected in a first-order phase transition, which translates into the aforementioned sharp peak in the specific heat. However, in Hock et al.'s results this discontinuity is not seen, because, as explained by the authors, phase transitions in small size clusters have a different behavior [12, 13].
It is not straightforward to associate the sudden rise of the specific heat at the so-called “melting” temperature to an actual melting transition. Two possible scenarios are compatible with this behavior. As the authors point out, they might just be seeing the onset of the latent heat peak, and the heat capacity might decrease at higher temperatures. The specific heat of small molecular clusters has been theoretically studied and shown to exhibit a broad peak at temperatures well below the bulk $Tm$, which tends to sharpen and shift closer to $Tm$ with increasing cluster size [12]. This agrees with the observation that at $Tm$ the clusters do not become fully liquid, meaning that the difference in entropy between the solid and “cluster-liquid” state is very small and should decrease with the size of the clusters, making the phase transition continuous. Indeed, the structure of ice clusters as small as the ones studied by the authors is most probably amorphous [14] and not too far from the average structure of the liquid clusters after the transition. A further study with larger clusters at this melting transition should eventually show the appearance of the expected latent heat peak, confirming this scenario.
Looking forward, what is needed is a combination of further experiments like this with theoretical studies of the temperature dependence of the structure and dynamical properties of ice clusters. Even if not mentioned by the authors, understanding the nature of the hydrated electron in water clusters may also help to understand what is happening at the phase transition. An alternative viewpoint, to explain the lack of the latent heat peak and also the temperature dependence of the onset of melting, calls for understanding the spatial localization of the hydrated electron in water clusters. This is another size-dependent question still under debate [15]. The total or partial melting of the clusters may depend on whether the extra electron is located at the surface (a highly possible option for the $(\mathrm{H_2O})_{48}^-$ cluster [15]) or inside the cluster. Only pulling together all these elements may solve this small but central area within the larger puzzle of the structure and properties of water.
### References
1. D. Eisenberg and W. Kauzmann, The Structure and Properties of Water (Oxford University Press, Oxford, 1969)[Amazon][WorldCat].
2. R. Ludwig, Angew. Chem. Int. Ed. 40, 1808 (2001).
3. C. A. Angell, Ann. Rev. Phys. Chem 55, 559 (2004).
4. E. Stanley and P. Debenedetti, Physics Today 56, 40 (2003).
5. P. Kumar, G. Franzese, and H. E. Stanley, Phys. Rev. Lett. 100, 105701 (2008).
6. C. Hock, M. Schmidt, R. Kuhnen, C. Bartels, L. Ma, H. Haberland, and B. v.Issendorff, Phys. Rev. Lett. 103, 073401 (2009).
7. A. Bragg, J. Verlet, A. Kammrath, O. Cheshnovsky, and D. Neumark, Science 306, 669 (2004).
8. M. Schmidt, R. Kusche, W. Kronmüller, B. von Issendorff, and H. Haberland, Phys. Rev. Lett. 79, 99 (1997).
9. M. Shiga and W. Shinoda, J. Chem. Phys. 123, 134502 (2005).
10. J. Douady, F. Calvo, and F. Spiegelman, Eur. Phys. J. D 52, 47 (2009).
11. M. V. Fernández-Serra and E. Artacho, J. Chem. Phys. 121, 11136 (2004).
12. P. Sheng, R. W. Cohen, and J. R. Schrieffer, J. Phys. C: Solid State Phys. 14, L565 (1981).
13. D. J. Wales, Energy Landscapes (Cambridge University Press, Cambridge, 2003)[Amazon][WorldCat].
14. Jan K. Kazimirski and Victoria Buch, J. Phys. Chem. A 107, 9762 (2003).
15. W. A. Donald, R. D. Leib, J. T. O’Brien, A. I. S. Holm, and E. R. Williams, Proc. Natl. Acad. Sci. 105, 18102 (2008).
### About the Author: Marivi Fernández-Serra
Marivi Fernández-Serra received her Ph.D. in 2005 from the University of Cambridge, UK. The study of water under different environments by means of ab initio and molecular dynamics simulations is prime amongst her diverse research interests. She has been an assistant professor of Physics at Stony Brook University since 2008.
http://mathhelpforum.com/geometry/208295-equation-circles.html
# Thread:
1. ## Equation of circles
So, I've attached both the question and my working inside.
for part 1, I'm not sure how to get the radius and for part 2, how to get the point?
2. ## Re: Equation of circles
Originally Posted by Thorpelizts
So, I've attached both the question and my working inside.
for part 1, I'm not sure how to get the radius and for part 2, how to get the point?
In part 1, complete the square on the x terms and complete the square on the y terms. Then move any extra constants to the right. You should then be able to write the equation in the form $\displaystyle \begin{align*} (x - h)^2 + (y -k )^2 = r^2 \end{align*}$.
As for part b), the y axis is where x = 0, so let x = 0 in your equation and solve for y.
3. ## Re: Equation of circles
I didn't look at your work, I don't care for images of hand-written work.
We are given the equation:
$x^2+y^2-6x-8y+16=0$
You want to complete the square on the two variables:
$(x^2-6x+9)+(y^2-8y+16)=-16+9+16$
Now, you want to get this into the form:
$(x-h)^2+(y-k)^2=r^2$
where the center is at $(h,k)$ and the radius is $r$.
Can you proceed?
For part b), what is the equation of the $y$-axis?
edit: sorry, I didn't see your post Prove It.
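For reference, a hedged sketch of where the hints above lead (using only the equation $x^2+y^2-6x-8y+16=0$ quoted in the thread): completing the square gives $$(x-3)^2+(y-4)^2=9,$$ so the centre is $(3,4)$ and the radius is $r=3$. For the second part, putting $x=0$ gives $9+(y-4)^2=9$, i.e. $(y-4)^2=0$, so the circle meets the $y$-axis at the single point $(0,4)$ (it is tangent to the axis there).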
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9382275938987732, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/21067/noetherian-rings-of-infinite-krull-dimension/21083
|
## Noetherian rings of infinite Krull dimension?
Since Noetherian rings satisfy the ascending chain condition, every such ring must contain infinitely many chains of prime ideals s.t. the heights of these chains are unbounded.
The only example I know of is the one due to Nagata [1962]: we take a polynomial ring in infinitely many variables over a field, and consider the infinite collection of prime ideals generated by disjoint subsets of the variables. Then we localise the ring at the multiplicative set given by the complement of the union of these prime ideals. With a little work, we can show that, by an appropriate choice of the subsets, the localised ring will be Noetherian and of infinite Krull dimension. Eisenbud (ex. 9.6) provides a good walkthrough.
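For concreteness, here is a rough sketch of that construction (the block sizes below are one common choice; details are as in Eisenbud's exercise). Let $k$ be a field and $R = k[x_1, x_2, x_3, \ldots]$. Choose integers $0 = m_1 < m_2 < m_3 < \cdots$ with $m_{i+1} - m_i$ strictly increasing, and set $P_i = (x_{m_i+1}, \ldots, x_{m_{i+1}})$. Put $S = R \setminus \bigcup_i P_i$ and $A = S^{-1}R$. One checks that every nonzero element of $A$ lies in only finitely many maximal ideals and that each localisation of $A$ at a maximal ideal is Noetherian, so $A$ is Noetherian; on the other hand $\operatorname{ht}(P_i A) = m_{i+1} - m_i \to \infty$, so $\dim A = \infty$.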
The question is: what are other examples of Noetherian rings of infinite Krull dimension?
-
The fact that Nagata had to come up with a fairly involved construction means that they probably don't come up in practice, so would you like us to look up other artificial examples? – Harry Gindi Apr 12 2010 at 5:05
Such Noetherian rings are pathological by nature, so artificial examples are probably the way to go... working first from geometry, then to the related algebraic treatment? – moby Apr 12 2010 at 5:23
The question could be improved by giving it more focus: one could presumably modify Nagata's construction in various small ways -- e.g. by starting with $\mathbb{Z}$ instead of a field -- but you are probably not interested in such examples. So, what kind of features are you looking for in other examples? E.g. "Does there exist a Noetherian ring of infinite Krull dimension such that...X?" By filling in X, you ask a binary question, which all of a sudden mathematicians are interested in answering. Just asking "What's out there?" doesn't have the same appeal. – Pete L. Clark Apr 12 2010 at 7:20
The entire point of coming up with pathological examples is to show that certain conditions are necessary. Presumably the only motivation to come up with more pathological examples to illustrate necessity of a specific condition is if they are simpler to construct, or if you've come up with some sort of "construction scheme" that gives you a whole class of pathological examples. – Harry Gindi Apr 12 2010 at 9:14
@Harry: Oftentimes one wants to prove a result for all noetherian rings and the infinite dimensional case is a sticking point. For example Lemma 3.1.5 in Brian Conrad's notes on Grothendieck duality has a very nice (and fairly involved) proof due to Gabber. The lemma is almost trivial when the ring has finite Krull dimension. So having examples at hand could be very helpful. – Jesse Burke Apr 12 2010 at 23:33
## 2 Answers
As you did not ask your ring to be commutative, you can probably take differential operators on Nagata's example. You may want to look at the 1982 paper by Goodearl and Warfield, where they construct a commutative ring and its differential operator ring, both of which have infinite Krull dimension.
-
If you check the book "Krull Dimension" (Memoirs of the American Mathematical Society 133, 1973), you will find examples of commutative Noetherian integral domains of arbitrary infinite ordinal Krull dimension.
-
The complete reference: R. Gordon and J. C. Robson, Krull dimension, Memoirs Amer. Math. Soc, No. 133 (1973). – Yves Cornulier Jan 12 at 20:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9234371185302734, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/23212/shors-algorithm-why-throw-away-the-fx/23215
|
# Shor's Algorithm: Why throw away the f(x)?
I'm having a little trouble understanding Shor's algorithm - namely, why do we throw away the result f(x) that we get after applying the F gate? Isn't that the answer we need?
My notation: $\newcommand{\ket}[1]{\left|#1\right>}$
$F(\ket x \otimes \ket0) = \ket x \otimes \ket{f(x)}$ ,
$f(x) = a^x \mod r$
-
## 1 Answer
$f(x)$ is easy to compute for a given $x$, even classically, and we are not really interested in its value. What we are interested in, for Shor's algorithm, is the period of $f$, i.e. the smallest $p$ such that $f(x+p)=f(x)$ for all $x$.
The way Shor's algorithm works is to prepare the superposition $\sum_x\ket x\otimes\ket0$ and apply the function $F$. The result is $$\sum_x\ket x\otimes\ket{f(x)} = \sum_{l<r}\left(\sum_k\ket{x_l+k p}\right)\otimes\ket l,$$ where $x_l$ is the smallest $x$ such that $f(x_l)=l$. Measuring the second register ($f(x)$) would give $l$, which is random and does not give much information. The key trick of Shor's algorithm is that measuring the second register projects the $x$ register onto the state $\sum_k\ket{x_l+k p}$, whose period $p$ can be found by the quantum Fourier transform. This period then allows one to deduce the factors of $r$.
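To make the role of the period concrete, here is a minimal classical sketch in Python: the brute-force `period` loop stands in for the quantum Fourier transform step, and the rest is the standard classical post-processing of Shor's algorithm.

```python
from math import gcd

def period(a, r):
    # Brute-force period finding -- the step the quantum Fourier transform replaces.
    p, val = 1, a % r
    while val != 1:
        val = (val * a) % r
        p += 1
    return p  # smallest p with a^p = 1 (mod r)

def factors_from_period(a, r):
    # Classical post-processing: turn the period into factors of r.
    assert gcd(a, r) == 1
    p = period(a, r)
    if p % 2 == 1 or pow(a, p // 2, r) == r - 1:
        return None   # unlucky choice of a; try another
    half = pow(a, p // 2, r)
    return gcd(half - 1, r), gcd(half + 1, r)

print(factors_from_period(7, 15))   # (3, 5)
```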
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458901286125183, "perplexity_flag": "head"}
|
http://nrich.maths.org/6870/solution
|
# Mixing Lemonade
##### Stage: 3 Challenge Level:
Lots of great solutions were submitted to this problem, using a wide variety of approaches. The problem prompted you to use fractions, ratios, percentages, and graphs. In addition, you could investigate and consider which methods worked most effectively in different situations.
The first part of the task is to determine the mixture with the stronger-tasting lemonade:
the first glass has $200$ml of lemon juice and $300$ml of water;
the second glass has $100$ml of lemon juice and $200$ml of water.
What is the best way to compare these two mixtures? There are several different ways, as shown by the different explanations submitted.
Mahir, from Saltus Grammar School converted the values given so that there is the same amount of water in each glass:
In the first glass, there is $300$ml water, and in the second, there is $200$ml. Therefore, to equate the amounts of water, he multiplied everything in the first glass by two, and everything in the second glass by three. Each glass now has $600$ml water, and so can now be compared. Note that it is "ok" to multiply everything in each glass by a certain number as it is the proportions or relative amounts we are interested in, rather than the absolute amounts.
For the first glass, we now have:
$400$ml lemon juice and $600$ml water
For the second glass, we now have:
$300$ml lemon juice and $600$ml water
From this, we can now tell that the mixture in the first glass must taste stronger: for the same amount of water, there is more lemon juice.
Jonathan, from Wilson's School, used a similar method. However, he instead made the amounts of lemon juice equal, and then saw which glass had more water. The glass with less water for the same amount of juice will be stronger, as the juice is less diluted.
Another related method, used by many people was to use ratios, fractions and/or percentages. Will K. from Wilson's School gave a lovely explanation:
Glass 1 would be stronger as it has a simplified ratio of lemon juice: water of $2:3$, and so is $\frac{2}{5}$ lemon juice, or $40$%, whereas glass $2$ has a simplified ratio of $1:2$ for lemon juice: water, and so is $\frac{1}{3}$ lemon juice, or $33.\dot{3}$ % lemon juice, and so is weaker than glass $1$.
The strategy is to work out the ratio (e.g. $40$ml lemon juice to $120$ml water would be $40:120$ lemon juice: water), simplify it (e.g. $40:120$ simplifies to $4:12$, which simplifies to $1:3$), and then turn that into a fraction ($1:3$ would be $\frac{1}{4}$ as $1 +3 = 4$ and so the $1$ part that is lemon juice is $\frac{1}{4}$ of the drink), and then turn that into a percentage ($\frac{1}{4}$ into $25$%). The glass with the highest percentage being lemon juice (or the lowest percentage being water) would be the strongest glass of lemonade.
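The same bookkeeping is easy to automate. A small Python sketch (the two glasses from this problem are used as the example; any other amounts could be substituted):

```python
from fractions import Fraction

def juice_fraction(juice_ml, water_ml):
    """Fraction of the whole mixture that is lemon juice."""
    return Fraction(juice_ml, juice_ml + water_ml)

glass1 = juice_fraction(200, 300)   # 2/5, i.e. 40%
glass2 = juice_fraction(100, 200)   # 1/3, i.e. about 33.3%

stronger = "glass 1" if glass1 > glass2 else "glass 2"
print(f"{stronger} is stronger: {float(glass1):.1%} vs {float(glass2):.1%}")
```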
Will expressed the strength of the lemonade as a percentage, as these can be easily compared. Sharumilan, also from Wilson's School converted the fractions so that they had a common denominator. In this way, they can be more easily compared. In fact, this is the same as converting to percentages; percentages are fractions with a denominator of $100$! Here is Sharumilan's answer:
The first glass has the stronger tasting lemonade because $\frac{2}{5}$ ($\frac{6}{15}$) of it is lemon juice while only $\frac{1}{3}$ ($\frac{5}{15}$) of the lemonade in the second glass is lemon juice.
What about a more visual approach? Iona, from Whitby Maths Club compared the two solutions by drawing these graphs. Dominic, from Wilson's School, also suggested a graphical approach:
If you wanted to set it out in a pie chart, you could easily see which mixture was stronger. The graph with the biggest chunk of lemon in it would be the strongest.
Another suggestion made by several people would be to convert the amounts so that there are equivalent amounts of water or lemon juice, as described above. Then, you could draw a glass and visually see which is the stronger mixture.
Dulan, from Wilson's School, suggested a very nice method, which can display graphically multiple different mixtures:
He thought that a graph could be constructed with $x$ and $y$ axes. On the $x$ axis could be "amount of lemon juice", and on the $y$ axis, "amount of water". Try constructing this for yourself.
What does it mean if two mixtures have the same $x$ coordinate, but different $y$ coordinates?
What if they have the same $y$ coordinates, but different $x$ coordinates?
You should be able to construct straight lines from the origin to the various points representing different mixtures and compare their strengths.
Along each line the strength of all of the mixtures is the same as the proportions do not change.
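A quick way to draw Dulan's picture is sketched below (a matplotlib sketch; the two glasses from this problem are hard-coded purely as an example):

```python
import matplotlib.pyplot as plt

mixtures = {"glass 1": (200, 300), "glass 2": (100, 200)}   # (lemon juice ml, water ml)

for name, (juice, water) in mixtures.items():
    # Each mixture is a point; the line through the origin has slope water/juice,
    # so a lower slope (less water per unit of juice) means a stronger mixture.
    plt.plot([0, juice], [0, water], marker="o", label=f"{name}, slope {water/juice:.2f}")

plt.xlabel("amount of lemon juice (ml)")
plt.ylabel("amount of water (ml)")
plt.legend()
plt.show()
```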
We have now seen different examples of approaches to this problem. Do the different methods always work? Which method is most efficient?
Nathan, from Wilson's School noted that different methods are more efficient in different situations:
The ratio or fraction strategy would be more efficient for difficult fractions e.g. $\frac{300}{430}$ and $\frac{290}{560}$ but using a mental method would be more efficient for numbers like $\frac{30}{50}$ and $\frac{40}{50}$, as you can see that $\frac{40}{50}$ has a more concentrated supply of lemonade.
Several people had their own preference of method, depending on what they felt most comfortable using.
Sharumilan explained this, and also examined the combination of the different mixtures:
I always use fractions because it seems easier for me and the graph helped me to get the fractions quickly so it could actually be a mix of both.
The strength of combined mixtures always has to be the same as or in between the two mixtures. I have said "the same as" because if the two mixtures are identical, the combined mixture has to be the same as well.
Try this out for yourself, with some squash, or cordial...
Thank you very much to everyone who submitted solutions. There were many correct solutions, and so we could not mention them all. Well done!
If you enjoyed this problem, try the extension problem Ratios and Dilutions.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9627962708473206, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/17711/lagrange-four-squares-theorem-efficient-algorithm-with-units-modulo-a-prime/18081
|
## Lagrange four-squares theorem: efficient algorithm with units modulo a prime?
I'm looking at algorithms to construct short paths in a particular Cayley graph defined in terms of quadratic residues. This has led me to consider a variant on Lagrange's four-squares theorem.
The Four Squares Theorem is simply that for any $n \in \mathbb N$, there exist $w,x,y,z \in \mathbb N$ such that $$n = w^2 + x^2 + y^2 + z^2 .$$ Furthermore, using algorithms presented by Rabin and Shallit (which seem to be state-of-the-art), such decompositions of $n$ can be found in $\mathrm{O}(\log^4 n)$ random time, or about $\mathrm{O}(\log^2 n)$ random time if you don't mind depending on the ERH or allowing a finite but unknown number of instances with less-well-bounded running time.
I am considering a Cayley graph $G_N$ defined on the integers modulo $N$, where two residues are adjacent if their difference is a "quadratic unit" (a multiplicative unit which is also quadratic residue) or the negation of one (so that the graph is undirected). Paths starting at zero in this graph correspond to decompositions of residues as sums of squares.
It can be shown that four squares do not always suffice; for instance, consider $N = 24$, where $G_N$ is the 24-cycle, corresponding to the fact that 1 is the only quadratic unit mod 24. However, finding decompositions of residues into "squares" can be helpful in finding paths in the graphs $G_N$. The only caveat is that only squares which are relatively prime to the modulus are useable.
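A small Python sketch of the graph just described, which also checks the $N = 24$ remark (plain adjacency lists only; the path-finding application is not attempted here):

```python
from math import gcd

def quadratic_units(N):
    """Residues mod N that are squares of multiplicative units."""
    units = [a for a in range(1, N) if gcd(a, N) == 1]
    return {(a * a) % N for a in units}

def cayley_graph(N):
    """Adjacency lists of G_N: x ~ y iff x - y is (plus or minus) a quadratic unit mod N."""
    qs = quadratic_units(N)
    gens = {q % N for q in qs} | {(-q) % N for q in qs}
    return {x: sorted((x + g) % N for g in gens) for x in range(N)}

# Sanity check of the N = 24 remark: 1 is the only quadratic unit mod 24,
# so every vertex of G_24 has exactly two neighbours and the graph is a 24-cycle.
G = cayley_graph(24)
assert all(len(nbrs) == 2 for nbrs in G.values())
```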
So, the question: let $p$ be prime, and $n \in \mathbb Z_p ( := \mathbb Z / p \mathbb Z)$. Under what conditions can we efficiently discover multiplicative units $w,x,y,z \in \mathbb Z_p^\ast$ such that $n = w^2 + x^2 + y^2 + z^2$? Is there a simple modification of Rabin and Shallit's algorithms which is helpful?
Edit: In retrospect, I should emphasize that my question is about efficiently finding such a decomposition, and for $p > 3$. Obviously for $p = 3$, only $n = 1$ has a solution. Less obviously, one may show that the equation is always solvable for $n \in \mathbb Z_p^\ast$, for any $p > 3$ prime.
-
Notational conflict, sorry. Where I was taught mathematics, $\mathbb Z_N$ was synonymous with $\mathbb Z / N \mathbb Z$, although I am aware (and have now been reminded) of the notational convention to which you refer. – Niel de Beaudrap Mar 10 2010 at 21:54
Also: yes, the problem is not strictly comparable to (neither more or less general than) algorithms to solve the four-square problem. On the one hand, we require coprimality with $p$; on the other, we are not in characteristic zero. This does open the possibility of significantly different ways of obtaining solutions, while necessarily forcing new ones for "small" $n$. However, it is conceivable that an algorithm may be easily obtained by a modification of their algorithm. – Niel de Beaudrap Mar 10 2010 at 21:56
Well: my approach to finding paths in the graphs $G_N$ is to experimentally try to decompose a residue into a sum or difference of squares, and decompose using the Chinese Remainder Theorem when I find factors of $N$. But using this reduction to find a path in $G_N$, seen as decomposition of a residue into quadratic units, I still require the same number of summands for each factor of $N$, which (as I remarked above) may be e.g. more than two. I picked 'four' for my question because I thought that solutions in $\mathbb N$ for the four-squares problem might be helpful. – Niel de Beaudrap Mar 10 2010 at 22:41
Will: thanks for the references, they will be helpful. I still need to try and obtain an unconditional result, but I'm optimisitic now (esp. after re-reading Felipe's comments more carefully) that such an algorithm can be obtained. The problem is not that I have doubts as to whether the result will work 'empirically'; the problem is that I really want the algorithm, and not actually the paths for any particular case. Thus, I want a running time which is unconditional if such an algorithm can be obtained at all. – Niel de Beaudrap Mar 13 2010 at 10:17
## 2 Answers
How about trying $x,y,z=1,\ldots,[\log p]^2$ (or some such bound) and testing if $n-x^2-y^2-z^2$ is a square modulo $p$? That should be efficient. Proving that it works might require GRH. Did you want an algorithm with a proof?
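A rough Python sketch of this suggestion, with Euler's criterion as the residue test (extracting an actual square root of the remainder would be a separate, also polynomial-time, step):

```python
from math import log

def search_three_plus_square(n, p):
    """Try small x, y, z and test whether n - x^2 - y^2 - z^2 is a QR mod p."""
    bound = max(2, int(log(p)) ** 2 + 1)
    for x in range(1, bound):
        for y in range(1, bound):
            for z in range(1, bound):
                w2 = (n - x * x - y * y - z * z) % p
                # Euler's criterion: w2 is a nonzero quadratic residue mod p
                if w2 != 0 and pow(w2, (p - 1) // 2, p) == 1:
                    return w2, x, y, z   # then w^2 + x^2 + y^2 + z^2 = n (mod p)
    return None

print(search_three_plus_square(10, 1009))
```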
-
Being a bit of a conservative sort, I would like an algorithm which works independently of unproven conjectures; although the idea that such a simple approach "ought to" work is a nice one. – Niel de Beaudrap Mar 10 2010 at 21:59
Also: testing whether an integer is a quadratic residue currently doesn't have known efficient algorithms. (If it did, this problem would be substantially easier.) Not being familiar with the density-of-primes result which I suspect you might be using --- I am not myself a number theorist, and came upon this problem as if by accident --- does this amount to an attempt to solve the four-squares problem in the integers? (What, then, about e.g. the cases $n \in \{1,2,3\}$?) – Niel de Beaudrap Mar 10 2010 at 22:07
Testing if a number $a$ is a quadratic residue mod p takes deterministically $O(\log p)$ steps by the stupid method of computing $a^{(p-1)/2}$ mod p (by square-and-multiply). Extracting the square root however can only be done in probabilistic polynomial time. My suggestion is not to solve the 4 squares problem for integers. You can even solve $n=x^2+y^2$ mod p for any nonzero n as long as p > 5. – Felipe Voloch Mar 10 2010 at 22:18
Right. Quadratic residuacity is a difficult problem modulo composite N; the fact that the prime case is different slipped my mind. That just leaves the question of doing it unconditionally. – Niel de Beaudrap Mar 10 2010 at 22:31
After reading your remarks more carefully (esp. with respect to root extraction), it seems clear to me that there is an efficient, unconditional, algorithm implicit in your responses. Thanks for reminding me that residuacity is easy modulo primes, and for the further information that root extraction is also easy with randomness modulo primes. – Niel de Beaudrap Mar 13 2010 at 10:10
As indicated by Felipe (primarily in his responses to my comments of his solution above), the problem is actually easy modulo a prime $p > 3$. Here I outline an explicit random poly-time solution, depending on ideas contributed by him.
First, the special case $p = 5$. We can only express 0 as a sum of an even number of quadratic units (which in this case are $\pm 1$), and can only express the quadratic units themselves using exactly one or at least three quadratic units. We set this case aside and assume $p > 5$.
Second, for any $p \equiv 3 \pmod{4}$, the quadratic residues do not include $-1$; therefore $0$ cannot be written as a sum of two quadratic units. However, since $0 = m + (-m)$ for any unit $m$, a representation of $0$ by four quadratic units follows once every nonzero residue has one by two; so in every case it suffices to express each $n \in \mathbb Z_p \setminus \{0\}$ as a sum of two quadratic units.
A classical result is that modulo $p$, and for $p > 5$, the number of ordered pairs $(q,q+1)$ of consecutive quadratic residues is asymptotically a large constant fraction of $p$ (specifically 1/4); and similarly for the number of pairs $(q,q+1)$ for which $q$ is a quadratic unit and $q+1$ a "non-quadratic" unit. We may then express $$n = nq(q+1)^{-1} + n(q+1)^{-1}$$ where we choose $q+1$ to have the same "residuacity" as $n$; this is then a sum of two quadratic units. Because the suitable pairs $(q,q+1)$ occur a large constant fraction of the time, we may easily generate such a pair whether $n$ is a quadratic unit or not; and by efficiently finding roots for $nq(q+1)^{-1}$ and $n(q+1)^{-1}$, we may then unconditionally find solutions in random polynomial time.
(I post this answer in order to provide an explicit record, and to emphasize that it can be done unconditionally; however I'm upvoting his answer as the one which contributed the useful ideas.)
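A hypothetical Python sketch of the decomposition just described; sympy's `sqrt_mod` is assumed for the root extraction, and all names are for illustration only:

```python
import random
from sympy.ntheory import sqrt_mod

def is_qr(a, p):
    # Euler's criterion for a nonzero residue a
    return pow(a, (p - 1) // 2, p) == 1

def two_quadratic_units(n, p):
    """Return (w, x), units mod p, with w^2 + x^2 = n (mod p); assumes p > 5 and n != 0."""
    n %= p
    while True:
        q = random.randrange(1, p - 1)           # so q and q + 1 are both units mod p
        if is_qr(q, p) and is_qr(q + 1, p) == is_qr(n, p):
            inv = pow(q + 1, p - 2, p)           # (q + 1)^{-1} mod p
            w = sqrt_mod(n * q * inv % p, p)     # root of n q/(q+1), a QR by choice of q
            x = sqrt_mod(n * inv % p, p)         # root of n/(q+1), a QR by choice of q
            return w, x

p, n = 101, 37
w, x = two_quadratic_units(n, p)
assert (w * w + x * x) % p == n
```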
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9405853152275085, "perplexity_flag": "head"}
|