http://mathoverflow.net/questions/9115/algebraic-geometry-for-cocommutative-corings-with-counit/9179
## Algebraic geometry for cocommutative corings with counit.
Is there a notion of algebraic geometry for these objects? If we take the dual category of the category of cocommutative corings with counit, is there geometry in it in a sense dual to affine schemes? Can we look at the set of coideals of a coring, put a space structure on it, and consider sheaves (maybe cosheaves) of sections?
Don't you mean coalgebraic cogeometry? – Qiaochu Yuan Dec 16 2009 at 17:35
I suggest accepting Leonid's answer. – Greg Kuperberg Dec 23 2009 at 7:02
## 4 Answers
Let's consider coalgebras over a field rather than corings. There is a theorem that every (coassociative) coalgebra over a field is the union of its finite-dimensional subcoalgebras. So the category of coalgebras over a field k is the category of ind-objects in the category of finite-dimensional coalgebras, while the latter is the opposite category to the category of finite-dimensional algebras. Now if we restrict ourselves to the commutative case, then the category of finite-dimensional commutative algebras with unit over k is the opposite category to the category of 0-dimensional schemes of finite type over k, or just schemes finite over Spec k. Combining it all, the category of cocommutative coalgebras with counit over k is equivalent to the category of ind-0-dimensional ind-schemes of ind-finite type over k, or just ind-finite ind-schemes over Spec k. This explains, in particular, that the "underlying topological space" functor maps cocommutative coalgebras to discrete sets rather than anything else, and the coalgebra itself is simply the infinite direct sum of the coalgebras sitting at the points of this set.
It seems that Soibelman did algebraic geometry in the category of coalgebras; he called these noncommutative thin schemes. – Shizhuo Zhang Dec 16 2009 at 22:37
Very interesting, so in that case the spaces that arise are not that interesting. But still I think it is nice that the points of the set are subcoalgebras; it does look a bit like a spectrum. So maybe it could work for corings as well. – jef Dec 16 2009 at 22:49
combining the two answers, maybe one can say that corings and k-coalgebras already behave 'geometrically'; maybe it just doesn't make sense to take the opposite category to get more geometry. Indeed, in the paper of Kontsevich and Soibelman (arxiv.org/abs/math/0606241) they define noncommutative thin schemes as covariant functors from finite-dimensional k-algebras (not necessarily commutative) to sets commuting with finite projective limits, and they go on to prove that this category is equivalent to the category of all k-coalgebras. – jef Dec 16 2009 at 22:58
Yes, there is this term "thin schemes" being used in connection with coalgebras. Yes, points of the spectrum set correspond to cosimple subcoalgebras, a cosimple coalgebra being defined as a coalgebra having no nonzero proper subcoalgebras. Yes, I do think it is better to take the category of coalgebras rather than its opposite category as a category of geometric objects. – Leonid Positselski Dec 16 2009 at 23:36
Kontsevich-Rosenberg also worked a little bit on the category of coalgebras as an example of their Q-category framework for a generalized sheaf condition. The reference is their paper "Noncommutative spaces and flat descent". – Shizhuo Zhang Dec 17 2009 at 9:53
I like to think of algebraic geometry as being born out of the fact that Ring behaves a lot like Set^op. For instance, in Ring, A ∐ (B × C) = (A ∐ B) × (A ∐ C), where ∐ is the coproduct in Ring, which is just the tensor product. This formula is also true in Set after we swap ∐ and ×. This suggests that we can use our intuition about Set to think about Ring if we replace Ring by its opposite, the category of affine schemes.
Cocommutative corings with counit are monoid objects in the opposite category of (Ab, ⊗). However, to get the morphisms to point in the right direction, we need to take the opposite category again: so Coring is (CAlg((Ab, ⊗)^op))^op. It follows that products in Coring are computed by tensor products in Ab, and colimits are formed by taking colimits of underlying abelian groups; and in fact Coring is a cartesian closed category, even more like Set than Ring^op is. In particular, we don't want to take the opposite category of Coring. Maybe this isn't surprising, since every set is already a cocommutative comonoid (w.r.t. ×) in a unique way, and we have a functor Set → Coring taking a set to the free abelian group on it.
These are purely formal observations, and I don't know whether anyone has built a more concrete geometric theory, with say a functor from Coring to some kind of topological spaces with extra structure.
Are you sure that the coproduct in Ring is the tensor product? – Qiaochu Yuan Dec 16 2009 at 19:41
Yes (where by Ring I meant commutative rings, otherwise it is false). – Reid Barton Dec 16 2009 at 19:52
Right, sorry. I got a little confused. – Qiaochu Yuan Dec 16 2009 at 20:19
the coproduct in Rings should be the free product – Shizhuo Zhang Dec 16 2009 at 22:40
The tensor product over Z is the coproduct, since Z is the initial object, so it's all very concrete. In fact, the tensor product is also the pushout of rings. – Harry Gindi Dec 17 2009 at 16:25
One can do (relative) algebraic geometry with respect to any symmetric monoidal category, with affine schemes corresponding to the opposite category of (commutative) monoid objects. This viewpoint is developed in the paper Au-dessous de Spec Z (in French) by Toen and Vaquie. By taking the category of $\mathbb{Z}$--modules with tensor products over $\mathbb{Z}$ one recovers usual algebraic geometry.
For (cocommutative) corings, you just have to do the same in the opposite category $(\mathbb{Z}-Mod)^{op}$. The big question is: what do you take as your monoidal structure? If you just dualize diagrams, your new coproduct would be the old product, i.e. direct product of the underlying abelian groups. Or instead of this you take usual tensor products. Each of these approaches will produce a different geometry.
I remember a course given by Lieven Le Bruyn on Kontsevich-Soibelman (the noncommutative coalgebra thing), and he mentioned that by the "discrete" nature of coalgebras one could never expect them to describe anything but the etale topology of a scheme, which I think is precisely what Leonid observed in his answer.
Leonid speaks with great authority. I'd like to point out that you get a different answer in the category of graded coalgebras. If $A$ is a graded vector space, it has a graded dual $A'$ which is smaller than its full vector space dual $A^*$. Indeed, if the grading is locally finite, then this $A'$ has the same dimension sequence as $A$ and $A'' = A$. If $A$ is a locally finite, graded coalgebra, then its coalgebra structure is given by an infinite sequence of finite tensors, and $A'$ is equivalently a graded algebra. Graded coalgebra homomorphisms also transpose to graded algebra homomorphisms. Recall that if $A'$ is also finitely generated (and scalar in degree $0$), then it corresponds to a projective variety with a choice of an ample line bundle. The morphisms between these are a perfectly good non-full subcategory of the projective schemes.
The full dual $A^*$ is also the graded completion of $A'$. So what Leonid is saying in this case is that if you take the graded completion of a finitely generated, graded algebra, it localizes the projective variety to the apex of its affine cone. This is similar to what Leonid describes for $\text{Spec}(A^*)$ in general.
I said in a previous version of this answer that $\text{Spec}(A^*)$ atomizes the variety $\text{Proj}(A')$ to a "Cantor-set-like" structure. As Leonid points out in a comment, this is totally wrong in context. However, it is true that if $A$ is a finitely generated algebra, it has a completion $\hat{A}$ related to coalgebras such that $\text{Spec}(\hat{A})$ is an atomized form of $\text{Spec}(A)$. Namely, let $A'$ be the vector space of those dual vectors on $A$ that factor through finite-dimensional algebra quotients of $A$. Then $A'$ is a coalgebra, and $\hat{A} = (A')^*$ is an atomization in the sense that $\text{Spec}(\hat{A})$ is a 0-dimensional scheme whose points are the closed points of $\text{Spec}(A)$. (It isn't a Cantor set though.)
This is a good example. I am not sure how to atomize a projective variety into a Cantor-set-like structure, though. If C is a nonnegatively graded coalgebra over k with C_0=k, then the coalgebra C is conilpotent and its spectrum, as defined in my answer, consists of a single point. When the dual graded algebra to C is finitely generated, the ind-scheme corresponding to C can be also thought of as a formal scheme. This formal scheme is basically the affine cone over the corresponding projective variety, formally completed at the origin. So it has a single closed point. – Leonid Positselski Dec 17 2009 at 18:22
Once again, you have quickly caught me in an absurd error. Maybe I should have you pre-referee my papers. Anyway, thank you; I will fix it. – Greg Kuperberg Dec 17 2009 at 19:03
http://math.stackexchange.com/questions/259543/rate-of-convergence-of-left1-fracax-rightx-to-operatornameexpa/259553
# Rate of convergence of $\left[1+\frac{a}{x}\right]^x$ to $\operatorname{exp}[a]$ as $x\rightarrow\infty$
It's well-known that $\lim_{x\rightarrow\infty}\left[1+\frac{a}{x}\right]^x=\operatorname{exp}[a]$. I am wondering how fast does the limit converge as $x$ increases, and how the speed of convergence depends on $a$. That is, I would like to find out what $f(x,a)$ is where:
$$\left|\left[1+\frac{a}{x}\right]^x-\operatorname{exp}[a]\right|=\mathcal{O}(f(x,a))$$
However, I'm having trouble evaluating that absolute value. Any tips? Perhaps this a known result...
Take logarithms, set $h = 1/x$, and turn it into a question about the rate of convergence of $\frac{\log(1+ha)}{h} - a$. This last expression is $O(ha^2/2)$, so your $f$ is something like $f(x,a) = \frac{a^2}{2x}$. – Hans Engler Dec 15 '12 at 22:31
## 1 Answer
$$\left(1+\frac{a}{x}\right)^x=\exp\left(x\ln\left(1+\frac{a}{x}\right)\right).$$ When $x\gg a$, expand the logarithm in a series, giving $$\left(1+\frac{a}{x}\right)^x\approx\exp\left(x\left[\frac{a}{x}-\frac{a^2}{2x^2}\right]\right)=e^a\cdot e^{-a^2/2x}\approx e^a\left(1-\frac{a^2}{2x}\right).$$ So it seems the answer to your question is $$f(x,a)=\frac{a^2 e^a}{2x}.$$
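The estimate $f(x,a)=a^2 e^a/(2x)$ is easy to check numerically; here is a quick sketch (the helper name `gap` is mine):

```python
import math

def gap(a, x):
    """The quantity being bounded: |(1 + a/x)**x - e^a|."""
    return abs((1 + a / x) ** x - math.exp(a))

a = 2.0
for x in (10.0, 100.0, 1000.0):
    # the ratio gap / (a^2 e^a / (2x)) should tend to 1 as x grows
    print(x, gap(a, x) / (a**2 * math.exp(a) / (2 * x)))
```

For $a=2$ the ratio approaches $1$ as $x$ increases, consistent with the leading-order term derived above.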
Thanks! Makes complete sense (and I keep forgetting the "exponentiating the log" trick)... – M.B.M. Dec 16 '12 at 4:34
http://mathoverflow.net/questions/29926/3n-2m-pm-41-is-not-possible-how-to-prove-it
## $3^n - 2^m = \pm 41$ is not possible. How to prove it?
$3^n - 2^m = \pm 41$ is not possible for integers $n$ and $m$. How to prove it?
How many congruences have you tried? How about mod 9? – Charles Matthews Jun 29 2010 at 14:58
The equation $3^n-2^m=\pm1$ in integers was fully solved by Levi ben Gerson ($\approx1320$). What's the trouble to do this one yourself by considering modulo 4 and 3 (or 9)? It looks a homework problem if you don't state your reasons and don't indicate your approaches. – Wadim Zudilin Jun 29 2010 at 14:59
Wadim: if n is positive even, and m is congruent to 2 mod 6 (but not equal to 2), then 3^n - 2^m is congruent to +41 mod 8 and mod 9. So I don't see how "modulo 8 and modulo 9 suffice". – Michael Lugo Jun 29 2010 at 15:36
If m and n are even ... well, any mathematician would know what to do next. – Charles Matthews Jun 29 2010 at 15:42
If n and m are both even, then 41 is a difference of two squares, which factorises. – Fedor Petrov Jun 29 2010 at 15:44
## 2 Answers
The congruence $3^n - 2^m \equiv 41\pmod{60}$ has no solutions.
The congruence $3^n - 2^m \equiv -41\pmod{72}$ has no solutions.
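Since the powers of $3$ and of $2$ are eventually periodic modulo any $N$, both claims reduce to a finite check. A sketch in Python (the function names are mine):

```python
def residues(base, mod):
    """All residues of base**k, k >= 0, modulo mod."""
    seen, r = set(), 1 % mod
    while r not in seen:
        seen.add(r)
        r = r * base % mod
    return seen

def attainable(t, mod):
    """Can 3^n - 2^m be congruent to t modulo mod?"""
    threes, twos = residues(3, mod), residues(2, mod)
    return any((a - b - t) % mod == 0 for a in threes for b in twos)

print(attainable(41, 60), attainable(-41, 72))  # False False
```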
Very nice! Is there an integer $t$ such that $3^n-2^m=t$ has no solutions in non-negative integers $m,n$ but for which this cannot be proved by reducing the equation modulo $N$ as above for any $N$? I suspect so (number theory is too hard for this sort of technique to be that powerful) but it would be nice to know an explicit example! – Kevin Buzzard Jun 29 2010 at 19:37
I have a different opinion about the existence of such $t$ - I think it does not exist. But I would also very welcome to see a counterexample if it exists. – Max Alekseyev Jun 29 2010 at 19:45
Kevin's question intrigued me. So I wondered if these moduli were special somehow. It turns out that $3^n-2^m\equiv 41$ fails modulo 601 (and then again at 6553) and $3^n-2^m\equiv -41$ fails modulo 1321. All of these moduli are primes (as that is what I restricted my search to). It seems that if we find a prime for which both 2 and 3 have relatively small orders, the chance that we hit $t$ gets small. – Pace Nielsen Jun 29 2010 at 21:16
@Michael, you've done it modulo 72 yesterday! See your comment to mine above. Because $72=8\cdot 9$ (I believe). Modulo 60 is much less trivial, Max's answer is really nice. (In dreams I started to think of applying linear forms in logs. :-) ) – Wadim Zudilin Jun 30 2010 at 0:35
@Kevin: I suggest that you turn your interrogative comment into a new question. It will get a lot more attention that way, and rightfully so, I think. – Pete L. Clark Jun 30 2010 at 7:48
As a valuable hint for solving the problem, I consider the following extract from my lectures on elementary number theory.
Theorem ($\approx1320$; Levi ben Gerson 1288--1344). The equations $$(1) \quad 3^p-2^q=1$$ and $$(2) \quad 2^p-3^q=1$$ have no solutions in integers $p,q>1$, except the solution $p=2$, $q=3$ to equation (1).
Proof. (1) If $p=2k+1$, then $$2^q=3^p-1=3\cdot9^k-1\equiv2\pmod4,$$ which is impossible for $q>1$.
If $p=2k$, then $2^q=3^p-1=(3^k-1)(3^k+1)$ implying $3^k-1=2^u$ and $3^k+1=2^v$. Since $2^v-2^u=(3^k+1)-(3^k-1)=2$, we have $v=2$ and $u=1$. This corresponds to the unique solution $q=u+v=3$ and $p=2$.
(2) If $q\ge1$, then $3^q+1$ is not divisible by $8$. Indeed, if $q=2k$, then $3^q+1=9^k+1\equiv2\pmod8$; and if $q=2k+1$, then $3^q+1=3\cdot9^k+1\equiv4\pmod8$. Therefore $p\le2$, hence $p=2$. The latter implies $q=1$, which does not correspond to a solution.
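A brute-force search over a small range of exponents is consistent with the theorem (the range bounds here are arbitrary):

```python
# look for solutions of 3^p - 2^q = 1 or 2^p - 3^q = 1 with p, q > 1
sols = [(p, q)
        for p in range(2, 40)
        for q in range(2, 40)
        if 3**p - 2**q == 1 or 2**p - 3**q == 1]
print(sols)  # [(2, 3)], i.e. 3^2 - 2^3 = 1
```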
http://math.stackexchange.com/questions/183908/division-by-integers
# division by integers
Let $d, k$ be integers, with $k$ even. Suppose that $d \mid 2k$, that $d$ does not divide $2$, and that $d$ does not divide $k$. Show that $d = 2k$.
(I'm really just trying to understand the 2nd last line, in this answer to the question: Prove that if $p$ is an odd prime that divides a number of the form $n^4 + 1$ then $p \equiv 1 \pmod{8}$)
## 1 Answer
It's not true. Take $k=6$ and $d = 4$ for example.
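The counterexample is immediate to verify:

```python
k, d = 6, 4
assert 2 * k % d == 0   # d divides 2k (4 divides 12)
assert 2 % d != 0       # d does not divide 2
assert k % d != 0       # d does not divide k
assert d != 2 * k       # and yet d is not equal to 2k
print("counterexample checks out")
```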
You're right. I see now. Could you possibly explain to me how the author of the reference concludes that d = 2k in the link I provided? I'd greatly appreciate that. – confused Aug 18 '12 at 7:43
@confused I don't know what the purpose of that answer was. It looks like he wanted to generalize the question to other powers of $k$, but this generalization isn't correct (with $k = 6$, $5$ is an odd divisor of $2^6 + 1$ and is not $\equiv 1 \bmod 2*6$.) – Cocopuffs Aug 18 '12 at 8:00
http://math.stackexchange.com/questions/78124/proving-a-sequence-has-a-permutation-in-which-no-member-is-equal-to-its-successo
# Proving a sequence has a permutation in which no member is equal to its successor
What conditions show that a sequence can be permuted so that no two successive members are equal?
An obvious necessary condition is that no collection of more than half the members of the sequence can have the same value. This might be a sufficient condition as well. – Did Nov 2 '11 at 8:55
@Didier Piau: I'm wondering, what does "more than half the members of the sequence" mean exactly? – Rasmus Nov 2 '11 at 9:03
@zoo: With permutation, do you mean an arbitrary self-bijection of the natural numbers or one which fixes almost all elements? – Rasmus Nov 2 '11 at 9:04
Finite sequence, or infinite? – Gerry Myerson Nov 2 '11 at 9:14
@Rasmus, Unsurprisingly, the expression half the members of the sequence refers to finite sequences. And, unless I am missing something, the infinite sequence version is trivial: you lose if there is exactly one infinite class and you win otherwise. – Did Nov 2 '11 at 9:57
## 2 Answers
Here’s a proof that Christian’s greedy algorithm works.
Suppose that there are $n$ balls and $k$ colors. Suppose further that color $i$ occurs $n_i$ times, so that $n_1+\dots+n_k=n$, and that $n_i\le\lceil n/2 \rceil$ for each $i$. Finally, assume that $n_1\ge n_2\ge\dots\ge n_k$.
Even if we’ve already placed some balls, at most one color is unavailable, so we may take the next ball to be of color $1$ or color $2$. A problem would arise only if color $1$ is unavailable and $n_2=0$, but in that case $n_1=n$, which is impossible. To show that the induction goes through, it suffices to show that after the new ball is placed, there is no color with more than $\lceil(n-1)/2\rceil$ balls left.
Suppose first that the ball just placed is of color $1$. Clearly $n_1-1\le\lceil(n-1)/2\rceil$. Suppose that $n_i>\lceil(n-1)/2\rceil$ for some $i>1$. Then $\lceil(n-1)/2\rceil<n_i\le\lceil n/2\rceil$. If $n=2m$, this is $m<n_i\le m$, which is absurd, so $n=2m+1$. But then we have $m<n_i\le m+1$, so $n_i=m+1$, and $n\ge n_1+n_i\ge 2n_i=2m+2>n$, which is also absurd. Thus, the induction goes through just fine if the ball just placed is of color $1$.
Now suppose that it’s of color $2$. If $n_1>\lceil(n-1)/2\rceil$, then $\lceil(n-1)/2\rceil<n_1\le \lceil n/2\rceil$. As in the last paragraph $n$ must be odd, say $n=2m+1$, and $n_1=m+1$, which is a problem. However, this case arises only if color $1$ is unavailable, which means that we just placed a ball of color $1$ at the previous step. At the beginning of that step, therefore, there must have been $n+1=2m+2$ balls, of which $m+2$ were of color $1$, and that’s impossible. Thus, we must have $n_1\le\lceil(n-1)/2\rceil$, as desired. The colors $i$ for $i>2$ can then be handled as in the last paragraph.
Assume everything is finite. So basically we have finitely many balls of different colors assorted into heaps according to colors. I think that Didier Piau's conjecture is true and that the following greedy algorithm works: Take as next ball a ball from an allowed heap of maximal size.
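A minimal sketch of this greedy algorithm in Python (the function name and return convention are mine):

```python
from collections import Counter

def rearrange(seq):
    """Greedily order seq so that no two adjacent elements are equal,
    always drawing from an allowed heap of maximal size.
    Returns the arrangement, or None if the greedy step gets stuck."""
    counts = Counter(seq)
    out = []
    for _ in range(len(seq)):
        # heaps we may draw from: nonempty, and not the value just placed
        allowed = [v for v in counts if counts[v] > 0 and (not out or v != out[-1])]
        if not allowed:
            return None
        pick = max(allowed, key=lambda v: counts[v])
        counts[pick] -= 1
        out.append(pick)
    return out

result = rearrange("aabbbc")
print(result)
assert all(x != y for x, y in zip(result, result[1:]))
```

When some value occupies more than half the slots (e.g. `"aaab"`), the greedy step eventually finds no allowed heap and returns `None`, matching the necessary condition discussed in the comments.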
It does work; a proof by induction isn’t very difficult. – Brian M. Scott Nov 2 '11 at 10:34
http://quant.stackexchange.com/questions/2993/capm-beta-of-zero-and-its-implications-on-diversification/3033
# CAPM - Beta of zero and its implications on diversification
I don't know if this is the right forum in which to ask this question, but here goes. I'm working through Luenberger's Investment Science. The form of CAPM model given in the book is
$$\bar{r}_i - r_f = \beta_i (\bar{r}_M - r_f)$$
It says that if your asset has a $\beta$ of zero, then $\bar{r}_i = r_f$. The book says that "The reason for this [$\bar{r}_i = r_f$] is that the risk associated with an asset that is uncorrelated with the market can be diversified away. If we had many such assets, each uncorrelated with the others and with the market, we could purchase small amounts of them, and the resulting total variance would be small."
I agree with Luenberger's statement, but what bothers me is the part "if we had many such assets". If we don't, $\bar{r}_i = r_f$ still holds! What would be the explanation then? Is there some theory behind CAPM that would say a statement like the following: "if there exists one asset $i$ with $\beta_i = 0$, then there exist infinitely many assets with $\beta = 0$, that are uncorrelated with each other". This is a strong and sort of ridiculous statement, but it would imply the description quoted from Luenberger - I could make a portfolio with such assets whose variance approaches zero.
Any help wrapping my head around this concept would be greatly appreciated.
It is extremely unrealistic for any appreciable asset to have zero correlation with the market. The only way to achieve that in practice is if we held only cash and assumed there was only one currency in the world and that there was never any inflation. – chrisaycock♦ Feb 26 '12 at 2:36
@chrisaycock I agree with you. A reasonable answer to my question might be that the model comes with the caveat $| \beta | > 0$, but Luenberger doesn't say this! He addresses it in a hand-wavy way that bothers me. It's a theoretical question, but I'm hoping that the theory has a reasonable answer to my question. – raoulcousins Feb 26 '12 at 3:17
## 1 Answer
There is a rationale here and it has to do with the connection between diversification and the number of assets in a portfolio.
Suppose we purchase an equal-weighted portfolio of n stocks. The variance of the return is then:
$$\sigma_{p}^2 = \sum_i \sum_j w_i w_j \operatorname{Cov}(R_i, R_j)$$
In the above notation, the sums run over $i$ and $j$ respectively, covering each element of the variance-covariance matrix of asset returns.
Since each weight is $1/n$, this equation is equal to:
$$\sigma_{p}^2 = \frac{1}{n^2} \sum_i \sum_j \operatorname{Cov}(R_i, R_j).$$
Let's say the average variance across all stocks is $\sigma_{avg}^2$ and the average covariance among all pairs of stocks is $Cov_{avg}$.
Then this equation simplifies to the sum of two products (see Investments, Bodie, Kane, and Marcus for the derivation):
$$\sigma_{p}^2 = \frac{1}{n}\,\sigma_{avg}^2 + \frac{n-1}{n}\,Cov_{avg}.$$
Let's analyze the first product. As the number of stocks increases, the contribution of the variance of the individual stocks becomes very small, because the term $(1/n)\,\sigma_{avg}^2$ approaches zero as $n$ becomes large.
Now let's analyze the second product. The contribution of the average covariance across stocks to the portfolio variance stays non-zero, because the term $[(n-1)/n]\,Cov_{avg}$ has a limit of $Cov_{avg}$.
So as the number of assets increases the portfolio variance is approximately equal to the average covariance.
In your above example, if there are many such assets then the contribution to variance is very small. Also, if the assets are "uncorrelated with others and the market" then the second term is also small. Therefore the variance is nearly zero when both of the conditions identified by Luenberger are met.
Here is a more intuitive explanation. A Beta of 0 does not imply zero variance; securities still have idiosyncratic risk (i.e. a random component of return not explained by systematic exposure). A risk-free investment is still less risky than a security with a beta exposure of zero, although they both have the same expected return. However, investing in a large portfolio with a beta exposure of zero is far less risky than investing in a single security with a beta exposure of zero.
Adding the idiosyncratic error term 'e' to the formula you identify captures this idea of diversifying away residual risk. 'e' is a random variable whose expectation is 0 but variance is greater than zero. Only as one builds larger portfolios is this 'e' term increasingly diversified away (i.e. the variance of the error term approaches zero):
$$\bar{r}_i - r_f = \beta_i (\bar{r}_M - r_f) + e$$
Put another way, if the number of securities is large (and weights are small), the covariances become the most important driver of portfolio variance. Consider the variance-covariance matrix of the asset returns. For an $n$-asset portfolio there are $n$ variances (the diagonal) and $n(n-1)/2$ distinct covariances (the off-diagonal triangle). A portfolio of two stocks has 2 variances and 1 covariance (i.e. variances dominate). A large portfolio of 1,000 stocks has 1,000 variance terms but 499,500 covariance terms (i.e. covariance dominates).
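The two-term decomposition above is easy to tabulate; in this sketch the average variance and covariance figures are made-up round numbers:

```python
avg_var = 0.04   # assumed average single-stock variance
avg_cov = 0.01   # assumed average pairwise covariance

def portfolio_var(n):
    """Equal-weighted portfolio variance from the two-term decomposition."""
    return (1 / n) * avg_var + ((n - 1) / n) * avg_cov

for n in (2, 10, 100, 1000):
    print(n, portfolio_var(n))   # tends to avg_cov as n grows
```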
I'll accept it, because it's a good explanation of how portfolio risk works. – raoulcousins Mar 10 '12 at 16:43
http://math.stackexchange.com/questions/168448/two-different-characterization-of-differentiable-function?answertab=votes
# Two different characterization of “differentiable function”
In a calculus class we were given the following definition of "differentiable function" (working with 2 variables):
Definition: Let $A \subseteq \mathbb{R}^2$, and $f : A \to \mathbb{R}$. We say that $f$ is differentiable in $(x_0, y_0) \in A$ if the graph of $f$ admits a tangent plane at $(x_0, y_0, f(x_0, y_0))$.
Then the teacher gave us the following equivalent characterization:
Proposition: $f$ is differentiable in $(x_0, y_0)$ iff
1) $f$ admits partial derivatives in $(x_0, y_0)$
2) the following holds: $$\lim_{(x,y) \to (x_0, y_0)} \frac{f(x,y) - f(x_0,y_0) - A(x-x_0) - B(y-y_0)}{\sqrt{(x-x_0)^2 + (y-y_0)^2}} = 0$$ where $$A = \frac{\partial f}{\partial x}(x_0,y_0)$$ $$B = \frac{\partial f}{\partial y}(x_0,y_0)$$ i.e. the partial derivatives evaluated in $(x_0,y_0)$
So here's my questions:
1. Did I get it right? Are the two definitions equivalent?
2. How to prove that the limit is zero iff the function admits a tangent plane?
Isn't (2) quite obvious? If a tangent plane at $P$ exists, its equation has to be $$f(x,y) = f(P) - \frac{\partial f}{\partial x}(P)(x-x_P) - \frac{\partial f}{\partial y}(P)(y-y_P)$$ and it follows that the numerator of the limit is zero. So what's the point of the denominator? Couldn't one use anything else for the denominator? Am I missing something? Is (2) noteworthy?
EDIT: As noted below, my "it's obvious from eq. of tangent plane" approach is, in fact, wrong.
## 2 Answers
The definition is stated in terms of "the graph of $f$ admits a tangent plane". Did your teacher precede these words by defining precisely what it means for a graph to admit a tangent plane? If not, then this "definition" was meant as an intuitive description of the concept, not as a definition in the mathematical sense.
In the proposition, part 2) is actually the key, and part 1) could even be omitted. Namely, one can state the definition as follows:
A function $f$ is differentiable at $(x_0,y_0)$ iff there exist numbers $A$ and $B$ such that $$\lim_{(x,y) \to (x_0, y_0)} \frac{f(x,y) - f(x_0,y_0) - A(x-x_0) - B(y-y_0)}{\sqrt{(x-x_0)^2 + (y-y_0)^2}} = 0$$
Using this definition one can prove (and this is a reasonable exercise to do) that both partial derivatives exist and are equal to $A$ and $B$, respectively.
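As a concrete numerical sanity check (the example function and base point are mine), take $f(x,y)=x^2+y^2$ at $(x_0,y_0)=(1,2)$, so $A=2$ and $B=4$; the ratio in the limit shrinks with the step size:

```python
import math

def f(x, y):
    return x**2 + y**2

x0, y0, A, B = 1.0, 2.0, 2.0, 4.0

for t in (1e-1, 1e-2, 1e-3):
    x, y = x0 + t, y0 + t   # approach (x0, y0) along a diagonal
    num = f(x, y) - f(x0, y0) - A * (x - x0) - B * (y - y0)
    den = math.hypot(x - x0, y - y0)
    print(t, num / den)     # proportional to t, so it tends to 0
```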
One can also give a geometric interpretation of the limit being zero. The equation $g(x,y)=f(x_0,y_0) + A(x-x_0) + B(y-y_0)$ is an equation of a plane. The difference $f(x,y)-g(x,y)$ measures how much the graph of the function deviates from this plane in the vertical direction. The limit being zero means that the vertical deviation at $(x,y)$ is small compared to the horizontal distance between $(x_0,y_0)$ and $(x,y)$: one pictures this as a surface and a plane meeting "at zero angle", that is, being tangent to each other. (But if at this point you'll ask me to define what being tangent means, I'll only point you back at the limit: the graphs of $f$ and $g$ are tangent if $$\lim_{(x,y) \to (x_0, y_0)} \frac{f(x,y) - g(x,y)}{\sqrt{(x-x_0)^2 + (y-y_0)^2}} = 0$$ The limit is the key; geometry is optional but helpful.)
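To make the limit concrete, here is a small numeric sketch (my own, not part of the answer): for a smooth test function, with $A$ and $B$ the partial derivatives at $(x_0,y_0)$, the ratio in the limit shrinks as $(x,y)$ approaches $(x_0,y_0)$.

```python
import math

def f(x, y):
    return x * x * y + math.sin(y)       # an arbitrary smooth test function

x0, y0 = 1.0, 0.5
A = 2 * x0 * y0                          # df/dx at (x0, y0)
B = x0 * x0 + math.cos(y0)               # df/dy at (x0, y0)

def ratio(x, y):
    num = f(x, y) - f(x0, y0) - A * (x - x0) - B * (y - y0)
    den = math.hypot(x - x0, y - y0)
    return num / den

# approach (x0, y0) along a diagonal; the ratio shrinks roughly like h
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    print(h, ratio(x0 + h, y0 + h))
```

With any other choice of $A$ and $B$ the numerator would only be $O(h)$ rather than $o(h)$, and the ratio would not tend to zero.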
In your post you wrote down the equation of the tangent plane incorrectly, and, which is a more serious mistake, denoted it by the same letter $f$ as the function itself. This is what led you to the erroneous conclusion that "numerator is zero". I avoid this confusion by using the letter $g$ for the function that represents the tangent plane: $z=g(x,y)$ is my notation for the tangent plane.
-
Ok, now things seems to be a little more clear! I'll focus on the definition with the limit and the geometric interpretation of that. Thanks for your answer. – Haile Jul 9 '12 at 12:32
Suppose that $G\subseteq \mathbb{R}^n$ is open, and $f:G\rightarrow \mathbb{R}^m$. Then $f$ is differentiable at $x$ if there is some linear transformation $A\in {\rm Hom}_{\mathbb{R}}(\mathbb{R}^n, \mathbb{R}^m)$ so that
$$f(x + h) - f(x) = Ah + o(\|h\|).$$
This abstracts to infinite-dimensional settings. In standard coördinates, the matrix of the linear transformation is the matrix of partials.
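A concrete sketch (mine, not from the answer) of this definition for a map $f:\mathbb{R}^2\to\mathbb{R}^2$: the error $f(x+h)-f(x)-Ah$ is $o(\|h\|)$ when $A$ is the matrix of partials.

```python
import math

def f(x, y):
    # a sample map R^2 -> R^2
    return (x * y, x + math.sin(y))

def apply_jacobian(x, y, hx, hy):
    # the matrix of partials at (x, y), applied to h = (hx, hy)
    return (y * hx + x * hy, 1.0 * hx + math.cos(y) * hy)

def error_ratio(t, x0=1.0, y0=2.0):
    hx, hy = 0.6 * t, -0.8 * t                 # chosen so that |h| = t
    f1, f2 = f(x0 + hx, y0 + hy)
    g1, g2 = f(x0, y0)
    a1, a2 = apply_jacobian(x0, y0, hx, hy)
    return math.hypot(f1 - g1 - a1, f2 - g2 - a2) / t

for t in (1e-1, 1e-2, 1e-3):
    print(t, error_ratio(t))                   # tends to 0 with t
```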
-
-1: not an appropriate explanation at this level. – user31373 Jul 9 '12 at 1:05
How did you decide that? In fact, most books I know (Apostol's "Calculus" II, (8.7), page 258, 2nd Ed., Rudin's "Principles of Mathematical Analysis", def. 9.11 page 212, 3rd Ed., etc.) define a differentiable function exactly in this way, i.e. by means of a linear map. This shouldn't be advanced stuff for a student dealing with differentiable functions of several variables. – DonAntonio Jul 9 '12 at 2:16
I appreciate your answer, but I'm not able to understand it completely. The course was a lightweight introduction to two-variable calculus for CompSci undergrads, and the teacher followed a quite relaxed approach mostly based on ideas from one-variable calculus. I probably should have written that in my question. – Haile Jul 9 '12 at 12:36
http://mathoverflow.net/questions/80724?sort=oldest
Taylor’s theorem and the symmetric group
Anytime I see an $n!$ in some formula, my instinct is to look for the symmetric group on $n$ letters coming in somewhere. I have never done this seriously with the $n!$ in Taylor's theorem.
Question: Is there some way to see the $n!$ in Taylor's theorem coming naturally from a symmetry group?
Possible lead:
Here is a definition of $f^{(n)}(a)$ which does not depend on finding earlier derivatives: Let $g: \mathbb{R}^n \rightarrow \mathbb{R}$ be defined by $g(x_1,x_2,x_3, ..., x_n)$ is the lead coefficient of the unique $n^{th}$ degree polynomial passing through $(a, f(a)), (x_1, f(x_1)),(x_2, f(x_2)),...,(x_n, f(x_n))$. Then $f^{(n)}(a)$ is $1/n!$ times the limit of $g(x_1,x_2,x_3, ..., x_n)$ as $(x_1,x_2,x_3, ..., x_n)$ approaches $(a,a,a,...,a)$. Could the $1/n!$ be related to the symmetry of $g$ under exchange of coordinates?
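A quick numeric sketch (my own, not from the question): the lead coefficient of the interpolating polynomial through $(a,f(a))$ and $n$ nearby points is a divided difference, and it tends to $f^{(n)}(a)/n!$ as the points approach $a$. Here $f=\exp$, $a=0$, $n=3$, so the limit is $1/3! = 1/6$.

```python
from math import exp

def divided_difference(f, xs):
    # Newton's divided difference f[x_0, ..., x_n]: the lead coefficient
    # of the interpolating polynomial through the points (x_i, f(x_i)).
    ys = [f(x) for x in xs]
    for level in range(1, len(xs)):
        ys = [(ys[i + 1] - ys[i]) / (xs[i + level] - xs[i])
              for i in range(len(ys) - 1)]
    return ys[0]

a, n = 0.0, 3
for h in (0.5, 0.1, 0.02):
    xs = [a + h * k for k in range(n + 1)]     # a together with n nearby points
    print(h, divided_difference(exp, xs))      # tends to exp(0)/3! = 1/6
```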
-
6 Answers
There must be many ways to think of this. Here's one:
The symmetric group is involved with homogeneous polynomials of degree $n$ because they correspond to symmetric multilinear functions of $n$ variables, and division by $n!$ appears when recovering the former from the latter. For example, a homogeneous polynomial map $f:V\to W$ of degree $3$ determines a map $F:V\times V\times V\to W$ by $$F(x,y,z)=f(x+y+z)-f(x+y)-f(x+z)-f(y+z) +f(x)+f(y)+f(z)-f(0),$$ and $f(x)$ can be recovered as $$\frac{F(x,x,x)}{3!}.$$
The same formula for $F(x,y,z)$ can be applied to other functions $f(x)$. Applied to a polynomial of degree $<3$ it gives zero. Applied to a polynomial of degree $\le3$ it gives the multilinear function that corresponds to the purely cubic part of $f$. Applied to any function at all it gives a symmetric function of $(x,y,z)$ that vanishes when $x$ or $y$ or $z$ is zero.
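A quick scalar sanity check (my own) of the two paragraphs above: with $f(t)=t^3$, the inclusion-exclusion expression gives $F(x,y,z)=3!\,xyz$, so $F(x,x,x)/3!$ recovers $f(x)$; and the same expression vanishes on polynomials of degree $<3$.

```python
from math import factorial

def polarize(f):
    # the inclusion-exclusion expression from the answer, for 3 variables
    def F(x, y, z):
        return (f(x + y + z) - f(x + y) - f(x + z) - f(y + z)
                + f(x) + f(y) + f(z) - f(0))
    return F

cube = lambda t: t ** 3
F = polarize(cube)
print(F(2.0, 2.0, 2.0) / factorial(3))   # recovers f(2) = 8.0
print(F(1.0, 2.0, 5.0))                  # 3! * (1*2*5) = 60.0

quad = lambda t: 7 * t ** 2 - 3 * t + 1  # degree < 3
print(polarize(quad)(1.0, 2.0, 5.0))     # 0.0
```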
The foundation of all differential calculus is the process that takes $f$, subtracts $f(0)$, and then makes the best linear approximation of the result. If you apply this process in all three variables to $f(x+y+z)$, you are first making $F(x,y,z)$ and then (assuming that $f$ was sufficiently smooth) making best linear approximation in all variables. If you then set $x=y=z$ and divide by $3!$, you get the third term in the Taylor series of $f$.
Of course, if you perform the linearizations in the three variables one after another then you see this trilinear map as a derivative of a derivative of a derivative.
The reason I look at it this way is that a homotopical categorical analogue of this plays a big role in my "functor calculus". The analogue there of division by $n!$ is a homotopy orbit spectrum for an action of the symmetric group.
In a little more detail:
Let $V$ and $W$ be model categories in which filtered homotopy colimits commute with finite homotopy limits. Assume that $W$ is stable, meaning that the final object is equivalent to the initial object and that homotopy pushout squares are the same as homotopy pullback squares. For example, $W$ might be the category of spectra and $V$ might be spaces or spectra. Consider functors $f:V\to W$ that are homotopy-invariant (preserve weak equivalences). Say that $f$ has degree $\le 1$ if it preserves homotopy pushout squares. Say that it has degree $\le d$ if it takes those $(d+1)$-dimensional square diagrams all of whose $2$-dimensional faces are homotopy pushouts to homotopy pushout cubes, those in which the last object is the homotopy colimit of the others. Call $f$ reduced if it takes the trivial object $\star$ to itself (up to equivalence). Call it linear if it has degree $\le 1$ and is reduced. There is a linearization process that takes a reduced functor and makes the universal linear functor under it: basically $hocolim \Omega^nf\Sigma^n$. This can be easily generalized to make something called $P_1$ (first polynomial approximation) which takes any functor $f$, reduced or not, and makes the universal degree $\le 1$ functor $P_1f$ under it. (If we first reduce $f$ by taking the homotopy fiber of $f\to f(\star)$ and then linearize, it's the same as first doing $P_1$ and then reducing.) It can be further generalized to make something called $P_d$ ($d$th polynomial approximation) which takes any functor $f$ and makes the universal degree $\le d$ functor $P_df$ under it. The homotopy fiber of the canonical map $P_df\to P_{d-1}f$ is always a degree $\le d$ functor such that $P_{d-1}$ of it is trivial. Call such functors homogeneous of degree $d$. When $d=1$ this is the same as linear.
There is the functor of $d$ variables $(X_1,\dots,X_d)\mapsto f(X_1+\dots +X_d)$, where $+$ means (derived) coproduct. This is symmetric in the sense that there are isomorphisms when you permute the inputs, satisfying the obvious identities. Reduce it in all variables simultaneously, by first making a cubical diagram consisting of the objects $f(X_S)$, coproduct of all the $X_i$ for $i\in S$, and then looking at the homotopy fiber of the canonical map from $f(X_1+\dots +X_d)$ to the holim of all the rest. I call this reduced symmetric functor of $d$ variables the $d$th crosseffect of $f$. If we simultaneously linearize it in all variables, we get a symmetric multilinear functor.
If $f$ has degree $\le (d-1)$ then its $d$th crosseffect is contractible. If $f$ is homogeneous of degree $d$ then its $d$th crosseffect is already multilinear, and this way of making a symmetric multilinear functor from a homogeneous degree $d$ functor can be inverted: $f(X)$ is the homotopy orbit object for the action of the $d$th symmetric group on $F(X,\dots ,X)$, where the action on $F(X,\dots ,X)$ is the one that you get from the symmetry on $F(X_1,\dots ,X_d)$.
For general $f$ the multilinearization of the crosseffect is the same symmetric multilinear functor that corresponds in this way to the $d$th homogeneous part of $f$, i.e. to the homotopy fiber of $P_df\to P_{d-1}f$.
Again, by linearizing in one variable at a time you can work out a sense in which the multilinear functor corresponding to the $d$th homogeneous part is in fact an iterated derivative. But in this context it is important to have the option of performing linearization in all variables simultaneously, so to speak, in order that the result should be symmetric in the sense that is required for reconstructing the corresponding homogeneous functor. (Here symmetry is a structure, not a property.)
-
1+. It would be great if you elaborate on the last point. I think this will yield a good answer to the question because apparently, you don't just use permutations, but rather the whole group of permutations in a categorified setting for the derivative. – Martin Brandenburg Nov 12 2011 at 17:06
While this idea is awesome, it doesn't seem to explain why one should, in fact, divide by $n!$ there. – Will Sawin Nov 13 2011 at 17:30
Martin: I have now done so. Will: My observation that $n!$ comes into the formula for recovering a homogeneous polynomial from a symmetric multilinear polynomial is very closely related to point 3 in your answer. The fact that in the categorical version the "division by $n!$" involves the symmetric group rather than just its order may shed some light on the question, somehow. – Tom Goodwillie Nov 13 2011 at 17:42
One way is to use a combinatorial definition of the derivative. Let $A(z) = \sum a_n z^n$ be a power series. In combinatorics, where $A$ is likely to be an ordinary generating function, $a_n$ is likely to count the number of possible combinatorial structures of some kind on an ordered set with $n$ elements. For example when $a_n = 1$ the structure is "being an ordered set," when $a_n = a^n$ the structure is "being an ordered set together with a choice, for each element of the order, of a letter from an alphabet of size $a$," and so forth.
Then a combinatorial definition of the derivative $A'(z) = \sum na_n z^{n-1}$ is as follows: the derivative is an operation that given an $A$-structure produces a new type of structure, an $A'$-structure. An $A'$-structure is "being an ordered set together with an extra element, and an $A$-structure on the new order." This is because for an ordered set with $n$ elements there are $n+1$ possibilities for the extra element and $a_{n+1}$ possibilities for the $A$-structure on the new order.
Then after applying the derivative $k$ times we have added $k$ new elements. If we start from the empty set then there are $k!$ ways to do this, and the set of such ways to do this is naturally a torsor for the symmetric group $S_k$. (A simple example of one benefit of categorification: when you replace numbers by sets, they can support more complicated structures such as group actions.)
One nice property of this definition is that it offers a conceptual interpretation of the Leibniz rule. First, recall that if $A, B$ are two generating functions for $A$-structures and $B$-structures, then an $AB$-structure is a partition of an ordered set into a first segment and a second segment together with an $A$-structure on the first segment and a $B$-structure on the second segment, and the corresponding generating function is the product $AB$. Now, the derivative $(AB)'$ counts the number of ways to add one element to an ordered set and then put an $AB$-structure on the new order. The new element may be either in the first segment or the second segment, and this gives the two terms $A B'$ and $A' B$ in the Leibniz rule.
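The Leibniz rule that the paragraph interprets combinatorially can be spot-checked on formal power series represented as coefficient lists (my own sketch, not part of the answer):

```python
def mul(A, B):
    # product of two polynomials/truncated power series as coefficient lists
    C = [0] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            C[i + j] += a * b
    return C

def deriv(A):
    # formal derivative: z^n -> n z^(n-1)
    return [i * a for i, a in enumerate(A)][1:]

def add(P, Q):
    n = max(len(P), len(Q))
    P = P + [0] * (n - len(P)); Q = Q + [0] * (n - len(Q))
    return [p + q for p, q in zip(P, Q)]

A = [1, 3, 0, 5]        # 1 + 3z + 5z^3
B = [2, 1, 4]           # 2 + z + 4z^2
lhs = deriv(mul(A, B))
rhs = add(mul(deriv(A), B), mul(A, deriv(B)))
print(lhs == rhs)       # True
```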
There is a second, possibly more satisfying, explanation using the theory of combinatorial species and groupoid cardinality. I seem to recall that this explanation was written up by John Baez somewhere in This Week's Finds, but I can't currently find it.
-
This is a very nice answer, but also unsatisfying in some way. Trying to pin down what's unsatisfying - is it at all possible to prove that this combinatorial definition of the derivative is the same as the analytic definition without coming close to reproving Taylor's Theorem first? Of course the n! in exponential generating functions comes from the symmetric group - EGFs were chosen because of that, and not vice versa! – Alexander Woo Nov 12 2011 at 6:13
Also, I got the impression that Steve didn't want to assume that the power series had to be a generating function (or exponential generating function, as the case may be) of a species. (I even suspect that Steve has seen the species-theoretic (categorified) notion of differentiation.) So I think he wants to interpret the $n!$ in a way which works for arbitrary power series, as addressed in Will's answer. – Todd Trimble Nov 12 2011 at 12:50
@Alexander: well, in this answer I am really only concerned with formal power series, and there the derivative $D$ is uniquely characterized by the properties $D(1) = 0, D(x) = 1$, the Leibniz rule, and continuity with respect to the $z$-adic topology. The analytic content of Taylor's theorem (the first part Will mentions) isn't really relevant to the appearance of factorials. @Todd: one can do this by replacing counting with "weighted counting" (turning each $a_n$ into a formal variable), although I don't know how satisfying this is. – Qiaochu Yuan Nov 12 2011 at 15:38
I was not familiar with this. Are there any places you would recommend for reading a systematic treatment of it? – Phil Isett Nov 12 2011 at 22:46
@Phil: try Bergeron, Labelle, and Leroux's Combinatorial Species and Tree-Like Structures, although I am not sure it covers ordinary, as opposed to exponential, generating functions. (Exponential generating functions describe structures on unordered sets, and there is a second combinatorial definition of the derivative that applies in that case.) I think there is also an informal treatment in Flajolet and Sedgewick's Analytic Combinatorics (freely available here: algo.inria.fr/flajolet/Publications/books.html). – Qiaochu Yuan Nov 13 2011 at 0:21
1. Don't you mean to say that $f^{(n)}(a)$ is $n!$ times the lead coefficient? For instance, take $f=x^n$.
2. The functions $f_k(n)=\frac{n!}{(n-k)!k!}$ satisfy a discrete version of the derivative relation: $f_k(n)-f_k(n-1)=f_{k-1}(n-1)$. If we view Taylor's theorem as being about approximating with a very fine version of these functions, we just have to see where and why $k!$ arises in their combinatorics.
3. Fundamentally, we can break Taylor's theorem up into two parts - that functions are approximable by polynomials, which with your definition of $g$ could create an $n!$-free version, and that the $n$th derivative of $x^n$ is $n!$. The first part has no $n!$ in it. The second one doesn't seem very deep, but is very clearly about $S_n$. Take $x \cdot x \cdot x \cdot ... \cdot x$, and differentiate it $n$ times. By the product rule, we have to choose an order to differentiate the $x$s in. Because the second derivative of $x$ is zero, we must avoid repetition. Clearly, this is where your permutations come from.
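Both points can be machine-checked (my own sketch): the discrete-derivative identity for binomial coefficients is Pascal's rule, and differentiating $x^n$ a total of $n$ times multiplies out to $n!$.

```python
from math import comb, factorial

# point 2 (Pascal's rule as a discrete derivative):
# C(n, k) - C(n-1, k) = C(n-1, k-1)
ok = all(comb(n, k) - comb(n - 1, k) == comb(n - 1, k - 1)
         for n in range(1, 20) for k in range(1, n + 1))
print(ok)                       # True

# point 3: each application of d/dx to x^e multiplies by the exponent,
# so the 5th derivative of x^5 is 5 * 4 * 3 * 2 * 1 = 5!
c, e = 1, 5
for _ in range(5):
    c, e = c * e, e - 1
print(c == factorial(5))        # True
```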
-
1. Indeed I did - I goofed. 3. Nice observation! I think this answers my question at the level I wanted to see an answer. – Steven Gubkin Nov 13 2011 at 16:58
Seems to me the question is just a mighty complicated way to ask "how does $n!$ creep into the $n$-th derivative (analytic or formal) of $x^n$", to which the answer is of course, as Will Sawin noted, "through repeated application of the product rule".
I think the question should say Taylor's formula, not Taylor's theorem (and this is true for many uses of Taylor's theorem elsewhere). Taylor did not have any theorem, and what is nowadays called Taylor's theorem (any of the many forms) can be stated without mentioning $n!$ at all, after subtracting off the uninteresting polynomial part (once we've learned about linearity and how to differentiate monomials): "if $f$ is $n$ times differentiable at $a$ with its value and all $n$ derivatives zero at $a$ (and possibly additional regularity hypotheses in a neighbourhood of $a$) then (the error term) $f(x)$ satisfies (your favorite estimate or formula) for $x$ in some (specific) neighbourhood of $a$".
And if the question is about relating the coefficients of the Taylor polynomial to the derivatives, then it really boils down to understanding the differentiation of monomials.
-
I think the most obvious way to see the symmetric group appearing is the following. To calculate the difference between $f(x)$ and the constant $f(0)$
$f(x) - f(0) = \int_0^x f'(s_1) ds_1$
Often we do this move because $x$ is small, so we figure that a small change in the input to $f$ will result in the output changing depending on the derivative of $f$. But the same logic goes for $f'$, so applying the same formula to $f'$, we get:
$f(x) = f(0) + f'(0) x + \int_{0 < s_2 < s_1 < x} f''(s_2) ds_2 ds_1$
Note $f'$ may not change significantly from its initial value if $f''$ is under control, but there can be oscillations at the level of the next derivative responsible for $f'$ experiencing a small change -- in this case Taylor expansion usually is not helping us understand the problem. In any case, here we have an integral over the region $\{ 0 < s_2 < s_1 < x \}$. This is a fundamental domain for the action of $S_2$ on the square $\{ 0 < s_1, s_2 < x \}$, so if $f''$ is a constant $f''(0)$, you get the volume $x^2 / 2!$ times $f''(0)$.
Even if $f''$ is not constant, you can certainly write $f''(s_2) = f''(0) + \int_0^{s_2} f'''(s_3) ds_3$ and get $f''(0) x^2 / 2!$ plus an integral over $\{ 0 < s_3 < s_2 < s_1 < x \}$, which is again a fundamental domain for the action of $S_3$ and therefore has volume $x^3/3!$. Similar considerations apply to the higher order terms.
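A Monte Carlo check (my own, not from the answer) that the ordered region is a fundamental domain for $S_3$ acting on the cube, so its volume is $x^3/3!$:

```python
import random
from math import factorial

random.seed(0)
x, trials = 2.0, 200_000
hits = 0
for _ in range(trials):
    s1, s2, s3 = (random.uniform(0, x) for _ in range(3))
    if s3 < s2 < s1:                    # the region {0 < s3 < s2 < s1 < x}
        hits += 1
vol = (hits / trials) * x ** 3          # Monte Carlo estimate of its volume
print(vol, x ** 3 / factorial(3))       # both near 8/6 = 1.33...
```

The fraction of sample points satisfying the ordering is $1/3!$, since each of the $3!$ orderings of three independent uniform draws is equally likely.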
Normally we replace the use of Fubini's theorem here with an equivalent integration by parts so that we never see iterated integrals. (Fubini's theorem is one way to prove integration by parts, so it really is the same move.)
-
Looking at the answers leaves me with an uneasy feeling. We just made something simple into something very complicated. The understanding produced was borderline nil, because the question was wrong in the first place. Why make a simple thing complicated?
-
You are very wrong, even spiritually wrong. Even simple ideas in mathematics can become springboards to deeper understandings, and the fact that these answers haven't produced this understanding in you doesn't mean they don't enlighten anyone else. To pick out one, I'd say Tom Goodwillie's answer reflects great depth of understanding, based on decades of his own research. – Todd Trimble Nov 13 2011 at 12:40
You see, my stance is that these answers actually obscure the understanding. I can read them and understand them, but I still think that what we're doing here is just trying to fit something that doesn't quite fit into what we want it to fit. – nothappy Nov 13 2011 at 12:44
@nothappy: I am sorry, but unless you can prove that these answers obscure understanding, and you cannot, this answer is subjective and argumentative. – Todd Trimble Nov 13 2011 at 12:49
@Todd: Same applies to you :-) – nothappy Nov 13 2011 at 12:49
@nothappy: It's true that these answers are, in a sense, making something simple more complicated, or at least more subtle. It would be crazy to teach Taylor's theorem to calculus students using such an approach. However, it's important in mathematics to take simple ideas and ask what they mean in a deeper context. This is not always fruitful, but it can lead to richer interconnections and conceptual understanding. People can reasonably disagree about whether they have learned something, based on their background and taste, but it is silly to insist that nobody is learning anything here. – Henry Cohn Nov 13 2011 at 15:15
http://mathhelpforum.com/differential-equations/171048-explanations-these-laplace-transforms-properties.html
# Thread:
1. ## Explanations Of These Laplace Transforms and Properties
Hello, all.
I was on this site previously, but forgot my username, etc. I would like to say the website looks much improved, so good job to those who worked on it. I have a few questions about these Laplace transforms.
Some information: I took a Differential Equations course the semester before this one, and the course I am taking now, a Circuits class, calls for using this material. We never really covered Laplace transforms too in depth, just a basic "here is what they are" and some things to remember while using a table. I am not one too big on tables until I know the work in between, unless it calls for some massive learning I cannot comprehend yet at my current level of mathematical knowledge. So all these Laplace transforms, properties, and unit functions (i.e. step, impulse, pulse, and ramp functions) are being used in conjunction with my circuit analysis course, if this helps anyone.
Property/Transform 1:
$\int_{0}^t f(\lambda) d\lambda = \frac{1}{s} F(s)$
I was told that lambda was just a dummy variable of integration. But how would you, so to speak, integrate or arrive at the result of this property? Because I was given a simple problem to evaluate involving the unit step function...please do not solve it, I want to learn this.
$\int_{1}^t \mu(\lambda) d\lambda$
I wouldn't assume the professor wanted us to evaluate simply by saying using property so and so on table so and so gives yada yada. That is useless to me at least. I will post more as we progress here so thanks to anyone who takes the time to answer.
2. Originally Posted by IHuntMath
[snip]
Property/Transform 1:
$\int_{0}^t f(\lambda) d\lambda = \frac{1}{s} F(s)$
[snip]
This is wrong. The statement should be:
$\displaystyle LT\left[\int_{0}^t f(\lambda) d\lambda \right] = \frac{1}{s} F(s)$.
Here is a start:
$\displaystyle LT\left[\int_{0}^t f(\lambda) d\lambda \right] = \int_0^{+\infty} \left(\int_{0}^t f(\lambda) \, d\lambda\right) e^{-st} \, dt$
Now use integration by parts:
$\displaystyle = \lim_{\alpha \to +\infty} \left[- \frac{1}{s} e^{-st} \int_0^t f(\lambda) \, d\lambda\right]_{t=0}^{t = \alpha} - \int_0^{+\infty} \left( - \frac{1}{s} e^{-st} \right) f(t) \, dt$
= ....
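A numeric spot-check (my own, not part of the derivation above): take $f(t)=e^{-t}$, so $F(s)=1/(s+1)$ and $\int_0^t f(\lambda)\,d\lambda = 1-e^{-t}$; the transform of the running integral should then equal $F(s)/s = 1/(s(s+1))$.

```python
import math

f = lambda t: math.exp(-t)               # test signal
g = lambda t: 1.0 - math.exp(-t)         # its running integral, done by hand

def laplace(h, s, T=60.0, n=200_000):
    # crude trapezoid rule on [0, T]; the integrand is negligible past T here
    dt = T / n
    total = 0.5 * (h(0.0) + h(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * dt
        total += h(t) * math.exp(-s * t)
    return total * dt

for s in (0.5, 1.0, 3.0):
    print(s, laplace(g, s), (1.0 / (s + 1.0)) / s)   # last two columns agree
```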
3. Ah, thank you very much Mr. F.
I was a little confused because he just said "evaluate", and towards the end of the homework question it was like "take the Laplace transforms of such and such", so I was wondering if he was asking for two different things. But I don't see much of anything else I could do here. I should be able to get the rest of these; if not, I will ask a general question in this post on what I am stuck on. Thanks again for the time, Mr. F.
4. the statement is as follows:
Let $f\in E_{\alpha>0}$ with $F(x)=\displaystyle\int_0^x f(t)\,dt$ then $F\in E_\alpha$ and $\mathcal L(F)(s)=\dfrac1s\mathcal L(f)(s).$ ( $E_\alpha$ is the set of all functions exponentially bounded.)
We have $\displaystyle\left| F(x) \right|=\left| \int_{0}^{x}{f(t)\,dt} \right|\le \int_{0}^{x}{\left| f(t) \right|\,dt}\le M\int_{0}^{x}{{{e}^{\alpha t}}\,dt}=\frac{M}{\alpha }\left( {{e}^{\alpha x}}-1 \right)\le \frac{M}{\alpha }{{e}^{\alpha x}},$ thus $F$ is exponentially bounded. Then $F'(x)=f(x),$ so $\mathcal L (F')(s)=s\mathcal L (F)(s)-F(0),$ and since $F(0)=0$ the rest follows.
5. Originally Posted by mr fantastic
Now use integration by parts:
$\displaystyle = \lim_{\alpha \to +\infty} \left[- \frac{1}{s} e^{-st} \int_0^t f(\lambda) \, d\lambda\right]_{t=0}^{t = \alpha}$
Please read my post after this beforehand, to save you some time.
I am confused as to how I would evaluate these limits; that integral term is throwing me off a lot for some reason right now.
So far for my corresponding problem I have:
$\displaystyle LT \left[\int_1^t \mu(\lambda)\, d\lambda \right]= \int_{0}^{\infty} \left(\int_1^t \mu(\lambda)\, d\lambda\right) e^{-st}\, dt$
Now using integration by parts (instead of $u,v$ notation I used $w$, to prevent any confusion with $u$):
$w = \int_1^t \mu(\lambda)\, d\lambda$
$dw = \mu(t)\, dt$
$dv = e^{-st}\, dt$
$v = -\frac{e^{-st}}{s}$
Here is where things get a bit hazy, applying the integration by parts formula:
$\displaystyle \lim_{t \to \infty} \left[ (-\frac{e^{-st}}{s}) * \int_1^t \mu(\lambda) d\lambda \right]_{t=0}^{t=\infty} - \int_{0}^{\infty} (-\frac{e^{-st}}{s}) f(t) dt$
I know this is totally wrong, but applying the limits to the first term gives:
$\displaystyle \left[(-\frac{e^{-s \infty}}{s}) \int_1^{\infty} \mu(\lambda) d\lambda \right] - \left[(-\frac{e^{-s 0}}{s}) \int_1^{0} \mu(\lambda) d\lambda \right]$
I don't think that's right, lol.
I feel like I'm missing the concept ya'll were trying to point out in your and krezlid's posts. I know my level of mathematical comprehension is far below both of ya'll's awesome craniums filled with mathematical knowledge. As always, thank you and krezlid for taking the time to reply with helpful, nicely formatted answers, as ya'll have always done in the past, making this one of the best places on the internet to learn and improve your mathematical abilities. No, I am not trying to suck up, just being appreciative of how much time it takes, especially since it took me a good while just to format my post.
6. I feel quite silly about what I just found re-searching through my notes. I think I have found a solution, or a piece of my solution, if this does not require evaluation with Laplace transforms and I can just evaluate in terms of other unit functions such as ramp, pulse/window, step, and impulse functions.
I found this "property" so to speak in my notes
For the ramp function:
$r(\tau) = \left\{\begin{array}{cc}0, & \tau < 0 \\ \tau, & \tau > 0\end{array}\right. = \tau\, \mu(\tau) = \int_{0}^{\tau} \mu(\lambda)\, d\lambda$
So then the representation of my solution I am guessing would be like:
If I let
$\tau = t-1$
$\displaystyle \int_{1}^{t} \mu(\lambda)\, d\lambda = (t-1)\,\mu(t-1) = r(t-1) = \left\{\begin{array}{cc}0, & t-1 < 0 \\ t-1, & t-1 > 0\end{array}\right.$
If that is the solution then I spent the whole weekend researching related Laplace transforms for nothing, but lucky me, the transform questions are the last two questions of the assignment, so I still learned something anyway from ya'll two. Gotta look at the bright side: I learned something instead of nothing.
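For what it's worth, a small numeric check of the integral itself (mine, not from the thread): for $t \ge 1$, $\int_1^t \mu(\lambda)\,d\lambda = t-1$, i.e. a ramp delayed to start at $t = 1$.

```python
step = lambda x: 1.0 if x > 0 else 0.0   # the unit step mu

def integral_1_to_t(t, n=10_000):
    # midpoint rule for the integral of mu(lambda) from 1 to t
    h = (t - 1.0) / n
    return h * sum(step(1.0 + (k + 0.5) * h) for k in range(n))

for t in (1.0, 1.5, 3.0):
    print(t, integral_1_to_t(t), max(t - 1.0, 0.0))   # columns agree
```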
http://mathhelpforum.com/discrete-math/132835-stationary-set.html
# Thread:
1. ## stationary set
Say that a set $A \subseteq \omega_1$ is stationary if it has nonempty intersection with every closed unbounded subset of $\omega_1$. Suppose that there is an injection from $\omega_1$ into $\mathcal{P}(\mathbb{N})$. Show that every stationary subset of $\omega_1$ can be split into two disjoint stationary subsets of $\omega_1$.
Notation: $\omega_1$ denotes the first uncountable ordinal. We may use the Countable Principle of Choice, but not the Axiom of Choice.
http://mathoverflow.net/questions/71432?sort=votes
## ordered fields with the bounded value property
Say that an ordered field $F$ satisfies the bounded value property if, for all $a < b$ in $F$ and for every continuous function $f$ from $[a,b]_F := \{x \in F: a \leq x \leq b\}$ to $F$, there exists $B$ in $F$ such that $-B < f(x) < B$ for all $x$ in $[a,b]_F$. (Here we say $f$ is continuous if it satisfies the usual $\epsilon$, $\delta$ definition, where all quantification is over $F$.)
Does there exist a non-Archimedean ordered field with the bounded value property?
I show in http://jamespropp.org/reverse.pdf (see the second paragraph on page 9) that every Archimedean ordered field satisfying the bounded value property is isomorphic to the reals, but my proof that the bounded value property implies the Archimedean property (see the first paragraph on page 9) is incorrect (thanks to Ricky Demer for pointing out my mistake).
In attempting to fix my proof, I am starting to wonder if in fact the implication fails. For instance, does the surreal number system have the bounded value property? I don't see how to prove that it doesn't.
All I can show is that if $F$ satisfies the bounded value property and contains a cofinal set $S$ whose cardinality is less than or equal to that of the continuum, then $F$ is Archimedean. (Proof: Let $g:[0,1] \rightarrow F$ be a function that takes on all values in $S$, and for all $x$ in $[0,1]_F$ with standard part $\overline{x}$ let $f(x) = g(\overline{x})$. If $F$ is non-Archimedean, $f$ is continuous on $[0,1]_F$ and unbounded.) But, even leaving aside constructivist qualms about how one constructs $g$ from $S$, clearly this approach won't work for the surreal numbers or for sufficiently large fields within the Field of surreal numbers.
-
related to your paper but not this question: For the cut property, wouldn't it be nicer to drop the condition that $A\cup B = R$ and use the second conclusion? – Ricky Demer Jul 29 2011 at 2:15
@Ricky: That was Tarski's feeling too. His version of the cut property is more flexible but I don't think students find it as intuitive, since the range of possibilities for the sets $A$ and $B$ becomes vastly greater. I personally find the narrower statement easier to visualize and hence more compelling, but that's purely an esthetic preference. If one were to develop real analysis from my version of the cut principle, your version should probably be Theorem 1! – James Propp Jul 29 2011 at 15:41
@James, this is a reply to your latest comment on my answer, but the comments there were piling up. I cannot see how to eliminate AC from the proof given in my answer, if one uses the Schmerl method; and I suspect the same holds for Sikorski's construction. I am also a bit puzzled about requiring no use of $AC$ in the counterexample since (1) that was not specified anywhere in your question, and (2) there is very little to say about uncountable model theory and algebra in the absence of choice. – Ali Enayat Jul 29 2011 at 21:44
@James: despite the pessimism expressed in my previous comment about eliminating the axiom of choice $AC$ , a "miracle has happened", and I can now see how to take advantage of a key feature of Schmerl's construction to eliminate $AC$; by tomorrow I will put a PS on my answer in which I will outline how this is done. – Ali Enayat Jul 31 2011 at 1:57
I don't see why that cardinal must be regular. For that matter, I don't see why it can't have countable cofinality. – Ricky Demer Aug 1 2011 at 7:42
## 1 Answer
EDIT NOTE: A postscript has been added to indicate why the answer does not change if one is forced to work in $ZF+AC_\omega$ (prompted by a query of James Propp). Thanks to James Propp, Ricky Demer, and Emil Jeřábek for catching infelicities in past versions.
There are nonarchimedean fields with the bounded value property.
Let's begin with a key definition: an ordered field $F$ satisfies the $\kappa$-Bolzano-Weierstrass property, abbreviated $BW(\kappa)$, if every bounded sequence $x_\alpha$ of length $\kappa$ in $F$ has a convergent subsequence of length $\kappa$.
So the Bolzano-Weierstrass theorem says that $\Bbb{R}$ satisfies $BW(\aleph_{0})$.
Sikorski (1948) proved that for every uncountable regular cardinal $\kappa$ there is an ordered field of cardinality and cofinality $\kappa$ that satisfies $BW(\kappa)$. Since every archimedean ordered field has countable cofinality, the following Lemma, when coupled with Sikorski's theorem above (with $\kappa$ chosen as $\aleph_1$) shows that nonarchimedean fields with the bounded value property exist.
Note that the proof of the Lemma is an adaptation of the usual real-analysis proof of the boundedness of continuous functions on closed bounded intervals, using $BW(\aleph_{0})$.
Lemma. Let $\kappa$ be a regular cardinal. If $F$ is an ordered field of cofinality $\kappa$ such that $F$ satisfies $BW(\kappa)$, then $F$ has the bounded value property.
Proof: Choose an increasing unbounded sequence $x_\alpha$ of elements of $F$, where $\alpha \in \kappa$. If $f[a,b]$ has no upper bound for a continuous function $f$, then for each $\alpha < \kappa$ there is some $t_{\alpha}$ $\in [a,b]$ with $f(t_{\alpha}) > x_{\alpha}$.
By $BW(\kappa)$ there is some unbounded subset $U$ of $\kappa$ such that the subsequence $S := \{t_{\alpha} : \alpha \in U\}$ converges to some $c\in [a,b]$. Therefore by continuity of $f$, the sequence $f(S)$ converges to $f(c)$.
But a convergent sequence of length $\kappa$ must be bounded (the regularity of $\kappa$, and the assumption that $F$ has cofinality $\kappa$, comes to the rescue here), and yet $f(S)$ is clearly unbounded by construction. This contradiction shows that $f[a,b]$ is bounded above; similar reasoning shows that $f[a,b]$ is bounded below (or just replace $f$ by its absolute value). QED
Some references: Sikorski's Theorem appears in:
Roman Sikorski, On an ordered algebraic field. Soc. Sci. Lett. Varsovie. C. R. Cl. III. Sci. Math. Phys. 41 (1948), 69–96 (1950).
A proof of Sikorski's theorem can also be found in the following paper (Cor. 2.7), as a corollary of a vast generalization of Sikorski's theorem; the paper is an impressive showcase for the interaction between deep methods of models of arithmetic and higher set theory with field theory.
James Schmerl, Models of Peano arithmetic and a question of Sikorski on ordered fields. Israel J. Math. 50 (1985), no. 1-2, 145–159.
PS. One can show, using some machinery from the model theory of arithmetic, that working only in $ZF+AC_\omega$, Schmerl's proof can produce a well-orderable field $F$ of cardinality and cofinality $\aleph_1$ that satisfies $BW(\aleph_1)$. This allows one to obtain a non-archimedean field with the bounded value property, entirely within $ZF+AC_\omega$.
-
Thanks for your reply, Ali. The first place where I don't follow you is in your restatement of the Bolzano-Weierstrass Theorem. Did you really mean to say that the ordered field has cardinality $\kappa$? Note that the Bolzano-Weierstrass Theorem concerns sequences of length $\aleph_0$ chosen from a set of cardinality $\aleph_1$. So something seems amiss here. – James Propp Jul 27 2011 at 22:47
@James, you are right, I should have just said: "the usual Bolzano-Weierstrass Theorem", I will fix that. – Ali Enayat Jul 27 2011 at 23:13
@James: I ended up removing any cardinality restrictions from $BW(\kappa)$, which does not affect my answer; but in case you look at Schmerl's paper note that he stipulates the cardinality of $F$ being greater than $\kappa$ as part of the definition of $BW(\kappa)$. – Ali Enayat Jul 27 2011 at 23:28
How do you get the third sentence of your proof? – Ricky Demer Jul 28 2011 at 1:25
I also appear to be missing something: doesn't Cantor's original (en.wikipedia.org/wiki/…) proof show that any countable ordered field does not satisfy $BW(\aleph_0)$ ? – Ricky Demer Jul 28 2011 at 1:52
http://unapologetic.wordpress.com/2011/12/08/nonvanishing-compactly-supported-de-rham-cohomology/
# The Unapologetic Mathematician
## Nonvanishing Compactly-Supported de Rham Cohomology
Last time we saw that compactly-supported de Rham cohomology is nonvanishing in the top degree for $\mathbb{R}^n$. I say that this is true for any oriented, connected $n$-manifold $M$. Specifically, if $\omega\in\Omega_c^n(M)$, then the integral of $\omega$ over $M$ is zero if and only if $\omega=d\eta$ for some $\eta\in\Omega_c^{n-1}(M)$. That the second statement implies the first should be obvious.
To go the other way takes more work, but it’s really nothing much new. Firstly, if $\omega$ is supported in some connected, parameterizable open subset $U\subseteq M$ then we can pull back along any parameterization and use the result from last time.
Next, we again shift from our original assertion to an equivalent one: $\omega_1$ and $\omega_2$ have the same integral over $M$, if and only if their difference is exact. And again the only question is about proving the “only if” part. A partition of unity argument tells us that we only really need to consider the case where $\omega_i$ is supported in a connected, parameterizable open set $U_i\subseteq M$; if the integrals are zero we’re already done by using our previous step above, so we assume both integrals are equal to $c\neq0$. Dividing by $c$ we may assume that each integral is $1$.
Now, if $p_0\in M$ is any base-point then we can get from it to any other point $p\in M$ by a sequence of connected, parameterizable open subsets $W_i$. The proof is basically the same as for the similar assertion about getting from one point to another by rectangles from last time. We pick some such sequence taking us from $U_1$ to $U_2$, and just like last time we pick a sequence of forms $\alpha_i$ supported in $W_{i-1}\cap W_i$. Again, the differences between $\omega_1$ and $\alpha_1$, between $\alpha_i$ and $\alpha_{i+1}$, and between $\alpha_N$ and $\omega_2$ are all exact, and so their sum — the difference between $\omega_1$ and $\omega_2$ — is exact as well.
And so we conclude that the map $\Omega_c^n(M)\to\mathbb{R}$ given by integration is onto, and its kernel is the image of $\Omega_c^{n-1}(M)$ under the exterior derivative. Thus, $H_c^n(M)\cong\mathbb{R}$, just as for $\mathbb{R}^n$.
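The $n=1$ case of the key fact is easy to check by hand: a compactly supported $0$-form on $\mathbb{R}$ is just a bump function $\eta$, and the integral of the exact form $d\eta = \eta'\,dx$ vanishes. Here is a minimal sketch with sympy (not part of the original post; the particular bump $(1-x^2)^2$ is just a convenient choice of compactly supported function):

```python
import sympy as sp

x = sp.symbols('x')

# A compactly supported 0-form on R: eta(x) = (1 - x^2)^2 on [-1, 1], 0 outside.
eta = (1 - x**2)**2

# Its exterior derivative d(eta) = eta'(x) dx is a compactly supported 1-form.
d_eta = sp.diff(eta, x)

# Integrating the exact top-degree form over its support gives zero,
# consistent with exact forms lying in the kernel of the integration map.
total = sp.integrate(d_eta, (x, -1, 1))
print(total)  # 0
```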
Posted by John Armstrong | Differential Topology, Topology
http://physics.stackexchange.com/questions/41216/qualitative-discussion-about-entropy-and-disorder/41220
# Qualitative discussion about entropy and disorder
Many discussions about entropy and disorder use examples of decks of cards, pages of books thrown in the air, two gases being mixed in a container, even the state of a nursery at the end of the day compared to the beginning of the day, in order to explain the idea of order (and disorder). In all these examples it is pointed out that the disorder of the system increases, or that the system starts in an ordered state and finishes in a disordered state after something has happened.

Take the case of throwing the pages of the book in the air. You start with the pages numbered in sequence (I didn't want to use the word "order"), you throw them in the air, they land on the floor, you collect them up and notice the pages are not in sequence anymore. And the point is: "They are not in the sequence I call ordered. Nonetheless, they are in a new sequence. And it appears to me that the probability of finding the pages in this precise new sequence is equal to that of finding them in the original sequence." In that sense, 'order' seems to be something that we humans define, and it doesn't appear to be a property of the system.

On the other hand, I can see that in the case of two gases mixing, empirically we find more states where the two types of molecules occupy the entire volume of a container than states where one type of molecule is in the left side and the other type in the right side of the container. Nonetheless, the precise state of each molecule, its position and therefore the entire state of the mixed-up system, is qualitatively the same, isn't it? Isn't it equally difficult to make each molecule occupy that precise position in the mixed-up state as in the unmixed state? Does this make sense?
Thanks,
Marcio
-
## 2 Answers
I think a key observation here is that entropy is used when you are trying to describe a system on a macroscopic scale, which means you want to make predictions about macroscopic quantities. Using your example of two kinds of gas in a box (call them red and blue), before we talk about entropy of various states, we should consider what kinds of quantities are meaningful at a macroscopic scale. One set of quantities that are not macroscopically relevant are the exact positions of every molecule in the box. That information is not accessible when the system is "coarse-grained," or viewed macroscopically.
To see what information is accessible, let's assume that your coarse-graining only allows you to divide the box into two cells, a left half and a right half. You are interested in knowing if the gas tends to be "mixed" or "separated." The individual positions of the molecules are not observable, but what is observable is the number of molecules of each type on each side of the box. We could define a quantity that measures the mixedness $M$ of the particles as something like $M=(N_B-N_R)/N_\text{tot}$, where $N_{B/R}$ is the number of blue/red molecules in the cell, and $N_\text{tot}$ is the total number of particles in the cell.
Now we can ask which state is more probable: the one with blue on one side and red on the other (unmixed), or the one with nearly equal numbers of red and blue molecules (mixed). There is only one unmixed state (or I guess two since you could have all the red on the right or all the red on the left), but many many mixed states--remember you don't get to measure positions of individual particles, but just how many of each type are in each cell.
So it boils down to identifying what states are meaningful on the macroscopic scale, and then counting how many different ways there are to produce the macroscopic state. Then you can assign an entropy to the state which is bigger for states that are more probable (i.e. can be made in many different ways).
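To make this counting concrete, here is a small Python sketch (my own illustration, not from the original answer): with $N$ red and $N$ blue molecules and exactly $N$ on each side of the box, the macrostate "$k$ blue on the left" is realized by $\binom{N}{k}\binom{N}{N-k}$ microstates, and the even split dwarfs the unmixed configurations:

```python
from math import comb

N = 20  # N blue and N red molecules, with N total on each side of the box

def microstates(k, N):
    # Macrostate "k blue molecules on the left": choose which k blue
    # molecules sit on the left, and which N - k red molecules join them.
    return comb(N, k) * comb(N, N - k)

unmixed = microstates(0, N) + microstates(N, N)  # all blue on one side
mixed = microstates(N // 2, N)                   # perfectly even split

print(unmixed)  # 2
print(mixed)    # 34134779536 -- the mixed macrostate wins by a factor ~10^10
```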
The example of pages in a book being mixed up is a bit more difficult to see how entropy could enter, but the key point is to identify the macroscopic quantities which characterize the state. For this, we shouldn't ask about the precise position of page 1, page 2, and so on. Instead, a good macroscopic quantity for measuring disorder could be the number of pages which ended up in the correct location. Then we see that there is only one possibility where all the pages are in the correct position, but many more possibilities where none of them are in the correct position, and hence the latter option is a state of higher entropy. You could be more sophisticated about how you define a disordered state (maybe look at how many even numbers are next to another even number and odd next to odd), but the key is to focus on macroscopic properties of the system, and not on the individual position of each page.
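For a small enough "book" the macrostate counting in the last paragraph can be checked by brute force. This is my own toy enumeration, not part of the answer: tally all arrangements of $n$ pages by the number of pages that landed in the correct position.

```python
from itertools import permutations
from collections import Counter

n = 5  # a 5-page "book" is small enough to enumerate all 120 arrangements

# For each arrangement, count how many pages sit in their original slot.
counts = Counter(
    sum(page == slot for slot, page in enumerate(p))
    for p in permutations(range(n))
)

print(dict(sorted(counts.items())))
# Exactly 1 arrangement (the identity) has all 5 pages correct, while 44
# arrangements (the derangements) have none correct -- so "no page in the
# right place" is a far likelier macrostate than "every page in place".
```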
-
The basic assumption of statistical physics is, as you correctly assume, that all states are equally probable (there may be external reasons why some are less probable than others, but in the simplest case this holds). The concept of order and disorder, then, arises only when we "squint" and look at the system at a more abstract, macroscopic, level.
Example: Imagine that you divide the box of gas into two (imaginary) containers. If you have $N$ particles in total, and $k$ of them are to the left (hence $N-k$ to the right), there are $N \choose k$ or $\frac{N!}{k!(N-k)!}$ ways of defining such a state. For even moderately large numbers of particles, this is a huge number. Compare this to the case where you have all particles on one side; there is exactly one state describing this situation.
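A quick numerical look at the binomial count above (my own addition, not part of the original answer) shows just how lopsided it becomes even for modest $N$:

```python
from math import comb

# Number of ways to place k of N particles in the left half: N choose k.
# Compare the balanced split with the single state having everything left.
for N in (10, 100, 1000):
    print(N, comb(N, N // 2))
# N = 10 gives 252, N = 100 gives ~1.0e29, N = 1000 gives ~2.7e299,
# versus exactly 1 state with all particles on one side.
```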
As for the example with the pages, you are correct in your assessment that the concept of order is not unique, but it is well defined: there are $N!$ ways of ordering $N$ pages, but only one where they are in a predefined order, and the deviations from this order can be assigned different probabilities.
A high entropy state is then a state which does not surprise us when we find it, whereas a state such as all molecules being in one corner, or the pages being in a steadily growing order, is one of low probability (there are vastly more states where this is not true), and thus surprising.
-
Hi Skymandr, Thanks for your reply. – user14188 Oct 19 '12 at 13:38
Hi Skymandr, hit Enter too soon. What really confuses me is why we are surprised when we see "blue" molecules on the left side of the box and "red" molecules on the right side, and we are not surprised when all the blue molecules (and red ones) are in the precise positions they are, in the mixed state. Like the "ordered" state, there is only one "mixed" state where the molecules occupy those precise positions. – user14188 Oct 19 '12 at 13:45
Yes, there is only one such case. The surprise comes first, when we look at the system from a distance, going from looking at individual particles to ensembles of particles. What would be a typical mixture of blue and red, for a particular set of imaginary boxes? If we make the boxes sufficiently small, the entropy does indeed vanish with our statistics. – skymandr Oct 19 '12 at 14:12
Thanks for the paper; I'll read it as soon as I can. Still, I think there is something wrong with my view of entropy and order/disorder. Without getting too philosophical, the confusion may be on "state of the system" or "number of states". I'm not comfortable with nature itself making a distinction between "separate blue and red" and "mixed blue and red". It is obvious that it "matters" to humans, but not to nature. – user14188 Oct 19 '12 at 17:46
http://mathoverflow.net/questions/22907/confusion-about-how-the-first-cohomology-classifies-torsors/22925
## Confusion about how the first cohomology classifies torsors
This question is inspired by, but is independent of: http://mathoverflow.net/questions/2414/sheaf-description-of-g-bundles
Line bundles are classified by $H^1(X,\mathcal{O}^\times_X)$. We also know that in general that $H^1(X,G)$, where $G$ is a sheaf from open sets in $X$ to $Grps$, classifies $G$-torsors over X.
With this insight in mind: $\mathcal{O}^\times_X$-torsors should correspond to line bundles. Indeed, if $P$ is one, then $\mathcal{O} _X \times _{\mathcal{O} _X^\times}P$ gives the desired line bundle, and all line bundles are achieved this way (see the question I linked to).
My question is about the more mundane $H^1(X,\mathbb{C})$, which can be thought of as $H^1(X,\underline{\mathbb{C}})$ where $\underline{\mathbb{C}}$ is the constant sheaf $\mathbb{C}$ (which I think of as a sheaf going to $Grps$). These should supposedly correspond to $\mathbb{C}$-bundles over $X$, which appear to be line bundles. But of course there's no reason for $H^1(X,\mathbb{C})$ to equal $H^1(X,\mathcal{O}^\times_X)$... My intuition is that this should correspond to the more naive version of fiber bundles that doesn't involve a structure group. Do you have any thoughts?
-
I think i can answer this in algebraic topology land, but it would involve singular cohomology with integral coefficients, here H^2 is represented by BS^1, and BG classifies G-bundles over X ie, G-torsors. If this is indeed the type of thing that you are looking for I can put it as an answer, but it might not be... Perhaps this might help with intuition though..., and just in case BS^1=CP^inf – Sean Tilson Apr 28 2010 at 23:20
C does not act on the fibers of a line bundle by addition, so line bundles can't be C-torsors. One thing to observe is that line bundles come with a canonical choice of zero section, and are not necessarily trivial, unlike the situation with torsors where having a section automatically makes the bundle trivial. One thing you can say is that a line bundle is a "bundle of groups", but this is very different from being a torsor. – Lucas Culler Apr 28 2010 at 23:33
## 3 Answers
The general principle is: if you have some objects which are locally trivial but globally possibly not trivial then the isomorphism classes of such objects are classified by $H^1(X,\underline{Aut})$, where $\underline{Aut}$ is the sheaf of automorphisms of your objects.
So, if your objects locally are $U\times \mathbb A^n$ (i.e. vector bundles) or they are $\mathcal O_U^{\oplus n}$ (i.e. locally free sheaves) then either of these are classified by $H^1(X, GL(n,\mathcal O))$. For $n=1$, you get $GL(1,\mathcal O)=\mathcal O^*$.
Now what is $\mathbb C$ an automorphism group of? Certainly not of line bundles (zero has to go to zero).
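Unpacking the general principle above (my own addition): on an open cover $\{U_i\}$, a locally trivial object is glued by transition automorphisms $g_{ij} \in \underline{Aut}(U_i \cap U_j)$, and the classification by $H^1(X,\underline{Aut})$ is exactly the Čech description:

```latex
% Gluing data: transition automorphisms g_{ij} on overlaps, subject to
% the cocycle condition
g_{ik} = g_{ij} \circ g_{jk} \qquad \text{on } U_i \cap U_j \cap U_k,
% and two cocycles define isomorphic objects iff they differ by a
% coboundary, i.e. for some h_i \in \underline{Aut}(U_i),
g'_{ij} = h_i \circ g_{ij} \circ h_j^{-1}.
```

The classes of such cocycles modulo coboundaries form the Čech cohomology set $H^1(X, \underline{Aut})$.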
-
Would it be fair to say: fiber bundles over X with fiber F, structure group G, and transition functions with property P (either: nothing, continuous, constant, etc.) is in correspondence with T-torsors, where T is the sheaf on X of P-function to G? – Makhalan Duff Apr 29 2010 at 4:06
Which, in turn, corresponds to H^1(X, T)? – Makhalan Duff Apr 29 2010 at 4:08
@MD: Yes, that's fair. In fact it makes it easy to explain your point of confusion: you were conflating the fiber $F$ with the structure group $G$. As you can see, the latter is more important because it is used to classify the torsors, and you can always restrict to principal $G$-bundles. But a bundle with structure group $\mathbb{C}$ is not a line bundle! – Pete L. Clark Apr 29 2010 at 5:52
I think the right thing to look at is $H^1(X, \mathbb C^*)$. This classifies line bundles with a flat connection, or equivalently, line bundles with locally constant transition functions.
Now the natural embedding $\mathbb C^* \to \mathcal O_X^*$ induces a map on cohomology $H^1(X, \mathbb C^*) \to H^1(X, \mathcal O_X^*)$, which is forgetting the flat connection.
-
Your confusion seems to stem from the difference between topological bundles and algebraic/holomorphic bundles.
For a scheme $(X, \mathcal{O}_X),$ you say that $H^1(X, \mathcal{O}_X^{\times})$ classifies line bundles on $X$. This is true, as long as you mean algebraic line bundles. If, for example, $X$ is something like a complex algebraic variety with the analytic topology (which seems to be how you're thinking about it, perhaps), then there can certainly be plenty of topological line bundles on $X$ which aren't algebraic.
Also, as Lucas Culler notes in the comments to your question, a line bundle is not a torsor under the additive group $\mathbb{C}$. Instead, it is actually a $\mathbb{C}^{\times}$ torsor, if you remove the zero section. Algebraically, a torsor for the additive group $\mathbb{C}$ (which, by the way, is often denoted $\mathbb{G}_a$ to avoid confusion) is classified by all extensions of $\mathcal{O}_X$ by $\mathcal{O}_X$.
Anyway, it is true (maybe with some mild assumptions on your space $X$) that $H^1(X,G)$ classifies topological $G$-bundles on $X$.
-
I'm confused. Are you treating the constant sheaf C as G_a? But the first applied to Z (the integers) is C (additive), and the other would be the additive group Z. Are you saying H^1(X, C) (topological cohomology) is H^1(x, G_a)? – Makhalan Duff Apr 29 2010 at 0:53
No, the constant sheaf $\mathbb{C}$ is not $\mathbb{G}_a,$ I was merely describing what an algebraic $\mathbb{C} = \mathbb{G}_a$ torsor is. This was meant to clarify your confusion over thinking that a $\mathbb{C}$ torsor is the same as a line bundle. – Mike Skirvin Apr 29 2010 at 1:12
One way to express the distinction is viewing $\mathbf{C}$ as a complex-analytic Lie group in two ways: with the usual topology or the discrete topology. The former gives rise to torsors classified by ${\rm{H}}^1(X, O_X)$ and the latter gives rise to torsors classified by ${\rm{H}}^1(X, \underline{\mathbf{C}})$. Neither case related to line bundles, and for connected $X$ the group ${\rm{H}}^1(X, \underline{\mathbf{C}}^{\times})$ classifies line bundles with locally constant transition functions (for some trivializing cover). – BCnrd Apr 29 2010 at 1:46
http://mathoverflow.net/questions/9961/colimits-of-schemes/10000
## Colimits of schemes
This is related to another question.
I've found many remarks that the category of schemes is not cocomplete. The category of locally ringed spaces is cocomplete, and in some special cases this turns out to be the colimit of schemes, but in other cases not (which is, of course, no evidence that the colimit does not exist). However, I want to understand in detail a counterexample where the colimit does not exist, but I have hardly found one. In FGA Explained I've found the reference that Example 3.4.1 in Hartshorne, Appendix B is a smooth proper scheme over $\mathbb{C}$ with a free $\mathbb{Z}/2$-action, but the quotient does not exist (without proof). To be honest, this is too complicated for me. Are there easy examples? You won't help me just by giving the example, because there are lots of them; the hard part is to prove that the colimit really does not exist.
-
I'm going to attempt to clarify your question, because I misunderstood it twice. Your question is not about locally-ringed spaces. Your question is not about the famous free Z/2Z-action on the smooth proper 3-fold. Your question is simply to give, with full proof, an example of a diagram in the category of schemes, with no colimit in the category of schemes. Right? – Kevin Buzzard Dec 29 2009 at 11:44
yes, exactly :-). besides, I'm interested in special cases where the colimit exists (see the comments in emertons answer). – Martin Brandenburg Dec 29 2009 at 12:18
If you take the naive approach and just google "no categorical quotient" then you seem to get lots of examples. I just looked through a few and perhaps the one you'll like most is the one in "push-out of schemes" by Li (p538, example 2). I should stress that I did not check anything here though. – Kevin Buzzard Dec 29 2009 at 13:38
there is a gap, Li does not prove that X/W = X -> Y' is an isomorphism, which may be very hard: how do you get a map into the quotient? I don't know what he means with "(locally, point by point," and can't find "Remark 4.2". – Martin Brandenburg Dec 29 2009 at 15:49
As another clarification, if $R \rightrightarrows U$ is an etale equivalence relation in schemes s.t. the alg space $X = U/R$ is not a scheme, for it to be an "example" one must prove there's no "initial map" from $X$ to schemes (i.e., map $\pi:X \rightarrow S$, or equivalently map $U \rightarrow S$ inducing same composites back to $R$, to a scheme $S$ which is initial among all such maps). But just as for "quotients" of schemes, one needs properties beyond "categorical" for it to be useful. So this is an exercise in pathology. Not that there's anything wrong with that... – BCnrd Apr 24 2010 at 20:25
## 5 Answers
Edit: BCnrd gave a proof in the comments that this example works, so I've edited in that proof.
## A possible proven example
~~I suspect~~ There is no scheme which is "two $\mathbb A^1$'s glued together along their generic points" (or "$\mathbb A^1$ with every closed point doubled"). In other words, the coequalizer of the two inclusions $Spec(k(t))\rightrightarrows \mathbb A^1\sqcup \mathbb A^1$ does not exist in the category of schemes. Intuitively, this coequalizer should be "too non-separated" to be a scheme.
~~I don't have a proof, but I thought other people might have ideas if I posted this here.~~
If a coequalizer $P$ does exist, then no two closed points of $\mathbb A^1\sqcup \mathbb A^1$ map to the same point in $P$. To show this, it is enough to find functions from $\mathbb A^1\sqcup \mathbb A^1$ to other schemes which agree on the generic points but disagree on any other given pair of points. The obvious map $\mathbb A^1\sqcup \mathbb A^1\to \mathbb A^1$ separates most pairs of closed points. To see that a point on one $\mathbb A^1$ is not identified with "the same point on the other $\mathbb A^1$", consider the map from $\mathbb A^1\sqcup \mathbb A^1$ to $\mathbb A^1$ with the given point doubled.
On the other hand, let $U$ be an affine open around the image of the generic point in $P$. $U$ has dense open preimages $V$ and $V'$ in both affine lines. Let $W=V\cap V'$ inside the affine line, so we have two maps from $W$ to the affine $U$ which coincide at the generic point of $W$, and hence are equal (as $U$ is affine). In particular, the two maps from affine line to categorical pushout $P$ coincide at each "common pair" of closed points of the two copies of $W$, contradicting the previous paragraph.
Edit: The questions below are no longer relevant, but I'd like to leave them there for some reason.
Here are some questions that might be helpful to answer:
If the coequalizer above does exist, must the map from $\mathbb A^1\sqcup \mathbb A^1$ be surjective?
(see the related question Can a coequalizer of schemes fail to be surjective?)
Is the coequalizer of $Spec(k(t))\rightrightarrows \mathbb A^1\sqcup \mathbb A^1$ in the category of separated schemes equal to $\mathbb A^1$? (probably)
What are some ways to determine that a functor $Sch\to Set$ is not corepresented by a scheme?
-
I have one answer to the last question. If F is a contravariant functor from Schemes to Sets such that F(Spec(ℤ)) is more than one point, then F cannot be corepresentable. – Anton Geraschenko♦ May 8 2010 at 22:03
2
An easier example is to divide the "double-headed snake" $\mathbb A^1\cup \mathbb A^1$ by $\mathbb Z_2$. That is not a scheme either, but it is an algebraic space. In other words, replace your $Spec\ k(t)$ by $Spec\ k[t,1/t]$. That feels more ordinary. Kollár called this algebraic space a "bug-eyed cover" of $\mathbb A^1$. He has a paper with "bug-eyed" in the title on these. – VA May 8 2010 at 23:52
1
@VA: can you explain your quotient suggestion more fully, so I can see how it is different from the line with a doubled point? Maybe I am not seeing what $\mathbf{Z}/2\mathbf{Z}$-action you have in mind. I did consider trying to do something more ordinary like this, but got stuck on not removing enough closed points. I agree it is much better if the example will be an algebraic space. – BCnrd May 9 2010 at 0:08
3
The proof above shows more generally: Let $X$ be an integral scheme with non-closed generic point $\eta$, such that the closed points are dense in $X$ (for example a variety over a field). Then the coequalizer of $\eta \rightrightarrows X \sqcup X$ does not exist. I think a slogan for the proof is that identifying the two generic points implies that some neighborhoods should be identified as well, but closed points are not identified because you can use test schemes $X$ and $X$ with a doubled point. – Martin Brandenburg May 9 2010 at 0:19
1
The prototypical bug-eyed cover I know of is the quotient of $\mathbb A^1$ by the ℤ/2 action given by x↦-x, except you "remove the action at 0" from the relation. The pushout in the category of algebraic spaces is "the right one", but there is also a pushout in the category of schemes. If there were an example which is an algebraic space, it would also answer David Brown's question mathoverflow.net/questions/4587/…. – Anton Geraschenko♦ May 9 2010 at 0:31
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
This is not an answer to the question, but is intended to show that Martin's concern about the possible distinction between the colimit in the category of schemes vs. in the category of locally ringed spaces is a valid one. Indeed, if I haven't blundered below, then it seems that in some circumstances at least the direct limit (colimit) of schemes exists, but does not coincide with the direct limit in the category of locally ringed spaces.
For example, suppose that $X_n$ is the Spec of $k[x]/(x^n),$ for some field $k$ (and the transition maps are the obvious ones). Then the direct limit of the $X_n$ in the category of locally ringed spaces is a formal scheme which is not a scheme, whose underlying topological space is a point, and whose structure sheaf (which is in this context simply a ring, namely the stalk at the unique point) is $k[[x]]$.
On the other hand, suppose given compatible maps from the $X_n$ to a scheme $S$. These must all map the common point underlying the $X_n$ to some point $s \in S$, which lies in some affine open Spec $A$. Thus the maps from the $X_n$ factor through Spec $A$, and correspond to compatible maps $A \rightarrow k[x]/(x^n),$ i.e. to a map $A \rightarrow k[[x]].$ This in turn gives a map Spec $k[[x]] \rightarrow$ Spec $A\subset S,$ and so we see that the natural compatible maps from the $X_n$ to Spec $k[[x]]$ identify Spec $k[[x]]$ with the direct limit of the $X_n$ in the category of schemes.
EDIT: As is noted in a comment of David Brown's attached to his answer, this example generalizes, e.g. if $I$ is an ideal in a ring $A$, then the direct limit in the category of schemes of Spec $A/I^n$ coincides with Spec $\hat A$, where $\hat A$ is the $I$-adic completion of $A$.
FURTHER EDIT: I am no longer certain about the claim of the previous paragraph. If $A/I$ (and hence each $A/I^n$) is local then for any scheme $S$ the maps Spec $A/I^n \to S$ factor through an affine open subscheme, so one reduces to computations in the category of rings, and hence finds that indeed the direct limit of the Spec $A/I^n$ equals Spec $\hat{A}$. More generally, I'm currently unsure ... .
-
in general, the colimit of local affine schemes and local transition maps exists and is a local affine scheme. this is because you can describe morphisms on local schemes via points and local homomorphisms on the stalks. in particular, the colimit of the $Spec A/I^n$ is $Spec \hat{A}$, when $I$ is a maximal ideal. I try to prove the general case: if $f_n : Spec A/I^n \to X$ are compatible morphisms, we want to glue them to $Spec \hat{A} \to X$. if $X$ is affine, this is trivial. – Martin Brandenburg Dec 29 2009 at 10:41
in the general case, let $U \subseteq X$ be an open affine. since the transition maps $Spec A/I^n \to Spec A/I^{n+1}$ are homeomorphisms, the images of the $f_n$ are all equal. consider the preimage of $U$ in $Spec A/I$. let $f \in A$ such that $D(\overline{f})$ is a basic open subset of this preimage. then all the $f_n$ restrict to compatible $Spec A_f / (I_f)^n \to U$. the affine case yields $Spec \hat{A_f} \to U$. now we want to glue these morphisms to $Spec \hat{A} \to X$. so it would be nice that the $Spec \hat{A_f}$ form an open cover of $Spec \hat{A}$, but this seems to be unlikely ... – Martin Brandenburg Dec 29 2009 at 10:42
I still have not understood the claim in the EDIT if $I$ is not maximal. – Martin Brandenburg May 8 2010 at 19:42
Dear Martin, I will try to reconstruct the proof, and then post it. – Emerton May 9 2010 at 4:20
Dear Martin, My reconstruction effort has failed, at least for the moment, and so I have added a further edit, scaling back the claim of the previous one. – Emerton Dec 10 2010 at 4:17
Sounds to me like you don't want to hear the proof, but you want to hear "the point". The point is that a scheme must by definition be covered by affine schemes, and sometimes when doing exercises in Hartshorne the proofs go like this: first do the question for affine schemes, where the question becomes ring theory, and then do it for all schemes by glueing together. So here is how you might want to try and quotient out a scheme by a free action of $Z/2Z$: first let's say the scheme is affine, so $Spec(A)$, with $A$ having an action of $Z/2Z$, and try and figure out if the quotient exists, and then move onto the general case.
In the affine case we have a ring $A$ with an action of $Z/2Z$, and if $B$ is the invariants, then you can convince yourself that $Spec(B)$ is the quotient.
Now let's do the general case. Say $X$ is a scheme with an action of $Z/2Z$. Choose a point $x$ in $X$. Now let $Spec(A)$ be an affine containing $x$. Now $Z/2Z$ acts on $Spec(A)\ldots$ oh wait, no, no it doesn't, because the action will maybe move $Spec(A)$ to another affine $Spec(B)$. So let's try intersecting $Spec(A)$ and $Spec(B)$. The intersection will often be affine, so let's consider this and call it $Spec(C)$ and $\ldots$ oh no, wait, that doesn't work either, because $x$ might not be in $Spec(C)$. Hmm.
The well-known 3-fold you mention in your question behaves in this way. You can't cover it by affines each one of which is preserved by the action, so you get stuck.
Does that help?
-
that the naive approach does not work, does not prove anything. in fact, there are categorical constructions which are in some sense unexpected. in this case, I want to understand a proof of a counterexample. – Martin Brandenburg Dec 28 2009 at 19:03
2
again: we have a full subcategory C of D and a diagram F in C, whose colimit in D does not lie in C. this is no reason that the colimit of F in C does not exist. of course, this situation is likely to be an example, but one has to prove it. – Martin Brandenburg Dec 28 2009 at 20:14
"that the naive approach does not work, does not prove anything." Of course it doesn't. But it does prove that I misunderstood what you were asking. Let me try again then: just go and read Mumford's "Geometric Invariant Theory" and see his careful explanation of why the quotient isn't a scheme. Will this do? If not, what are you asking? I'm trying my best ;-) – Kevin Buzzard Dec 28 2009 at 23:29
3
I think he's asking for an example of a diagram of schemes where there is no categorical colimit, and not for an example where there is no categorical colimit that looks "correct" geometrically. E.g. if E is an elliptic curve over the complex numbers with a point x of infinite order, then translation by x induces an action of Z on E, and I believe the categorical colimit exists but it is Spec(C), because there are not many translation-invariant functions. So the idea is to construct an example where there are not too few functions out that the colimit exists in some trivial way. – Tyler Lawson Dec 29 2009 at 4:17
Direct limits of schemes fail to exist. A good example is the following: let X be a scheme and Z a closed subscheme defined by an ideal I. Then for any n we get the nth infinitesimal neighborhood $Z^{(n)}$ defined by the ideal $I^{n+1}$ and a diagram $Z \to Z^{(2)} \to \cdots \to Z^{(n)} \to \cdots$ and in general the direct limit of this diagram does not exist in the category of schemes. (It does exist however in the category of formal schemes.) Knutson's Algebraic Spaces, Chapter 5, section 1 (near the end) explains this well (for algebraic spaces, but for this point it is fine to think of everything as a scheme).
A simple example is P^1 over a dvr (R,t), with Z the closed subscheme of P^1 defined by t.
-
1) it seems that something in your answer is not visible. anyway, is there an easy proof that the colimit does not exist in this example? 2) as I already said, this is no proof that the pushout does not exist. – Martin Brandenburg Dec 28 2009 at 19:06
Ah. You are right, too hasty on my part. I edited my answer. – David Zureick-Brown♦ Dec 28 2009 at 19:17
1
thank you. how can we prove that the colimit in your example (infinitesimal neighborhood in P^1) does not exist? so we have to derive a contradiction if X = colim(Z^n) exists. I have no idea how this can be done. I know nothing about algebraic spaces. sorry that I repeat my question again and again, but as I said the problem is not really in "finding" examples, but checking them. – Martin Brandenburg Dec 28 2009 at 20:19
for example, we could try to show that A^1 -> A^2 -> ... (the canonical closed immersions) has no colimit in the category of schemes. assume X is a colimit. the global sections are morphisms to A^1, thus we get that the global sections of X are k[[x_1,x_2,...]]. I don't know how to go on. – Martin Brandenburg Dec 29 2009 at 0:54
For affine schemes the colimit does exist, it is just the completion. The point behind the P^1 example is that you can't glue the "completed A^1's" together to get a completed P^1. I'll see if I can think of a succinct way to see this. – David Zureick-Brown♦ Dec 29 2009 at 8:02
Direct limits include in particular quotients.
I think that if $Z \subset X$ is a closed subscheme, then $X/Z$ usually doesn't exist.
Let's take for $Z$ a divisor on $X$, such that there exists a rational function $f: X \to \mathbb P^1$ with poles only on $Z$. Then according to the universal property of quotients such a function descends to a function $\bar f: X/Z \to \mathbb P^1$, with the pole consisting of one point (I'm a little uncertain why it's a point, but perhaps it must be a point). But that's impossible unless $dim(X)=1$.
-
1
Let X = Spec k[x,y] and Z = Spec k[x] (the divisor y=0). Then the quotient X/Z exists in affine schemes: it is the pullback of k[x,y] -> k[x] <- k, which is the non-Noetherian subring k[y,yx,yx^2,...]. I'm not clear whether Spec of this ring is still the quotient in the category of all schemes, but I think that more is necessary to make your argument work. – Tyler Lawson Dec 29 2009 at 13:06
1
"that's impossible unless dim(X)=1." I think you mean that's impossible unless dim(X/Z)=1. It may be that something with the right universal property does exist in the category of schemes, but that Z is not the only thing that gets crushed. Why should the dimension of X/Z agree with the dimension of X? – Anton Geraschenko♦ May 8 2010 at 17:05
http://math.stackexchange.com/questions/106313/regular-average-calculated-accumulatively
# Regular average calculated accumulatively
Is it possible to calculate the regular average of a sequence of numbers when I don't know the whole sequence, but every time I get a new number I know the total count of numbers and the average of the previous numbers?
For example: 2, 3, 10; the average is of course 5.
But in the last step of the calculation I only have access to the previous average of 2 and 3 (that is, 2.5), the next number (10), and the count of numbers (3).
If this is possible, how?
-
## 1 Answer
Yes, and you can derive it from the expression for the average. Let the average of the first $n$ numbers be $\mu_n$. The formula for it is
$$\mu_n = \frac{1}{n} \sum_{i=1}^n x_i$$
Then you can derive
$$n \mu_n = \sum_{i=1}^nx_i = x_n + \sum_{i=1}^{n-1} x_i = x_n + (n-1)\mu_{n-1}$$
and hence, dividing by $n$,
$$\mu_n = \frac{(n-1) \mu_{n-1} + x_n}{n}$$
i.e. to calculate the new average after the $n$th number, you multiply the old average by $n-1$, add the new number, and divide the total by $n$.
In your example, you have the old average of 2.5 and the third number is 10. So you multiply 2.5 by 2 (to get 5), add 10 (to get 15) and divide by 3 (to get 5, which is the correct average).
Note that this is functionally equivalent to keeping a running sum of all the numbers you've seen so far, and dividing by $n$ to get the average whenever you want it (although, from an implementation point of view, it may be better to compute the average as you go using the formula I gave above. For example, if the running sum ever gets larger than $10^{308}$ish then it may be too large to represent as a standard floating point number, even though the average can be represented).
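The update rule derived above translates directly into code. Here is a minimal Python sketch (the function name `update_average` is my own):

```python
def update_average(old_avg, n, new_value):
    """Average of the first n numbers, given the average of the first n-1."""
    return ((n - 1) * old_avg + new_value) / n

# The example from the question: 2, 3, 10.
avg = 0.0
for n, x in enumerate([2, 3, 10], start=1):
    avg = update_average(avg, n, x)

print(avg)  # 5.0
```

At each step only the previous average and the count are kept, never the whole sequence or its running sum.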
-
Thanks a lot man. You saved me a lot of time :) – bksi Dec 8 '12 at 12:51
http://mathoverflow.net/questions/89697/topology-on-set-of-prime-filters-of-a-distributive-lattice
## Topology on Set of Prime Filters of a Distributive Lattice
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Given a distributive lattice $A$ we can look at $Spec(A)$, whose points are the prime ideals and whose open sets are given similarly to the Zariski topology on Spec of a ring. That is, the basis of open sets is composed of sets of the form $D(I)=\{p~\mathrm{prime~in~A}:I\nsubseteq p\}$.
So, given a prime ideal, it is not hard to show that its complement is a prime filter. Hence there is a set bijection between the set of prime ideals and the set of prime filters. Does anyone know, if we force this bijection to be a homeomorphism based on the topology on $Spec(A)$, is there a nice description of the open basis elements on the set of prime filters of $A$?
Note: Perhaps this question is purely lattice theory. I guess it depends on your point of view. Please add or remove tags as necessary.
Thanks!
Jon
-
## 2 Answers
If you just take the basis of sets $D(I)$ that you gave for the space of prime ideals and transport it via the bijection you gave, you obviously get a basis for the space of prime filters. It consists of the sets `$M(I)=\{p \text{ prime filter}:p\cap I\neq\varnothing\}$`. Clearly, this $M(I)$ is the union, over all $a\in I$, of the sets that Ben called $D(a)$ in his answer. So his base and mine (which is really yours) generate the same topology. Actually, it seems that your base of $D(I)$'s is not just a base but the whole topology (and therefore the same goes for the $M(I)$'s).
-
Yes, the M(I)'s give the whole topology. My D(a)'s give the basis. I wrote my answer too quickly, thinking about the duality between frames with enough points and sober spaces, and forgot that one has to associate a distributive lattice with its frame of ideals to get the D's to give all the open sets. – Benjamin Steinberg Feb 27 2012 at 22:21
I believe that if $a\in A$, one defines $D(a)$ to be the set of all prime filters containing $a$ and these give the open sets of the topology. More details can be found in Johnstone's Stone Spaces book when he does Stone duality between distributive lattices and coherent spaces.
-
Thank you. I thus far have found Johnstone's book to be a little short on the basic equivalence of using filters versus using ideals. He seems to prefer ideals, but for my purposes filters are much better. I just wanted to make sure there was a good homeomorphism between Spec(A) and the topology on prime filters that you give so that I can basically use all the theorems for distributive lattices and Boolean algebras without thinking about whether or not I'm dealing with filters or ideals for the most part. – Jon Beardsley Feb 27 2012 at 22:56
1
The continuous lattices and domains book does everything from both viewpoints. – Benjamin Steinberg Feb 28 2012 at 0:52
Oh wow thanks so much! I had not heard of this book, but I will have to check it out. – Jon Beardsley Feb 28 2012 at 5:12
http://mathoverflow.net/questions/62130/whats-the-difference-between-2-and-3/62136
## What’s the difference between 2 and 3? [closed]
Here are two classical results which depend on whether a parameter is 2 or 3:
• It is possible to bisect an arbitrary angle with ruler and compass, but impossible to trisect it.
• While there are infinitely many Pythagorean triples, i.e. integer solutions to $x^2+y^2=z^2$, there are no non-trivial integer solutions to $x^3+y^3=z^3$.
There are several other instances where the dividing line seems to be between 2 and 3:
• A 2-regular tree is countable, a 3-regular tree is uncountable.
• 2SAT is solvable in polynomial time, 3SAT is NP-complete.
• A random walk on $\mathbf Z^2$ is recurrent, while a random walk on $\mathbf Z^3$ is transient.
What other examples can you think of?
-
16
All the prime numbers less than or equal to 2 are even, and all the prime numbers greater than or equal to 3 are odd :) – Will Merry Apr 18 2011 at 15:18
8
I'm sorry. I may just be in a poor mood (those who follow other parts of the internet math Q&A community will catch an allusion here) but at the moment your question strikes me as somewhat superficial: $2$ is not equal to $3$, so there are going to be a lot of instances where changing $2$ to $3$ makes a big difference. But perhaps there is a good question lurking in here somewhere, something like: what common explanations can be given for these examples? It might be worth thinking about how to rephrase it. – Pete L. Clark Apr 18 2011 at 15:19
5
I don't think it's just you being in a bad mood, Pete. It would at least help if $3$ was replaced by $n\geq 3$ in the places where it can be, I guess. – Tara Brough Apr 18 2011 at 15:25
3
While others have essentially said it already I need to say it too: this is so so broad and vague; hundreds of answers could be given, and I have the strong feeling that they will be given and the couple of interesting ones will be hard to find in this flood, while the question will be on the front page for way too long. – quid Apr 18 2011 at 15:35
36
A correct answer to the question in the title would be 1 I believe. – Roland Bacher Apr 18 2011 at 15:39
## 8 Answers
There are infinitely many regular polytopes in $\mathbb R^2$ and only five in $\mathbb R^3$.
-
$SL_2(\mathbb{Z})$ is an amalgam whereas $SL_3(\mathbb{Z})$ is not.
-
Autonomous systems of ODEs produce simple dynamics in two dimensions, but complex dynamics in three or more. This is directly related to the fact that curves in three or more dimensions can pass each other without crossing.
-
The permutation group of two elements is abelian, the permutation group of three elements is not. There are thus non-Galois number fields of degree $3$.
-
More examples are given as answers to a similar question about problems NP-hard in $\mathbb R^3$ but not in $\mathbb R^2$:
• Set-cover by half-spaces.
• Finding a shortest path between two points among polygonal obstacles.
• Determining whether a non-convex polygon/polyhedron can be triangulated without Steiner points.
• Realizability problem for $d$-dimensional polytopes is a candidate ($d \leq 3$ vs $d \geq 4$).
-
In 2-dimensional Euclidean space every two lines intersect (possibly at infinity), in 3-dimensional Euclidean space there are skew lines.
-
$\mathbb R^3$ is much more rigid than $\mathbb R^2$ when considering conformality: Conformal transformations of $\mathbb R^2$ do not form a finite-dimensional Lie-group.
-
Elements of order $2$ in a group are the only non-trivial elements which are their own inverse.
-
http://mathoverflow.net/questions/5450/cocktail-party-math/5488
## Cocktail party math [closed]
Ok, hotshots. You're at a party, and you're chatting with some non-mathematicians. You tell them that you're a mathematician, and then they ask you to elaborate a bit on what you study, or they ask you to explain why you like math so much.
1. What are some engaging ways to do this in general? What are some nice elementary results, accessible to people with any mathematical background or lack thereof, that can be used to illustrate why math is interesting, and its depth, breadth, and beauty? Note the scenario: you're at a party so it should be a relatively quick and snappy kind of thing that doesn't require a blackboard or paper to explain.
2. Easy Mode: How do you explain what your particular sub-field of math is about in an accurate but still understandable and engaging way? Hard Mode: Assuming you work in a less applied area, how can you do this without mentioning any real-world applications? Again, note the scenario.
This question is inspired by this one, in particular Anton's answer.
-
7
This is not a question: But apparently seven dimensional laser calculus is a good answer. – Peter McNamara Nov 13 2009 at 22:47
Following the precedent of questions like mathoverflow.net/questions/5353/… and mathoverflow.net/questions/3559/… , I figured this one would be ok as well. – Kevin Lin Nov 13 2009 at 23:04
2
seems like quite a reasonable question to me, I don't see a difference between this and something like "Do good math jokes exist?" or "Mathematicians who were late learners" or the other two that Kevin mentions – Michael Hoffman Nov 14 2009 at 6:20
1
This is the problem with making decisions based on precedents. Once one undesirable question is allowed to slip through the net, it becomes a free for all. – Peter McNamara Nov 14 2009 at 16:31
@Peter: I propose to take this discussion to the meta board – Kevin Lin Nov 14 2009 at 17:00
## 25 Answers
In parties where people are eating pizza, it is quite nice to see people taking their slice and curving the edge so that the slice stays straight. Then you can tell them that this is effectively Gauss's "Theorema Egregium": the initial Gaussian curvature is zero, so by curving the slice in one direction they force the slice to stay straight in the perpendicular direction.
You can probably continue the discussion talking about rolling pieces of paper into cylinders but not into tori ("doughnuts"), or maybe talking about soap bubbles!
-
4
I love the pizza illustration! – Elisha Peterson Nov 14 2009 at 23:58
10
Ohhhh maaaan! I've been curving pizza like that since I was twelve, and I never saw the differential geometry connection! :) – Vectornaut Nov 15 2009 at 5:34
1
I don't get it.. or maybe I'm too stupid and have not eaten enough pizzas.. what does it mean "curving pizza" or curving the edge of the slice? You mean picking up the sliced slice and moving it away from the pizza in a curved-like manner to cut off the cheese part? sorry I feel so silly here – Jose Capco Nov 15 2009 at 22:10
11
The idea is that, if you pick up a slice of pizza by the crust, gravity makes the tip of the slice droop downwards. This makes it difficult to eat, and the toppings can slide off. However, if you use your hand to fold it lengthwise a bit (i.e., perpendicular to the crust), then the pizza stays straight and no longer droops. (See slice.seriouseats.com/images/… for a picture of this.) Essentially, you make one of the principal curvatures positive -- but since the Gaussian curvature is zero, this forces the other principal curvature to be zero. – Ari Nov 15 2009 at 22:38
I've always found that a brief description of the Stable Marriage Problem gets people interested. I frame it as a high school dance with n boys and n girls each with a preference relation defined on the members of the opposite sex. Of course I don't call it a preference relation, but that's what it is. I then explain the disaster and heartache waiting if an unstable matching occurs.
I'll pair up people standing around at the party to illustrate the theorem, and get people involved. I'll then close with the fact that the National Resident Matching Program, which places graduating medical students into residence positions, uses the Stable Marriage Theorem to determine those placements.
The example is interesting enough to keep people listening and just technical enough that the answers aren't clear from the outset. If people are still interested you can segue into bipartite graphs and other kinds of matchings, or just graphs in general. Graph Theory is a rich area for cocktail party discussion.
Stable Marriage Problem can be found on Wikipedia. I can't post a second link as a new user.
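For readers who want to see the mechanics, the standard algorithm behind the Stable Marriage Theorem is Gale–Shapley deferred acceptance. The Python sketch below is my own illustration (the names and preference lists are made up):

```python
def stable_matching(boy_prefs, girl_prefs):
    """Gale-Shapley: boys propose in preference order, girls tentatively accept.

    boy_prefs/girl_prefs map each person to a list of the opposite sex,
    ordered from most to least preferred.  Returns a dict girl -> boy.
    """
    # rank[g][b] = position of b in g's list (lower is better).
    rank = {g: {b: i for i, b in enumerate(prefs)}
            for g, prefs in girl_prefs.items()}
    free = list(boy_prefs)                   # boys with no partner yet
    next_choice = {b: 0 for b in boy_prefs}  # index of b's next proposal
    engaged = {}                             # girl -> boy

    while free:
        b = free.pop()
        g = boy_prefs[b][next_choice[b]]
        next_choice[b] += 1
        if g not in engaged:
            engaged[g] = b                   # g had no partner; accept
        elif rank[g][b] < rank[g][engaged[g]]:
            free.append(engaged[g])          # g trades up; old boy is free
            engaged[g] = b
        else:
            free.append(b)                   # rejected; b proposes again later
    return engaged

pairs = stable_matching(
    {"al": ["ann", "bea"], "bob": ["ann", "bea"]},
    {"ann": ["bob", "al"], "bea": ["bob", "al"]},
)
print(pairs)  # {'ann': 'bob', 'bea': 'al'}
```

Since no boy proposes to the same girl twice, the loop terminates, and the resulting matching is stable; as noted in the comments, this proposal scheme produces the boy-optimal stable matching.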
-
1
Yeah, this is an example I use a lot too. The sociologically-minded friends also find it really interesting that the male courtship method is male-optimal and female-pessimal. – Jonah Ostroff Nov 13 2009 at 23:26
1
Have you really paired people up? That's awesome. – Qiaochu Yuan Nov 14 2009 at 1:59
21
Yes, I have. And I always put myself with my first choice. :) See kids? Math DOES help you meet women. – Hank Turowski Nov 14 2009 at 5:55
One of the things I like to mention, since I study topology, is the Brouwer fixed point theorem. The idea to explain is that if you pick up a piece of paper, DON'T RIP IT, but crumple it, turn it over, fold it, whatever, put it down on top of another one, then there will always be at least one point that will match up with the one below it on the other paper. It's very physical, very counterintuitive, and thoroughly math, though it's better demonstrated with a decorated napkin than plain.
Alternatively, in the same vein, one can talk about the hairy-sphere theorem (the idea that you can't comb a hairy sphere all the way around without a cowlick; i.e., you can't have a nonvanishing continuous vector field along the sphere).
-
2
Borsuk-Ulam is nicely illustrated with deflated toy balloon: crumple it all you want, but once you lay the balloon flat, there will be a pair of antipodal points one on top of another. – Boris Bukh Jan 29 2010 at 23:13
There's no reason to stick to your subfield of mathematics. Nonmathematicians aren't going to care about the difference between category theory and Fourier analysis.
I like to tell people about Arrow's theorem, that the only voting systems which are consistent are dictatorships. Consistency can be illustrated by this joke:
"Would you like tea or coffee?"
"Oh, we also have lemonade."
"In that case, I'd like coffee."
That seems silly, but it's what democratic systems do, e.g., in 1992 the American population preferred Bush over Clinton, but with the right wing 3rd party candidacy of Perot, Clinton was elected.
Most people get the joke, and see that consistency might be desirable, and then that mathematics can say some things which are meaningful to them.
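The tea/coffee joke can be made concrete with a toy plurality election. The ballot counts below are hypothetical numbers echoing the 1992 example:

```python
# Illustrative ballots (hypothetical counts) showing how plurality voting
# violates "independence of irrelevant alternatives": adding a third
# candidate flips the winner between the other two.
from collections import Counter

ballots = (
    [["Clinton", "Bush", "Perot"]] * 3 +   # 3 voters
    [["Bush", "Clinton", "Perot"]] * 2 +   # 2 voters
    [["Perot", "Bush", "Clinton"]] * 2     # 2 Perot voters who prefer Bush
)

def plurality_winner(ballots, candidates):
    """Winner = candidate ranked highest among `candidates` on most ballots."""
    tally = Counter(next(c for c in b if c in candidates) for b in ballots)
    return tally.most_common(1)[0][0]

with_perot = plurality_winner(ballots, {"Clinton", "Bush", "Perot"})
without_perot = plurality_winner(ballots, {"Clinton", "Bush"})
```

Head to head, Bush beats Clinton 4 to 3 on these ballots, yet adding Perot flips the plurality winner to Clinton: exactly an "irrelevant alternative" changing the outcome.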
-
This is really interesting. And I guess your comment about category theory/Fourier analysis is true....... though I really wish it weren't..... – Kevin Lin Apr 20 2010 at 8:57
I have considered some of the lack of knowledge and mistaken impressions of mathematics (and scientific research in general) that can be held by nonmathematicians. In mathematics, there's nothing to prove, just things to calculate. Or all of the important things to prove were established ages ago. Or if there are things still to prove, it's because the remaining questions are incredibly complicated, or incomprehensibly abstract. Or remaining questions could be mystical ones with no right answer, only opinions. Maybe contemporary mathematicians are much smarter than their predecessors because they have powerful computers. In any case, applications — technology, health, some other science that's actually interesting — could be the serious reason to do mathematics. Or if not that, it could be sheer ego and hero worship.
To be fair, a lot of nonmathematicians don't have any such depressing view of our profession. However, they can adopt it very quickly in response to bad explanations. Certainly most nonmathematicians have little sense of the basic coin of research in pure mathematics: theorems, proofs, definitions, conjectures, open problems. They also generally don't know that mathematics was already sophisticated in the 19th century, that a vast amount was accomplished in the first half of the 20th century, and that there are plenty of open problems left. (19th century mathematics is largely invisible in newspapers. On the one hand, very few readers or journalists know any of it; on the other hand, it certainly isn't news.)
To counter every side of that, I like to discuss questions that are not only accessible and fun, but also have a historical narrative. The narrative can go from an easy question, to some 19th or early 20th century result, to open problems. It can also cite great results from mathematicians other than the most famous heroes. I think that this can be done in lots of ways, but it is important to stick to clear explanations. Here is an example and a half:
1. Knot theory. Is an overhand (a trefoil) different from a nothing (an unknot)? Is a right-handed overhand different from a left-handed one? Yes and yes, according to Heegaard, Tietze, and Dehn from a century ago. Are there knots that aren't handed? For example, the figure-eight. Are there non-invertible knots? Yes, but that's harder; it was only established in 1964 by Trotter. Is it possible to distinguish any pair of knots? Yes, as was first proved by Haken in the late 1960s. How would you do it? The current best way is with Thurston's ideas, using hyperbolic geometry. (Hyperbolic geometry is then another big topic.) How many crossings do you need to switch to convert one knot to another? That is a big open problem, although it has been solved in many interesting cases. What's the easiest solution to the first of this whole chain of questions? Reidemeister moves and 3-colorings. Etc.
2. Real algebraic curves. Hyperbolas, parabolas, and ellipses are all quadratic curves. A hyperbola has two branches, but these are halves of one oval that passes through infinity. How many ovals can you have in higher degrees? An indirect argument tells you that you can't have an unbounded number in any fixed degree. In degree 4 you can have 4 ovals; in degree 6 you can have 11 ovals. Harnack discovered and proved these upper bounds in the 19th century. Can the ovals nest any way you like? No...
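The 3-colorings mentioned at the end of the knot discussion are easy to brute-force. In the sketch below, arcs of a diagram get colors in {0,1,2}, and at each crossing the overstrand color doubled must equal the sum of the understrand colors mod 3; the trefoil admits colorings using more than one color, the unknot does not, which already distinguishes them:

```python
# Brute-force count of 3-colorings of a knot diagram. Each crossing is a
# triple (o, u, v): overstrand arc o, understrand arcs u and v, with the
# rule 2*x[o] == x[u] + x[v] (mod 3).
from itertools import product

def colorings(num_arcs, crossings):
    """Count colorings satisfying the rule at every crossing."""
    return sum(
        all((2 * x[o] - x[u] - x[v]) % 3 == 0 for o, u, v in crossings)
        for x in product(range(3), repeat=num_arcs)
    )

trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]   # standard 3-arc diagram
unknot = []                                    # one arc, no crossings

trefoil_count = colorings(3, trefoil)  # 9: 3 constant + 6 genuine colorings
unknot_count = colorings(1, unknot)    # 3 constant colorings only
```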
-
I researched some properties of words over a limited alphabet, which has applications to genetic modeling. So at a party, I tell people that I model genes.
They do tend to look at me funny after I say it, though.
-
There are some good ones in topology (or maybe I just know these examples because it's my field).
1. People always seem to think the hairy ball theorem (already mentioned) is interesting because it explain why people have cowlicks on their head, not to mention the name itself usually gets a few giggles.
2. The Meteorological Theorem (Borsuk-Ulam) implies there are antipodal points on the surface of the Earth where the temperature and barometric pressure are the same.
3. My favorite one is similar to the plate trick above. If you hold a coffee cup in your hand and rotate your wrist until the cup is oriented the original way, your arm is all tangled up; but if you rotate it again, your arm straightens out. It demonstrates that the fundamental group of the group of rotations in R^3 is Z_2.
4. Vin de Silva, who works in applied topology, has the best one though. Take a piece of paper, draw a few dots on it, and ask what the shape is. Draw more until it's clear that you're "sampling" points from a circle. Then tell them that math (persistent homology in this case) lets you find the shape of a sampling of points. This leads to simple discussions of using math to solve lots of applied problems.
Then you can start making jokes using the word "functor." ("Functor? I hardly know her!" or just randomly say "Clusterfunctors!")
-
I'm a bit confused by #3. I tried to do what you suggested but my wrist doesn't bend that way. – Michael Lugo Nov 14 2009 at 15:28
@Michael Lugo: Here's an okay illustration, along with a nice description (on the previous page) of what the plate trick has to do with SO(3): books.google.ca/… I can't believe there's no video of this on YouTube! – Vectornaut Nov 14 2009 at 17:07
@Michael: My fault for posting before morning coffee. Vectornaut's link has a good picture. If it's done with a coffee cup it's easy to see when you've gone around 360 degrees. Bredon's "Topology and Geometry" has a nice picture of this, but I can't find it online. – Josh Roberts Nov 14 2009 at 20:46
Noncommutativity for transformations: when dressing up, you first put on socks, then shoes, when undressing, you have to take off shoes first!
Percentages and interest rates: it sort of matters whether to put 2,000 into your savings account this year and 1,000 the next year or 1,000 this year and 2,000 the next year (not obvious to many people).
Abstract maths can go far from "reality" into the "world of ideas": Banach-Tarski paradox.
And as to the questions like "why do you like maths so much?" I usually prefer to draw parallels between maths and arts/languages (not denying the applications either).
-
In Ramsey theory, there is a theorem that states that if the edges of the complete graph on six points are divided into two sets, one set will contain all the edges of a triangle. This can be illustrated by taking 6 people at the party and finding either three that know each other or three that don't. This is easy to demonstrate and can be generalized in various ways.
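This party fact (R(3,3) = 6) is small enough to verify exhaustively; a sketch:

```python
# Brute-force check: every red/blue coloring of the 15 edges of K6 contains
# a monochromatic triangle, while K5 admits a coloring with none
# (color the edges of a 5-cycle red, the rest blue).
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring maps each sorted pair (i, j) to 0 or 1
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

edges6 = list(combinations(range(6), 2))
k6_always = all(
    has_mono_triangle(6, dict(zip(edges6, bits)))
    for bits in product((0, 1), repeat=len(edges6))   # all 2**15 colorings
)

# On K5, color edges of the cycle 0-1-2-3-4-0 red (1), the rest blue (0).
cycle = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
k5_coloring = {e: int(e in cycle) for e in combinations(range(5), 2)}
k5_escapes = not has_mono_triangle(5, k5_coloring)
```

Six people always suffice, and the 5-cycle coloring shows that five do not.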
-
I'm in applied math so I have it easy: I use math to pick the cancer treatments with the highest probability of being safe and effective.
-
I like to talk about planarity of graphs, because it's easy to introduce, easy to draw on a napkin, and on the "application" side of things you can mention computer chip design, transportation networks, and other things. The nonplanarity of $K_{3,3}$ can be introduced via the "three utilities" puzzle: you have three houses and you want to connect each one to each of three utilities (water, hydro, gas) -- can you do it without having the lines cross? Similarly the nonplanarity of $K_5$ can be posed as a puzzle, and Kuratowski's theorem basically asserts that these two examples give the only obstructions to planarity: a graph is planar if and only if it contains no subgraph homeomorphic to $K_5$ or $K_{3,3}$.
Going further, you can talk about embedding graphs on a torus (i.e. in a game of Asteroids), for which you can draw examples of embeddings of $K_5$ and $K_{3,3}$, and say that there's an analogy of Kuratowski's theorem (by corollary to the Robertson-Seymour theorem): there is a finite list of "forbidden graphs" which are the only obstruction to a graph being embeddable on the torus, in the sense that any nonembeddable graph contains one of those forbidden graphs as a minor.
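For napkin purposes, the nonplanarity of both graphs also follows from simple edge-counting corollaries of Euler's formula, without invoking Kuratowski; a quick sketch:

```python
# A simple planar graph satisfies E <= 3V - 6, and a triangle-free one
# satisfies E <= 2V - 4. Both K5 and K3,3 violate their respective bound.
def complete_edges(n):
    return n * (n - 1) // 2

def complete_bipartite_edges(m, n):
    return m * n

k5_v, k5_e = 5, complete_edges(5)                  # 5 vertices, 10 edges
k33_v, k33_e = 6, complete_bipartite_edges(3, 3)   # 6 vertices, 9 edges

k5_nonplanar = k5_e > 3 * k5_v - 6     # 10 > 9
k33_nonplanar = k33_e > 2 * k33_v - 4  # 9 > 8 (K3,3 is triangle-free)
```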
-
I found that playing planarity.net gave me the intuition for planar graphs before I thought of looking up what they actually were. Perhaps if you're near a computer you can introduce people to that? – Elizabeth S. Q. Goodman Nov 18 2009 at 5:46
On computer role-playing forums, I have seen a lot of general strategy advice for difficult group battles ("attack the boss monster first", "start with the highest-damage enemies", etc.). I decided to see for myself which tactic is optimal. The answer requires some basic (school-level but nontrivial) mathematics: see http://www.mathlinks.ro/viewtopic.php?t=326811 , scroll down to the remark (the problem turned out to be more or less identical to an American Math. Monthly question that predates CRPGs).
Is it possible to brute-force a combination lock by repeatedly changing a digit without checking one and the same combination several times? (Yes, by induction, at least if you can cycle every digit all the way from 0, 1, 2, ... to 9 and back to 0. If you cannot move between 9 and 0 without going through the intermediate digits, then no, by a semi-invariant argument. The keyword here is Hamiltonian path. If none exists, the next natural question is how to find a path through every vertex of minimal length...) And as we are talking about Hamiltonian paths, Euler paths can be interesting as well...
For some reason I never understood, many people not particularly close to mathematics seem to be fascinated by Rubik's cube. As opposed to some other popular riddles like Sudoku, this one becomes more or less trivial once one knows the maths behind it.
Huh, the four-color theorem has not been mentioned yet? This is the best example for the notion of beautiful vs. ugly in mathematics that comes into my mind. The five-color theorem is not difficult and rather nice to prove; the only proofs of the four-color theorem known go the "classify and solve for hundreds of particular cases" way. Mathematics is probably the only science where people care for the difference.
The isoperimetric inequality, with all of its, sorry for the pun, variations (such as the case of a curve in a half-plane with two given ends).
You want to encrypt a message in a way that each of $n$ persons gets a key such that any $k$ of them can, in a joint effort, unambiguously decrypt it, while any $k-1$ will not have the slightest idea about the message (i.e., every possible message, including gibberish, will be equiprobable). It's called Shamir's Secret Sharing.
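A minimal sketch of how Shamir's scheme works (toy prime and parameters, not production crypto): the secret is the constant term of a random polynomial of degree $k-1$ over a prime field, the shares are point evaluations, and any $k$ shares recover the secret by Lagrange interpolation at 0:

```python
# Toy Shamir secret sharing over GF(P). P, the secret, and the parameters
# are illustrative values only.
import random

P = 2087  # a prime larger than any secret we will share

def make_shares(secret, k, n, rng=random.Random(0)):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(secret=1234, k=3, n=5)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```

With only $k-1$ shares, every possible secret is consistent with some polynomial, which is exactly the "equiprobable gibberish" property above.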
-
Hah, your RPG strategy is more or less identical to one I figured out when I was 18 or so wondering about the same things. Although I approached it from a slightly different perspective, it is basically equivalent. It's also not too difficult to account for some basic additional factors, such as area-of-effect attacks, ranged attacks, who's nearest to you, etc. Similarly, before that, I'd come up with a nice approximate solution to what basically amounts to the knapsack problem in RPGs. I can carry x pounds, I have items worth d_i dollars, and want to maximize the value of stuff I can take. – jeremy Feb 10 2010 at 7:57
The knapsack thing leads to the rearrangement inequality ( en.wikipedia.org/wiki/Rearrangement_inequality ) which I also have thought about mentioning but didn't because most people would do the right thing by intuition here (in RPG or in real life), and the mathematics would just give them a proof that their intuition is right - not particularly exciting. – darij grinberg Feb 10 2010 at 13:09
I am a bit skeptical about area of effect tactics - while it is certainly a good idea to get the toughest enemies in one row and shoot some cone-shaped effect on them, it is usually a matter of luck and of enemy AI whether getting them in one row actually works. – darij grinberg Feb 10 2010 at 13:10
Here's a couple of examples from the Japanese TV show "Coma University of Mathematics". There have been some excellent episodes which are accessible to non-mathematicians yet I think really conveys the creativity in mathematical research, and how very different that is from doing complicated arithmetic.
1. The moving sofa problem: explain that, to get to your room, you have to pass through two corridors, 1m in width, which meet at right angles. What shape of sofa has the largest area but can still get into your room? It's not immediately clear what shapes one should try, and it might surprise your audience that such a simple-looking question is an open problem.
2. The art gallery theorem (wikipedia is a good reference, I'm not allowed to post 2 links): in short, how many guards (or CCTV cameras) do you need to cover all of a polygonal art gallery? Again, a problem that's easy to understand, but the proof for the upper bound requires a clever idea.
-
One of the issues that came up during the Challenger explosion investigation was whether a certain port had a circular opening. The engineers had taken two parallel planes, placed the port between them, measured the distance between the planes, and then rotated and repeated. Since all the measured distances were equal, they concluded the port was circular.
Anyone who's studied a bit of geometry knows this is an error. There are curves of constant width that are not circles -- a good counterexample is the 50p coin if you have some UK currency.
It's easy to extend this into a discussion of whether manhole covers need to be circular. Any manhole cover that's a curve of constant width cannot fall into its hole.
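The constant-width claim is easy to check numerically. Here is a sketch (toy sampling) that traces the boundary of a Reuleaux triangle, the simplest non-circular curve of constant width, and measures the gap between parallel supporting lines in several directions:

```python
# Numerical check that the Reuleaux triangle has constant width equal to
# the side length of the underlying equilateral triangle (here, 1).
import math

def reuleaux_boundary(samples_per_arc=2000):
    # Each of the three arcs is centered at one vertex of an equilateral
    # triangle with side 1 and sweeps 60 degrees through the other two.
    arcs = [((0.0, 0.0), 0.0),                        # about A, angles 0..60
            ((1.0, 0.0), 2 * math.pi / 3),            # about B, 120..180
            ((0.5, math.sqrt(3) / 2), 4 * math.pi / 3)]  # about C, 240..300
    pts = []
    for (cx, cy), start in arcs:
        for i in range(samples_per_arc + 1):
            t = start + (math.pi / 3) * i / samples_per_arc
            pts.append((cx + math.cos(t), cy + math.sin(t)))
    return pts

def width(points, angle):
    # Distance between the two supporting lines perpendicular to `angle`.
    u = (math.cos(angle), math.sin(angle))
    proj = [x * u[0] + y * u[1] for x, y in points]
    return max(proj) - min(proj)

pts = reuleaux_boundary()
widths = [width(pts, a) for a in (0.0, 0.3, 1.0, 2.0, 2.9)]
```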
-
I like the following description of the inverse galois problem I came up with at one point:
Do you remember polynomials? Maybe from high school math? Do you ever remember trying to factor polynomials in school? Like for example $x^2 -4$ factors as $(x+2)(x-2)$. So notice that you can get from one factor to the other just by switching from 2 to -2, so this polynomial is symmetric in the simplest way possible, you just hold up a mirror and switch roots from one to the other.
Well every polynomial is symmetric in some way or another, and there are very many ways a polynomial can be symmetric. We as mathematicians know how to take a polynomial and say in a very precise way "What kinds of symmetry does this polynomial have?"
Now, can we go the other way? Can we start with some symmetries and ask for a polynomial that is symmetric in that way? Well, we don't know. We should know, but we don't yet!
-
I have successfully used examples involving covering chessboards with dominoes. I start with the question about whether you can tile a chessboard with opposite corner squares removed. If this gets any interest, I might go on from there. This leads people to the idea that mathematics is not just about numbers, but is actually about thinking logically about pretty much anything.
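The coloring argument behind the mutilated-chessboard puzzle takes only a few lines to state in code; a sketch:

```python
# Each domino covers one light and one dark square, but removing opposite
# corners (which share a color) leaves 32 squares of one color and only 30
# of the other, so no tiling by 31 dominoes can exist.
board = {(r, c) for r in range(8) for c in range(8)} - {(0, 0), (7, 7)}

dark = sum((r + c) % 2 == 0 for r, c in board)
light = sum((r + c) % 2 == 1 for r, c in board)
dominoes_needed = len(board) // 2   # 31 dominoes for 62 squares
tiling_possible = (dark == light)   # necessary condition: it fails, 30 vs 32
```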
-
Does that work with no chess board and dominoes around? – Douglas Zare Jan 30 2010 at 6:47
A bizarre consequence of the crystallographic restriction is that one can't have a wallpaper pattern with fivefold rotational symmetry, but one can have 2-, 3-, 4-, and 6-fold rotational symmetry. It's tricky to explain "fivefold rotational symmetry" concisely and precisely, but I just tried this example out on a friend of mine, and I must say his reaction was fairly enjoyable. He found the result unbelievable.
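The reason behind the restriction fits on a napkin: a rotation by $2\pi/n$ that maps a lattice to itself can be written as an integer matrix in a lattice basis, so its trace $2\cos(2\pi/n)$ must be an integer. A quick check of which $n$ survive:

```python
# Which rotation orders are compatible with a 2D lattice? Exactly those n
# for which 2*cos(2*pi/n) is an integer.
import math

def lattice_compatible(n):
    trace = 2 * math.cos(2 * math.pi / n)
    return abs(trace - round(trace)) < 1e-9

allowed = [n for n in range(1, 13) if lattice_compatible(n)]  # [1, 2, 3, 4, 6]
```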
-
Starfish have fivefold rotational symmetry. That might help future explanations. – Michael Lugo Dec 23 2009 at 15:02
Especially when talking to engineers, when asked about math I tend to mention a fact I learned in Hilbert's book "Geometry and the Imagination": the two rulings on a quadric surface in 3-space can be used to design gear trains which allow for transferring motion between two arbitrary skew axes of rotation. (Basically, make gears shaped like real hyperboloids of one sheet. There's a picture in Hilbert's book, p. 287.) For some reason I find this a particularly snazzy (albeit very elementary) example of why anybody might care about algebraic geometry.
-
I have a go-to math fact to bring out at parties when people want to hear one. The cute part is that they can do it themselves.
Take a pen and paper and draw a quadrilateral. There are no restrictions (it can be concave or self-intersecting), but don't make it too close to the sides of the paper. Now, for each edge, draw the square containing that edge that is outside the quadrilateral. Put a dot in the center of each of the four squares, and draw a line connecting opposite dots, i.e., those that came from opposite edges.
The Punchline: The lines you just drew are the same length, and perpendicular.
I wrote it up and drew a pictorial proof on my (mostly-defunct) blog some time ago.
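This is Van Aubel's theorem, and it can be spot-checked with complex numbers: each square's center is the edge midpoint plus the half-edge rotated by 90 degrees, and the identity $d_2 = i\,d_1$ between the two connecting segments then holds for every quadrilateral. A sketch with an arbitrary quadrilateral:

```python
# Numerical check of Van Aubel's theorem: the segments joining centers of
# squares on opposite edges have equal length and meet at a right angle.
quad = [0 + 0j, 4 + 0j, 5 + 3j, 1 + 4j]  # any four vertices work

def square_center(a, b):
    # Center of the square erected on edge a->b (consistent side choice).
    return (a + b) / 2 - 1j * (b - a) / 2

centers = [square_center(quad[k], quad[(k + 1) % 4]) for k in range(4)]
d1 = centers[2] - centers[0]
d2 = centers[3] - centers[1]
# d2 == 1j * d1 holds exactly (up to floating point), which simultaneously
# gives equal lengths and perpendicularity.
```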
-
When people are asking for practical applications (which they inevitably do), I always explain how number theory (that's prime numbers for the layperson) was at some time studied by some people because it couldn't have any application ever (so it's pure and artistic), but now is the basis for cryptography, which is for example behind the banking system; implying that there can be a delay of 50-100 years between a mathematical discovery and a potential everyday application. I also mention that there are often immediate applications inside mathematics, so it's not totally useless :).
Topology has some interesting stuff too, as already mentioned.
-
Get a piece of paper in the shape of a long rectangle. Draw a line along the long side, exactly in the middle of the rectangle. Then make a Moebius strip from the piece of paper. Finally, cut along that middle line: you will obtain not two pieces, but a single, longer band. Now make another Moebius strip, but this time cut along a line about 1/3 of the way in from the edge of the long side (it is hard to describe, as English is not my primary language). See what happens. You may ask people to guess beforehand ;-)
Do you know Steinhaus's (http://en.wikipedia.org/wiki/Hugo_Steinhaus) book "Mathematical Kaleidoscope"? Read it and you will be ready for any occasion ;-)
-
When trying to talk about specific results, I really like talking about Cantor's Theorem (or at least, the special case of $2^{\aleph_0}$), and then, if they're willing to accept that, talk about ordinals a bit. If the audience isn't taking it, I'll generally talk about some arbitrary graph problem that comes to my head.
But when asked, I typically try to approach it from the more philosophic perspective of "what mathematicians do"-- study abstract structure. (Or at least, that's the approach I take to math), and try to explain what that means, providing some examples here and there. The definition of group shows up a lot, since there are easy to understand examples of groups. I also view myself as a bit of an artist, so I tend to use analogies that deal with music and painting.
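If the Cantor conversation gets that far, the diagonal step itself can be demonstrated on any finite table of 0/1 sequences; a sketch:

```python
# Cantor's diagonal trick in miniature: given any finite list of 0/1
# sequences, build one that differs from every listed sequence.
def diagonal(seqs):
    # Flip the k-th entry of the k-th sequence: the result differs
    # from sequence k at position k.
    return [1 - seqs[k][k] for k in range(len(seqs))]

table = [[0, 1, 0], [1, 1, 1], [0, 0, 0]]
missing = diagonal(table)  # not equal to any row of the table
```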
-
If you work in Theoretical computer science (or related fields) you can say what Anup Rao says in this article.
-
I just thought about this a bit more, and wonder whether, with a good set of people who can describe mathematics well, something in the realm of Bill Nye the Science Guy could be created for math. It might end up geometry-heavy, I'm not sure.
-
http://physics.stackexchange.com/questions/11905/when-is-the-hamiltonian-of-a-system-not-equal-to-its-total-energy/11911
# When is the Hamiltonian of a system not equal to its total energy?
I thought the Hamiltonian was always equal to the total energy of a system but have read that this isn't always true. Is there an example of this and does the Hamiltonian have a physical interpretation in such a case?
-
A large class of such examples comes from using accelerating and/or rotating frame of reference. See e.g. Herbert Goldstein, "Classical Mechanics", Chapter 2. – Qmechanic♦ Jul 5 '11 at 21:51
## 3 Answers
In an ideal, holonomic, monogenic system (the usual setting in classical mechanics), the Hamiltonian equals the total energy if and only if both the constraints and the Lagrangian are time-independent and no generalized potential is present.
So the condition for the Hamiltonian to equal the energy is quite stringent. Dan's example is one in which the Lagrangian depends on time. A more frequent example would be the Hamiltonian for charged particles in an electromagnetic field $$H=\frac{\left(\vec{P}-q\vec{A}\right)^2}{2m}+q\varphi$$ The first part equals the kinetic energy ($\vec{P}$ is the canonical, not the mechanical momentum), but the second part IS NOT necessarily the potential energy, as in general $\varphi$ can be changed arbitrarily by a gauge transformation.
-
What's a generalized potential? I've heard of a generalized force, is it related? – Dan Jul 6 '11 at 5:01
@Dan: Non-conservative generalized force cannot be written in terms of $Q_i=-\frac{\partial V}{\partial q_i}$, but some of them may be written as $Q_{i}=-\frac{\partial U}{\partial q_{i}}+\frac{d}{dt}\left(\frac{\partial U}{\partial\dot{q_{i}}}\right)$, then if we let $L=T-U$, $L$ will still satisfy the Lagrangian equation. The generalized potential for a charged particle is $q\varphi-q\vec{v}\cdot\vec{A}$. – C.R. Jul 6 '11 at 5:21
Actually, the Hamiltonian for a charged particle in an electromagnetic field is usually interpreted as the total energy. – Qmechanic♦ Jul 6 '11 at 14:24
@Qmechanic: I never encounter that kind of interpretation. As I said, the physical meaning of the first part is always kinetic energy, but the second part can be arbitrarily changed by a gauge fixing. Is total energy something variant with gauge? – C.R. Jul 7 '11 at 4:10
@Karsus Ren: As a reference to my above comment, see e.g. Herbert Goldstein, "Classical mechanics", eq. (8-26) in edition 2 or eq. (8.34) in edition 3. – Qmechanic♦ Jul 9 '11 at 15:22
The Hamiltonian is in general not equal to the energy when the coordinates explicitly depend on time. For example, we can take the system of a bead of mass $m$ confined to a circular ring of radius $R$. If we define the $0$ for the angle $\theta$ to be the bottom of the ring, the Lagrangian $$L=\frac{mR^2\dot{\theta}^2}{2}-mgR(1-\cos{(\theta)}).$$ The conjugate momentum $$p_{\theta}=\frac{\partial L}{\partial \dot{\theta}}=mR^2\dot{\theta}.$$ And the Hamiltonian $$H=\frac{p_{\theta}^2}{2mR^2}+mgR(1-\cos{\theta}),$$ which is equal to the energy.
However, if we define the $0$ for theta to be moving around the ring with an angular speed $\omega$, then the Lagrangian $$L=\frac{mR^2(\dot{\theta}-\omega)^2}{2}-mgR(1-\cos{(\theta-\omega t)}).$$
The conjugate momentum $$p_{\theta}=\frac{\partial L}{\partial \dot{\theta}}=mR^2\dot{\theta}-mR^2 \omega.$$
And the Hamiltonian $$H=\frac{p_{\theta}^2}{2mR^2}+p_{\theta}\omega+mgR(1-\cos(\theta-\omega t)),$$ which is not equal to the energy (in terms of $\dot{\theta}$ it has an explicit dependence on $\omega$).
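A quick numeric spot-check of the rotating-frame example (arbitrary sample values): the Legendre transform $p_\theta\dot\theta - L$ agrees with the closed form, and it differs from the energy $T + V$ by exactly $p_\theta\omega$:

```python
# Spot-check with arbitrary sample values that H (Legendre transform of L)
# differs from the energy T + V by the extra term p_theta * omega.
import math

m, R, g, w, t = 2.0, 1.5, 9.8, 0.7, 0.3
theta, theta_dot = 1.1, 2.0

V = m * g * R * (1 - math.cos(theta - w * t))
L = m * R**2 * (theta_dot - w)**2 / 2 - V
p = m * R**2 * (theta_dot - w)            # conjugate momentum

H_legendre = p * theta_dot - L            # definition of H
H_formula = p**2 / (2 * m * R**2) + p * w + V

energy = m * R**2 * (theta_dot - w)**2 / 2 + V  # T + V in the lab frame
```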
-
Goldstein's Classical Mechanics (2nd Ed.), pg. 349, section 8.2 on cyclic coordinates and conservation theorems, has a good discussion of this. In his words:
The identification of H as a constant of the motion and as the total energy
are two separate matters. The conditions sufficient for one are not
enough for the other.
He then goes on to provide an example of a 1-d system in which he chooses two different generalized coordinate systems. For the first choice, H is the total energy while for the second choice H ends up being just a conserved quantity and NOT the total energy of the system.
Check it out. It's a very nice example.
-
What is the definition of energy here? Is it more than 'what Noether's theorem gives you if you consider time translations'? I thought this quantity always is the Hamiltonian. – Nick Kidman Feb 12 '12 at 18:39
http://math.stackexchange.com/questions/146986/are-there-examples-that-suggest-the-riemann-hypothesis-might-be-false?answertab=votes
# Are there examples that suggest the Riemann Hypothesis might be false?
Are there examples that might suggest the Riemann hypothesis is false?
I mean, is there a zeta function $\zeta (s,X)$ for some mathematical object $X$ with the properties
• $\zeta (1-s,X)$ and $\zeta (s,X)$ are related by a functional equation.
• $\zeta (s,X)$ can be expanded into an Euler product $\zeta (s,X)= \prod _{i}(1-N(i)^{-s})^{-1}$.
• the zeta function $\zeta (s,X)$ has zeroes of the form $a+bi$ with $b\ne0$ for $a$ different from $\frac 12$.
That is, a zeta function with similar properties to the Riemann zeta but with zeroes off the critical line.
-
## 2 Answers
The answer is either yes or no, depending on how stringently you interpret your various requirements. You should look at the discussion of the Selberg class of functions, which is Selberg's conjectural characterization of functions satisfying the Riemann Hypothesis. In particular, if you read the comments on the definition in the wikipedia entry, you will get a sense of why those are the precise conditions on a "$\zeta$-type function" which are needed to guarantee RH.
As a concrete (counter-)example, consider the function $$\eta(s) = 1 - 2^{-s} + 3^{-s} - 4^{-s} + \cdots,$$ sometimes called the Dirichlet $\eta$-function. It admits a functional equation and Euler product, but does not satisfy RH. It is not in the Selberg class because although it admits an Euler product, its Euler factors do not satisfy the correct conditions. (This is discussed in the wikipedia entry on the Selberg class.)
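For the curious, the off-line zeros of $\eta$ are easy to exhibit numerically: since $\eta(s) = (1 - 2^{1-s})\zeta(s)$, the function $\eta$ vanishes wherever the prefactor does, and those zeros sit on the line $\mathrm{Re}(s) = 1$. A sketch:

```python
# The prefactor 1 - 2**(1-s) vanishes at s = 1 + 2*pi*i*k/log(2), which
# lies on Re(s) = 1, off the critical line Re(s) = 1/2. Since zeta is
# finite there, eta(s) = (1 - 2**(1-s)) * zeta(s) vanishes too.
import math

s0 = 1 + 2j * math.pi / math.log(2)   # first such zero (k = 1)
prefactor = 1 - 2 ** (1 - s0)         # equals 1 - exp(-2*pi*i) = 0
```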
-
There is the de Bruijn-Newman constant $\Lambda$. RH is equivalent to the statement that $\Lambda \leq 0$. To date it is known that $-2.7 \cdot 10^{-9} < \Lambda \leq 1/2$. See http://www.dtc.umn.edu/~odlyzko/doc/debruijn.newman.pdf
-
http://mathoverflow.net/questions/90523/on-the-least-prime-in-arithmetic-progressions/91084
## On the least prime in arithmetic progressions
My question concerns the least prime (denoted $p(a, q)$) in the arithmetic progression $a \pmod q$, where $a$ and $q$ are coprime. Some time ago Linnik demonstrated that $$p(a, q) \ll q^L$$ for some absolute constant $L$. The Wikipedia page for this theorem lists a number of papers that estimate $L$, with the most recent result by Xylouris, who proved that $L \leq 5.2$.
It is also known that the Generalized Riemann Hypothesis implies $$p(a, q) \ll (q\log q)^2 \text{,}$$ while in 1978 Heath-Brown conjectured an even tighter bound: $$p(a, q) \ll q(\log q)^2 \text{.}$$ I'm wondering whether this last bound, if true (it is still an open problem), implies something non-trivial about $L$-functions?
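For intuition only, the least primes $p(a, q)$ are easy to tabulate for tiny moduli with naive trial division (this of course says nothing about the asymptotic bounds); a sketch:

```python
# Tabulate the least prime in each residue class a mod q, for small toy q.
import math

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def least_prime(a, q):
    """Smallest prime congruent to a modulo q (a, q coprime)."""
    n = a
    while not is_prime(n):
        n += q
    return n

# Worst residue class for each small modulus q:
worst = {q: max(least_prime(a, q)
                for a in range(1, q) if math.gcd(a, q) == 1)
         for q in range(3, 30)}
```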
-
I'd guess that you could try to plug in the relevant values into the explicit formula (for example see equation (2) here: math.ubc.ca/~gerg/teaching/613-Winter2011/…) and do the computations, but I'll leave a more authoritative statement on this to the experts. – Timothy Foo Mar 8 2012 at 7:02
## 1 Answer
I haven't looked at this type of question; however, my first thought is no: I don't see (or haven't seen yet) how the existence of one prime (or a small number of them) would force the Dirichlet L-functions (I guess these are what you meant) to look a certain way.
As an example, take the explicit formula which counts the number of primes in arithmetic progressions by using zeros of some L-functions.
Now, if you know that the "left side" of that equation (counting the primes) is 1 instead of 0, this does not seem to force anything noteworthy for the zeros that appear in the sum on the right side. Some small change in the imaginary parts of a couple of zeros (with very large imaginary part) might be enough to change the total value by 1, which gives my answer in the special case where you only use the explicit formula. However, most arguments in analytic number theory (where this theorem on the least prime comes from) tend to behave similarly.
I hope you see the point I am trying to make despite my unclear presentation.
-
I think I agree. If you knew there were lots of small primes in every arithmetic progression - essentially the desired asymptotic number with a small error term - then that would probably improve the known zero-free region for Dirichlet $L$-functions, up to a proof of the generalized Riemann hypothesis if the error term were good enough. But just one small prime in each residue class, I'm not sure that would give us any leverage. – Greg Martin Mar 13 2012 at 18:41
That's a good point Greg which I wanted to mention as well and forgot. – unknown (yahoo) Mar 14 2012 at 10:03
Thanks to both of you: unknown and Greg. I think I understand. Your arguments sound convincing. – kdr Mar 25 2012 at 21:26
http://mathoverflow.net/questions/26506/categorical-construction-of-the-category-of-schemes
## Categorical construction of the category of schemes?
The answer to the following question is probably well known or the question itself is well known not to have a reasonable answer. In the latter case could you please let me know what the "right" question may be (rather than stating that the answer is 42;))
Is there a purely categorical procedure that takes the category of commutative rings as input and produces the category of schemes (over $\mathbf{Z}$) as output?
A possible place to start would be to consider a scheme $X$ as a functor from the category $CommRing$ of commutative rings to the category of sets: $A\mapsto Hom_{Sch}(Spec(A),X)$ where $A$ is a commutative ring. If instead of $Spec(A)$'s we consider all schemes, then we simply get the Yoneda embedding. But some questions arise.
1. Does this give a fully faithful functor from schemes to functors from commutative rings to sets? Or loosely speaking, do $Spec(A)$-valued points ($A$ a commutative ring) suffice to determine a scheme? (My guess is that the answer is yes and this is classical.)
2. Is there a way to characterize those functors that actually come from schemes? For example one can introduce a Grothendieck topology on $CommRing$ (or its opposite) and require that the functor should be a sheaf in that topology. But in that case, can one describe the topology without referring to the fact that the objects of $CommRing$ are commutative rings? (Here my guess is the first question is probably too complicated but there are some necessary conditions.)
3. Regardless whether the answer to 2. is positive or negative, is there a way to describe algebraic spaces or stacks as presheaves on $CommRing^{op}$ that satisfy some conditions?
-
A very satisfactory answer to #3, under mild finiteness hypotheses, is Artin's criteria (see his paper "Versal deformations & algebraic stacks", & his earlier stuff on alg. spaces), coupled with "converse" results of Olsson, and Artin's results on remarkable stability properties of these kinds of objects (preservation under fppf quotients, contraction results, etc.): these imply the concepts of alg. space/stack can be checked in interesting abstract settings, unlike for schemes, and that there's natural sense in which these concepts need no further generalization for "geometric" purposes. – BCnrd May 31 2010 at 0:22
Thanks, BCnrd! This and your other comments were very helpful. Would you consider posting them as an answer? – algori May 31 2010 at 15:57
@algori: Glad to be of help. I'll take a pass on reposting. – BCnrd May 31 2010 at 18:43
## 6 Answers
1. The highbrow way of reformulating your question is as follows. Consider the category $Sch$ of all schemes endowed with the Zariski topology. There is a fully faithful embedding of the category of affine schemes $Aff = CommRing^{op}$ into $Sch$; the topology induced on $Aff$ by that on $Sch$ is also the Zariski topology. The comparison lemma ([SGA4] III, 4.1) then says that, because any object in $Sch$ can be covered by objects in $Aff$, the categories of sheaves on both sites are equivalent. In particular, representable sheaves in $Sch$ (i.e., schemes) are determined by their values on affine schemes.
2. For a sheaf $F$ on $Aff$ to be represented by a scheme it is enough that it be covered by affine schemes, i.e., that there exist affine schemes $U_i$ together with open immersions $U_i \to F$ (you have to define what this means, of course) such that $\coprod_i h_{U_i} \to F$ is an epimorphism of sheaves. Actually, you can take this as a definition of schemes. The compatibility of the gluings in the classical definition is taken care of here by the sheaf condition.
3. Algebraic spaces can be similarly defined. While I was writing this, Harry beat me to giving the reference to the excellent notes of Bertrand Toën from a course of his on algebraic stacks.
In 2, you also ask if you can construct schemes from $Aff$ without actually using the fact that you are dealing with commutative rings. I think not. The categorical nonsense can get you only so far: at some point you have to introduce the geometry itself, and that is given by the $Aff$ with its topology. If you replace $Aff$ by the category of open sets in some $\mathbb{R}^n$ with open immersions you would end up defining manifolds. This is what Toën calls geometric contexts.
-
+1 for good taste! – Harry Gindi May 30 2010 at 23:08
Thanks, Alberto! Yes, I was hoping there is a procedure which would extract the Zariski topology (or other) from $Aff$ just by looking at it as a category. But maybe this is too much to hope for. – algori May 31 2010 at 16:00
@algori: What do you mean by "just looking at the category"? – Harry Gindi May 31 2010 at 16:59
Harry -- I was thinking that maybe there is a condition on the category that would imply the existence of a preferred topology on it and which will pick the, say Zariski topology when applied to $Aff$. Then it would be interesting to see to which other categories this is applicable and what the result would be. – algori May 31 2010 at 18:27
Toen's notes on stacks construct the category of schemes as the category of etale sheaves (presheaves satisfying descent in the etale topology) on CRing^op with a jointly surjective cover by smooth monomorphisms (exercise: show that smooth monomorphisms of affines are etale) of representable functors (i.e. affines).
http://www.math.univ-toulouse.fr/~toen/m2.html
He constructs algebraic spaces in a similar way, then constructs algebraic stacks using the same approach after a digression into homotopical descent theory (which generalizes readily to the approach taken in Toen-Vezzosi (Homotopical Algebraic Geometry).
The case of schemes is given a more general treatment in a fixed "geometric context", which is a category with a grothendieck topology and a fixed class of morphisms compatible with it. A scheme is then simply a "geometric variety" in the "algebro-geometric context", which is CRing^op equipped with the etale topology, where the fixed class of morphisms is the class of smooth morphisms of affines (morphisms corresponding to smooth morphisms in CRing).
-
But this is just dressing up the "old-fashioned" ringed space definition in fancy language, so is there any purpose to it beyond obfuscation of the geometric origins of the subject? – BCnrd May 30 2010 at 22:53
Harry, I won't pass judgement on HAG since I know nothing about it, but I am suspicious of value of def'ns which don't allow to do something hard to express without them. For alg. space/stacks, the "proof of concept" is seen geometrically by accepting schemes as known and considering functors or fibered categories which admit "covering" by scheme for suitable topology: similar to atlas def'n of manifold. I don't know any advantage for alg. spaces/stacks which is attained by less geometric def'ns; for many serious foundational proofs one reduces to thms for (non-affine) schemes anyway. – BCnrd May 31 2010 at 0:00
Well, at least in the context of HAG (the tiny part of it that I've read), one doesn't have the option of using actual geometric things like locally ringed spaces, since one is not working with CRings. One instead works with an arbitrary symmetric monoidal model category, which forces one to define things like flatness, faithfulness, smoothness, unramifiedness, finite presentation, etc. directly by their functorial properties. I'm no expert, and I don't know about the value of it to algebraic geometers, so I'll defer to your expertise on that question. – Harry Gindi May 31 2010 at 0:15
@BCnrd, I am certainly not an expert either, but, I don't think Toen's definition is meant to REPLACE that of a scheme, but rather, translate it into a different language- that of category theory? Why? Because you can then "categorify it"- so you can, in some sense, talk about "schemes up to homotopy". Also, as Harry points out with Geometric contexts, since category theory is a universal language, such a definition allows you to extend ideas of algebraic geometry to other fields (e.g. topology and differential geometry). – David Carchedi May 31 2010 at 0:17
@BCnrd: In HAG II, T-V try out the theory with a notion of complicial algebraic geometry, where SSets are replaced with Complicial sets, and Brave New Algebraic geometry, where SSets are replaced with spectra (in the algebraic topology sense). These are (according to T-V) very interesting and very different from the ordinary and simplicial (derived) cases. – Harry Gindi May 31 2010 at 0:41
If $B$ is a site, then the presheaf category $\hat{B} = Set^{B^{op}}$ is also a site and the topology restricts to the one on $B$. See this related question.
If we take $B$ to be the dual of the category of rings with the Zariski topology ($R_i \to R$ is a covering iff the ring morphisms $R \to R_i$ are localizations at elements $f_i \in R$ such that the $f_i$ generate the unit ideal), consider the full subcategory of $\hat{B}$ consisting of all sheaves which are covered by representable functors, then we get the category of schemes with the Zariski topology.
Also if $B$ is the category of open subsets in some euclidean space, you can produce the category of manifolds.
Basically, this construction allows you to build global objects out of local models.
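To make the covering condition concrete, here is a standard illustration (my example, not part of the original answer): the localizations of $\mathbb{Z}$ away from 2 and away from 3 form a Zariski covering family, because 2 and 3 generate the unit ideal.

```latex
% A Zariski covering in CommRing^op: the localization maps
%   Z -> Z[1/2]   and   Z -> Z[1/3]
% form a covering family, since (2, 3) = (1), e.g. 1 = 3 - 2.
\[
  \Bigl\{\, \operatorname{Spec}\mathbb{Z}[\tfrac{1}{2}] \to \operatorname{Spec}\mathbb{Z},\;
            \operatorname{Spec}\mathbb{Z}[\tfrac{1}{3}] \to \operatorname{Spec}\mathbb{Z} \,\Bigr\},
  \qquad (2,3) = (1).
\]
```

Every prime of $\mathbb{Z}$ survives in at least one chart: $(2)$ only in $\operatorname{Spec}\mathbb{Z}[\tfrac13]$, $(3)$ only in $\operatorname{Spec}\mathbb{Z}[\tfrac12]$, and all others in both.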
-
This isn't really a "characterization" of functors represented by schemes: as you know, it's just the usual definition of a scheme recast in more categorical language. The answer to question #2 is, sadly, "no" (as far as anyone knows). The raison d'etre for the theory of algebraic spaces is that there a characterization really is possible (assuming mild finiteness hypotheses, say); see my comments on David's answer above. – BCnrd May 31 2010 at 12:42
? "This isn't really a "characterization" of functors represented by schemes: as you know, it's just the usual definition of a scheme recast in more categorical language." Yeah, and that was the question. And it does provide a construction without locally ringed spaces. – Martin Brandenburg May 31 2010 at 13:05
This is a nice answer to part 1 of the question, but I think BCnrd is pointing out that this doesn't answer part 2, since this answer takes "the Zariski topology" as part of the data, presumably defined in the classical way, hence in the language of commutative rings. – Peter LeFanu Lumsdaine May 31 2010 at 14:01
@Martin: when one asks for a "characterization" of a property it is desired to give something not trivially equivalent to the initial definition (cf. Peter's comment). Could choose broader intent, but of limited use (e.g., is your "characterization" useful to represent Hilbert functors?). Was a huge problem in 1960's to give a characterization of functors represented by schemes. Grothendieck discovered necessary conditions (fpqc descent, algebraization, etc.), essential in Artin's solution via alg. spaces (e.g., new representability proofs for Hilb, Pic, etc., beyond Grothendieck's results). – BCnrd May 31 2010 at 14:45
The answer to the main question is undoubtedly "yes". I think there are going to be two lots of literature about this, one coming direct from the Grothendieck school (probably somewhere in the Demazure-Gabriel book), and another from the category-theorists, where I remember some work of Cole that puts the construction in a more general context. (I'm lying when I say "I remember"; I was certainly told about this at one point and filed away the information mentally.) I think the answer to Q1 is "yes and foundational", to Q2 is that representable functors are so well studied that the information is available, but being a scheme is rather subtle in practical terms. The connection with the flat topology here is well known.
As for Q3, even more out of my depth here.
-
The answer that comes as "classifying topos of local rings" will doubtless be resisted by geometers; but see Mac Lane-Moerdijk Ch. 8 for all that. What I was alluding to can be traced in Johnstone, Topos Theory, from the index entry on "local ring". Basically adding in "with local ring homomorphisms", in passing from commutative rings to commutative local rings, is a step with general categorical meaning. But the work of Julian Cole referenced there seems ultimately unpublished. (Be careful what you wish for in the "purely categorical"!) Please can we have more expositions of SGA? – Charles Matthews May 31 2010 at 9:48
The answer to 1. is yes. The answer to 2.) is also yes. Look here: http://rigtriv.wordpress.com/2008/07/16/representable-functors/
For 3.) A "strong" algebraic (Artin) stack is a "geometric stack" on the category of affine schemes=opposite category of commutative rings equipped with the etale topology. http://ncatlab.org/nlab/show/geometric+stack. In other words, it is a pseudofunctor (or fibred category) obtained by stackifying a strict 2-functor of the form $Hom(blank,G)$, where $G$ is a groupoid object in schemes (so technically to view these as geometric stacks, you should take your site to be all schemes, not just affine ones). There are some subtleties however:
*You need that the source and target maps of the groupoid $G$ are smooth maps of schemes.
** Often, Artin stack means instead the stack associated to a groupoid object in algebraic spaces, rather than schemes.
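A standard illustration of this construction (my example, not from the original answer): a smooth affine group scheme $G$ acting on a scheme $X$ gives an action groupoid whose source and target maps are smooth.

```latex
% Action groupoid in schemes: source = projection, target = the action.
\[
  G \times X \;\rightrightarrows\; X,
  \qquad s(g,x) = x, \quad t(g,x) = g \cdot x.
\]
```

Both maps are smooth when $G$ is, so the stackification of $Hom(blank, G\times X \rightrightarrows X)$ is an Artin stack in the above sense, namely the quotient stack $[X/G]$; for $X = \operatorname{Spec} k$ one obtains the classifying stack $BG$.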
-
The link for #2 is disguised version of def'n of scheme. Artin gave non-tautological criteria on abstract functor which imply it's an alg. space (so can do "geometry" with it), whereas "criterion" in link amounts to constructing the scheme, so no deeper than Yoneda (i.e., useful but linguistics). There's no deep useful abstract criterion for representability by a scheme (except for trick with alg. spaces). It's a miracle that mild generalization to alg. spaces admits real sol'n to the problem. I find n-Lab impenetrable; is its version of #3 better than "old-fashioned" definition of Artin? – BCnrd May 30 2010 at 23:25
Well, I never claimed that #2 was that deep. It was merely a positive answer to the question "Is there a way to characterize those functors that actually come from schemes". My "#3" answer is equivalent to that of Artin's, just more compact. If you are given an atlas $X \to \mathbf{X}$ of an algebraic stack, then, the groupoid $G$ is $X \times_{\mathbf{X}} X \rightrightarrows X$. Conversely, given $G$, $G_0 \to G$ gives an atlas $G_0 \to St(Hom(blank,G))$. – David Carchedi May 30 2010 at 23:56
@BCnrd: As for #2, now I see what you mean. I guess, it technically is an answer, but, maybe not one that "really" has something to do with the functor. A more intrinsic one would be preferable, I agree. As for #3: See userpage.fu-berlin.de/~nhoffman/allahabad.pdf Definition 3.6. They define Artin stacks the same way as me. By the way, weak pullback of "two copies of" $G_0 \to G$ for any groupoid scheme is $G_1$- the scheme of arrows. Stackification commutes with finite limits, so this implies that $G_0 \times_{BG} G_0$ is $G_1$, hence a scheme. – David Carchedi May 31 2010 at 0:27
@BCnrd: Aha! So, I guess there's two variants of Artin stacks in the literature and the one that I've described is slightly more strong. What you are saying amounts to just asking for groupoid objects in algebraic spaces, rather than schemes. I'll EDIT. – David Carchedi May 31 2010 at 1:13
Thanks, David! Re the n-cat cafe: the material is organized there so that every page cites every other page, which makes it difficult (but not impossible) to get anything out of it. – algori May 31 2010 at 17:01
http://medlibrary.org/medwiki/Zeeman_effect
Zeeman effect
The Zeeman effect, named after the Dutch physicist Pieter Zeeman, is the effect of splitting a spectral line into several components in the presence of a static magnetic field. It is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric field. Also similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden (in the dipole approximation), as governed by the selection rules.
Since the distance between the Zeeman sub-levels is a function of the magnetic field, this effect can be used to measure the magnetic field, e.g. that of the Sun and other stars or in laboratory plasmas. The Zeeman effect is very important in applications such as nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, magnetic resonance imaging (MRI) and Mössbauer spectroscopy. It may also be utilized to improve accuracy in atomic absorption spectroscopy. A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect.[1]
When the spectral lines are absorption lines, the effect is called inverse Zeeman effect.
Zeeman splitting of the 5s level of Rb-87, including fine structure and hyperfine structure splitting. Here F = J + I, where I is the nuclear spin. (for Rb-87, I = 3/2)
Nomenclature
Historically, one distinguishes between the "normal" and an anomalous Zeeman effect: the anomalous effect appears on transitions where the net spin of the electrons is not 0, the number of Zeeman sub-levels being even instead of odd when an odd number of electrons is involved. It was called "anomalous" because the electron spin had not yet been discovered, and so there was no good explanation for it at the time that Zeeman observed the effect.
At higher magnetic fields the effect ceases to be linear. At even higher field strength, when the strength of the external field is comparable to the strength of the atom's internal field, electron coupling is disturbed and the spectral lines rearrange. This is called the Paschen-Back effect.
In the modern scientific literature, these terms are rarely used, with a tendency to use just the "Zeeman effect".
Theoretical presentation
The total Hamiltonian of an atom in a magnetic field is
$H = H_0 + V_M,$
where $H_0$ is the unperturbed Hamiltonian of the atom, and $V_M$ is perturbation due to the magnetic field:
$V_M = -\vec{\mu} \cdot \vec{B},$
where $\vec{\mu}$ is the magnetic moment of the atom. The magnetic moment consists of the electronic and nuclear parts; however, the latter is many orders of magnitude smaller and will be neglected here. Therefore,
$\vec{\mu} = -\mu_B g \vec{J}/\hbar,$
where $\mu_B$ is the Bohr magneton, $\vec{J}$ is the total electronic angular momentum, and $g$ is the Landé g-factor. The operator of the magnetic moment of an electron is a sum of the contributions of the orbital angular momentum $\vec L$ and the spin angular momentum $\vec S$, with each multiplied by the appropriate gyromagnetic ratio:
$\vec{\mu} = -\mu_B (g_l \vec{L} + g_s \vec{S})/\hbar,$
where $g_l = 1$ and $g_s \approx 2.0023192$ (the latter is called the anomalous gyromagnetic ratio; the deviation of the value from 2 is due to Quantum Electrodynamics effects). In the case of the LS coupling, one can sum over all electrons in the atom:
$g \vec{J} = \left\langle\sum_i (g_l \vec{l_i} + g_s \vec{s_i})\right\rangle = \left\langle (g_l\vec{L} + g_s \vec{S})\right\rangle,$
where $\vec{L}$ and $\vec{S}$ are the total orbital momentum and spin of the atom, and averaging is done over a state with a given value of the total angular momentum.
If the interaction term $V_M$ is small (less than the fine structure), it can be treated as a perturbation; this is the Zeeman effect proper. In the Paschen-Back effect, described below, $V_M$ exceeds the LS coupling significantly (but is still small compared to $H_{0}$). In ultrastrong magnetic fields, the magnetic-field interaction may exceed $H_0$, in which case the atom can no longer exist in its normal meaning, and one talks about Landau levels instead. There are, of course, intermediate cases which are more complex than these limit cases.
Weak field (Zeeman effect)
If the spin-orbit interaction dominates over the effect of the external magnetic field, $\scriptstyle \vec L$ and $\scriptstyle \vec S$ are not separately conserved, only the total angular momentum $\scriptstyle \vec J = \vec L + \vec S$ is. The spin and orbital angular momentum vectors can be thought of as precessing about the (fixed) total angular momentum vector $\scriptstyle \vec J$. The (time-)"averaged" spin vector is then the projection of the spin onto the direction of $\scriptstyle \vec J$:
$\vec S_{avg} = \frac{(\vec S \cdot \vec J)}{J^2} \vec J$
and for the (time-)"averaged" orbital vector:
$\vec L_{avg} = \frac{(\vec L \cdot \vec J)}{J^2} \vec J.$
Thus,
$\langle V_M \rangle = \frac{\mu_B}{\hbar} \vec J(g_L\frac{\vec L \cdot \vec J}{J^2} + g_S\frac{\vec S \cdot \vec J}{J^2}) \cdot \vec B.$
Using $\scriptstyle \vec L = \vec J - \vec S$ and squaring both sides, we get
$\vec S \cdot \vec J = \frac{1}{2}(J^2 + S^2 - L^2) = \frac{\hbar^2}{2}[j(j+1) - l(l+1) + s(s+1)],$
and: using $\scriptstyle \vec S = \vec J - \vec L$ and squaring both sides, we get
$\vec L \cdot \vec J = \frac{1}{2}(J^2 - S^2 + L^2) = \frac{\hbar^2}{2}[j(j+1) + l(l+1) - s(s+1)].$
Combining everything and taking $\scriptstyle J_z = \hbar m_j$, we obtain the magnetic potential energy of the atom in the applied external magnetic field,
$\begin{align} V_M &= \mu_B B m_j \left[ g_L\frac{j(j+1) + l(l+1) - s(s+1)}{2j(j+1)} + g_S\frac{j(j+1) - l(l+1) + s(s+1)}{2j(j+1)} \right]\\ &= \mu_B B m_j \left[1 + (g_S-1)\frac{j(j+1) - l(l+1) + s(s+1)}{2j(j+1)} \right], \\ &= \mu_B B m_j g_j \end{align}$
where the quantity in square brackets is the Landé g-factor gJ of the atom ($g_L = 1$ and $g_S \approx 2$) and $m_j$ is the z-component of the total angular momentum. For a single electron above filled shells $s = 1/2$ and $j = l \pm s$, the Landé g-factor can be simplified into:
$g_j = 1 \pm \frac{g_S-1}{2l+1}$
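The general bracketed expression and the simplified single-electron form can be checked against each other numerically. The sketch below is illustrative and uses the idealized $g_S = 2$ rather than the text's 2.0023; exact rational arithmetic keeps the classic values (2, 2/3, 4/3, ...) exact.

```python
# Landé g-factor from the bracketed formula above, plus the simplified
# single-electron form g_j = 1 ± (g_S - 1)/(2l + 1) for s = 1/2, j = l ± 1/2.
from fractions import Fraction

G_L, G_S = Fraction(1), Fraction(2)  # idealized orbital and spin g-factors

def lande_g(j, l, s):
    j, l, s = Fraction(j), Fraction(l), Fraction(s)
    return (G_L * (j*(j+1) + l*(l+1) - s*(s+1))
            + G_S * (j*(j+1) - l*(l+1) + s*(s+1))) / (2*j*(j+1))

def lande_g_single(l, sign):
    """g_j for a single electron: s = 1/2, j = l + sign/2, sign = +1 or -1."""
    return 1 + sign * (G_S - 1) / (2*l + 1)

# The two expressions agree for a single electron above filled shells:
for l in range(4):
    for sign in (+1, -1):
        j = Fraction(2*l + sign, 2)
        if j > 0:
            assert lande_g(j, l, Fraction(1, 2)) == lande_g_single(l, sign)
```

Running it reproduces, for example, $g_J = 2$ for an $s_{1/2}$ electron and $g_J = 4/3$ for $p_{3/2}$.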
Example: Lyman alpha transition in hydrogen
The Lyman alpha transition in hydrogen in the presence of the spin-orbit interaction involves the transitions
$2P_{1/2} \to 1S_{1/2}$ and $2P_{3/2} \to 1S_{1/2}.$
In the presence of an external magnetic field, the weak-field Zeeman effect splits the 1S1/2 and 2P1/2 levels into 2 states each ($m_j = 1/2, -1/2$) and the 2P3/2 level into 4 states ($m_j = 3/2, 1/2, -1/2, -3/2$). The Landé g-factors for the three levels are:
$g_J = 2$ for $1S_{1/2}$ (j=1/2, l=0)
$g_J = 2/3$ for $2P_{1/2}$ (j=1/2, l=1)
$g_J = 4/3$ for $2P_{3/2}$ (j=3/2, l=1).
Note in particular that the size of the energy splitting is different for the different orbitals, because the gJ values are different. On the left, fine structure splitting is depicted. This splitting occurs even in the absence of a magnetic field, as it is due to spin-orbit coupling. Depicted on the right is the additional Zeeman splitting, which occurs in the presence of magnetic fields.
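Using the quoted g-factors, the weak-field sublevel shifts $V_M = \mu_B B\, m_j g_J$ can be tabulated directly. A small sketch, working in units of $\mu_B B$ with the exact rational $g_J$ values above:

```python
# Weak-field Zeeman shifts m_j * g_J for the Lyman-alpha levels,
# in units of μ_B·B, using the g_J values quoted in the text.
from fractions import Fraction

levels = {  # name: (g_J, allowed m_j values)
    "1S1/2": (Fraction(2),    [Fraction(m, 2) for m in (-1, 1)]),
    "2P1/2": (Fraction(2, 3), [Fraction(m, 2) for m in (-1, 1)]),
    "2P3/2": (Fraction(4, 3), [Fraction(m, 2) for m in (-3, -1, 1, 3)]),
}

shifts = {name: [gJ * m for m in mjs] for name, (gJ, mjs) in levels.items()}

# 2P3/2 splits into four equally spaced sublevels with spacing (4/3) μ_B·B,
# while the 1S1/2 spacing is 2 μ_B·B — the splittings differ level by level.
assert shifts["2P3/2"] == [Fraction(-2), Fraction(-2, 3), Fraction(2, 3), Fraction(2)]
```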
Strong field (Paschen-Back effect)
The Paschen-Back effect is the splitting of atomic energy levels in the presence of a strong magnetic field. This occurs when an external magnetic field is sufficiently large to disrupt the coupling between orbital ($\vec L$) and spin ($\vec S$) angular momenta. This effect is the strong-field limit of the Zeeman effect. When $s = 0$, the two effects are equivalent. The effect was named after the German physicists Friedrich Paschen and Ernst E. A. Back.[2]
When the magnetic-field perturbation significantly exceeds the spin-orbit interaction, one can safely assume $[H_{0}, S] = 0$. This allows the expectation values of $L_{z}$ and $S_{z}$ to be easily evaluated for a state $|\psi\rangle$. The energies are simply:
$E_{z} = \langle \psi| \left( H_{0} + \frac{B_{z}\mu_B}{\hbar}(L_{z}+g_{s}S_z) \right) |\psi\rangle = E_{0} + B_z\mu_B (m_l + g_{s}m_s).$
The above may be read as implying that the LS-coupling is completely broken by the external field. However $m_l$ and $m_s$ are still "good" quantum numbers. Together with the selection rules for an electric dipole transition, i.e., $\Delta s = 0, \Delta m_s = 0, \Delta l = \pm 1, \Delta m_l = 0, \pm 1$, this allows one to ignore the spin degree of freedom altogether. As a result, only three spectral lines will be visible, corresponding to the $\Delta m_l = 0, \pm 1$ selection rule. The splitting $\Delta E = B \mu_B \Delta m_l$ is independent of the unperturbed energies and electronic configurations of the levels being considered. Note that in general (if $s \ne 0$), these three components are actually groups of several transitions each, due to the residual spin-orbit coupling.
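The collapse onto three lines is easy to verify by brute force. A sketch with the idealized $g_s = 2$; the unperturbed level difference is set to zero since only the splitting matters:

```python
# Paschen-Back limit: E = E_0 + μ_B B (m_l + g_s m_s). Enumerate the
# electric-dipole-allowed 2p -> 1s transitions (Δm_s = 0, Δm_l = 0, ±1)
# and confirm they fall on exactly three lines separated by μ_B·B.
from itertools import product

g_s = 2.0          # idealized; m_s cancels out of the line positions anyway
muB_B = 1.0        # work in units of μ_B·B

def shift(m_l, m_s):
    return muB_B * (m_l + g_s * m_s)

p_states = list(product((-1, 0, 1), (-0.5, 0.5)))   # (m_l, m_s) for 2p
s_states = list(product((0,),       (-0.5, 0.5)))   # (m_l, m_s) for 1s

lines = {shift(*up) - shift(*lo)
         for up, lo in product(p_states, s_states)
         if up[1] == lo[1] and abs(up[0] - lo[0]) <= 1}

assert lines == {-1.0, 0.0, 1.0}   # three lines: Δm_l = -1, 0, +1
```

Because $\Delta m_s = 0$, the spin contribution cancels in every allowed transition, leaving only $\Delta m_l$.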
In general, one must now add spin-orbit coupling and relativistic corrections (which are of the same order, known as 'fine structure') as a perturbation to these 'unperturbed' levels. First order perturbation theory with these fine-structure corrections yields the following formula for the Hydrogen atom in the Paschen-Back limit:[3]
$E_{z+fs} = E_{z} + \frac{\alpha^2}{2 n^3} \left[ \frac{3}{4n} - \left( \frac{l(l+1) - m_l m_s}{l(l+1/2)(l+1) } \right)\right]$
Intermediate field for j = 1/2
In the magnetic dipole approximation, the Hamiltonian which includes both the hyperfine and Zeeman interactions is
$H = h A \vec I \cdot \vec J - \vec \mu \cdot \vec B$
$H = h A \vec I \cdot\vec J + \mu_B (g_J\vec J + g_I\vec I ) \cdot \vec B$
To arrive at the Breit-Rabi formula we will include the hyperfine structure (interaction between the electron's spin and the magnetic moment of the nucleus), which is governed by the quantum number $F \equiv |\vec F| = |\vec J + \vec I|$, where $\vec I$ is the spin angular momentum operator of the nucleus. Alternatively, the derivation could be done with $J$ only. The constant $A$ is known as the zero field hyperfine constant and is given in units of Hertz. $\mu_B$ is the Bohr magneton. $\hbar\vec J$ and $\hbar\vec I$ are the electron and nuclear angular momentum operators. $g_J$ and $g_F$ can be found via a classical vector coupling model or a more detailed quantum mechanical calculation to be:
$g_J = g_L\frac{J(J+1) + L(L+1) - S(S+1)}{2J(J+1)} + g_S\frac{J(J+1) - L(L+1) + S(S+1)}{2J(J+1)}$
$g_F = g_J\frac{F(F+1) + J(J+1) - I(I+1)}{2F(F+1)} + g_I\frac{F(F+1) - J(J+1) + I(I+1)}{2F(F+1)}$
As discussed, in the case of weak magnetic fields, the Zeeman interaction can be treated as a perturbation to the $|F,m_f \rangle$ basis. In the high field regime, the magnetic field becomes so large that the Zeeman effect will dominate, and we must use a more complete basis of $|I,J,m_I,m_J\rangle$ or just $|m_I,m_J \rangle$ since $I$ and $J$ will be constant within a given level.
To get the complete picture, including intermediate field strengths, we must consider eigenstates which are superpositions of the $|F,m_F \rangle$ and $|m_I,m_J \rangle$ basis states. For $J = 1/2$, the Hamiltonian can be solved analytically, resulting in the Breit-Rabi formula. Notably, the electric quadrupole interaction is zero for $L = 0$ ($J = 1/2$), so this formula is fairly accurate.
To solve this system, we note that at all times, the total angular momentum projection $m_F = m_J + m_I$ will be conserved. Furthermore, since $J = 1/2$, $m_J$ can take only the values $\pm 1/2$. Therefore, we can define a good basis as:
$|\pm\rangle \equiv |m_J = \pm 1/2, m_I = m_F \mp 1/2 \rangle$
We now utilize quantum mechanical ladder operators, which are defined for a general angular momentum operator $L$ as
$L_{\pm} \equiv L_x \pm iL_y$
These ladder operators have the property
$L_{\pm}|L,m_L \rangle = \sqrt{(L \mp m_L)(L \pm m_L +1)} |L,m_L \pm 1 \rangle$
as long as $m_L$ lies in the range $\{-L, \dots, L\}$ (otherwise, they return zero). Using the ladder operators $J_{\pm}$ and $I_{\pm}$, we can rewrite the Hamiltonian as
$H = h A I_z J_z + \frac{hA}{2}(J_+ I_- + J_- I_+) + \mu_B B(g_J J_z + g_I I_z)$
Now we can determine the matrix elements of the Hamiltonian:
$\langle \pm |H|\pm \rangle = -\frac{1}{4}hA + \mu_B B g_I m_F \pm \frac{1}{2} (hAm_F + \mu_B B (g_J-g_I))$
$\langle \pm |H| \mp \rangle = \frac{1}{2} hA \sqrt{(I + 1/2)^2 - m_F^2}$
Solving for the eigenvalues of this matrix, (as can be done by hand, or more easily, with a computer algebra system) we arrive at the energy shifts:
$\Delta E_{F=I\pm1/2} = -\frac{h \Delta W }{2(2I+1)} + \mu_B g_I m_F B \pm \frac{h \Delta W}{2}\sqrt{1 + \frac{2m_F x }{I+1/2}+ x^2 }$
$x \equiv \frac{\mu_B B(g_J - g_I)}{h \Delta W} \quad \quad \Delta W= A \left(I+\frac{1}{2}\right)$
where $\Delta W$ is the splitting (in units of Hz) between two hyperfine sublevels in the absence of magnetic field $B$,
$x$ is referred to as the 'field strength parameter' (Note: for $m = -(I+1/2)$ the square root is an exact square, and should be interpreted as $+(1-x)$). This equation is known as the Breit-Rabi formula and is useful for systems with one valence electron in an $s$ ($J = 1/2$) level.[4][5]
Note that the index $F$ in $\Delta E_{F=I\pm1/2}$ should be considered not as the total angular momentum of the atom but as the asymptotic total angular momentum. It is equal to the total angular momentum only if $B=0$; otherwise the eigenvectors corresponding to different eigenvalues of the Hamiltonian are superpositions of states with different $F$ but equal $m_F$ (the only exceptions are $|F=I+1/2,m_F=\pm F \rangle$).
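The Breit-Rabi formula can be checked against a direct diagonalization of the Hamiltonian $H = hA\,\vec I\cdot\vec J + \mu_B(g_J\vec J + g_I\vec I)\cdot\vec B$ in the $|\pm\rangle$ basis. The sketch below does this for $J = 1/2$, $I = 3/2$ (the Rb-87 ground-state configuration mentioned above); the numerical values of $hA$, $\mu_B B$, $g_J$, $g_I$ are arbitrary test inputs chosen so that $x < 1$, not physical constants.

```python
# Verify the Breit-Rabi closed form against explicit diagonalization of
# H = hA (I·J) + μ_B B (g_J J_z + g_I I_z) in the |±> = |m_J = ±1/2, m_I = m_F ∓ 1/2>
# basis, for J = 1/2 and I = 3/2 (8 states in total).
import math

hA, muB_B, gJ, gI = 1.0, 0.7, 2.0, -0.001   # arbitrary test values
I = 1.5

def diag(mJ, mI):
    """<m_J, m_I| H |m_J, m_I>: the I_z J_z hyperfine term plus the Zeeman term."""
    return hA*mI*mJ + muB_B*(gJ*mJ + gI*mI)

exact = []
for twice_mF in range(-int(2*I + 1), int(2*I + 1) + 1, 2):
    mF = twice_mF / 2
    if abs(mF) == I + 0.5:                 # stretched states are already eigenstates
        exact.append(diag(math.copysign(0.5, mF), math.copysign(I, mF)))
        continue
    a = diag(+0.5, mF - 0.5)               # <+|H|+>
    b = diag(-0.5, mF + 0.5)               # <-|H|->
    off = 0.5*hA*math.sqrt((I + 0.5)**2 - mF**2)   # <+|H|-> from the ladder terms
    disc = math.sqrt(((a - b)/2)**2 + off**2)
    exact += [(a + b)/2 - disc, (a + b)/2 + disc]

dW = hA*(I + 0.5)                          # zero-field hyperfine splitting
x = muB_B*(gJ - gI)/dW                     # field-strength parameter (x < 1 here)
br = []
for twice_mF in range(-int(2*I + 1), int(2*I + 1) + 1, 2):
    mF = twice_mF / 2
    for sign in (+1, -1):
        if abs(mF) == I + 0.5 and sign == -1:
            continue                       # no F = I - 1/2 state at stretched m_F
        br.append(-dW/(2*(2*I + 1)) + gI*muB_B*mF
                  + sign*(dW/2)*math.sqrt(1 + 2*mF*x/(I + 0.5) + x**2))

assert len(br) == len(exact) == 8
assert all(abs(p - q) < 1e-12 for p, q in zip(sorted(br), sorted(exact)))
```

At $B = 0$ the eight levels collapse into the two hyperfine manifolds $F = I \pm 1/2$ separated by $h\Delta W$, as the formula requires.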
Applications
Astrophysics
Zeeman effect on a sunspot spectral line
Solar magnetogram
George Ellery Hale was the first to notice the Zeeman effect in solar spectra, indicating the existence of strong magnetic fields in sunspots. Such fields can be quite high, on the order of 0.1 tesla or higher. Today, the Zeeman effect is used to produce magnetograms showing the variation of the magnetic field on the Sun.
Laser Cooling
The Zeeman effect is utilized in many laser cooling applications such as the magneto-optical trap and the Zeeman slower.
References
2. Paschen, F., Back, E.: Liniengruppen magnetisch vervollständigt. Physica 1, 261–273 (1921).
3. Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. p. 247. ISBN 0-13-111892-7. OCLC 40251748.
4. Woodgate, Elementary Atomic Structure, section 9.
5. First appeared in G. Breit and I. Rabi, Phys. Rev. 38, 2082 (1931).
Historical
• Condon, E. U.; G. H. Shortley (1935). The Theory of Atomic Spectra. Cambridge University Press. ISBN 0-521-09209-4. (Chapter 16 provides a comprehensive treatment, as of 1935.)
• Zeeman, P. (1897). "On the influence of Magnetism on the Nature of the Light emitted by a Substance". Phil. Mag. 43: 226.
• Zeeman, P. (1897). "Doubles and triplets in the spectrum produced by external magnetic forces". Phil. Mag. 44: 55.
• Zeeman, P. (11 February 1897). "The Effect of Magnetisation on the Nature of Light Emitted by a Substance". Nature 55 (1424): 347. Bibcode:1897Natur..55..347Z. doi:10.1038/055347a0.
Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Zeeman effect", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Zeeman_effect
http://www.physicsforums.com/showthread.php?t=591261
Physics Forums
Rate of entropy generation (can it be negative?)
1. The problem statement, all variables and given/known data
The Clausius inequality combined with the definition of entropy yields an inequality known as the increase of entropy principle, expressed as
Sgen ≥ 0
where Sgen is the entropy generated during a process.
2. Relevant equations
Sgen ≥ 0
3. The attempt at a solution
I know that Sgen cannot be negative, but can the rate of Sgen, $\dot{S}_{gen}$ be negative?
I do believe. You mean the entropy generates fast at first and slow later. Why not.
Quote by dikmikkel I do believe. You mean the entropy generates fast at first and slow later. Why not.
That was my thought but I wasn't sure as this whole entropy thing is rather new to me.
I reasoned that if a car going 50mph slows down to 40mph, it still has a positive velocity, but the velocity derivative (acceleration) is negative. Likewise, Sgen's time rate of change can be negative although entropy generated overall can only be positive. Test on Friday... I hope I'm right!
Thermodynamics (Equilibrium) neither entertains (asks) nor answers questions concerning the rates of processes. Rates of processes is irrelevant to find answers to questions in thermodynamics. Once the initial and final states of a system are defined, the process connecting them could be of any rate (could be infinitely fast/slow), the result of the entropy change will be the same.
It would perhaps be better to write the equation as delta S universe greater than or equal to zero instead of Sgen greater than or equal to zero, to reduce possible ambiguity and misinterpretations.
Quote by Radhakrishnam Thermodynamics (Equilibrium) neither entertains (asks) nor answers questions concerning the rates of processes. Rates of processes is irrelevant to find answers to questions in thermodynamics. Once the initial and final states of a system are defined, the process connecting them could be of any rate (could be infinitely fast/slow), the result of the entropy change will be the same. It would perhaps be better to write the equation as delta S universe greater than or equal to zero instead of Sgen greater than or equal to zero, to reduce possible ambiguity and misinterpretations.
I think what Radhakrishnam is alluding to here is that the constrained form of the Clausius inequality described by you in the OP applies to a closed, adiabatic system (i.e., an isolated system). The entire universe can be regarded as a closed, adiabatic system. The more general form of the Clausius inequality, applicable to closed (but not necessarily adiabatic) systems is dS > dq/T.
Quote by Radhakrishnam Thermodynamics (Equilibrium) neither entertains (asks) nor answers questions concerning the rates of processes. Rates of processes is irrelevant to find answers to questions in thermodynamics.
That is true. Fair enough. But aside from that, I want to know what happens along the way, not just at the endpoints. If it's not under the umbrella of thermodynamics, in what area of science may my question be asked?
Quote by Chestermiller I think what Radhakrishnam is alluding to here is that the constrained form of the Clausius inequality described by you in the OP applies to a closed, adiabatic system (i.e., an isolated system). The entire universe can be regarded as a closed, adiabatic system. The more general form of the Clausius inequality, applicable to closed (but not necessarily adiabatic) systems is dS > dq/T.
Entropy Generation Definition:
Entropy generated (Sgen) during a process is a measure of the irreversibilities of that process.
Lets say you have a device that is rougher in one area than another and when the parts move in the device, more friction occurs as the parts make contact with the rough area.
The rate of entropy generation would be positive through this rough patch because the device introduces more irreversibility (friction) here. Then as your parts go back to moving smoothly and they are not touching the rough area, the friction subsides and the rate of entropy generation would be negative.
Does this make sense? This would make it sound like the rate of entropy generation could be negative.
Quote by JJBladester: That is true. Fair enough. But aside from that, I want to know what happens along the way, not just at the endpoints. If it's not under the umbrella of thermodynamics, in what area of science may my question be asked?

Since we can choose the initial and final states as we please, it is possible to get the information at every point of the path (remember, it must be a reversible path). Along non-reversible paths the system will not be in a state of equilibrium and the properties of the system accordingly will be ill-defined. Since your question concerns entropy, if and when you understand entropy well, you will find your question does not hold good - the question would disappear.

Quote: Entropy Generation Definition: Entropy generated (Sgen) during a process is a measure of the irreversibilities of that process. [...] This would make it sound like the rate of entropy generation could be negative.
Thermodynamics gives whether a given process under given conditions is possible or impossible, in principle. when a process is known to be possible, the rate at which it is possible to carry out that process in practice depends upon the kinetics which takes into account the presence of catalysts, for example, etc. But that would not help in finding out the rate of generation of entropy.
Thermodynamics is much simpler to understand and appreciate than what it is projected to be in many books.
Quote by JJBladester: That is true. Fair enough. But aside from that, I want to know what happens along the way, not just at the endpoints. If it's not under the umbrella of thermodynamics, in what area of science may my question be asked? [...]
Here are some suggestions on how to begin to get a handle on what you are looking for:
1. Bird, Steward, and Lightfoot "Transport Phenomena" has a homework problem that looks at entropy generation in non-equilibrium continua.
2. Look up non-equilibrium thermodynamics in Wikipedia
3. Get a book on Statistical Thermodynamics, and get an idea how entropy is expressed in terms of the total number of quantum mechanical states available. Then start looking at how the molecular dynamics guys use statistical thermo to quantify entropy (and other thermodynamic entities) in systems that are not at equilibrium.
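For a concrete feel before diving into those references, here is a small numerical sketch (illustrative numbers of my own, not from the thread): a lumped hot block cooling into a bath. The instantaneous generation rate $\dot{S}_{gen} = \dot{Q}(1/T_c - 1/T_h)$ stays nonnegative at every instant, yet it decays in time — so the rate of generation falls even though the cumulative $S_{gen}$ only grows:

```python
# Lumped hot block (heat capacity C, temperature Th) cooling into a bath at Tc.
# Heat flow Q_dot = h*A*(Th - Tc); entropy generation rate for the transfer:
#     S_dot_gen = Q_dot * (1/Tc - 1/Th) >= 0   whenever Th >= Tc.
h_A, C, Tc = 5.0, 100.0, 300.0        # W/K, J/K, K  (illustrative values)
Th, dt = 400.0, 1.0                   # K, s
S_gen, rates = 0.0, []
for _ in range(5000):
    Q_dot = h_A * (Th - Tc)               # W, block -> bath
    rate = Q_dot * (1.0 / Tc - 1.0 / Th)  # W/K, never negative here
    rates.append(rate)
    S_gen += rate * dt
    Th -= Q_dot * dt / C                  # explicit Euler cooling step

print(min(rates), rates[0], rates[-1], S_gen)
```

The rate decreases monotonically toward zero as $T_h \to T_c$, but it never dips below zero: for pure heat transfer the second law keeps $\dot{S}_{gen} \ge 0$ at every instant, even though $\dot{S}_{gen}$ itself can fall over time.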
http://mathoverflow.net/questions/98187/eigenfunction-of-heat-operator
## eigenfunction of heat operator.
Let $\Delta$ be the usual Laplacian on $\mathbb R^n$, $\Delta=-\sum_{i=1}^n\frac{\partial^2}{\partial x_i^2}$. Consider the heat operator $H_t=e^{-t\Delta}$. Is there an eigenfunction of $H_t$ which is not an eigenfunction of $\Delta$?
-
## 1 Answer
Neither operator has an eigenfunction in $L^2({\mathbb R}^n)$. But if you replace ${\mathbb R}^n$ by a bounded domain $\Omega$ with a smooth boundary, you may consider the Heat equation with the Dirichlet boundary condition $u=0$ on $\partial\Omega$. Then $e^{-\Delta}$ and $\Delta^{-1}$ are compact and self-adjoint on $L^2(\Omega)$, thus can be diagonalized. In addition $e^{-\Delta}$ is a contraction in $L^2(\Omega)$.
To see that every eigenfunction of $e^{-t\Delta}$ is an eigenfunction of $\Delta$, you may use the formula $$t\Delta=\sum_{m=1}^\infty\frac1m(I-e^{-t\Delta})^m.$$ This is valid over the domain $D(\Delta)$. If $e^{-t\Delta}u=\lambda u$, then you obtain $$t\Delta u=\sum_{m=1}^\infty\frac1m(1-\lambda)^m u=\left(\log\frac1\lambda\right) u.$$
-
Do you mean $\Delta$ rather than $\delta$ in the second paragraph? – Jon May 28 at 13:07
Shouldn't that sum go from $m=1$ and add up to $-\textrm{ln}(\lambda)$? The eigenvectors of $\Delta$ and $e^{t\Delta}$ should hopefully match, but not necessarily so the corresponding eigenvalues. – Emilio Pisanty May 28 at 13:38
(terrific answer, otherwise!) – Emilio Pisanty May 28 at 13:38
I apologize for the typos and I edit ... – Denis Serre May 28 at 15:12
I am not talking about $L^2$-eigenfunctions. There are $L^\infty$-eigenfunctions of $\Delta$ on $\mathbb R^n$: $x\mapsto e^{-i(x_1y_1+\cdots+x_ny_n)}$. There are eigenfunctions in some other $L^p$ spaces depending on $n$. Any eigenfunction of $\Delta$ on $\mathbb R^n$ is clearly an eigenfunction of $e^{-t\Delta}$. I am asking if there is an eigenfunction of $e^{-t\Delta}$ on $\mathbb R^n$ for a fixed $t>0$ which is not an eigenfunction of $\Delta$. – spr May 29 at 6:48
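A quick symbolic sanity check of the claim in the last comment (a sketch of my own, assuming sympy is available; $n = 2$ for brevity):

```python
import sympy as sp

# Claim from the comment, checked symbolically for n = 2:
# f(x) = exp(-i(x1*y1 + x2*y2)) is a bounded eigenfunction of
# Delta = -(d^2/dx1^2 + d^2/dx2^2) with eigenvalue y1^2 + y2^2
# (hence of e^{-t*Delta}, with eigenvalue e^{-t(y1^2 + y2^2)}).
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
f = sp.exp(-sp.I * (x1*y1 + x2*y2))
Delta_f = -(sp.diff(f, x1, 2) + sp.diff(f, x2, 2))
assert sp.simplify(Delta_f - (y1**2 + y2**2) * f) == 0
```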
http://crypto.stackexchange.com/questions/3669/why-pairing-based-crypto-is-suitable-for-some-particular-cryptographic-primitive?answertab=active
# Why is pairing-based crypto suitable for some particular cryptographic primitives?
Why is pairing-based crypto so widely used in some special crypto primitives such as ID-based crypto and variations of standard signatures? I mean, taking it as deep as possible, what makes it suitable for those schemes while other schemes do not fit?
-
– mikeazo♦ Aug 28 '12 at 14:23
## 1 Answer
Crypto based on cyclic groups is (at a very high level) about "hiding" things "in the exponent" and then manipulating those values as they live in the exponent. As an example, in a cyclic group $\langle g\rangle$, you can "hide" a random value $x$ as $g^x$.
Without a bilinear pairing, all you can really do "in the exponent" are linear/affine (degree-1) combinations of these hidden values. That is, given $g^{x_1}, \ldots, g^{x_n}$, you can obtain $g^{a_0 + a_1 x_1 + \cdots + a_n x_n}$ for known coefficients $a_i$.
With a bilinear pairing, you can do degree-2 combinations. This is huge, it allows you to "multiply hidden values together in the exponent". This extra expressivity is what lends itself to a wider variety of cryptographic primitives, like identity-based and functional encryption, etc.
There are other tools like lattices that also have a lot of algebraic structure. The capabilities of lattices are somewhat incomparable to those of groups with bilinear pairings. But more of these "functional encryption" applications are now being achieved using lattices as well. So none of these applications are probably unique to bilinear pairings, it is a bit of a historical accident that we have seen an explosion in techniques and applications of bilinear pairings.
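To see bilinearity in the simplest possible setting, here is a toy model of my own (purely illustrative and completely insecure — real pairings are built from the Weil/Tate pairings on elliptic curves): take $G = G_T = (\mathbb{Z}_n, +)$ with generator $g = 1$, so "$g^a$" is just $a \bmod n$, and define $e(a, b) = ab \bmod n$.

```python
# Toy additive "pairing", for bilinearity intuition ONLY -- completely
# insecure, and not how real pairings (Weil/Tate on elliptic curves) work.
n = 101                      # hypothetical small modulus

def e(a, b):
    return (a * b) % n

# Bilinearity: e(g^(a1+a2), g^b) = e(g^a1, g^b) "+" e(g^a2, g^b) in G_T.
a1, a2, b = 17, 23, 40
assert e((a1 + a2) % n, b) == (e(a1, b) + e(a2, b)) % n

# "Multiplying hidden values in the exponent": from hidings of x1 and x2
# the pairing yields a hiding of the degree-2 product x1*x2 in G_T.
x1, x2 = 5, 9
print(e(x1, x2))             # -> 45, i.e. x1*x2 mod n
```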
-
And taking this deeper, what actually is a bilinear pairing? I mean, OK, it's a mapping from two groups into another one. Can this mapping be formulated as a non-linear function? What is this $e: G \times G \to G_1$ that all books describe? How can I formulate $e$ more precisely? – curious Sep 24 '12 at 18:30
Not sure at what level you're asking. Maybe reading about the Weil pairing and Tate pairing will give you some idea of how these things work. – Mikero Sep 25 '12 at 2:47
I am asking: if I have a bilinear map $G_1 \times G_1 \to G_2$, what defines where in $G_2$ the elements of $G_1$ are mapped? Is it a function? In all scientific papers that use bilinear maps they just mention there is a map, but not what exactly the map is and how it is defined... – curious Oct 10 '12 at 21:40
Can you give me an example of a degree-2 combination? I.e. given $g^{x_1}, \ldots, g^{x_n}$, as a consequence of bilinearity can I have $g^{x_1 x_2}$? – curious Oct 26 '12 at 14:14
http://unapologetic.wordpress.com/2008/10/10/the-exponential-differential-equation/?like=1&source=post_flair&_wpnonce=4ca8fc8c32
# The Unapologetic Mathematician
## The Exponential Differential Equation
So we long ago defined the exponential function $\exp$ to be the inverse of the logarithm, and we showed that it satisfied the exponential property. Now we’ve got another definition, using a power series, which is its Taylor series at ${0}$. And we’ve shown that this definition also satisfies the exponential property.
But what really makes the exponential function what it is? It’s the fact that the larger the function’s value gets, the faster it grows. That is, the exponential function satisfies the equation $f(x)=f'(x)$. We already knew this about $\exp$, but there we ultimately had to use the fact that we defined the logarithm $\ln$ to have a specified derivative. Here we use this property itself as a definition.
This is our first “differential equation”, which relates a function to its derivative(s). And because differentiation works so nicely for power series, we can use them to solve differential equations.
So let’s take our equation as a case in point. First off, any function $f$ that satisfies this equation must by definition be differentiable. And then, since it’s equal to its own derivative, this derivative must itself be differentiable, and so on. So at the very least our function must be infinitely differentiable. Let’s go one step further and just assume that it’s analytic at ${0}$. Since it’s analytic, we can expand it as a power series.
So we have some function defined by a power series around ${0}$:
$\displaystyle f(x)=\sum\limits_{k=0}^\infty a_kx^k$
We can easily take the derivative
$\displaystyle f'(x)=\sum\limits_{k=0}^\infty (k+1)a_{k+1}x^k$
Setting these two power series equal, we find that $a_0=1a_1$, $a_1=2a_2$, $a_2=3a_3$, and so on. In general:
$\displaystyle a_k=\frac{1}{k}a_{k-1}=\frac{1}{k}\frac{1}{k-1}a_{k-2}=\cdots=\frac{1}{k}\frac{1}{k-1}\cdots\frac{1}{3}\frac{1}{2}\frac{1}{1}a_0=\frac{a_0}{k!}$
And we have no restriction on $a_0$. Thus we come up with our series solution
$\displaystyle f(x)=\sum\limits_{k=0}^\infty\frac{a_0}{k!}x^k=a_0\sum\limits_{k=0}^\infty\frac{x^k}{k!}$
which is just $a_0=f(0)$ times the series definition of our exponential function $\exp$! If we set the initial value $f(0)=a_0=1$, then the unique solution to our equation is the function
$\displaystyle\exp(x)=\sum\limits_{k=0}^\infty\frac{x^k}{k!}$
which is our new definition of the exponential function. The differential equation motivates the series, and the series gives us everything else we need.
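The recurrence $a_k = \frac{1}{k}a_{k-1}$ can be turned directly into a short numerical check (a sketch of my own, not part of the post):

```python
import math

# Build the solution from the recurrence a_k = a_{k-1}/k with a_0 = f(0) = 1,
# and compare the partial sums against the library exponential.
def exp_series(x, terms=30):
    a, total = 1.0, 0.0            # a holds the current coefficient a_k
    for k in range(terms):
        total += a * x**k
        a /= (k + 1)               # a_{k+1} = a_k / (k+1), i.e. a_k = 1/k!
    return total

print(exp_series(1.0), math.e)     # the two agree to machine precision
```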
## 3 Comments »
1. [...] Homogeneous Linear Equations with Constant Coefficients Now that we solved one differential equation, let’s try a wider class: “first-degree homogeneous linear equations with constant [...]
Pingback by | October 10, 2008 | Reply
2. [...] we can, and we’ll use the same techniques as we did before. We again find that any solution must be infinitely differentiable, and so we will assume that [...]
Pingback by | October 13, 2008 | Reply
3. Now, I try to verify your statement …
“That is, the exponential function satisfies the equation f(x)=f’(x). We already knew this about exp, but there we ultimately had to use the fact that we defined the logarithm ln to have a specified derivative” to prove that:
d(e^x)/dx = e^x, (1)
Let differentiate the both sides of eq.(1) respect to x, and suppose d(e^x)/dx = u, hence the result can be written as following
du/dx = u, (2)
Next, separating variables gives
du/u = dx, (3)
and integrating the both sides of eq.(3) gives ln(u) = x. Here I put the integrating constant c=0, because at x=1, ln(1)=0. From ln(u) = x , I find u = e^x. After u is written into initial form, then I can prove that d(e^x)/dx = e^x.
Now, turn me invite you all visit to my wordpress at http://rohedi.wordpress.com and/or my website at http://rohedi.com to give some comments to my research.
Comment by | November 30, 2008 | Reply
http://mathoverflow.net/questions/tagged/lie-algebras
## Tagged Questions
1answer
104 views
### Resolutions of Lie algebras
We have a good notion of dgc algebra resolutions of commutative algebras. Is there an explicit construction of a dg Lie algebra resolution of a Lie algebra?
1answer
108 views
### Quantized conserved quantities appearing from the Lie-algebra
Hi, consider a simple situation in quantum mechanics: Your Hilbert space is $\mathcal{H}=L^2(\mathbb{R}^3)$ and you use the obvious unitary representation \$\pi\colon G=O(3)\times\ …
0answers
127 views
### Source of a formula for tensor product multiplicities?
This is a follow-up to a recent question by Allen Knutson here, involving a special type of tensor product multiplicity for a simple Lie algebra $\mathfrak{g}$ over $\mathbb{C}$ (o …
2answers
91 views
### Connectedness of Springer Fibers
Let $G$ be a connected, simply-connected, complex semisimple Lie group with Lie algebra $\frak{g}$. Let $\mu:T^*\mathcal{B}\rightarrow\mathcal{N}$ be the Springer resolution of \$\m …
2answers
86 views
### quasi-minuscule representations
Which representations of $F_{4}$, $E_{8}$ and $G_{2}$ are quasi-minuscule?
1answer
270 views
### A question about the proof of Beilinson-Bernstein localisation
I'm trying to understand the proof of the Beilinson-Bernstein localisation theorem at the moment, but there's just one point where I'm having a mental block, and was wondering if a …
0answers
117 views
### Explicit Lie May structure on cosimplicial DG Lie algebras
In the paper "Homotopy Lie algebras", Schechtman and Hinich proved that any cosimplicial differential graded Lie algebra has the structure of a 'Lie May algebra'. If my understan …
1answer
145 views
### Reference request - localisation de g-modules
Does anyone have a link to a copy of Beilinson-Bernstein's "Localisation de g-modules", in which they prove the Beilinson-Bernstein theorem? I can't find it anywhere.
1answer
153 views
### finite dimensional irreducible representation of finite dimensional nilpotent Lie algebra
Let $k$ be a field, $L$ be a finite dimensional nilpotent Lie algebra over $k$ and $M$ be a finite dimensional irreducible representation of $L$. Assume that there is a linear func …
1answer
126 views
### ‘Generalised’ coinvariant algebras
Let $\mathfrak{g}$ be a simple complex Lie algebra, and $\mathfrak{h}\subset\mathfrak{g}$ a Cartan subalgebra with Weyl group $W$. Consider the fibre product \$\mathfrak{h}\times_{\ …
1answer
91 views
### Reduction of antisymmetric complex matrices
Let $E=\mathfrak{so}(n,\mathbb{C})$ be the Lie algebra of antisymmetric complex matrices. We consider the action of the complex orthogonal group $SO(n,\mathbb{C})$ on $E$ by conjug …
1answer
127 views
### Does the vanishing of the Poisson bracket on $S(\mathfrak{g})^{\mathfrak{g}}$ inspire the discovery of Duflo’s isomorphism theorem?
For any finite dimensional Lie algebra $\mathfrak{g}$, we know that the universal enveloping algebra $U(\mathfrak{g})$ is a deformation of the symmetric algebra $S(\mathfrak{g})$. …
0answers
127 views
### polynomial representation of $sl_{2}(k)$
Let $k$ be an algebraic closed field of characteristic 0. We write X=\left( \begin{array}{ccc} 0 & 1\\ 0 & 0\\ \end{array} \right),~~ Y=\left( \begin{array}{ccc} 0 & …
1answer
74 views
### Criterion for nilradical of a maximal parabolic subalgebra to be abelian?
This question has some overlap with previous ones but doesn't seem to have a well-documented answer. I recall some literature (mostly involving Lie groups and hermitian symmetric …
0answers
102 views
### universal enveloping algebras and commutator subalgebras
Let $A$ and $B$ are Lie subalgebras of a Lie algebra $L$. $U(A)$, $U(B)$ and $U(L)$ are the universal enveloping algebras of $A$, $B$ and $L$, respectively. Let $[A, B]$ be the Lie …
http://mathhelpforum.com/advanced-statistics/7956-spearmans-rank-multiple-ranks.html
# Thread:
1. ## Spearmans rank multiple ranks
I have my data ready for ranking; my only question is, when it comes
to ranking a number that occurs more than once, what will I rank it?
E.g. I have one number 1 in my data, so this gets ranked as 1,
but I have five number 2's in my data, so would this be ranked as 2.2,
since there are 5 of them and 2.2 would be the average (1/5 = 0.2),
or would it be 2.5 for each of them?
2. Originally Posted by dtwazere
I have my data ready for ranking; my only question is, when it comes
to ranking a number that occurs more than once, what will I rank it?
...
Hello,
you have:
rank → value
1 → 1
2 → 2
3 → 2
4 → 2
5 → 2
6 → 2
7 → 3.6
... → ...
then you calculate the rank as the average of all similar ranks. In this case all 2's will get the rank $r=\frac{2+3+4+5+6}{5}=4$
Now you have:
1 → 1
4 → 2
4 → 2
4 → 2
4 → 2
4 → 2
7 → 3.6
... → ...
EB
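EB's averaging rule is easy to mechanize; here is a small sketch (a hypothetical helper of my own, mirroring the table above):

```python
# Average ("mid") ranks for tied data: the five 2's occupy rank positions
# 2..6 in the sorted list, so each gets (2+3+4+5+6)/5 = 4, as in the post.
def average_ranks(data):
    srt = sorted(data)
    positions = {}                      # value -> its 1-based positions
    for pos, v in enumerate(srt, start=1):
        positions.setdefault(v, []).append(pos)
    return [sum(positions[v]) / len(positions[v]) for v in data]

print(average_ranks([1, 2, 2, 2, 2, 2, 3.6]))
# -> [1.0, 4.0, 4.0, 4.0, 4.0, 4.0, 7.0]
```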
3. Hi, thanks for the response, this has caused me more confusion, would it be possible for you to rank my data for me as i am really stuck at this point! I can do the rest of the equation myself.
I would really appreciate this
Attached Files
• Book2.xls (18.5 KB, 44 views)
4. Originally Posted by dtwazere
Hi, thanks for the response, this has caused me more confusion, would it be possible for you to rank my data for me as i am really stuck at this point! I can do the rest of the equation myself.
I would really appreciate this
Hello,
...would it be possible for you to rank my data for me as i am really stuck at this point!
of course.
I've attached your modified file so you can see what I've done with your data:
1. I copied the time data into the column t' in sorted order.
2. In the column r' are listed all possible ranks in increasing order.
3. In the column r'' are listed the ranks so that equal data get equal ranks. (Notice: In German the comma is the same as the decimal point in English)
4. Now you have a one to one function between data and rank and you can accomplish your original list.
Remark: Of course it looks a little bit screwy to have a rank 11.5. But that doesn't matter for your calculations.
EB
Attached Files
• Book2_exmpl1.xls (69.5 KB, 45 views)
5. Originally Posted by dtwazere
Hi, thanks for the response, this has caused me more confusion, would it be possible for you to rank my data for me as i am really stuck at this point! I can do the rest of the equation myself.
Hello,
in the attachment you'll find the completed table.
As you may know, you can calculate the ranks by using "AVERAGE(range with absolute addresses!)". For the one-to-one function you can use a command like VLOOKUP (in German it is called SVERWEIS, literally "vertical reference" — I'm sorry, I only know the commands in German, which are not very helpful for you, I believe)
EB
Attached Files
• Book2_compl.xls (79.5 KB, 51 views)
6. Thank you all for your help. I will now do my spearmans ranking.
You guys are cool
Would it be possible for you to check my spearmans rank if i upload it as an attachment to see if i got it correct?
If it is not too much trouble.
Thanks, Daniel
Attached Files
• 1303d1164451142-spearmans-rank-multiple-ranks-book2_compl.xls (26.0 KB, 40 views)
7. Originally Posted by dtwazere
Thank you all for your help. I will now do my spearmans ranking.
You guys are cool
Would it be possible for you to check my spearmans rank if i upload it as an attachment to see if i got it correct?
If it is not too much trouble.
Thanks, Daniel
Hello, Daniel,
obviously you used $r_S=1-\frac{6\cdot \sum\left(D^2\right)}{n(n^2-1)}$. So everything is fine and correct.
Congratulations!
By the way: If I were you I would write somewhere on the spread-sheet which formula you have used. Otherwise you'll not understand your own homework two weeks later.
EB
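For reference, the formula EB confirms, $r_S=1-\frac{6\cdot \sum\left(D^2\right)}{n(n^2-1)}$, takes only a few lines (illustrative rank data of my own; note the classic formula is exact only without ties — with tie-averaged ranks it is a close approximation, the exact version being the Pearson correlation of the ranks):

```python
# Spearman's rho via r_s = 1 - 6*sum(D^2) / (n*(n^2 - 1)) on two rank lists.
def spearman_rho(rank_x, rank_y):
    n = len(rank_x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]))   # -> 0.8
```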
8. thanks a lot for all your help, i will attempt to do this for my other set of data. If i get confused ill post it here.
Thanks again!
http://math.stackexchange.com/questions/82953/stokes-theorem-application
# stokes theorem application
-edit, i really need to know which part i got wrong as i'm having trouble proceeding, please help me, thanks-
Applying Stokes' Theorem, evaluate the integral
Curve C is the intersection of the boundary surface of the cube $0≤x,y,z≤a$, with the plane $x+y+z=3a/2$ where a = 1.17. The contour C is oriented counterclockwise if seen from the positive direction of the Ox-axes.
my workings:
Let S be the hexagon that lies in the cube 0<=x,y,z <=a and on the plane x + y + z = 3a/2
then we can parameterize it using x,y obtaining z = 3a/2 - x - y.
normal vector to plane = | {i j k}, {1 0 -1}, {0 1 -1} | = i + j + k
curl F = |{i j k}, {dx dy dz}, {$y^{2}-z^{2}$ $z^{2}-x^{2}$ $x^{2}-y^{2}$}| = ($-2y-2z$)i + ($-2z-2x$)j + ($-2x-2y$)k
using Stokes' theorem
the integral = $\int \int_{S}$ curl F.N dS
= $\int \int_{0<=x,y<=a} -4y - 4x - 4(\frac{3a}{2} - x - y) dx dy$
=$\int \int_{0<=x,y<=a} 6a$ dx dy = 6a $\int \int_{0<=x,y<=a}1 dx dy$ = 6a * area = 6a^3.
which part of my workings is wrong? thanks for looking through
-
How do you know this answer is wrong? The only thing that jumps out at me is the sign error in the last step. – user7530 Nov 17 '11 at 8:43
because i have the final numerical answer and it's different from mine. i'm really at a loss here. you're right, i think my orientation is wrong, but that only changes the sign. – adsisco Nov 17 '11 at 10:56
The projection of the hexagon to the $(x,y)$-plane is not the full square, but only ${3\over4}$ of it. See my answer below. – Christian Blatter Nov 17 '11 at 12:59
## 1 Answer
Given a force field ${\bf F}=(P,Q,R)$ and the oriented boundary $\partial S$ of a piece of surface $S$ Stokes' theorem says that
$$W:=\int_{\partial S}\ {\bf F}\cdot d{\bf x}\ =\ \int_S\ {\bf rot\ F}\cdot{\bf n}\ {\rm d}\omega\ ,$$
where ${\rm d}\omega$ denotes the scalar surface element. In your case we have
$${\bf rot\ F}\cdot{\bf n}=(-2y-2z, -2z-2x,-2x-2y)\cdot\Bigl({1\over\sqrt{3}},{1\over\sqrt{3}},{1\over\sqrt{3}}\Bigr)=-{4\over\sqrt{3}}(x+y+z)=-2\sqrt{3}\ a$$
at all points of $S$, whence $W=-2\sqrt{3} a\ \omega(S)$. Now $S$ is a regular hexagon with edge length $a/\sqrt{2}$. Therefore $\omega(S)=6\cdot{\sqrt{3}\over 8}a^2$, so that we get definitively
$$W=-2\sqrt{3}a\cdot{3\sqrt{3}\over 4}a^2=-{9\over2}\ a^3\ .$$
-
THANKS! i really couldn't thanks you enough. – adsisco Nov 17 '11 at 14:07
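The accepted value can also be sanity-checked numerically (assumed setup, a = 1.17): on the plane $x+y+z=3a/2$, curl F · (1,1,1) = −4(x+y+z) = −6a, and the hexagon projects onto the part of $[0,a]^2$ where $z = 3a/2-x-y$ stays in $[0,a]$, so a midpoint-rule sum over that region should approach $-9a^3/2$:

```python
# Grid integration of curl F . (1,1,1) over the projection of the hexagon.
a = 1.17
n = 1000                        # grid resolution per axis
h = a / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        z = 1.5 * a - x - y
        if 0.0 <= z <= a:       # (x, y) lies under the hexagon
            total += -4.0 * (x + y + z) * h * h

print(total, -4.5 * a ** 3)     # both about -7.207
```

The restriction $0 \le z \le a$ is exactly why the projected region is only 3/4 of the square, as pointed out in the comments.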
http://physics.stackexchange.com/questions/43021/kugo-and-ojimas-canonical-formulation-of-yang-mills-using-brst?answertab=oldest
# Kugo and Ojima's Canonical Formulation of Yang-Mills using BRST
I am trying to study the canonical formulation of Yang-Mills theories so that I have direct access to the $n$-particle states of the theory (i.e. the Hilbert space). To that end, I am following Kugo and Ojima's (1978) 3-part paper.
At the outset, I am confused by their Lagrangian, and their two differences from the conventional one (I write the Lagrangian 2.3 of their paper):
$$\mathcal{L}=-\frac{1}{4}F^a_{\mu\nu}F^{a\,\mu\nu}-i\partial^\mu\bar{c}D_\mu^{ab}c^b-\partial^\mu B^a A_\mu^a+\alpha_0 B^a B^a/2$$
1. They have rescaled the Ghost field so that its kinetic term has a factor of $i$ in front.
2. They have integrated by parts on $B^a\partial_\mu A^{a\,\mu}$, effectively making $B$ dynamical.
The authors chose these two differences so that (1) BRS variations (eq 2.15 in their paper) preserve Hermiticity of the Ghost fields, and (2) the Lagrangian is BRS invariant.
I am totally confused by their second point. I thought the standard BRS Lagrangian appearing in standard texts, for example, in Peskin and Schroeder was already BRS invariant. Why the $\partial B.A$ term?
-
What is the precise reference to Kugo and Ojima's paper? – Qmechanic♦ Oct 30 '12 at 15:48
## 2 Answers
Kugo and Ojima's work was one of the major breakthroughs in understanding the role of BRST in the quantization of gauge theories. Historically BRST was discovered in the path integral formalism. The understanding of this theory as a cohomology theory started from the Kugo and Ojima's work.
Now, the action is BRST invariant with and without the Gaussian integration over the auxiliary field $B^a$ (called the Lautrup-Nakanishi multipliers). They are introduced in order not to have explicit dependence on the gauge parameter in the Ward identities (please see a recent review by Becchi). The BRST-invariance Ward identities are a crucial step in the unitarity proof.
Kugo and Ojima actually solved the BRST cohomology problem of the Yang-Mills theory. They actually identified the physical and unphysical states of the theory (in terms of the BRST operator) as follows: The physical states correspond to the states annihilated by the BRST operator and in addition of positive norm.
The unphysical states are arranged in degenerate quartets. This is called the Kugo–Ojima quartet mechanism. One quartet consists of a ghost, an antighost, and the longitudinal and temporal gluons. In their formalism these states can be generated from the vacuum by the action of the ghost and antighost operators as well as by the field $B^a$ and its conjugate momentum. They also conjectured that since colored quark operators and transversal gluon operators belong to the quartet sector, these states must be confined.
-
Although this reply does not directly address my question, it is incredibly informative. I always welcome historical anecdotes regarding achievements and breakthroughs made in theoretical physics! – QuantumDot Nov 4 '12 at 2:14
Adding a total divergence term $-\partial^{\mu}(B^aA_{\mu}^a)$ (and other$^{1}$ total divergence terms) to the Lagrangian density does not change the classical equations of motion. In particular, this change doesn't make the Lautrup-Nakanishi field $B^a$ a propagating field.
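Explicitly, the two forms of the gauge-fixing term differ only by this total divergence (just the Leibniz rule, so this step is standard rather than specific to Kugo and Ojima):

$$-\partial^\mu B^a\, A_\mu^a \;=\; B^a\,\partial^\mu A_\mu^a \;-\;\partial^\mu\left(B^a A_\mu^a\right),$$

so the two actions agree up to a boundary term.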
One may prove that both the old and the new actions are BRST invariant.
As to why Kugo and Ojima choose to add the total divergence terms in the path integral formalism, if they are not being cavalier about boundary terms, I'm guessing it is linked to their choice of boundary conditions, and to have the most straightforward correspondence to the operator formalism, where all the field variables of the model should have well-defined Hermitian properties wrt. the inner product structure of the Hilbert space, and where the quartet mechanism takes place.
--
$^{1}$ Kugo and Ojima also integrate the Faddeev-Popov determinantal term by part (as compared to Peskin et al.).
-
I don't understand how performing an integration by parts makes "all the field variables of the model have well-defined Hermitian properties wrt. the inner product structure of the Hilbert space". Would you elaborate on this? Thanks! – QuantumDot Oct 30 '12 at 15:16
Is it straightforward to show that certain representations of the Lagrangians would make all field variables have well-defined Hermitian properties wrt inner product structure of the Hilbert space? What does that mean, anyway? – QuantumDot Nov 1 '12 at 23:39
Please, please, please; can you tell me what you mean by "all the field variables of the model have well-defined Hermitian properties wrt. the inner product structure of the Hilbert space"? Thanks! – QuantumDot Nov 4 '12 at 2:15
http://math.stackexchange.com/questions/10555/categorization-of-logic?answertab=votes
# categorization of logic
(1). I was wondering about what are the relation and differences between formal and informal logic?
What topics does each of them have? For example, topics such as Meaning and Definition, Syllogistic Logic, Inductive Logic, Probabilistic/Statistical Reasoning, Deductive Logic
Is symbolic logic the same as formal logic?
(2) what is the relation between informal/formal logic and deductive/non-deductive reasoning/inference?
must formal logic be only for deduction, not for non-deductive reasoning/inference?
(3) what are other usual/better ways to categorize various logic topics?
Thanks and regards!
-
## 3 Answers
Informal logic is, well, informal. As such it is not unambiguously defined what the subject matter is, how different statements are related to each other, or (more ambitiously) what statements "mean". This type of activity is more popular in philosophy, linguistics and other areas that are not specifically mathematical. Modern interest in informal reasoning is often about how to formalize it, as in AI, natural language processing or data mining.
Formalized logic is as described in the other answers. Essentially, there are many types of formal logic, and each one is a game with its own well-defined rules (in principle, performable by a computer) for taking a set of statements -- informally thought of as "premises" -- and adding to it some additional statements, informally thought of as "conclusions". In spirit it is very close to computer programming and its study as a field resembles computer science. Some branches such as model theory or set theory are studied purely as mathematics in the sense that the explicitly combinatorial/linguistic aspect is in the background and the sentences are studied in a "semantic" approach that speaks in terms of the structures, objects and their properties (the material supposedly described by the logic) rather than the permissible syntactic operations in a formal logical system.
Mathematics is intermediate between these two extremes. It uses logic(s) that could be called potentially (or presumably, or in-principle) formalizable. Steps of the game are not codified precisely enough to be programmed on a computer, but they are standardized and sufficiently well-defined to correspond to known genuine moves in some actual formal systems, so there is a presumption that any correct non-formalized proof is robust enough to have many slightly different expansions into a very detailed formal proof in several of the formal systems suggested as a "foundation" for mathematics.
You mentioned probabilistic or statistical reasoning. This is a somewhat separate question because any deterministic setting can be enlarged to a probabilistic one (e.g., instead of measuring attributes or their absence by $1$ or $0$, allow any numerical value between $0$ and $1$) and the logical rules modified accordingly in a known way. Similarly, any setting that uses ordinary or probabilistic logic on objects thought of as "data" can be placed in the context of statistics, where one considers how the data might have been generated in addition to the data itself, and the rules for reasoning and speaking about this richer picture can again be updated in a routine way. But these probabilistic/fuzzy/statistical enrichments are a kind of fixed recipe where the "logic" and "probabilistic" ingredients are cooked separately and mixed in an understood way. You could discuss the result as a new form of logic but really it is a repackaging of what you started with, not a fundamentally different mode of reasoning.
-
Unfortunately I'm not versed in philosophical logic, so let's start with some "different categories of logic".
### Scope
The first relevant category of logic is scope, what is the logic talking about. Propositional logic talks about nonparametric propositions. A typical rule (modus ponens) allows you to derive a proposition B given that you know A and that A implies B.
Predicate logic talks about parametric propositions, for example "X is a poster". We can derive "There's X such that X is a poster" from "Tim is a poster", whereas in propositional logic "Tim is a poster" is monolithic and cannot be broken up to "Tim" and "X is a poster".
Predicate logic is also known as first-order logic. If we quantify over predicates then we get Second order logic. This allows us to use concepts such as "There's a predicate P such that P(X) iff X=Tim".
The scope is enlarged in a different direction by Modal logic, where we can talk about events (propositions) eventually happening, happening until some other event happens, and so on.
### Rules
The rules of logic people are usually taught constitute Classical logic. In classical logic you're allowed to use proof by contradiction. However, some mathematicians (well, logicians) dislike such rules since they're non-constructive (this is like set-theorists who consider what happens when you're not allowed to use AC, the axiom of choice).
By modifying (restricting) the rules, we arrive at Intuitionistic logic (or Constructive mathematics), in which every proposition that we prove has a constructive proof from the givens. In particular, you can't use a proof of the sort "if X, then Y; if not X, then also Y" unless you can decide whether X is in fact true. The resulting world is nice - for example all functions are continuous. On the other hand, not every theorem you can prove in classical logic is true in the constructive sense, although surprisingly many are (for example, the fundamental theorem of algebra is still provable).
In a different direction, in Infinitary logic the proof is infinite and there are logical rules with infinitely many premises. For example, you could have a rule deducing "P(n) for every natural n" from the (countably) many propositions P(0), P(1), ... In general these logics are not well-behaved, but if you choose the parameters correctly, they are (look for $L_{\omega_1,\omega}$).
### Goals
Different areas of logic have different goals. For example, in Structural Proof Theory one goal is to understand how simple a proof system can be made, and what are the consequences. Using this syntactic approach, you can show that certain statements are not provable in certain systems by converting them into a very simple form, bounding the resulting size of the simple proof, and show that it is too short to prove the given statement.
A complementary direction is Model Theory, which is all about what a given set of axioms describes. You can prove for example that a set of first-order axioms cannot characterize the size of the universe (if it's infinite). The model-theoretic approach to showing that some theorem is unprovable is to exhibit a model of the axioms in which the theorem isn't true.
In computer science, people are interested in Proof Complexity, which studies how long it takes to prove statements if you're only allowed to use basic means. The holy grail is proving a statement which is even stronger than $P \neq NP$. A different sub-field constructs systems in which every predicate proven to exist is efficiently computable.
### Categorical logic
Unfortunately I don't know anything about categorical logic. Apparently there are some kinds of categories (in the sense of category theory) which describe "universes of set theory" along with their associated logic, which is often constructive. These things are called Topoi (singular Topos).
### Philosophers' logic
Philosophers are interested in different aspects of logic. Unfortunately, I don't know much about these, so I'll only give some sporadic examples.
There is some interest in nice, succinct systems of logic whose axioms are "as natural as possible". In mathematical logic, by and large people aren't interested in the exact form of the axioms but in what they describe and what can be proven about the system using them.
Other interest is in what the allowable rules of logic should be. I described classical logic and intuitionistic logic. There are many systems in between (and even systems weaker than the latter), which might be natural given some philosophical stance. Most mathematicians don't even bother with intuitionistic logic, although some logicians (and even people from other branches) are intrigued by it.
There's also some interest in making sense of paradoxes - given a statement which is paradoxical (for example, the Liar paradox), what should its truth value be? The mathematical solution is to avoid self-reference in some systematic way, since mathematicians are more interested (in this case) in the well-being of the system rather than these "monstrous examples".
-
Any formal system consists of:
1. a list of symbols (a vocabulary or alphabet),
2. a list of typographical rules to connect these symbols to write statements (a grammar or syntax), and
3. a list of typographical rules for deriving new statements from other statements (axioms or rules of inference).
By typographical rules, I mean rules that tell you only how to manipulate the symbols without reference to any meaning or interpretation that these symbols might have. Consider for example a rule of inference in propositional logic that I call the Join Rule:
If X and Y are true statements, then so is "(X) & (Y)".
Notice that I make no reference in the statement of this rule to any possible meaning of the "&" symbol. (You can probably guess that it is the AND-operator.)
You can play around with my own implementation of formal propositional logic using a freeware program I have developed, available at my website http://www.dcproof.com
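As a toy illustration of the point (this is not the dcproof program, just a sketch): a purely typographical rule can be implemented as string manipulation, with no appeal to what the symbols mean.

```python
# The Join Rule as pure symbol manipulation: from derived statements
# X and Y, derive the string "(X) & (Y)" -- no semantics involved.

def join_rule(x, y):
    """Join Rule: from statements X and Y, derive "(X) & (Y)"."""
    return "(" + x + ") & (" + y + ")"

theorems = {"p", "q"}               # statements already derived
theorems.add(join_rule("p", "q"))   # mechanically derive a new statement
print(theorems)                     # now contains "(p) & (q)"
```

The function never consults any truth table; it only rewrites strings, which is exactly what "typographical" means here.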
-
http://physics.stackexchange.com/questions/8043/decomposing-geodetic-de-sitter-effect-into-thomas-precession-and-spatial-curvatu?answertab=oldest
# Decomposing geodetic/de Sitter effect into Thomas precession and spatial curvature
According to Rindler the geodetic effect can be considered as consisting of Thomas precession combined with the effect of moving through curved space.
Wolfgang Rindler (2006) Relativity: special, general, and cosmological (2nd Ed.) p234
However according to Misner, Thorne, and Wheeler, Gravitation, p. 1118, Thomas precession does not come into play for a freely moving satellite.
See: http://en.wikipedia.org/wiki/Talk:Geodetic_effect
I think that although a freely moving satellite doesn't feel gravity, its relation to an observer is still subject to Lorentz transformations and hence Thomas precession.
So who is right? Rindler or Misner, Thorne, and Wheeler?
-
## 1 Answer
The difference between Rindler's wording and the MTW wording is just a difference in the choice of coordinates.
Thomas precession in STR
First, what is the Thomas precession? It is a special relativistic effect so the original derivation of the Thomas precession only applies in flat spacetimes. In other words, Rindler's application of the Thomas precession in the context of general relativity requires one to specify how the flat special relativistic spacetime is identified with, or embedded into, the curved spacetime in general relativity.
The Thomas precession is a change of the angular momentum that a gyroscope undergoes when it is attached to another object whose velocity is changing and going along a curve in the space of velocities. Why does it occur?
Well, in special relativity, the velocity is a vector $u^\mu$ normalized so that $u^\mu u_\mu=1$. The space of such vectors is a two-part hyperboloid in a Minkowski space (its intrinsic signature is purely spacelike). Now, the angular momentum of a gyroscope moving together with an object is given by an antisymmetric tensor $J_{\mu\nu}$ that satisfies $J_{\mu\nu}u^\nu=0$. Now, if you try to parallel transport this tensor $J_{\mu\nu}$ along a path inside the hyperboloid, it will not return to itself because the hyperboloid is a curved submanifold. Instead, it will rotate by an angle around an axis (one that depends on the path in the space of velocities) - and this is what we mean by the Thomas precession.
General relativity
In the case of a gyroscope attached to a satellite in a gravitational field described by general relativity, we may describe the vicinity of the world line of the satellite as a piece of flat Minkowski spacetime. After a whole orbit, we return to the same place in space. However, the orbit won't be quite periodic in our "flat, thin, and long Minkowski cylinder" surrounding the world line. Instead, we will induce both a rotation of the velocity space as well as a rotation of the ordinary position space, caused by the actual curvature of the space. Rindler probably calculates the full effect by summing these two contributions - from the monodromy in the uniformly curved velocity space and from the monodromy in the actual spacetime curved by the presence of matter.
So I am pretty confident that Rindler did his job correctly and avoided any sign errors. Gravity Probe B has confirmed the result, after all.
MTW are arguably able to calculate the total result correctly, too. But they organize their calculation differently, attributing the whole effect to the curvature of space. Effectively, their coordinates for the "cylinder surrounding the satellite's world line" differ by a "twist" (a time-dependent rotation) from Rindler's cylinder - the two groups of relativists use different coordinates.
The MTW proposition seems to directly contradict Rindler's calculational strategy and I think that the MTW, in the very satellite context that is relevant and with the Rindler's choice of coordinates, is incorrect. A statement that would be true and similar to the MTW proposition is that if the satellite were freely moving in space and had "no intrinsic rotation" relatively to a chosen system of coordinates, then the velocity could be viewed as a "constant" and the Thomas effect would be exactly zero.
However, the coordinates chosen by Rindler contradict the assumption because the satellite itself - not just the gyroscope - is rotating in this system of coordinates (the gyroscope is also rotating, and differently). So the velocity itself is non-constant even in the flattened cylinder surrounding the world line, and the Thomas precession contribution is nonzero.
As you can see, the existence or absence of the Thomas precession depends on whether or not we consider the velocity of the satellite to be "constant" and this question depends on whether or not our "natural" (there is no natural one!) coordinate system is rotating relatively to other people's natural systems. If there is no Thomas precession in one system, it's because we embedded the special relativistic spacetime in such a way that the velocities stay "constant". However, if we choose a "twisted" coordinate system that is rotating with respect to the first one, the velocities will be non-constant and the precession will receive contributions from the Thomas precession.
By changing the coordinate systems, arbitrary parts of the overall precession after one period - that everyone has to agree with - may be moved from the Thomas precession contribution to the curvature-of-space contribution.
-
Excellent answer. This is an example of something very common in relativity: the observed fact (precession) can "look like" different things in different coordinate systems. MTW are correct that, in a certain choice of coordinates, there is no Thomas precession, but to the extent that they seem to mean that this is a universal fact about the world rather than a fact about a particular choice of coordinates, their statement is misleading. – Ted Bunn Apr 4 '11 at 18:29
Thanks Luboš Motl for the detailed answer, and Ted Bunn for the helpful comment. – user2929 Apr 5 '11 at 14:52
http://conservapedia.com/Parameterization
# Parametrization
### From Conservapedia
Parametrization is a technique in calculus for expressing multiple variables in terms of only one variable, usually denoted "t", over a specified range.
The most common parametrization is for a circle centered at the origin in the xy-plane:
• x = rcos(t)
• y = rsin(t)
• z = 0
• $0 \le t \le 2\pi$
Always remember to be precise in determining the new limits on the new parameter t.
This technique is particularly useful for calculating extrema and line integrals.
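As a small sketch of the line-integral use case: with the circle parametrization above ($x = r\cos t$, $y = r\sin t$, $0 \le t \le 2\pi$), the arc length $\int \sqrt{x'(t)^2 + y'(t)^2}\, dt$ recovers the circumference $2\pi r$.

```python
# Arc length of the circle via its parametrization (midpoint rule).
import math

r = 2.0
n = 100000
dt = 2 * math.pi / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * dt
    dx = -r * math.sin(t)       # x'(t)
    dy = r * math.cos(t)        # y'(t)
    total += math.hypot(dx, dy) * dt

print(total, 2 * math.pi * r)   # both ~ 12.566
```

Note how the limits on t matter: integrating only up to pi would give half the circumference.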
http://mathhelpforum.com/discrete-math/151592-how-many-bit-strings-length-27-there.html
# Thread:
1. ## How many bit strings of length 27 are there...?
How many bit strings of length 27 are there such that:
a. the bit string corresponding to the last twelve positions contains exactly seven 0's
b. the bit string has at least fifteen 0's and at least ten 1's; also, the bit string corresponding to the first nine positions must contain six 1's and the bit string corresponding to the last twelve positions must contain a maximum of ten 0's.
c. the bit string corresponding to the first ten positions contains exactly eight 1's and the bit string corresponding to the last fifteen positions contains (or does not contain) the string 1011101 as a substring
2. Hi,
I will attempt a.
For the last 12 positions there are $\binom{12}{7}$ ways to pick the 0's, once the 0's are fixed, the 1's fall naturally into position.
Then for the first 15 positions there are 2 choices for each position, either a 1 or a 0, thus there are $2^{15}$ choices for the first 15 positions.
Thus in total we have $\left(2^{15}\right)\binom{12}{7}$ bit strings.
I would really like to know if this is correct though, could someone confirm? Thanks.
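One way to confirm part (a) is a brute-force count over all length-12 suffixes (the first 15 positions are unconstrained, contributing the factor $2^{15}$):

```python
# Brute-force check: count length-12 strings with exactly seven 0's,
# compare against C(12,7), and print the full answer C(12,7) * 2^15.
from itertools import product
from math import comb

count = sum(1 for bits in product("01", repeat=12) if bits.count("0") == 7)
print(count, comb(12, 7))        # both 792
print(comb(12, 7) * 2 ** 15)     # 25952256 bit strings of length 27
```

So the reasoning in the answer checks out: the count is $\binom{12}{7} \cdot 2^{15} = 25{,}952{,}256$.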
http://math.stackexchange.com/questions/16553/understanding-an-integral-from-page-15-of-titchmarshs-book-the-theory-of-the-r
# Understanding an integral from page 15 of Titchmarsh's book “The theory of the Riemann Zeta function”
In Titchmarsh's book "The theory of the Riemann Zeta function" pg. 15 where the functional equation of the zeta function is being derived, I couldn't understand this part: $$\frac{s}{\pi} \sum_{n=1}^{\infty} \frac{(2n\pi)^s}{n} \int_{0}^{\infty} \frac{\sin y}{y^{s+1}} dy = \frac{s}{\pi} (2\pi)^s \{-\Gamma(-s)\}\sin\frac{1}{2}s\pi\zeta(1-s)$$
I could not digest Titchmarsh's reasoning. Can anyone explain this please?
Thanks,
-
## 2 Answers
If I'm reading your question correctly, you'd like to prove the stated equality? If so, perhaps this might orient you a little bit. Write the left hand side as \begin{eqnarray} \frac{s}{\pi} \left( \int_{0}^{\infty} \frac{\sin y}{y^{s+1}} dy \right) (2\pi)^{s} \left( \sum_{n = 1}^{\infty} n^{s-1} \right) \end{eqnarray} To prove that this equals the right side you'll need a definition and an identity. The zeta function is defined as $\zeta(s) = \sum_{n = 1}^{\infty} n^{-s}$ with $\mathbf{Re}(s) > 1$, so the sum above is clearly equal to $\zeta(1-s)$ with $\mathbf{Re}(s) < 0$. The hard part is now showing the following gamma function integral representation \begin{eqnarray} \Gamma(s) = \frac{1}{\sin \frac{\pi s}{2}} \int_{0}^{\infty} \frac{\sin y}{y^{1-s}} dy, \end{eqnarray} where the integral converges if and only if $-1 < \mathbf{Re}(s) < 1$. Once you've got this in hand, then your equality is true on $-1 < \mathbf{Re}(s) < 0$. Now just analytically continue by individually continuing both the gamma function and the zeta functions to their largest respective domains.
(Before I post a proof of the integral representation, I'll give you a few hours to try to work on it yourself. Hint: Use a change of variables on a more common integral representation. More to come)
-
:) Yea... a few hours would be good!!! – Roupam Ghosh Jan 6 '11 at 17:13
Got it, Mellin transform of $\sin(y)$ in $(0,\infty)$ is $\Gamma(s) \sin\frac{\pi s}{2}$ using standard formulas!!! What was your method, that you had in mind? – Roupam Ghosh Jan 6 '11 at 17:32
Hi Roupam! Working through the proof of the Mellin transform identity from first principles is equivalent to my original idea. Well done. – user02138 Jan 10 '11 at 14:37
I do not intend to post it, but for those who are interested the proof of the tough part of this question, namely the classic result that
$$\Gamma(s) \sin \left( \frac{\pi s}{2} \right) = \int_0^\infty y^{s-1} \sin y \textrm{ d}y$$
where $-1 < Re(s) < 1$ can be found in "Topics in Analytic Number Theory," by Hans Rademacher (in chapter 6).
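Not from Rademacher — just a quick numerical sanity check of the identity, sketched in Python (standard library only; the panel counts and the Euler-style averaging of the oscillatory tail are ad hoc choices):

```python
import math

def mellin_sin(s, periods=2000, steps=200):
    """Approximate the conditionally convergent integral
    int_0^oo y^(s-1) sin(y) dy for 0 < s < 1: composite Simpson
    over each half-period [k*pi, (k+1)*pi], then repeated
    averaging of the alternating partial sums."""
    partials, total = [], 0.0
    for k in range(periods):
        a = k * math.pi
        h = math.pi / steps
        acc = 0.0
        for i in range(steps + 1):
            y = a + i * h
            f = y ** (s - 1) * math.sin(y) if y > 0 else 0.0
            w = 1 if i in (0, steps) else (4 if i % 2 else 2)
            acc += w * f
        total += acc * h / 3
        partials.append(total)
    seq = partials[-60:]
    for _ in range(12):  # kills the alternating oscillation around the limit
        seq = [(u + v) / 2 for u, v in zip(seq, seq[1:])]
    return seq[-1]

s = 0.5
lhs = math.gamma(s) * math.sin(math.pi * s / 2)  # = sqrt(pi/2) for s = 1/2
assert abs(mellin_sin(s) - lhs) < 5e-3
```

The agreement at `s = 1/2` (both sides equal $\sqrt{\pi/2}$) is only a spot check on the strip $-1 < \mathrm{Re}(s) < 1$, of course, not a proof.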
-
By the way, if you should pick up the book don't just stop at the appropriate page for this problem. The book is excellent and chapter 14 contains one of my all time favourite results, Rademacher's convergent series for the partition function. It's well worth a look. – Derek Jennings Jan 6 '11 at 18:39
Yup... Thanks, thats what I needed... Its on page 82 (39.2)... I was going through the book, its a really good book!!! – Roupam Ghosh Jan 7 '11 at 2:56
Which answer should I accept? Both give part of the solution. :D – Roupam Ghosh Jan 7 '11 at 3:00
|
http://mathoverflow.net/revisions/6724/list
|
## Return to Answer
2 fixed inadvertent swap of Spec R and Spec S in 3rd para.
"Life is really worth living in a Noetherian ring R when all the local rings have the property that every s.o.p. is an R-sequence. Such a ring is called Cohen-Macaulay (C-M for short).": Hochster, 1978
Section 3 of that paper is devoted to explaining what it "really means" to be Cohen-Macaulay. It begins with a long subsection on invariant theory, but then gets to some algebraic geometry that will interest you.
In particular, he points out that if $R$ is a standard graded algebra over a field, then it is a module-finite algebra over a polynomial subring $S$, and that $R$ is Cohen-Macaulay if and only if it is free as an $S$-module. Equivalently, the scheme-theoretic fibers of the finite morphism $\mathrm{Spec}\ R \to \mathrm{Spec}\ S$ all have the same length.
At the end of section 3, Hochster explains that the CM condition is exactly what is required to make intersection multiplicity "work correctly": If $X$ and $Y$ are CM, then you can compute the intersection multiplicity of $X$ and $Y$ without all those higher $\mathrm{Tor}$s that Serre had to add to the definition.
He gives lots of examples and explains "where Cohen-Macaulayness comes from" (or doesn't) in each one. The whole thing is eminently readable and highly recommended.
|
http://mathoverflow.net/questions/81626/is-strong-approximation-difficult/81659
|
## Is strong approximation difficult?
Recently a colleague and I needed to use the fact that the natural map $SL_2(\mathbb{Z}) \rightarrow SL_2(\mathbb{Z}/N\mathbb{Z})$ is surjective for each $N$. I happily chugged my way through an elementary proof, but my colleague pointed out to me that this is a consequence of strong approximation.
After browsing through some references (including Ch. 7 of Platonov and Rapinchuk, and one of Kneser's papers) I now cheerfully agree with my colleague. Indeed, as P+R describe, strong approximation is "the algebro-geometric version of the Chinese Remainder Theorem".
But what about the proofs? To this particular novice in this area, the proofs seemed rather difficult -- involving a variety of cohomology calculations, nontrivial theorems about Lie groups, and much else besides. I confess I did not attempt to read them closely.
But perhaps this is because the theorems are proved in a great deal of generality, or perhaps such arguments are regarded as routine by experts. My question is this:
Is the proof of strong approximation genuinely difficult?
For example, can a reasonably simple proof be given for $SL_n$ which does not involve too much machinery, but which illustrates the concepts behind the general case?
-
The proof in Section 5 of Yoshida's notes (dpmms.cam.ac.uk/~ty245/2008_AGR_Fall/…), boils down to the elementary fact in your first sentence! I wonder if there really is a proof which is logically independent of this kind of elementary lemma, since otherwise your colleague's assertion seems a bit circular. (I am very far from an expert here.) – David Hansen Nov 22 2011 at 17:39
Paul Garrett - Hilbert modular form pg.279-282 for $SL(n)$. – Marc Palm Nov 22 2011 at 17:42
@pm: Care to say a few words about the ingredients in Garrett's proof? – David Hansen Nov 22 2011 at 17:55
## 4 Answers
There are two very-different questions here: the best arguments for surjectivity of the natural maps $SL(n,R)\rightarrow SL(n,R/I)$, and about Strong Approximation. While it is true that Strong Approximation more-than implies these surjectivities in situations that are not quite elementary, it is serious overkill, I think.
The most direct proofs of these surjectivities may very well strike one as needlessly and unilluminatingly messy, but I think this is a result of underestimating the issue. It's not as serious as (edit: "the strong version of") Strong Approximation, by far, but is not an easy exercise, either. The arguments in Rosenberg cannot be simplified much, apart from removing $K$-theory terminology, if one insists. A possibly irreducibly-minimal argument for $SL(2,\hbox{PID})$ is on-line at http://www.math.umn.edu/~garrett/m/mfms/notes/07b_surjectivity.pdf In any case, once one reconciles oneself to the not-quite-triviality of the surjectivity issue, the proofs one reasonably finds may seem less unreasonable.
The question(s) about proofs of (edit: "the strong forms of") Strong Approximation are in a different league. (Indeed, as in one comment or answer, the surjectivity issue by itself does not need adeles or cohomology.) In the late 1930s, Eichler had results amounting to Strong Approximation for simple algebras over number fields. M. Kneser's work starting about 1960 (and with two articles in the Boulder conference) mostly addressed orthogonal groups and classical groups other than SL(n) or unit groups of simple algebras. For the latter, including SL(n), the argument is essentially linear algebra and elementary analysis, with some clevernesses.
(The argument in my old Hilbert Modular Forms book was (if I recall correctly) a distillation of remarks in Eichler's old papers, and remarks of Kneser. The SL(n) case was not the main focus of any of that, since, despite its non-triviality, Strong Approximation for SL(n), even over number fields, is much easier than the orthogonal group case. Perhaps I'll put a version of that argument on-line sometime soon, since the book is out of print, and there is no longer any electronic version...)
In brief, the reason the Platonov-Rapinchuk iconicized version uses cohomology is to address the orthogonal-group or unitary-group cases, and others with more genuine arithmetic content than has the SL(n) case. That is, again, the SL(n) case of Strong Approximation is "trivial" by comparison to the orthogonal-group case, although it itself is vastly more serious than the surjectivity question.
[EDIT 2:] Following up @LYX's note, indeed, for the usual invocations of Strong Approximation for SL(n), the Bourbaki Comm. Alg. citation suffices. Unsurprisingly, it does argue on elementary matrices (thus implicitly a bit of K-theory), and is purely algebraic, in the sense that finiteness of residue fields is irrelevant. One "usual" invocation is to know that $SL(n,F_\infty)SL(n,F)$ is dense in $SL(n,\mathbb A_F)$ for a number field $F$, where $F_\infty$ is the product of archimedean completions ($F_\infty=F \otimes_{\mathbb Q} \mathbb R$). The application to automorphic forms is that quotients of this adele group are "merely" quotients of the Lie group $SL(n,F_\infty)$.
The stronger version replaces $F_\infty$ with $F_v$ for any place $v$. That is, the general case does not specially suppress reference to archimedean places. (The argument I gave in my Hilbert modular forms book did suppress archimedean places, exactly because of the specific motivations. Nevertheless, "my" argument there did more resemble the viewpoint that would extend to the stronger assertion. The date on Bourbaki's Comm. Alg. is 1985, which might explain why, from late 1970s to mid-1980's, I saw no discussion of any form of "Approximation" other than Kneser and Eichler. Kneser's paper does also mention Rosenlicht and Steinberg for discussions of more-interesting cases than SL(n).)
The Kneser article "Strong Approximation" in the Boulder Conference (Proc Symp Pure Math AMS IX, 1966, pp 187-196 cites Eichler 1938 J. Reine und Angew. Math "Allgemeine..." as treating the simple algebra case, excepting the definite quaternion algebra case (in which SA does not hold). Thus, Eichler's case was already more complicated than SL(n).
In summary: any form of the SL(n) case is vastly simpler than orthogonal-group or other cases. There is a bifurcation already for SL(n), namely, whether or not archimedean places are suppressed (thus, giving a "purely algebraic" result).
The more delicate result can be relevant for "Shimura curves", made from quaternion division algebras over totally real fields, split at only a single real place, to know that the adelic construction is a single classical quotient.
-
Unfortunately, the link seems to do not work. – Ralph Nov 23 2011 at 0:29
Oop, sorry about the bad link. Should be fixed now... – paul garrett Nov 23 2011 at 1:03
A very illuminating answer. Thank you! – Frank Thorne Nov 24 2011 at 19:35
The $SL_n$ case can be treated rather general in an elementary way:
Let $R \to S$ be a surjective homomorphism of commutative rings. If $SL_n(S)$ is generated by elementary matrices then the induced map $SL_n(R) \to SL_n(S)$ is surjective.
[This is, since an elementary matrix over $S$ obviously lifts to an elementary matrix over $R$].
The cases when $SL_n(S)$ is generated by elementary matrices include:
1. $S$ is Euclidean
2. $S$ is local
3. $S$ is a finite product of rings $S_i$ such that the $SL_n(S_i)$ are generated by elementary matrices.
4. $S$ is finite
[The proof of 1. and 2. is elementary and can be found in 2.3.2 / 2.2.2 of "Rosenberg: Algebraic K-Theory and its Applications" and 3. follows from $SL_n(S_1 \times S_2) \cong SL_n(S_1) \times SL_n(S_2)$. 4. holds since each finite comm. ring is a direct product of local rings.]
Example: Let $S = \mathbb{Z}/N\mathbb{Z}$ or more generally, let $S = D/I$ where $D$ is a Dedekind domain and $0 \neq I \trianglelefteq D$. By prime ideal factorization $I=\prod_{i=1}^m P_i^{k_i}$ and by Chinese Remainder Theorem $D/I \cong \prod_{i=1}^m D/P_i^{k_i}$ where $D/P_i^{k_i}$ is local with maximal ideal $P_i/P_i^{k_i}$. Thus $SL_n(D) \to SL_n(D/I)$ is surjective. Alternatively use that $D/I$ is finite.
-
What a delightful proof! Thank you. – Frank Thorne Nov 24 2011 at 19:32
I am quite surprised that it seems to be so poorly known, but an elementary proof of Strong Approximation for $SL_n$ over a Dedekind domain can already be found in Bourbaki (Algebre Commutative, VII, $\S$2, n.4). Essentially, the idea is to deduce it from the Chinese Remainder Theorem, by means of elementary matrices, as hinted in Ralph's answer. Actually, I would be curious to know a bit more about the history of this case of Strong Approximation: who and when did prove it first?
-
@LYX, is it meant to talk about the surjectivity of $SL(n,R)$ to $SL(n,R/I)$, or about the "strong" (genuine) version of Strong Approximation? I don't have Bourbaki's Algebre Comm here... but I would be mildly surprised if (what I call) "Strong Approximation" appears there, since archimedean places of a number field are very relevant. E.g., there is no "sense" of the assertion for abstract Dedekind domains. The surjectivity assertion is certainly much more elementary, and (I'd wager) would have been clear to Kronecker, at least, for example. But the "strong" version? Thanks for clarification. – paul garrett Nov 23 2011 at 1:56
Bourbaki's Algebre Commutative, VII, §2, n.4 has the title: "The Approximation Theorem for Dedekind domains". They take a Dedekind domain $A$ with fraction field $K$ and define the ring of restricted adeles $\bf A$ as the restricted product of completions of $K$ over all (non-zero) primes of $A$. Proposition 4 states: $SL(n,K)$ is dense in $SL(n,{\bf A})$. – LYX Nov 23 2011 at 2:42
@LYX: Ah, thanks for the clarification! So this notion of "adeles" does not include archimedean factors, in the number field case. But if there is only one archimedean place, as for $\mathbb Q$, perhaps this gives (what I think of as) Strong Approximation. Thanks again for the reference: I'd not noticed it, and had not heard a reference to it, even while looking for such things some years ago! :) – paul garrett Nov 23 2011 at 17:04
The proof for $SL(n, \mathbb{Z})$ is not very difficult, and takes a couple of pages in M. Newman's "Integral Matrices". Newman uses no fancy language at all (no adeles, no cohomology), so this is highly recommended. The book should be in your university library, if not, I am quite sure it is on gigapedia. The book is a classic, and is highly recommended for pretty much everything it covers.
-
Another reference of the same kind is Shimura, Introduction to the Arithmetic Theory of Automorphic Forms, lemma 1.38 and its proof. – Matthieu Romagny Nov 22 2011 at 20:35
@Matthieu: I have been told that Shimura's books/papers are generally really difficult to read, and that, for example, we should be glad that Deligne and Milne wrote very good books on Shimura varieties. Would you say this book is an exception? – M Turgeon Nov 22 2011 at 22:07
@M Turgeon: I have heard the same complaint about this book, but I read the proof of this lemma at Matthieu's suggestion and it is quite nice! (I have not yet looked up Newman's book.) – Frank Thorne Nov 23 2011 at 0:51
@M Turgeon : I think that that particular proof is neat. – Matthieu Romagny Nov 23 2011 at 21:52
|
http://www.mathplanet.com/education/pre-algebra/graphing-and-functions/the-slope-of-a-linear-function
|
# The slope of a linear function
The steepness of a hill is called a slope. The same goes for the steepness of a line. The slope is defined as the ratio of the vertical change between two points, the rise, to the horizontal change between the same two points, the run.
$$\text{slope}=\frac{\text{rise}}{\text{run}}=\frac{\text{change in } y}{\text{change in } x}$$
The slope of a line is usually represented by the letter $m$. $(x_1, y_1)$ represents the first point whereas $(x_2, y_2)$ represents the second point.
$$m=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}$$
It is important to keep the x-and y-coordinates in the same order in both the numerator and the denominator otherwise you will get the wrong slope.
Example:
Find the slope of the line through $(x_1, y_1) = (-3, -2)$ and $(x_2, y_2) = (2, 2)$:
$$m=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}=\frac{2-\left ( -2 \right )}{2-\left ( -3 \right )}=\frac{2+2}{2+3}=\frac{4}{5}$$
A line with a positive slope ($m > 0$), like the one in the example above, rises from left to right, whereas a line with a negative slope ($m < 0$) falls from left to right. For example, the line through $(x_1, y_1) = (-2, 3)$ and $(x_2, y_2) = (2, -1)$ has a negative slope:
$$m=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}=\frac{\left (-1 \right )-3}{2-\left ( -2 \right )}=\frac{-1-3}{2+2}=\frac{-4}{4}=-1$$
If two lines have the same slope the lines are said to be parallel.
You can express a linear function using the slope intercept form.
$$y=mx+b$$
where $m$ is the slope and $b$ is the $y$-intercept.
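The formulas above translate directly into a few lines of code. This sketch (not part of the lesson itself) uses exact fractions and rechecks both worked examples:

```python
from fractions import Fraction

def slope(p1, p2):
    """m = (y2 - y1) / (x2 - x1), as an exact fraction."""
    (x1, y1), (x2, y2) = p1, p2
    return Fraction(y2 - y1, x2 - x1)

def y_intercept(p, m):
    """Solve y = m*x + b for b, using one point p on the line."""
    x, y = p
    return y - m * x

m1 = slope((-3, -2), (2, 2))
assert m1 == Fraction(4, 5)    # positive slope: rises from left to right
m2 = slope((-2, 3), (2, -1))
assert m2 == -1                # negative slope: falls from left to right
assert y_intercept((2, 2), m1) == Fraction(2, 5)   # y = (4/5)x + 2/5
```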
|
http://www.physicsforums.com/showthread.php?p=2942271
|
Physics Forums
## Congruence Classes
Hello PhysicsForums!
I have been reading up on congruence classes and working out some examples. I came across one example that I seem to struggle understanding.
I've solved for $\lambda$ and I know that $\lambda = (3+\sqrt{-3})/2$ $\in$ $Q[\sqrt{-3}]$. I also know that $\lambda$ is a prime in $Q[\sqrt{-3}]$.
From here, I would like to prove that iff $\lambda$ divides $a$ for some rational integer $a$ in $Z$, it can be proven that 3 divides $a$.
Can this be done? If so, could someone show me?
Lastly (or as a second part to this), what are the congruence classes $(mod (3+\sqrt{3})/2)$ in $Q[\sqrt{-3}]$ ?
I really appreciate the help on this everyone!
*Note: I intentionally put $(mod (3+\sqrt{3})/2)$ with the $\sqrt{3}$, so it should not be negative for this part.
If $\lambda \mid a$, then $N(\lambda) = 3 \mid a^2$, so 3 divides $a$. Conversely, of course, $\lambda \mid 3$. It would seem there is no way you can arrive at $\sqrt{3}$ in this field, since obviously it would not be in $\mathbb{Q}[\sqrt{-3}]$ or among the Eisenstein integers. The positive and negatives of the quadratic field are not interchangeable.
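Robert's norm argument is easy to check by machine. A quick sketch of my own, writing $\omega = \frac{-1+\sqrt{-3}}{2}$, so that $\lambda = \frac{3+\sqrt{-3}}{2} = 2+\omega$ and $N(a+b\omega) = a^2 - ab + b^2$:

```python
def mul(x, y):
    """(a + b*w)(c + d*w) with w^2 = -w - 1 (w a primitive cube root of 1)."""
    a, b = x
    c, d = y
    return (a*c - b*d, a*d + b*c - b*d)

def norm(x):
    a, b = x
    return a*a - a*b + b*b

def divides(x, y):
    """Does x divide y in Z[w]?  Exact division test via
    y/x = y * conj(x) / N(x), with conj(a + b*w) = (a - b) - b*w."""
    a, b = x
    num = mul(y, (a - b, -b))
    n = norm(x)
    return num[0] % n == 0 and num[1] % n == 0

lam = (2, 1)                 # lambda = 2 + w = (3 + sqrt(-3))/2
assert norm(lam) == 3
for a in range(-30, 31):
    # lambda | a (a rational integer)  <=>  3 | a
    assert divides(lam, (a, 0)) == (a % 3 == 0)
```

Consistent with $N(\lambda)=3$, a rational integer is divisible by $\lambda$ exactly when it is divisible by 3, and the residues mod $\lambda$ are represented by $0, 1, 2$.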
Quote by robert Ihnot If $$\lambda \mid a, then N(\lambda) = 3 \mid a^2,$$ or that 3 divides a. Conversely, of course $$\lambda \mid 3$$
Thank you robert!
Do you have an idea on the second part? (quoted below)
What are the congruence classes $(mod (3+\sqrt{3})/2)$ in $Q[\sqrt{-3}]$ ?
Quote by Brimley Thank you robert! Do you have an idea on the second part? (quoted below)
It would seem there is no way you can arrive at $\sqrt{3}$ in this field, since obviously it would not come with the $\sqrt{-3}$ or the Eisenstein integers. The positive and negatives of the quadratic field are not interchangeable.
What happens is that we begin with the rationals and add the $\sqrt{X}$ to generate the field. The next step is to define and look for the quadratic integers in this set-up.
Quote by robert Ihnot It would seem there is no way you can arrive at $$\sqrt3$$ in this field since obviously it would not be with the $$\sqrt{ -3}$$ or the Eisenstein integers. The positive and negatives of the quadratic field are not interchangeable. What happens is that we begin with the rationals and add the $$\sqrt X$$ to generate the field. The next step is to define and look for the quadratic integers in this set up.
Is that the same if you treat this as a separate problem entirely?
Perhaps if I word it like this it will be different (if not just say no):
"What are the congruence classes $(mod (3+\sqrt{3})/2)$ in $Q[\sqrt{-3}]$ ?"
A quadratic integer, Eisenstein, is of the form $a+b\omega$ where $\omega = \frac{-1+\sqrt{-3}}{2}$. Here $a$ and $b$ are integers and $\omega^3=1$. The form will satisfy an integral equation with the squared term unity. Here we have, for the cube root of 1, $1+\omega+\omega^2 = 0$. The roots of our quadratic are $a+b\omega$ and $a+b\omega^2$. This gives the form $X^2-(2a-b)X+a^2-ab+b^2$. If we let $a=1, b=2$, we arrive at $X^2+3 = 0$. The question is whether we can arrive at the form $X^2-3 = 0$. You can try to find that.
Quote by robert Ihnot A quadratic integer of the Eisenstein form is of the form $$a+b\omega$$ where $$\omega = \frac{-1+\sqrt-3}{2}$$
I understood this, however I don't understand where you're going with this...
I tried to make it clear that $\sqrt{3}$ is not an algebraic integer in this set, so that it is useless to consider residue classes. If you want to adjoin $\sqrt{3}$ to this set then you would no longer be talking about a quadratic integer.
Quote by robert Ihnot A quadratic integer, Eisenstein, is of the form $$a+b\omega$$ where $$\omega = \frac{-1+\sqrt-3}{2}$$ Here a and b are integers and $$\omega^3=1$$. The form will satisfy an integral equation with the squared term unity. Here we have for the cube root of 1, $$1+\omega+\omega^2 = 0$$. The roots of our quadratic are $$a+b\omega$$ $$a+b\omega^2$$ This gives then the form of X^2-(2a-b)X+a^2-ab+b^2. If we let a=1,b=2, we arrive at X^2+3 = 0. The question is can we arrive at the form X^2-3 = 0. You can try to find that.
Okay, I just want to try and format your answer again to make sure I'm getting it right:
A quadratic integer, Eisenstein, is of the form $a+b\omega$ where $\omega = \frac{-1+\sqrt{-3}}{2}$. Here $a$ and $b$ are integers and $\omega^3=1$. The form will satisfy an integral equation with the squared term unity. Here we have, for the cube root of $1$, $1+\omega+\omega^2 = 0$. The roots of our quadratic are: Root 1: $a+b\omega$; Root 2: $a+b\omega^2$. This gives the form $X^2-(2a-b)X+a^2-ab+b^2$. If we let $a=1, b=2$, we arrive at $X^2+3 = 0$. The question is whether we can arrive at the form $X^2-3 = 0$. You can try to find that.
So what you're saying is we cannot find that form because we don't have $\sqrt{-3}$ in our mod statement, rather we have $\sqrt{3}$ which will prevent us from getting the statement of: $X^2-3 = 0$ ?
The question is how the form is arrived at. First we start with the rationals, then we adjoin $\sqrt{-3}$ and generate an expanded set of numbers. But that does not give us $\sqrt{3}$. After all, what is the point of trying to form "residue classes" of $\pi$ relative to the integers?
|
http://physics.stackexchange.com/questions/31638/what-equations-govern-the-formation-of-droplets-on-a-surface?answertab=votes
|
# What equations govern the formation of droplets on a surface?
When a smooth surface (like that of a steel or glass plate) is brought in contact with steam (over e.g. boiling milk), water is usually seen to condense on that surface not uniformly but as droplets. What are the equations which govern the formation and growth of these droplets? In particular, what role does the geometry of the surface play in it? Also, is it possible to prepare experimental conditions where no droplets are formed but water condenses uniformly?
-
## 2 Answers
It is more about properties of the material and surface roughness/texture than geometry of the surface on the large scale. There is a nice Wikipedia article on wetting, which you may find useful. In brief, it is the differences in surface tension at the liquid-solid ($\gamma_{LS}$), liquid-air ($\gamma_{LA}$), and solid-air ($\gamma_{SA}$) interfaces that determine the value of the contact angle $\theta=\arccos\left(\left(\gamma_{SA}-\gamma_{LS}\right)/\gamma_{LA}\right)\neq0$ and lead to droplet formation. If the angle is zero, the liquid will tend to cover the whole surface uniformly.
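For concreteness, the contact-angle relation in this answer (Young's equation) can be evaluated directly. A small sketch — the surface-tension numbers below are made up for illustration, not measured values:

```python
import math

def contact_angle_deg(gamma_sa, gamma_ls, gamma_la):
    """Young's equation: cos(theta) = (gamma_SA - gamma_LS) / gamma_LA."""
    c = (gamma_sa - gamma_ls) / gamma_la
    if c >= 1.0:
        return 0.0  # complete wetting: the liquid spreads as a uniform film
    return math.degrees(math.acos(c))

# gamma_SA == gamma_LS  =>  cos(theta) = 0  =>  a 90-degree droplet
assert abs(contact_angle_deg(50.0, 50.0, 72.0) - 90.0) < 1e-9

# gamma_SA - gamma_LS >= gamma_LA  =>  theta = 0: no droplets form
assert contact_angle_deg(130.0, 50.0, 72.0) == 0.0
```

The second case is the regime the question asks about: if the solid-air tension exceeds the liquid-solid tension by more than the liquid-air tension, condensation tends toward a uniform film instead of droplets.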
-
Is the contact angle the only factor, or is there any other factor? – Bernhard Jul 14 '12 at 18:04
– straups Jul 15 '12 at 5:17
Well, I can imagine that droplet size is also dictated by gravity (orientation of the plate) and the surface tension or the liquid-air interface. The maximum size of the droplets do not follow directly from your equation. – Bernhard Jul 15 '12 at 6:44
I would suggest the following well-written review articles by Jens Eggers, who is one of the most renowned and acknowledged researchers in this field:
Drop formation – an overview
Nonlinear dynamics and breakup of free-surface flows
-
|
http://mathhelpforum.com/pre-calculus/129708-parallel-lines.html
|
# Thread:
1. ## Parallel lines
The line 2x-3y+2=0 is midway between 2 parallel lines 6 units apart. Find the equation of the two lines
2. Originally Posted by lance
The line 2x-3y+2=0 is midway between 2 parallel lines 6 units apart. Find the equation of the two lines
Parallel lines have the same gradient. So write this line in $y = mx + c$ form to find its gradient and y-intercept.
Now since it lies midway between two parallel lines that are 6 units apart, each of the two lines is at perpendicular distance 3 from it. Writing them as $2x - 3y + c = 0$, the distance condition gives $\frac{|c - 2|}{\sqrt{2^2 + (-3)^2}} = 3$, so $c = 2 \pm 3\sqrt{13}$. (Careful: the $y$-intercepts do not simply differ by 3, because the 6 units are measured perpendicular to the lines, not vertically.)
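As a check (my own sketch): the perpendicular distance between parallel lines $ax+by+c_1=0$ and $ax+by+c_2=0$ is $|c_1-c_2|/\sqrt{a^2+b^2}$, which pins down the two constants:

```python
import math

def parallel_distance(a, b, c1, c2):
    """Perpendicular distance between ax+by+c1=0 and ax+by+c2=0."""
    return abs(c1 - c2) / math.hypot(a, b)

# Middle line: 2x - 3y + 2 = 0.  The two lines are 2x - 3y + c = 0
# with |c - 2| / sqrt(13) = 3, i.e. c = 2 +/- 3*sqrt(13).
delta = 3 * math.sqrt(13)
assert abs(parallel_distance(2, -3, 2, 2 + delta) - 3) < 1e-12
assert abs(parallel_distance(2, -3, 2, 2 - delta) - 3) < 1e-12
assert abs(parallel_distance(2, -3, 2 - delta, 2 + delta) - 6) < 1e-12
```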
|
http://spokutta.wordpress.com/2012/01/
|
# Sebastian Pokutta's Blog
Mathematics and related topics
## On linear programming formulations for the TSP polytope
with 13 comments
Next week I am planning to give a talk on our recent paper which is joint work with Samuel Fiorini, Serge Massar, Hans Raj Tiwary, and Ronald de Wolf where we consider linear and semidefinite extended formulations and we prove that any linear programming formulation of the traveling salesman polytope has super-polynomial size (independent of P-vs-NP). From the abstract:
We solve a 20-year old problem posed by M. Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the maximum cut problem and the stable set problem. These results follow from a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.
The history of this problem is quite interesting. From Gerd Woeginger’s P-versus-NP page (see also Mike Trick’s blog post on Swart’s attempts):
In 1986/87 Ted Swart (University of Guelph) wrote a number of papers (some of them had the title: “P=NP”) that gave linear programming formulations of polynomial size for the Hamiltonian cycle problem. Since linear programming is polynomially solvable and Hamiltonian cycle is NP-hard, Swart deduced that P=NP.
In 1988, Mihalis Yannakakis closed the discussion with his paper “Expressing combinatorial optimization problems by linear programs” (Proceedings of STOC 1988, pp. 223-228). Yannakakis proved that expressing the traveling salesman problem by a symmetric linear program (as in Swart’s approach) requires exponential size. The journal version of this paper has been published in Journal of Computer and System Sciences 43, 1991, pp. 441-466.
In his paper, Yannakakis posed the question whether one can show such a lower bound unconditionally, i.e., without the symmetry assumption, and he conjectured that symmetry ‘should not help much’. This sounded reasonable, however no proof was known. In 2010, to the surprise of many, Kaibel, Pashkovich, and Theis proved that there exist polytopes whose symmetric extension complexity (the minimum number of facets of a symmetric extension) is super-polynomial, whereas there exist asymmetric extended formulations that use only polynomially many inequalities; i.e., symmetry does matter. On top of that, the considered polytopes were closely related to the matching polytope (used by Yannakakis to establish the TSP result), which rekindled the discussion on the (unconditional) extension complexity of the travelling salesman polytope, and Kaibel asked whether 0/1-polytopes have extended formulations with a polynomial number of inequalities in general or if there exist 0/1-polytopes that need a super-polynomial number of facets in any extension. Beware! This neither contradicts nor is related to the P-vs.-NP question, as we only talk about the number of inequalities and not the encoding length of the coefficients. The question was settled by Rothvoss in 2011 by a very nice counting argument: there are 0/1-polytopes that need a super-polynomial number of inequalities in any extension.
To make the following slightly more formal, let ${P \subseteq {\mathbb R}^n}$ be a polytope (of some dimension). Then an extended formulation for ${P}$ is another polytope ${Q \subseteq {\mathbb R}^\ell}$ such that there exists a linear projection ${\pi}$ with ${\pi(Q) = P}$. The size of an extension ${Q}$ is the number of facets of ${Q}$, and the extension complexity of ${P}$ (denoted by ${\text{xc}(P)}$) is the minimum size of an extension of ${P}$. We are interested in ${\text{xc}(P)}$ where ${P}$ is the travelling salesman polytope. Our proof heavily relies on a connection between the extension complexity of a polytope and communication complexity (the basic connection was made by Yannakakis and it was later extended by Faenza, Fiorini, Grappe, and Tiwary and by Fiorini, Kaibel, Pashkovich, and Theis in 2011). In fact, suppose that we have an inner and an outer description of our polytope ${P}$, say ${P = \text{conv}\{v_1, \dots, v_n\} = \{x \in {\mathbb R}^n \mid Ax \leq b\}}$. Then we can define the slack matrix ${S(P)}$ by ${S_{ij} = b_i - A_i v_j}$, i.e., the slack of the vertices with respect to the inequalities. The extension complexity of a polytope is equal to the nonnegative rank of ${S}$, which is essentially equivalent to determining the best protocol to compute the entries of ${S}$ in expectation (Alice gets a row index and Bob a column index). The latter can be bounded from below by the non-deterministic communication complexity, and we use a certain matrix ${M_{ab} = (1-a^Tb)^2}$ which has large non-deterministic communication complexity (see de Wolf 2003). This matrix is special as it constitutes the slack for some valid inequalities for the correlation polytope, which eventually leads to an exponential lower bound for the extension complexity of the correlation polytope. The latter is affine isomorphic to the cut polytope.
Then via a reduction-like mechanism similar lower bounds are established for the stable set polytope (${\text{xc(stableSet)} = 2^{\Omega(n^{1/2})}}$) for a certain family of graphs and then we use the fact that the TSP polytope contains any stable set polytope as a face (Yannakakis) for suitably chosen parameters and we obtain ${\text{xc(TSP)} = 2^{\Omega(n^{1/4})}}$.
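To make the slack-matrix construction concrete, here is a small sketch of my own (the unit square is my toy example, not one from the paper), tabulating $S_{ij} = b_i - A_i v_j$ directly from a facet description and a vertex list:

```python
# Slack matrix S[i][j] = b_i - A_i . v_j for the unit square:
# rows = facet inequalities Ax <= b, columns = vertices.
# (Example polytope chosen for illustration only.)

A = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # -x<=0, x<=1, -y<=0, y<=1
b = [0, 1, 0, 1]
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]

S = [[bi - (ai[0] * v[0] + ai[1] * v[1]) for v in vertices]
     for ai, bi in zip(A, b)]

for row in S:
    print(row)
# Every entry is nonnegative, and S[i][j] == 0 exactly when
# vertex j lies on facet i.
```

The extension complexity of the square is then the nonnegative rank of this 4×4 matrix; for large polytopes like TSP it is exactly this quantity that the lower bounds above control.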
Here are the slides:
Written by Sebastian
January 5, 2012 at 3:17 pm
Posted in Announcements, mathematics, Talks
## Fundamental principles(?) in mathematics
with 2 comments
As said already in one of my previous posts, David Goldberg and I had a nice discussion about “fundamental concepts” in mathematics. Our definition of “fundamental” was that,
1. once seen, you cannot imagine not having known it beforehand, and it completely changes your way of thinking, and
2. a somewhat realistic approach, i.e., when subtracted from the “world of thinking” something is missing.
So here is our preliminary list of things that we came up with – in random order – and a very brief, (totally biased) meta-description of what we mean by these terms. Of course, this list is highly subjective! For each of these “fundamental concepts” the idea is to have (about) 3 applications and try to distill the main core. There will be (probably) a separate blog post for each point on the list.
1. Identity/Equality. Closely related to being isomorphic. The power of identity is so penetrating that I cannot even find a short explanation. Do I have to? (Aristotle’s first law of thought)
2. Contradiction. Showing that something cannot be true as it leads to a contradiction or inconsistency. Closely related to this is the principium tertii exclusi or law of the excluded third as this is how we often use proofs by contradiction: a statement holds under various assumptions because its negation leads to a contradiction. If you do not believe in the law of the excluded third, then you obtain a different logic/mathematics. In particular, in these logics, usually every proof also constitutes some form of an algorithm, as existence by mere contradiction when assuming non-existence is not allowed. (see also Aristotle’s second/third law of thought)
3. Induction. Establishing a property by relying on the same property for smaller sub-objects.
4. Recursion. Somewhat dual to induction: a larger object is defined as a function of smaller objects that have been subject to the same construction themselves.
5. Fixpoint. The existence of a point that is invariant under a map. Equilibria in games.
6. Symmetry. The notion of symmetry. Take a cube – rotating it does not really change the cube.
7. Invariants. Think of the dimension of a vector space. Invariants are a powerful way to show that two things are not equal (or isomorphic).
8. Limits. What would we do without limits? The idea of hypothetically continuing a process infinitely long. Think of the definition of a derivative.
9. Diagonalization. One of my personal favorites. Constructing an object that is not in a given family by making sure it differs from each member of the family at (at least) one position. Diagonalization often exploits self-references. An example is Cantor’s proof.
10. Double counting. You count a family of objects in two different ways. Then the resulting “amounts” have to be identical. Typical example is the handshaking in graphs.
11. Proof. The notion of proof is very fundamental. Once proven, a statement remains true (provided consistency etc). Interestingly, it can be proven that some things cannot be proven. A good example for the latter is the existence of inaccessible cardinals, which cannot be proven in ZFC (provided ZFC is consistent).
12. Randomness. Randomness is an extremely fundamental concept. One of my favorite applications is probably the probabilistic method. Think about Johnson’s ${7/8}$-approximation algorithm for 3SAT or the PCP theorem.
13. Algorithm. When considering a function ${f: M \rightarrow N}$ we are often not just interested in what ${f}$ computes but in particular in how it can be computed. In this sense the algorithmic paradigm is an additional layer to the somewhat descriptive layer of classical mathematics.
14. Exponential growth. What we were particularly thinking about was the idea that a relative improvement bounded away from ${0}$ ensures exponential progress. This is used regularly in different scaling algorithms such as barrier algorithms, potential reduction methods, and certain flow algorithms.
15. Information. The idea that often a critical amount of information is necessary to decide a property. Then fooling-set-like arguments can show that the information is not sufficient. Prime examples include the classical result that sorting via comparisons needs at least ${\Omega(n \log n)}$ comparisons, communication complexity, and query complexity.
16. Function/Relation. Mapping one set to another. In particular important when the function/relation is homogeneous, i.e., when it preserves the structure.
17. Density and approximation. The idea that a set (such as the reals) can be approximated arbitrarily well by an exponentially smaller set (such as the rationals). This exponential by polynomial approximation is also something that we are using in approximation algorithms, say, when we round the input. In this case the set of polytime solvable (rounded) instances is “dense” in the set of all instances. It can also be found in set theory when using prediction principles (such as Jensen’s diamond principle or Shelah’s Black Box) to predict functions on a stationary set by an exponentially smaller set.
18. Implicit definitions. The concept of defining something not in an explicit manner but as a solution to a set of constraints.
19. Abstraction. The use of variables is so ingrained in us that we cannot even imagine to do serious mathematics without them. But abstraction is much more. It is the ability to see more clearly because we “abstract away” unnecessary details and we use “abstraction” to unify seemingly unrelated things.
20. Existence (in the sense that Brouwer hated). One of the keywords here is probably non-constructivism, and the probabilistic method and indirect arguments are two prominent methods in this category. This was something that Brouwer despised: the idea to infer, e.g., existence of something merely because the contrary statement would lead to a contradiction (Brouwer’s school of thought denies the tertium non datur). The probabilistic method might have been fine with him. Although that is not clear at all, as on a deep level we are merely trading an existential quantifier for a random one… long story…
21. Duality. By duality we mean the wider idea of duality, i.e., for example the forall quantifier and the existential quantifier. Basically, when we talk about duality we often think about some structure describing the “space of positive statements” and a dual structure that describes the “space of negative statements”. In some sense duality is a form of a compact representation of the negation of a statement.
22. Counting. Counting is again something that penetrates every mathematical theory. My favorite application of counting is the Pigeonhole principle.
23. Hume’s principle (suggested by Hanno – see comments). Two quantities are the same if there exists a bijection between them. Somewhat related to “equality”, however here we explicitly ask for the existence of a bijection. For example, there are as many integers as there are rationals.
24. Infinity (suggested by Hanno – see comments). The idea that something is not finite. With the notion of infinity, the notions of countably infinite and uncountably infinite are closely connected. The Continuum Hypothesis (CH) is such a case. It is consistent with ZFC and asserts that the first uncountably infinite cardinal is the size of the power set of the natural numbers (essentially the reals), i.e., that $\aleph_1 = 2^{\aleph_0}$. However, in other models of set theory $\aleph_1 \neq 2^{\aleph_0}$ is possible, by adding, e.g., Cohen reals.
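As a concrete illustration of item 10 (double counting), here is a small toy example of my own: the handshaking lemma counts edge endpoints once over edges and once over vertex degrees, so the two totals must agree.

```python
import itertools
import random

# Build a random graph on 8 vertices (hypothetical example data).
random.seed(0)
n = 8
edges = [e for e in itertools.combinations(range(n), 2)
         if random.random() < 0.4]

# Count edge-endpoint incidences the second way: via vertex degrees.
degree = [0] * n
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Double counting: each edge has exactly two endpoints.
print(sum(degree) == 2 * len(edges))  # True
```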
Written by Sebastian
January 2, 2012 at 8:49 pm
Posted in Crackpot halluzinations, mathematics, things that make me think
Tagged with contradiction, david goldberg, fundamental concepts, logics
http://planetmath.org/BirchAndSwinnertonDyerConjecture
|
# Birch and Swinnerton-Dyer conjecture
Let $E$ be an elliptic curve over $\mathbb{Q}$, and let $L(E,s)$ be the L-series attached to $E$.
###### Conjecture 1 (Birch and Swinnerton-Dyer).
1. $L(E,s)$ has a zero at $s=1$ of order equal to the rank of $E(\mathbb{Q})$.
2. Let $R=\operatorname{rank}(E(\mathbb{Q}))$. Then the residue of $L(E,s)$ at $s=1$, i.e. $\lim_{{s\to 1}}(s-1)^{{-R}}L(E,s)$, has a concrete expression involving the following invariants of $E$: the real period, the Tate-Shafarevich group, the elliptic regulator and the Neron model of $E$.
J. Tate said about this conjecture: “This remarkable conjecture relates the behavior of a function $L$ at a point where it is not at present known to be defined to the order of a group (Sha) which is not known to be finite!” The precise statement of the conjecture asserts that:
$\lim_{s\to 1}\frac{L(E,s)}{(s-1)^{R}}=\frac{|\operatorname{Sha}|\cdot\Omega\cdot\operatorname{Reg}(E/\mathbb{Q})\cdot\prod_{p}c_{p}}{|E_{\operatorname{tors}}(\mathbb{Q})|^{2}}$
where
• $R$ is the rank of $E/\mathbb{Q}$.
• $\Omega$ is either the real period or twice the real period of a minimal model for $E$, depending on whether $E(\mathbb{R})$ is connected or not.
• $|\operatorname{Sha}|$ is the order of the Tate-Shafarevich group of $E/\mathbb{Q}$.
• $\operatorname{Reg}(E/\mathbb{Q})$ is the elliptic regulator of $E(\mathbb{Q})$.
• $|E_{{\operatorname{tors}}}(\mathbb{Q})|$ is the number of torsion points on $E/\mathbb{Q}$ (including the point at infinity $O$).
• $c_{p}$ is an elementary local factor, equal to the cardinality of $E(\mathbb{Q}_{p})/E_{0}(\mathbb{Q}_{p})$, where $E_{0}(\mathbb{Q}_{p})$ is the set of points in $E(\mathbb{Q}_{p})$ whose reduction modulo $p$ is non-singular in $E(\mathbb{F}_{p})$. Notice that if $p$ is a prime of good reduction for $E/\mathbb{Q}$ then $c_{p}=1$, so $c_{p}\neq 1$ for only finitely many primes $p$. The number $c_{p}$ is usually called the Tamagawa number of $E$ at $p$.
The following is an easy consequence of the B-SD conjecture:
###### Conjecture 2 (Parity Conjecture).
The root number of $E$, denoted by $w$, indicates the parity of the rank of the elliptic curve, this is, $w=1$ if and only if the rank is even.
There has been a great amount of research towards the B-SD conjecture. For example, there are some particular cases which are already known:
###### Theorem 1 (Coates, Wiles).
Suppose $E$ is an elliptic curve defined over an imaginary quadratic field $K$, with complex multiplication by $K$, and $L(E,s)$ is the L-series of $E$. If $L(E,1)\neq 0$ then $E(K)$ is finite.
# References
• 1 Claymath Institute, Description, online.
• 2 J. Coates, A. Wiles, On the Conjecture of Birch and Swinnerton-Dyer, Inv. Math. 39, 223-251 (1977).
• 3 Keith Devlin, The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time, 189 - 212, Perseus Books Group, New York (2002).
• 4 James Milne, Elliptic Curves, online course notes.
• 5 Joseph H. Silverman, The Arithmetic of Elliptic Curves. Springer-Verlag, New York, 1986.
• 6 Joseph H. Silverman, Advanced Topics in the Arithmetic of Elliptic Curves. Springer-Verlag, New York, 1994.
Major Section:
Reference
Type of Math Object:
Conjecture
## Mathematics Subject Classification
14H52 Elliptic curves
## Info
Owner: alozano
Added: 2003-08-06 - 15:26
Author(s): alozano
## Versions
(v16) by alozano 2013-03-22
http://crypto.stackexchange.com/questions/4003/perfect-security-definitions/4016
|
# Perfect security definitions
In my notes, there are 2 definitions of perfect security:
1. "For $M \in \{0,1\}^m$, define the distribution $D_M$ on strings as follows: to choose a random member of $D_M$, choose a random $K \in \{0,1\}^n$ and output $Enc(M,K)$. Then $Enc,Dec$ is perfectly secure if $D_M$ is exactly the same for every $M$. That is, for every $\alpha \in \{0,1\}^*$, the probability of $\alpha$ according to $D_M$ is independent of $M$.
2. For every two messages, no function can tell which one has been encrypted. That is, $Enc,Dec$ is perfectly secure if for every $M_0,M_1 \in \{0,1\}^m$ and for every $f: \{0,1\}^* \to \{0,1\}$, the following holds: consider the experiment where $b$ is randomly chosen from $\{0,1\}$ and $K$ is randomly chosen from $\{0,1\}^n$; then the probability that $f(Enc(M_b,K)) = b$ is equal to $1/2$."
I have two questions:
1. Could someone clarify the definition of $D_M$ (the distribution of strings?). I'm not sure I get what is meant by the "distribution of strings" and how it differs from $M$
2. If $M_0$ and $M_1$ are any two $m$-bit messages, how is it possible that their encryption equals $b$? Wouldn't the number of bits in the encrypted message be $m$? If that number is, for example, 10, then how does $P \left ( f( \mathrm{Enc} (M_b, K)) = b \right ) = 0.5$ hold?
-
I have typesetted your post to make it more readable - you can do it too by using LaTeX in between dollar signs. Feel free to edit your post if I accidentally changed the meaning of something. – Thomas Oct 10 '12 at 6:03
Your formulas don't look like a definition of perfect security, but like a definition of the one-time-pad, one scheme known to provide perfect security. – Paŭlo Ebermann♦ Oct 10 '12 at 16:52
I went a little further and transcribed the linked image. I also deleted the one time pad definition to which @Paŭlo's comment (presumably) refers to, since it didn't really seem relevant to the actual question. – Ilmari Karonen Oct 10 '12 at 18:05
## 2 Answers
If M0, and M1 are any 2 m-bit messages, how is it possible that their encryption be equal b?
The definition does not say that their encryption equals $b$. It says that the function $f$ outputs $b$. This function is often called a distinguisher (or attacker), and it should not be able to distinguish which message, $M_0$ or $M_1$, was encrypted. In other words, there exists no distinguisher that can tell which message was encrypted better than with the probability of guessing (0.5).
-
In the first definition, $D_M$ is a probability distribution. As the Wikipedia page notes, there are various ways to specify a probability distribution; one of them, which is what's used here, is to specify how to sample a random variable distributed according to that distribution.
Specifically, what the definition is saying is that $D_M$ is the distribution of ciphertexts obtained by encrypting the message $M$ with a randomly chosen key $K$. In other words, to sample a random variable $X$ distributed according to the distribution $D_M$, you take the message $M$, pick a random key $K$ from the keyspace and let $X = Enc(M,K)$.
(To be specific, the key $K$ should presumably be chosen uniformly from the set of $n$-bit keys $\{0,1\}^n$, although the definition doesn't explicitly say so. It's a common convention in cryptography that random variables chosen from a finite set are assumed to be uniformly distributed unless stated otherwise.)
The definition then says that the encryption is perfectly secure if and only if $D_{M_0} = D_{M_1}$ for any two messages $M_0$ and $M_1$ — that is, if, for any given messages $M_0$ and $M_1$, any given ciphertext $C$ and a random key $K$, the probability that $Enc(M_0,K) = C$ equals the probability that $Enc(M_1,K) = C$.
In the second definition, the notation $f:\{0,1\}^* \to \{0,1\}$ means that $f$ is a function from the set $\{0,1\}^*$ of arbitrary bitstrings to the set $\{0,1\}$ of single bits. That is, $f$ takes a bitstring as input and outputs a single bit.
We can think of the input of $f$ as an encrypted message, and the output of $f$ as a guess about which of $M_0$ or $M_1$ is the plaintext corresponding to the input. The definition then says that, if (and only if) our encryption system is perfectly secure, then for any pair of messages $M_0$ and $M_1$ and any distinguisher $f$, if we take a random message $M$ chosen from the given pair of messages, and encrypt it with a random key $K$ to get a ciphertext $C = Enc(M,K)$, then the guess $f(C)$ has no more (and no less) than a 50% chance of being correct.
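To connect the first definition to a familiar scheme, here is a small sanity check of my own, using the one-time pad (mentioned in the comments above) as the choice of $Enc$: for 2-bit messages one can enumerate $D_M$ exhaustively and verify that it is the same uniform distribution for every $M$.

```python
from collections import Counter

n = 2  # bit length of messages, keys, and ciphertexts

def enc(m, k):
    # One-time pad: bitwise XOR of message and key
    return m ^ k

def D(m):
    # Distribution D_M: encrypt m under every key K in {0,1}^n
    # (uniform key) and count how often each ciphertext occurs.
    return Counter(enc(m, k) for k in range(2 ** n))

dists = [D(m) for m in range(2 ** n)]
print(all(d == dists[0] for d in dists))  # True: D_M is independent of M
```

Since XOR with a fixed message is a bijection on keys, each ciphertext occurs exactly once, so every $D_M$ is the same uniform distribution, which is exactly the first definition of perfect security.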
-
http://mathoverflow.net/revisions/115357/list
|
## Return to Answer
2 added 209 characters in body
I guess you assume that $F_0$ is bounded(?). In this case the answer is "yes".
Fix a point $p$. Fix a small $\varepsilon>0$, smaller than the convexity radius, so that the principal curvatures of $\varepsilon$-spheres in $M$ are uniformly bounded. (This is possible since $M$ has bounded geometry.)
Denote by $S_r$ the sphere with center $p$ and radius $r$ and let $\Sigma_r$ be the inward $\varepsilon$-equidistant to $S_r$.
Note that for $r>2\cdot\varepsilon$ there is a fixed upper bound for the principal curvatures of $\Sigma_r$. Therefore $\ell(t)=\max_{x\in F_t}\{\mathop{\rm dist}_px\}$ grows at most linearly.
P.S. I used that $M$ has bounded geometry in an essential way, but I am not sure if it is necessary.
1
I guess you assume that $F_0$ is bounded (?). In this case the answer is "yes".
Fix a point $p$. Fix $\varepsilon>0$ smaller than convexity radius.
Denote by $S_r$ the sphere with center $p$ and radius $r$ and let $\Sigma_r$ be the inward $\varepsilon$-equidistant to $S_r$.
Note that for $r>2\cdot\varepsilon$ there is a fixed upper bound for the principal curvatures of $\Sigma_r$. Therefore $\ell(t)=\max_{x\in F_t}\{\mathop{\rm dist}_px\}$ grows at most linearly.
http://stats.stackexchange.com/questions/34797/how-can-you-have-a-non-significant-multiple-regression-model-w-significant-pred
|
# How can you have a non-significant multiple regression model w/ significant predictors? [duplicate]
Possible Duplicate:
Not-significant F but a significant coefficient in multiple linear regression
How can a regression be significant yet all predictors be non-significant?
Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic
If in a multiple linear regression (enter method) the general model isn't significant (p > .05 for the overall F test) but one of the predictors is significant (p < .05 for its coefficient), should I consider it as a significant result?
-
Did that actually happen? I wouldn't expect it to. – Michael Chernick Aug 21 '12 at 17:35
I voted to close as a duplicate. I didn't take the time to find the exact thread but questions along these lines have been asked on here many times and I'd suggest doing a search of the site to find your answer. The basic point is that you're doing two different hypothesis tests so you can't expect them to agree all of the time. – Macro Aug 21 '12 at 18:54
@Michael This is not at all hard to reproduce, knowing that including lots of irrelevant variables will increase the overall p-value. Here's a case with up to five variables where you can watch the overall p-value balloon as each variable is added, while the individual p-value changes little. Try this in `R`: `set.seed(17); p <- 5; x <- as.matrix(do.call(expand.grid, lapply(as.list(1:p), function(i) c(-1,1)))); y <- x[,1] + rnorm(2^p, sd=2); temp <- sapply(1:p, function(i) print(summary(lm(y ~ x[, 1:i]))))`. – whuber♦ Aug 21 '12 at 19:55
@whuber Yes I realized that when I read the answer provided by Peter and gung. – Michael Chernick Aug 21 '12 at 19:58
## marked as duplicate by Macro, gung, whuber♦Aug 21 '12 at 19:26
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 2 Answers
This happens because the general model and the specific parameters answer different questions. Whether you should consider the result significant depends on what your original hypotheses are. If they were about the significant variable, controlling for the others, then I'd say "yes". If they were about the whole model, I'd say "No".
In either case, I'd stress effect sizes rather than significance levels.
-
Thank you for your help and suggestions! – Cátia Aug 21 '12 at 20:42
I think @Macro is right; in fact, I'm pretty sure I've answered exactly this question. However, I can't find it, and I think the underlying reason for this happening is unrelated to the reason for the reverse situation. So I'll put down some quick information.
One thing that I think is unfortunate is that the problem of multiple comparisons is always discussed the same / in only one way, namely the comparisons between multiple groups. But this issue occurs everywhere, not just in that situation. For example, if you run a multiple regression with 20 covariates where the null hypothesis obtains for each of them, you should expect that in the long run, on average one of them will appear 'significant' anyway in each model. There are various ways of addressing this issue (e.g., alpha correction techniques), but the most common is to use a simultaneous test, that is, a global $F$ test of the model.
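The "20 null covariates" intuition can be sketched numerically. This is my own simulation, using independent null z-statistics as a stand-in for the individual coefficient t-tests at the two-sided 5% level:

```python
import random
import statistics

# Simulate many "models", each with 20 covariates for which the null
# hypothesis is true; count how many appear 'significant' per model.
random.seed(1)
trials = 2000
false_pos = []
for _ in range(trials):
    zs = [random.gauss(0, 1) for _ in range(20)]      # 20 null test statistics
    false_pos.append(sum(abs(z) > 1.96 for z in zs))  # |z| > 1.96 ~ p < .05

print(statistics.mean(false_pos))  # close to 20 * 0.05 = 1 in expectation
```

So even with nothing going on, roughly one covariate per model looks significant on average, which is exactly the inflation the global test guards against.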
Thus, my first guess is that the model test is doing its job and protecting you from inflated family-wise type I error. That is, it is telling you to ignore any possible significant betas. (I apologize if this is bad news.)
For the sake of completeness, there is of course another possibility. The power of a simultaneous test can be quite weak, especially in cases where there are many unrelated covariates, only a few (or one) actually related covariates that are weakly correlated with the response, and a large error term. Despite that fact, you should be cautious about concluding that the beta in question is 'significant'.
-
With all due respect, @gung, if you're pretty sure you've answered this exact question then it seems like perhaps you should be casting a close vote. High rep members like yourself are relied upon to lead by example. I'm not sure answering questions you know are duplicates is a wonderful example. – Macro Aug 21 '12 at 19:01
That's a valid point, @Macro, but if I (as a high rep member w/ some sense of where & how to look) can't find it after 5 minutes of searching, what chance does someone new to the site have? Instead of spending more of my time trying to figure out how to find it, I spent 5 minutes answering this Q. I recognize there are cons to this approach, but there are pros as well. I imagine it's more helpful for the OP, & it will be easier for future visitors to find a needle in a haystack if there are more needles. I just don't think the cons outweigh the pros in this case. – gung Aug 21 '12 at 19:06
– Macro Aug 21 '12 at 19:11
@gung I sympathize; it's pretty hard sometimes to find a duplicate you know is there. I managed to find a close one just by scanning down the "related" links automatically offered by the system (see at the right). When one of those is a near hit, you can be pretty sure that either the OP did little or no research while writing the question or they are not sufficiently conversant with the terminology to recognize a duplicate. It's good to take some care in the latter case to explain why something is a duplicate--but then close the question anyway. – whuber♦ Aug 21 '12 at 19:29
2
@Michael One is torn between competing objectives: closing quickly can help the OP by directing them immediately to a solution. Leaving a question open allows it to collect answers which then later will either disappear or will have to be merged with the duplicate, which can create a little confusion (as well as quite a bit more work for the mods :-). Because a closed question is still visible, can still be discussed (as we are doing here), and can easily be reopened, I find it's best to err on the side of closing so that such early answers do not accumulate. – whuber♦ Aug 21 '12 at 20:10
|
http://mathhelpforum.com/advanced-algebra/173654-rank-theory-print.html
|
# rank theory
• March 6th 2011, 11:22 AM
situation
rank theory
literally have no clue on how to do this one... :'(
Prove that if A and B are n x n matrices and if rank(A) = n and rank(B) = n then rank(AB) = n.
any help would be much appreciated here as i have a test tomorrow!
• March 6th 2011, 11:57 AM
TheEmptySet
Quote:
Originally Posted by situation
literally have no clue on how to do this one... :'(
Prove that if A and B are n x n matrices and if rank(A) = n and rank(B) = n then rank(AB) = n.
any help would be much appreciated here as i have a test tomorrow!
I don't know what tools you have exactly but here is a way.
A square matrix has full rank if and only if it has a nonzero determinant (equivalently, only the zero vector is in its nullspace). We also know that
$\det(AB)=\det(A)\det(B)$
• March 6th 2011, 12:24 PM
HallsofIvy
Another way- the "rank" of a linear transformation, A, from vector space V to itself is the dimension of A(V). Here, A is given by an n by n matrix so it is from $R^n$ to $R^n$. Since the rank of A is n, $A(R^n)= R^n$. Similarly, the rank of B is n so that $B(R^n)= R^n$. Putting those together, AB maps $R^n$ into all of $R^n$: $AB(R^n)= R^n$ and so AB has rank n.
• March 6th 2011, 01:31 PM
situation
Quote:
Originally Posted by HallsofIvy
Another way- the "rank" of a linear transformation, A, from vector space V to itself is the dimension of A(V). Here, A is given by an n by n matrix so it is from $R^n$ to $R^n$. Since the rank of A is n, $A(R^n)= R^n$. Similarly, the rank of B is n so that $B(R^n)= R^n$. Putting those together, AB maps $R^n$ into all of $R^n$: $AB(R^n)= R^n$ and so AB has rank n.
please could you just briefly explain what is meant by the dimension, a linear transformation, and a vector space? i have some pretty awful lecture notes which just aren't making sense to me in working out what your explanation means.
thank you both very much!
|
http://mathoverflow.net/questions/106571/a-space-in-which-sequences-have-unique-limits-but-compact-sets-need-not-be-closed/106583
|
## A space in which sequences have unique limits but compact sets need not be closed
A topological space is KC if every compact subspace is closed. A topological space is US if every convergent sequence has exactly one limit. Does someone know an easy example of a US space which is not KC? Thanks.
-
7
These terms happen to both 1) be terrible search terms and 2) have unguessable meanings if you're not familiar with them, so you might want to include definitions. – Qiaochu Yuan Sep 7 at 5:28
Sorry, I edit to include definitions. – Pedro Perez Sep 7 at 6:15
1
Take the finite complement topology on any infinite set. – Evan Jenkins Sep 7 at 6:39
2
That space is not US. – Pedro Perez Sep 7 at 7:53
2
By the way, what do KC and US stand for? I imagine KC means "kompact closed", but US is puzzling me. ("unique sequence"?) – Henry Cohn Sep 7 at 12:32
## 3 Answers
To create a counterexample X, start with the closed interval [0,1] (with the usual topology) and attach a new point z whose neighborhoods are open dense subsets of [0,1].
Observe that [0,1] is a compact nonclosed subspace of X and thus X is not a KC space. However, no sequence in [0,1] converges to z, and in particular all convergent sequences in X have unique limits.
The finite complement topology on an infinite set does not yield a counterexample, since every injective sequence converges to every point of the space.
In general no counterexample Y can be a sequential space, since if Y is a sequential space then Y is a KC space iff Y is a US space. (Recall Y is a sequential space if every nonclosed set B contains a convergent sequence whose limit lies outside B.)
-
Start with the one point compactification of the minimal uncountable well ordered space and then split the maximum point into two points.
-
I refer to COROLLARY 1 of This Article.
In COROLLARY 1 of it, $X^+$ denotes the one point compactification of the topological space $X$:
COROLLARY: Let $X$ be a Hausdorff space. Then:
(a) $X^+$ is always $US$.
(b) $X^+$ is $KC$ if and only if $X$ is a $\kappa$ space.
So it suffices to choose a Hausdorff space $X$ which is not a $\kappa$ space; then $X^+$ is $US$ but is not $KC$.
PS: The topological space $X$ is called a $\kappa$ space if a subspace $A$ is closed in $X$ if and only if $A \cap K$ is closed in $K$ for every compact subset $K \subset X$.
-
1
+1 for the link. Theorem 1 says: $T_2 \implies KC \implies US \implies T_1$ and none of the implications reverses even for compact spaces. – Ramiro de la Vega Sep 7 at 14:02
|
http://physics.stackexchange.com/questions/3402/is-relativity-necessary-for-the-existence-of-life
|
# Is relativity necessary for the existence of life?
If the universe didn't have the relativity principle, would it be able to support life?
Life consists of very complicated organisms. The operation of these organisms depends on the laws of physics.
If the laws of physics depended on absolute velocity, then it seems that life would have a more difficult task; organisms would have to adapt their biochemistry to the different absolute speeds of the planet as it moves with or against the motion of the sun around the galaxy.
If the laws of physics depended on the absolute gravitational potential, or on acceleration, then the biochemistry of life would have to adapt to the different accelerations / gravitational potential as life colonized higher altitudes. In addition, there would be a seasonal effect as the earth moves closer and farther away from the sun.
I think there's a way this question could be answered quantitatively. Begin with a modification of general relativity such as the parameterized post-Newtonian (PPN) formalism. See the Wikipedia article:
http://en.wikipedia.org/wiki/Parameterized_post-Newtonian_formalism
Now analyze an important biological molecule whose shape is extremely important to life such as hemoglobin. Find out what range of post-Newtonian parameters are compatible with the operation of that molecule.
Unfortunately, I suspect that this is a research problem. If someone solves it, I presume they will publish it.
-
3
– Johannes Jan 20 '11 at 5:49
Nice paper @Johannes. @Carl your question sounds a little vague. Relativity is simply a restatement of the fact that the laws of physics are local and causal. Without either one of those any complex phenomena, let alone life, would not be possible. – user346 Jan 20 '11 at 9:17
1
One have to work out more elaborate definition of "life" to address this question. Consider Conway's "game of life" -- there are "complicated organisms" too. And the only thing necessary is a two-state cellular automata. – Kostya Jan 20 '11 at 9:37
But life with Galilean relativity sounds fine to me. Not that I'm complaining or anything. – Greg P Jan 20 '11 at 14:38
1
Without relativity, physics wouldn't be... physics? It's impossible to imagine in my view, because it is very much tied to other theories (including QFT). – Noldorin Jan 21 '11 at 2:07
## 8 Answers
This question has been modified into a more specific form concerning two structures: Haemoglobin and the PPN Formalism for Gravitation. So I shall make some general comments about how to consider this: a linked Supplementary question might then be the best means to progress.
Haemoglobin is a large (class of) molecules, which have high complexity. Approx molecular weight 68000 (where Hydrogen=1). They are in many terrestrial life forms and they are a key component of blood. There are undoubtedly many outstanding scientific questions concerning the origin and dynamics of such a large molecule. One aspect in particular is the whole issue of "protein folding" which would help form them efficiently and successfully.
In terms of physics there are two aspects: fundamental-physics aspects and local-environment aspects. The latter are the most important in practice, with, e.g., temperature undoubtedly being important; pressure (i.e. blood pressure) is likely a function of local gravitation too. So there is the whole question of the biomechanics of a body in differing gravitational fields (surface of the Earth, surface of the Moon, in vacuum, on Jupiter, etc.) to consider. As the body is the "manufacturing unit" for this molecule, its well-being is important too - and vacuum conditions and Jupiter conditions are generally considered hostile to life as we understand it.
In terms of the fundamental physics and laws, the most important here are those of Chemistry which come from Quantum Mechanics. In a hypothetical different universe with a different electric charge for example undoubtedly different molecules would form; perhaps no molecules can form, only atoms.
In terms of our universe (whatever gravitational laws really apply: Newton, Einstein, etc) the local strength of the gravitational field will be an important parameter in the body formation. In terms of biochemistry it probably only has an indirect effect from its contribution to environment (air,sea) pressure and other thermodynamic variables.
All these issues, viewed from the perspective of life's history and presence on other planets and environments, are important research topics in the field of astrobiology.
The PPN formalism was invented to compare Einstein's General Relativity with competing theories via delicate astronomical measurements, but GR remains the best theory in terms of these tests. The best way for a gravity theory parameter to affect quantum parameters would be if, in a theory of Quantum Gravity, it was found that electric charge, say, was a function of the gravitational constant (which then might not be constant). So that theory might have additional solutions with unusual parameters which might or might not allow the formation of complex molecules.
-
This doesn't answer the question but I need to award the bounty. I may think about writing a paper on this, but I think it will require learning a great deal of detail about chemistry. – Carl Brannen Feb 4 '11 at 23:48
I expect that when you start writing the paper you will identify the various issues that arise (and there certainly are quantitative aspects in the astrobiology part), as well as the physical chemistry (reaction rates, thermodynamics), and the biochemistry (role of amino acids, maybe genetics even DNA). The latter topics might be less quantitative than one would like. Then there is the whole incomplete world of quantum gravity. So hopefully you will be able to pin down the issue that you are after after working on a paper covering all these topics. – Roy Simpson Feb 5 '11 at 9:42
– Roy Simpson Feb 5 '11 at 11:41
one could tackle your question a little more formally if one looks at the limit $$c\rightarrow\infty\, .$$
We can now ask what happens in the several theories and what implications there would be for life.
## A naive attempt
One starting point is indeed the Wikipedia article on physical constants. You can look for your favourite one depending on the speed of light and look on the corresponding site what it implies.
Ok, lets do this for one thing I just have chosen (almost) randomly: The Rydberg constant $R_\infty$.
It is given by $$R_\infty = \frac{m_e e^4}{8\epsilon_0^2h^3 \mathbf{c}} \approx 1.097 \times 10^7 m^{-1}\, .$$
As the article states, it is the most accurately measured fundamental physical constant and its value can be derived from first principles. Interesting to know, I always thought this was the fine-structure constant $\alpha$.
We state that $$\lim_{c\rightarrow \infty}R_\infty = 0\, .$$
The Rydberg constant has its interpretation as the lowest wavenumber $\lambda_{ion} = 1/R_\infty \rightarrow \infty$ that can ionize the hydrogen atom. This is linked to some lowest energy $$E_{ion} = c \frac{h}{\lambda_{ion}} \rightarrow \frac{m_e e^4}{8\epsilon_0^2h^2}\, .$$
So it seems like we have won nothing at all. The wavelength goes to infinity but the corresponding energy remains constant. Or do we run into further problems because we have to look at the permittivity of vacuum $\epsilon_0$, which is also linked to $c$ via $\mu_0\epsilon_0 = 1/c^2$? This is puzzling - we cannot answer the question from this viewpoint, but we have gained a nice hint: everything we are discussing corresponds to wave propagation in electrodynamics.
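The cancellation of $c$ in $E_{ion}$ can be checked numerically. A quick sketch, treating $\epsilon_0$ as a fixed measured value and using CODATA-style constants (assumed here, rounded):

```python
# Check that E_ion = h * c * R_inf is independent of c, since R_inf ~ 1/c.
m_e  = 9.1093837e-31   # electron mass, kg
e    = 1.60217663e-19  # elementary charge, C
eps0 = 8.8541878e-12   # vacuum permittivity, F/m
h    = 6.62607015e-34  # Planck constant, J s
c    = 2.99792458e8    # speed of light, m/s

# Rydberg constant from first principles:
R_inf = m_e * e**4 / (8 * eps0**2 * h**3 * c)

# Ionization energy of hydrogen; the factor of c cancels against 1/c in R_inf.
E_ion = h * c * R_inf

print(R_inf)        # ~1.097e7 m^-1
print(E_ion / e)    # ~13.6 eV
```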
## Electromagnetic waves
Wave propagation at a certain frequency $\omega$ through vacuum is described by the Helmholtz equation $$\Delta A_{\mu} + \frac{\omega^2}{c^2} A_{\mu} = 0$$ which also holds for quantum electrodynamics as discussed in another thread. Here we can see clearly what happens if $c\rightarrow \infty$:
The Helmholtz equation reduces to the Laplace equation $$\Delta A_\mu = 0$$ which can be interpreted as saying that everything that happens does so instantaneously - there is no retardation anymore. This implies that everything happens at the same time, at least in electrodynamics. In fact, this should also hold for all (effective) theories that can be described by interactions via light particles or other massless particles, since they also travel at $c$.
So to speak, the speed of light is something like a translation between space and time. If $c$ goes to infinity, maybe even the notion of time (and energy as the associated quantity) does not make sense.
I don't know what will happen, but one of the two cases seem to be plausible if $c\rightarrow\infty$:
Either everything will happen instantaneously or, maybe worse, everything will have to remain unchanged forever (static) - I don't think that life as we think of it is possible under these circumstances.
Sincerely
Robert
-
First, the principle of relativity is the statement that the laws of physics are invariant in all frames of reference.
So now that we have physical laws that are a function of the frame of reference, the answer to your question would of course depend on exactly what laws you replace them with. If there were only some minute change in physical constants throughout space, we would not necessarily have a problem; I would propose we wouldn't even have a theoretical problem, as many of these numbers seem arbitrary in the SM.
In fact, there are many more radical changes you can make to the physical laws that would not result in any (immediately) observable change at all - just think of string theory or most TOEs ;)
-
The main problem is that the sun wouldn't work - no mass-energy conversion. Recall all the unusual and baroque theories of the source of solar radiation before relativity and QM (a giant coal furnace, gravitational collapse, ...).
-
1
mass-energy conversion is behind all energy sources, not just nuclear. you are right that the sun wouldn't work, but that would be the case even if it were a giant coal furnace. – Jeremy Jan 21 '11 at 1:41
The question is not about failed theories. – Carl Brannen Jan 21 '11 at 4:23
What do you mean, not about failed theories? Before relativity it was a genuine struggle to explain how the sun could have existed for more than 10^7 years or so, which is the energy you get from classical gravitational collapse. You need E=mc^2 to get a plausible timescale for the sun's energy depletion given fossil records. Your question is "what happens without relativity" which includes "what happens without E=mc^2," and one consequence is certainly that stars have very short lifetimes. – kharybdis Jan 31 '11 at 1:38
There would be the problem of $E~=~mc^2$ and solar power. We might imagine though some non-relativistic form of nuclear energy. There is also the likely result that the spectra of elementary particles is determined by gravity. QCD on the boundary of an AdS spacetime is equivalent to the symmetries of the graviton in the interior. So without relativity the world might appear very differently.
This question is a bit of a “what if?” sort of question.
-
Of course changing the details would change the details. That's not what I'm asking. – Carl Brannen Jan 21 '11 at 4:19
Special relativistic dynamics is the reason that electrons have spin and behave as fermions. These properties of the electron are the reason the various elements have different chemical properties - in other words, the reason we have chemistry. Chemistry is the reason we have molecules. Without molecules there would be no life; thus without special relativity there would be no life.
-
Of course if the laws of physics were different life would be different. Anybody knows that. What I'm asking is if life would be so much more difficult as to be nearly impossible due to the laws depending on reference frame and or gravitational potential. – Carl Brannen Jan 21 '11 at 4:18
1
It can't get more difficult/impossible than "no life". – Vagelford Jan 21 '11 at 6:55
And when we are saying "without relativity", I am assuming Newtonian. Because if you don't assume a framework in physics you can't give any answer (you can't give an answer on the basis of unknown principles). – Vagelford Jan 21 '11 at 7:05
Even with relativity organisms will meet different gravitational potentials at different altitudes. This reduces air density and pressure and causes breathing difficulties for nonadapted species. Without General Relativity (or a similar theory) we might also have a problem putting together a Cosmological Theory. Thus the Universe might not get started at all....
-
... which would make this a moot question, right? +1 – user346 Jan 20 '11 at 19:11
Let me explain: Different altitudes do not pose a significant problem for organisms as far as their basic biochemistry goes. The Krebs cycle works the same at all altitudes. What would pose a problem is if, for example, mitochondria only worked at some particular gravitational potential. – Carl Brannen Jan 21 '11 at 4:16
As far as I know nothing works too well at the gravitational potentials found inside the event horizon of black holes. – Roy Simpson Jan 21 '11 at 14:22
So the only alternative physics theory you can imagine is one appropriate to the insides of black holes? – Carl Brannen Jan 21 '11 at 23:17
Well consider what a physics theory is. Lets say a set of consistent rules generating a Universe. Well Cellular Automata do that; some even claim to model "Life". Most such rules dont have relativity, but some have speed limits. Most dont generate complex Kreb Cycles. But does that matter? It is all quite open ended... – Roy Simpson Jan 22 '11 at 17:16
another possibility is that, assuming life could evolve into highly intelligent and technically capable lifeforms, it could then propagate at exponentially fast rates and take over the universe.
-
This is silly. Relativity is not the only possible theory where speeds are limited. – Carl Brannen Jan 21 '11 at 4:23
|
http://id.wikipedia.org/wiki/Urutan_Leksikografis
|
# Lexicographic order
From Wikipedia, the free encyclopedia
For ordering systems with similar names outside mathematics, see Alphabetical order and Collation.
Orderings of the 3-subsets of $\scriptstyle \{1,...,6\}$ (and the corresponding binary vectors)
When the (blue) triples are in lex order the (red) vectors are in revlex order, and vice versa. The arrangements on the right side show colex and revcolex order.
In mathematics, the lexicographic order (also known as lexical order or dictionary order) is a generalization of the alphabetical ordering of words, based on the ordering of their letters.
## Definition
Given two partially ordered sets A and B, the lexicographical order on the Cartesian product A × B is defined as
(a,b) ≤ (a′,b′) if and only if a < a′ or (a = a′ and b ≤ b′).
The result is a partial order. If A and B are totally ordered, then the result is a total order as well.
More generally, one can define the lexicographic order on the Cartesian product of n ordered sets, on the Cartesian product of a countably infinite family of ordered sets, and on the union of such sets.
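The definition on a product of two ordered sets can be sketched directly; in Python, for example, the built-in comparison of tuples happens to implement exactly this order:

```python
def lex_leq(p, q):
    """(a, b) <= (a', b')  iff  a < a'  or  (a == a' and b <= b')."""
    (a, b), (a2, b2) = p, q
    return a < a2 or (a == a2 and b <= b2)

pairs = [(2, 1), (1, 3), (1, 2), (2, 0)]

# Python's tuple comparison sorts by the same lexicographic rule:
assert sorted(pairs) == [(1, 2), (1, 3), (2, 0), (2, 1)]
assert lex_leq((1, 3), (2, 0))        # first coordinates decide: 1 < 2
assert not lex_leq((2, 0), (1, 3))
```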
## Motivation and uses
The name of the lexicographic order comes from its generalizing the order given to words in a dictionary: a sequence of letters (that is, a word)
a1a2 ... ak
appears in a dictionary before a sequence
b1b2 ... bk
if and only if the first ai, which is different from bi, comes before bi in the alphabet.
That comparison assumes both sequences are the same length. To ensure they are the same length, the shorter sequence is usually padded at the end with enough "blanks" (a special symbol that is treated as coming before any other symbol). This also allows ordering of phrases. For the purpose of dictionaries, etc., padding with blank spaces is always done. See alphabetical order.
For example, the word "Thomas" appears before "Thompson" in dictionaries because the letter 'a' comes before the letter 'p' in the alphabet. The 5th letter is the first that is different in the two words; the first 4 letters are "Thom" in both. Because it is the first difference, the 5th letter is the most significant difference (for an alphabetical ordering).
A lexicographical ordering may not coincide with conventional alphabetical ordering. For example, the numerical order of Unicode codepoints does not always correspond to traditional alphabetic orderings of the characters, which vary from language to language. So the lexicographic ordering induced by codepoint value sorts strings in an unambiguous canonical order, but it does not necessarily "alphabetize" them in the conventional sense.
An important property of the lexicographical order is that it preserves well-orders, that is, if A and B are well-ordered sets, then the product set A × B with the lexicographical order is also well-ordered.
An important exploitation of lexicographical ordering is expressed in the ISO 8601 date formatting scheme, which expresses a date as YYYY-MM-DD. This date ordering lends itself to straightforward computerized sorting of dates such that the sorting algorithm does not need to treat the numeric parts of the date string any differently from a string of non-numeric characters, and the dates will be sorted into chronological order. Note, however, that for this to work, there must always be four digits for the year, two for the month, and two for the day, so for example single-digit days must be padded with a zero yielding '01', '02', ..., '09'.
Another example of digits ordered lexicographically is 101,102,103,104,105,106,107,108,109,110,111,112... 200, 201, 202 etc.
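Both points - that zero-padded ISO 8601 dates sort chronologically under plain string comparison, and that unpadded numbers do not - are easy to demonstrate, e.g. in Python:

```python
# ISO 8601 dates sort chronologically as plain strings,
# because every field is zero-padded to a fixed width.
dates = ["2021-02-03", "2019-12-31", "2021-01-15", "2019-02-01"]
assert sorted(dates) == ["2019-02-01", "2019-12-31", "2021-01-15", "2021-02-03"]

# Without padding, lexicographic and numeric order disagree:
assert sorted(["2", "10", "1"]) == ["1", "10", "2"]   # lexicographic
assert sorted([2, 10, 1]) == [1, 2, 10]               # numeric
```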
Another generalization of lexical ordering occurs in social choice theory (the theory of elections). Consider an election in which there are 4 candidates A, B, C and D, each voter expresses a top-to-bottom ordering of the candidates, and the voters' orderings are as follows:
| 18% | 17% | 33% | 32% |
|-----|-----|-----|-----|
| A   | B   | C   | D   |
| B   | A   | D   | B   |
| C   | C   | A   | A   |
| D   | D   | B   | C   |
The MinMax voting method is a simple Condorcet method that counts the votes as in a round-robin tournament (all possible pairings of candidates) and judges each candidate according to its largest "pairwise" defeat. The winner is the candidate whose largest defeat is the smallest. In the example:
• The largest defeat of A is by D: 65% (33%+32%) rank D over A.
• The largest defeat of B is by D: 65% (33%+32%) rank D over B.
• The largest defeat of C is by A (or B): 67% (18%+17%+32%) rank A over C (and B over C).
• The largest defeat of D is by C: 68% (18%+17%+33%) rank C over D.
MinMax declares a tie between A and B since the largest defeats for both are the same size, 65%. This is like saying "Thomas" and "Thompson" should be at the same position because they have the same first letter. However, if the defeats are compared lexically, we have the MinLexMax method. With MinLexMax, because the largest defeats of A and B are the same size, their next largest defeats are then compared:
• A's next largest defeat is 0%. (This is a padding, since A has only one defeat.)
• B's next largest defeat is by A: 51% (18%+33%) rank A over B.
Since B's next largest defeat is larger than A's, MinLexMax elects A, which makes more sense than the MinMax tie since a majority rank A over B.
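A minimal sketch of the MinLexMax comparison (hypothetical helper names; the defeat percentages are the ones from the example above): each candidate's defeats are sorted largest-first, right-padded with 0%, and compared lexicographically.

```python
# Each candidate's list of pairwise defeats, in percent:
defeats = {
    "A": [65],        # only defeat: D over A, 65%
    "B": [65, 51],    # D over B 65%, A over B 51%
    "C": [67],        # A (or B) over C, 67%
    "D": [68],        # C over D, 68%
}

def key(cand, width=3):
    """Defeats sorted largest-first, padded with 0% for lexical comparison."""
    ds = sorted(defeats[cand], reverse=True)
    return tuple(ds + [0] * (width - len(ds)))

winner = min(defeats, key=key)
assert key("A") < key("B")   # (65, 0, 0) < (65, 51, 0): the tie is broken
assert winner == "A"         # MinLexMax elects A
```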
Another usage in social choice theory is the Ranked Pairs voting method. Although usually defined by a procedure that constructs the order of finish, Ranked Pairs is equivalent to finding which of all possible orders of finish is best according to a minlexmax comparison of the majorities they reverse. In the example above, the Ranked Pairs order of finish is ABCD (which elects A). ABCD affirms the majorities who rank A over B, A over C, B over C and C over D, and reverses the majorities who rank D over A and D over B. The largest majority that ABCD reverses is 65%. The only other ordering that wouldn't reverse a larger majority is BACD (which also reverses 65%). ABCD is a better order of finish than BACD because the lexically relevant set of majorities—the majorities on which ABCD and BACD disagree—is {A over B} and BACD reverses the largest majority in this set.
## Case of multiple products
Suppose $\{ A_1, A_2, \cdots, A_n \}$ is an n-tuple of sets, with respective total orderings $\{ <_1, <_2, \cdots, <_n \}$. The dictionary ordering $<^d$ of $A_1 \times A_2 \times \cdots \times A_n$ is then
$(a_1, a_2, \dots, a_n) <^d (b_1,b_2, \dots, b_n) \iff (\exists\ m > 0) \ (\forall\ i < m) (a_i = b_i) \land (a_m <_m b_m)$
That is, $a_m <_m b_m$ for some $m$ while all the preceding terms are equal. Informally, $a_1$ represents the first letter, $a_2$ the second, and so on when looking up a word in a dictionary, hence the name.
This could be more elegantly stated by recursively defining the ordering of any set $C= A_j \times A_{j+1} \times \cdots \times A_k$, represented by $<^d (C)$. This will satisfy
$a <^d (A_i)\ a' \iff (a <_i a')$
$(a,b) <^d (A_i \times B)\ (a',b') \iff a <^d (A_i)\ a' \lor ( a=a' \ \land \ b <^d (B)\ b')$
where $B = A_{i+1} \times A_{i+2} \times \cdots \times A_n.$
To put it more simply, compare the first terms. If they are equal, compare the second terms – and so on. The relationship between the first corresponding terms that are not equal determines the relationship between the entire elements.
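This "first unequal coordinate decides" rule transcribes directly into code; Python tuples of equal length compare by exactly this dictionary ordering, which makes it easy to check:

```python
def lex_lt(xs, ys):
    """Strict dictionary order on same-length tuples: the first
    unequal coordinates decide; equal tuples are not ordered."""
    for a, b in zip(xs, ys):
        if a != b:
            return a < b
    return False  # all coordinates equal

# Python's built-in tuple comparison agrees with the definition:
assert lex_lt((1, 5, 9), (1, 6, 0)) and (1, 5, 9) < (1, 6, 0)
assert lex_lt((0, 9, 9), (1, 0, 0))   # first terms already differ
assert not lex_lt((2, 0), (2, 0))     # equal tuples
```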
## Groups and vector spaces
If the component sets are ordered groups then the result is a non-Archimedean group, because e.g. n(0,1) < (1,0) for all n.
If the component sets are ordered vector spaces over R (in particular just R), then the result is also an ordered vector space.
## Ordering of sequences of various lengths
Given a partially ordered set A, the above considerations allow to define naturally a lexicographical partial order $<^\mathrm{d}$ over the free monoid A* formed by the set of all finite sequences of elements in A, with sequence concatenation as the monoid operation, as follows:
$u <^\mathrm{d} v$ if
• $u$ is a prefix of $v$, or
• $u=wau'$ and $v=wbv'$, where $w$ is the longest common prefix of $u$ and $v$, $a$ and $b$ are members of A such that $a<b$, and $u'$ and $v'$ are members of A*.
If < is a total order on A, then so is the lexicographic order <d on A*. If A is a finite and totally ordered alphabet, A* is the set of all words over A, and we retrieve the notion of dictionary ordering used in lexicography that gave its name to the lexicographic orderings. However, in general this is not a well-order, even though it is on the alphabet A; for instance, if A = {a, b}, the language {anb | n ≥ 0} has no least element: ... <d aab <d ab <d b. A well-order for strings, based on the lexicographical order, is the shortlex order.
Similarly we can also compare a finite and an infinite string, or two infinite strings.
Comparing strings of different lengths can also be modeled as comparing strings of infinite length by right-padding finite strings with a special value that is less than any element of the alphabet.
This ordering is the ordering usually used to order character strings, including in dictionaries and indexes.
### Quasi-lexicographic order
The quasi-lexicographic order on the free monoid $A^*$ over an ordered alphabet $A$ orders strings firstly by length, so that the empty string comes first, and then, within strings of fixed length $n$, by lexicographic order on $A^n$.[1]
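For illustration (mine, not from the article), the quasi-lexicographic (shortlex) order can be realized in Python with a length-then-word sort key:

```python
# Quasi-lexicographic (shortlex) order: sort by length first, then lexicographically.
def shortlex_key(word):
    return (len(word), word)

words = sorted(["b", "aa", "ab", "a", ""], key=shortlex_key)
# The empty string comes first, then length-1 words, then length-2 words.
assert words == ["", "a", "b", "aa", "ab"]
```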
## Generalization
Consider the set of functions f from a well-ordered set X to a totally ordered set Y. For two such functions f and g, the order is determined by the values for the smallest x such that f(x) ≠ g(x).
If Y is also well-ordered and X is finite, then the resulting order is a well-order. As already shown above, if X is infinite this is in general not the case.
If X is infinite and Y has more than one element, then the resulting set $Y^X$ is not a countable set; see also cardinal exponentiation.
Alternatively, consider the functions f from an inversely well-ordered X to a well-ordered Y with minimum 0, restricted to those that are non-zero at only a finite subset of X. The result is well-ordered. Correspondingly we can also consider a well-ordered X and apply lexicographical order where a higher x is a more significant position. This corresponds to exponentiation of ordinal numbers $Y^X$. If X and Y are countable then the resulting set is also countable.
## Monomials
In algebra it is traditional to order terms in a polynomial by ordering the monomials in the indeterminates. This is fundamental in order to have a normal form. Such matters are typically left implicit in discussion between humans, but must of course be dealt with exactly in computer algebra. In practice one has an alphabet of indeterminates X, Y, ... and orders all monomials formed from them by a variant of lexicographical order. For example, if one decides to order the alphabet by
X < Y < ...
and also to look at higher terms first, that means ordering
$\cdots < X^3 < X^2 < X$
and also
$X < Y^k$ for all $k$.
There is some flexibility in ordering monomials, and this can be exploited in Gröbner basis theory.
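One common choice (a sketch of mine; conventions vary, and this is plain lex rather than the exact convention described above) is to encode a monomial by its exponent vector and compare the vectors as tuples, with the first variable most significant:

```python
# Pure lexicographic monomial order: encode X^a * Y^b as the tuple (a, b)
# and compare tuples (the first variable dominates).
def lex_greater(m1, m2):
    return m1 > m2  # Python tuple comparison is lexicographic

# X^2*Y > X*Y^3 under lex with X as the most significant variable:
assert lex_greater((2, 1), (1, 3))
# With equal X-exponents, the Y-exponent decides:
assert lex_greater((1, 3), (1, 1))
```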
## Decimal fractions
For decimal fractions from the decimal point, a < b applies equivalently for the numerical order and the lexicographic order, provided that numbers with a recurring decimal 9 like .399999... are not included in the set of strings representing numbers. With that restriction there is an order-preserving bijection between the strings and the numbers.
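This agreement is easy to check mechanically (a sketch of mine, using fixed-width fractional strings so the recurring-9 issue never arises):

```python
import random

random.seed(1)
for _ in range(1000):
    # Fixed-width fractional strings like ".371829".
    a = ".%06d" % random.randrange(10**6)
    b = ".%06d" % random.randrange(10**6)
    # String (lexicographic) order agrees with numerical order.
    assert (a < b) == (float("0" + a) < float("0" + b))
```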
## Reverse lexicographic order
In a common variation of lexicographic order, one compares elements by reading from the right instead of from the left, i.e., the right-most component is the most significant, e.g. applied in a rhyming dictionary.
In the case of monomials one may sort the exponents downward, with the exponent of the first base variable as primary sort key, e.g.:
$x^2 y z^2 < x y^3 z^2$.
Alternatively, sorting may be done by the sum of the exponents, downward.
## References
1. ^ Calude, Cristian (1994). Information and randomness. An algorithmic perspective. EATCS Monographs on Theoretical Computer Science. Springer-Verlag. p. 1. ISBN 3-540-57456-5. Zbl 0922.68073.
http://math.stackexchange.com/questions/189694/abelian-quotient-group
# Abelian quotient group
I'm stuck on the following practice problem. Any hints would be appreciated.
Suppose $N$ is a normal subgroup of $G$ such that every subgroup of $N$ is normal in $G$ and $C_{G}(N) \subset N$. Prove that $G/N$ is abelian.
I'm not sure how to use the fact that $C_{G}(N) \subset N$.
Thanks
It would be way more useful if you'd posted your insights, ideas, effort, background and/or things you already know about this problem. -1 – DonAntonio Sep 1 '12 at 23:52
## 2 Answers
Let $n\in N$, and consider the action of $G$ on $\langle n\rangle$. This embeds $G/C_G(\langle n\rangle)$ into $Aut(\langle n\rangle)$, an abelian group. Doing this for all cyclic subgroups of $N$ gives an embedding of $G/C_G(N)$ into a direct product of abelian groups. We are done then, because that means $G/C_G(N)$ is abelian, and $G/N$ is a quotient of that group.
"The action"...you mean, I presume, the action by conjugation , right? I guess the OP could know this, but it is not immediate from his post, which gives no background, ideas, etc. at all, and not everybody knows about the injection $$N_G(H)/C_G(H) \to Aut(H)$$ – DonAntonio Sep 1 '12 at 23:45
First of all, don't get stuck on what is given. This is the wrong place to look when you start on a proof. Rather, you should look at what you need to prove. In this case, we want to show that $G/N$ is abelian. What does it mean for a group to be abelian?
Well, the definition states that a group $G$ is abelian if for all $g, h \in G$ we have $gh = hg$. So this means we need to pick any two elements from $G/N$ and show that they commute under the group's operation.
I'll let you think about it from there. Let me just emphasize that whenever you write a proof, you need to start with the definition of what you are trying to prove. This almost always gives you a guide as to how to start your proof.
I downvoted, as this is not even close to an answer. – Steve D Sep 1 '12 at 19:11
@SteveD Sure it isn't a complete answer. It's too lengthy for a comment, though, and it points the OP in the right direction to solve this problem on their own. A "practice problem" sounds like homework, so giving a complete answer would be doing the OP a disservice, IMO. – Code-Guru Sep 1 '12 at 19:18
It's not an answer in any way, it just rewords some definitions. – Steve D Sep 1 '12 at 19:23
I agree with anon: the OP's post history shows he understand what a commutator is, and the info in the OP's profile says they are a grad student, preparing for comps. – Steve D Sep 1 '12 at 19:45
To rely on reading the OP's post history is too stretching the work one could do to guess the OP's background: I agree with Guru in this, the OP should have posted his question with way more ideas, background, things already known, etc. Yet, I think Steve's point is the main one here: Guru's writing doesn't come even close to be anything ressembling a hint of a possible answer (he didn't even mentioned that $\,G/N\,$ abelian $\,\Longleftrightarrow G'\leq N\,$...!) , and I'd advice him to delete his post as it is useless and will probably bring upon him lots of downvotes. – DonAntonio Sep 1 '12 at 23:50
http://mathhelpforum.com/differential-geometry/150900-norm-matrix-algebra.html
# Thread:
1. ## Norm in Matrix algebra
let $A\in M_n(\mathbb{C})$, show that
$\|A\|^2=\max\{\lambda: \det(\lambda-A^*A)=0\}=\max\{\lambda: \det(\lambda-AA^*)=0\}$
2. Originally Posted by Mauritzvdworm
let $A\in M_n(\mathbb{C})$, show that
$\|A\|^2=\max\{\lambda: \det(\lambda-A^*A)=0\}=\max\{\lambda: \det(\lambda-AA^*)=0\}$
The proof consists of two steps. (1) $\|A\|^2 = \|A^*A\|$; (2) $A^*A$ is a positive semidefinite Hermitian matrix, and so its norm is equal to its largest eigenvalue. Those two steps together show that $\|A\|^2=\max\{\lambda: \det(\lambda-A^*A)=0\}$.
For the last part, use the fact that $\|A\| = \|A^*\|$ to deduce that $\|A\|^2=\max\{\lambda: \det(\lambda-AA^*)=0\}$.
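A quick numerical sanity check of the claim (a sketch assuming numpy is available; not part of the original thread):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

op_norm = np.linalg.norm(A, 2)                       # operator 2-norm of A
lam_max = np.linalg.eigvalsh(A.conj().T @ A).max()   # largest eigenvalue of A*A
lam_max2 = np.linalg.eigvalsh(A @ A.conj().T).max()  # largest eigenvalue of AA*

assert abs(op_norm**2 - lam_max) < 1e-9
assert abs(lam_max - lam_max2) < 1e-9
```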
http://math.stackexchange.com/questions/123179/explain-this-passage-on-the-proof-that-the-curvature-of-the-vector-function-rt
# Explain this passage on the proof that the curvature of the vector function $r(t)$ is $\frac{|r'(t) \times r''(t)|}{|r'(t)|^3}$?
Explain this step on the proof of the curvature of the vector $r(t)$?
Which part you don't understand? – Jack Mar 22 '12 at 6:21
Differentiate $T\times T=0$ and you are done – Blah Mar 22 '12 at 6:47
I don't get it... – Dokkat Mar 24 '12 at 7:26
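Not in the original thread, but the curvature formula itself can be sanity-checked symbolically. This sketch (assuming sympy is available) verifies it on a helix, whose curvature is known to be 1/2:

```python
import sympy as sp

t = sp.symbols('t', real=True)
# A helix: r(t) = (cos t, sin t, t); its curvature is 1/2 everywhere.
r = sp.Matrix([sp.cos(t), sp.sin(t), t])
r1, r2 = r.diff(t), r.diff(t, 2)
cr = r1.cross(r2)
# kappa = |r' x r''| / |r'|^3, written via dot products to keep things real.
kappa = sp.simplify(sp.sqrt(cr.dot(cr)) / sp.sqrt(r1.dot(r1))**3)
assert sp.simplify(kappa - sp.Rational(1, 2)) == 0
```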
http://nrich.maths.org/763/index?nomenu=1
'Enclosing Squares' printed from http://nrich.maths.org/
Here's a problem to work at with your graphic calculator or graph-plotting package on a computer.
If you plot the following lines
$\begin{eqnarray} y &=& 2x + 1\\ y &=& 2x + 4 \\ y &=& -0.5x + 1\\ y &=& -0.5x + 2.5 \end{eqnarray}$
the lines will enclose a square.
Can you find other sets of sloping lines that enclose a square?
If you are given the equations of two parallel lines
$y = ax + b$ and $y = ax + c$
can you explain how to find the equations of the other two lines that would enclose a square, if you know that one of the vertices is at $(0,b)$?
[In the example at the top, $a = 2$, $b = 1$ and $c = 4$]
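As a numerical check (my own sketch with hypothetical helper names, not part of the problem), the four example lines really do enclose a square:

```python
# Check that y = 2x+1, y = 2x+4, y = -0.5x+1, y = -0.5x+2.5 enclose a square.

def intersect(m1, c1, m2, c2):
    """Intersection of y = m1*x + c1 and y = m2*x + c2."""
    x = (c2 - c1) / (m1 - m2)
    return (x, m1 * x + c1)

P = intersect(2, 1, -0.5, 1)
Q = intersect(2, 1, -0.5, 2.5)
R = intersect(2, 4, -0.5, 2.5)
S = intersect(2, 4, -0.5, 1)

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

sides = [dist2(P, Q), dist2(Q, R), dist2(R, S), dist2(S, P)]
# All four sides have equal length; adjacent sides are perpendicular
# because the slopes satisfy 2 * (-0.5) = -1.
assert all(abs(s - sides[0]) < 1e-9 for s in sides)
```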
http://physics.stackexchange.com/questions/35823/is-there-a-deep-reason-why-springs-combine-like-capacitors?answertab=active
Is there a deep reason why springs combine like capacitors?
I was solving a practice Physics GRE and there was a question about springs connected in series and parallel. I was too lazy to derive the way the spring constants add in each case. But I knew how capacitances and resistances add when they are connected in series/parallel. So I reasoned that spring constants should behave as capacitances because both springs and capacitors store energy.
This line reasoning did give me the correct answer for how spring constants add, but I was just curious if this analogy makes sense, and if it does, how far one can take it. That is, knowing just that two things store energy, what all can you say will be similar for the two things.
We do this problem in our 10th grade. LOL. – ramanujan_dirac Sep 8 '12 at 6:18
4 Answers
Electrical analogies of mechanical elements such as springs, masses, and dash pots provide the answer. The "deep" connection is simply that the differential equations have the same form.
In electric circuit theory, the across variable is voltage while the through variable is current.
The analogous quantities in mechanics are force and velocity. Note that in both cases, the product of the across and through variables has the unit of power.
(An aside, sometimes it is convenient to use force and velocity as the across and through variables respectively while other times, it is more convenient to switch those roles.)
Now, assuming velocity is the through variable, velocity and electric current are analogous. Thus, displacement and electric charge are analogous.
For a spring, we have $f = kd \rightarrow d = \frac{1}{k}f$ while for a capacitor we have $Q = CV$.
For a mass, we have $f = ma = m\dot v$ while for an inductor we have $V = L \dot I$
Finally, for a dashpot, we have $f = Bv$ while for a resistor we have $V = RI$.
So, we have
$\frac{1}{k} \rightarrow C$
$m \rightarrow L$
$B \rightarrow R$
For a nice summary with examples, see this.
UPDATE: In another answer, RubenV questions the answer given above. His reasoning requires an update.
Alfred Centauri's answer is not correct. The analogy he mentions is true, but it is irrelevant as it does not tell you anything about components in series or in parallel.
In fact, it is relevant and it does tell you everything about components in series or in parallel. Let's review:
When two circuit elements are in parallel, the voltage across each is identical.
When two circuit elements are in series, the current through each is identical.
This is fundamental and must be kept in mind when moving to the mechanical analogy.
In the mechanical analogy where a spring is the mechanical analog of a capacitor:
force is the analog of voltage
velocity is the analog of current.
Keeping this in mind, consider two springs connected in mechanical parallel and note that the velocity (rate of change of displacement) for each spring is identical.
But recall, in this analogy, velocity is the analog of current. Thus, the equivalent electrical analogy is two capacitors in series (identical current).
In series, capacitance combines as so:
$\dfrac{1}{C_{eq}} = \dfrac{1}{C_1} + \dfrac{1}{C_2}$
With the spring analogy, $C \rightarrow \frac{1}{k}$ , this becomes:
$k_{eq} = k_1 + k_2$
The key point to take away from this is that mechanical parallel is, in this analogy, circuit series since, in mechanical parallel, the velocity (current) is the same, not the force (voltage).
For example, consider dash pots (resistors). Two dash pots in "parallel" combine like two resistors in series, i.e., the resistance to motion of two dash pots in "parallel" is greater then each individually.
Now, if the roles of the analogous variables are swapped, if force is like current and velocity is like voltage, then mechanical parallel is like circuit parallel. However, in this analogy, mass is like capacitance.
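A tiny numerical sketch of this bookkeeping (mine, not part of the answer): mapping $k \to C = 1/k$ and mechanical parallel to electrical series reproduces $k_{eq} = k_1 + k_2$.

```python
# Spring stiffness k maps to capacitance C = 1/k; mechanical "parallel"
# (same displacement/velocity) maps to capacitors in series.

def springs_parallel(k1, k2):
    return k1 + k2                        # stiffnesses add side by side

def caps_series(c1, c2):
    return 1.0 / (1.0 / c1 + 1.0 / c2)    # 1/C_eq = 1/C1 + 1/C2

k1, k2 = 3.0, 5.0
c_eq = caps_series(1.0 / k1, 1.0 / k2)    # map k -> C = 1/k, combine in series
# Map back: 1/c_eq recovers the parallel spring constant k1 + k2.
assert abs(1.0 / c_eq - springs_parallel(k1, k2)) < 1e-12
```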
Hm. How does your reasoning stand up to the following objections?
We are used to working with the stiffness or rate of springs $k$, defined by $F = k x$. But it is no less reasonable to work with the compliance $s = 1 / k$, defined by $x = s F$. This does not change the fundamental fact that springs store energy, nor that they can be viewed as the electrical analogues of capacitors. But the math changes!
Alternatively, consider inductors, which also store energy, and are typically characterized by the inductance $L$ defined by $V = L \, di/dt$. Do inductances add like capacitances or like resistances?
Stiffnesses add when the springs are connected "in parallel" (side by side): $F = F_1 + F_2$; $x_1 = x_2 = x$; $k x = k_1 x + k_2 x$; $k = k_1 + k_2$.
Compliances add when the springs are connected "in series" (end to end): $F_1 = F_2 = F$; $x = x_1 + x_2$; $s F = s_1 F + s_2 F$; $s = s_1 + s_2$.
This is what I mean by "the math changes" -- the physics is obviously the same, but the form of the equations is different. I am calling into question your ability to jump from "they store energy" to "they add in parallel (and add as inverses in series)" by pointing out that with the use of compliances rather than stiffnesses, the opposite equations (representing of course the same physics) are obtained. As for inductors -- why are capacitors a better analogy than inductors?
I intended to post this as a comment to Alfred Centauri's above post, but I do not have enough reputation for this yet, so I will state it here:
Alfred Centauri's answer is not correct. The analogy he mentions is true, but it is irrelevant as it does not tell you anything about components in series or in parallel. To prove my point: as he illustrates, the analogy connects C with 1/k, however as the OP noted, both C and k have the same addition rules, whereas the C <-> 1/k link would suggest that they should be opposite.
The RLC-circuit vs. spring analogy is based on both differential equations having the same structure, however the DE determines the evolution of the system for fixed constants (R, L, C, or gamma, m, k) and there is no reason to suspect that the analogy also works for how you add up many components into one effective component, and indeed, the analogy breaks down in those cases.
In other words, to understand why capacitors add the way they do, one must go back all the way to their definitions.
For a spring: F = kx, where x is the stretching from equilibrium, and F is the restoring force. For a capacitor: CV = Q, where Q is the amount of charge on one side, and V is the restoring potential. One can again note the C <-> 1/k link (i.e. 1/k and C are the factors in front of the restoring force, if you will). The reason that 1/k and C do not add in the same way, however, is because of the physically different behaviour of the restoring forces:
Consider two springs in parallel: a certain stretch x will now cause much more restoring force, both springs independently trying to undo the stretching; this translates itself into $k_{net} = k_1 + k_2$. However, if one considers two capacitors in parallel, then putting a certain amount of charge Q on them will lead to a smaller voltage difference than for the one-capacitor case, since the Q is now diluted over two capacitors (*); taking into account that C is the factor in front of the potential, this is summarized in $C_{net} = C_1 + C_2$ (C is like a measure of how easily one puts charge on, whereas k is a measure of how difficult it is to extend the spring; the reason they add the same way is because adding more springs in parallel makes stretching harder, whereas adding capacitors makes adding charge easier).
The conclusion is as follows: indeed, C and 1/k are analogous, but it is rather despite this connection than thanks to it that they add in similar ways, the reason for this being in their definitions and the opposite behaviour of the restoring forces in terms of which these constants are defined.
(*) Another reason is that parallel voltage differences are not cumulative, whereas parallel forces are.
I believe it is the case that you've reasoned incorrectly here. When two capacitors are in parallel, the voltage across each is identical. When two capacitors are connected in series, the current through each is identical. This is fundamental. Now, when two springs are connected side by side, the velocity is the same, not the force. So, in the analogy where velocity is like current, the two springs are not connected in "parallel", i.e., their "currents" are the same, not their "voltage". In series, capacitances don't add. The answer I gave is correct. – Alfred Centauri Nov 5 '12 at 13:16
In series, $C_{eq} = \frac{1}{\frac{1}{C_1} + \frac{1}{C_2}}$. Substituting $\frac{1}{k}$ for C, we get $\frac{1}{k_{eq}} = \frac{1}{k_1 + k_2}$ or $k_{eq} = k_1 + k_2$ as desired. – Alfred Centauri Nov 5 '12 at 13:20
When I said "The analogy he mentions is true, but it is irrelevant" I was referencing to your $$\frac{1}{k} \to C$$ analogy you listed at the end of your post (in what seemed to be the conclusion), and I still stand by my statement (based on your post back then). That being said, the aforementioned analogy is relevant if you also include the analogy "series <-> parallel (and vice versa)" which you indeed discussed in the update to your post (so I have no more problems with your modified answer, so thank you for expanding!). – RubenV Dec 7 '12 at 10:46
You cannot draw this conclusion from a naive "they both store energy" argument (though you can use it as a mnemonic if you find it helps you).
Capacitances are usually measured in farads which corresponds to coulombs squared per joule while spring constants are usually measured in newtons per metre equivalent to joules per metre squared.
If you did your calculations with joules in a consistent position in the unit, for example using reciprocal capacitance (elastance?), I think you would find the parallel and series composition calculations were reversed (and reciprocal capacitance would behave more like resistance and inductance).
http://mathlesstraveled.com/2010/02/03/divisor-nim/
Explorations in mathematical beauty
## Divisor nim
Posted on February 3, 2010 by
Yesterday in math club I had the students play a game which I dimly remember seeing somewhere but forget where. Since I don’t know what it is really called, I’m calling it “divisor nim”. Here’s how it works:
1. The players pick a positive integer.
2. The two players work together to write down all the divisors of the chosen integer (being sure to include 1 and the integer itself).
3. The players now alternate moves as follows: on a player’s turn, she must choose one of the divisors $d$, and then cross out that divisor as well as all of the other listed numbers which are divisible by $d$.
4. On subsequent turns, players may only choose numbers which are not yet crossed out.
5. Whoever is forced to choose 1 (because it is the only number left) is the loser!
For example, suppose the chosen number is 12. We write down the divisors of 12:
1, 2, 3, 4, 6, 12.
Now suppose the first player chooses 4 (actually, this is a bad move; I’ll let you figure out why); they then cross out 4 and 12, since 12 is divisible by 4. The game now looks like
1, 2, 3, ~~4~~, 6, ~~12~~.
Now it’s the other player’s turn; suppose they pick 3 (this is actually a bad move too…!), so they cross out 3 and 6. Now the game looks like
1, 2, ~~3~~, ~~4~~, ~~6~~, ~~12~~.
The first player now crosses out 2, and the second player is forced to choose 1, so the first player wins.
The kids thought this was a lot of fun and it leads to all kinds of interesting discussions. First, of course, you have to figure out how to write down all the divisors of the starting number (how do you know when you’ve listed them all? what are some systematic strategies for listing the divisors?). Then you can talk about strategies for playing the game. I might talk about some of these things in some future posts. For now I will just note that this actually has some deep connections to the theory of posets (we are basically just using each integer as an abbreviation for its poset of divisors). Although I’ve played around with it a bit I don’t yet know of a general strategy — although any particular starting integer necessarily gives a winning strategy for ONE of the two players, and it’s not too hard to figure it out by working backwards. More on this later, I suppose.
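The rules above are simple enough to brute-force. Here's a sketch (my own, not from the post) that computes, for a given starting number, whether the first player has a winning strategy by searching the game tree:

```python
from functools import lru_cache

def divisors(n):
    return {d for d in range(1, n + 1) if n % d == 0}

@lru_cache(maxsize=None)
def first_player_wins(remaining):
    # remaining: frozenset of numbers not yet crossed out.
    # A player forced to choose 1 (the only number left) loses.
    if remaining == frozenset({1}):
        return False
    for d in remaining - {1}:
        # Choosing d crosses out d and every remaining multiple of d.
        rest = frozenset(x for x in remaining if x % d != 0)
        if not first_player_wins(rest):
            return True  # some move leaves the opponent in a losing position
    return False

def wins(n):
    return first_player_wins(frozenset(divisors(n)))

# Consistent with the strategy-stealing argument mentioned in the comments:
assert wins(2) and wins(6) and wins(12)
assert not wins(1)
```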
In the meantime, have fun playing!
This entry was posted in games, number theory, pattern and tagged divisor, game, lattice. Bookmark the permalink.
### 2 Responses to Divisor nim
1. Michael Lugo says:
For numbers with only two distinct prime factors (like 12, with prime factors 2 and 3) this is a disguised version of Chomp.
2. Joshua Zucker says:
There’s a simple strategy-stealing argument that shows the 1st player always has a forced win (well, except when the game starts with just the number 1).
However, like most strategy stealing arguments, it doesn’t tell you anything about HOW to win.
Comments are closed.
http://math.stackexchange.com/questions/139272/determine-convergence-of-sum-n-1-infty-cos-frac2n-cos-frac4n
# Determine convergence of $\sum_{n=1}^{\infty} (\cos{\frac{2}{n}}-\cos{\frac{4}{n}})$
Determine convergence of $$\sum_{n=1}^{\infty} \left(\cos{\frac{2}{n}}-\cos{\frac{4}{n}}\right)$$
In the answer, it says
$$\cos{\frac{2}{n}}-\cos{\frac{4}{n}} = 2\sin{\frac{3}{n}}\sin{\frac{1}{n}} \le 2\cdot \frac{3}{n} \cdot \frac{1}{n} = \frac{6}{n^2}$$
But how do I get the above trig substitution? I guess removing the fractions, I will get $\cos{x}-\cos{2x}=2\sin{(2x-1)}\sin{(x-1)}$ ... probably this is wrong, but how do I get that?
– Martin Sleziak May 1 '12 at 6:40
## 2 Answers
The result follows from the Addition Law for Cosines.
We have $\cos(x+y)=\cos x\cos y-\sin x\sin y$ and $\cos(x-y)=\cos x\cos y+\sin x\sin y$. Subtract. We get $$\cos(x-y)-\cos(x+y)=2\sin x\sin y.$$ Let $x-y=\frac{2}{n}$ and $x+y=\frac{4}{n}$. Solve for $x$ and $y$. We get $x=\frac{3}{n}$ and $y=\frac{1}{n}$.
Oh, now how do I get $2\sin{\frac{3}{n}}\sin{\frac{1}{n}} \le 3\cdot \frac{3}{n}\cdot \frac{1}{n}$? Is it squeeze theorm? But won't I have $$-1 \le \sin{\frac{3}{n}} \le 1$$ $$-2 \le 2\sin{\frac{3}{n}} \le 2$$ $$-2 |\sin{\frac{1}{n}}| \le 2\sin{\frac{3}{n}}\sin{\frac{1}{n}} \le 2|\sin{\frac{1}{n}}|$$ then as $n\rightarrow 0$, by limit goes to 0? So convergent? – Jiew Meng May 1 '12 at 7:45
@JiewMeng: No. You need to prove the convergence of a series, and it is not sufficient (but it is necessary) for the sequence of individual terms to converge to zero to conclude that the series converges. You need to use a suitable dominating series here. Also, you need to remember a result telling which is bigger $\sin x$ or $x$, when $x$ is positive? A reference to a theorem in your book is absolutely needed for full credit. – Jyrki Lahtonen May 1 '12 at 7:55
@JyrkiLahtonen, thanks, but how do i get $2\sin{\frac{3}{n}}\sin{\frac{1}{n}} \le \color{blue}2 \cdot \frac{3}{n}\cdot \frac{1}{n}$ then?. highlighted typo in last comment, should be 2 instead of 3 – Jiew Meng May 1 '12 at 8:03
@JiewMeng: I think I already answered that? You must have done that bit earlier in your class for the hint given in your book to make sense! If $a>b>0$ and $c>d>0$, then $a\cdot c > b\cdot d$? – Jyrki Lahtonen May 1 '12 at 8:08
hmm, actually that was the answer, hmm... I'm terrible at maths ... need to think a bit – Jiew Meng May 1 '12 at 8:27
I suggest another way:
$\cos\left(\frac{2}{n}\right)-\cos\left(\frac{4}{n}\right)=-\left(1-\cos\left(\frac{2}{n}\right)\right)+1-\cos\left(\frac{4}{n}\right)\sim -\frac{4}{2n^2}+\frac{16}{2n^2}=\frac{6}{n^2}$ and $\sum \frac{6}{n^2}$ converges.
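The asymptotic estimate is easy to verify numerically (a sketch of mine, not part of the answer): the terms multiplied by $n^2$ approach 6.

```python
import math

def term(n):
    return math.cos(2.0 / n) - math.cos(4.0 / n)

# term(n) * n^2 -> 6, matching the 6/n^2 comparison bound.
for n in (10, 100, 1000):
    assert abs(term(n) * n * n - 6.0) < 0.2
```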
http://mathhelpforum.com/differential-geometry/183753-subsequences.html
# Thread:
1. ## Subsequences
Suppose $\{x_n\}\to x_0$ and $\{y_n\}\to x_0$. Define a sequence $\{z_n\}$ as follows: $z_{2n}=x_n$ and $z_{2n-1}=y_n$. Prove that $\{z_n\}$ converges to $x_0$.
Let $\epsilon >0$. Then $\exists N_1, \ N_2\in\mathbb{N}$ such that for $n\geq N_1, \ N_2$ we have $|x_n-x_0|<\epsilon$ and $|y_n-x_0|<\epsilon$.
I don't know what to do now.
2. ## Re: Subsequences
Let $N:=\max(N_1,N_2)$. For $n\geq N$ we have $|z_{2n}-x_0|\leq \varepsilon$ and $|z_{2n-1}-x_0|\leq \varepsilon$ hence if $k\geq 2N-1$ we have $|z_k-x_0|\leq \varepsilon$.
3. ## Re: Subsequences
Originally Posted by dwsmith
Suppose $\{x_n\}\to x_0$ and $\{y_n\}\to x_0$. Define a sequence $\{z_n\}$ as follows: $z_{2n}=x_n$ and $z_{2n-1}=y_n$. Prove that $\{z_n\}$ converges to $x_0$.
Let $\epsilon >0$. Then $\exists N_1, \ N_2\in\mathbb{N}$ such that for $n\geq N_1, \ N_2$ we have $|x_n-x_0|<\epsilon$ and $|y_n-x_0|<\epsilon$.
Let $N=2(N_1+N_2)$. If $n\ge N$ then if $n\text{ is odd}$ we have $k = \left\lfloor {\frac{n}{2}} \right\rfloor > N_2$ and $z_n=y_k$.
Use a similar idea if $n\text{ is even}$.
4. ## Re: Subsequences
Originally Posted by Plato
Let $N=2(N_1+N_2)$. If $n\ge N$ then if $n\text{ is odd}$ we have $k = \left\lfloor {\frac{n}{2}} \right\rfloor > N_2$ and $z_n=y_k$.
Use a similar idea if $n\text{ is even}$.
Why is $N=2(N_1+N_2)$?
5. ## Re: Subsequences
Originally Posted by dwsmith
Why is $N=2(N_1+N_2)$
First of all, it ensures absolutely that $N>N_1~\&~N>N_2$.
Therefore, we can use either of the statements already established.
6. ## Re: Subsequences
Originally Posted by Plato
then if $n\text{ is odd}$ we have $k = \left\lfloor {\frac{n}{2}} \right\rfloor > N_2$ and $z_n=y_k$.
Can you also explain this?
7. ## Re: Subsequences
Originally Posted by dwsmith
Can you also explain this?
You do the mathematics.
Just take many cases.
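For intuition, here is a small numerical sketch of the interleaving argument (my own illustration; the two sequences are invented examples, both converging to $x_0 = 2$):

```python
x0 = 2.0
N = 1000
x = [x0 + 1.0 / n for n in range(1, N + 1)]        # x_n -> x0
y = [x0 - 1.0 / (n * n) for n in range(1, N + 1)]  # y_n -> x0

# z_{2n} = x_n, z_{2n-1} = y_n (1-indexed as in the thread; z[k-1] holds z_k)
z = []
for xn, yn in zip(x, y):
    z.append(yn)   # z_{2n-1}
    z.append(xn)   # z_{2n}

# once n >= N' puts both tails within eps of x0, every z_k with
# k >= 2N'-1 is within eps of x0
eps = 1e-2
Nprime = 200                 # |x_n - x0| = 1/n < eps already for n > 100
tail = z[2 * Nprime - 2:]    # entries z_k with k >= 2N'-1
assert all(abs(zk - x0) < eps for zk in tail)
```

The assertion passes because the tail of `z` only contains terms of `x` and `y` whose indices exceed `Nprime`.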
http://en.wikipedia.org/wiki/Bluestein's_FFT_algorithm
Bluestein's FFT algorithm
Bluestein's FFT algorithm (1968), commonly called the chirp z-transform algorithm (1969), is a fast Fourier transform (FFT) algorithm that computes the discrete Fourier transform (DFT) of arbitrary sizes (including prime sizes) by re-expressing the DFT as a convolution. (The other algorithm for FFTs of prime sizes, Rader's algorithm, also works by rewriting the DFT as a convolution.)
In fact, Bluestein's algorithm can be used to compute more general transforms than the DFT, based on the (unilateral) z-transform (Rabiner et al., 1969).
Algorithm
Recall that the DFT is defined by the formula
$X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} nk } \qquad k = 0,\dots,N-1.$
If we replace the product $nk$ in the exponent by the identity $nk = -\frac{(k-n)^2}{2} + \frac{n^2}{2} + \frac{k^2}{2}$, we thus obtain:
$X_k = e^{-\frac{\pi i}{N} k^2 } \sum_{n=0}^{N-1} \left( x_n e^{-\frac{\pi i}{N} n^2 } \right) e^{\frac{\pi i}{N} (k-n)^2 } \qquad k = 0,\dots,N-1.$
This summation is precisely a convolution of the two sequences an and bn defined by:
$a_n = x_n e^{-\frac{\pi i}{N} n^2 }$
$b_n = e^{\frac{\pi i}{N} n^2 },$
with the output of the convolution multiplied by $N$ phase factors $b_k^*$. That is:
$X_k = b_k^* \sum_{n=0}^{N-1} a_n b_{k-n} \qquad k = 0,\dots,N-1.$
This convolution, in turn, can be performed with a pair of FFTs (plus the pre-computed FFT of bn) via the convolution theorem. The key point is that these FFTs are not of the same length N: such a convolution can be computed exactly from FFTs only by zero-padding it to a length greater than or equal to 2N–1. In particular, one can pad to a power of two or some other highly composite size, for which the FFT can be efficiently performed by e.g. the Cooley–Tukey algorithm in O(N log N) time. Thus, Bluestein's algorithm provides an O(N log N) way to compute prime-size DFTs, albeit several times slower than the Cooley–Tukey algorithm for composite sizes.
The use of zero-padding for the convolution in Bluestein's algorithm deserves some additional comment. Suppose we zero-pad to a length $M \ge 2N-1$. This means that $a_n$ is extended to an array $A_n$ of length $M$, where $A_n = a_n$ for $0 \le n < N$ and $A_n = 0$ otherwise (the usual meaning of "zero-padding"). However, because of the $b_{k-n}$ term in the convolution, both positive and negative values of $n$ are required for $b_n$ (noting that $b_{-n} = b_n$). The periodic boundaries implied by the DFT of the zero-padded array mean that $-n$ is equivalent to $M-n$. Thus, $b_n$ is extended to an array $B_n$ of length $M$, where $B_0 = b_0$, $B_n = B_{M-n} = b_n$ for $0 < n < N$, and $B_n = 0$ otherwise. $A$ and $B$ are then FFTed, multiplied pointwise, and inverse FFTed to obtain the convolution of $a$ and $b$, according to the usual convolution theorem.
Let us also be more precise about what type of convolution is required in Bluestein's algorithm for the DFT. If the sequence $b_n$ were periodic in $n$ with period $N$, then it would be a cyclic convolution of length $N$, and the zero-padding would be for computational convenience only. However, this is not generally the case:
$b_{n+N} = e^{\frac{\pi i}{N} (n+N)^2 } = b_n e^{\frac{\pi i}{N} (2Nn+N^2) } = (-1)^N b_n .$
Therefore, for $N$ even the convolution is cyclic, but in this case $N$ is composite and one would normally use a more efficient FFT algorithm such as Cooley–Tukey. For $N$ odd, however, $b_n$ is antiperiodic and we technically have a negacyclic convolution of length $N$. Such distinctions disappear when one zero-pads $a_n$ to a length of at least $2N-1$ as described above. It is perhaps easiest, therefore, to think of it as a subset of the outputs of a simple linear convolution (i.e. no conceptual "extensions" of the data, periodic or otherwise).
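The steps above are compact enough to sketch directly in code. The following is a minimal pure-Python sketch (not from the article; the function names `fft_pow2` and `bluestein_dft` are invented) that implements the DFT of arbitrary length via Bluestein's method, using a simple recursive radix-2 Cooley–Tukey FFT for the zero-padded convolution:

```python
import cmath

def fft_pow2(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    invert=True computes the unnormalized inverse transform."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft_pow2(a[0::2], invert)
    odd = fft_pow2(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def bluestein_dft(x):
    """DFT of arbitrary length N via Bluestein's chirp trick."""
    n = len(x)
    # chirp b_k = exp(i*pi*k^2/N); reduce k^2 mod 2N for numerical safety
    b = [cmath.exp(1j * cmath.pi * ((k * k) % (2 * n)) / n) for k in range(n)]
    a = [x[k] * b[k].conjugate() for k in range(n)]   # a_n = x_n * conj(b_n)
    m = 1
    while m < 2 * n - 1:                              # pad to a power of two >= 2N-1
        m *= 2
    A = a + [0j] * (m - n)
    B = [0j] * m
    B[0] = b[0]
    for k in range(1, n):                             # wrap-around: B_{M-k} = b_k
        B[k] = B[m - k] = b[k]
    FA, FB = fft_pow2(A), fft_pow2(B)
    C = fft_pow2([fa * fb for fa, fb in zip(FA, FB)], invert=True)
    # X_k = conj(b_k) * conv_k; divide by m to normalize the inverse FFT
    return [b[k].conjugate() * C[k] / m for k in range(n)]
```

One can check the sketch against a naive $O(N^2)$ DFT for a prime size such as $N=7$, where no radix decomposition is available.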
z-Transforms
Bluestein's algorithm can also be used to compute a more general transform based on the (unilateral) z-transform (Rabiner et al., 1969). In particular, it can compute any transform of the form:
$X_k = \sum_{n=0}^{N-1} x_n z^{nk} \qquad k = 0,\dots,M-1,$
for an arbitrary complex number z and for differing numbers N and M of inputs and outputs. Given Bluestein's algorithm, such a transform can be used, for example, to obtain a more finely spaced interpolation of some portion of the spectrum (although the frequency resolution is still limited by the total sampling time), enhance arbitrary poles in transfer-function analyses, etcetera.
The algorithm was dubbed the chirp z-transform algorithm because, for the Fourier-transform case ($|z| = 1$), the sequence $b_n$ from above is a complex sinusoid of linearly increasing frequency, which is called a (linear) chirp in radar systems.
References
• Leo I. Bluestein, "A linear filtering approach to the computation of the discrete Fourier transform," Northeast Electronics Research and Engineering Meeting Record 10, 218-219 (1968).
• Lawrence R. Rabiner, Ronald W. Schafer, and Charles M. Rader, "The chirp z-transform algorithm and its application," Bell Syst. Tech. J. 48, 1249-1292 (1969). Also published in: Rabiner, Schafer, and Rader, "The chirp z-transform algorithm," IEEE Trans. Audio Electroacoustics 17 (2), 86–92 (1969).
• D. H. Bailey and P. N. Swarztrauber, "The fractional Fourier transform and applications," SIAM Review 33, 389-404 (1991). (Note that this terminology for the z-transform is nonstandard: a fractional Fourier transform conventionally refers to an entirely different, continuous transform.)
• Lawrence Rabiner, "The chirp z-transform algorithm—a lesson in serendipity," IEEE Signal Processing Magazine 21, 118-119 (March 2004). (Historical commentary.)
http://mathoverflow.net/questions/70163/is-there-some-textbook-for-the-details-of-the-computation-of-the-homology-groups/70172
## Is there some textbook for the details of the computation of the homology groups
Are there results for the cyclic homology group $HC_1(A)$: for example, when is it zero, and in which cases can it be computed explicitly? Here $A$ is a commutative algebra over the complex field.
Your title is not as descriptive as it could be. – Qiaochu Yuan Jul 12 2011 at 19:51
## 2 Answers
There is a formula in the commutative case, $HC_1(A) \cong \Omega^1(A)/(dA)$. Namely, there is a Connes exact sequence $HH_0(A) \to HH_1(A) \to HC_1(A) \to 0$. In the case of a commutative algebra over a field, $HH_0(A) \cong A$ and $HH_1(A) \cong \Omega^1(A)$. The left-hand map is $d$, giving the above formula. For smooth algebras, you have a similar formula for all of the cyclic homology groups. You can surely find all of this in Loday.
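As a quick sanity check of this formula (a worked example of mine, not from the original answer), take the polynomial algebra $A=\mathbb{C}[x]$:

```latex
HC_1(\mathbb{C}[x]) \;\cong\; \Omega^1(\mathbb{C}[x]) \,/\, d\,\mathbb{C}[x]
  \;=\; \mathbb{C}[x]\,dx \,/\, \{\, f'(x)\,dx : f \in \mathbb{C}[x] \,\}
  \;=\; 0,
```

since every polynomial one-form $g(x)\,dx$ is exact: $g(x)\,dx = d\bigl(\int_0^x g(t)\,dt\bigr)$.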
You should provide more information about your algebra $A$ if you intend a useful, non-generic answer. I doubt you will find a textbook exposition, but for appropriate classes of algebras one can surely direct you to papers where computations are carried out.
The one general approach to computing $HC_\bullet$ in the commutative case is to mimic rational homotopy theory and construct a model of your algebra $A$, that is, a differential graded algebra $\mathcal A$, and then use the fact that $HC(A)$ and $HC(\mathcal A)$ (the latter being the cyclic homology of differential graded algebras) are isomorphic.
http://physics.stackexchange.com/questions/32648/why-doesnt-the-anthropic-principle-select-for-n-2-susy-compactifications-with-a?answertab=votes
# Why doesn't the anthropic principle select for N=2 SUSY compactifications with an exactly zero cosmological constant?
The party line of the anthropic camp goes something like this. There are at least $10^{500}$ flux compactifications breaking SUSY out there with all sorts of values for the cosmological constant. Life takes a lot of time to evolve, and this is incompatible with a universe which dilutes away into de Sitter space too soon. The cosmological constant has to be fine-tuned to one part in $10^{123}$. Without SUSY, the zero-point energy contributions from bosons and fermions would not cancel naturally.
However, superstring theory also admits N=2 SUGRA compactifications which have to have an exactly zero cosmological constant. Surely some of them can support life? I know there are a lot more flux compactifications out there compared to hyperKahler compactifications, but does the ratio exceed $10^{123}$? What probability measure should we use over compactifications anyway? Trying to compute from eternal inflation leads to the measure problem.
Maybe we are in an N=2 universe. arxiv.org/abs/hep-th/0109168 physics.stackexchange.com/questions/27421/… – Mitchell Porter Jul 23 '12 at 9:59
Eternal inflation is the wrong measure, you should use causal patch inflation--- what's the probability of producing a given string vacuum from a small deSitter patch. This is not going to be like the eternal inflation nonsense, since it doesn't have the outside volume. – Ron Maimon Jul 23 '12 at 19:55
Yes, but the real answer is that the Anthropic Principle is just a notion that some folks have. There is no real evidence for it at all. Don't forget, the fact that we exist to ask the question guarantees that this universe has parameters that allow life. – Paul J. Gans Dec 22 '12 at 2:33
## 1 Answer
N=2 compactifications are a lot more constrained than compactifications with no SUSY. For life to be likely to evolve, there needs to be more than just fine tuning of the cosmological constant. So many other parameters also need to be adjusted to maximize the chances of life evolving. There are a lot more metastable vacua with a wide range (landscape) of different parameters. On the other hand, the N=2 parameters aren't a priori maximized for the evolution of life.
Note that since we already know that life evolved, we don't need parameters that make this likely; they only have to make it possible. We could live in a universe where life has a really low (but nonzero) chance to evolve but where we were lucky anyway. – Lagerbaer Dec 22 '12 at 16:43
http://www.newton.ac.uk/programmes/KIT/seminars/2010110215001.html
# KIT
## Seminar
### Factorisation for non-symmetric operators and exponential H-theorems
Mischler, S (Paris-Dauphine)
Tuesday 02 November 2010, 15:00-15:45
Seminar Room 1, Newton Institute
#### Abstract
We present a factorization method for estimating resolvents of non-symmetric operators in Banach or Hilbert spaces in terms of estimates in another (typically smaller) "reference" space. This applies to a class of operators which can be written as a "regularizing" part (in a broad sense) plus a dissipative part. Then in the Hilbert case we combine this factorization approach with an abstract Plancherel identity on the resolvent into a method for enlarging the functional space of decay estimates on semigroups. In the Banach case, we prove the same result, however with some loss on the norm. We then apply this functional-analytic approach to several PDEs: the Fokker-Planck and kinetic Fokker-Planck equations, the linear scattering Boltzmann equation in the torus, and, most importantly, the linearized Boltzmann equation in the torus (at the price of extra specific work in the latter case). In addition to the abstract method in itself, the main outcome of the paper is the first proof of exponential decay towards global equilibrium (e.g. in terms of the relative entropy) for the full Boltzmann equation for hard spheres, conditionally on some smoothness and (polynomial) moment estimates. This improves on the result in [Desvillettes-Villani, Invent. Math., 2005], where the rate was "almost exponential", that is, polynomial with exponent as high as wanted, and solves a long-standing conjecture about the rate of decay in the H-theorem for the nonlinear Boltzmann equation; see for instance [Cercignani, Arch. Mech., 1982] and [Rezakhanlou-Villani, Lecture Notes Springer, 2001].
http://mathoverflow.net/questions/47301?sort=votes
Which linear transformations between f.d. Hilbert spaces contract the inner product?
Given two finite-dimensional Hilbert spaces $U, V,$ a linear transformation $T:U\to V$ contracts the inner product if for all $x,y \in U,$ $$\langle x,y \rangle_U \ge \langle Tx, Ty\rangle_V.$$ All unitary transformations satisfy this criterion; is there a larger class of linear transformations that do?
All partial isometries - and also all multiples of them by scalars of modulus at most 1. If two partial isometries have orthogonal support and range then adding multiples of them will also contract the inner product. – Ollie Margetts Nov 25 2010 at 6:34
Sorry, a little more thought (and Faisal's answer) shows that the above comment is wrong! – Ollie Margetts Nov 25 2010 at 6:38
1 Answer
Such a map will preserve orthogonality, and any such map must be a scalar multiple of an isometry. This is true in great generality, e.g. the map $T$ doesn't have to be linear, and $U$ and $V$ don't have to be finite-dimensional; see Theorem 1 in
Chmieliński, Linear mappings approximately preserving orthogonality. J. Math. Anal. Appl. 304 (2005), no. 1, 158–169.
Consequently, a map $T \colon U \to V$ contracts the inner product if and only if $T = \alpha S$, where $S$ is an isometry and $|\alpha| \leq 1$.
Edit: Here's a simple proof of the assertion that an orthogonality-preserving linear map between finite-dimensional inner product spaces is a scalar multiple of an isometry. Let $T \colon U \to V$ be such a map and fix an orthonormal basis $\{e_1, \ldots, e_n\}$ for $U$. Observe that $e_i + e_j \perp e_i - e_j$. Thus $$ 0 = \langle T(e_i + e_j), T(e_i - e_j) \rangle = \langle Te_i, Te_i \rangle - \langle Te_j, Te_j \rangle. $$ So we may set $\alpha = \langle Te_i, Te_i \rangle$; this is a nonnegative constant independent of $i$. In particular, if $T$ kills one $e_i$, it kills all the others. It follows that either $T=0$ or else $\{Te_1, \ldots, Te_n\}$ is an orthogonal basis for the range of $T$. In the latter situation, an easy computation yields $$ \|Tx\|^2 = \sum_i \frac{|\langle Tx, Te_i \rangle|^2}{\langle Te_i, Te_i \rangle} = \sum_i \frac{|\langle \sum_j \langle x, e_j \rangle Te_j, Te_i \rangle|^2}{\langle Te_i, Te_i \rangle} = \sum_i |\langle x,e_i \rangle|^2 \langle Te_i, Te_i \rangle = \alpha \|x\|^2 $$ for all $x \in U$. It follows that $\frac{1}{\sqrt{\alpha}}T$ is an isometry.
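A quick numeric check (my own, not part of the answer) of the scaling identity $\langle Tx, Ty\rangle = |\alpha|^2 \langle x, y\rangle$ behind this characterization, taking $S$ to be a plane rotation (the simplest isometry of $\mathbb{R}^2$):

```python
import math
import random

def rotation(theta):
    """Rotation matrix of R^2 by angle theta: an isometry."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(1)
S = rotation(0.7)
alpha = 0.6                                 # any scalar with |alpha| <= 1
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(2)]
    y = [random.uniform(-1, 1) for _ in range(2)]
    Tx = [alpha * c for c in apply(S, x)]   # T = alpha * S
    Ty = [alpha * c for c in apply(S, y)]
    # S preserves <x, y> exactly, so T scales it by alpha^2
    assert abs(dot(Tx, Ty) - alpha ** 2 * dot(x, y)) < 1e-12
```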
http://en.wikipedia.org/wiki/White_noise
# White noise
Plot of a Gaussian white noise signal.
In signal processing, white noise is a random signal with a flat (constant) power spectral density; that is, a signal containing equal power within any frequency band of a fixed width. The term is used, with this or similar meanings, in many scientific and technical disciplines, including physics, acoustic engineering, telecommunications, statistical forecasting, and many more. (Rigorously speaking, "white noise" refers to a statistical model for signals and signal sources, rather than to any specific signal.)
A "white noise" image.
The term is also used for a discrete signal whose samples are regarded as a sequence of serially uncorrelated random variables with zero mean and finite variance. Depending on the context, one may also require that the samples be independent and have the same probability distribution. In particular, if each sample has a normal distribution with zero mean, the signal is said to be Gaussian white noise.[1]
The samples of a white noise signal may be sequential in time, or arranged along one or more spatial dimensions. In digital image processing, the samples (pixels) of a white noise image are typically arranged in a rectangular grid, and are assumed to be independent random variables with uniform probability distribution over some interval. The concept can be defined also for signals spread over more complicated domains, such as a sphere or a torus.
Some "white noise" sound.
An infinite-bandwidth white noise signal is a purely theoretical construction. The bandwidth of white noise is limited in practice by the mechanism of noise generation, by the transmission medium and by finite observation capabilities. Thus, a random signal is considered "white noise" if it is observed to have a flat spectrum over the range of frequencies that is relevant to the context. For an audio signal, for example, the relevant range is the band of audible sound frequencies, between 20 to 20,000 Hz. Such a signal is heard as a hissing sound, resembling the /sh/ sound in "ash". In music and acoustics, the term white noise may be used for any signal that has a similar hissing sound.
White noise draws its name from white light, which is commonly (but incorrectly) assumed to have a flat spectral power density over the visible band.
The term white noise is sometimes used in the context of phylogenetically based statistical methods to refer to a lack of phylogenetic pattern in comparative data.[2] It is sometimes used in non technical contexts, in the metaphoric sense of "random talk without meaningful contents".[3][4]
## Statistical properties
Spectrogram of pink noise (left) and white noise (right), shown with linear frequency axis (vertical).
Being uncorrelated in time does not restrict the values a signal can take. Any distribution of values is possible (although it must have zero DC component). Even a binary signal which can only take on the values 1 or -1 will be white if the sequence is statistically uncorrelated. Noise having a continuous distribution, such as a normal distribution, can of course be white.
It is often incorrectly assumed that Gaussian noise (i.e., noise with a Gaussian amplitude distribution — see normal distribution) necessarily refers to white noise, yet neither property implies the other. Gaussianity refers to the probability distribution with respect to the value, in this context the probability of the signal falling within any particular range of amplitudes, while the term 'white' refers to the way the signal power is distributed (i.e., independently) over time or among frequencies.
We can therefore find Gaussian white noise, but also Poisson, Cauchy, etc. white noises. Thus, the two words "Gaussian" and "white" are often both specified in mathematical models of systems. Gaussian white noise is a good approximation of many real-world situations and generates mathematically tractable models. These models are used so frequently that the term additive white Gaussian noise has a standard abbreviation: AWGN.
White noise is the generalized mean-square derivative of the Wiener process or Brownian motion.
A generalization to random elements on infinite dimensional spaces, such as random fields, is the white noise measure.
## Practical applications
### Music
White noise is commonly used in the production of electronic music, usually either directly or as an input for a filter to create other types of noise signal. It is used extensively in audio synthesis, typically to recreate percussive instruments such as cymbals which have high noise content in their frequency domain.
### Electronics engineering
White noise is also used to obtain the impulse response of an electrical circuit, in particular of amplifiers and other audio equipment. It is not used for testing loudspeakers as its spectrum contains too great an amount of high frequency content. Pink noise is used for testing transducers such as loudspeakers and microphones.
### Acoustics
To set up the equalization for a concert or other performance in a venue, a short burst of white or pink noise is sent through the PA system and monitored from various points in the venue so that the engineer can tell if the acoustics of the building naturally boost or cut any frequencies. The engineer can then adjust the overall equalization to ensure a balanced mix.
### Computing
White noise is used as the basis of some random number generators. For example, Random.org uses a system of atmospheric antennae to generate random digit patterns from white noise.
### Tinnitus treatment
White noise is a common synthetic noise source used for sound masking by a tinnitus masker.[5] White noise machines and other white noise sources are sold as privacy enhancers and sleep aids and to mask tinnitus.[6] Alternatively, the use of an FM radio tuned to unused frequencies ("static") is a simpler and more cost-effective source of white noise.[7] However, white noise generated from a common commercial radio receiver tuned to an unused frequency is extremely vulnerable to being contaminated with spurious signals, such as adjacent radio stations, harmonics from non-adjacent radio stations, electrical equipment in the vicinity of the receiving antenna causing interference, or even atmospheric events such as solar flares and especially lightning.
### Work environment
The effects of white noise upon cognitive function are mixed. Recently, a small study found that white noise background stimulation improves cognitive functioning among secondary students with attention deficit hyperactivity disorder (ADHD), while decreasing performance of non-ADHD students.[8][9] Other work indicates it is effective in improving the mood and performance of workers by masking background office noise,[10] but decreases cognitive performance in complex card sorting tasks.[11]
## Mathematical definitions
### White noise vector
A random vector (that is, a partially indeterminate process that produces vectors of real numbers) is said to be a white noise vector or white random vector if its components each have a probability distribution with zero mean and finite variance, and are statistically independent: that is, their joint probability distribution must be the product of the distributions of the individual components.[12]
A necessary (but, in general, not sufficient) condition for statistical independence of two variables is that they be statistically uncorrelated; that is, their covariance is zero. Therefore, the covariance matrix $R$ of the components of a white noise vector $w$ with $n$ elements must be an $n \times n$ diagonal matrix, where each diagonal element $R_{ii}$ is the variance of component $w_i$; and the correlation matrix must be the $n \times n$ identity matrix.
In particular, if in addition to being independent every variable in w also has a normal distribution with zero mean and the same variance $\sigma^2$, w is said to be a Gaussian white noise vector. In that case, the joint distribution of w is a multivariate normal distribution; the independence between the variables then implies that the distribution has spherical symmetry in n-dimensional space. Therefore, any orthogonal transformation of the vector will result in a Gaussian white random vector. In particular, under most types of discrete Fourier transform, such as FFT and Hartley, the transform W of w will be a Gaussian white noise vector, too; that is, its n Fourier coefficients will be independent Gaussian variables with zero mean and the same variance $\sigma^2$.
The power spectrum $P$ of a random vector $w$ can be defined as the expected value of the squared modulus of each coefficient of its Fourier transform $W$, that is, $P_i = \mathrm{E}(|W_i|^2)$. Under that definition, a Gaussian white noise vector will have a perfectly flat power spectrum, with $P_i = \sigma^2$ for all $i$.
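A minimal Monte-Carlo sketch of this flatness property (my own illustration, with invented function names; it uses the unitary DFT so that $P_i = \sigma^2$ holds with no extra normalization):

```python
import cmath
import random

def dft_normalized(w):
    """Unitary DFT: W_k = n**-0.5 * sum_m w_m * exp(-2*pi*i*m*k/n)."""
    n = len(w)
    return [sum(w[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            / n ** 0.5 for k in range(n)]

def empirical_spectrum(n=8, trials=2000, sigma=1.0, seed=0):
    """Average |W_k|**2 over many independent Gaussian white noise vectors."""
    rng = random.Random(seed)
    P = [0.0] * n
    for _ in range(trials):
        W = dft_normalized([rng.gauss(0.0, sigma) for _ in range(n)])
        for k in range(n):
            P[k] += abs(W[k]) ** 2
    return [p / trials for p in P]

P = empirical_spectrum()   # each entry should be close to sigma**2 = 1
```

With 2000 trials every bin of `P` lands within a few percent of $\sigma^2$, i.e. the estimated spectrum is flat up to sampling noise.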
If $w$ is a white random vector, but not a Gaussian one, its Fourier coefficients $W_i$ will not be completely independent of each other; although for large $n$ and common probability distributions the dependencies are very subtle, and their pairwise correlations can be assumed to be zero.
Often the weaker condition "statistically uncorrelated" is used in the definition of white noise, instead of "statistically independent". However some of the commonly expected properties of white noise (such as flat power spectrum) may not hold for this weaker version. Under this assumption, the stricter version can be referred to explicitly as independent white noise vector.[13]:p.60 Other authors use strongly white and weakly white instead.[14]
An example of a random vector that is "Gaussian white noise" in the weak but not in the strong sense is $x=[x_1,x_2]$, where $x_1$ is a normal random variable with zero mean, and $x_2$ is equal to $+x_1$ or to $-x_1$, with equal probability. These two variables are uncorrelated and individually normally distributed, but they are not jointly normally distributed and are not independent. If $x$ is rotated by 45 degrees, its two components will still be uncorrelated, but their distribution will no longer be normal.
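This construction is easy to check empirically. A small sketch (my own, with invented names) estimates the correlation $\mathrm{E}[x_1 x_2]$, which vanishes, and the moment $\mathrm{E}[x_1^2 x_2^2] = \mathrm{E}[x_1^4] = 3$, which would equal $\mathrm{E}[x_1^2]\,\mathrm{E}[x_2^2] = 1$ if the variables were independent:

```python
import random

rng = random.Random(42)
n = 20000
sum_xy = 0.0     # estimates E[x1 * x2]       -> 0  (uncorrelated)
sum_x2y2 = 0.0   # estimates E[x1**2 * x2**2] -> 3  (= E[x1**4], not 1)
for _ in range(n):
    x1 = rng.gauss(0.0, 1.0)
    x2 = x1 if rng.random() < 0.5 else -x1   # +x1 or -x1 with equal probability
    sum_xy += x1 * x2
    sum_x2y2 += (x1 * x2) ** 2               # x1**2 * x2**2 = x1**4 here
corr = sum_xy / n
dep = sum_x2y2 / n
```

The estimate `corr` is near zero while `dep` is near 3 rather than 1, exhibiting uncorrelated but dependent components.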
In some situations one may relax the definition by allowing each component of a white random vector $w$ to have non-zero expected value $\mu$. In image processing especially, where samples are typically restricted to positive values, one often takes $\mu$ to be one half of the maximum sample value. In that case, the Fourier coefficient $W_0$ corresponding to the zero-frequency component (essentially, the average of the $w_i$) will also have a non-zero expected value $\mu\sqrt{n}$; and the power spectrum $P$ will be flat only over the non-zero frequencies.
### Continuous-time white noise
In order to define the notion of "white noise" in the theory of continuous-time signals, one must replace the concept of a "random vector" by a continuous-time random signal; that is, a random process that generates a function $w$ of a real-valued parameter $t$.
Such a process is said to be white noise in the strongest sense if the value $w(t)$ for any time $t$ is a random variable that is statistically independent of its entire history before $t$. A weaker definition requires independence only between the values $w(t_1)$ and $w(t_2)$ at every pair of distinct times $t_1$ and $t_2$. An even weaker definition requires only that such pairs $w(t_1)$ and $w(t_2)$ be uncorrelated.[15] As in the discrete case, some authors adopt the weaker definition for "white noise", and use the qualifier independent to refer to either of the stronger definitions. Others use weakly white and strongly white to distinguish between them.
However, a precise definition of these concepts is not trivial, because some quantities that are finite sums in the finite discrete case must be replaced by integrals that may not converge. Indeed, the set of all possible instances of a signal $w$ is no longer a finite-dimensional space $\mathbb{R}^n$, but an infinite-dimensional function space. Moreover, by any definition a white noise signal $w$ would have to be essentially discontinuous at every point; therefore even the simplest operations on $w$, like integration over a finite interval, require advanced mathematical machinery.
Some authors require each value $w(t)$ to be a real-valued random variable with some finite variance $\sigma^2$. Then the covariance $\mathrm{E}(w(t_1)\cdot w(t_2))$ between the values at two times $t_1$ and $t_2$ is well-defined: it is zero if the times are distinct, and $\sigma^2$ if they are equal. However, by this definition, the integral
$W_{[a,a+r]} = \int_a^{a+r} w(t)\, dt$
over any interval with positive width $r$ would be zero. This property would render the concept inadequate as a model of physical "white noise" signals.
Therefore, most authors define the signal $w$ indirectly by specifying non-zero values for the integrals of $w(t)$ and $|w(t)|^2$ over any interval $[a,a+r]$, as a function of its width $r$. In this approach, however, the value of $w(t)$ at an isolated time cannot be defined as a real-valued random variable. Also the covariance $\mathrm{E}(w(t_1)\cdot w(t_2))$ becomes infinite when $t_1=t_2$; and the autocorrelation function $\mathrm{R}(t_1,t_2)$ must be defined as $N \delta(t_1-t_2)$, where $N$ is some real constant and $\delta$ is Dirac's "function".
In this approach, one usually specifies that the integral $W_I$ of $w(t)$ over an interval $I=[a,b]$ is a real random variable with normal distribution, zero mean, and variance $(b-a)\sigma^2$; and also that the covariance $\mathrm{E}(W_I\cdot W_J)$ of the integrals $W_I$, $W_J$ is $r\sigma^2$, where $r$ is the width of the intersection $I\cap J$ of the two intervals $I,J$. This model is called a Gaussian white noise signal (or process).
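On a discrete grid this model reduces to summing independent Gaussian increments, which makes the stated variance and covariance easy to check by simulation. A sketch (the value of $\sigma$, the grid, and the intervals are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
n_steps = 500                      # grid on [0, 1]
dt = 1.0 / n_steps
n_trials = 5_000

# Discrete surrogate for Gaussian white noise: each grid step contributes an
# independent N(0, sigma^2 * dt) increment, and the "integral" of w over an
# interval is the sum of the increments it covers.
incr = rng.normal(0.0, sigma * np.sqrt(dt), size=(n_trials, n_steps))

W_I = incr[:, 0:300].sum(axis=1)    # integral over I = [0, 0.6]
W_J = incr[:, 200:500].sum(axis=1)  # integral over J = [0.4, 1.0]

print(W_I.var())           # ~ 0.6 * sigma^2 = 2.4
print(np.mean(W_I * W_J))  # ~ overlap width 0.2 * sigma^2 = 0.8
```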
## Mathematical applications
### Time series analysis and regression
In statistics and econometrics one often assumes that an observed series of data values is the sum of a series of values generated by a deterministic linear process, depending on certain independent (explanatory) variables, and on a series of random noise values. Then regression analysis is used to infer the parameters of the model process from the observed data, e.g. by ordinary least squares, and to test the null hypothesis that each of the parameters is zero against the alternative hypothesis that it is non-zero. Hypothesis testing typically assumes that the noise values are mutually uncorrelated with zero mean and the same Gaussian probability distribution — in other words, that the noise is white. If there is non-zero correlation between the noise values underlying different observations then the estimated model parameters are still unbiased, but estimates of their uncertainties (such as confidence intervals) will be biased (not accurate on average). This is also true if the noise is heteroskedastic — that is, if it has different variances for different data points.
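As a concrete sketch (the model and its parameter values are made up for illustration), ordinary least squares recovers the parameters of a linear model whose noise term is white:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Hypothetical linear model: y = 2 + 3*x + white Gaussian noise.
x = rng.uniform(0.0, 1.0, n)
y = 2.0 + 3.0 * x + rng.standard_normal(n)

# Ordinary least squares fit via numpy's least-squares solver.
X = np.column_stack([np.ones(n), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
print(b0, b1)  # close to the true parameters 2 and 3
```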
Alternatively, in the subset of regression analysis known as time series analysis there are often no explanatory variables other than the past values of the variable being modeled (the dependent variable). In this case the noise process is often modeled as a moving average process, in which the current value of the dependent variable depends on current and past values of a sequential white noise process.
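A minimal sketch of such a moving average process, here an MA(1) with an arbitrarily chosen coefficient:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

eps = rng.standard_normal(n)   # the driving white noise

# MA(1) process: y_t = eps_t + theta * eps_{t-1}
theta = 0.5
y = eps[1:] + theta * eps[:-1]

# Theory: the lag-1 autocorrelation is theta / (1 + theta^2) = 0.4,
# and autocorrelations at lags >= 2 vanish.
r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
r2 = np.corrcoef(y[:-2], y[2:])[0, 1]
print(r1, r2)
```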
### Random vector transformations
By a suitable linear transformation (a coloring transformation), a white random vector can be used to produce a "non-white" random vector (that is, a list of random variables) whose elements have a prescribed covariance matrix. Conversely, a random vector with known covariance matrix can be transformed into a white random vector by a suitable whitening transformation.

These two ideas are crucial in applications such as channel estimation and channel equalization in communications and audio. They are also used in data compression.
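A sketch of both transformations using a Cholesky factor of the target covariance matrix (the matrix and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# An arbitrary target covariance matrix for the "colored" vector.
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
L = np.linalg.cholesky(Sigma)     # coloring transformation

w = rng.standard_normal((2, n))   # samples of a white random vector
x = L @ w                         # colored: cov(x) is approximately Sigma

w_back = np.linalg.solve(L, x)    # whitening: invert the coloring

print(np.cov(x))       # ~ Sigma
print(np.cov(w_back))  # ~ identity
```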
## Generation
White noise may be generated digitally with a digital signal processor, microprocessor, or microcontroller. Generating white noise typically entails feeding an appropriate stream of random numbers to a digital-to-analog converter. The quality of the white noise will depend on the quality of the algorithm used.[16]
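A minimal sketch of digital generation: a buffer of independent uniform 16-bit samples of the kind that might be fed to a digital-to-analog converter (pure Python standard library; the sample rate and bit depth are arbitrary choices):

```python
import random

random.seed(42)

# One second of 16-bit white noise at an 8 kHz sample rate:
# independent uniform samples spanning the full converter range.
SAMPLE_RATE = 8000
samples = [random.randint(-32768, 32767) for _ in range(SAMPLE_RATE)]

# The samples should look uncorrelated; estimate the lag-1 autocorrelation.
mean = sum(samples) / len(samples)
var = sum((a - mean) ** 2 for a in samples)
lag1 = sum((a - mean) * (b - mean) for a, b in zip(samples, samples[1:]))
print(lag1 / var)  # near 0
```

As the article notes, the statistical quality of the result is only as good as the underlying random number generator.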
## Colors of noise

White noise is one of several "colors" of noise, alongside pink, red (Brownian), and grey noise.
## References
1. Diebold, Frank (2007). Elements of Forecasting (Fourth ed.).
2. Fusco, G; Garland, T., Jr; Hunt, G; Hughes, NC (2011). "Developmental trait evolution in trilobites". Evolution 66: 314–329.
3. Claire Shipman (2005), "The political rhetoric on Social Security is white noise." Good Morning America TV show, January 11, 2005.
4. Don DeLillo (1985), White Noise.
5. Jastreboff, P. J. (2000). "Tinnitus Habituation Therapy (THT) and Tinnitus Retraining Therapy (TRT)". Tinnitus Handbook. San Diego: Singular. pp. 357–376.
6. López, HH; Bracha, AS; Bracha, HS (September 2002). "Evidence based complementary intervention for insomnia". Hawaii Med J 61 (9): 192, 213. PMID 12422383.
7. Noell, Courtney A; William L Meyerhoff (February 2003). "Tinnitus. Diagnosis and treatment of this elusive symptom". Geriatrics 58 (2): 28–34. ISSN 0016-867X. PMID 12596495.
8. Soderlund, Goran; Sverker Sikstrom, Jan Loftesnes, Edmund Sonuga Barke (2010). "The effects of background white noise on memory performance in inattentive school children". Behavioral and Brain Functions 6 (1): 55.
9. Söderlund, Göran; Sverker Sikström, Andrew Smart (2007). "Listen to the noise: Noise is beneficial for cognitive performance in ADHD.". Journal of Child Psychology and Psychiatry 48 (8): 840–847. doi:10.1111/j.1469-7610.2007.01749.x. ISSN 0021-9630.
10. Loewen, Laura J.; Peter Suedfeld (1992-05-01). "Cognitive and Arousal Effects of Masking Office Noise". Environment and Behavior 24 (3): 381–395. doi:10.1177/0013916592243006. Retrieved 2011-10-28.
11. Baker, Mary Anne; Dennis H. Holding (July 1993). "The effects of noise and speech on cognitive task performance". Journal of General Psychology 120 (3): 339–355. ISSN 0022-1309.
12. Francis X. Diebold (2007), Elements of Forecasting, 4th edition.
13. Matt Donadio. "How to Generate White Gaussian Noise". Retrieved 2012-09-19.
http://mathoverflow.net/questions/72490/why-are-operads-useful/72496
Why are operads useful?
The question is not about where operads are used (I know that); it is about what makes them useful. For example, van Kampen diagrams are useful in combinatorial group theory because they are planar graphs, so one can use planar geometry (say, the Jordan curve theorem) to investigate the word problem in complicated groups. Similarly, asymptotic cones are useful in geometric group theory because they allow one to study large-scale properties of a discrete object (a group) by looking at small-scale properties of a continuous object. I would like to know a similar answer for operads.
Update: Many thanks to everybody for your answers. Unfortunately I can accept only one, so I just accept the first answer.
-
Something that takes less than 10 minutes and might be useful to you (if you have not already done this): Use "operads" as a search term in the MathOverflow article search, and skim over the results, picking one or two questions that appeal to you and glance over them. One that I liked was mathoverflow.net/questions/36222/… . Gerhard "Ask Me About System Design" Paseman, 2011.08.09 – Gerhard Paseman Aug 9 2011 at 17:40
@Gerhard: I have done so. The answer I am looking for (2-3 lines) is not there. – Mark Sapir Aug 9 2011 at 17:48
Unlike the example you give (van Kampen diagrams), I think (and someone will surely disagree with this) that operads are more immediately useful as a tool for defining things than for proving things. In particular, using operads we can define structures which might otherwise be combinatorially cumbersome to specify and to reason about. Once understood in terms of operads, it is often easier to prove things about such structures (e.g., loop spaces) and we can see similarities between a priori distinct structures (e.g., Steve's (1) below). – Michael A Warren Aug 9 2011 at 18:57
5
Operads are not a tool (like van Kampen diagrams). They are a language. Recognizing that something is an operad gives you a convienient language to manipulate it. You might ask yourself the same question about "groups". You could (any many people once did) manipulate collections of matrices that are closed under multiplication. Abstracting that clears away the extraneous clutter and gives you a framework to manipulate your group. – John_L Aug 11 2011 at 17:47
A nice survey by Bruno Vallette: math.unice.fr/~brunov/publications/… – Thomas Riepe Dec 29 2011 at 8:47
7 Answers
Here are a couple 2-3 line answers to your question:
1) They allow you to treat various algebraic problems uniformly. For example, Commutative, Associative, and Lie algebras all have their own cohomology theories (Harrison, Hochschild, and Chevalley-Eilenberg respectively). These can all be seen as instances of a single operad cohomology.
2) One can use operads to construct cohomology classes for the Mapping Class group and Out(Fn) (and others). The idea is to use graphs "colored" by operads, and construct a chain complex out of these colored graphs that computes the desired cohomology.
and perhaps the most classic answer:
3) They allow you to classify loop spaces and infinite loop spaces. For connected spaces, these are exactly classified as algebras over various operads.
-
@Steve: Thank you! It is close to what I want, but perhaps not quite the same. Your 2) and 3) are still about where operads are used, not quite what makes them useful. Are they useful because the notion is so general that almost anything is an operad and then you can use the theory of operads to study these things (something like the notion of algebraic system in algebra)? – Mark Sapir Aug 9 2011 at 18:24
@Steve: 1) sounds fascinating! I've never heard of operad cohomology. Could you give some references? – Chris Heunen Aug 10 2011 at 15:31
@Chris: try section 4.2 of "Koszul duality for Operads" by Ginzburg and Kapranov (arxiv.org/abs/0709.1228). The arXiv version doesn't seem to have any pictures though... – Steve Aug 10 2011 at 21:55
Operads come into play whenever you're dealing with a family of objects that have coherent operations. An inaccurate way to get it across but which gets quite close to the spirit would be to think about things like computing the number of ways of partitioning $10^{100}$ into $60$ subsets, vs. the generating function for partitions. Perhaps the generating function does not help you compute this one instance of the partition problem, but it does inform on some general aspects of the partition problem, such as asymptotics.
To make the analogy more complete, operads are relevant when you're dealing with things like:
• A universal algebra. This is in some sense the original operad idea -- operads were designed to be a category-independent notion of universal algebra.
• Whenever you have families of spaces that have families of ways in which they can be combined. Topological operads were designed to encode this kind of information. The first concrete instance of this was the cubes operad, which acts on iterated loop spaces. Cubes operads have the added benefit that they can be used to deduce that spaces are iterated loop spaces.
In that regard operads tend not to be used to completely plumb the depths of one particular object. They tend to be used to extract coherent information about a family of objects (unless of course your object is itself a "family of objects").
I think in group theory the way this kind of thing comes up would be when you take classifying spaces. For example, pure braid groups. There are various maps $P_n \times P_k \to P_{n+k-1}$ given by "blowing up" one of the strands of the $n$-stranded braid and inserting the $k$-stranded braid in its place. When you take classifying spaces, these are the structure maps for the operad of $2$-cubes. I suppose it's subjective, but the homology and cohomology of the pure braid groups have a far more pleasant exposition in this operadic framework -- and it's quite analogous to the initial generating function analogy -- than say in the language of explicit group cycles and cocycles.
So I suppose my point could be phrased as operads do not do anything, in some computational sense. They just hold information in a more pleasant way. They're more akin to a data type than an algorithm to perform a task.
-
@Ryan: Your answer is mostly about "what" rather than "why". Right? For example, why would anybody need a "category-independent notion of universal algebra" (and what does it mean exactly)? – Mark Sapir Aug 10 2011 at 8:36
@Mark: I think the why is at least partially addressed by the braid groups example. One way to look at groups is they act on things. But in this instance, the classifying spaces of pure braid groups is itself an "actor" -- acting on configuration spaces -- but acting in an operadic sense. So there's a type of higher symmetry among braid groups that the operad is catching. In algebraic topology universal algebras come up quite often -- things like algebras of cohomology operations. Cohomology as a module over these algebras are a more powerful object than just cohomology rings. – Ryan Budney Aug 10 2011 at 15:27
I find two different points of view useful.
1. Further to Steve's first answer, I would say that operads put many algebraic structures into one compact and useful meta-algebraic setting. Lie, associative, commutative, Poisson, Gerstenhaber, etc. All of these fit into one nice framework which then tells us how to define cohomology theories and study the deformation theory in each setting. This universal setup also tells us how to study generators and relations, homological algebra, duality theory, and so on. Operads, somewhat like category theory, allow one to see the common structure behind many a priori different worlds.
2. My other point of view is that operads, along with their siblings, the cyclic and modular operads, are all about studying structures that glue/compose along trees or graphs. Manifestations of this type of composition appear in topological field theory, infinite loop space theory, low dimensional topology, and all sorts of other places.
-
@Jeffrey: " studying structures that glue/compose along trees or graphs" - What does it mean? – Mark Sapir Aug 10 2011 at 8:30
For example, take pairs of pants, and their associated moduli spaces. If you glue a bunch of pants together then you get a surface and the dual graph to the pants decomposition is a trivalent graph. Pants aren't closed under composition because if you glue two you get a sphere with 4 holes instead of 3. So consider moduli spaces of genus zero surfaces with boundaries - these now have an operad structure. If you start gluing these genus zero guys together you get a surface, and the dual graph is an arbitrary graph. That's the sort of picture I meant. – Jeffrey Giansiracusa Aug 10 2011 at 12:04
One reason that operads are used is in obstruction theory. Suppose we have a CW-complex $X$ with basepoint and multiplication $\mu:X \times X \to X$ which is associative and unital up to homotopy, and we want to know if $X$ is homotopy equivalent to a topological monoid $X'$ by a map that, up to homotopy, respects the multiplication. This is a question about homotopy theory, but constructing a strictly associative multiplication is not amenable to methods of homotopy theory.
This is a question about the associative operad, but we can replace it by a question about an equivalent operad such as the collection of Stasheff associahedra. This has a simple presentation, as an operad, in terms of generators and relations, and so it is easier to classify actions on an object. This provides a sequence of obstructions in `$\pi_k Map(X^{k+3}, X)$` to finding on an object (and the choices are similarly parameterized).
Of course, as with many things in algebraic topology, this general method works far better to show something does not admit a multiplication, or when the obstructions occur in zero groups. However, it is difficult to attack such problems outside specific circumstances without using operads. (Perhaps someone else knows of methods that don't implicitly use operads; I don't.)
I'm not completely clear on what constitutes the difference between "where" and "why" in your statement, but I hope this qualifies.
-
There surely are many answers to this question... For me, one of the key reasons is that there are lots of situations where existing geometric and algebraic structures exhibit some kind of associativity. (Geometrically, think of gluing pants, as in Jeff's comment; algebraically, think of composing operations and/or cooperations.) The notion of an operad allows one to formalise this observation and treat objects like that as associative algebras in a certain monoidal category, and since we know an awful lot of ways to approach usual associative algebras, this gives intuition for how to approach problems for these more tricky objects. I think the analogy with your examples is quite clear.
-
I'm not an expert, but I can give you an answer based on my personal experience studying category theory.

Operads make it possible to carry out many constructions and to encode information about many mathematical objects in a single algebraic structure, and this algebraic structure is simpler to work with than the objects themselves.

In practice I think operads are similar to homotopy/homology groups, which encode homotopical information about topological spaces and make it possible to distinguish such spaces by studying algebraic structures instead. This is useful because it is simpler to classify groups than topological spaces.

More generally, I think the usefulness of all such structures derives from the fact that (concrete) algebraic structures are usually easier to work with, but I emphasise that these are just my thoughts.
-
I don't know the answer myself, but the book linked to (much of which is available on google books) purports to give several answers:
Martin Markl, Steve Shnider, Jim Stasheff (2002). Operads in Algebra, Topology and Physics. American Mathematical Society. ISBN 0821843621.
-
@Igor: Thanks! But I am looking for a brief (2-3 lines) statement, not a book. Also I am far away, about 6500 miles, from my University library now. – Mark Sapir Aug 9 2011 at 17:47
You may have enough access to Google Books to get satisfaction. You should probably get your own link to the book Igor suggested, but here is one that I used: books.google.com/… . Gerhard "Ask Me About System Design" Paseman, 2011.08.09 – Gerhard Paseman Aug 9 2011 at 17:56
Further, it may be enough to skim the history, preface, etc. Hopefully this won't take much more than another 10 minutes. Good luck in your search, and perhaps someone on MathOverflow will provde your 2-3 lines soon. Gerhard "Ask Me About System Design" Paseman, 2011.08.09 – Gerhard Paseman Aug 9 2011 at 17:59
The book by Markl, Shnider, and Stasheff is available in electronic form: gen.lib.rus.ec/… – Dmitri Pavlov Aug 9 2011 at 18:03
Someone down voted this? Strange. – Igor Rivin Nov 17 2011 at 18:24
http://nrich.maths.org/2769
An Introduction to Mathematical Structure
Stage: 5
Article by Emma and Gareth McCaughan
When I started to study "algebra" at university, I was surprised to discover that it looked nothing like the "algebra" I had studied at school. Gone were the algebraic expressions and quadratic equations, and in came a whole new set of words and symbols.
But it was still to do with generalising. In school-level algebra, we can generalise results that work for lots of different numbers (such as $(x-1)(x+1)\equiv x^2-1$), or find a formula that generalises a sequence of numbers ($n^{\textrm{th}}$ term $=3n+4$). The algebra studied at university makes connections between more disparate areas of mathematics, such as arithmetic, combinatorics and symmetry. This is very powerful; if we can show that two situations behave in the same way, then if we find something interesting about one situation, there will be an equivalent result in the other situation.
So algebraists look for ways to describe seemingly different situations in the same way. They will tend to describe them in terms of a set of elements, and one or more operations, which are ways of combining elements. This is quite difficult to understand without seeing some examples, so let's explore some:
1) Imagine taking the numbers 0, 1, 2 and 3. These are the elements. We're going to add them, but we'll do this "mod 4"; that just means that we'll write down the remainder when the answer is divided by 4. This is the operation. So, for example, 2 + 3 = 5 = 1 mod 4.
We can build up a table of the answers we get:
| + | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| 0 | 0 | 1 | 2 | 3 |
| 1 | 1 | 2 | 3 | 0 |
| 2 | 2 | 3 | 0 | 1 |
| 3 | 3 | 0 | 1 | 2 |
Here are some more sets of 4 elements, each time with an operation. Try to complete each table, then check your answers. Some involve arithmetic, some involve symmetry, and one involves looking very silly.
2) Take the numbers 2, 4, 6, 8 and multiply them mod 10 (so just write down the last digit).
| x | 2 | 4 | 6 | 8 |
|---|---|---|---|---|
| 2 |   |   |   |   |
| 4 |   |   |   |   |
| 6 |   |   |   |   |
| 8 |   |   |   |   |
3) Take the numbers 1, 2, 3, 4 and multiply them mod 5.
| x | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 |   |   |   |   |
| 2 |   |   |   |   |
| 3 |   |   |   |   |
| 4 |   |   |   |   |
4) Take the numbers 1, 3, 5, 7 and multiply them mod 8.
| x | 1 | 3 | 5 | 7 |
|---|---|---|---|---|
| 1 |   |   |   |   |
| 3 |   |   |   |   |
| 5 |   |   |   |   |
| 7 |   |   |   |   |
5) Take a square. Our elements this time will be "the rotations of a square". We could leave it as it is (call this I), we could turn it through 90° anticlockwise (call this Q), we could turn it through 180° (call this Q²), or we could turn it through 270° anticlockwise (call this Q³). Our operation this time is just doing one element after another. So Q²Q³ is turning through 180° then 270°, which is the same as turning through 90°, so the answer is Q.
|    | I | Q | Q² | Q³ |
|----|---|---|----|----|
| I  |   |   |    |    |
| Q  |   |   |    |    |
| Q² |   |   |    |    |
| Q³ |   |   |    |    |
6) Take a rectangle. Ideally you need a rectangle of clear plastic, with each corner painted a different colour, but paper will do. We're going to be interested in where the colours move to, but not which way up the plastic is. This time our elements will be the symmetries of a rectangle; the ways we could move it so that it is still in the same orientation. We could leave it as it is (I), or we could flip it vertically or horizontally (V, H), or we could rotate it through 180° (R). Again, the operation is just going to be doing one followed by another; to start you off, VH = R.
|   | I | V | H | R |
|---|---|---|---|---|
| I |   |   |   |   |
| V |   |   |   |   |
| H |   |   |   |   |
| R |   |   |   |   |
7) Take a T-shirt, one where the front and back are clearly different. In order that you don't get too many strange looks, you might like to try this out in the privacy of your bedroom! If you take the T shirt off and put it on again, there are four things you could do in between. You could leave it as it is (Same), you could turn it back to front (BTF), you could turn it inside-out (I-O), or you could do both of these (Both). These are our four elements, and the operation is just doing one after another.
|      | Same | BTF | I-O | Both |
|------|------|-----|-----|------|
| Same |      |     |     |      |
| BTF  |      |     |     |      |
| I-O  |      |     |     |      |
| Both |      |     |     |      |
Now have a good look at the tables you have completed, and look at the similarities and differences.
All these tables have a number of things in common:
• the only elements in the table are the ones we started with
• they all have one column and one row which shows the elements in the original order
• each element appears exactly once in each row and column
• they are all symmetrical about the "leading diagonal" (top left to bottom right)
Mathematicians call this structure a group. Not all groups have four elements (they could even have an infinite number), but they all have tables which share most of the properties above.
Put more formally, a group is a set of elements and an operation which have the following properties, where a, b, etc are elements, and * is the operation:
• closure; this means that when we combine two elements, we only get elements which are in the group;
• there is an identity element, $e$, such that for each element $a$, $e*a = a = a*e$
• each element $a$ has an inverse, $a^{-1}$, such that $a*a^{-1} = e = a^{-1}*a$
• associativity; this means that if we have an expression involving the operation twice, it does not matter which bit is done first: addition is associative as $a+(b+c)\equiv(a+b)+c$ , but subtraction is not, as $a-(b-c) \neq (a-b)-c$
The first three of these properties correspond directly to the properties we observed for the tables. The tables we constructed also have the associative property, but that can't easily be seen from the tables.
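For the small finite examples above, all four properties can be checked by brute force. Here is a sketch of such a check (the helper function is ours, not from the article):

```python
def is_group(elements, op):
    """Brute-force check of closure, identity, inverses and associativity."""
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False  # not closed
    ids = [e for e in elements
           if all(op(e, a) == a == op(a, e) for a in elements)]
    if len(ids) != 1:
        return False  # no identity element
    e = ids[0]
    if any(all(op(a, b) != e for b in elements) for a in elements):
        return False  # some element has no inverse
    return all(op(a, op(b, c)) == op(op(a, b), c)
               for a in elements for b in elements for c in elements)

print(is_group([0, 1, 2, 3], lambda a, b: (a + b) % 4))   # True  (example 1)
print(is_group([2, 4, 6, 8], lambda a, b: (a * b) % 10))  # True  (example 2; the identity is 6!)
print(is_group([0, 1, 2, 3], lambda a, b: (a * b) % 4))   # False (0 has no inverse)
```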
All the tables we constructed were symmetrical about the leading diagonal, but this symmetry is not part of the definition of a group. However groups that have this property are important enough to have a special name: they're called abelian groups after the Norwegian mathematician Niels Abel. (Many scientists and mathematicians have things named after them; the real challenge is to discover or invent something which is used so much that it loses its capital letter. Abel managed this, even though he died when he was only 26 years old!)
Look at the groups above and identify which element is the identity element in each group. You could also identify the inverse of each element. Some elements are self-inverse: $a*a = e$.
We've looked at the properties shared by the tables above. Now look again at the tables. How many different tables are there?
You probably thought there were three. However, try filling in these two again:
2) (multiplication mod 10)
| x | 6 | 8 | 2 | 4 |
|---|---|---|---|---|
| 6 |   |   |   |   |
| 8 |   |   |   |   |
| 2 |   |   |   |   |
| 4 |   |   |   |   |
3) (multiplication mod 5)
| x | 1 | 3 | 2 | 4 |
|---|---|---|---|---|
| 1 |   |   |   |   |
| 3 |   |   |   |   |
| 2 |   |   |   |   |
| 4 |   |   |   |   |
Rearranging the elements shows you that all of these examples fit one of just two structures.
We have seen that the tables are not reliable as a way of distinguishing; one useful way to start to describe individual groups is to look at the order of the elements in the group. The order of an element is the number of times it needs to be combined with itself to get the identity element. For instance, if we turn the T-shirt inside-out twice, we're back where we started, so the order of this element is 2. If we rotate a square through 90°, we have to do it 4 times, whereas if we rotate it through 180°, the order is just 2. The identity element obviously has order 1, and elements which are self-inverse have order 2.
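The orders can also be computed mechanically. A small sketch (the helper function is ours, not from the article), applied to example 1 (addition mod 4) and example 4 (multiplication mod 8):

```python
def order(a, op, e):
    """Smallest k >= 1 such that combining a with itself k times gives e."""
    k, x = 1, a
    while x != e:
        x = op(x, a)
        k += 1
    return k

# Example 1 is cyclic: the element 1 has the same order as the group.
print([order(a, lambda x, y: (x + y) % 4, 0) for a in [0, 1, 2, 3]])
# -> [1, 4, 2, 4]

# Example 4: every element other than the identity is self-inverse.
print([order(a, lambda x, y: (x * y) % 8, 1) for a in [1, 3, 5, 7]])
# -> [1, 2, 2, 2]
```

The two lists of orders are different, which is one way to see that these two tables cannot be rearrangements of each other.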
There are in fact only two different groups of order 4 (consisting of 4 elements).
V, or K4 (Klein-4) is the name given to a group with 4 elements where all elements other than the identity are self-inverse. (Why V? It's the first letter of the German word for "four".)
C4 is the name given to a cyclic group of 4 elements. A cyclic group is one where one (or more) element is of the same order as the group; all the other elements are created by combining this element with itself.
With a bit of thinking, you may be able to convince yourself that there are no other groups of order 4. There are some hints on this at the end of the article.
This article has been looking at just one kind of mathematical structure, the group. In fact, it's only been looking at groups of order 4. This, of course, barely scratches the surface. Groups can have any number of elements; they can even have infinitely many elements. For instance, take the integers as elements and addition as your operation; the result is a group. (Check those properties!)
Now, the integers are interesting because they have more structure than addition alone gives them: you can multiply integers, too. Multiplication of integers isn't a group operation because most elements don't have inverses: 1/2 isn't an integer. The non-zero integers under multiplication form what's called a semigroup, which more or less means "like a group, but without inverses". But semigroups are pretty boring; matters start to get more interesting when you put addition and multiplication together. The result is a structure called a ring, which means something like "some elements, a group operation, and a semigroup operation, where the two operations are related by the distributive law". This is interesting because rings have enough structure to do all kinds of useful things with, but on the other hand they have little enough structure that lots of things in mathematics either are rings or can easily be turned into rings.
There are lots of different sorts of mathematical structure: semigroups, groups, rings, fields, modules, groupoids, vector spaces, and so on and so on. They're all based on the same insight: that when something interesting (like the integers) turns up, you should try to work out what the basic facts about it are that make it interesting, and then look for other things that share those basic facts - that is, other instances of the same structure. Those links give you a flavour; actually understanding all this stuff takes a lot of time at university!
Mathematicians have even gone one step further and asked: What about this whole business of mathematical structures? What's its structure? The answer to that turns out to be a whole new area of mathematics called "category theory". It's not for the faint-hearted!
Coming down from these stratospheric heights of abstraction, there's an awful lot more even to finite groups than we've seen here. For instance: in a certain (rather complicated) way, all finite groups can be built out of building-blocks called the finite simple groups. (It's a bit like the way that all positive integers can be built out of prime numbers.) There are a few infinite families of finite simple groups, where each member of the family is built in basically the same way; and there are just 26 other finite simple groups that don't belong to any of those families. The biggest of those groups is called "the Monster" and it has exactly 808017424794512875886459904961710757005754368000000000 elements. When mathematicians say something's "simple" they don't mean quite the same as normal people do!
Further exploration
Try proving that there are only two different groups of order 4. The best strategy for this is to try to construct a group which is not K4. It must have an identity element e, and at least one element a which is not self-inverse, together with its inverse a⁻¹. If you try to construct a table, sooner or later you will establish that the only possibility turns out to be C4.
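If you prefer brute force to pencil and paper, the claim can also be checked by machine. This Python sketch (my own, not from the article) tries every possible operation table on {0, 1, 2, 3} with 0 as the identity, keeps the ones satisfying the group axioms, and counts how many are left after relabelling the elements:

```python
from itertools import product, permutations

# Fix the set {0,1,2,3} with 0 as the identity, so row 0 and column 0 of every
# candidate multiplication table are forced; only the 3x3 block varies.
def is_group(t):
    for row in t:                        # quick filter: rows must be permutations
        if sorted(row) != [0, 1, 2, 3]:
            return False
    for a in range(4):
        for b in range(4):
            for c in range(4):
                if t[t[a][b]][c] != t[a][t[b][c]]:   # associativity
                    return False
    return True

groups = []
for cells in product(range(4), repeat=9):
    t = [[0, 1, 2, 3]] + [[i] + list(cells[3 * (i - 1):3 * i]) for i in (1, 2, 3)]
    if is_group(t):
        groups.append(t)

def canonical(t):
    # Smallest flattened table over all relabellings that fix the identity 0.
    forms = []
    for p in permutations((1, 2, 3)):
        m = (0,) + p
        inv = [m.index(i) for i in range(4)]
        forms.append(tuple(m[t[inv[a]][inv[b]]] for a in range(4) for b in range(4)))
    return min(forms)

print(len(groups), len({canonical(t) for t in groups}))  # 4 tables, 2 groups
```

Four tables pass the axioms, but only two survive relabelling: C4 and K4, as claimed.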
We've looked at a couple of symmetry groups; one was the rotations of a square, and the other was the symmetries (both reflections and rotations) of a rectangle.
Using the Shuffles resource, you can explore the symmetries of various regular polygons. For example, the set of symmetries of an equilateral triangle, the set of rotations of a regular hexagon and the set of reflections of a regular hexagon all have six elements. Are they groups? (Check the properties.) Are they the same group?
Are there any elements of order 3 in the groups explored in this article? Can you explain? What about groups with different numbers of elements? What orders are the elements in these groups?
You might like to read the earlier NRICH article Small Groups. There are various problems involving groups in the March 2005 issue of the site. In particular, you might like to look at "What's a group?"
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://physics.stackexchange.com/questions/tagged/momentum+electromagnetism
# Tagged Questions
### Can electromagnetic momentum be introduced at pre-university level as for electromagnetic energy?
Electromagnetic energy is introduced at pre-university level, starting with static electric energy followed by static magnetic energy. But the introduction of electromagnetic momentum usually has to ...
### Violation of Newton's 3rd law and momentum conservation
Why and when is Newton's 3rd law violated in relativistic mechanics? Check this link http://www.animations.physics.unsw.edu.au/jw/Newton.htm.
### Does constraint for speed of Electric and magnetic fields violates Conservation of momentum or Newton's third law?
I'm just a beginner so bear with me. Consider two frames at rest wrt to each other separated by distance enough for light to take a minute or so. At a given instant we create two large dipoles by some ...
### Lorentz force in Dirac theory and its classical limit
It is well known that in Dirac theory the time derivative of $P_i=p_i+A_i$ operator (where $p_i=∂/∂_i$, $A_i$ - EM field vector potential) is an analogue of the Lorentz force: \$\frac{dP_i}{dt} = ...
### Why electromagnetic waves propagating along x transfers to electron momentum along z?
Why EM waves having only x momentum transfers to electron z momentum? Electron begins oscillating along z, so will not radiate EM waves along z direction, to compensate its z momentum. It seems that ...
### How can there be net linear momentum in a static electromagnetic field (not propagating)?
I understand from basic conservation of energy and momentum considerations, it is clear in classical electrodynamics that the fields should be able to have energy and momentum. This leads to the usual ...
http://math.stackexchange.com/questions/131625/time-to-take-for-water-to-cool-down?answertab=oldest
# Time to take for water to cool down.
My question refers to the cooling of water in a well-insulated, cylindrical cup that is open at the top.
$0.336$ kg of water at $84\,^\circ$C is poured into the cup, which is left to stand on a table at an ambient temperature of $25\,^\circ$C. The specific heat capacity of water is $4150$ J kg$^{-1}$ K$^{-1}$ and the U-value is $10.64$ W m$^{-2}$ K$^{-1}$. The area of the water surface exposed to the surrounding air is $0.00447$ m$^2$.
So, how do I calculate the time taken in seconds for the water to cool down to $46\,^\circ$C?
This is what I have done so far....
$$\begin{align*} \text{Let } x & = \text{(U-value)(Surface Area)/(mass)(Heat Capacity)}\\ \text{Let } y_1 &= 46\,^\circ\text{C} \ (\text{cool-down temp})\\ \text{Let } y_2 &= 84\,^\circ\text{C} \ (\text{water temp})\\ \text{Let } y_3 &= 25\,^\circ\text{C} \ (\text{ambient temp}) \end{align*}$$
Taking time to be $T$, so...
$$\begin{align*} T & = \dfrac{-1}{x} \log \frac{y_1-y_3}{y_2-y_3}\\ & = -13153 (\text{sec}) \end{align*}$$
I got a negative result, so my guess is that something is wrong here. The formula is from my book. I need help...
-
## 1 Answer
The formula that you used does not produce a negative number. Note that $\frac{y_1-y_3}{y_2-y_3}$ is between $0$ and $1$, so its logarithm is negative. The minus sign in front of your formula then produces a positive number.
Remark: Do check what kind of log is involved in the formula. You used logarithm to the base $10$. It may be that logarithm to the base $e$ (natural logarithm) is intended. I cannot tell from the numbers, since the difference can be absorbed into the definition of $U$-value. (Whether you use base $10$ or base $e$, the log of $\frac{21}{59}$ is negative.)
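Plugging the numbers in with the natural log gives a positive cooling time (a quick Python sketch using the constants from the question):

```python
from math import log

m, c = 0.336, 4150        # mass (kg), specific heat (J kg^-1 K^-1)
U, A = 10.64, 0.00447     # U-value (W m^-2 K^-1), exposed area (m^2)
x = U * A / (m * c)       # Newton-cooling rate constant (s^-1)

# T = -(1/x) * ln((y1 - y3)/(y2 - y3)), with the natural log as suggested above
T = -log((46 - 25) / (84 - 25)) / x
print(round(T), "seconds")   # roughly 30000 s, i.e. about 8.4 hours
```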
-
I saw my mistake. Its log base e....Thanks for the help =D – Jack Apr 14 '12 at 9:11
http://mathoverflow.net/questions/tagged/alterations
## Tagged Questions
### Slight Alteration to a Diophantine Result
Hello all! In the pursuit of a minor research problem I was pointed in the direction of an interesting result in the realm of Diophantine Analysis. The content of the result follo …
### Alterations of regular varieties
Let $X$ be a regular quasi-projective variety over a perfect field $k$. The existence of a "good compactification" of $X$, i.e. a regular projective variety $\bar{X}$ with an embed …
http://math.stackexchange.com/questions/203153/sorting-algorithm-analysis-on-a-list-of-0-and-1-element/203590
# Sorting Algorithm analysis on a list of 0 and 1 element.
I'm trying to understand the difference it would make if the following sorting algorithms are given a set of binary inputs, i.e. a collection of 0s and 1s only.
a) Heapsort b) Quicksort c) MergeSort d) Insertion Sort.
I'm looking for the difference in the number of comparisons required for sorting the list.
Exact question: how does the restriction to 0 and 1 elements affect the total number of comparisons done, and what is the resulting $\Theta$ bound? From my perspective there won't be any change in merge sort and insertion sort, as they would require the same number of comparisons.
However, on a very different line of thought: if we know something about the data (i.e. that the entries are 0 or 1), then the decision tree won't have $n!$ leaves; we can reduce it to far fewer. I'm not sure about this decision-tree idea. Please share your thoughts on this.
-
Have you considered asking this on the computerScience.SE beta site? – Rick Decker Sep 27 '12 at 18:47
Nopes, I'm not aware of this website. Can you give me exact url? (computerScience.se doesn't load) – Rohit Kandhal Sep 27 '12 at 18:54
## 2 Answers
Mergesort is an oblivious algorithm, which is to say that it will perform the same steps (except for those involved in merging) for each input sequence, so its average- and worst-case times will be $\Theta(n\log n)$ on inputs restricted to $0/1$ sequences.
Insertion sort becomes interesting for your inputs, but it's not too hard to see that the worst-case running time will still be $O(n^2)$: Look at the number of swaps that will have to be done on the input $\langle\; 1, 0, 1, 0, \dots , 1, 0\;\rangle$. The average performance is, as usual, more involved and I'm not ready to claim any results for that.
Quicksort is even more interesting, since the first call to partition will leave the array in sorted order after $O(n)$ swaps. If QS were written with that in mind, then its behavior would change from $O(n\log n)$ on average (or $O(n^2)$ in the worst case) to $O(n)$. If this fact weren't identified, then after the first partition, the subsequent ones would split each subarray into two pieces, one containing a single element and the other containing all the rest, leading to $O(n^2)$ performance in both average- and worst-case.
Oops! I just noticed that heapsort was also on your list. I'll have to get back to you on that.
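The insertion-sort claim above can be checked by counting swaps directly on the alternating input (a Python sketch, not part of the original answer):

```python
def insertion_sort_swaps(a):
    """Insertion sort; returns the number of adjacent swaps performed."""
    a, swaps = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
            swaps += 1
    return swaps

# On <1,0,1,0,...,1,0> of length n = 2k, the k-th zero moves past k ones,
# so the total is 1 + 2 + ... + k = k(k+1)/2 -- quadratic in n.
for k in (1, 2, 5, 50):
    assert insertion_sort_swaps([1, 0] * k) == k * (k + 1) // 2
print("quadratic swap count confirmed")
```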
-
Thanks for your response. :) I too got somewhat similar result but wasn't sure whether they are correct or not. Here are my results: Insertion Sort: theta(n^2), If either no. of 1's or 0's are constant then it is theta(n) Merge Sort: theta(nlogn), No change even if 1's or 0's are constant. Heapsort: theta(nlogn), If no. of 1's are constant then theta(nlogn). If no. of 0's are constant then theta(n). QUickSort: theta(n^2), No change if no. of 0's or 1's are constant. – Rohit Kandhal Sep 27 '12 at 18:51
Best case = data already sorted. Average case = some data sorted, some data unsorted. Worst case = data totally unsorted.
Merge sort: best $= N\log N$, average $= N\log N$, worst $= N\log N$
Insertion sort: best $= N$, average $= N^2$, worst $= N^2$
Quicksort: best $= N\log N$, average $= N\log N$, worst $= N^2$
-
I know about these formulas, I want to know what difference would it make with the input values between 0 and 1 only. Would there be any change theta bound of these algo? I think there will be some but I'm not able to figure out the appropriate way to findout the same. – Rohit Kandhal Sep 27 '12 at 5:18
http://mathoverflow.net/questions/65627?sort=votes
## Formulaic definitions
In Jech's Set Theory, p. 194, I read - as a comment on the definition of ordinal-definable sets ("A set X is ordinal-definable if there is a formula such that [...]") -:
It is not immediately clear that the property "ordinal-definable" is expressible in the language of set theory.
One just has to show that there is an equivalent definition that is.
a) Cannot every formulaic definition be translated into a set-theoretic one by gödelization?
b) If gödelization is not what Jech means: are there "working" formulaic definitions (working = used in practice) that cannot be translated into a set-theoretic one?
-
## 2 Answers
Definability is a slippery concept (see this previous MO answer), and the subtle fact here is that although the class of ordinal-definable sets is definable, in general we have no way to define the class of definable sets.
The answer to your questions (a) and (b) is to realize that the crucial difference is whether you have just one formula, which is no problem, or whether you intend to quantify over all formulas, which is where the problems arise.
For any given fixed formula $\varphi$, of course, the class $\{\ x\ \mid\ \varphi(x)\ \}$ is definable, since it is the formula $\varphi$ that defines it. Thus, the notion of "being defined by the formula $\varphi$" is a first-order expressible property of $x$ in a direct way, and this is probably what you are thinking.
But the class of hereditarily ordinal definable sets wants to include the sets that are definable from ordinal parameters using any formula, not just one fixed formula. In this case, it turns out that one may not easily generalize the direct approach above. Indeed, the collection $\{\ \langle x,{\ulcorner}\varphi{\urcorner}\rangle\ \mid\ \varphi(x)\ \}$ is never a definable class in any model of set theory. This fact is known as Tarski's theorem on the non-definability of truth. There is an easy proof using the Gödel fixed point lemma: if truth were definable, then we could make a sentence asserting its own non-truth, and this is self-contradictory.
One way to think about it is that as $\varphi$ increases in complexity, the assertion that $\varphi(x)$ holds of a set $x$ becomes increasingly complex. If we had a single formula that could assert "$\varphi(x)$ is true," then this formula would have some fixed complexity, and the complexity hierarchy would collapse. But we can prove that the complexity hierarchy is strict, and so there can be no such formula defining truth.
Meanwhile, for a fixed set $M$, we may define the class $\{\ \langle x,{\ulcorner}\varphi{\urcorner}\rangle\ \mid\ M\models\varphi(x)\ \}$, since satisfaction in a set is defined by induction on formulas. It is a curious situation that Tarski is credited both with defining truth (by this inductive definition) and with proving that truth is not definable (in the non-definability of truth theorem above). Note that when $M$ is a fixed set, the complexity of saying $M\models\varphi(x)$ is merely $\Delta_0(x)$, since all quantifiers have been bounded by the set $M$, or if one thinks of $\varphi$ varying, then it is $\Delta_1(x,{\ulcorner}\varphi{\urcorner})$, since one must quantify to get access to the satisfaction-in-$M$ relation. (Note that in non-standard models, the inductive definition applies to non-standard $\varphi$.)
The observation of the previous paragraph is the key to showing that HOD is a definable class, since if $x$ is definable in $V$ by a formula with ordinal parameters, then by the reflection theorem it is definable in some large $V_\alpha$ by the same definition, and so
• $\text{HOD}=\{\ x \ \mid\ \exists \alpha\ x\text{ is definable in }V_\alpha\ \}$
Perhaps this way of thinking about HOD accords a little better with the way you want to think about HOD.
-
@Joel, thanks for the elucidating answer! Could you please be a bit more specific with regard to "as $\phi$ increases in complexity, the assertion that $\phi(x)$ holds of a set $x$ becomes increasingly complex". Do you mean that the complexity of "$\phi(x)$ holds for $x$" must be greater than that of $\phi(x)$? – Hans Stricker May 23 2011 at 6:33
No, I just mean that the complexity of saying "$\varphi$ holds of $x$" has the same complexity as $\varphi$. So if you think of this as an assertion about $x$ and about ${\ulcorner}\varphi{\urcorner}$, then we should expect it to be more complex than any one formula. And indeed, this is why it is not expressible. – Joel David Hamkins May 23 2011 at 15:24
a) Yes. For example, "definable" cannot be translated into a set theoretic definition.
b) Yes, see (a).
-
"definable" = there exists a set which is the gödel number of a formula $\phi$ such that $X = \lbrace x\ |\ \phi(x)\rbrace$? – Hans Stricker May 21 2011 at 8:10
Now you would need to define `$\{x\; |\; \phi(x)\}$`. – Ricky Demer May 21 2011 at 8:35
I believed that's an unproblematic abbreviation: $\Phi(\lbrace x\ |\ \phi(x)\rbrace)$ abbreviates $(\exists ! X) \Phi(X) \wedge (x)\ x \in X \equiv \phi(x)$ – Hans Stricker May 21 2011 at 10:48
Are there supposed to be parentheses around everything after $(\exists ! X)$? In either case, that only defines `$\Phi(\{x\; |\; \phi(x)\})$`, not $\{x\; |\; \phi(x)\}$. – Ricky Demer May 21 2011 at 18:10
Why isn't my definition enough? Where else would you want to use the term $\lbrace x | \phi(x)\rbrace$ than in a formula? – Hans Stricker May 23 2011 at 6:54
http://mathoverflow.net/questions/32857/lipschitz-orthonormal-frames-on-submanifolds-of-mathbfrn
## Lipschitz orthonormal frames on submanifolds of $\mathbf{R}^n$ ?
Suppose we are given a $d-$dimensional submanifold of $\mathbf{R}^n$ with a trivial normal bundle, whose $d-$dimensional volume is $V$ and has a non-self-intersecting tube of radius $r$ around it. Can one obtain an explicit upper bound $L$ such that there must exist a frame of orthonormal vector fields that is $L-$Lipschitz (with respect to the Euclidean norm on both the frame and the manifold)?
-
Have you worked this out when $d = 1$? That seems like a somewhat easier initial case to understand. – Deane Yang Jul 22 2010 at 0:38
## 2 Answers
You will probably need to require $C^2$ smoothness of your submanifold. Take a simple example of $d=1$ and the manifold the graph of the function $f(x) = x^\alpha$ for $x > 0$ and $f(x) = 0$ if $x \le 0$. Then for $1 < \alpha < 2$, the normal bundle is only Hölder continuous, so no $L$ exists at $x = 0$. Or would you exclude this counterexample for the reason that the exponential map from the normal bundle is not a diffeomorphism onto any ball of radius $r > 0$ at $x = 0$?
Generally, you will probably need to have a bound on the extrinsic curvature, for which $C^2$ and compactness would suffice.
-
Sorry, I meant an orthonormal frame for the normal bundle that is Lipschitz. Supposing the manifold is closed: when $d=1$ the manifold is $S^1$, so can't one just do parallel transport along the curve starting from a point $x$ all the way back and then use a constant-rate rotation to match up at $x$?
-
You might possibly be having a problem of browser losing the cookies. Thus you are shown as multiple users. You might want to contact the moderators to merge them. – Anweshi Jul 25 2010 at 6:53
And then normally you can comment on questions for additional remarks, without needing to add them as answer. – Anweshi Jul 25 2010 at 6:55
Thanks a lot, but I think I won't be shown as the first user any more, so it should be fine. – Hari Jul 25 2010 at 7:41
http://math.stackexchange.com/questions/43119/real-world-applications-of-prime-numbers/43143
# Real world applications of prime numbers?
I am going through the problems from Project Euler and I notice a strong insistence on primes and on efficient algorithms for computing large primes.
The problems are interesting per se, but I am still wondering what the real world applications of primes would be.
What real tasks require the use of prime numbers?
Edit: A bit more context to the question: I am trying to improve myself as a programmer, and having learned a few good algorithms for calculating primes, I am trying to figure out where I could apply them.
The explanations concerning cryptography are great, but is there nothing else that primes can be used for?
-
Thanks, that's a great answer. Is there any other use besides cryptography? – Sylverdrag Jun 4 '11 at 4:14
As far as "real tasks" (if you don't consider mathematical research to be a real task) cryptography is the main use, though no doubt they make appearances in many other algorithms used all over the place, they don't have the "leading role", as it were, that they have in cryptography. – Arturo Magidin Jun 4 '11 at 4:17
No offense taken. When my dad's advisor was teaching a course in automata theory in the sixties, a student asked "Is there any practical application of automata theory?" After thinking about it for about 10 seconds, he replied "I know that at least me and thirty odd other people in the country make a living by doing automata theory. If you can come up with something more practical than that, let me know." – Arturo Magidin Jun 4 '11 at 21:08
Beside cryptography is coding theory. Random number generators, error correcting codes, and hashes often involve primes: either directly or indirectly. Another not so obvious (indirect) application: many libraries which perform arithmetic on large integers, or polynomials involve reductions modulo primes (see Hensel's lemma) for computational complexity reason. – user2468 Jul 16 '12 at 15:56
## 11 Answers
The most popular example I know comes from Cryptography, where many systems rely on problems in number theory, where primes have an important role (since primes are in a sense the "building blocks" of numbers).
Take for example the RSA encryption system: All arithmetic is done modulo $n$, with $n=pq$ and $p,q$ large primes. Decryption in this system relies on computing Euler's phi function, $\varphi(n)$, which is hard to compute (hence the system is hard to break) unless you know the prime factorization of $n$ (which is also hard to compute unless you know it upfront). Hence you need a method to generate primes (the Miller-Rabin primality checking algorithm is usually used here) and then you construct $n$ by multiplying the primes you have found.
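As an illustration of the scheme just described, here is a toy RSA round-trip in Python (a sketch with the textbook primes 61 and 53, far too small for real use):

```python
# Toy RSA with tiny primes -- illustration only, not secure.
p, q = 61, 53
n = p * q                  # public modulus 3233
phi = (p - 1) * (q - 1)    # 3120; easy to compute only because p and q are known
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent, e*d = 1 (mod phi); Python 3.8+

msg = 65
cipher = pow(msg, e, n)    # encrypt: m^e mod n
assert pow(cipher, d, n) == msg   # decrypt: c^d mod n recovers m
print(cipher, d)           # 2790 2753
```

An attacker who sees only $n$ and $e$ must factor $n$ to reach $\varphi(n)$ and hence $d$; with primes of hundreds of digits that factorization is what makes the system hard to break.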
-
Note that this encryption system will be utterly useless as soon as quantum computers are reasonably usable. – akkkk Nov 30 '12 at 22:48
Indeed. However, it is still not clear whether quantum computers will even be reasonably useful at a level that allowed them to break real-world RSA ciphers, and meanwhile RSA is used practically everywhere, so RSA is a good example of practical use of primes even if someday it will be obsolete. – Gadi A Dec 3 '12 at 9:45
Here is a hypothesized real-world application, but it's not by humans...it's by cicadas.
Cicadas are insects which hibernate underground and emerge every 13 or 17 years to mate and die (while the newborn cicadas head underground to repeat the process). Some people have speculated that the 13/17-year hibernation is the result of evolutionary pressures. If cicadas hibernated for X years and had a predator which underwent similar multi-year hibernations, say for Y years, then the cicadas would get eaten if Y divided X. So by "choosing" prime numbers, they made their predators much less likely to wake up at the right time.
(It doesn't matter much anyway, because as I understand it, all of the local bug-eating animals absolutely gorge themselves whenever the cicadas come out!)
EDIT: I should have refreshed my memory before posting. I just re-read the article, and the cicadas do not hibernate underground. They apparently "suckle on tree roots". The article has a few other mild corrections to my answer, as well.
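The divisibility argument can be made concrete with least common multiples (a Python sketch; the 12-year comparison is my own illustration, assuming `math.lcm` from Python 3.9+):

```python
from math import lcm

# A cicada with an X-year cycle meets a Y-year predator every lcm(X, Y) years.
# With a prime cycle length X, lcm(X, Y) = X*Y for every smaller Y,
# which is as rare as the meetings can possibly be.
for Y in range(2, 9):
    print(Y, lcm(12, Y), lcm(13, Y))
# e.g. a 12-year cicada meets a 4-year predator every 12 years,
# while a 13-year cicada meets it only every 52 years.
```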
-
I somehow don't think 13 and 17 are "large primes" that need computing, though, even if you are a cicada... – Arturo Magidin Jun 4 '11 at 4:54
Cicada's don't have the computing power that we do, so they stuck with smaller primes. Anyway, I realize my answer is not quite was the OP was looking for, but I still thought it was neat. – Jeff Jun 4 '11 at 4:58
Still, it's a very nice "real world application of primes". – Gadi A Jun 4 '11 at 4:58
And the computation is not done by the cicadas anyway, but by the predators who ate all the 15- and 16-year cicadas. – MJD Jul 16 '12 at 16:14
@Jeff: to expand on Mark's answer, it's not a matter of a computational power, because the burden of proof is on the predators. It's more likely because 13 and 17 were the smallest primes that allowed them to avoid most predators. A hypothetical group of 89-year period cicadas would grow much more slowly while not avoiding many more predators, so it would not be favored by evolution. – Generic Human Jul 25 '12 at 13:46
You can use prime numbers to plot this fine pattern :)
Intensity of green colour for each pixel was calculated using a function, which can be described with this pseudocode snippet:
````g_intensity = ((((y << 32) | x) ^ ((x << 32) | y)) * 15731 + 1376312589) % 256
````
where x and y are pixel coordinates in screen space, stored in 64-bit integer variables.
-
+1 designing carpets :) – user2468 Jul 16 '12 at 15:52
Nice picture! FWIW this is equivalent to `((x^y)*115 + 13) % 256` and it has nothing to do with prime numbers, but rather with the fact that 115 is odd and has a binary representation that is "random enough". – Generic Human Jul 25 '12 at 14:05
Just to add one more: Primes are also useful when generating Pseudo-Random Numbers with the computer. A few formulas use them to avoid patterns in the output.
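A classic concrete example is the Park-Miller ("minimal standard") Lehmer generator, which a comment below also points to (a sketch):

```python
# Park-Miller/Lehmer generator: the modulus 2^31 - 1 is prime and 16807 is a
# primitive root mod it, so the sequence visits every value in 1 .. 2^31 - 2
# before repeating.
M, A = 2**31 - 1, 16807

def lehmer(seed=1):
    x = seed
    while True:
        x = (A * x) % M
        yield x

g = lehmer(1)
print([next(g) for _ in range(3)])  # [16807, 282475249, 1622650073]
```

The primality of the modulus is what guarantees the full period: every multiplication by a primitive root permutes the nonzero residues.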
-
that sounds interesting. Any specific example? – Sylverdrag Jun 4 '11 at 6:58
The most basic case is probably this: en.wikipedia.org/wiki/Lehmer_random_number_generator it was also asked a few days ago here math.stackexchange.com/questions/41847/… – Listing Jun 4 '11 at 7:05
When I was some 20 years old and living by myself for the first time, I designed a little racetrack with numbered squares on it, along with a handful of coloured tokens that would race along the track at the speed of one square per day. Each token had a household chore and a prime number on it; when a token hit its number, I had to carry out the given task, and it would get reset to zero. So, I washed the dishes every two days, watered the plants every three, vacuumed the carpet every five, ....
It was a good system. It made cleaning fun, it provided variety and structure at the same time, and I was obliged to devote the entire day to chores only once every 1397.73 years.
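The arithmetic behind that last figure is just the least common multiple: because the periods are distinct primes, every chore lands on the same day only after the product of all the periods. The post doesn't list all of the chores, so the seven periods below (the first seven primes) are an assumption chosen to reproduce the quoted figure:

```python
from math import prod

chore_periods = [2, 3, 5, 7, 11, 13, 17]  # assumed: the first seven primes
all_chores_day = prod(chore_periods)      # LCM of distinct primes = their product
years = all_chores_day / 365.2425         # Gregorian mean year length
assert all_chores_day == 510510
assert abs(years - 1397.73) < 0.01        # matches the quoted ~1397.73 years
```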
-
Primes are also useful for generating hash codes.
-
How would they be used for that purpose? Is it different from the cryptographic use? – Sylverdrag Jun 4 '11 at 6:56
The requirements for a hash are a little different: you want to minimize collisions and you don't really care whether the "encoding" is easy to undo or not. Though both randomizing functions and encryption functions can be used to generate hashes. – trutheality Jun 4 '11 at 7:05
Another reason prime numbers are used is that when the size of a hash table is prime, collisions are less likely. – trutheality Jun 4 '11 at 7:07
Maybe you want to expand your answer - explain what hash codes are and how primes are used to generate them. – Gadi A Jun 4 '11 at 8:49
No I don't. I'm severely underqualified for that. Those interested are better off searching for further details on Wikipedia or Google. – trutheality Jun 4 '11 at 8:56
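To make the hashing comments a little more concrete: a common prime-based construction is the polynomial rolling hash. The constants below (prime base 31, prime modulus $10^9+7$) are conventional choices for illustration, not anything specified in this thread:

```python
def poly_hash(s: str, p: int = 31, m: int = 10**9 + 7) -> int:
    # Horner evaluation of s as a polynomial in the prime base p, mod prime m.
    # Prime p and m reduce systematic collisions from structured inputs.
    h = 0
    for ch in s:
        h = (h * p + ord(ch)) % m
    return h

assert poly_hash("abc") != poly_hash("acb")  # order of characters matters
```

The same reasoning is why hash-table sizes are often chosen prime: taking the hash modulo a prime spreads keys that share a common factor more evenly across buckets.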
Like yourself, I got into primes because writing a prime-finding program was a common exercise when learning new programming languages, and it was interesting to see which language was faster on the same algorithm and error-checking plan.
It was only when I was refining my Ada coded program to get the highest number of primes that I could get from a 32-bit machine that I came across the offset logarithmic integral. (I needed to reserve enough - but not too much - memory for my holding array for the primes. The array, of course, had to be declared prior to making any assignments to it. On a 1 GB memory 32-bit machine, I can get primes up to ~ 50 million before stack blows.)
$${\rm Li} (x) = \int_2^x \frac{dt}{\ln t}$$
This function gives a remarkably good approximation to the number of primes up to a given number x.
All I'm saying here is that this equation made me wonder about primes in the context of a number of other things that use related functions . . . That led me on to thinking about entropy calculations, particularly about selecting compositions more likely to give rise to metastable crystal forms - possibly even glasses - than other compositions using the same constituent elements.
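As a sketch of the comparison behind this, the snippet below counts primes with a sieve and evaluates the integral above numerically (midpoint rule). At $x = 10^5$ the integral overshoots the true count by roughly 37, which is what the tolerance reflects:

```python
from math import log

def prime_count(n: int) -> int:
    """pi(n) via the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

def li(x: float, steps: int = 100_000) -> float:
    """Midpoint-rule approximation of the integral of 1/ln t from 2 to x."""
    h = (x - 2) / steps
    return sum(h / log(2 + (i + 0.5) * h) for i in range(steps))

n = 10 ** 5
pi_n, li_n = prime_count(n), li(n)
assert pi_n == 9592           # exact count of primes below 10^5
assert abs(li_n - pi_n) < 45  # Li(x) is within ~37 of the true count here
```

The slight overshoot of Li(x) is also why it works well as an array-size estimate: it errs on the safe side in this range.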
-
Not sure this answers the question... – J. M. Jul 16 '12 at 16:07
A metastable phase of an existing substance is effectively a whole new material. It has its own individual properties, some (e.g. magnetic properties of metallic glasses, mechanical properties of diamond-like carbon, abrasive properties of cubic boron nitride, . . ) potentially very useful to mankind. The mathematical approach to predicting such compositions likely to obviate the usual kinetics of crystallisation has to be cheaper and simpler than existing approaches, like rapid solidification, huge external magnetic fields, phase prediction based on existing thermochemical data, etc. – Deek Jul 16 '12 at 19:12
Yes indeed, modern cryptography is a useful branch which makes extensive use of prime numbers. A real-world application is the use of large primes to encode information sent wirelessly when making transactions with our debit cards, credit cards, computers, etc., in order to keep our information safe. Now when I say real world I don't mean the physical world: the primes do their work inside computers, which we then use in the physical world, if that makes any sense at all. Prime numbers had little practical use until about the 19th century, when mathematicians experimented with them in hopes of uncovering some breakthrough. When wartime came around, the U.S. defense needed a way to keep high-level confidential information secret, so files and messages all had to be encoded so that enemy lines could not retrieve vital information about plans and routines. Encryption was used, and computers came into play to create longer and more complex prime-based codes that would be much harder for anyone to crack. Primes can also be used in pseudorandom number generators and computer hash tables. There are even biological instances in which primes appear: the predator-prey model suggests that cicadas with prime-numbered life cycles have a higher survival rate. And of course there is public-key encryption, the best-known example of which is RSA.
There are many named classes of prime numbers; two of the best known are Fermat primes and Mersenne primes.
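A quick sanity check of those two families (plain trial division is enough at this size); note that a prime exponent alone does not make $2^p-1$ prime:

```python
def is_prime(n: int) -> bool:
    # Trial division: fine for the small numbers below.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

fermat = [2 ** (2 ** k) + 1 for k in range(5)]       # 3, 5, 17, 257, 65537
mersenne = [2 ** p - 1 for p in (2, 3, 5, 7, 13)]    # 3, 7, 31, 127, 8191
assert all(is_prime(m) for m in fermat + mersenne)
assert not is_prime(2 ** 11 - 1)  # 2047 = 23 * 89: a prime exponent isn't enough
```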
Have a look at this video here from Terence Tao. Structure and Randomness in Prime Numbers
Articles Here:
Treatment on Primes, They are the very top 9 links by Terry Tao and others.
Powerpoint Link in First Paragraph
-
A simple answer is finding the GCF (greatest common factor) and LCD (least common denominator) of whole numbers, which allows us to efficiently manipulate fractions, both arithmetic and algebraic. Another is rationalizing and simplifying radical expressions. Prime number manipulation is a basic and not-so-basic tool of mathematics.
-
There may be some applications (other than to cryptography, already mentioned) in Manfred Schroeder's book, Number Theory in Science and Communication.
-
Prime numbers are used in public-key cryptography. They work well because multiplying two very large primes is easy, while factoring the product back into those primes is computationally infeasible, so they are great for codes and for keeping things safe.
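To sketch why that works, here is the classic textbook RSA toy example (the specific numbers are the standard tiny-prime illustration; real keys use primes hundreds of digits long):

```python
p, q = 61, 53                # two small primes (far too small for real use)
n = p * q                    # 3233: the public modulus, easy to compute...
phi = (p - 1) * (q - 1)      # 3120: ...but phi requires knowing the factors of n
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent (modular inverse): 2753
message = 65
cipher = pow(message, e, n)  # encrypt with the public pair (e, n)
assert pow(cipher, d, n) == message  # decrypt with the private exponent d
```

Anyone can encrypt with (e, n), but recovering d requires phi, and hence the prime factors of n.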
-
http://mathoverflow.net/questions/23247/approximating-holomorphic-maps-by-holomorphic-embeddings/23251
## Approximating holomorphic maps by holomorphic embeddings
Let $\mathrm{Hol}^d(\Sigma, \mathbb{C} \mathbb{P}^n)$ denote the space of holomorphic maps of degree $d$ from a Riemann surface $\Sigma$ to complex projective space of dimension $n$. Let $\mathrm{HolEmb}^d(\Sigma, \mathbb{C} \mathbb{P}^n)$ denote the subspace of those holomorphic maps which are also embeddings, so there is an inclusion $$i : \mathrm{HolEmb}^d(\Sigma, \mathbb{C} \mathbb{P}^n) \longrightarrow \mathrm{Hol}^d(\Sigma, \mathbb{C} \mathbb{P}^n).$$
If we were discussing smooth maps, instead of holomorphic, Whitney's embedding theorem would say that this map is approximately $(\frac{n}{2}-2)$-connected. Is there a connectivity range for this map of spaces of holomorphic maps?
-
Perhaps I misunderstood which space you were referring to by "the space", because there are two. What I am asking about is the relative connectivity of two spaces, holomorphic embeddings and holomorphic maps. Surely the argument you gave applies to both of the spaces, and shows that neither is simply connected: what does this mean for the relative connectivity though? – Oscar Randal-Williams May 3 2010 at 8:30
Oscar, huge thanks, I see now that I completely misunderstood your question :) So I deleted my answer. – Dmitri May 3 2010 at 9:08
## 2 Answers
I will assume that d is the degree of the pull-back of $\mathcal{O}(1)$ to $\Sigma$ and that it is sufficiently large with respect to the genus g of $\Sigma$. In this case, the dimension of the space of holomorphic maps of $\Sigma$ in $\mathbb{P}^n$ is $$D_n := (n+1)d + n(1-g) ,$$ while the dimension of the space of holomorphic maps that are not isomorphisms onto their image is $$(n+1)d + n(1-g) -n+2 = D_n -(n-2) .$$ In particular, since the inclusion that you are interested in has complement of (complex) codimension n-2, it follows that it is "quite connected", roughly (2n-1)-connected?
Unless I made some mistakes in my computations, the estimates for the dimensions above are only valid if d is sufficiently large, otherwise they should simply be lower bounds on the actual dimensions of the spaces. In the case of n=2 you obviously must allow singularities in the image, but you can also prove that the locus where the morphism is not a local embedding (i.e. when the derivative is not injective somewhere) has codimension one.
-
At least for CP^2, HolEmb is empty unless g=(d-2)(d-1)/2 by the adjunction formula.
-
http://mathhelpforum.com/advanced-algebra/214158-linear-algebra-help-print.html
Linear Algebra help
• March 3rd 2013, 11:00 AM
Tweety
Linear Algebra help
$\{v_{1}, v_{2}, v_{3}\}$ is a basis for a three-dimensional real vector space V. Show that the set $\{v_{1} + v_{2},\ v_{2} + v_{3},\ v_{3} + v_{1}\}$ is also a basis for V.
I am finding it really hard to understand the concept of finding bases, and my notes don't make sense. I am not even sure where to start. Do I first find a spanning set?
• March 3rd 2013, 11:16 AM
MacstersUndead
Re: Linear Algebra help
Note that if $w_1 = v_1 + v_2, w_2 = v_2+v_3, w_3 = v_3+v_1$ then
$w_1-w_2+w_3 = 2v_1$ and so $v_1 = (1/2)(w_1-w_2+w_3)$
The same can be done to find a linear combination of w's for $v_2,v_3$. Can you finish?
• March 3rd 2013, 12:39 PM
ILikeSerena
Re: Linear Algebra help
Hi Tweety! :)
To show the new set is a basis for V, you should show that each of the original vectors can be constructed from the new set.
Does that make sense?
• March 6th 2013, 07:51 AM
Tweety
Re: Linear Algebra help
so somehow show $v_{1} = v_{1} + v_{2}$? Still not sure where to start though
• March 6th 2013, 07:52 AM
Tweety
Re: Linear Algebra help
Quote:
Originally Posted by MacstersUndead
Note that if $w_1 = v_1 + v_2, w_2 = v_2+v_3, w_3 = v_3+v_1$ then
$w_1-w_2+w_3 = 2v_1$ and so $v_1 = (1/2)(w_1-w_2+w_3)$
The same can be done to find a linear combination of w's for $v_2,v_3$. Can you finish?
But that's not a linear combination?
• March 6th 2013, 08:39 PM
MacstersUndead
Re: Linear Algebra help
The new set that ILikeSerena is referring to is your new set of w vectors. If you can show that each v vector is a linear combination of w vectors, then you are done. Here is why: let t be an arbitrary vector in V. Since t is a vector in V, with the v's as the basis, t can be expressed as $t = av_1 + bv_2 + cv_3$. If you can show that $v_1,v_2,v_3$ are linear combinations of w vectors, then you can replace each $v_i$ with w's.
I have given you one such combination. (Verify by replacing each w vector by the sum of v vectors and algebra) In this case by simple replacement $t = a[(\frac{1}{2})(w_1-w_2+w_3)] + bv_2 + cv_3$
Combine like terms after you express $v_2$ and $v_3$ in terms of w's and observe the arbitrary vector t in the vector space V is a linear combination of w's, hence the set of w vectors is a basis.
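Not part of the thread, but a quick way to double-check the argument above: write each w in coordinates with respect to $\{v_1, v_2, v_3\}$ and verify that the change-of-basis matrix is invertible (nonzero determinant), then recover MacstersUndead's combination for $v_1$:

```python
# Coordinates of w1 = v1+v2, w2 = v2+v3, w3 = v3+v1 in the basis {v1, v2, v3},
# one column per w vector.
M = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]

def det3(m):
    # Cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

assert det3(M) == 2  # nonzero => the w's are linearly independent, hence a basis

# Check v1 = (1/2)(w1 - w2 + w3) coordinatewise:
combo = [(M[i][0] - M[i][1] + M[i][2]) / 2 for i in range(3)]
assert combo == [1.0, 0.0, 0.0]  # exactly the coordinates of v1
```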
http://www.physicsforums.com/showthread.php?p=3626476
Physics Forums
## What would you pay...?
Quote by Moonbear I know I'm risk averse when it comes to gambling. I'd pay $2. I can't really see any reason to pay more if you can win all the higher prizes regardless of how much you spent on the game. Perhaps you meant the question to be, "How much could someone ask to pay to play and you'd still be willing to try?" I probably still wouldn't play for more than $2, but you might get more people to play if charged $4 or $8, considering they play lottery tickets for that much.
You could interpret 'willing to pay' as if there were an auction on the right to play once.
What is amusing is that the same theory that correctly says all casino gambling and all lottery games (unless the winning amount rolls over unclaimed a number of times) are foolish, claims that you should be willing to bid any amount to play this game once.
Quote by DaveC426913 Uh. Isn't that what I just said?
Sorry, data transmission error. The message I "received" was something like "why is the OP asking how much you would pay on a rational basis, when there is no mathematically correct answer?". Which is not what you wrote!
The game host should pay me at least 51 cents to play his game, of course! (Since 1+1+1+...=-1/2) lol ;) kidding...
Quote by nucl34rgg The game host should pay me at least 51 cents to play his game, of course! (Since 1+1+1+...=-1/2) lol ;) kidding...
50 cents. Be fair!
I was tempted to post this answer myself some time ago.
This is discussed here: http://www.physics.harvard.edu/acade.../problems.html If anyone is interested. It's toward the bottom, Week 6.
I think the answer is that you should play the game if you have to pay 4 dollars or less. (If you pay $2, you are guaranteed to always at least break even. If you pay $4, you have a 50-50 shot at at least getting your money back or making more. If you pay anything over $4, you have less than a 50% chance to win, so you shouldn't play.)
Since the question is what I would pay, I would pay zero, so that any winnings would be pure profit. Frankly, I think the question (and the game rules) are both poorly worded.
I would pay $1.99. Guaranteed to win, baby!
Quote by larrybud Since the question is what I would pay, I would pay zero, so that any winnings would be pure profit.
And what if the person running the game said "No thanks" to that offer? What is the top amount you would be willing to pay to play? Keep in mind that the expected value is infinite. Also keep in mind that the only way to obtain this expected value is to flip an infinite number of tails in a row, which is something you will not be able to do in a finite amount of time. (And you will presumably die in a finite amount of time.)
Quote by D H And what if the person running the game said "No thanks" to that offer? What is the top amount you would be willing to pay to play? Keep in mind that the expected value is infinite. Also keep in mind that the only way to obtain this expected value is to flip an infinite number of tails in a row, which is something you will not be able to do in a finite amount of time. (And you will presumably die in a finite amount of time.)
Most forms of gambling (and lottery) are just taxes on stupid people. If you're not literally enjoying the act of gambling, then the value of the game might be negligible to some.
In my opinion, this is a boring game... I would elect to low-ball.
Quote by FlexGunship Most forms of gambling (and lottery) are just taxes on stupid people.
Everyone gambles many times a day. They just don't know it. Hopping into the car to drive to work, run errands, or go out on the town is a gamble. You might be killed or injured by someone with failed brakes or by some fool on a cell phone. Every election of how to allocate one's retirement account is a gamble. It doesn't matter whether its the riskiest REIT that can be found or a supposedly safe money market account.
Quote by D H Everyone gambles many times a day. They just don't know it. Hopping into the car to drive to work, run errands, or go out on the town is a gamble. You might be killed or injured by someone with failed brakes or by some fool on a cell phone. Every election of how to allocate one's retirement account is a gamble. It doesn't matter whether its the riskiest REIT that can be found or a supposedly safe money market account.
I see a qualitative distinction between
- performing activities that one need to do accomplish things in one's daily life, knowing those activities carry a risk of failure, and
- taking a risk purely for the thrill of the possible win.
Strictly using the term gambling, I would apply it to the latter but not the former.
Quote by DaveC426913 I see a qualitative distinction between - performing activities that one need to do accomplish things in one's daily life, knowing those activities carry a risk of failure, and - taking a risk purely for the thrill of the possible win. Strictly using the term gambling, I would apply it to the latter but not the former.
Where would you put investing in a mutual fund? Starting your own business? Objectively, the latter has a relatively low probability of success, but a favorable expectation (or so you judge).
I have an objection to the reasoning that the expected gain is infinite. Rather, the expectation value $E(X)$ is infinite. However, one must ask "how does $E(X)$ acquire its usual meaning of the expected gain?". The answer is: "due to some limit theorem". These limit theorems (e.g. the law of large numbers) require that $E(X)$ is finite. Consequently, the math doesn't seem to tell us anything about the expected gain.
I'd bet $4. You have a 50/50 chance of losing $2 on the first toss. If you don't lose, you have a 50/50 chance of winning $4 ($8 - $4 = $4) on the next. You also keep doubling up for every consecutive tails you throw after the first one. If you throw 3 tails in a row you have $16 and a 50/50 chance to double it on the next toss, as well as after every additional tails thrown.
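The thread's tension between the infinite expectation and what anyone would actually pay shows up immediately in simulation. This is a sketch using the payoff convention from the posts above (a $2 pot that doubles on every consecutive tails before the first heads):

```python
import random
from statistics import mean

def play_once(rng: random.Random) -> int:
    """One round: the pot starts at $2 and doubles for each tails
    thrown before the first heads; you win the pot."""
    pot = 2
    while rng.random() < 0.5:  # tails: double the pot and flip again
        pot *= 2
    return pot

rng = random.Random(0)  # seeded for reproducibility
payoffs = [play_once(rng) for _ in range(100_000)]
assert all(p >= 2 and p & (p - 1) == 0 for p in payoffs)  # powers of two
# Despite E[X] being infinite, the sample mean stays modest,
# growing only logarithmically in the number of rounds played.
print(f"average payoff over {len(payoffs)} rounds: ${mean(payoffs):.2f}")
```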
http://physics.stackexchange.com/questions/55291/form-of-the-classical-em-lagrangian/55319
# Form of the Classical EM Lagrangian
So I know that for an electromagnetic field in a vacuum the Lagrangian is $\mathcal L=-\frac 1 4 F^{\mu\nu} F_{\mu\nu}$, the standard model tells me this. What I want to know is if there is an elementary argument (based on symmetry perhaps) as to why it has this form. I have done some searching/reading on this, but have only ever found authors jumping straight to the expression and sometimes saying something to the effect that it is the "simplest possible".
Thanks for the great answers, I accepted the one that was exactly what I was looking for, but the other two long ones were equally interesting and useful.
-
## 5 Answers
The Lagrangian for electromagnetism follows uniquely from requiring renormalizability and gauge invariance (plus parity and time-reversal invariance).
## U(1) gauge Invariance
if you require your Lagrangian to be locally invariant under symmetry operations of the unitary group U(1) that is under
$$\phi\to e^{i\alpha(x)}\phi$$
all derivatives $\partial_\mu$ have to be replaced by the covariant derivative $D_\mu = \partial_\mu+ieA_\mu$, where, in order to preserve local invariance, the gauge field is introduced. Loosely speaking this is necessary to make fields at different spacetime points comparable. Since two points may have an arbitrary phase difference, due to the fact that we can set $\alpha(x)$ as we wish, something has to compensate this difference before we can compare fields, which is what differentiation basically does. This is similar to parallel transport in general relativity (the mathematical keyword is "connection", see wiki: Connection (wiki)). The gauge field $A_\mu$ transforms as $A_\mu \to A_\mu - \frac{1}{e}\partial_\mu\alpha(x)$.
Now the question is what kind of Lagrangians we can build with this requirement. For matter (i.e. non-gauge) fields it's easy to construct gauge invariant quantities by just replacing the derivatives with the covariant derivatives, i.e.
$$\bar{\psi}\partial_\mu\gamma^\mu\psi\to \bar{\psi}D_\mu\gamma^\mu\psi$$,
this will yield kinetic terms for the field (the part with the normal derivative), and interactions terms between matter fields and the gauge field.
## Gauge-Field only terms
the remaining question is how to construct terms involving only the gauge field and no matter fields (i.e. the 'source-free' terms your question is about). For this we must construct gauge-invariant terms out of $A_\mu$.
Once $\alpha(x)$ is chosen we can imagine starting from a point and walking along a loop back to that same point (this is called a Wilson loop (wiki)). This must necessarily be gauge invariant, since any phase that we pick up on the way out we must also lose on the way back. It turns out that this is exactly the term $F_{\mu\nu}$, i.e. the field strength (the calculation is a little longer, see Peskin & Schroeder page 484). Actually this is only true for abelian symmetries such as U(1); for non-abelian ones such as SU(3) we also get interaction terms between the gauge fields themselves, which is why light does not interact with itself but gluons do.
Bilinear mass terms such as $A_\mu A^\mu$ are not gauge invariant (ultimately this is why the Higgs mechanism is needed).
## Renormalizability
If we want our theory to be renormalizable, we can only include terms in the Lagrangian up to mass dimension 4. Listing all such terms, we arrive at
$$\mathcal{L} = a\cdot\bar{\psi}D_\mu\gamma^\mu\psi - m\bar{\psi}\psi - b\cdot F_{\mu\nu}F^{\mu\nu} + d\cdot \epsilon^{\alpha\beta\gamma\delta}F_{\alpha\beta}F_{\gamma\delta}$$
the last term involves the anti-symmetric tensor $\epsilon^{\alpha\beta\gamma\delta}$ and is therefore not time and parity invariant.
Note that we have not included linear terms here since we will be expanding around a local minimum anyway, so that the linear term will vanish.
## Conclusion
if we require U(1) gauge invariance, renormalizability (mass dimension up to 4), and time and parity invariance, we only get
$$\mathcal{L} = a\cdot\bar{\psi}D_\mu\gamma^\mu\psi - m\bar{\psi}\psi - b\cdot F_{\mu\nu}F^{\mu\nu}$$
In the source-free case this is
$$\mathcal{L} = - b\cdot F_{\mu\nu}F^{\mu\nu}$$
the overall factor $\frac{1}{4}$ is not important.
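The gauge-invariance claim above is easy to check symbolically. Below is a sketch (assuming sympy is available) verifying that $F_{\mu\nu}$ is unchanged under $A_\mu \to A_\mu - \partial_\mu\alpha$, taking $e=1$; the invariance boils down to mixed partials of $\alpha$ commuting:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)

# Four arbitrary gauge-potential components A_mu(t, x, y, z)
A = [sp.Function(f'A{mu}')(*X) for mu in range(4)]
# Arbitrary gauge function alpha(t, x, y, z)
alpha = sp.Function('alpha')(*X)

def field_strength(pot):
    # F_{mu nu} = d_mu A_nu - d_nu A_mu
    return [[sp.diff(pot[nu], X[mu]) - sp.diff(pot[mu], X[nu])
             for nu in range(4)] for mu in range(4)]

F = field_strength(A)
# Gauge transform A_mu -> A_mu - d_mu alpha (setting e = 1)
A_gauged = [A[mu] - sp.diff(alpha, X[mu]) for mu in range(4)]
F_gauged = field_strength(A_gauged)

# F is gauge invariant: the alpha contributions cancel term by term
assert all(sp.simplify(F[m][n] - F_gauged[m][n]) == 0
           for m in range(4) for n in range(4))
```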
-
The most economical derivation of the Maxwell equations (i.e., depending on the least number of postulates) is known nowadays by the name of "Feynman's proof of the Maxwell equations". This "proof" was discovered by Feynman in 1948, but was not published because Feynman did not think that it leads to new physics. It was only in 1989 after Feynman's death when his proof was published by Dyson, please see the Dyson's article.
After being published by Dyson, Feynman's proof was found to contain very deep ideas in Poisson geometry. Its generalization to the non-Abelian case leads to the Wong equations describing the motion of a point particle in an external Yang-Mills field (please see the exposition by Montesinos and Abdel Perez-Lorenzana), which in turn is related to Kaluza-Klein theory. It explains various mechanical ideas, for example the phenomenon of the falling cat. The whole subject is known today by the name of sub-Riemannian geometry.
The postulates of the Feynman's proof are the following:
1. The position and velocity of the particle satisfy the canonical Poisson bracket relation with the position.
2. The acceleration of a charged particle in an electromagnetic field is a function of the position and velocity only.
Under these assumptions only, Feynman proved (please see the Dyson's article for details) that the electromagnetic force must satisfy the Lorentz law and the electromagnetic field must satisfy the homogeneous Maxwell equations.
-
The Maxwell term
$$\tag{1} {\cal L}_{\rm Maxwell}~=~-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$$
emerges naturally for many reasons.
1) Pure EM without matter. There is a very short list of special-relativistic renormalizable terms that one can put in a local Lagrangian density with no higher-order time derivatives and that are gauge-invariant (up to boundary terms). Among this shortlist of candidates, the Lagrangian density (1) is the only one (modulo an overall normalization factor and modulo boundary terms) that leads to the Maxwell equations (without source terms). The Born-Infeld Lagrangian density is an example of a non-local candidate.
2) EM coupled to matter. Additional conditions arise when one tries to couple EM to point charges. One may argue that the relativistic Lagrangian for $n$ point charges $q_1, \ldots, q_n$, at positions ${\bf r}_1, \ldots, {\bf r}_n$, in an EM background is given as
$$\tag{2} L ~=~ -\sum_{i=1}^n \left(\frac{m_{0i}c^2}{\gamma({\bf v}_i)} +q_i\{\phi({\bf r}_i) - {\bf v}_i\cdot {\bf A}({\bf r}_i)\} \right).$$
See also this Phys.SE answer. This Lagrangian (2) reproduces, e.g., the Lorentz force correctly. From eq. (2) it is only a small step to conclude that the interaction term ${\cal L}_{\rm int}$ between EM and matter [in the $(-,+,+,+)$ sign convention] is of the form
$$\tag{3} {\cal L}_{\rm int}~=~J^{\mu}A_{\mu}.$$
Also recall that Maxwell's equations with sources are
$$\tag{4} d_{\nu} F^{\nu\mu}~=~-J^{\mu}.$$
If the action
$$\tag{5} S[A]~=~\int\! d^4x~ {\cal L}$$
is supposed to be varied wrt. the $4$-gauge potential $A_{\mu}$, i.e. the $4$-gauge potential $A_{\mu}$ are the fundamental variables of the theory, and if moreover the corresponding Euler-Lagrange equations
$$\tag{6} 0~=~\frac{\delta S}{\delta A_{\mu}} ~\stackrel{?}{=}~d_{\nu} F^{\nu\mu}+J^{\mu}$$
are supposed to reproduce Maxwell's equations (4), it quickly becomes clear that
$$\tag{7} {\cal L}~=~{\cal L}_{\rm Maxwell}+{\cal L}_{\rm int}$$
is the appropriate Lagrangian density for EM with background sources $J^{\mu}$.
3) For a discussion of formulating EM without the $4$-gauge potential $A_{\mu}$, see also this Phys.SE post.
-
Varying it with respect to the four-potential yields Maxwell's equations. That's really the only answer to why any classical Lagrangian has the form that it does - because it yields the correct field equations.
-
But of course there are other Lagrangians that do the same thing. And that won't do as a derivation, suppose you didn't know Maxwell's equations beforehand. – Sean D Feb 27 at 11:33
Yes, the symmetry argument is correct: it has to be invariant under local gauge transformations.
-
That makes sense, but it's not clear that there couldn't be other Lagrangians which are also invariant. – Sean D Feb 27 at 11:34
http://mathoverflow.net/questions/59822?sort=votes
When is $\mathrm{Out}(SL_n(R))$ a torsion group?
This question is a follow up question to this question. So my question is:
For which rings $R$ (commutative, with unit) (and which integers $n$) is $\mathrm{Out}(SL_n(R))$ a torsion group? A consequence of Theorems A and B in O'Meara, The automorphisms of the linear groups over any integral domain, is that this is the case (for $n\ge 3$) for any integral domain whose underlying additive abelian group is finitely generated.
However this is just a computation and I am wondering, whether this question has already been studied somewhere more systematically or if there are other results that also have such a corollary.
-
1 Answer
This question has been studied even for non-commutative associative rings. See, for example Golubchik, I. Z.; Mikhalëv, A. V. Isomorphisms of the general linear group over an associative ring. Vestnik Moskov. Univ. Ser. I Mat. Mekh. 1983, no. 3, 61–72. They prove that if the ring $R$ contains $1/2$ (that is, $2$ is invertible), then every isomorphism $\phi: GL_n(R)\to GL_n(R)$, $n\ge 3$, is standard on the subgroup $GE_n(R)$ generated by the elementary and diagonal matrices. "Standard" means "generated by an automorphism of $R$".
-
http://polymathprojects.org/2012/07/13/minipolymath4-project-second-research-thread/
# The polymath blog
## July 13, 2012
### Minipolymath4 project, second research thread
Filed under: polymath proposals — Terence Tao @ 7:49 pm
It’s been almost 24 hours since the mini-polymath4 project was opened in order to collaboratively solve Q3 from the 2012 IMO. In that time, the first part of the question was solved, but the second part remains open. In other words, it remains to show that for any sufficiently large $k$ and any $N$, there is some $n \geq (1.99)^k$ such that the second player B in the liar’s guessing game cannot force a victory no matter how many questions he asks.
As the previous research thread is getting quite lengthy (and is mostly full of attacks on the first part of the problem, which is now solved), I am rolling over to a fresh thread (as is the standard practice with the polymath projects). Now would be a good time to summarise some of the observations from the previous thread which are still relevant here. For instance, here are some relevant statements made in previous comments:
1. Without loss of generality we may take N=n+1; if B can’t win this case, then he certainly can’t win for any larger value of N (since A could simply restrict his number x to a number up to n+1), and if B can win in this case (i.e. he can eliminate one of the n+1 possibilities) he can also perform elimination for any larger value of N by partitioning the possible answers into n+1 disjoint sets and running the N=n+1 strategy, and then one can iterate one’s way down to N=n+1.
2. In order to show that B cannot force a win, one needs to show that for any possible sequence of questions B asks in the N=n+1 case, it is possible to construct a set of responses by A in which none of the n+1 possibilities of x are eliminated; that is, each x belongs to at least one of the sets indicated by any block of k+1 consecutive answers from A.
3. It may be useful to look at small examples, e.g. can one show that B cannot win when k=1, n=1? Or when k=2, n=3?
It seems that some of the readers of this blog have already obtained a solution to this problem from other sources, or from working separately on the problem, so I would ask that they refrain from giving spoilers for this question until at least one solution has been arrived at collaboratively.
Also, participants are encouraged to edit the wiki as appropriate with new developments and ideas, and participate in the discussion thread for any meta-discussion about the polymath project.
## 21 Comments »
1. If k=1, n=1, then A chooses N=2, x=1. To the question "is x=1?" or "is x=2?", A answers yes, unless the previous question was the same and he answered yes to it, in which case he answers no. To the question about the set containing both 1 and 2, A answers yes. Then he never tells 2 lies in a row and never breaks the symmetry between 1 and 2, so it will be impossible for B to figure out which number A has chosen.
Comment by kristalcantwell — July 13, 2012 @ 8:15 pm
2. Just to rephrase the question, as it takes me a long time to figure out how B can extract information if A just acts randomly:
Suppose that B asks a series of questions, and A answers a sequence of Yes and No. We call A’s answers a “dubious” sequence, call it Ans, indexed by i.
For every element x in {1,2,…,N}, we can construct a “truthful” sequence, call it Truth(x), with i being the index, where A always gives the correct answer given that x is the secret number.
Now B can exclude the possibility of y being secret number, if Truth(y) and Ans differ in k+1 consecutive terms (*). Phrasing in this way, A’s lying strategy would be looking at the set of all truthful sequences given B’s question, and choose Yes or No to prevent scenario (*) from happening.
Comment by prospector — July 14, 2012 @ 3:22 am
• Comparing sequences Ans and Truth(y), we could define Lie(y)_j := the total number of consecutive lies by A after answering the j-th question, since last telling the truth. If Ans_i = Truth(x)_i, Lie(x)_i clears to zero; otherwise, Lie(x) increments by 1. When Lie(y) reaches k+1, we know that y is not the secret number; and this is what A wants to avoid.
The trick here for B is to force the sequence Lie(x), for some x, to increase k+1 times consecutively, regardless of how A answers. At first, all Lie(x) are 0.
If B asks if “x is in S?”, A has two choices:
1. Say “Yes”. Then Lie(x) clears to zero for x in S, and Lie(x) increments by 1 for x not in S.
2. Say “No”. Then Lie(x) increments by 1 for x in S, and Lie(x) becomes 0 for x not in S.
In this way, we can think of the following games, similar to Conway’s checkers:
There are N checkers on horizontal line y=0. Each time, B can partition these checkers into two sets, X and Y; then A can either move all the checkers in X vertically up by 1 and put all checkers in Y back to line y = 0, or move all the checkers in Y vertically up by 1 and all checkers in X back to the line y = 0. (Here Lie(x) is the vertical distance of checker x to the line y = 0.) Show that no checker can reach line y = k+1.
Comment by Prospector — July 14, 2012 @ 6:31 am
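The single-round update in this checkers reformulation can be sketched in a few lines (a hypothetical `step` function of mine, not from the thread):

```python
# One round of the checkers game: B partitions the checkers into X and
# its complement; A either resets X to height 0 and raises the rest by
# one, or vice versa. Heights play the role of the Lie(x) counters.
def step(heights, X, reset_X):
    return {v: (0 if (v in X) == reset_X else h + 1)
            for v, h in heights.items()}

h = {1: 0, 2: 0, 3: 0}
h = step(h, {1}, True)        # reset checker 1, raise 2 and 3
assert h == {1: 0, 2: 1, 3: 1}
h = step(h, {2, 3}, False)    # A resets the complement {1} instead
assert h == {1: 0, 2: 2, 3: 2}
```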
3. For k = 2, n = 2 (which, like (k, n) = (1, 1), has k = 1/2 * 2^n), one can proceed as follows (here N = 3; say the numbers are 1, 2, and 3):
I) Whatever B guesses on turn 1, A answers as if A’s number was ACTUALLY 1.
II) Whatever B guesses on turn 2, A answers as if A’s number was ACTUALLY 2.
III) Whatever B guesses on turn 3, A answers as if A’s number was ACTUALLY 3.
And so A goes on: pretending his number is actually 1 on turns congruent to 1 mod 3, 2 on turns that are 2 mod 3, and 3 on turns that are 0 mod 3… no matter which number A actually chose, A’s statement is automatically true at least every third move.
Comment by William Meyerson — July 14, 2012 @ 9:41 pm
4. Here are some tentative ideas; they need to be cleaned up.
Formulation 1
Let $Q = \{1,2,\ldots,n+1\}$, and let $G$ be a sequence of $k+1$ non-empty subsets of $Q$. Player A needs to find a cover $C$ of $Q$, with $|C| \leq k+1$, consisting of member(s) of $G$ and their complement(s) with respect to $Q$.
If such a strategy exists for $k = m$, it exists for all $k \geq m$ as well, since Player A could simply use the $k = m$ strategy.
If Player B asks $q_a$ followed by $q_b$, $q_a \supseteq q_b$ and $|b - a| \leq k$, then Player A can complete the cover by negating the answer for $q_a$.
If the set of possible $x$ indicated over the past $k$ turns by Player A is $A$ and $q_n \subseteq A$, then the cover can be completed by answering “No”. If the set of not-yet indicated $x$ over the past $k$ turns by Player A is $A$ and $q_n \supseteq A$, then the cover can be completed by answering “Yes”.
Formulation 2
There are $2^{n+1}-1$ questions. Assign each question a unique, $n+1$-digit binary string $b_i$, where the $i$-th digit of $b_i$ equals 1 if it is asked, and accumulate the indicated answers in a bit string $A$. Player A can either bitwise or $b_i$ and $A$, or bitwise or $\lnot b_i$ and $A$. The goal is to have $A = 11111...$ after $k+1$ turns.
$1.99^k \leq n \leq 2^k$, so an equivalent condition is $k\ \log_2(1.99) \leq \log_2(n) \leq k$. Therefore Player A must indicate half or more of the remaining digits with each question. This can be done by selecting either $b_i$ or $\lnot b_i$, depending on whichever indicates more new bits. $b_i$ or its negation will indicate at least half of the remaining bits, since any $b_i$ and its negation indicate all of the bits. This can be done in at most $k+1$ operations: given $\log_2(n) \leq k$, $\lceil \log_2(n+1) \rceil \leq k+1$. For at least one of those operations, telling the truth will be the optimal choice. If it were not, Player B would have to ask $k+1$ unique questions such that $x$ was always in the smaller group of bits. This would mean that only $x$ was not indicated on question $k+1$. However, $x$ would still be indicated in that answer. If $x \in b_{k+1}$, then a “yes” answer indicates $x$, and if not, a “no” answer indicates $x$ as well.
Comment by ajk — July 14, 2012 @ 10:06 pm
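Formulation 2's halving argument can be checked numerically. The sketch below (my own, against an arbitrary random questioner) greedily ors in whichever of $b_i$ or $\lnot b_i$ covers more not-yet-indicated bits, and confirms the cover finishes within $\lceil \log_2(n+1) \rceil$ steps:

```python
import math
import random

random.seed(1)

n = 20
full = (1 << (n + 1)) - 1   # one bit per candidate value

indicated = 0               # accumulated bitwise-or of chosen answers
steps = 0
while indicated != full:
    b = random.randrange(1, full)        # B's question as a bitmask
    comp = ~b & full
    # Greedily take whichever side covers more not-yet-indicated bits.
    new_b = bin(b & ~indicated & full).count("1")
    new_c = bin(comp & ~indicated & full).count("1")
    indicated |= b if new_b >= new_c else comp
    steps += 1

# The number of remaining bits at least halves on every step.
assert steps <= math.ceil(math.log2(n + 1))
```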
5. One can have a look at Chapter 15 in Alon & Spencer’s book ‘The Probabilistic Method’…
Comment by — July 14, 2012 @ 10:10 pm
• eh, it’s Chapter 15 in the third edition … specifically Section 15.3 (tenure game)
Comment by lowai — July 15, 2012 @ 2:10 pm
6. An easy observation that probably a lot of people have made: A can win if k=n=1. A chooses either 1 or 2, at random, and then alternates answering between answering the question as if the correct answer were 1 and answering as if it were 2. There is no way for B to know which half of A’s answers are truthful. More generally, A can win whenever k=n.
What is the current fastest growing function f(k) so that A can win for n=f(k)?
Comment by David Speyer — July 15, 2012 @ 12:14 am
7. Here is how I am currently thinking about the problem. I’m using the reduction where $N=n+1$.
Let’s say that player A “suggests” $i$ whenever she gives an answer which is consistent with $i$ being the secret number.
For any $i$, if $A$ ever goes $k+1$ turns without suggesting $i$, then $A$ loses. If $i$ was the secret number, then $A$ has broken the rules. If not, then $B$ can eliminate $i$ and win.
If there are ever $2$ numbers, say $i$ and $j$, which have not been suggested for $k$ turns, then $B$ wins. This is because $B$ can ask a question which makes it impossible for $A$ to suggest both $i$ and $j$, then win as in the previous paragraph.
Similarly, $A$ loses if there are ever $4$ numbers which have not been suggested in $k-1$ turns, or $8$ which have not been suggested in $k-2$, etc.
Turning this logic around, in a winning strategy for $A$, the number of digits which have NOT been suggested for the last $k-r$ rounds should grow exponentially in $r$.
This hasn’t suggested a strategy to me yet, but my understanding is that the rules of a Polymath are that I should toss all my ideas out there.
Comment by David Speyer — July 15, 2012 @ 12:23 am
• Here is a template for a strategy for $A$. At any moment in time, let $a_i$ be the number of values which have not been suggested for the last $i$ rounds. So the number of elements which just got suggested is $a_0$, and we want to make sure that $a_{k+1}$ is always $0$. Choose some positive real weights $w_0$, $w_1$, …, $w_k$. $A$ always makes the choice (whether to lie or tell the truth) which minimizes $\sum a_i w_i$. At the moment, I don’t have specific weights to nominate, but my intuition is that $w_i$ should grow just a little slower than $2^i$.
Comment by David Speyer — July 15, 2012 @ 12:47 am
• For example, suppose that we take $w_i = 1.99^i$. Let the current score be B. Let $B_1$ be the part of the score which comes from numbers that would be suggested by answering YES and let $B_2$ be the part of the score which comes from numbers that would be suggested by answering NO. So $B=B_1 + B_2$. Let $n_1$ be the number of numbers contributing to $B_1$, and likewise for $n_2$. So $n+1=n_1+n_2$, and both are positive.
Without loss of generality, say $B_1 > B_2$. So A answers YES, and the new score is
$n_1 + (1.99) B_2 < n + (1.99/2) B$.
So the score can never get above the fixed point of this recursion, at $B = 200 n$. So there will never be an element which goes unsuggested for $k+1$ steps where $1.99^{k+1} = 200 n$. So this is a winning strategy for $n = 1.99^{k+1}/200$.
I think I just solved the problem. I'm going to sit back and think about this for a bit.
Comment by David Speyer — July 15, 2012 @ 1:00 am
• You want to solve it for n greater than 1.99^k. That can be fixed by changing w_i to 1.999^i. If your method works, I think you can change the exponent in your w_i and get a bound of c^k for any c less than 2, for large enough k.
Comment by kristalcantwell — July 15, 2012 @ 1:27 am
8. Here is an attempt to write up my solution neatly from scratch. My apologies if either (1) I have stolen people’s fun by getting to the answer too fast or (2) I have wasted people’s time with a faulty solution.
Let $\epsilon>0$ be a small parameter. Our strategy will allow Alice to deny Bob a certain win with $n+1 = \epsilon (2-\epsilon)^{k+1}$. By first choosing $\epsilon$ sufficiently small (say $10^{-3}$) and then $k$ sufficiently large, this is greater than $1.99^k$.
With $n+1=\epsilon (2-\epsilon)^{k+1}$, Alice starts by choosing a random integer from $1$ to $n+1$. Alice then never considers this value again. Since her answers will in no way depend on what number she has chosen, they can reveal no information about that number, and therefore can provide Bob with no aid. What we must see is that Alice can do this while obeying the rule that she must tell the truth once every $k+1$ rounds.
We say that Alice “suggests $i$” if she gives an answer which would be truthful, if her number were $i$. We will describe a strategy so that Alice suggests every number at least once every $k+1$ turns. In particular, she suggests the true number once every $k+1$ rounds, and thus obeys the rules.
Let $a_r$ be the number of values which have not been suggested in the last $r$ rounds. Alice’s strategy will be to always answer so as to minimize the quantity $\sum a_r (2-\epsilon)^r$. We’ll call this quantity the score. At the beginning of the game, the score is $n+1$. Suppose that the score is $B$. The next question is asked about set $S$. Let $n_1 = |S|$ and $n_2 = (n+1) - |S|$. Let $B_1$ be the part of the score contributed by numbers in $S$, and let $B_2$ be the part contributed by numbers not in $S$. Then the new score is $\min((2- \epsilon) B_1 + n_2, (2- \epsilon) B_2 + n_1)$. Since $( (2- \epsilon) B_1 + n_2 ) + ((2- \epsilon) B_2 + n_1) = (2-\epsilon) B + n + 1$, Alice can definitely arrange for the new score to be $\leq ((2-\epsilon) B + n+1)/2 = (1-\epsilon/2) B + (n+1)/2$. So, inductively, the score never gets higher than $(n+1)/\epsilon$. In particular, there is no contribution from $r$ such that $(2-\epsilon)^r > (n+1)/\epsilon$. Since we chose $(k,n)$ such that $n+1 = \epsilon (2-\epsilon)^{k+1}$, that means that we have succeeded in suggesting every value at least once every $k+1$ steps, as desired. QED
“Just the place for a Snark! I have said it twice: That alone should encourage the crew.
Just the place for a Snark! I have said it thrice: What I tell you three times is true.”
Comment by David Speyer — July 15, 2012 @ 1:23 am
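As a sanity check (not part of the proof), one can simulate Alice's greedy score-minimizing strategy against a random questioner and confirm that no value ever goes unsuggested for $k+1$ consecutive rounds. The parameter choices below are illustrative, not the ones from the proof:

```python
import random

random.seed(0)

eps = 0.5   # illustrative; the proof takes eps much smaller
k = 10
N = int(eps * (2 - eps) ** (k + 1))   # N = n + 1 candidate values (43 here)

since = [0] * N   # rounds since each value was last "suggested"

def score(counters):
    return sum((2 - eps) ** r for r in counters)

max_seen = 0
for _ in range(2000):
    S = {i for i in range(N) if random.random() < 0.5}   # B's question
    # Answering YES suggests S; answering NO suggests its complement.
    yes = [0 if i in S else c + 1 for i, c in enumerate(since)]
    no = [c + 1 if i in S else 0 for i, c in enumerate(since)]
    since = yes if score(yes) <= score(no) else no        # Alice minimizes
    max_seen = max(max_seen, max(since))

# No value ever goes unsuggested for k+1 consecutive rounds.
assert max_seen <= k
```

Since the score starts at $N < N/\epsilon$ and the greedy choice keeps it below the fixed point $N/\epsilon$, a counter at $k+1$ (weight $(2-\epsilon)^{k+1} = N/\epsilon$) is impossible, exactly as in the proof.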
• As there were comments about the threshold of n: is this the most one can get using this method?
For example, could the weights (2 - \epsilon)^r be modified to handle the case when n = 2^k / k^m?
Comment by — July 15, 2012 @ 4:33 am
9. We need to prove that, given any set of questions offered by questioner B, A can always make an answer such that at least (2-epsilon)^k numbers satisfy the rules for that answer.
So we just need to construct a strategy for A such that eventually (2-epsilon)^k possibilities remain.
Comment by Anonymous — July 15, 2012 @ 4:03 am
• Is there anything wrong with the following proof that the bound of part 1 can be improved to
n>2^(k-1) +1?
Preamble
For a set U of 2^m possible integers, define strategy X for B as follows. First ask m “binary digit” questions. A will then have claimed x is in m subsets which exclude just 1 of the 2^m possible integers – call this integer X(U). B then asks if x is X(U).
Proof
Suppose n = 2^(k-1) +2.
As noted by others, A can be “forced” to say x is a particular element. Consider this to be the first question and answer.
Let U be 2^(k-1) of the integers, excluding the particular element and one other. Apply strategy X for questions 2 to k+1. (For questions 2 to k arbitrarily add just one of the two excluded elements to S.)
As before, A is “forced” to say x is X(U) in answer to question k+1. Now consider questions k and k+1 to be the first two questions. At this point, A has claimed that x is in a set containing 2^(k-2) + 1 integers and has claimed that x is X(U). Let V and W be two sets, each containing 2^(k-2) integers, but excluding X(U), such that V is part of the first claimed set and W is not.
Now, for questions 3 to k+1, apply strategy X to set W. Questions 3 to k can be used to similarly and simultaneously analyse set V. (For questions 3 to k arbitrarily add just one of the two elements not in V or W to S.)
As before, A is “forced” to say x is X(W) in answer to question k+1. Furthermore, if B asks if x is X(V) for question k+2, then A is “forced” to say yes.
Now, once again, consider questions k and k+1 to be the first two questions. Let Z be the 2^(k-1) integers excluding X(V) and X(W) and apply strategy X to Z. Before the final question is asked B knows that x is not X(Z).
Comment by Stan Dolan — July 16, 2012 @ 2:17 pm
• For N = 2 if N = 2^k, thus sharpening the assertion of the IMO problem to n >= 2^k – 1 for k >= 2. For k = 2 it seems to be similar to Jerzy Wojciechowski’s in #10. Combining this with William Meyerson’s #3 we thus know that n = 2^k – 1 = 3 is the minimum for k = 2. From other posts we have n = 1 for k = 0 and n = 2 for k = 1. The question of the minimum for arbitrary k appears to be still open.
Comment by Kai Neergård — October 7, 2012 @ 1:21 pm
• The interface apparently doesn’t like “less than” and “larger than” signs and so skipped a large part of my post. Since I see no way of editing or withdrawing posts, I thus have to submit it once more. Please neglect that above:
For N less than 2^k, binary digit questions don’t necessarily enforce a yes. The x that is excluded may be larger than N. In that case B learns nothing from asking if it is the true x. Thus the “particular element” of Stan Dolan’s argument may not exist.
The argument seems to work for k not less than 2 if N = 2^k, thus sharpening the assertion of the IMO problem to n not less than 2^k – 1 for k not less than 2. For k = 2 it seems to be similar to Jerzy Wojciechowski’s in #10. Combining this with William Meyerson’s #3 we then know that n = 2^k – 1 = 3 is the minimum for k = 2. From other posts we have n = 1 for k = 0 and n = 2 for k = 1. The question of the minimum for arbitrary k appears to be still open.
Comment by Kai Neergård — October 7, 2012 @ 1:37 pm
10. Let’s look at the case when k=2 and n=3. It seems to me that B can win. Let N=4. Let B ask the first question with S={1,2} and the second with S={1,3}. Then there is exactly one number m in {1,2,3,4} so that when x=m the first two answers of A are false. Then B asks the third question with S={m}, A must answer yes, since otherwise m could be eliminated. Let w be such that the second question was false when x=m or when x=w. Let B ask the fourth question with S={w}. Again A must answer yes, since otherwise w could be eliminated. If {1,2,3,4}-{m,w}={y,z} then let B ask the fifth question with S={z}. If A answers yes, then y can be eliminated, if A answers no, then z can be eliminated.
Comment by Jerzy Wojciechowski — July 19, 2012 @ 9:53 pm
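Jerzy's k=2, n=3 strategy is small enough to verify exhaustively. The sketch below (my own encoding of the five adaptive questions) checks that every one of A's 2^5 possible answer sequences lets B eliminate some value:

```python
from itertools import product

values = {1, 2, 3, 4}   # N = n + 1 = 4 possible secret numbers

def truthful(x, S, ans):
    # Is the answer `ans` to "is x in S?" true when the secret is x?
    return (x in S) == ans

def eliminated(history, x):
    # x is ruled out once k+1 = 3 consecutive answers are false for it.
    lies = [not truthful(x, S, a) for S, a in history]
    return any(all(lies[i:i + 3]) for i in range(len(lies) - 2))

def b_questions(answers):
    # Jerzy's five adaptive questions, given A's answers in order.
    S1, S2 = {1, 2}, {1, 3}
    hist = [(S1, answers[0]), (S2, answers[1])]
    # m: the unique value for which both answers so far are false
    m = next(x for x in values if not truthful(x, S1, answers[0])
             and not truthful(x, S2, answers[1]))
    hist.append(({m}, answers[2]))
    # w: the other value for which the second answer was false
    w = next(x for x in values if x != m and not truthful(x, S2, answers[1]))
    hist.append(({w}, answers[3]))
    y, z = sorted(values - {m, w})
    hist.append(({z}, answers[4]))
    return hist

# Every possible answer sequence by A lets B eliminate some value.
for answers in product([True, False], repeat=5):
    assert any(eliminated(b_questions(answers), x) for x in values)
```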
11. Looks like I’m very late to the party. I wanted to point out a different solution to part 1 that has ties to de Bruijn cycles, though admittedly it’s more complicated. I typed up a little bit here: http://evan-qcs.blogspot.com/2012/07/liars-guessing-game-and-de-brujin.html
The idea still uses the fact that within k questions, we can get player 1 to lie about one of the numbers k times (for instance, the strategy of asking about what the i-th bit is does that trick), and then shows how to use subsequent questions to force player 1 into fewer and fewer options for answer choices.
In short, for this strategy, player 2 follows the de Bruijn sequence for bitstrings of length k (B(2,k)) to figure out when he should include a particular number i in the question. For the i-th number, start from the i-th element of the de Bruijn sequence; if we see a 1, include it in the set for the question. What this does is that for every contiguous block of questions, no matter what player 1 answers, he will lie about a particular number $k$ times, and this number rotates. In effect, if we examine all possible strategies for player 1 for this set of questions, the player must follow 1 of 2^k possible sequences.
Then, use the last number (2^k+1) as a trap: throw in this number and follow a particular possible answer sequence for k questions. On the k+1-th question, place the last number to oppose that answer sequence, then player 1 is trapped if he followed that answer sequence. Set these traps one bunch at a time, until all of them are gone. It takes a lot longer than the short answer proposed already, but it seems fun. Should have occurred to me to set the trap up at the beginning. Oh well…
Comment by Evan — July 26, 2012 @ 2:01 am
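For reference, the standard Lyndon-word construction of a de Bruijn sequence (this is the textbook algorithm, not Evan's code) can be stated and checked for the binary, window-length-3 case:

```python
from itertools import product

def de_bruijn(k, n):
    """B(k, n) de Bruijn sequence via Lyndon-word concatenation."""
    a = [0] * k * n
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

s = de_bruijn(2, 3)                      # binary alphabet, windows of length 3
assert len(s) == 8
# Every 3-bit pattern occurs exactly once as a cyclic window.
windows = {tuple((s + s)[i:i + 3]) for i in range(8)}
assert windows == set(product([0, 1], repeat=3))
```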
12. I realize that this polymath project ended about a month ago. I’m writing this comment only for “future reference”, in case someone is interested in this IMO problem and wants to figure out a better estimate (asymptotics) for the smallest n (as a function of k) that makes the game a win for B.
First I state an equivalent formulation of the problem in terms of existence of certain binary trees. This provides another way to look at the problem and has the advantage that this problem has been studied in relation to unsatisfiable CNF Boolean formulas and the complexity of certain restrictions of SAT, so I can quote results (including one that is partially mine – sorry).
A finite rooted binary tree is called a (k, d)-tree if
(i) every non-leaf vertex has exactly two children,
(ii) each leaf has distance at least k from the root, and
(iii) every node u of the tree has at most d leaves that are both descendants of u and at distance at most k from u.
We call leaves satisfying (iii) with respect to a vertex u “visible from u”. We call them left- or right-visible depending on whether they are descendants of the left or right child of u (the two children of u are distinguished arbitrarily).
CLAIM: The liar’s guessing game for k and n is a win for B if and only if there exists a (k+1,n+1)-tree.
PROOF: Let’s assume first that B wins and fix a winning strategy for N=n+1. We assume he stops asking questions as soon as he can eliminate one of the n+1 possibilities for the unknown x, that is when A’s last k+1 answers are all false for some possible choice for x. Let us build the game tree from B’s strategy: the root corresponds to the initial position before the first question was answered, its two children correspond to the two possible answers, etc. Leaves correspond to the end of the game when the last k+1 answers eliminate a possible value for x. Let us label each leaf with one of the eliminated values.
For the “only if” part we prove that this is a (k+1,n+1)-tree. Parts (i) and (ii) of the definition are satisfied trivially. Part (iii) follows from the observation that all leaves visible from a vertex u of the tree must have different labels. Indeed, otherwise there would be a vertex u’ of the tree with a left-visible and a right-visible leaf having the same label x. But this would mean that both answers to the question B asks at the position u’ rules out x – a contradiction.
For the reverse direction (the “if” part) we use the observation that B wins if and only if he wins with N=n+1. Assume there is a (k+1,n+1)-tree. Label the leaves with values 1<=x<=n+1 such that all leaves visible from the same vertex have distinct labels. This is possible, one can label the leaves in increasing order of their level (distance from the root). Condition (iii) makes sure we have always enough labels to choose from. Now here is a strategy for B to eliminate one of the n+1 possible values for x: Start at the root. At a vertex u ask if there is a leaf labeled x left-visible from u. If the answer is YES move to right child, for NO answer move to the left child. When B arrives to a leaf it can rule out its label and hence wins the game.
QED
And now the references: Let $f(k)$ denote the largest $d$ with no $(k,d)$-tree (referred to as $f_2$ in [GST]). Then $f(k)=\Theta(2^k/k)$ by [G] and $f(k)=(2/e+O(k^{-1/2}))2^k/k$ by [GST]. The lower bound can easily be derived from Lovász's local lemma; the upper bound is based on an elaborate construction. Because of the shift by 1, this means that the smallest $n$ such that B wins the liar's guessing game for $k$ and $n$ is $(4/e+O(k^{-1/2}))2^k/k$.
[G] H. Gebauer, Disproof of the Neighborhood Conjecture with Implications to SAT, Combinatorica, to appear, Proc. 17th Annual European Symposium on Algorithms (ESA 2009), LNCS 5757, 764–775. http://www.inf.ethz.ch/personal/gebauerh/NeighborhoodConjecture.pdf
[GST] H. Gebauer, T. Szabo, G. Tardos, The local lemma is tight for SAT, Proc. of the 21st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2011). http://www.renyi.hu/~tardos/SAT.pdf
Comment by — August 14, 2012 @ 8:55 pm
http://mathhelpforum.com/algebra/175301-algebra-ii-change-equation-into-ellipse-form.html
|
# Thread:
1. ## Algebra II - Change this equation into Ellipse Form
So basically my teacher decided to give us only one problem for homework, but it's something he hasn't taught us yet...
He wants us to convert this equation:
$x^2+9y^2-4x+54y+49=0$
into:
$(x-h)^2/a^2+(y-k)^2/b^2=1$
I tried bringing the 49 to the other side and dividing everything by it but didn't know what to do from there. Please help.
2. Originally Posted by Connor
So basically my teacher decided to give us only one problem for homework, but it's something he hasn't taught us yet...
He wants us to convert this equation:
$x^2+9y^2-4x+54y+49=0$
into:
$(x-h)^2/a^2+(y-k)^2/b^2=1$
I tried bringing the 49 to the other side and dividing everything by it but didn't know what to do from there. Please help.
You will need to complete the square. I will do the $y$ part for you:
$9y^2+54y=9(y^2+6y)$
Now we take half of the coefficient on the $y$ term, square it, and add and subtract it to give
$\displaystyle 9(y^2+6y)=9(y^2+6y+9-9)=9(y^2+6y+9)-81=9(y+3)^2-81$
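Finishing the computation (my own continuation, not from the thread): completing the square in $x$ gives $(x-2)^2-4$, so the equation becomes $(x-2)^2 + 9(y+3)^2 = 36$, i.e. $(x-2)^2/36 + (y+3)^2/4 = 1$. A quick numeric check of the algebra:

```python
# Verify the completed-square identity and a point on the resulting
# ellipse (x - 2)^2/36 + (y + 3)^2/4 = 1.
def original(x, y):
    return x**2 + 9*y**2 - 4*x + 54*y + 49

def completed(x, y):
    return (x - 2)**2 + 9*(y + 3)**2 - 36

for x in range(-10, 11):
    for y in range(-10, 11):
        assert original(x, y) == completed(x, y)

# A point on the ellipse: (8, -3) satisfies (8-2)^2/36 + 0 = 1.
assert original(8, -3) == 0
```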
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Flux
|
# All Science Fair Projects
## Science Fair Project Encyclopedia for Schools!
# Flux
Flux is the rate at which something flows through a surface. The amount of sunlight that lands on a patch of ground each second is a kind of flux. The magnitude of a river's current, that is, the amount of water that flows through a cross-section of the river each second is another kind of flux. In the most general terms, flux is a measure of passage: that is, how much stuff passes through an area in a period of time.
As a mathematical concept, flux is represented by the surface integral of a vector field,
$\Phi_f = K \int_S \mathbf{F} \cdot dA$
where K is a constant, F is a vector density, dA is the area element of the surface S, and Φf is the resulting flux.
Pictorially (see image at right), the flux is a flow. The number of red arrows passing through a unit area is the vector density, the curve encircling the red arrows denotes the boundary of the surface, and the orientation of the arrows with respect to the surface denotes the inner product of the vector density with the orientation vectors of the surface area elements.
We can apply this mathematical definition to many disciplines in which we see currents or forces applied through areas.
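As a numerical illustration of the surface-integral definition (example mine, not from the article): the flux of the constant field F = (0, 0, 1) through the graph z = xy over the unit square, oriented upward, equals the projected area, 1.

```python
# Midpoint-rule approximation of the flux of F = (0, 0, 1) through the
# graph z = x*y over the unit square, oriented upward. For a graph
# z = f(x, y) the area element is dA = (-df/dx, -df/dy, 1) dx dy, so
# for this F the flux reduces to the projected area, 1.
def F(x, y, z):
    return (0.0, 0.0, 1.0)

n = 200
h = 1.0 / n
flux = 0.0
for i in range(n):
    for j in range(n):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        dA = (-y * h * h, -x * h * h, h * h)   # df/dx = y, df/dy = x
        Fx, Fy, Fz = F(x, y, x * y)
        flux += Fx * dA[0] + Fy * dA[1] + Fz * dA[2]

assert abs(flux - 1.0) < 1e-9
```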
## Meaning of flux
To better understand the concept of flux, imagine a butterfly net. The amount of air moving through the net at any given instant in time is the flux. If the wind is blowing hard, then the flux through the net is larger than before. If the net is made bigger, then the flux would be larger. For the most air to move through the net, the opening of the net must be facing the direction the wind is blowing. If the net is parallel to the wind, then no wind will be moving through the net; it will all be moving past it instead.
## Flux in chemistry
### Diffusion
Flux, or diffusion, for gaseous molecules can be related to the function:
$\Phi = 4\pi\sigma_{ab}^2\sqrt{\frac{8kT}{\pi N}}$
where N is the total number of gaseous particles, k is Boltzmann's constant, T is the absolute temperature in kelvins, and σab is the mean free path between the molecules a and b.
### Thermal systems
In thermal systems, the flux is the rate of heat flow.
## Flux in physics
### Maxwell's equations
The flux of electric and magnetic field lines is frequently discussed in electrostatics. This is because Maxwell's equations in integral form involve integrals like the one above for the electric and magnetic fields. For instance, Faraday's law of induction in integral form is:
$\oint_{\partial S} \mathbf{E} \cdot d\mathbf{l} = -\int_{S} \ {\partial \mathbf{B}\over \partial t} \cdot d\mathbf{s} = - \frac{d \Phi_B}{ d t}$
A consequence of Faraday's law is that a change in the magnetic flux through a loop of wire will create an electric potential difference in that wire. This is the basis for inductors and many electric generators.
### Electromagnetic radiation
For electromagnetic radiation, the flux of the Poynting vector through a surface is the power, or energy per unit time, passing through that surface.
### Fluid systems
In fluid systems the flux is the rate of fluid flow.
In fluid dynamics, flux is a physical rate process defined as the rate of flow or transfer of a physical quantity through an area per time. It is a key concept used in understanding fluid dynamics and related transport phenomena.
There are three basic fluxes used in the study of transport phenomena. Each type of flux has its own distinct unit of measurement along with distinct physical constants. The three basic forms of flux are defined as:
1. Momentum flux, the rate of momentum in and out of the system.
2. Heat flux, the rate of heat transfer.
3. Mass flux, the rate of mass transfer.
When dealing with one-dimensional flux, the fundamental laws that govern this process include:
• Newton's Law of Viscosity
• Fourier's Law of Heat Conduction
• Fick's Law of Diffusion.
## Types of flux
• electric flux
• electron flux
• energy flux
• heat flux
• magnetic flux
• neutron flux
• proton flux
http://mathoverflow.net/questions/28380/a-canonical-and-categorical-construction-for-geometric-realization
## A canonical and categorical construction for geometric realization
There is a very intimate connection between categories, simplicial sets, and topological spaces. On one hand, simplicial sets are the presheaf category on the category $\Delta$ and $\Delta$ is a canonically defined "invariant" of the theory of categories. (e.g. the machinery of Mark Weber "spits out" $\Delta$ when you "plug in" the free category monad: http://golem.ph.utexas.edu/category/2008/01/mark_weber_on_nerves_of_catego.html)
However, $\Delta$ is also linked with topological spaces. The key to this link is the functor $\Delta \to Top$ which assigns the category $[n]$ the standard n-simplex $\Delta^n$. It is this functor which produces the adjunction between the geometric realization functor and the singular nerve functor which allow you to transfer the model structure on $Top$ to $Set^{\Delta^{op}}$ so that this adjunction becomes a Quillen equivalence.
My question is the following:
Is there a deep categorical justification for the functor $\Delta \to Top$ being defined exactly how it is? If we didn't know about the standard n-simplices, how could we "cook up" such a functor? I would like a construction of this functor which is truly canonical.
The closest to an answer I've found is Drinfeld's paper http://arxiv.org/abs/math/0304064. However, this doesn't quite "nail it home" to me. First of all, the definition is just made, but not motivated. The definition shouldn't be a "guess that works", but something canonical. Moreover, if you unwind it enough, it is secretly using the fact that finite subsets of the interval with cardinality $n$ correspond to points in (the interior of) the $(n+1)$-simplex. Plus, there's some funny business going on for geometric realization of non-finite simplicial sets. (Don't get me wrong- I think it's a great paper. It just doesn't totally answer my question).
EDIT: A possible lead:
$Set^{\Delta^{op}}$ is the classifying topos for interval objects and the standard geometric realization functor $Set^{\Delta^{op}} \to Top$ is uniquely determined by its sending the generic interval to $[0,1]$. This reduces the question to "why is $[0,1]$ the canonical interval?". Is it perhaps the unique interval object whose induced functor $Set^{\Delta^{op}} \to Top$ is both left-exact and conservative?
EDIT: I've proposed a partial answer to this below, along the lines of the above lead. I would love any feedback that anyone has on this.
It would be very surprising to me if a nice categorical construction could start off with finite linear orders, increasing functions, and somehow spit out the topology of the real numbers. – Steven Gubkin Jun 16 2010 at 15:01
I wouldn't be surprised, but, perhaps I have too much faith in category theory. As a side note, really you want the topology of the compact unit interval to be spat out, not the reals. – David Carchedi Jun 17 2010 at 9:45
As a cute "observation", we can use the topology on the unit interval as a "seed" to produce all the standard n-simplices. The topologically-enriched free commutative monoid on the unit interval is the disjoint union all the standard n-simplices. If this somehow fit in, it would be nice. – David Carchedi Jun 17 2010 at 9:48
@Harry, the singular complex functor USES the definition of the standard n-simplices. But, having the definition of each standard n-simplex (and the maps between them) gives us the functor $\Delta \to Top$ which if we left-Kan extend we get geometric realization. So, this doesn't gain us anything. – David Carchedi Jun 18 2010 at 1:25
I'm getting a sense, especially from remarks about recovering the unit interval from nonsense, that an unasked question is "why work in $Top$?" --- btw, the unit interval [seems to be](golem.ph.utexas.edu/category/2009/09/…) a terminal coalgebra for a reasonable functor, as observed by Peter Freyd. – some guy on the street Aug 25 2010 at 21:51
## 6 Answers
Too long for a comment, too trivial for an answer:
It seems that if $K$ is a finite simplicial complex and $K'$ is the set of simplices of $K$ topologized as a quotient space of $K$ then the quotient map is a weak equivalence. Proof: $K$ is the union of contractible open sets, the open stars of its vertices, with the intersection of any two or more of these being contractible or empty. $K'$ is the union of corresponding contractible open sets, with intersections again being contractible or empty according to the same rule. Now repeatedly use the fact (consequence of Van Kampen and (for singular homology with local coefficients) excision): A continuous map from $X=U_1\cup U_2$ to $Y=V_1\cup V_2$ must be a weak equivalence if it gives weak equivalences $U_1\to V_1$, $U_2\to V_2$, and $U_1\cap U_2\to V_1\cap V_2$.
Replace simplicial complex by simplicial set and you get into a little trouble -- some subdivision is needed.
Here are some crazy ideas. I am posting now just to get the ball rolling, and if anyone (myself included) comes up with anything of substance in this direction you should just edit this answer. I am making it community wiki for ease of editing, and because the ideas are currently just kind of wonky.
I think recovering the standard geometric realization functor from pure abstract nonsense might be hard, just because there would have to be an implicit abstract nonsense construction of the closed interval as a topological space out of just finite linear orders, and I feel I would have seen that somewhere before.
On the other hand, I recently learned that every finite CW complex is weakly equivalent to a finite topological space. For example the circle is weakly equivalent to a topological space with 4 points. In fact "for any finite abstract simplicial complex K, there is a finite topological space $X_K$ and a weak homotopy equivalence $f : |K| \to X_K$ where $|K|$ is the geometric realization of $K$." (according to wikipedia). So maybe we could make this construction functorial from finite simplicial complexes to finite topological spaces. Maybe this functor (which factors through geometric realization, and is "just as good" as far as algebraic topology is concerned) could then be extended to simplicial sets. Since the construction should be more combinatorial, and not involve the reals in any way, I feel like this new functor (if it exists) might be more amenable to an "abstract nonsense" description. As far as algebraic topology is concerned, this new functor might be "just as good" as geometric realization.
I have some ideas for what this functor might look like, but I am still playing around with small examples. Feel free to join in the madness if you like, and add your edits with your name attached.
I suspect the finite space you want to use as a replacement for $\Delta^n$ is the one with elements the simplices of $\Delta^n$, and topology given by the preorder of face inclusions i.e. so the faces are the closed subsets. – Oscar Randal-Williams Jun 16 2010 at 18:17
Yes, that is the first idea I had, and what I am playing around with. The problem that I am having is figuring out how to glue these guys appropriately: How can I recover (a finite substitute for) a circle, or a torus? – Steven Gubkin Jun 16 2010 at 18:24
Since there aren't any references for finite spaces on Wikipedia, I'll point out that Peter May's website contains some nice notes on the topic, and there are also recent papers by Minian and Barmak on the arXiv. The original source for the result (stated in the question) about weak homotopy equivalence is McCord's paper (Duke J., 1966). Richard Stong also wrote some early papers on the topic. – Dan Ramras Jun 17 2010 at 0:28
This does seem very interesting, but, I have some concerns. If we could definte a "natural" functor $\Phi$ from $\Delta$ to $FinTop$ (finite topological spaces), just because for each $[n]$ $\Phi([n])$ is weakly equivalent to $\Delta^{n}$, it doesn't seem obvious to me that its left Kan-extension (denote it also by $\Phi$) would have $\Phi(X)$ w.e. to $|X|$ for each simplicial set $X$. In a more fundamental way, I am against replacing $|X|$ by something which is homotopy equivalent and saying it's "good enough" for algebraic topology. (will continue in another comment). – David Carchedi Jun 17 2010 at 9:40
The reason is, the geometric realization functor has two important properties: 1.) It is conservative, 2.) it is left-exact. Especially the later is used to relate simplicial categories with topological categories, etc. Moreover, it seems that since the geometrical realization enjoys these properties, that it is "more correct" than any homotopy equivalent replacement. – David Carchedi Jun 17 2010 at 9:42
As to "why is the unit interval the canonical interval?", there is an interesting universal property of the unit interval given in some observations of Freyd posted at the categories list, characterizing $[0, 1]$ as a terminal coalgebra of a suitable endofunctor on the category of posets with distinct top and bottom elements.
There are various ways of putting it, but for the purposes of this thread, I'll put it this way. Recall that the category of simplicial sets is the classifying topos for the (geometric) theory of intervals, where an interval is a totally ordered set (toset) with distinct top and bottom. (This really comes down to the observation that any interval in this sense is a filtered colimit of finite intervals -- the finitely presentable intervals -- which make up the category $\Delta^{op}$.) Now there is a join $X \vee Y$ on intervals $X$, $Y$ which identifies the top of $X$ with the bottom of $Y$, where the bottom of $X \vee Y$ is identified with the bottom of $X$ and the top of $X \vee Y$ with the top of $Y$. This gives a monoidal product $\vee$ on the category of intervals, hence we have an endofunctor $F(X) = X \vee X$. A coalgebra for the endofunctor $F$ is, by definition, an interval $X$ equipped with an interval map $X \to F(X)$. There is an evident category of coalgebras.
In particular, the unit interval $[0, 1]$ becomes a coalgebra if we identify $[0, 1] \vee [0, 1]$ with $[0, 2]$ and consider the multiplication-by-2 map $[0, 1] \to [0, 2]$ as giving the coalgebra structure.
Theorem: The interval $[0, 1]$ is terminal in the category of coalgebras.
Let's think about this. Given any coalgebra structure $f: X \to X \vee X$, any value $f(x)$ lands either in the "lower" half (the first $X$ in $X \vee X$), the "upper" half (the second $X$ in $X \vee X$), or at the precise spot between them. Thus, you could think of a coalgebra as an automaton where on input $x_0$ there is output of the form $(x_1, h_1)$, where $h_1$ is either upper or lower or between. By iteration, this generates a behavior stream $(x_n, h_n)$. Interpreting upper as 1 and lower as 0, the $h_n$ form a binary expansion to give a number between 0 and 1, and therefore we have an interval map $X \to [0, 1]$ which sends $x_0$ to that number. Of course, should we ever hit $(x_n, between)$, we have a choice to resolve it as either $(bottom_X, upper)$ or $(top_X, lower)$ and continue the stream, but these streams are identified, and this corresponds to the identification of binary expansions
$$.h_1 \ldots h_{n-1} 1 0 0 0 0 \ldots = .h_1 \ldots h_{n-1} 0 1 1 1 1 \ldots$$
as real numbers. In this way, we get a unique well-defined interval map $X \to [0, 1]$, so that $[0, 1]$ is the terminal coalgebra.
(Side remark that the coalgebra structure is an isomorphism, as always with terminal coalgebras, and the isomorphism $[0, 1] \vee [0, 1] \to [0, 1]$ is connected with the interpretation of the Thompson group as a group of PL automorphisms $\phi$ of $[0, 1]$ that are monotonic increasing and with discontinuities at dyadic rationals.)
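The behavior-stream argument above can be sketched computationally. Below, a coalgebra structure $f: X \to X \vee X$ is modeled as a function returning a successor state together with a half-label, and iterating it produces the binary digits of the image in $[0,1]$. The names (`realize`, `double`) and the example coalgebra are illustrative choices, not standard notation:

```python
# Sketch of the terminal-coalgebra map: iterate the coalgebra
# structure, reading off one binary digit per step.

def realize(f, x, bottom, n_digits=40):
    """Approximate the unique interval map X -> [0, 1] at x."""
    value, weight = 0.0, 0.5
    for _ in range(n_digits):
        x, half = f(x)
        if half == "between":
            # resolve (x, between) as (bottom_X, upper), as in the text;
            # the resolution (top_X, lower) would give the same real number
            value += weight
            x = bottom
        elif half == "upper":
            value += weight
        weight /= 2
    return value

def double(x):
    """The multiplication-by-2 coalgebra structure on [0, 1] itself."""
    if x < 0.5:
        return 2 * x, "lower"
    if x > 0.5:
        return 2 * x - 1, "upper"
    return None, "between"          # lands exactly between the two halves

# The induced map [0, 1] -> [0, 1] is the identity, as terminality demands:
print(realize(double, 0.625, bottom=0.0))   # 0.625
```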
Yes, I mentioned that... (dang! comments have no anchors!) – some guy on the street Sep 2 2010 at 22:47
Yes, I see now that you refer to this result and give a link. Why didn't you write it as an answer? IMO it comes close to answering David's plea for a canonical categorical justification. For example, I mentioned to David in email that the terminal coalgebra for this particular endofunctor could be seen as a universal solution to the problem of constructing an interval which is invariant with respect to <i>subdivision</i> (which may help justify why this particular endofunctor). – Todd Trimble Sep 3 2010 at 2:11
oh, why not is probably that I'm just so thoroughly used to homotopy being about the interval; so that when asked "whence this geometric realization functor" what occurs to me isn't "because the interval is terminal in ...", but rather that the cone monad --- using the interval --- gives a natural topological simplex category. And that's the answer I did give. – some guy on the street Sep 6 2010 at 23:43
Well, your first comment on my answer gave me the impression that you were annoyed that I hadn't acknowledged your earlier mention (and if you were, then please accept my apologies). I think the point however is that any interval (toset with distinct top and bottom) induces a left exact geometric realization, so the question still remains: why choose this one? Is there some sort of abstract conceptual reason? The same question applies to the cone monad: for any interval there is an associated cone monad, so what's the reason for choosing [0, 1] as the interval? – Todd Trimble Sep 7 2010 at 2:28
This is really pretty awesome. – Steven Gubkin Nov 5 2010 at 22:24
Well, I've done some reading and (re)discovered an old paper of Peter Johnstone which comes pretty close to answering this question using topos theory. It follows the idea I posted as a "possible lead" in my EDIT.
First some background information:
It was shown by Joyal that simplicial sets is the classifying topos for "interval objects" (this is explained for instance in "Sheaves in Geometry and Logic" by Mac Lane and Moerdijk, in which they refer to interval objects as linear orders). By an interval object in $Set$, one roughly means a linearly ordered set together with a top and bottom element. You can say $I$ in a topos $\mathcal{E}$ is an interval object if and only if $Hom(E,I)$ is an interval object in $Set$ for all $E \in \mathcal{E}$. Since $Set^{\Delta^{op}}$ is the classifying topos for interval objects, for any topos $\mathcal{E}$, there is an equivalence of categories between the category of geometric morphisms $Hom(\mathcal{E},Set^{\Delta^{op}})$ and the category of interval objects in $\mathcal{E}$, $Int(\mathcal{E})$. Notice that the inverse image functor $f^*:Set^{\Delta^{op}} \to \mathcal{E}$ of a geometric morphism $f:\mathcal{E} \to Set^{\Delta^{op}}$ is always left-exact, so, this can be thought of as a "geometric realization functor with values in $\mathcal{E}$".
Now, the classical geometric realization functor $Set^{\Delta^{op}} \to Top$ nearly fits in this framework- it is left-exact and is uniquely determined by the fact that the universal interval object of simplicial sets is mapped to the standard unit interval $[0,1]$. However, $Top$ is not a topos. This is where Peter Johnstone's 1977 paper "On a topological topos" comes in. In this paper he constructs a topos $\mathcal{T}$ which contains sequential topological spaces (and hence e.g. CW-complexes) as a reflective subcategory. (In case you are interested, this topos is the topos of sheaves with respect to the canonical topology on the full subcategory of $Top$ consisting of the one-point space and the one-point compactification of $\mathbb{N}$.) Moreover, the inclusion of sequential spaces into $\mathcal{T}$ preserves lots of colimits- e.g. all colimits you'd use to construct CW-complexes. Now, since the standard unit interval $[0,1]$ is an object of $\mathcal{T}$, it corresponds to a unique geometric morphism $r:\mathcal{T} \to Set^{\Delta^{op}}$. Johnstone then proves that if $X$ is a simplicial set, then $r^*(X)$ is exactly $|X|$ (as a sequential space considered as an object of $\mathcal{T}$) AND that if $T \in \mathcal{T}$ is a sequential space, then `$r_*(T) \cong Sing(T)$`.
This is somewhat satisfying. However, for it to truly be satisfying, we'd have to either make sense out of why $\mathcal{T}$ is a natural choice, or, show that any "suitable choice" of a topos would give the same answer. Moreover, although intuively somehow clear, I would like to make sense out of in what way the "standard unit interval" $[0,1]$ is really a "canonical interval object".
Have you looked at Peter Freyd's "Algebraic Real Analysis"? I have barely cracked it open, but it seems like if you are looking for a categorical analysis of the unit interval, that is the place to go. – Steven Gubkin Jun 19 2010 at 12:37
I've taken a brief look, but as of yet, have not penetrated too deeply. I'll let you know if I find anything of interest. Thanks for the reference! – David Carchedi Jun 20 2010 at 13:57
I don't think "I is an interval object in the topos `$\mathcal E$`" is equivalent to "Hom$(E,I)$ is an interval object in $Set$ for all `$E\in\mathcal E$`." Consider, for example, the case of `$\mathcal E=Set$`, $I=[0,1]$, and $E=2$. I don't see a linear ordering on Hom$(2,I)$ induced by the linear ordering on $I$. (This doesn't affect the rest of your answer.) Generally, this sort of "definition via Yoneda embedding" works fine for concepts defined by universal Horn sentences, but not for notions like "linear order" or "field". For these, one needs the internal logic of the topos. – Andreas Blass Aug 25 2010 at 20:24
Geometric realization is just $\operatorname{hocolim}\limits_{\Delta^{op}}$ — or more precisely, the explicit construction for this homotopy colimit functor (lifting it from HoTop to Top). An n-simplex arises there as a cofibrant replacement for the map from n points to 1 point. And the convex hull of n points certainly seems to be the most natural contractible set containing n distinct points, although I can't think of a precise statement here.
I'd just like to point out that there is a monad on $Top$, (which in the homotopy category looks rather dull,) assigning to each space $X$ its cone $CX$, the mapping cylinder of $X\to *$. The unit map is the inclusion of $X \to CX$, and the composition $CCX\to CX$ may as well be the map $[[x,s],t]\mapsto [x,s+t-ts].$ (This ugly formula is just a natural obfuscation of the heuristic description of $CX$ as the union of convex combinations of points $x\in X$ and the new point $*$.) Another way to think of it is that $CX$ is the underlying space of the free contraction of $X$.
The topological (realization of the) simplex category is just the orbit of a one-point space $\star$ under this monad, together with the maps derived from the monad and the maps $C\star\to \star$ and $\star\to *\to C\star$. I think this gets at what Grigory M means above by "most natural" contractible set on $n$ points. Somewhere in this nonsense I should say "Bar construction", but I can't remember precisely where.
The multiplication may as well be [[x, s], t] |-> [x, st], or even [[x, s], t] |-> [x, min(s, t)], and the unit x |-> [x, 1]. Under the min formulation, you can similarly form a monad C_I for any interval I, where C_I X is the pushout of the inclusion X -> X x I (at the endpoint "top" of I) along X -> 1. You can define a notion of I-contractibility as involving a retraction h: C_I X -> X of the unit (or even demand h to be an algebra structure). You can go on to define an I-analogue of the topological simplex category. So what is so canonical about the usual choice I = [0, 1]? – Todd Trimble Sep 7 2010 at 2:49
As it happens, I've thought about this kind of thing a fair amount. If you'd like to discuss this further, you can write me at the address [email protected]. Regards, Todd. – Todd Trimble Sep 7 2010 at 2:53
ahah! ... well spotted, sir. – some guy on the street Sep 7 2010 at 13:17
http://quant.stackexchange.com/questions/3728/using-rolling-returns-in-a-multivariate-linear-regression/3741
# Using rolling returns in a multivariate linear regression?
I am trying to use fundamental factors such as PE, BV, & CFO in a multivariate linear regression with the response variable being the rolling 1 month returns. But this approach seems flawed, as the autocorrelation of the residuals is too high, and the Durbin-Watson test also points to such flaws. What is the best way to use long-time-horizon rolling returns in a linear regression?
Incidentally, you have stumbled upon one of the skeletons in the closet for the Fama-French type multiple regression techniques. Many academic papers suffer from exactly the issue you describe (autocorrelated residuals) but it's simply not discussed. – Quant Guy Aug 9 '12 at 19:36
I can't seem to find any deterministic solutions and I am not conducting such a study for academic purposes so I can't afford to be lazy with assumptions. I will post back once I have some results of interest – user1129988 Aug 28 '12 at 9:26
why not ARMA(n,m)-GARCH(p,q)? several users below actually suggested just an arma model.. – pyCthon Nov 16 '12 at 3:58
Have a look at the several serial correlation robust bootstrap estimators. – Jase Dec 7 '12 at 2:03
## 4 Answers
Why not fit an ARMA model to the rolling returns first, and then model the residuals in your regression equation? That way you should be removing most of the effects of auto-correlation.
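A minimal sketch of this suggestion, with a hand-rolled AR(1) fit standing in for a full ARMA(p,q). The series is assumed zero-mean, and `ar1_fit` / `ar1_residuals` are illustrative names; in practice you would use a packaged estimator (e.g. statsmodels' ARIMA in Python, or MATLAB's econometrics toolbox) and pick the order by an information criterion:

```python
# Fit y_t = phi * y_{t-1} + e_t by least squares, then use the
# residuals e_t -- rather than the autocorrelated rolling returns --
# as the response variable in the factor regression.

def ar1_fit(y):
    """Closed-form least-squares estimate of phi (zero-mean AR(1))."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def ar1_residuals(y):
    """Residual series; ideally free of autocorrelation if AR(1) fits."""
    phi = ar1_fit(y)
    return [y[t] - phi * y[t - 1] for t in range(1, len(y))]
```

Order selection, a nonzero mean, and rolling-window refits are all omitted here; this is only the shape of the procedure.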
where can I read more about this approach of using an ARMA model? ARMA model will help to deal the smoothing for rolling returns but I dont quite understand what you mean by model the residuals? – user1129988 Jul 11 '12 at 8:10
Are you looking for practical examples of ARMA models in a language like R or more a theoretical explanation? By modelling the residuals I meant you first fit an ARMA model, then run predictions from this model and the residuals are the actual returns minus the predicted returns. You use the residuals (which if you choose your ARMA model correctly, should be free from autocorrelation) instead of the rolling returns in your model. Note: You will likely want to fit the ARMA model on a rolling window like you would a rolling mean. – psandersen Jul 11 '12 at 9:50
Alternatively you could look at including your predictors as exogenous variables in your ARMA process. – psandersen Jul 11 '12 at 9:50
I am using matlab. if there are any practical examples for using an ARMA model approach that would be great. I still don't quite grasp how the ARMA model is supposed to work.. – user1129988 Jul 12 '12 at 6:56
I dont know any examples in matlab unfortunately, but I found plenty using R (my language of choice). You may find the series of blog posts at theaverageinvestor.wordpress.com/2011/04/14/… useful, and a number of trading strategies with code discussed at systematicinvestor.wordpress.com/?s=garch (for example, his use of rolling regression for factor analysis seems similar in principle to what you want to do.) Since you seem to be at the prototyping stage, might make sense to play around with some of these examples in R first. Hope this helps. – psandersen Jul 12 '12 at 19:34
The rolling n month return contains autocorrelation by its very nature. This is most obvious in a two-period case where $R_{t}\equiv0.5r_{t}+0.5r_{t-1}$. This implies that $R_{t-1}\equiv0.5r_{t-1}+0.5r_{t-2}$, so that when you calculate the correlation between $R_{t}$ and $R_{t-1}$ it should be close to $0.5$ as a result of both containing an $r_{t-1}$ component.
Hence, it might be better to avoid using the rolling return in the calculation. If you are seeing better results with it, that is likely an artifact of the smoothing used to construct the rolling series. If you insist on using it, one option is to include the lag of the rolling return in the regression.
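A quick simulation (illustrative, not part of the answer) confirms the point: a two-period rolling mean of i.i.d. returns shows lag-1 autocorrelation near 0.5 purely by construction:

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

random.seed(0)
r = [random.gauss(0, 1) for _ in range(100_000)]          # i.i.d. returns
R = [0.5 * (r[t] + r[t - 1]) for t in range(1, len(r))]   # 2-period rolling mean

print(lag1_autocorr(r))   # close to 0
print(lag1_autocorr(R))   # close to 0.5
```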
How is this related to the question at hand? The question was not about autocorrelation of the return series. – Freddy Jul 8 '12 at 6:15
I thought I was quite clear that I was talking about the autocorrelation of the rolling return series. – John Jul 8 '12 at 13:10
same, it does not relate at all to autocorrelation in residuals. Maybe OP can specify what he means with "rolling returns", however, I am almost certain that he meant calculating returns for each given month rather than building something such as a moving average. If he calculated moving returns in the ways you described in your post then I would agree with you, but it would make zero sense to use absolute fundamental factor values to regress over a moving average return series. It again all comes down to choosing matching periodicity (see my suggested answer) – Freddy Jul 9 '12 at 2:24
For equities, I wouldn't think there would be much of an issue with autocorrelation if he were only looking at one month returns. But you're right that the OP is not exactly clear. – John Jul 9 '12 at 3:55
Would it be valid(correct) to difference the rolling returns in order to assess the robustness of the regression? and then after convert it to levels data.. in matlab; diff(rolling returns) --> run regstats(x,y) --> obtain regression stats data --> return to levels data: cumsum(yhat) in order to obtain prediction? – user1129988 Jul 10 '12 at 12:18
It simply points to the fact that your model as it stands does not have much explanatory power over monthly returns. One reason could be an observation-period mismatch. I am not a fundamental type of guy, but I imagine that the monthly returns are measured over too short a period (1 month) while most fundamental factors are updated on a quarterly basis (sometimes semi-annually or annually depending on local market regulations). Thus, changes or absolute levels in cash flows on a quarterly basis may not be able to explain changes in monthly returns. Can you run the same model again but use quarterly returns (or otherwise match the periodicity of the independent variables) and report back?
once I difference (once) the rolling returns and the factors it seems to handle any issues that may violate the requirements of the OLS.. I still think I am missing something? – user1129988 Jul 10 '12 at 13:32
Try fitting a model with ARMA errors?
However, if by "rolling returns" you imply a moving average of returns or some QoQ or YoY return series, which has much persistence, I am not so sure what the right way to proceed really is (with the exception that you can apply some corrections suggested in econometrics literature).
http://math.stackexchange.com/questions/2949/which-one-result-in-maths-has-surprised-you-the-most/11131
# Which one result in maths has surprised you the most? [closed]
A large part of my fascination in mathematics is because of some very surprising results that I have seen there.
I remember one that I found very hard to swallow when I first encountered it: what is known as the Banach–Tarski Paradox. It states that you can separate a ball $x^2+y^2+z^2 \le 1$ into finitely many disjoint parts, rotate and translate them, and rejoin them (by taking a disjoint union), and you end up with exactly two complete balls of the same radius!
So I ask you which are your most surprising moments in maths?
• Chances are you will have more than one. May I request post multiple answers in that case, so the voting system will bring the ones most people think as surprising up. Thanks!
big-list usually means community wiki. For this question it applies. – Aryabhata Aug 21 '10 at 19:01
I'm getting tired of this question being bumped every once in a while. It seems to have served its purpose and there's no need to accumulate more than 100 answers. Therefore I voted to close it. – t.b. Sep 5 '11 at 22:09
## closed as too localized by t.b., Zev Chonoles♦Sep 5 '11 at 22:18
## 91 Answers
(Mazur) If $E$ is an elliptic curve over $\mathbf{Q}$ then the torsion subgroup of $E(\mathbf{Q})$ is one of
$\mathbf{Z}/N\mathbf{Z}$ for N=1,2,3,4,5,6,7,8,9,10,12
or
$\mathbf{Z}/2\mathbf{Z} \times \mathbf{Z}/N\mathbf{Z}$ for N=1,2,3,4
I find it very surprising that there are so few possibilities for the rational torsion on an elliptic curve. It's also strange to see every number from 1 through 12 except 11 in that first list.
How about the Anti-Calculus Proposition (Erdős): Suppose $f$ is analytic throughout a closed disc and assumes its maximum modulus at the boundary point $a$. Then $f^\prime(a)$ is not equal to $0$ unless $f$ is constant. (Source: Bak & Newman, Complex Analysis 2nd ed.)
Another example from the numerics front: it's surprising that despite the theoretical fact that Gaussian elimination can be unstable (even with pivoting!), examples that trigger this instability are in fact very rare in practice, and can be handled by a simple fix if they do arise.
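For the curious, the classical worst case (Wilkinson's matrix, a standard textbook example, not taken from the answer above) is easy to reproduce: with partial pivoting this matrix suffers element growth of $2^{n-1}$ during elimination, yet matrices like it essentially never occur in practice:

```python
# Gaussian elimination with partial pivoting on the classical
# worst-case matrix: 1 on the diagonal, -1 below it, 1 in the last
# column.  No row swaps occur, and the last column doubles at every
# step, giving element growth 2^(n-1).

def growth_factor(n):
    A = [[1.0 if j == n - 1 or i == j else (-1.0 if i > j else 0.0)
          for j in range(n)] for i in range(n)]
    max0 = max(abs(v) for row in A for v in row)
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # partial pivot
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    maxU = max(abs(v) for row in A for v in row)
    return maxU / max0

print(growth_factor(10))   # 2^9 = 512 on this matrix
```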
I was really struck by duality when my professor lectured about it for the first time. Even though the algebraic concept is easy to understand, the thought that there exists a space in which all inclusions are switched has always had a special place in my mind.
Similar to Thomae's function, I was impressed by the Dirichlet function, which is not only discontinuous everywhere, but impossible to plot. The function is defined as:
$f(x)=\begin{cases} 1 \mbox{ if } x\in\mathbb{Q} \\ 0 \mbox{ if }x\notin\mathbb{Q} \end{cases}$
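Though it cannot be plotted, the Dirichlet function has a classical representation as an iterated limit, $f(x)=\lim_{m\to\infty}\lim_{n\to\infty}\cos(m!\,\pi x)^{2n}$ (a standard fact, not part of the answer above), which can be probed numerically for moderate $m$ and $n$:

```python
import math

def dirichlet_approx(x, m=10, n=10**6):
    """Finite-m, finite-n stage of lim_m lim_n cos(m! * pi * x)^(2n)."""
    return math.cos(math.factorial(m) * math.pi * x) ** (2 * n)

print(dirichlet_approx(0.5))        # rational x: close to 1
print(dirichlet_approx(2 ** 0.5))   # irrational x: close to 0
```

At a finite stage this only detects rationals whose denominator divides $m!$, and floating-point error eventually swamps the computation for large $m$, so this is a heuristic probe rather than a plot.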
For me it would be the Green-Tao theorem, which states: For any natural number $k$, there exist $k$-term arithmetic progressions of primes.
To me the Cayley-Salmon theorem is an example of a result that still strikes me as rather surprising.
Theorem(Cayley-Salmon)
A smooth cubic surface $\mathcal{S} \subset \mathbb{P}^{3}_{k}$ contains exactly 27 lines, where $k$ is an algebraically closed field.
Here is a link to some history about it. There's a really nice treatment of this result in chapter 5 of Klaus Hulek's book Elementary Algebraic Geometry.
-
If $G$ is a (Hausdorff, locally compact) totally disconnected abelian group which is a filtered union of its compact open subgroups (e.g. the additive group of $\mathbb{Q}_p$), then the category of smooth complex representations of $G$ (smooth meaning the action map is continuous when the vector space has the discrete topology) is canonically equivalent to the category of sheaves of complex vector spaces on the Pontryagin dual of $G$.
This is a beautiful example of an algebro-geometric duality in representation theory and quite shocking to me. The situation is sort of analogous to the equivalence, for a commutative ring $R$, of $R$-modules with quasi-coherent sheaves on $\text{Spec} \ R$.
-
When I was a freshman at the University of Rome I took a course in Geometry in which some projective geometry was covered. I was really amazed by two facts:
1. a Moebius strip sits inside the real projective plane.
2. the group of rotations of the real plane fixes two (complex) lines and all the circumferences pass through the points at infinity of these two lines (the socalled cyclic points). A few years later I realized that I was the only grad student in David Eisenbud's class at Brandeis to know this (actually simple) fact!
-
The consequences of the busy beaver problem are really surprising. For any conjecture about a countable family of cases, if we can write a program that verifies whether the conjecture holds in a given case, then we need to check only a FINITE number of cases to prove that the conjecture is true in every case.
-
All eigenvalues of a Hermitian matrix A are real. No immediate intuition as to why it must be true. If we think of the Riemann hypothesis "All non-trivial zeros of the Riemann zeta function are of the form $\frac{1}{2}+zi$ where z is real" and try to think of Hermitian matrices having real eigenvalues in equal awe, the wonderment increases. The two ideas are also related.
-
Wedderburn’s Theorem: Every finite division ring is a field.
Why should finiteness imply commutativity???
(Background: The only way a division ring can fail to be a field is if its multiplication is not commutative.)
-
Mike: you have indeed many "one result that surprised you the most" :) – t.b. May 21 '11 at 1:12
At different times:) - Remember the one about being able to count on and on and on:) – Mike Jones May 21 '11 at 7:33
The fact that any known first order property of $\mathbb{C}$ in the language of rings is valid in any algebraically closed field.
-
Well, of characteristic 0... – Harry Altman Jun 10 '11 at 8:29
Almost everything I've seen so far, but especially that the area under Gaussian curve converges to the square root of the ratio of circumference of the circle to its diameter. This result is old and well-known, but these two things seem so unrelated that I still find it amazing!
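The claim — that $\int_{-\infty}^{\infty} e^{-x^2}\,dx=\sqrt{\pi}$ — is easy to check numerically: a midpoint-rule sum over $[-10,10]$ (the truncation window and step count below are arbitrary choices) already reproduces $\sqrt{\pi}$ to many digits.

```python
import math

# Midpoint rule for the Gaussian integral; the tails beyond |x| = 10
# contribute less than exp(-100), so the truncation error is negligible.
n = 200_000
a, b = -10.0, 10.0
h = (b - a) / n
area = h * sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(n))
assert abs(area - math.sqrt(math.pi)) < 1e-6
```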
-
Aside from some results I found amazing that have already been mentionned, Lagrange's Theorem in group theory is one that amazed me for some time.
For those who don't know about it, it tells us that the order of any subgroup of a group $G$ divides the order of $G$.
-
I have had two results in the last year that surprised me.
One appeals to my intuition in physics, although it may seem more obvious to those more versed in mathematics.
The set of Pauli matrices (together with the identity), when multiplied by $i$, generates the group of unit quaternions (under matrix multiplication).

It was surprising to me how you can connect such a physical concept as electron spin to an abstract algebraic concept. It's what led me to dive into group and representation theory as it applies to physics. Now I understand that we can consider spin symmetry as SU(2), which maps onto SO(3), the rotation group of $R^3$, as a two-to-one cover, and the unit quaternions can be thought of as a realization of SU(2).
The second result, which I had to prove in an algebra assignment:
For a field $F$ of characteristic $p$ where $p$ is prime $(x+y)^p = x^p + y^p$ for $x,y \in F$. Lovely little result that spits in the face of everything that our grade school teachers taught us in algebra.
-
I remember a homework question in elementary school. Something like this: Billy and Jane's house is x blocks east and y blocks north of school. Billy walks home by walking east for x blocks and then north for y blocks. Jane decides to take a short cut: she walks alternately a block north and a block east. (There is a picture: Jane's route is a step-like hypotenuse.) Is Jane's route really a short cut?
Of course it is exactly the same distance, but I found this really hard to digest. I knew that in the triangle the sum of the two sides would exceed the hypotenuse.
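A short numerical sketch makes the paradox vivid: however finely the staircase is subdivided, its total length never budges from $x+y$, so it never approaches the straight-line distance $\sqrt{x^2+y^2}$ (the block sizes below are illustrative):

```python
import math

def staircase_length(x, y, n):
    # Walk n alternating east/north stair steps from (0, 0) to (x, y),
    # summing the length of every segment explicitly.
    dx, dy = x / n, y / n
    total = 0.0
    for _ in range(n):
        total += dx  # one east segment
        total += dy  # one north segment
    return total

x, y = 5.0, 3.0
for n in (1, 10, 1000):
    assert math.isclose(staircase_length(x, y, n), x + y)
assert x + y > math.hypot(x, y)  # the "shortcut" that isn't
```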
-
– Raskolnikov Nov 19 '10 at 10:39
Thus illustrating the difference between the $L^\infty$ and $W^{1,\infty}$ metrics. – Nate Eldredge Nov 19 '10 at 21:19
This is not a very specific answer but I was struck with awe when I saw the entanglement between partial differential equations and stochastic processes.
-
This one goes hand in hand with the countability of $\mathbb{Q}$:
The fact that, though almost all real numbers are transcendental, it is extremely difficult to exhibit one (excluding slight modifications of the already known ones).
-
The divergence of the Harmonic Series.
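A quick numerical illustration: the partial sums grow like $\ln n + \gamma$, so they eventually pass any bound, just very slowly.

```python
import math

def H(n):
    # n-th partial sum of the harmonic series
    return sum(1.0 / k for k in range(1, n + 1))

assert H(10**6) > 14                                # ln(10^6) + gamma ~ 14.39
assert H(10**6) - H(10**3) > math.log(1000) - 0.01  # logarithmic growth
```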
-
The extension principle of fuzzy subset theory. More generally, the extension of several parts of classical mathematics to an infinite-valued context.
-
Supersymmetry. From a mathematical standpoint it implies that every bosonic (resp. fermionic) particle in the universe has a fermionic (resp. bosonic) superpartner, i.e., a physical bijection between the two families of particles.
-
Mamikon’s Theorem: The area of a tangent sweep is equal to the area of its tangent cluster, regardless of the original shape of the curve.
This theorem allows you to, among other things, easily obtain results that were obtained before only with difficulty, such as the area under one arch of a cycloid. This theorem is the basis of what has come to be known as Visual Calculus. Here’s the link to Tom Apostol’s account of this awesome insight:
http://eands.caltech.edu/articles/Apostol%20Feature.pdf
-
The fact that the sum of the first n odd numbers is n squared. Also, not just this fact, but the nice visual “wrapping” proof of it (as opposed to the induction proof).
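The identity is trivial to confirm by machine for the first few hundred values of $n$:

```python
# 1 + 3 + 5 + ... + (2n - 1) == n^2: each new odd number "wraps"
# an (n-1) x (n-1) square of dots into an n x n square.
for n in range(1, 300):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n * n
```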
-
Computational instability of the Quadratic Formula. Whowuduvthought?
Due to this computational instability an alternative formula is also employed. Here is the relevant quote from the Wikipedia article:
“The alternative formula can reduce loss of precision in the numerical evaluation of the roots, which may be a problem if one of the roots is much smaller than the other in absolute magnitude.”
http://en.wikipedia.org/wiki/Quadratic_equation
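The instability is easy to reproduce in double precision. The sketch below contrasts the naive formula with the standard stable variant (compute the large-magnitude root first, then recover the small one from the product of roots $x_1 x_2 = c/a$); the coefficients are illustrative:

```python
import math

def roots_naive(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    # Never subtract nearly equal quantities: -b and +d cancel when |4ac| << b^2.
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))
    return q / a, c / q  # large root, then small root via Vieta

a, b, c = 1.0, 1e8, 1.0          # true roots: about -1e8 and -1e-8
small_naive = roots_naive(a, b, c)[0]
small_stable = roots_stable(a, b, c)[1]
# The naive small root loses essentially all significant digits to
# catastrophic cancellation; the stable one is accurate to machine precision.
```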
-
Kind of simple, but I find it really counterintuitive:
A strictly increasing function can have zero derivative.
-
A strictly increasing function can have zero derivative almost everywhere. – Jonas Meyer Sep 5 '11 at 20:25
The fact that surface area is simply the derivative of volume!
Bear in mind that in mathematics, “volume” is a generic term (i.e., a convenient handle, no pun intended) for the measure of a region of arbitrary dimension. So this holds not only in dimension 3 (a ball, whose boundary is the 2-sphere), but also in dimension 2 (a disk, whose boundary is the circle, i.e., the 1-sphere, with perimeter playing the role of surface area), and in dimension 1 (an interval, whose boundary is the pair of endpoints, i.e., the 0-sphere). So, we have:
Dimension 3: $\frac{d}{dr}\left(\frac{4}{3}\pi r^3\right) = 4\pi r^2$
Dimension 2: $\frac{d}{dr}\left(\pi r^2\right) = 2\pi r$
Dimension 1: $\frac{d}{dr}(2r) = 2$
Beautiful!
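These three derivatives can be verified symbolically, for instance with sympy:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
# Differentiating the "volume" with respect to the radius yields the
# boundary measure in each dimension.
assert sp.simplify(sp.diff(sp.Rational(4, 3) * sp.pi * r**3, r) - 4 * sp.pi * r**2) == 0
assert sp.simplify(sp.diff(sp.pi * r**2, r) - 2 * sp.pi * r) == 0
assert sp.simplify(sp.diff(2 * r, r) - 2) == 0
```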
-
$r/{\mathrm{d}r}=1$ does not make sense to me. – Rasmus Jun 10 '11 at 9:11
The Feit-Thompson theorem, although I don't know yet how to prove it. It was impressive to me that merely having odd order forces a group to be solvable.
-
How the "inverse" of area under a curve is the slope.
(I mean: $\frac{d}{dx} \int x {\mathrm dx} = x$)
-
The Chinese Magic Square:
8 1 6
3 5 7
4 9 2
Every row, column, and diagonal adds up to 15! Awesome! And the Chinese evidently thought so too, since they incorporated it into their religious writings.
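All eight lines of this square (the Lo Shu square) can be checked in a few lines of Python:

```python
square = [[8, 1, 6],
          [3, 5, 7],
          [4, 9, 2]]

sums  = [sum(row) for row in square]                # 3 rows
sums += [sum(col) for col in zip(*square)]          # 3 columns
sums += [sum(square[i][i] for i in range(3)),       # main diagonal
         sum(square[i][2 - i] for i in range(3))]   # anti-diagonal
assert all(s == 15 for s in sums)
```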
-
http://math.stackexchange.com/questions/6720/calculate-euler-yaw-pitch-and-roll-for-a-3d-point-to-face-another/28583
# Calculate Euler yaw pitch and roll for a 3d point to face another
This may be a really basic question but the typical atan2 method isn't working when I write it in a program.
I need a camera object to face a certain target in 3D space.
I need to calculate the 3 Euler angles (yaw, pitch, roll) for the camera.
How can I do this?
-
The solution will not be unique since you can always rotate the camera on its axis and still have it pointing at a target. Like choosing portrait vs. landscape in a real camera. – Jyotirmoy Bhattacharya Oct 13 '10 at 23:36
You're right, but at least the X and Y can be calculated to face the point, if not a proper Z (roll). – Jarvis Oct 14 '10 at 5:06
Can you explain exactly how your use of two-argument arctangent is failing? It seems to me that any correction to your algorithm would involve the addition or subtraction of some multiple of $\pi/2$. – J. M. Oct 20 '10 at 11:07
## 1 Answer
From looking at your diagram, it appears to me that the transformation takes the unit vector $(i_x,i_y,i_z)$ and rotates it to point to $(f_x,f_y,f_z)$:
$$\begin{pmatrix}c_\Psi&s_\Psi&0\\-s_\Psi&c_\Psi&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\0&c_\theta&-s_\theta\\0&s_\theta&c_\theta\end{pmatrix} \begin{pmatrix}c_\phi&-s_\phi&0\\s_\phi&c_\phi&0\\0&0&1\end{pmatrix} \begin{pmatrix}i_x\\i_y\\i_z\end{pmatrix} = \begin{pmatrix}f_x\\f_y\\f_z\end{pmatrix}$$
where $c_\phi = \cos(\phi), s_\phi = \sin(\phi)$, etc.
It isn't hard to solve for $\phi,\theta$ and $\Psi$ if you assume that your camera begins by pointing in a convenient direction such as $(i_x,i_y,i_z) = (0,0,1)$.
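For the common special case where the camera starts out looking along $(0,0,1)$ and roll is left at zero, the two remaining angles drop out of atan2 directly. A numpy sketch — the axis conventions here (yaw about $y$, then pitch toward $y$) are one arbitrary choice among many, not necessarily the asker's:

```python
import numpy as np

def yaw_pitch_to(target):
    # Angles (radians) turning a camera that initially looks along +z
    # toward `target`; roll is fixed at 0.
    d = np.asarray(target, dtype=float)
    d = d / np.linalg.norm(d)
    yaw = np.arctan2(d[0], d[2])                    # rotation about the y-axis
    pitch = np.arctan2(d[1], np.hypot(d[0], d[2]))  # elevation toward +y
    return yaw, pitch

def look_vector(yaw, pitch):
    # Reconstruct the unit look direction from the two angles.
    return np.array([np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch),
                     np.cos(pitch) * np.cos(yaw)])

target = np.array([3.0, -1.0, 2.0])
yaw, pitch = yaw_pitch_to(target)
assert np.allclose(look_vector(yaw, pitch), target / np.linalg.norm(target))
```

The round-trip assertion is the useful part: whatever convention you pick, verify it by rebuilding the look vector from the angles.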
-
Uh, I should add that when writing these things out, I normally have (a) wrong signs and (b) wrong orders of matrix multiplication. – Carl Brannen Mar 23 '11 at 0:27
http://mathhelpforum.com/differential-geometry/140757-linear-functional.html
# Thread:
1. ## Linear Functional
I need to show that a linear functional $\alpha_v$ on a Hilbert space $H$ defined by $\alpha_v(w)=\langle w,v\rangle$ has operator norm $\|\alpha_v\|_{op}=\|v\|$.
I have already shown by the Cauchy Schwarz inequality that $\|\alpha_v\|_{op}\leq\|v\|$.
I think now I need to show that $\|\alpha_v\|_{op}\geq\|v\|$.
Any help would be great, Thanks
2. Originally Posted by ejgmath
I need to show that a linear functional $\alpha_v$ on a Hilbert space $H$ defined by $\alpha_v(w)=\langle w,v\rangle$ has operator norm $\|\alpha_v\|_{op}=\|v\|$.
I have already shown by the Cauchy Schwarz inequality that $\|\alpha_v\|_{op}\leq\|v\|$.
I think now I need to show that $\|\alpha_v\|_{op}\geq\|v\|$.
Any help would be great, Thanks
Tip, plug in $v/||v||$.
3. Into where? Can you elaborate?
4. Originally Posted by ejgmath
Into where? Can you elaborate?
Into $\alpha_v$. What you want is $\alpha_v(x)=\|v\|$ for some $x$ with $\|x\| \leq 1$, which precisely shows the inequality you are looking for (as $\|\alpha_v\|_{op}:=\sup \{|\alpha_v(x)|: \|x\| \leq 1 \}$). Indeed, for $x=v/\|v\|$ (assuming $v\neq 0$; the case $v=0$ is trivial) one gets $\alpha_v(x)=\langle v,v\rangle/\|v\|=\|v\|$.
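A finite-dimensional numerical sanity check of $\|\alpha_v\|_{op}=\|v\|$ (purely illustrative — the "Hilbert space" here is just $\mathbb{R}^5$ with the dot product):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=5)
norm_v = np.linalg.norm(v)

# Cauchy-Schwarz direction: no unit vector exceeds ||v|| ...
for _ in range(1000):
    u = rng.normal(size=5)
    u /= np.linalg.norm(u)
    assert u @ v <= norm_v + 1e-12

# ... and x = v/||v|| attains it, so the sup equals ||v||.
x = v / norm_v
assert np.isclose(x @ v, norm_v)
```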
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoms/1177728793
### Some Theorems Relevant to Life Testing from an Exponential Distribution
B. Epstein and M. Sobel
Source: Ann. Math. Statist. Volume 25, Number 2 (1954), 373-381.
#### Abstract
A life test on $N$ items is considered in which the common underlying distribution of the length of life of a single item is given by the density \begin{equation*}\tag{1} p(x; \theta, A) = \begin{cases}\frac{1}{\theta} e^{-(x-A)/\theta},\quad\text{for } x \geqq A \\ 0,\quad\text{otherwise}\end{cases}\end{equation*} where $\theta > 0$ is unknown but is the same for all items and $A \geqq 0$. Several lemmas are given concerning the first $r$ out of $n$ observations when the underlying p.d.f. is given by (1). These results are then used to estimate $\theta$ when the $N$ items are divided into $k$ sets $S_j$ (each containing $n_j > 0$ items, $\sum^k_{j=1} n_j = N$) and each set $S_j$ is observed only until the first $r_j$ failures occur ($0 < r_j \leqq n_j$). The constants $r_j$ and $n_j$ are fixed and preassigned. Three different cases are considered:

1. The $n_j$ items in each set $S_j$ have a common known $A_j$ ($j = 1, 2, \cdots, k$).
2. All $N$ items have a common unknown $A$.
3. The $n_j$ items in each set $S_j$ have a common unknown $A_j$ ($j = 1, 2, \cdots, k$).

The results for these three cases are such that the results for any intermediate situation (i.e., some $A_j$ values known, the others unknown) can be written down at will. The particular case $k = 1$ and $A = 0$ is treated in [2]. The constant $A$ in (1) can be interpreted in two different ways: (i) $A$ is the minimum life, that is, life is measured from the beginning of time, which is taken as zero; (ii) $A$ is the "time of birth", that is, life is measured from time $A$. Under interpretation (ii) the parameter $\theta$, which we are trying to estimate, represents the expected length of life.
Full-text: Open access
Permanent link to this document: http://projecteuclid.org/euclid.aoms/1177728793
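For the simplest case treated here ($k=1$, $A=0$: observe only the first $r$ of $n$ failures), the Type-II-censored maximum-likelihood estimator of $\theta$ is $\hat\theta = \bigl(\sum_{i=1}^{r} x_{(i)} + (n-r)\,x_{(r)}\bigr)/r$. A small simulation (sample sizes and seed below are arbitrary choices) illustrates its unbiasedness empirically:

```python
import numpy as np

def theta_hat(sample, r):
    # MLE of theta from the first r order statistics of n exponential
    # lifetimes (Type-II censoring): each censored item contributes x_(r).
    x = np.sort(sample)
    n = len(x)
    return (x[:r].sum() + (n - r) * x[r - 1]) / r

rng = np.random.default_rng(1)
theta, n, r = 2.0, 20, 10
estimates = [theta_hat(rng.exponential(theta, size=n), r)
             for _ in range(2000)]
# The empirical mean of the 2000 estimates should sit close to theta = 2.
```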
http://physics.aps.org/articles/v4/48
# Viewpoint: Holes in the stream
Giovanni Vignale, Department of Physics and Astronomy, University of Missouri, Columbia, Missouri 65211, USA
Published June 13, 2011 | Physics 4, 48 (2011) | DOI: 10.1103/Physics.4.48
Measurement of the Coulomb drag correlation in a two-dimensional electron gas provides a quantitative explanation for the movement of electron-hole packets opposite to the expected direction.
#### Measurement of Electron-Hole Friction in an n-Doped GaAs/AlGaAs Quantum Well Using Optical Transient Grating Spectroscopy
Luyi Yang, J. D. Koralek, J. Orenstein, D. R. Tibbetts, J. L. Reno, and M. P. Lilly
Published June 13, 2011 | PDF (free)
The behavior of coupled electron-hole packets is crucial to the properties of certain semiconductor devices, such as some spintronic systems that rely on spin currents carried by polarized electron-hole packets [1]. But the motion of these packets in an electric field, known as ambipolar transport because the electrons and holes move in a coherent way, does not always follow the predictions of standard theory [2]—in some circumstances, they move in the opposite direction to what is expected [3]. Although the underlying reason for this behavior has been known for some time, Luyi Yang at Lawrence Berkeley National Laboratory and colleagues have measured a key quantity characterizing the nonstandard motion [4]. The new work, which appears in Physical Review Letters, should make possible better modeling and design of ambipolar transport in devices.
Laser light with energy larger than the band gap of a semiconductor quantum well that hosts, say, a two-dimensional electron gas (2DEG), will excite additional electrons in the conduction band and holes in the valence band. If the laser intensity is spatially nonuniform, these additional electrons and holes concentrate where the laser intensity is higher; if the laser is focused on a single spot, they form an electron-hole packet. Once the laser is turned off, the electrons and holes begin to recombine, but the recombination time is sufficiently long to allow experimentalists to monitor the packet’s evolution. The packet undergoes diffusion, spreading at a rate controlled by the so-called ambipolar diffusion constant $D_a$, a weighted average of the diffusion constants of electrons $D_e$ and holes $D_h$ [2]:
$$D_a=\frac{\sigma_e D_h+\sigma_h D_e}{\sigma_e+\sigma_h},\tag{1}$$
where $\sigma_e$ and $\sigma_h$, the Drude conductivities of electrons and holes, are proportional to the densities of electrons and holes. In an electric field $E$, the packet will drift at a velocity $v_p=\mu_a E$, where the ambipolar mobility is
$$\mu_a=\frac{\sigma_e\mu_h-\sigma_h\mu_e}{\sigma_h+\sigma_e},\tag{2}$$
$\mu_e$ and $\mu_h$ being the electron and hole mobilities, respectively. The drift velocity of the packet will, in general, differ from the drift velocity of the underlying majority carriers: it is the velocity of propagation of a disturbance on the sea of majority carriers. If the electrons are in an overwhelming majority, as is often true in practice, $\sigma_e\gg\sigma_h$ and therefore $D_a\simeq D_h$ and $\mu_a\simeq\mu_h$. The electron-hole packet should then drift in the direction of the electric field, opposite to the direction in which electrons drift, with a velocity determined by the mobility of the holes.
Ambipolar mobility has long been understood in terms of dielectric screening—the Coulomb correlation that tends to suppress any spatial charge imbalance in a high-mobility electron gas. An electric field causes the electrons and the holes in a packet to start drifting in opposite directions, but the Coulomb attraction between the electrons and the holes prevents them separating beyond a critical distance $\bar{\mu}E\tau_d$, where $\bar{\mu}$ is the average carrier mobility and $\tau_d$ is the dielectric relaxation time, which is usually very short in a high-mobility semiconductor. Once this separation is reached, the drifting holes lead the packet, pulling up and discarding background electrons as they go to maintain charge neutrality. In brief, the packet—regarded as a disturbance of the 2DEG—is located where the holes are, and goes where the holes go.
As early as the 1980s, however, it became clear that this simple picture could not be the whole story. In 1986, Höpfel et al. [3] reported that an electron-hole packet injected in a high mobility $GaAs$ quantum well would drift in the direction of the majority rather than the minority carriers. This seems very strange: What could persuade the positively charged holes to drift against the direction of the electric field? Höpfel et al. proposed an explanation that hinges on another correlation between electrons and holes, the Coulomb drag correlation. When electrons and holes drift with different average velocities—as they do when an electron-hole packet moves through an electron gas—the density fluctuations of the two species of carriers tend to interlock. This causes a phenomenon akin to friction, by which momentum transfer from one species to the other reduces their velocity difference. This phenomenon has long been studied in bilayer systems [5], not only for its intrinsic interest (quantitative tests of many-body theories of the 2DEG, diagnostics of phase transitions), but also as a possible way of coupling devices. In the case of bilayer systems, however, the electron-hole interaction is rather small and the effect is correspondingly weak.
In the Höpfel et al. experiment, on the other hand, electrons and holes coexist in the same layer. Momentum transfer between electrons and holes acts as an effective electric field that opposes the external field, and can force the holes to drift in a direction opposite to their natural inclination, or, in other words, to go with the electron flow [see Fig. 1(b)]. The drag field on the holes is
$$E_{\rm drag}=-\rho_{eh}\,j_e=-\left(\frac{\rho_{eh}}{\rho_e}\right)E,\tag{3}$$
where $j_e$ is the electron current density, $\rho_e$ is the usual electron resistivity, and $\rho_{eh}$ is the electron-hole transresistivity, which relates the field acting on one carrier species to the current of the other. A key point is that $\rho_{eh}$ has an intrinsic origin (the Coulomb drag correlation), while $\rho_e$ is primarily controlled by extrinsic impurity scattering and can, in principle, be made arbitrarily small. The drag field can therefore overcome the external field if $\rho_{eh}>\rho_e$, i.e., if the electron mobility of the 2DEG is sufficiently high.
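A toy numerical reading of Eq. (3), with made-up resistivity values, shows the sign reversal: once $\rho_{eh}>\rho_e$, the drag field overwhelms the external field and the net field on the holes points against $E$.

```python
# Illustrative (fabricated) values in arbitrary units; only the ratio matters.
rho_e, rho_eh = 1.0, 3.0        # high-mobility regime: rho_eh > rho_e
E = 1.0
E_drag = -(rho_eh / rho_e) * E  # Eq. (3) with j_e = E / rho_e
assert E + E_drag < 0           # holes are dragged along with the electrons
```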
Ironically, the key quantity $ρeh$ has been thoroughly studied in bilayer systems [5] but never before in a single-layer system. The reason is that bilayer measurements relied on the ability to separately probe the two layers—a possibility that does not exist here. Yang et al. [4] have now found a way to do the single-layer measurement by using a completely different experimental technique. Rather than creating a single electron-hole packet, Yang and coauthors create an electron-hole density modulation by superimposing on the surface of a 2DEG two linearly polarized laser beams coming from different directions [6]. Interference between the laser beams creates a standing wave, and electron-hole packets form under its maxima [see Fig. 1(a)]. The resulting electron-hole density modulation then serves as a diffraction grating for another laser beam, acting as a probe. From the intensity of the diffraction pattern the experimenters can determine the amplitude of the electron-hole density modulation. Naturally, this amplitude decays after the pump laser beams are switched off; this is why Yang et al. call the technique transient grating spectroscopy. By measuring the decay rate of the electron-hole density modulation as a function of its own wavelength (the latter is varied by changing the angle between the beams), the experimenters extract both the electron-hole recombination time and the ambipolar diffusion constant. Finally, by measuring the phase shift of the diffracted wave relative to a reference wave, they are able to determine the drift velocity of the grating, and hence the value of the ambipolar mobility.
Yang et al. have developed a simple quantitative model that expresses $D_a$ and $\mu_a$ in terms of the electron and hole mobilities and the electron-hole transresistance for different 2DEG densities and laser intensity levels. From their formulas the hole transresistivity can be extracted. In contrast to the bilayer case, in which $\rho_{eh}$ is smaller than $\rho_e$, here $\rho_{eh}$ is larger than $\rho_e$ throughout the range of temperatures explored in the experiment, so that ambipolar mobility is always negative. At low temperature, $\rho_e\ll\rho_{eh}$ and the ambipolar mobility approaches the electronic mobility: the relative velocity between the packet and the background electrons is effectively zero. At higher temperature, the electronic resistance increases more rapidly than the electron-hole transresistivity: as $\rho_{eh}/\rho_e$ approaches $1$, the packet mobility approaches the hole mobility in absolute value, but still has a negative sign. Only for $\rho_{eh}\ll\rho_e$ is the “textbook result” recovered. Remarkably, the measured transresistivity is in good quantitative agreement with a simple microscopic calculation of momentum transfer rate between electrons and holes.
Besides filling an obvious gap in our understanding of the impact of Coulomb correlations on electron-hole transport, the experimental techniques described by Yang et al. open up a number of exciting possibilities. Changing the polarization of the laser pumps from linear to circular would generate spin gratings whose mobility could be studied in experiments similar to the ones that led to the first observation of spin Coulomb drag [7, 8]. The spin grating can be made hole-free by going to times longer than the electron-hole recombination time, but then the spin orbit interaction, which is very significant in, say, a $[100]$ $GaAs$ quantum well, becomes important. It will then be possible to study the behavior of the spin grating mobility near special wavelengths, where an extremely long-lived helical state has been predicted [9] and subsequently observed [10]. This will deepen our understanding of these intriguing new states of electronic matter. On a more practical level, the techniques developed in this paper will be useful for a better characterization of carrier transport in spintronic devices.
## Acknowledgments
This work was supported by DOE under Grant No. DE-FG02-05ER46203. I thank M. E. Flatté for a useful discussion.
### References
1. M. E. Flatté in Manipulating Quantum Coherence in Solid State Systems, NATO Science Series II: Mathematics, Physics, and Chemistry, edited by M. E. Flatté and I. Tifrea (Springer, Dordrecht, 2007)[Amazon][WorldCat].
2. Semiconductors, edited by R. A. Smith (Cambridge University Press, New York, 1978)[Amazon][WorldCat].
3. R. A. Höpfel, J. Shah, P. A. Wolff, and A. C. Gossard, Phys. Rev. Lett. 56, 2736 (1986).
4. L. Yang, J. D. Koralek, J. Orenstein, D. R. Tibbetts, J. L. Reno, and M. P. Lilly, Phys. Rev. Lett. 106, 247401 (2011).
5. For a review, see A. G. Rojo, J. Phys. Condens. Matter 11, R31 (1999).
6. A. R. Cameron, P. Riblet, and A. Miller, Phys. Rev. Lett. 76, 4793 (1996).
7. G. Vignale in Manipulating Quantum Coherence in Solid State Systems, NATO Science Series II: Mathematics, Physics, and Chemistry, edited by M. E. Flatté and I. Tifrea (Springer, Dordrecht, 2007)[Amazon][WorldCat].
8. C. P. Weber, N. Gedik, J. E. Moore, J. Orenstein, J. Stephens, and D. D. Awschalom, Nature 437, 1330 (2005).
9. B. A. Bernevig, J. Orenstein, and S.-C. Zhang, Phys. Rev. Lett. 97, 236601 (2006).
10. J. D. Koralek et al., Nature 458, 610 (2009).
### About the Author: Giovanni Vignale
Giovanni Vignale is Curators’ Professor of Physics at the University of Missouri-Columbia. After graduating from the Scuola Normale Superiore in Pisa in 1979 and gaining his Ph.D. at Northwestern University in 1984, he carried out research at the Max-Planck-Institute for Solid State Research in Stuttgart, Germany, and Oak Ridge National Laboratory. He became a Fellow of the American Physical Society in 1997. Professor Vignale’s main areas of research are many-body theory and density-functional theory. He is the author of two books, Quantum Theory of the Electron Liquid and The Beautiful Invisible and has published almost 200 papers.
http://mathoverflow.net/questions/124659/analytic-rank-of-an-elliptic-curve-with-algebraic-rank-0
## Analytic rank of an elliptic curve with algebraic rank 0
Let $E$ be an elliptic curve over $\mathbb{Q}$ with algebraic rank 0. Is there any way, one can argue that the analytic rank must be even? Of course this would follow from the standard conjectures, such as BSD, Sha is finite, and the parity conjectures, but are there any unconditional results?
-
No . – wccanard Mar 15 at 23:16
as long as it's known to be at most two! – Will Sawin Mar 16 at 3:22
mathoverflow.net/questions/123813/… – Srilakshmi Mar 16 at 6:23
In general no. If you know that the p-Selmer rank is 0 for some p, then yes. – Tim Dokchitser Mar 16 at 14:55
@Will Sawin and Tim Dokchitser: "as long as you have a condition, there is an unconditional result" ;-) Tim: are you not tempted to give a reference? ;-) – wccanard Mar 16 at 17:51
http://math.stackexchange.com/questions/187711/how-to-prove-t231-irreducible-in-f-p
|
# How to prove $t^{23}+1$ irreducible in $F_p$?
I have tried to prove that $t^2+1$ is irreducible over $F_3$ by supposing to the contrary $t^2+1=(t+\alpha)(t+\beta)=t^2+(\alpha+\beta)t+\alpha\beta$.
Then, $\alpha+\beta\equiv 0 \pmod 3, \alpha\beta\equiv 1 \pmod 3$. But this leads to $\beta ^2\equiv 2 \pmod 3$ which is not possible by considering cases.
My question would be how do I prove that $t^{23}+1$ (or other similar polynomials like $t^5+1$) is irreducible over $F_p$? Considering cases (linear factors, quadratic factors) seems too tedious to be a feasible method.
Sincere thanks for any help!
[apologies: I have edited the question. The first polynomial should be $t^2+1$ instead of $t^3+1$]
-
`maple> factor(x^23+1) mod 3` returns $( 1+t ) ( 1+2\,t+{t}^{2}+2\,{t}^{3}+{t}^{4}+2\,{t}^{ 5}+{t}^{6}+2\,{t}^{7}+{t}^{8}+2\,{t}^{9}+{t}^{10}+2\,{t}^{11}+{t}^{12} +2\,{t}^{13}+{t}^{14}+2\,{t}^{15}+{t}^{16}+2\,{t}^{17}+{t}^{18}+2\,{t} ^{19}+{t}^{20}+2\,{t}^{21}+{t}^{22} )$ – user2468 Aug 28 '12 at 2:03
In $\Bbb Z$ the irreducible factors of $t^n-1$ are the cyclotomic polynomials $\Phi_d(t)$ for divisors $d\mid n$ (which includes at least $d=1$ and $n$). In arbitrary fields even these factors may reduce further. – anon Aug 28 '12 at 2:15
## 4 Answers
The polynomial $t^3+1$ has the root $-1$ in $F_3$, so it is not irreducible. The same is true of $t^{23}+1$, indeed $t^n+1$ where $n$ is odd. In all cases, the polynomial is divisible by $t-(-1)$, that is, $t+1$.
The same result holds in any field $F$. If $n\gt 1$ is odd, then $t^n+1$ is not irreducible over $F$. The proof is the same.
-
Why is $t+1$ aka $t+2$ mod 3? – user2468 Aug 28 '12 at 2:07
@Jennifer Dylan: Thanks! Because I was thinking of $t-2$. – André Nicolas Aug 28 '12 at 2:08
Update: this post is now obsolete. I was referring to a typo in an earlier version of the question.
I want to make a comment about a mistake in your contradiction proof that $x^3 + 1$ is irreducible over $\Bbb{Z}/3\mathbb{Z}$. You begin by assuming: $$t^3+1=(t+\alpha)(t+\beta)=t^2+(\alpha+\beta)t+\alpha\beta$$ How come a degree $3$ polynomial is equal to a degree $2$ polynomial?
Fixing it, we should assume: $$t^3 + 1 = (t+a)(t+b)(t+c) = t^3 + (a+b+c) t^2 +(bc+ac+ab) t+ abc$$ which leads to the system: $$\begin{eqnarray} a+b+c & \equiv & 0 \pmod{3} \\ bc + ac + ab & \equiv & 0 \pmod{3} \\ abc & \equiv & 1 \pmod{3} \\ \end{eqnarray}$$ The $3$rd equivalence gives $4$ possibilities for $(a,b,c)$: $(1, 1, 1)$, $(1, 2, 2)$, $(2, 1, 2)$ and $(2, 2, 1)$. Filtering through the $1$st equivalence, we are left with $(1, 1, 1)$, which is consistent with the $2$nd equivalence. Hence $-1$ is a root of $t^3 + 1$ over $\Bbb{Z}/3\mathbb{Z}$. So $$t^3 + 1 \equiv (t+1)^3 \pmod{3}$$
-
Sincere thanks! I actually meant $t^2+1$. [question edited] – yoyostein Aug 28 '12 at 3:28
This special form is relatively easy, since its roots are roots of unity. Every solution is a 23rd root of -1, which must be a 46th root of unity.
I'm going to assume you're not in characteristic 2 or 23: I'll let you work out that special case yourself.
Breaking the group of 46th roots of unity into its relatively prime parts, we have a primitive square root of unity $-1$ and a primitive 23rd root of unity $\omega$. Your particular polynomial's roots are of the form $-\omega^n$. That is,
$$t^{23} + 1 = \prod_{n=0}^{22} (t + \omega^n)$$
This is its factorization in the splitting field. How can we understand how it factors in the field we are actually interested in?
The answer is Galois theory! You group the roots into conjugacy classes, and factor the product accordingly. Each factor will be a polynomial over your base field.
So, how can we group the roots into conjugacy classes? One simple way is to simply figure out what field a particular root generates.
Suppose your base field is $\mathbb{F}_{13}$, the finite field of 13 elements. It has a primitive 12th root of unity. Its quadratic extension has a primitive 168th root of unity. Its cubic extension has a primitive $13^{3}-1$ root of unity. But which one can have a 23rd root of unity?
The answer is simple! We just need $23 | 13^n - 1$, or $13^n \equiv 1 \pmod{23}$.
The only possibilities for $n$ are $1,2,11,22$, being the divisors of $\phi(23)$. $n=11$ is the smallest one that works.
So, $\omega$ generates a degree 11 extension of $\mathbb{F}_{13}$: its conjugacy class has 11 terms. Another 23rd root of unity is 1. What about the other 11 primitive roots of unity? Well, this field has all of the 23rd roots of unity, so those 11 must also be in a conjugacy class.
Therefore, $t^{23}+1$ factors as $t+1$ and a product of two degree 11 factors. The Galois action is generated by Frobenius: by $x \mapsto x^{13}$, so an explicit formula for those factors is
$$\prod_{n=0}^{10} (t + \omega^{13^n})$$ and $$\prod_{n=0}^{10} (t + \omega^{5 \cdot 13^n})$$
Where did the $5$ come from? It's not a power of 13 (modulo 23). Any such exponent would do in its place.
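This orbit-counting argument is easy to check numerically (a sketch, not from the thread; `mult_order` and `factor_degrees` are hypothetical helper names):

```python
# Over F_p (p != 2, 23), t^23 + 1 = (t + 1) times irreducible factors whose
# degrees all equal d = ord_23(p): the size of each Frobenius orbit
# x -> x^p on the remaining 22 roots -omega^k.

def mult_order(a, m):
    """Multiplicative order of a modulo m (assumes gcd(a, m) == 1)."""
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

def factor_degrees(p):
    """Predicted degrees of the irreducible factors of t^23 + 1 over F_p."""
    d = mult_order(p, 23)
    return [1] + [d] * (22 // d)

print(factor_degrees(13))   # [1, 11, 11]: t + 1 and two degree-11 factors
print(factor_degrees(5))    # [1, 22]: 5 has order 22 mod 23
```

Since 3 also has order 11 mod 23, the degree-22 cofactor in the Maple output quoted in the comments is not irreducible over $F_3$; it splits further into two degree-11 factors.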
-
$t^r+1$ has the root $t=-1$ as long as $r$ is odd, so it is reducible over $F_p$ for any $p$.
-
|
http://www.physicsforums.com/showthread.php?p=970773
|
Physics Forums
## Superellipse and a good coordinate system
http://mathworld.wolfram.com/Superellipse.html
you can find the definition of superellipse. Now consider the particular super ellipse
$$\frac{x^{2n}}{A^{2n}} +\frac{y^{2n}}{B^{2n}} = 1$$
In which A,B, are constant and n is a positive integer.
What is the coordinate system that has two families of curves in which one represents the superellipse and the other one is perpendicular to it? In the particular case of A = B, n = 1, the coordinate system is
$$x = r\cos{\varphi}$$
$$y = r\sin{\varphi}$$
If A is different than B but still n =1, the coordinate system is similar but it involves also hyperbolic sine and cosine and one family of curves is the generic ellipse and the other family is the generic hyperbola and they are perpendicular to each other, like it happens in the polar coordinates. So, is there a similar curvilinear coordinate system for the particular superellipse I described?
What about: $$x = A (\cos \varphi)^{\frac{1}{n}}$$ $$y = B (\sin \varphi)^{\frac{1}{n}}$$ ?
This is a parametric representation of the superellipse, not a coordinate system. In fact, you have only a "free" parameter $$\varphi$$ and you must have two parameters, like for polar coordinates, where you have $$\varphi$$ and $$r$$.
## Superellipse and a good coordinate system
Ok...so consider then:
$$x = A\, r [\cos \varphi]^{\frac{1}{n}}$$
$$y = B\, r [\sin \varphi]^{\frac{1}{n}}$$
from which we then get:
$$\frac{x^{2n}}{A^{2n}} +\frac{y^{2n}}{B^{2n}} = r^{2n}$$
Let $\{\mathbf{e}_1, \mathbf{e}_2\}$ form an orthonormal basis for the rectangular coordinate system and let $\mathbf{w} = x\, \mathbf{e}_1 + y\, \mathbf{e}_2\;$.
So then,
$$\mathbf{w} = r (A [\cos \varphi]^{\frac{1}{n}} \mathbf{e}_1 + B [\sin \varphi]^{\frac{1}{n}} \mathbf{e}_2)$$
To find a tangent vector to some superellipse given by fixed $r\,$, we find:
$$\frac{\partial \mathbf{w}}{\partial \varphi} = \frac{r}{\sin \varphi \cos \varphi} (-\frac{A\, \sin^2 \varphi}{n} [\cos \varphi]^{\frac{1}{n}} \mathbf{e}_1 + \frac{B\, \cos^2 \varphi}{n} [\sin \varphi]^{\frac{1}{n}} \mathbf{e}_2)$$
And I have no idea where I'm going with this but I'm having fun with the TeX stuff :p
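As a quick numerical sanity check of the parametric representation above (a sketch, not from the thread; valid for phi in (0, pi/2), where sine and cosine are positive):

```python
import math

def superellipse_point(A, B, r, phi, n):
    """x = A r cos(phi)^(1/n), y = B r sin(phi)^(1/n), for phi in (0, pi/2)."""
    x = A * r * math.cos(phi) ** (1.0 / n)
    y = B * r * math.sin(phi) ** (1.0 / n)
    return x, y

# verify (x/A)^(2n) + (y/B)^(2n) = r^(2n) at several values of phi
A, B, r, n = 2.0, 3.0, 1.5, 2
for phi in (0.3, 0.7, 1.2):
    x, y = superellipse_point(A, B, r, phi, n)
    assert abs((x / A) ** (2 * n) + (y / B) ** (2 * n) - r ** (2 * n)) < 1e-9
```

Of course this only confirms the parametrization of the r = const family; it says nothing about an orthogonal second family, which is the open question of the thread.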
It is not difficult to find a curvilinear coordinate system; the difficulty is to find one in which the two families of curves that you obtain by setting one parameter constant are perpendicular to each other. See for example the link http://mathworld.wolfram.com/Ellipti...ordinates.html from which it appears clear that the two families of curves (ellipses and hyperbolas) are perpendicular at each point where they intersect. My desire is to find analytical expressions similar to the ones in the link I posted, but instead of ellipses I have a particular kind of superellipse in which n is a positive integer (see the previous equation). Notice that when you change superellipse, A and B change as well, just as in the polar coordinate system, where r changes when you move from one circle to another.
So far, we have obtained only the parametric representation. Does nobody know better? Should I give up? Is there no expert here?
Nobody in this forum is able to help me? Can you please tell me another forum where I can ask the same question?
|
http://mathoverflow.net/questions/105945/elliptic-curves-with-and-without-cm/105967
|
## elliptic curves with and without CM
Let $E/\mathbb C$ be an elliptic curve. It is known that if $E$ has CM, then $j(E)$ is an algebraic integer. My first question is: what about the converse? Is there a way to identify the subset of algebraic integers whose elements correspond to [isomorphism classes of] elliptic curves with CM? The second question is: fixed a number field $K$, does there always exist an elliptic curve over $K$ with CM? And without CM?
Thank you very much!
-
For the second question: there are CM elliptic curves over Q. Their base changes to the field at hand are CM as well. E.g., you could take y^2 = x^3 - x over any number field. On the other hand there are non-CM curves as well: just take the j-invariant to lie in the number field you want but to not be an algebraic integer. – Kestutis Cesnavicius Aug 30 at 13:41
However the extra endomorphisms of those curves, in this case $x\to -x$, $y\to iy$, are not defined over $\mathbb Q$. This is because their action on $1$-forms is multiplication by an element of the ring of integers of the imaginary quadratic field, so they cannot be defined over $\mathbb Q$. – Will Sawin Aug 30 at 16:16
## 3 Answers
1)If an elliptic curve has integral $j$-invariant it absolutely DOES NOT NEED to have CM. The class of curves with integral $j$-invariant (let's call that the class of IM Elliptic curves for Integral Modulus) is MUCH MUCH larger than the class of CM Elliptic curves. In fact, one can use Heilbronn's Theorem that class numbers of imaginary quadratic fields tend to infinity to show that over any given number field, there are only finitely many elliptic curves with CM. In particular over $\mathbf{Q}$, there are only 13 CM $j$-invariants. Even for all number fields of a certain degree, there are only a finite number $N(d)$ of elliptic curves with CM over any number field of degree $d$. This gives a way to enumerate all the CM $j$-invariants or "singular moduli," which is about as good as you can hope for in terms of describing the complex numbers which are $j$-invariants of CM elliptic curves. To do this explicitly (say for number fields of degree up to 100) see Mark Watkins' enumeration of imaginary quadratic fields of class number up to 100.
Meanwhile, for any regular integer $n$ (1,2,3, etc) there is at least one elliptic curve over $\mathbf{Q}$ whose $j$ invariant is $n$. Therefore over $\mathbf{Q}$ and therefore over any number field, there are infinitely many non-isomorphic IM elliptic curves.
If you want to say something general about elliptic curves with IM, consider the theorem of Deuring that an elliptic curve has IM if and only if it has potential good reduction. For this and much much more see Serre-Tate's "Good reduction of abelian varieties"
2) Easy proof that the answer is yes, at least as long as you mean "has a CM $j$-invariant" when you say CM: $y^2 = x^3 + 1$ is a CM elliptic curve with $j$-invariant zero defined over any number field. On the other hand, if we take an elliptic curve with $j$-invariant equal to 1/2, say $y^2 + xy = x^3 + 72x + 13822$, well, it doesn't have integral $j$-invariant and therefore can't have CM!
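A quick sanity check of these examples in short Weierstrass form (a sketch; `j_invariant` is a hypothetical helper for curves $y^2 = x^3 + ax + b$ over $\mathbf{Q}$, using the standard formula $j = 1728 \cdot 4a^3/(4a^3 + 27b^2)$):

```python
from fractions import Fraction

def j_invariant(a, b):
    """j-invariant of the nonsingular curve y^2 = x^3 + a*x + b over Q."""
    a, b = Fraction(a), Fraction(b)
    disc = 4 * a ** 3 + 27 * b ** 2
    assert disc != 0, "singular curve"
    return 1728 * 4 * a ** 3 / disc

assert j_invariant(0, 1) == 0        # y^2 = x^3 + 1: the CM curve above
assert j_invariant(-1, 0) == 1728    # y^2 = x^3 - x: CM by Z[i]
print(j_invariant(1, 1))             # 6912/31: not an algebraic integer, so no CM
```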
-
There is a way to "describe" the j-invariants of elliptic curves with CM over the complex numbers (but I am not sure it is really practical), by using the $\tau$ invariant.
Recall that an elliptic curve over $\mathbb C$ can be described as $\mathbb C/\Lambda$, where $\Lambda$ is a lattice in $\mathbb C$; such a lattice can always be put as $\Lambda=\mathbb Z w_1 \oplus \mathbb Z w_2$, with $\tau=w_1/w_2$ in the upper half plane (with imaginary part $>0$). Then $E$ has CM if and only if $\tau$ is algebraic and quadratic imaginary (in our case, the field $\mathbb Q(\tau)$ has degree $2$ over $\mathbb Q$).
But there is a function, Klein's j function
http://en.wikipedia.org/wiki/J-invariant
that gives the j-invariant of the corresponding elliptic curve. So the description of the j-invariants with CM could be $j(\tau)$ for $\tau$ quadratic imaginary.
Incidentally, the only algebraic $\tau$'s whose $j(\tau)$ is algebraic are the quadratic imaginary ones (this is a theorem of Theodor Schneider).
-
The $j$ invariant must lie on the intersection of the diagonal $x=y$($=j$) with the classical modular curve.
This is because $\Phi_n(j_1,j_2)=0$ if and only if the curve with $j$-invariant $j_1$ and the curve with $j$-invariant $j_2$ have an isogeny between them whose kernel is $\mathbb Z/n$, so $\Phi_n(j,j)=0$ if and only if the curve has an endomorphism of degree $n$ whose kernel is cyclic.
Every CM endomorphism can be factored into a standard, multiplication by $k$, endomorphism and an endomorphism with cyclic kernel. If the CM endomorphism corresponds to a non-rational integer in the imaginary quadratic field, the second component cannot be trivial, so it must be recognized by one of the $\Phi_n$.
Thus it is easy to show that a curve is not a CM curve for a field which contains a non-rational element of norm $\leq n$. Just check all the classical modular polynomials!
If you can compute the degree $d$ of $j$ as an algebraic integer then you know it can only have CM by a field of class number $\leq d$. This is a finite list, so you can find a value of $n$ sufficient to rule out $CM$ entirely by ruling out each field individually. I'm not sure if you can find an explicit bound for $n(d)$ or how good it is.
-
|
http://mathoverflow.net/questions/77734/devlins-constructibility-as-a-resource/81174
|
## Devlin’s “Constructibility” as a resource
It is fairly well-known among set-theorists that Keith Devlin's 1984 book "Constructibility" has flaws in its initial development of fine structure theory. (See Lee Stanley's review 1 of the text for the Journal of Symbolic Logic, for example.) I've had the book on my shelf for twenty years now, and although there is much in there that I find interesting, the fact that I know there are some errors in it means that I've been reluctant to invest a lot of time working through it. So, this brings me to my questions for the experts:
How badly do these flaws mar the rest of the book? Is the damage localized to the initial development of properties of the J-hierarchy, or is it much more widespread?
Of particular interest to me are the following questions:
1) Is Devlin's treatment of the Covering Lemma for L on solid ground?
2) What about his treatment of morasses?
I know that there are other sources for this material, but I've always appreciated Devlin's writing style.
-
Having studied Magidor's covering lemma whose proof can be slightly modified to prove the Jensen covering lemma, I can suggest this as an alternative (which is interesting on its own). (Magidor M. Representing Sets of Ordinals as Countable Unions of Sets in the Core Model. Trans. of the Amer. Math. Soc., Vol. 317, No. 1 (Jan., 1990), pp. 91-126) - link: jstor.org/stable/2001455 – Asaf Karagila Oct 10 2011 at 20:19
Thanks Asaf. Devlin's book is allegedly giving Magidor's proof of the Covering Lemma (which is also the proof outlined in Jech's book) so in theory he shouldn't be using fine structure. That's why I was hoping his treatment of it was solid. I'd like to be able to tell a grad student who wants to see a proof of the covering lemma to "Just go look at V.5 in Devlin's book". (Of course, Magidor is an excellent writer as well, though) – Todd Eisworth Oct 10 2011 at 20:32
I was told by a rather reliable source that Jech skips the hard work of the proof and just gives an outline as though it is simple. Magidor is an avid writer, and he teaches even better. :-) – Asaf Karagila Oct 10 2011 at 21:05
You may also find some slightly relevant help in the comments to this question: mathoverflow.net/questions/38573 – Asaf Karagila Oct 11 2011 at 15:22
My summary of the review: consider BS to be RUD (sorry for the bad joke.) – Trevor Wilson Jul 26 at 16:59
## 1 Answer
Mathias has a paper where he corrects the flaws that occur in Devlin's theory BS (= Basic Set Theory). The theory has to be only slightly strengthened to be correct. (It is more than sufficient to add an axiom asserting that for any set and any $n \in \omega$ there is a set of all its $n$-sized subsets.) It is really only in dealing with syntax and showing that certain straightforward concepts are $\Delta_1$ that BS comes unstuck.
I think the book can be safely read beyond a certain point. It is true that Devlin has not correctly proved that the satisfaction relation for $\Delta_0$ formulae is $\Delta_1$, but one can just take the attitude that the result is correct (as Jensen showed), it is just that that particular development in BS failed. BS needs another axiom and all would be well.
Thus, the constructions of the $J$-hierarchy and the fine structural concepts of projecta and mastercodes are all fine, as are the constructions of trees, $\Box$, morasses etc; the Covering Lemma can also be safely read, since by then one is past the point where these delicate matters are being considered.
Mathias: Weak Systems of Gandy, Jensen and Devlin, Centre de Recerca Matemàtica, Barcelona, 2003--2004, Birkhäuser Verlag, 2006, Eds: Bagaria, Joan and Todorcevic, Stevo, Trends in Mathematics series.
-
Thanks Philip! This is perfect! – Todd Eisworth Nov 17 2011 at 14:32
|
http://mathhelpforum.com/advanced-algebra/209859-polynomial-rigs-_-dummit-foote-chapter-9-a.html
|
1. ## Polynomial Rings _ Dummit and Foote Chapter 9
I am reading Dummit and Foote Chapter 9 on Polynomial Rings and am trying to get a good understanding of Proposition 2 (see Attachment - page 296) which states:
==================================================
Let I be an ideal of the ring R and let (I) = I[x] denote the ideal of R[x] generated by I (the set of polynomials with coefficients in I). Then
R[x]/I[x] $\cong$ (R/I)[x]
In particular, if I is a prime ideal of R then (I) is a prime ideal of R[x]
==================================================
I decided to generate a simple example using R = $\mathbb{Z}$ and I = $2\mathbb{Z}$
Then $\mathbb{Z}$ = { ..., -2, -1, 0, 1, 2, 3, ... } and $2\mathbb{Z}$ = { ..., -4, -2, 0, 2, 4, 6 .... }
also $\mathbb{Z}/2\mathbb{Z}$ = { $\overline{0}, \overline{1}$ }
Then it appears to me that R[x] = $\mathbb{Z}[x]$ is the set of all polynomials with integer coefficients and I[x] is the set of polynomials with even integers as coefficients
Now how do I express R[x], I[x] and $\mathbb{Z}/2\mathbb{Z}$ formally and algebraically?? Is my text above OK?
It seems that $\mathbb{Z}[x]/2\mathbb{Z}[x]$ would have two elements - one which was all the polynomials with even coefficients and one which contains all the polynomials with odd integer coefficients - but again - how do I express this in formal algebraic symbolism
Further, returning to Proposition 2 above
R/I = $\mathbb{Z}/2\mathbb{Z}$ = { $\overline{0}, \overline{1}$ } and so (R/I)[x] appears to have two elements - one the set of polynomials with coefficients in $\overline{0}$ and one with coefficients in $\overline{1}$ which seems to be correct.
Is my example and reasoning correct? Would appreciate an assurance from someone that all is OK.
How would my example be expressed in more formal algebraic symbolism?
Note: I was also somewhat thrown by D&F's use of the symbolism (I) for I[x]. Previously (see attachment on Properties of Ideals, D&F ch 7 page 251) the symbol (I) was used to denote the smallest ideal of R containing I, which D&F point out is the set of all finite sums of elements of the form ra with r $\in$ R and a $\in$ I, i.e. sums such as $r_1 a_1 + r_2 a_2 + ... + r_n a_n$. Isn't the use of (I) for I[x] somewhat inconsistent with the use of the symbolism just described?
Peter
Attached Files
• Dummit and Foote - Ch 9 - Polynomial Rings - pages 295-296.pdf (112.6 KB, 2 views)
• Dummit and Foote - Chapter 7 - Introduction to Rings - page 251 - Properties of Ideals.pdf (71.8 KB, 1 views)
2. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
First of all, yes your examples are correct.
$R[x] = \{a_nx^n+a_{n-1}x^{n-1}+...+a_0 | a_i \in R, n \in \mathbb{N} \}$
Same for I[x]
As for $\frac{\mathbb{Z}}{2\mathbb{Z}} = \{0 + 2\mathbb{Z}, 1 + 2\mathbb{Z} \}$
one which was all the polynomials with even co-efficients and one which contains all the polynomials with odd integer co-efficients - but again - how do I express this in formal algebraic symbolism
Since most of this is merely representation, i think if you are clear, you can write it anyway.
Like,
$(\frac{\mathbb{Z}}{2\mathbb{Z}})[X] = \{ \{a_0+a_1x+...+a_nx^n | a_i \in 0 + 2\mathbb{Z} \}, \{a_0+a_1x+...+a_nx^n | a_i \in 1 + 2\mathbb{Z} \} \}$
OR
$(\frac{\mathbb{Z}}{2\mathbb{Z}})[X] = \{ \{a_0+a_1x+...+a_nx^n | a_i = 2k \text{ for some } k \in \mathbb{N}\}, \{a_0+a_1x+...+a_nx^n | a_i = 2m + 1 \text{ for some } m \in \mathbb{N} \} \}$
3. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
let's look at Z[x]/(2Z)[x] and (Z/2Z)[x] separately.
the first consists of cosets of the form:
p(x) + (2Z)[x]
for example, we have the cosets 4x^3+1 + (2Z)[x] and 2x^2+4x+3 + (2Z)[x]. note that these are actually the SAME coset, since:
4x^3+1 - (2x^2+4x+3) = 4x^3-2x^2-4x-2, which is in (2Z)[x].
the second consists of polynomials in (Z/2Z)[x], which have coefficients in the ring (Z/2Z) = {[0],[1]} (and a common abuse of notation is just to call these 0 and 1). for example, x+1, x^2+1, and x^3+x+1 are all elements of (Z/2Z)[x].
if we want to show that these two rings are isomorphic, we can (and probably SHOULD) display an isomorphism:
let's try this one:
we know we have a (ring) homomorphism k --> k (mod 2) of Z onto Z/(2Z). let's call this φ.
so now, let's define a map ψ:Z[x] --> (Z/2Z)[x] by:
ψ(a_0 + a_1x + ... + a_nx^n) = φ(a_0) + φ(a_1)x + ... + φ(a_n)x^n.
i leave it to you to verify ψ is actually a homomorphism (it's easy). essentially, we're just replacing a_j with a_j (mod 2), so for example, in our polynomials above:
ψ(4x^3+1) = φ(4)x^3 + φ(1) = 0x^3 + 1 = 1,
ψ(2x^2+4x+3) = φ(2)x^2 + φ(4)x + φ(3) = 0x^2 + 0x + 1 = 1.
if we can show ker(ψ) = (2Z)[x], we're done (by the first isomorphism theorem for rings)!
but if ψ(p(x)) = 0, we must have φ(a_j) = 0 for every coefficient a_j of p, thus p is in (2Z)[x], so ker(ψ) is contained in (2Z)[x].
on the other hand if p(x) is in (2Z)[x] = (ker(φ))[x], then ψ(p(x)) = 0 + 0x + ... + 0x^n = 0, so (2Z)[x] is contained in ker(ψ).
(the same proof works, by the way, for R and the ideal I.)
thus Z[x]/(2Z)[x] is isomorphic to (Z/2Z)[x].
**************************
(Z/2Z)[x] DOES NOT HAVE TWO ELEMENTS!!!!! it has infinitely many. all of them have "1's" as coefficients, but x and x^2 are NOT the same polynomial in (Z/2Z)[x]. x+1 and x are also distinct polynomials. in fact, for any degree n, we have 2^n distinct polynomials of degree n. for example, we have, of degree 2:
x^2
x^2+1
x^2+x
x^2+x+1
**************************
a word about the notation (I). note that this is the smallest ideal of R[x] containing I. since (I) is an ideal of R[x], it must contain all monomials of the form ax^j, with a in I. thus it must contain all sums of these (since ideals are closed under addition), and thus I[x]. hence I[x] is contained in (I). on the other hand, since I[x] is an ideal of R[x] containing I (as the polynomials a*1, for a in I), we must have (I) in I[x]. hence (I) = I[x].
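the reduction map ψ above is easy to replay in code (a sketch, not from the thread; `reduce_mod` is a hypothetical helper, with coefficient lists written lowest degree first):

```python
def reduce_mod(poly, m):
    """Apply phi coefficient-wise: reduce each coefficient mod m, then drop
    the leading zero terms that result."""
    out = [c % m for c in poly]
    while out and out[-1] == 0:
        out.pop()
    return out

p = [1, 0, 0, 4]   # 4x^3 + 1
q = [3, 4, 2]      # 2x^2 + 4x + 3
# psi sends both to the constant polynomial 1, as computed above
assert reduce_mod(p, 2) == reduce_mod(q, 2) == [1]

# equivalently, their difference has all even coefficients, i.e. lies in (2Z)[x]
n = max(len(p), len(q))
diff = [(p + [0] * n)[i] - (q + [0] * n)[i] for i in range(n)]
assert all(c % 2 == 0 for c in diff)
```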
4. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
Originally Posted by Deveno
(Z/2Z)[x] DOES NOT HAVE TWO ELEMENTS!!!!! it has infinitely many. [...]
Yes sir, probably should have said 2 cosets.
5. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
Originally Posted by jakncoke
Yes sir, probably should have said 2 cosets.
no, not even two cosets. infinitely many.
6. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
Originally Posted by Deveno
no, not even two cosets. infinitely many.
Yes sir, right again, i'm gonna go to bed.
7. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
A most helpful post - still working through the ideas in this post!
Thank you!!!
Peter
8. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
In the first part of your post above, you invite us to consider Z[x]/(2Z)[x].
This quotient ring of polynomials, you point out, consists of cosets of the form
p(x) + (2Z)[x]
and you give as an example the cosets $4x^3 + 1 + (2Z)[x]$ and $2x^2 + 4x + 3 + (2Z)[x]$
and you point out that these are actually the same coset since $(4x^3 + 1) - (2x^2 + 4x + 3) \in (2Z)[x]$
Thus you seem to be using the following rule:
Two polynomials r(x) and s(x) belong to the same coset if $r(x) - s(x) \in (2Z)[x]$
Is this correct? Presumably you can generalise this from (2Z)[x] to an arbitrary ideal I[x] of R[x]
==================================================
To try to get a better understanding of your post and the above in general I turned to Hungerford's Abstract Algebra Ch 5 Congruence in F[x] and Congruence-Class Arithmetic - see attachment for pages 119-121.
I tried to fit or reconcile your definitions and process above with Hungerford's definitions at the bottom of p119 and page 120 plus his two illustrative examples of congruence modulo $(x^2 + 1)$ in R[x] and congruence modulo $(x^2 + x + 1)$ in $Z_2[x]$, but was unable to formally do so.
Can you please show how your analysis fits with the approach of Hungerford. I know they deal with the same structures so they should be in harmony but I cannot formally reconcile the two approaches. Can you show how this works?
By the way, Hungerford deals with F[x] where F is a field. Does it work if F is simply a ring?
Peter
Attached Files
• Hungerford - Abstract Algebra - Ch 5 - pages 119-121.pdf (243.1 KB, 1 views)
9. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
in ANY ring (polynomial or not) R, if I is an ideal of R, we can form the factor ring R/I.
if R is a commutative ring, then we have two ways to "factor by I":
R[x]/I[x] (form the polynomial rings _[x] first, then factor the "big" polynomial ring by the "smaller" one).
(R/I)[x] (factor R by I first, THEN form a polynomial ring).
it turns out that this is "two paths to the same destination".
in general, for a factor ring R/I, we have a+I = b+I iff a-b in I (this is the abelian group version of aH = bH iff ab^(-1) in H). it turns out that for a ring, the additive structure "controls" how we form the cosets (ideals/kernels are the pre-images of the additive identity of the image).
the same rule holds for the factor ring F[x]/I, where I is an ideal of F[x], for a field F: f(x) + I = g(x) + I iff f(x) - g(x) is in I. however, F[x] where F is a field, has a property general polynomial rings R[x] do not: every ideal is principal (this is the ideal-theoretic analogue of "cyclic group": a principal ideal is generated by a single element, so every element in it is some multiple of the generator).
this is usually indicated in F[x] by <p(x)> or (p(x)). so if f(x) - g(x) is in (p(x)), this means that f(x) - g(x) = k(x)p(x), for some polynomial k(x), or more pithily: p(x) DIVIDES f(x) - g(x).
to illustrate what goes "wrong" in a ring R[x], consider the ideal <2,x> in Z[x] (the ring of polynomials with integer coefficients). what do the elements of <2,x> look like?
they are the sum of a polynomial with all even coefficients and a polynomial with no constant term. a shorter description is: all polynomials with even constant term. so
2+x
3x + x^2 <-- 0 constant term (and thus a factor of x).
4 + 2x + x^3 are all examples
now there is NO polynomial p(x) in Z[x] such that <2,x> = <p(x)>. to see this, note that such a polynomial would have to divide 2, and so must be 1,-1,2, or -2 (the only integer divisors of 2).
but 2 does NOT divide x (there is no polynomial k(x) in Z[x] with 2k(x) = x), and neither does -2. but <-1> = <1> = Z[x], and <2,x> is NOT equal to the entire ring (for example 1+x is NOT in <2,x> since it does not have even constant term).
a "plain old commutative ring" doesn't, in general, "have enough divisors" for the gcd of two polynomials to be a "maximal" generator. so the gcd "generates too much". when the ring is a field, this problem goes away: the "extra structure" we get with a field (namely, "more units") gives "more structure" to its corresponding polynomial ring.
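The membership criterion for <2,x> described above (even constant term) can be sanity-checked with a tiny Python helper (my own encoding, purely illustrative):

```python
def in_ideal_2_x(p):
    """p: integer polynomial as coefficient list [c0, c1, ...].
    Membership in the ideal <2, x> of Z[x]: p = 2*a(x) + x*b(x) for some
    a, b, which holds exactly when the constant term c0 is even."""
    return p[0] % 2 == 0

print(in_ideal_2_x([2, 1]))        # True:  2 + x
print(in_ideal_2_x([0, 3, 1]))     # True:  3x + x^2
print(in_ideal_2_x([4, 2, 0, 1]))  # True:  4 + 2x + x^3
print(in_ideal_2_x([1, 1]))        # False: 1 + x is not in <2, x>
```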
the example hungerford gives of R[x]/<x^2+1> is an extremely important one.
here, our ideal is <x^2+1>, consisting of all real polynomials that contain x^2+1 as a factor. the elements of R[x]/<x^2+1> are cosets of <x^2+1>. i will write this ideal as I, to underscore the similarity with R/I for a general ring R (R[x] is a ring, after all).
as with ANY factor ring, we declare f(x) CONGRUENT to g(x) if they lie in the same coset of I = <x^2+1> (this is by direct analogy with the factor ring Z/nZ = Z/(n) where we declare a = b (mod n) if a - b is in (n) = nZ, that is, if a and b differ by a multiple of n. the congruence class [a] = a (mod n) is actually the coset a + nZ = a + (n)).
as is the case with groups, a ring congruence creates a partition (or equivalence) on the ring. in this case, we seek to find "simple representatives" of the cosets (equivalence classes) f(x) + I.
it turns out that we can represent ANY coset as the coset of a LINEAR polynomial ax + b + I. rather than go about PROVING this, i will just show how it's done for a polynomial of degree 5.
say we have x^5 + 3x + 1 in R[x]. then x^5 + 3x + 1 = x^3(x^2 + 1) - x^3 + 3x + 1 (take a minute, and work through the algebra on this).
thus x^5 + 3x + 1 - (-x^3 + 3x + 1) = x^3(x^2 + 1), and the RHS is in I = <x^2 + 1>, so
x^5 + 3x + 1 = -x^3 + 3x + 1 (mod x^2+1), that is: x^5 + 3x + 1 + I = -x^3 + 3x + 1 + I.
so now we've reduced our "representative" down to a polynomial of degree 3. we can keep going:
-x^3 + 3x + 1 = -x(x^2 + 1) + x + 3x + 1 = -x(x^2 + 1) + 4x + 1 so:
-x^3 + 3x + 1 - (4x + 1) is in I, so -x^3 + 3x + 1 = 4x + 1 (mod I).
if you are paying attention, you'll see i am just doing "polynomial long division" one step at a time. basically i am "dividing" x^5 + 3x + 1 by x^2 + 1 until i get a remainder of degree less than deg(x^2 + 1) = 2.
so x^5 + 3x + 1 + I = 4x + 1 + I, and 4x + 1 is the linear polynomial i promised i would produce.
now here is where it gets fun:
what do we get if we multiply:
(a+bx + I)(c+dx + I), after we "reduce mod x^2 + 1"? (you'll see later why i wrote the constant term first) let's do it:
(a+bx + I)(c+dx + I) = ac + (ad+bc)x + bdx^2 + I. this is a quadratic, so we can reduce by one more degree. what we're going to do is add and subtract bd, so that we have a factor bd(x^2 + 1).
ac + (ad+bc)x + bdx^2 + I = ac - bd + (ad+bc)x + bd(x^2 + 1) + I. but since bd(x^2 + 1) is IN I, bd(x^2 + 1) + I = I, so:
= (ac-bd) + (ad+bc)x + I.
compare this with the product of complex numbers:
(a+bi)(c+di) = ac-bd + (ad+bc)i...see any similarities?
in fact, and this is MOST important: it turns out that the coset x+I is a ROOT of the polynomial x^2+1 in R[x]/I:
(x+I)^2 + (1+I) = (x^2+I) + (1+I) = x^2+1 + I = I = 0+I (recall that "I" is the 0-element (additive identity) of the ring R[x]/I).
in other words, a+bx + I = a(1 + I) + b(x + I) acts "just like" the complex number a+bi with "1+I" playing the role of the real number 1, and "x+I" playing the role of the imaginary number i (which is a root of x^2+1).
so one might suspect that R[x]/I is, in fact, a field isomorphic to the complex numbers, and it turns out this is true.
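The multiplication rule for these cosets can be checked directly against complex multiplication; here is an illustrative Python sketch (my own, representing the coset a + bx + I as the pair (a, b)):

```python
def mult_mod_x2p1(p, q):
    """Multiply cosets (a + bx + I)(c + dx + I) in R[x]/<x^2 + 1>,
    representing each coset by the pair (constant, linear) = (a, b)
    and reducing with x^2 = -1 (mod I)."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

# x + I plays the role of i:
print(mult_mod_x2p1((0, 1), (0, 1)))   # (-1, 0): (x + I)^2 = -1 + I

# and the rule agrees with complex multiplication:
z = complex(2, 3) * complex(4, -1)
print(mult_mod_x2p1((2, 3), (4, -1)))  # (11, 10)
print((z.real, z.imag))                # (11.0, 10.0)
```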
10. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
Thank you for that extensive post - I will now happily work through it!
Peter
11. ## Re: Polynomial Rings _ Dummit and Foote Chapter 9
You write:
"now there is NO polynomial p(x) in Z[x] such that <2,x> = <p(x)>. to see this, note that such a polynomial would have to divide 2, and so must be 1,-1,2, or -2 (the only integer divisors of 2)."
I am struggling to see why, in the above, <2,x> = <p(x)> implies that such a polynomial p(x) in Z[x] would have to divide 2.
Can you formally show why this is the case?
Peter
|
http://physics.stackexchange.com/tags/supersymmetry/hot
|
# Tag Info
## Hot answers tagged supersymmetry
6
### Supersymmetry and non-compact $R$-symmetry group?
Noncompact internal symmetries – and R-symmetry is an internal symmetry (it doesn't transform positions in the spacetime) – are unacceptable in a physical theory because they would lead to negative-norm states. Consider the $i$-th superpartner of a bosonic particle state, $|i\rangle$, where $i=1,2,\dots,N$. The inner product $\langle i|j\rangle$ of such ...
4
### Scalar top quark (stop) pair production
There are two aspects. One is sort of trivial and comprehensible; the other is a bit technical. The trivial reason is that $\tilde t \bar{\tilde t}$ has two "accents" on top of each other and the symbol therefore occupies too much vertical space which is undesirable because we may get overlapping characters and/or non-uniform spacing between lines. The ...
4
### Why should SUSY be expected naturally?
There is a standard paradigm for thinking about the new physics that lies beyond the standard model, at higher and higher energies: weak-scale supersymmetry, grand unification, string theory. The purpose of weak-scale supersymmetry is to stabilize weak-scale physics (i.e. everything we know about) against quantum corrections. Grand unification can explain ...
3
### Spectra of the Type II String theories
The NS-NS sector is the same in type IIA and IIB, but the R-NS and NS-R sectors differ. The type IIA theory is non-chiral, so the R-NS and NS-R fields transform in $\mathbf{8}_s \otimes \mathbf{8}_v$ and $\mathbf{8}_v \otimes \mathbf{8}_s'$, where $\mathbf{8}_s$ and $\mathbf{8}_s'$ are the two eight-dimensional spinor representations of $SO(8)$. Type IIB, on ...
3
### How can string theory work without supersymmetry?
If you spend some time looking in detail at the arguments that string theory requires supersymmetry, you'll find that they are not watertight. (How could they be, since we still can't say/don't know precisely what string theory is?) Basically, some string theorists argue that the usual classification depends too strongly on choosing nearly trivial ...
2
### Question on the Hagedorn tower in Type I string theory
Hagedorn spectrum just means that the density of states varies exponentially with the energy/mass, with $m^2$ (asymptotically) given by the "level" ($N$) of the state (up to a square root). The number of states at level $N$ corresponds to the possible partitions of $N$ into different oscillator modes. That means that the number of states at level $N$ will increase ...
1
### How can string theory work without supersymmetry?
p-adic strings or the adelic approach created by B.Dragovich don't require SUSY at all. At least, not the usual SUSY symmetry... Non-critical string theory, the so-called Liouville theory, is based on the hypothesis of non-imposing the condition that critical strings with fermions (superstrings) impose on the space-time dimension due to internal ...
1
### Soft Mass and Physical Mass in Softly-broken SUSY
The physical masses should be independent of the renormalization scale. We, however, only calculate a finite number of loop corrections, resulting in a scale dependence in the physical mass. This scale dependence can be used to estimate the error in the mass calculation from the missing higher orders. In principle, one could calculate the sparticle mass ...
1
### Mathematical concept of supersymmetry
I would really recommend a study in QFT before going on to study SUSY. QFT has many quirks that make supersymmetry a very interesting expansion of the regular framework. You'd miss out on all that as you just had to believe the facts presented w/o following the thought that lead to the results in detail. On the Mathematical level you will need Grassmann ...
1
### Some questions on a version of the O'Raifeartaigh model
There are lots of questions here! I think I can answer at least some... First of all, you are aware that the fields in $W$ and $K$ are superfields? These contain the entire supermultiplet, so they must be complex in general. This is a short entry but it links to others: http://en.wikipedia.org/wiki/Superfield As mentioned by Jose in his comment, the ...
1
### SUSY's Critical role in String Theory
"Falsifying supersymmetry" is a phrase that has to be properly qualified. Our ability to falsify with experiment is limited. We can rule out the existence of supersymmetry only at accessible energy/distance/density scales. LHC, for example, is not able to resolve physics at distance scales much smaller than \$\frac{\hbar c}{7\mbox{ TeV}} \simeq ...
|
http://math.stackexchange.com/questions/310009/graph-decomposition-and-graph-factor
|
Graph Decomposition and graph factor
If a graph G is H-decomposable, does it imply that G has an H-factor?
-
1 Answer
No. An $H$-factor is a set of vertex-disjoint copies of $H$ that partition the vertex set of a graph, while an $H$-decomposition is a set of edge-disjoint copies of $H$ that partition the edge set of a graph.
For example, let $H = K_3$ and let $G$ be the $5$-vertex graph consisting of two triangles with a point in common. Then $G$ clearly has an $H$-decomposition, but, since $3 \nmid 5$, it cannot have an $H$-factor.
-
Say I have a graph G with 3m vertices, and that G is $C_3$-decomposable. Can I say now that G has a $C_3$-factor? – kim_kibun Feb 21 at 14:44
@kim_kibun Again, not necessarily. Let $G$ be the $6$-vertex graph obtained by taking a triangle and adding three vertices, each adjacent to two of the vertices of the triangle. (More formally, let the vertices of the triangle be $u$, $v$, and $w$, and let the other three vertices be $a$, $b$, and $c$. Let $a$ be adjacent to $u$ and $v$, $b$ to $v$ and $w$, and $c$ to $w$ and $u$.) Then $G$ has a $K_3$-decomposition but no $K_3$-factor. – Andrew Uzzell Feb 21 at 18:13
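For what it's worth, the 6-vertex example in the comment above is small enough to verify by brute force; this Python sketch (my own encoding of that graph) confirms that a $K_3$-decomposition exists but a $K_3$-factor does not:

```python
from itertools import combinations

# Triangle u, v, w plus a~{u,v}, b~{v,w}, c~{w,u}, as in the comment.
V = ["u", "v", "w", "a", "b", "c"]
E = {frozenset(e) for e in
     [("u", "v"), ("v", "w"), ("w", "u"),
      ("a", "u"), ("a", "v"), ("b", "v"),
      ("b", "w"), ("c", "w"), ("c", "u")]}

# all triangles of the graph
triangles = [t for t in combinations(V, 3)
             if all(frozenset(p) in E for p in combinations(t, 2))]

def has_K3_decomposition():
    """Edge-disjoint triangles partitioning the edge set."""
    for chosen in combinations(triangles, len(E) // 3):
        edges = [frozenset(p) for t in chosen for p in combinations(t, 2)]
        if len(set(edges)) == len(edges) and set(edges) == E:
            return True
    return False

def has_K3_factor():
    """Vertex-disjoint triangles partitioning the vertex set."""
    for chosen in combinations(triangles, len(V) // 3):
        verts = [v for t in chosen for v in t]
        if len(set(verts)) == len(V):
            return True
    return False

print(has_K3_decomposition())  # True
print(has_K3_factor())         # False
```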
|
http://mathoverflow.net/questions/26599/complexity-of-counting-homomorphisms/26742
|
## complexity of counting homomorphisms
This is a question I have thought about and asked a number of people, but have never got an answer beyond "It should be true that..."
Given a finitely generated group $G$ (e.g. a link group $G_L:=\pi_1(S^3-L)$ for a link $L$) and a finite group $H$ we want to count homomorphisms from $G$ to $H$. For link groups as above, this is an invariant of $L$.
My question: (for which $H$) is there a polynomial-time algorithm (in the number of generators and relations for $G$) for computing $N(G,H):=|Hom(G,H)|$ (particularly for $G_L$)?
Some things I know: 1) If $L$ is a knot and $H$ is nilpotent then $N(G_L,H)$ is constant (M. Eisermann). 2) D. Matei and A. I. Suciu have an algorithm for solvable $H$, but the complexity is not clear. 3) The abelianization of $G_L$ is just $Z^c$, $c$ the number of components, so for $H$ abelian it is easy.
A wild conjecture is that it should always be "FPRASable" i.e. there exists a fully polynomial randomized approximation scheme for the computation.
-
I guess you want polynomial time with respect to the logarithm of $\sharp(H)$? (The answer is trivially yes otherwise.) – Roland Bacher May 31 2010 at 16:20
Sorry, the answer is yes for finitely presented groups. – Roland Bacher May 31 2010 at 16:21
edited to clarify. – Eric Rowell May 31 2010 at 16:44
1
You can embellish this by using the peripheral structure. That is, a knot group comes with a pair of commuting elements (up to conjugacy). Then choose a pair of commuting elements in $G$ and count homomorphisms sending the first pair to the second pair. My naive guess is that your questions will have similar answers in this context? – Bruce Westbury Jun 2 2010 at 6:59
I like this idea Bruce. I have wondered how one might randomize the obvious (exponential) algorithm that tests all $n$-tuples from $H$ against the defining relations of $G_L$. Maybe something like this would work. – Eric Rowell Jun 2 2010 at 12:55
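As a concrete illustration of the obvious exponential algorithm mentioned in the comment above, here is a Python sketch (my own) that brute-forces $N(G_L, S_3)$ for the trefoil, using the standard presentation $\langle x, y \mid xyx = yxy \rangle$ of its knot group:

```python
from itertools import permutations, product

def compose(p, q):
    """Permutation product (apply q first, then p); permutations are
    tuples mapping i to p[i]."""
    return tuple(p[q[i]] for i in range(len(p)))

# S3 as the permutations of {0, 1, 2}
S3 = list(permutations(range(3)))

# A homomorphism from <x, y | xyx = yxy> to S3 is a choice of images
# for x and y satisfying the relation, so the naive algorithm simply
# tests every pair of elements.
count = sum(
    1
    for x, y in product(S3, repeat=2)
    if compose(compose(x, y), x) == compose(compose(y, x), y)
)

print(count)  # 12
```

(The 12 solutions split as the 6 diagonal pairs x = y plus the 6 ordered pairs of distinct transpositions.)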
## 1 Answer
For $G$ a knot group, and for $H$ a dihedral group, there should be a simple algorithm for counting the number of homomorphisms. The meridians of $G$ normally generate, and are all conjugate, so they must be sent to conjugate elements in $H$. If they are sent to the cyclic subgroup of index 2, then the image is cyclic, and this is easy to count.
If a meridian is sent to an involution, then an index 2 subgroup of $G$ is sent to a cyclic group. This amounts to computing the homology of the 2-fold branched cover of the knot, together with the action of the involution on this homology. This is certainly polynomial-time computable, and I'm pretty sure one can determine its dihedral quotients easily. In any case, at least this reduces it to the problem of finding dihedral quotients of abelian-by-$\mathbb{Z}/2$ groups.
-
Thanks Ian! This makes sense algebraically: the braid group image associated with the Drinfeld center of a (generalized) Dihedral group is an integer specialization of the Burau representation, reduced modulo some $m$. The link invariant associated with this (modular) Hopf algebra is some version of counting homomorphisms. So maybe the statement you make can be generalized to semidirect products of cyclic groups? I think I read somewhere that the Alexander polynomial can be used to compute the homology of Seifert surfaces... – Eric Rowell Jun 1 2010 at 18:09
|
http://physics.stackexchange.com/questions/tagged/bose-einstein-condensate
|
# Tagged Questions
The bose-einstein-condensate tag has no wiki summary.
0answers
66 views
### Change of basis in non-linear Schrodinger equation
At the mean-field level, the dynamics of a polariton condensate can be described by a type of nonlinear Schrodinger equation (Gross-Pitaevskii-type), for a classical (complex-number) wavefunction ...
1answer
45 views
### Bose-Einstein condensate for general interacting systems
There is Bose-Einstein condensate (BEC) for non-interacting boson systems. Can we prove the existence of BEC for interacting systems?
1answer
120 views
### First order coherence function in terms of momentum distribution function
Can someone show me how the first order coherence function $G^1(r,r')\equiv \left \langle \hat{\Psi}(r)\hat{\Psi}(r') \right \rangle$ for a system of bosons is related to the momentum distribution ...
0answers
92 views
### What is the condition for getting Bose-Einstein condensation? [closed]
Consider an ideal Bose gas in three dimension with energy-momentum relation E proportional to $p^s$ with $s>0$. Find the range of $s$ for which this system may undergo a Bose-Einstein ...
2answers
190 views
### What prevents bosons from occupying the same location?
The Pauli exclusion principle states that no two fermions can share identical quantum states. Bosons, on the other hand, face no such prohibition. This allows multiple bosons to essentially occupy ...
1answer
47 views
### Hamiltonians, density of state, BECs
When working with Bose-Einstein condensates trapped in potentials, how can one tell what the density of state of a system of identical bosons given the Hamiltonian, $H$? (I have been told that it is ...
0answers
79 views
### Deriving the “total” Bose Einstein density of states, including the condensate
Is is possible to derive the Bose-Einstein density of states containing the delta function representing the BE condensate?
0answers
42 views
### Why People talk so much about Feshbach resonance while dealing with Bose-Einstein Condensate (BEC)?
Why People talk so much about Feshbach resonance while dealing with Bose-Einstein Condensate (BEC)? Why not tune the system near the resonance and measure the effect on BEC formation?
1answer
144 views
### What does the wavefunction of atom look like at low temperature?
I am reading an introduction material on Bose-Einstein condensation (BEC) at low temperature and it stated that when the temperature approaches zero kelvin, almost all atoms are degenerated into the ...
1answer
74 views
### Why is the BCS trial function valid across the BEC-BCS crossover?
In one of the two main theoretical approaches used in describing ultracold Fermi gases and the BEC-BCS crossover, the so-called BCS-Leggett approach, the starting point is the BCS trial wavefunction: ...
3answers
182 views
### Why do Photons want to be together?
So I've heard that when a photon flies by a atom excited enough to release a photon there's a good chance it will. Because Photons want to be together and have the same direction etc? Is this true? ...
0answers
64 views
### How many ways are there to distribute M excitations of N identical particles among K=3 quantum harmonic oscillators?
I'm trying to numerically calculate a partition function of N non-interacting but identical particles in a 3D SHO. To do this, I'd like to know the degeneracy of $M$ excitations, $N$ indistinguishable ...
3answers
141 views
### Bose-Einstein condensation in systems with a degenerate ground state
I understand that when a system enters the BEC phase a sizable fraction of the total number of particles enters the ground state, until at some point almost all of your particles are in the ground ...
1answer
193 views
### Gross-Pitaevskii equation in Bose-Einstein condensates
I was hoping someone might be able to give a approachable explanation of the Gross-Pitaevskii equation. All the sources I've been able to find seem to concentrate on the derivation, and I don't have ...
2answers
353 views
### Can bosons that are composed of several fermions occupy the same state?
It is generally assumed that there is no limit on how many bosons are allowed to occupy the same quantum mechanical state. However, almost every boson encountered in every-day physics is not a ...
0answers
61 views
### BEC for holography?
I am spending some time reading about Bose-Einstein condensation. I want to know if it is possible to use atom lasers to realize the kind of holography traditionally associated with nano-fabrication. ...
0answers
86 views
### Fock picture of bosonification in condensates
I want to understand how bosonification in a condensate must be interpreted in the Fock states picture Say i have uncoupled fermions in a set of states $E_1$, $E_2$ ... over the vacuum $E_0$. They ...
3answers
242 views
### Looking for a complete review of the BEC-BCS crossover
I'm looking for comprehensive review of the BEC-BCS crossover, both from a theoretical point of view, and from a experimental one. Even something at textbook level, but exhaustive, would be OK, but I ...
2answers
301 views
### Why water is not superfluid?
My question is in the title. I do not really understand why water is not a superfluid. Maybe I make a mistake but the fact that water is not superfluid comes from the fact that the elementary ...
0answers
62 views
### positronium BEC stability
After reading this article regarding Positronium BEC formation (for lasing purposes), there is a mention in there regarding Ps "up" atoms not annihilating with "down" atoms, the article is pretty ...
1answer
353 views
### Are all bose-einstein condensates superfluid?
I feel like the answer should be "no" since all superfluids are not strictly BEC since they can undergo a Kosterlitz–Thouless transition in 2D, for example. I believe the ideal gas isn't superfluid, ...
2answers
286 views
### Has Bose-Einstein theory been considered for dark matter?
Has Bose-Einstein theory been considered for dark matter? The theory would explain why no measurable radiation is emitted due to zero temperature--its lack of interaction with other matter and its ...
3answers
213 views
### What happens to those electrons of BEC cold atoms?
BEC cold atoms occupy the same ground state. But what about the electrons or other fermions of the BEC atoms? Are they in the same state? Do electrons of one atom interact with those of another?
1answer
163 views
### True Ground State Population of Ideal Bose-Einstein Condensate at Critical Temperature
I'm supposed to demonstrate that although we make the assumption in an ideal BEC that the ground state population follows $N_0 = N\left[1-\left(\frac{T}{T_c}\right)^{3/2}\right]$ in reality the true ...
2answers
396 views
### Can a system entirely of photons be a Bose-Einsten condensate?
Background: In Bose-Einstein stats the quantum concentration $N_q$ (particles per volume) is proportional to the total mass M of the system: $$N_q = (M k T/2 \pi \hbar^2)^{3/2}$$ where k ...
1answer
342 views
### Possibility of Bose-Einstein condensation in low dimensions
I remember having a problem (for practice preliminary exams at UC Berkeley) to prove that Bose-Einstein condensation(BEC) is not possible in two dimensions (as opposed to three dimensions): For ...
0answers
114 views
### Matter-wave interference from free falling cold atoms
and another exam question, this is about current research: Interference of matter waves has been studied using ultra-cold atoms. The phase of a matter wave for free-falling cold-atoms at time $t$ ...
0answers
451 views
### How do I derive the critical temperature for bose condensation in two dimensions?
In class we derived the 3D case, but there's a step I don't understand: N = g \cdot {V \over (2 \pi \hbar)^3} \cdot \int\limits_{0}^{\infty}{1 \over{e^{\left( E_p \over{K_B T}\right)}-1}} d^3 p = ...
1answer
323 views
### Expansion of multi-particle state vector as a sum of n-entangled states
Physically, quantum entanglement is ranged from full long-range entanglement (Bose-Einstein condensate), described by a basis of states that look like this: |\Psi\rangle = |\phi_{i_{0} i_{1} ... ...
1answer
557 views
### Bose-Einstein Condensate with T>0 in Theory and Reality
I am interested to understand how positive entropy Bose Einstein condensation for cold atoms (say) behave. The way I think about it is as follows: We have an ideal pure state where every atom is in ...
1answer
423 views
### BCS theory, Richardson model and Superconductivity
I'm studying Richardson Model in second quantization. There are many initial points that I don't understand: We supposed that an attractive force between 2 electrons exists, due to electron-phonon ...
2answers
917 views
### trying to understand Bose-Einstein Condensate (BEC)
I am a computer scientist interested in network theory. I have come across the Bose-Einstein Condensate (BEC) because of its connections to complex networks. What I know about condensation is the ...
4answers
719 views
### Bose-Einstein condensate in 1D
I've read that for a Bose-Einstein gas in 1D there's no condensation. Why does this happen? How can I prove that?
2answers
166 views
### existing bounds on maximum density achieved by a Bose condensate
As we know, fermions are subject to exchange interactions that limit the densities they can achieve. However bosons (simple or composite) are not constrained by this, which implies physical phenomena ...
4answers
5k views
### Practical applications for a Bose-Einstein condensate
What are the main practical applications that a Bose-Einstein condensate can have?
|