http://mathoverflow.net/questions/22569?sort=votes
## Is there a nice expression for the number of lattice points on a sphere? [closed]
Possible Duplicate:
Is there a simple way to compute the number of ways to write a positive integer as the sum of three squares?
Is there a nice expression for the number of points in $\mathbb{Z}^3$ which lie a distance of $\sqrt{n}$ from the origin? Here, $n$ is of course a positive integer.
-
Let me rephrase your question as follows: is there a closed form expression for the cube of the theta series, $\bigl(\sum_{n\in\mathbb Z}q^{n^2}\bigr)^3$. The answer is "no", although for a very close cube, $\bigl(\sum_{n\in\mathbb Z}(-1)^nq^{(6n+1)^2/24}\bigr)^3$, one has $\sum_{n=0}^\infty(-1)^n(2n+1)q^{(2n+1)^2/8}$. – Wadim Zudilin Apr 26 2010 at 8:01
@Wadim: the poster did not ask for a closed form expression, he asked for a "nice expression". Firstly, can you really assert with confidence that there is no "closed form expression"? And secondly, so much is known about the generating function---it spans the space of level 4 weight 3/2 modular forms, and there are surely by now computer algebra packages which efficiently compute coefficients---that one might argue that "it's the coefficient of q^n in the unique normalised weight 3/2 level 4 modular form, and a lot is known about coeffs of modular forms" is a "nice expression" for the number! – Kevin Buzzard Apr 26 2010 at 9:39
@Kevin: another level 4 weight 3/2 modular form, linearly independent with $f(q)$, is $\bigl(\sum_{n\in\mathbb Z}(-1)^nq^{n^2}\bigr)^3$. – Wadim Zudilin Apr 26 2010 at 10:45
See my answer mathoverflow.net/questions/3596/… – David Speyer Apr 26 2010 at 11:48
@Kevin: since this is an exact duplicate and you already got some very nice answers, I hope you don't mind my voting to close. – Hailong Dao Apr 26 2010 at 20:29
## 2 Answers
We (Michel, Venkatesh, and I) wrote something about this question in the preprint "Linnik's Ergodic method and the distribution of integral points on spheres."
In particular, in section 3 we explain how, when $n$ is squarefree and not congruent to $7 \bmod 8$, the solution set of $x^2 + y^2 + z^2 = n$ (up to the natural $SO_3(\mathbb{Z})$ action) is naturally a torsor for a certain class group, so that in particular the size of the set is equal to the size of the class group. None of this is really original to us, I should emphasize! Maybe the use of the word "torsor," at most.
-
There is one answer to your question that is classical, discovered by Dirichlet. The number of proper representations of $n$ as a sum of three squares can be expressed as a sum of Jacobi symbols, for example $$r_3'(n) = 24\sum_{m \leq n/4}\left(\frac{m}{n}\right)$$ if $n \equiv 1{\;}(4)$. Here $r_3'(n)$ denotes the number of proper representations, where $x,y,z$ in $x^2 + y^2 + z^2 = n$ have no common factor. If $n$ is squarefree then $r_3(n) = r_3'(n)$; otherwise $r_3(n)$ is given by the sum $$r_3(n) = \sum_{d^2|n}r_3'(n/d^2)$$ The above formula strongly suggests that there is no simple closed form expression for $r_3(n)$.
Whether this answer really qualifies as nice is uncertain. It is necessary to separate into cases. The formula looks slightly different when $n \equiv 3{\;}(4)$. How it looks when $n$ is even I do not know.
I should mention that Gauss had expressed the number of proper representations of $n$ as a sum of three squares in terms of class numbers of binary quadratic forms. Dirichlet obtained his formulas for $r_3'(n)$ by applying his class number formula.
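Neither formula is hard to sanity-check by machine. Here is a minimal brute-force count of $r_3(n)$, a Python sketch added purely for illustration:

```python
# Brute-force count of r_3(n): the number of (x, y, z) in Z^3 with
# x^2 + y^2 + z^2 = n. Handy for checking the formulas above numerically.
def r3(n):
    count = 0
    bound = int(n ** 0.5)
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            z2 = n - x * x - y * y
            if z2 < 0:
                continue
            z = int(round(z2 ** 0.5))
            if z * z == z2:
                count += 1 if z == 0 else 2  # z and -z both count when z != 0
    return count

print([r3(n) for n in range(1, 11)])
# [6, 12, 8, 6, 24, 24, 0, 12, 30, 24] -- note r3(7) = 0, since 7 ≡ 7 (mod 8)
```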
-
http://math.stackexchange.com/questions/255317/questions-about-total-bounded-functions
# Questions about total bounded functions
Let $f(x)=\sin(x)$ on $x \in[0,2\pi]$. Find two increasing functions $h$ and $g$ such that $f=g-h$ on $x \in [0, 2\pi]$.
Finding an explicit example is where I'm stuck. Since this is a bounded function of finite total variation, I know that explicit $h$ and $g$ exist; I just don't know what they are.
-
## 1 Answer
Hint: $g$ can be any function for which $g'$ is large enough.
-
I don't think I follow. – emka Dec 10 '12 at 9:10
@emka: If $f=g-h$, then $g'=f\,'+h'$; $|f\,'|$ is bounded, so if you take $h'$ big enough, you can ensure that both $g'$ and $h'$ are positive. What do you know about a function with a positive derivative? – Brian M. Scott Dec 10 '12 at 9:46
A function of positive derivative is increasing. Could I take something simple and friendly like f(x)=x? – emka Dec 10 '12 at 13:40
You mean $g(x) = x$. Yes you can (though you might prefer $2x$). – Robert Israel Dec 10 '12 at 20:25
http://math.stackexchange.com/questions/2715/constructing-a-seven-digit-number-such-that-one-divides-the-other/2728
# Constructing a seven digit number such that one divides the other
This was asked to me by a friend. I tried this problem for a long time, but I couldn't solve it. It seems interesting, though.
Prove that it is impossible to construct two different seven-digit numbers, one of which is divisible by the other, out of the digits 1, 2, 3, 4, 5, 6, 7 (all seven digits must be used in each number).
-
Since there are only 5040 permutations of {1,2,..,7}, one can simply check all of them. That does not increase your understanding, though. =) – Jens Aug 18 '10 at 14:40
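Here is the exhaustive check suggested in the comment above, as a short Python sketch (standard library only); it runs in a few seconds:

```python
# Check all 5040 permutations of the digits 1..7: no number formed this way
# should properly divide another.
from itertools import permutations

nums = sorted(int(''.join(p)) for p in permutations('1234567'))
pairs = [(x, y) for i, x in enumerate(nums) for y in nums[i + 1:] if y % x == 0]
print(pairs)  # [] -- no such pair exists, confirming the claim
```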
## 3 Answers
Say the numbers are $x$ and $y$, with $y$ dividing $x$.
Consider the remainder upon dividing both numbers by $9$: each leaves remainder $1$, since the digit sum $1+2+\cdots+7=28$ leaves remainder $1$.
So $x = ky$ with $2 \le k \le 8$ is impossible, since it would force $k \equiv 1 \pmod 9$.
-
Is it possible for $1$ times $k$ to be congruent to $1$ (modulo $9$) when $2\le k\le 8$? – Robin Chapman Aug 18 '10 at 14:40
Since x = ky, this also holds modulo 9. 1=x=ky=k mod 9. Since the numbers have the same number of digits, k must be < 10, therefore k = 1. – yatima2975 Aug 18 '10 at 14:42
HINT $\; ax = b,\; (a,m)=1 \;\Rightarrow\; x = b/a \;\;{\rm mod}\; m \;$ is the unique solution such that $\; \; 0\le x < m. \;$
Now put $\; m = 9, \;$ i.e. cast out nines. This is a ubiquitous technique for recreational problems of this ilk.
CHALLENGE Find a natural generalization to arbitrary length integers.
-
$7654321/1234567 \approx 6.2$, so $x = ky$ where $2 \le k \le 6$.
The digit sum $28$ is not divisible by $3$, so neither number is divisible by $3$, ruling out $k = 3$ and $k = 6$.
That leaves $x = ky$ with $k = 2, 4$, or $5$, which is ruled out by the casting-out-nines argument above, since $k \equiv 1 \pmod 9$ fails for these $k$.
-
http://mathoverflow.net/questions/108940/irreducible-representation-decomposition-of-tensor-on-manifold-with-metric/109703
## Irreducible representation decomposition of tensor on manifold with metric
I'm aware that for some tensor product space, Schur-Weyl duality lets me decompose the space into irreducible representations by looking at irreps of the symmetric group. The simplest example is $V\otimes V\cong \mathrm{Sym}^2(V)\oplus \Lambda^2(V)$. In physicists' notation (warning, I am a physicist), we would write $T_{ab} = T_{(ab)} + T_{[ab]}$.
However, when talking about tensors on a manifold $M$ with metric $g$, we can also take traces and so further reduce symmetric products into a trace and a trace-free part, i.e. $T_{ab} = \frac{1}{d}g_{ab}T + [T_{ab}]^{\tiny{STF}} + T_{[ab]}$ where $T\equiv g^{ab}T_{ab}$, $d$ is the dimension of the manifold, and $[T_{ab}]^{\tiny{STF}} = T_{(ab)}-\frac{1}{d}g_{ab}T$. This seems to be relevant because $g$ is an invariant symbol of $SO(p,q)$ where $(p,q)$ is the signature of $g$. There will also be an alternating tensor of highest rank which will be an invariant symbol, and this could also appear in the decomposition.
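For concreteness, this rank-2 split is easy to check numerically in the Euclidean case $g_{ab}=\delta_{ab}$; the following is an illustrative sketch (NumPy assumed), not part of the original question:

```python
# Split a rank-2 tensor T into trace part, symmetric trace-free (STF) part,
# and antisymmetric part, for the flat Euclidean metric g = identity.
import numpy as np

d = 4
rng = np.random.default_rng(0)
T = rng.standard_normal((d, d))

g = np.eye(d)                       # metric, taken Euclidean here
trace = np.trace(T)                 # g^{ab} T_{ab} when g = I
trace_part = (trace / d) * g        # (1/d) g_{ab} T
antisym = (T - T.T) / 2             # T_[ab]
stf = (T + T.T) / 2 - trace_part    # [T_{ab}]^{STF}

assert np.allclose(T, trace_part + stf + antisym)
assert np.isclose(np.trace(stf), 0.0)   # trace-free, as claimed
```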
Schur-Weyl doesn't seem to say anything about metrics, trace/trace-free decompositions, etc. What is the general decomposition here? Is there an algorithm to follow?
-
## 2 Answers
Have a look at the beginning of section 33 (in particular, 33.2) of the book "Natural operations in differential geometry" (pdf), for the Riemannian case only. It should work for the $SO(p,q)$ case also. There all $O(n)$-invariant tensors are described: The idea is to tensor with the metric or its inverse and then use the $GL(n)$ decomposition, i.e., involve traces and permutations.
-
Thanks, Peter, but I think you linked to a different text. I assumed you meant mat.univie.ac.at/~michor/kmsbookh.pdf ? – duetosymmetry Oct 5 at 21:19
You are right. Sorry. I shall correct this. – Peter Michor Oct 6 at 17:35
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
There is a "Schur-Weyl theory" for representations of $O(n)$ and $Sp(n)$. The group algebra of the symmetric group is replaced by the Brauer algebra. Basically, you first decompose your tensor product with respect to "the number of traces the vectors contain" and then for each such part you can use the classical $GL$ decomposition.
The relevant representation theory over the complex numbers is treated in the book by Goodman and Wallach, Symmetry, Representations, and Invariants. In particular, look at Appendix F on the linked web page. Alternatively, you can look into the older version of this book, which was published under the name Representations and Invariants of the Classical Groups.
You should be careful when dealing with representations of noncompact real groups.
-
http://mathhelpforum.com/calculus/128353-pre-calculus-question-w-some-basic-calculus-involved.html
1. ## Pre-Calculus Question w/ Some Basic Calculus Involved
Hi,
Consider $\lim_{x\to 3} f(x)$ where $f(x)=\frac{x^2+4x-21}{x-3}$.
Using L'Hopital's rule (or algebraic manipulation) we can determine that the limit is 10. My question is, how is this possible if it is a rational function with a vertical asymptote at $x=3$? Shouldn't it approach infinity?
After graphing it, I noticed that it happens to be a straight line, virtually equal to the graph of its derivative $f'(x)=2x+4$. This seems odd, as the graph showed a hole at $x=3$, not a vertical asymptote. If I slightly change the function (such as putting $x-4$ in the denominator), it looks more like a rational function with a vertical asymptote.
I don't need any computational help with this problem - I just want to understand what's going on here.
2. Never mind - I realized after factoring the numerator that the factor $(x-3)$ cancels with the denominator, essentially creating the function $y=x+7$ with a hole at $x=3$, equivalent to the original function when you graph it (see the quick check below).
This is what I get for teaching myself calculus while only just starting pre-calculus.
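The cancellation mentioned above is easy to confirm with a computer algebra system; a quick SymPy check:

```python
# Confirm the factorization and the limit at the removable singularity x = 3.
from sympy import symbols, factor, limit

x = symbols('x')
f = (x**2 + 4*x - 21) / (x - 3)
print(factor(x**2 + 4*x - 21))  # (x - 3)*(x + 7)
print(limit(f, x, 3))           # 10
```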
http://mathoverflow.net/revisions/49683/list
Both of these are facts which developed gradually over the course of several papers, so it's hard to give a definitive reference.
For the first, all the calculations one would need in order to establish this fact are done in, for example, Springer's Quelques applications de la cohomologie d’intersection, but this isn't stated as a theorem. If you're just looking for somewhere in the literature to cite, this is stated as Theorem 4 in The geometry of Markov traces by myself and Geordie Williamson. EDIT: And Bugs is completely right; you need to say "mixed" here, or you just get the group algebra of the Weyl group.
For the second, I would say $N$-equivariant, or Schubert smooth, rather than $B$-equivariant, since the $B$-equivariant derived category is the wrong thing (this is like the difference between category $\mathcal O$ and translation functors on it). You should also be careful about what category you're talking about; you don't want the center to act trivially, but nilpotently. The easiest reference I know of is Koszul Duality Patterns in Representation Theory by Beilinson, Ginzburg and Soergel, Proposition 3.5.2, though the theorem is older, going back to Soergel's Habilitationsschrift.
http://math.stackexchange.com/questions/148641/finding-the-values-of-n-for-which-mathbbf-5n-the-finite-field-with
# Finding the values of $n$ for which $\mathbb{F}_{5^{n}}$, the finite field with $5^{n}$ elements, contains a non-trivial $93$rd root of unity
For which values of $n$, does the finite field $\mathbb{F}_{5^{n}}$ with $5^{n}$ elements contain a non-trivial $93$rd root of unity?
I don't know how to find the value of $n$.
-
## 2 Answers
$\mathbb F_{5^n}$ contains an element of multiplicative order $93$ if and only if $93$ is a divisor of $5^n-1$, that is, $5^n \equiv 1\bmod 93$. The brute force way of finding the smallest value of $n$ is to just calculate the values of $5^1$, $5^2$, $5^3$, $\ldots$ modulo $93$ until you find the answer. So we proceed as follows: $$\begin{align*} 5^1 &\equiv 5 \bmod 93\\ 5^2 &\equiv 25 \bmod 93\\ 5^3 = 125 &\equiv 32 \bmod 93\\ 5^4 \equiv 5\times 32 =160 &\equiv 67 \bmod 93\\ 5^5 \equiv 5\times 67 = 335 &\equiv 56 \bmod 93\\ 5^6 \equiv 5\times 56 = 280 &\equiv 1 \bmod 93\\ \end{align*}$$ Thus, $\mathbb F_{5^6}$ is the smallest field of characteristic $5$ that contains an element of multiplicative order $93$. Since $5^6-1$ is a divisor of $5^n - 1$ if and only if $6$ is a divisor of $n$, we conclude that $\mathbb F_{5^n}$ contains an element of multiplicative order $93$ if and only if $n$ is an integer multiple of $6$.
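This brute-force search is immediate to carry out by machine; a one-loop Python sketch:

```python
# Multiplicative order of 5 modulo 93: the smallest n with 5^n ≡ 1 (mod 93).
n, power = 1, 5
while power != 1:
    power = power * 5 % 93
    n += 1
print(n)  # 6
```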
-
Thanks! Now I can understand this properly. – Kns May 24 '12 at 6:07
It might also be worth mentioning that nobody knows how to do this in a way that is significantly better than brute-force enumeration. – MJD May 24 '12 at 16:01
Thanks for the help on the question. But it is also true that $n$ could be any other multiple of 6, like 12, 18, and so on... all of these have the same property, right? – Ram Oct 29 '12 at 16:52
@Ram Yes, as I said at the very end of my answer, "... $\mathbb F_{5^n}$ contains an element of multiplicative order $93$ if and only if $n$ is an integer multiple of $6$." This means that $12, 18, 24, \ldots$, all multiples of $6$, also have the same property, exactly as you say. – Dilip Sarwate Oct 30 '12 at 2:36
Why is "a non-trivial 93rd root of unity" an element of order 93? I'd say that any root of unity that is not 1 is non-trivial, in particular, elements of order 3 and 31 should also work. – Phira Dec 5 '12 at 9:31
I'll give you one direction - see if you can do the other on your own.
Suppose we have a non-trivial $93$rd root of unity $\zeta\in\mathbb{F}_{5^n}$. Then $\zeta$ would, of course, have to be non-zero. So $\zeta\in\mathbb{F}_{5^n}^\times$, and by definition we have that the order of $\zeta$ as an element of the multiplicative group $\mathbb{F}_{5^n}^\times$ is $93$. By Lagrange's theorem, this is only possible if $93\mid 5^n-1$. Note that $93\mid 5^n-1$ if and only if $3\mid 5^n-1$ and $31\mid 5^n-1$, because $93=3\cdot 31$ is the prime factorization of $93$.
When does that happen? Look at the following table: $$\begin{array}{c|c|c|c|c|c|c|c} n & 0 & 1 & 2 & 3 & 4 & 5 & 6\\ \hline 5^n & 1 & 5 & 25 & 125 & 625 & 3125 & 15625\\ \hline \text{mod }3 & 1 & 2 & 1 & 2 & 1 & 2 & 1 \\ \hline \text{mod }31 & 1 & 5 & 25 & 1 & 5 & 25 & 1 \end{array}$$ The period of $5^n$ modulo $3$ is $2$, and the period of $5^n$ modulo $31$ is $3$ (the proof that these periods persist for all $n$ is straightforward). Thus, the $n$ for which $5^n\equiv 1\bmod 3$ and $5^n\equiv 1\bmod 31$ (i.e. the $n$ for which $3\mid 5^n-1$ and $31\mid 5^n-1$) are those $n$ such that $2\mid n$ and $3\mid n$, i.e. the $n$ which are multiples of $6$.
Ok, so we've established that, if there is a primitive $93$rd root of unity in $\mathbb{F}_{5^n}$, then it must be the case that $6\mid n$.
Is the converse true? Try to work it out yourself.
-
Zev - this is serious. We both need to go to sleep. – mixedmath♦ May 23 '12 at 8:42
Haha neverrrrr – Zev Chonoles♦ May 23 '12 at 8:43
+1 Do observe that the third cyclotomic polynomial $$\phi_3(x)=x^2+x+1$$ is quadratic, so even in characteristic zero you never need higher than a quadratic extension to include primitive cubic roots of unity! – Jyrki Lahtonen May 23 '12 at 9:06
http://mathoverflow.net/questions/105543/algorithms-to-find-irreducible-polynomials-of-a-given-degree
## Algorithms to find irreducible polynomials of a given degree
I need to know what the efficient algorithms are for finding all the irreducible polynomials of a given degree, say $d$, over a given finite field, say $\mathbb{F}_{p^n}$.
One way is to factorize the polynomial $x^{p^{dn}}-x$, which is the product of all irreducible polynomials whose degree divides $d$, using factorization algorithms, and collect all the degree $d$ factors. But I guess we are doing some extra work here. Are there better algorithms to find all irreducible polynomials of degree $d$?
I also want to know about the algorithms to find one irreducible polynomial of a given degree over a given finite field.
-
Extremely sorry for the typo; I was confusing it with $\mathbb{F}_p$ – pritam Aug 26 at 16:35
## 2 Answers
The last word on the second question is this paper of Couveignes and Lercier. The question is highly nontrivial.
-
@quid: thanks for the edit, was doing this on iPad... – Igor Rivin Aug 26 at 18:16
@Igor Rivin: Thanks for the link, I was more interested in the second question. – pritam Aug 30 at 6:03
If you want to work over `$\mathbb{F}_{p^n}$` then what you wrote is not quite right. What you want is the polynomial $x^{p^{dn}}-x$, which is divisible by all irreducible polynomials of degree $d$ over `$\mathbb{F}_{p^n}$`.
You can first use inclusion-exclusion to extract from $x^{p^{dn}}-x$ the factor which is the product of all irreducible polynomials of degree $d$, and then factor that. I don't think there is a better way of finding all irreducible polynomials of degree $d$.
If you only need to find one polynomial, then the best thing is to write down a random polynomial of degree $d$ and test it for irreducibility. Repeat as necessary.
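To illustrate the random-search idea in the simplest case $n=1$, i.e. over the prime field $\mathbb{F}_p$, here is a sketch using SymPy (for a genuine extension field $\mathbb{F}_{p^n}$ one would instead work with polynomials over that field):

```python
# Draw random monic polynomials of degree d over F_p until one is irreducible.
# Roughly 1/d of monic degree-d polynomials are irreducible, so this needs
# about d attempts on average.
import random
from sympy import Poly, symbols

x = symbols('x')

def random_irreducible(p, d):
    while True:
        coeffs = [1] + [random.randrange(p) for _ in range(d)]  # monic, degree d
        f = Poly(coeffs, x, modulus=p)
        if f.is_irreducible:
            return f

print(random_irreducible(5, 4))
```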
-
This polynomial will be the product of the elementary cyclotomic polynomials $\phi_k(x)$ for $k\mid p^{dn}-1$ but $k\nmid p^{en}-1$ for $e<d$. This is true because each $x^{p^{en}}-x$ factors into a product of elementary cyclotomic polynomials, so a polynomial computed from them by inclusion-exclusion does as well, and the elementary cyclotomic polynomials involved, since $p$ does not divide the order of their roots, are still relatively prime mod $p$, so we can easily check which ones are included in the product of all irreducible polynomials of degree $d$. – Will Sawin Aug 26 at 16:22
http://stats.stackexchange.com/questions/38987/relationship-between-lasso-t-and-lars-number-of-steps-k
# Relationship between LASSO T and LARS number of steps k
We can see in the figure (cf. Least Angle Regression, p. 30, by Efron, Hastie, Johnstone, Tibshirani - link: Least Angle Regression) that there is a direct relationship between:
• the LASSO bound $T$ on the absolute norm of $\beta$: $T(\beta) = \sum_j\vert\beta_j\vert$
• and the number of steps $k$ computed by LARS.
I am trying to find a mathematical or at least a direct relationship between both LASSO T and LARS k, like $A = B$ or $A = x \Rightarrow B = f(x)$.
-
I don't see it on p. 30; can you give the number of the equation? – Bitwise Oct 9 '12 at 16:17
It is Figure 8 (legend: "Plot of S versus T...") on p. 30. Actually, I guess that if you understand how he gets this number of points (the number of LARS steps) and how they fit the whole LASSO curve, you should be able to explain the relationship between the number of steps k and the LASSO absolute-norm bound T on beta (which is a bound, actually). Thanks a lot for your help. – user1731029 Oct 9 '12 at 17:59
I've merged your accounts so you can edit your posts, comment under them and stuff. – mbq♦ Oct 9 '12 at 19:05
http://mathoverflow.net/questions/65226/the-ramanujan-problems/71260
## The Ramanujan Problems.
I originally thought of asking this question at the Mathematics Stackexchange, but then I decided that I'd have a better chance of a good discussion here.
In the Wikipedia page on Ramanujan, there is a link to a collection of problems posed by him. The page has a collection of about sixty problems which have appeared in the Journal of the Indian Mathematical Society.
While I was browsing through them at random, I came across this, which I recognized as the Brocard-Ramanujan problem. So,
1. Are all these problems solved? Or are there more unsolved problems among them?
2. Why were the problems posed exactly? Was there some sort of contest in the journal back then?
Here is one problem from the list. (I just cannot resist posting it here!)
Let $AB$ be a diameter and $BC$ be a chord of a circle $ABC$. Bisect the minor arc $BC$ at $M$; and draw a chord $BN$ equal to half the length of the chord $BC$. Join $AM$. Describe two circles with $A$ and $B$ as centers and $AM$ and $BN$ as radii cutting each other at $S$ and $S'$ and cutting the given circle again at the points $M'$ and $N'$ respectively. Join $AN$ and $BM$ intersecting at $R$ and also join $AN'$ and $BM'$ intersecting at $R'$. Through $B$, draw a tangent to the given circle meeting $AM$ and $AM'$ produced at $Q$ and $Q'$ respectively. Produce $AN$ and $M'B$ to meet at $P$ and also produce $AN'$ and $MB$ to meet at $P'$. Show that the eight points $PQRSS'R'Q'P'$ are cyclic and that the circle passing through these points is orthogonal to the given circle $ABC$.
How does one even begin to prove this? I've tried to construct the entire thing using compass and straightedge, but the resulting mess only added to my confusion. Here is the link to the original problem.
Thanks a lot in advance!
EDIT: This problem is mentioned in the link provided in Andrey Rekalo's answer below in pp. 34-35. The solution is given in the second link in pp. 244-246.
-
Can anyone give a solution to this problem imsc.res.in/~rao/ramanujan/collectedpapers/… please? – Quanta May 17 2011 at 19:30
@quanta: Why can't you post this as a separate question? – Koundinya Vajjha May 20 2011 at 2:27
## 4 Answers
There is a survey article by Berndt, Choi, and Kang devoted to the set of 58 problems posed by Ramanujan. They indicate that the questions originally appeared in the problems section of the Journal, and apparently the editors published readers' solutions in subsequent issues.
Concerning your question 1, let me just quote from the Introduction to the survey:
Several of the problems are elementary and can be attacked with a background of only high school mathematics. For others, significant amounts of hard analysis are necessary to effect solutions, and a few problems have not been completely solved.
An elementary solution to the specific geometric problem you've mentioned can be found in Ramanujan's Notebooks, Part III by Berndt (Springer, 1991, pp. 244-246). The problem stems from Ramanujan's work on modular equations of degree 3...
-
Oh wow! Thanks for this! – Koundinya Vajjha May 17 2011 at 13:43
FYI, your link does not work without an account on Google docs or something like this. – Sergei Ivanov May 17 2011 at 13:43
@Sergei Ivanov: Thanks, I have mended the first link. – Andrey Rekalo May 17 2011 at 13:51
This survey is available from Bruce Berndt website: math.uiuc.edu/~berndt/jims.ps – Igor Pak May 17 2011 at 18:11
@Igor Pak: Thank you for the link. – Andrey Rekalo May 17 2011 at 23:53
Concerning the planimetry problem: Having a suitable background, it is not hard to produce many puzzles like this. For example, here is one more pair of points on that mysterious circle: intersect a circle centered at $B$ with radius $BM$ and a circle centered at $A$ with radius $AB$, then the two intersection points are also on that 8-points circle.
This circle is a circle of Apollonius with foci at $A$ and $B$. More precisely, it is the locus of points $X$ such that $BX:AX=\sin\alpha$ where $\alpha=\angle BAM=\frac12\angle BAC$. The fact that this is a circle orthogonal to the original one is a general property of circles of Apollonius, and verifying that the ratio of distances equals $\sin\alpha$ for each point is straightforward.
-
An inelegant solution (to the described problem) which works in principle: transform everything into algebraic equations, e.g. by setting $A=(-1,0),B=(1,0),C=(x_C,y_C),M=(x_M,y_M),\dots$, $x_C^2+y_C^2-1=0,x_M=1/2(1+x_C),\dots$ and compute a Groebner basis. Doing a few numerical examples then shows that there is at least a large enough set of real solutions.
This is of course ugly but has the advantage that you can give the heavy work to a machine.
-
So how did Ramanujan come up with such a problem? He surely wouldn't have discovered this problem by the algebraic technique you have given. The intuition behind the problem will be more interesting if it is known. :) – Koundinya Vajjha May 17 2011 at 12:45
@Koundinya: as I've said before in another site... "that crazy Ramanujan guy never can tell us anything useful." – J. M. May 17 2011 at 12:54
Hahahaha! Oh well, he sure can make us rack our brains out. (: – Koundinya Vajjha May 17 2011 at 13:16
You may want to see this paper by Berndt as well.
-
http://math.stackexchange.com/questions/81830/conways-napkin-puzzle
# Conway's napkin puzzle
This puzzle is from the book "Mathematical Puzzles: a connoisseur's collection", and I am only asking for a part of it.
48 people are seated at a big circular table, with one napkin between each pair of settings. As each person is seated, he chooses a napkin from his left or right; if both are present, he chooses one at random. The question is: if the maitre d' seats each guest and tries to make the number of napkinless guests as large as possible, what is the expected number?
Suppose that the first guest takes the napkin to his right. Then the maitre d' should seat the second guest two spaces to the right of the first, leaving one open seat, so that whoever sits there may have no napkin to choose. If the second guest chooses the left napkin, the next guest is seated in that open position; otherwise, if he chooses the right napkin again, the next guest is seated two spaces to the right of the second guest. Then the expected number of napkinless guests is 1/6 of the total number of guests.
I don't follow the last conclusion, but in the book it seems to be treated as obvious. How does one compute this value?
-
## 1 Answer
The result "$1/6$ of the total number of guests" sounds as if it's independent of the given number of guests, $48$. It can't be correct in this generality, since there will not be any napkinless guests if there are only one or two guests. So I'm not sure whether you're looking for the solution for $48$ people or an asymptotic result. Here's how to derive the asymptotic result for a table large enough that we can ignore that we'll eventually get back where we started.
In that case, in the situation where guest $n$ has chosen the napkin to her right, there's a probability $1/2$ that guest $n+2$ also chooses right and we have to try again, and a probability $1/2$ that guest $n+2$ chooses left, leaving guest $n+1$ napkinless. In the latter case, we have to seat everyone to the right of guest $n+2$ until someone chooses right again to get back to the initial state, and the expected number of guests to seat until that happens is $2$. So the expected number $k$ of guests we have to seat in order to produce one napkinless guest is $k=2+(\frac12k+\frac12\cdot2)$, and thus $k=6$.
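This renewal argument is easy to check by simulation; below is a Monte Carlo sketch of exactly the process described above, with each napkin choice a fair coin flip:

```python
# Count guests seated per napkinless guest produced; the mean should approach
# k = 6, i.e. 1/6 of all guests end up napkinless.
import random

def guests_per_napkinless():
    seated = 0
    while True:
        seated += 2  # the skipped seat's occupant plus the guest two to the right
        if random.random() < 0.5:
            continue   # new guest chose right: same situation, try again
        # New guest chose left: the skipped seat's occupant is napkinless.
        # Seat guests to the right until someone chooses right again.
        while random.random() < 0.5:
            seated += 1   # chose left, keep seating
        seated += 1       # this guest chose right: back to the initial state
        return seated

trials = 100_000
print(sum(guests_per_napkinless() for _ in range(trials)) / trials)  # ~6.0
```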
-
http://unapologetic.wordpress.com/2010/09/21/morphisms-between-representations/
# The Unapologetic Mathematician
## Morphisms Between Representations
Since every representation of $G$ is a $G$-module, we have an obvious notion of a morphism between them. But let’s be explicit about it.
A $G$-morphism from a $G$-module $(V,\rho_V)$ to another $G$-module $(W,\rho_W)$ is a linear map $T:V\to W$ between the vector spaces $V$ and $W$ that commutes with the actions of $G$. That is, for every $g\in G$ we have $\rho_W(g)\circ T=T\circ\rho_V(g)$. Even more explicitly, if $g\in G$ and $v\in V$, then
$\displaystyle\left[\rho_W(g)\right]\left(T(v)\right)=T\left(\left[\rho_V(g)\right](v)\right)$
We can also express this with a commutative diagram:
For each group element $g\in G$ our representations give us vertical arrows $\rho_V(g):V\to V$ and $\rho_W(g):W\to W$. The linear map $T$ provides horizontal arrows $T:V\to W$. To say that the diagram “commutes” means that if we compose the arrows along the top and right to get a linear map from $V$ to $W$, and if we compose the arrows along the left and bottom to get another, we’ll find that we actually get the same function. In other words, if we start with a vector $v\in V$ in the upper-left and move it by the arrows around either side of the square to get to a vector in $W$, we’ll get the same result on each side. We get one of these diagrams — one of these equations — for each $g\in G$, and they must all commute for $T$ to be a $G$-morphism.
Another common word that comes up in these contexts is “intertwine”, as in saying that the map $T$ “intertwines” the representations $\rho_V$ and $\rho_W$, or that it is an “intertwinor” for the representations. This language goes back towards the viewpoint that takes the representing functions $\rho_V$ and $\rho_W$ to be fundamental, while $G$-morphism tends to be more associated with the viewpoint emphasizing the representing spaces $V$ and $W$.
If, as will usually be the case for the time being, we have a presentation of our group by generators and relations, then we’ll only need to check that $T$ intertwines the actions of the generators. Indeed, if $T$ intertwines the actions of $g$ and $h$, then it intertwines the actions of $gh$. We can see this in terms of diagrams by stacking the diagram for $h$ on top of the diagram for $g$. In terms of equations, we check that
$\displaystyle\begin{aligned}\rho_W(gh)\circ T&=\rho_W(g)\circ\rho_W(h)\circ T\\&=\rho_W(g)\circ T\circ\rho_V(h)\\&=T\circ\rho_V(g)\circ\rho_V(h)\\&=T\circ\rho_V(gh)\end{aligned}$
So if we’re given a set of generators and we can write every group element as a finite product of these generators, then as soon as we check that the intertwining equation holds for the generators we know it will hold for all group elements.
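As a toy numerical illustration (an added example, not from the original post): take $G=S_3$ acting on $\mathbb{C}^3$ by permutation matrices, generated by a transposition and a 3-cycle, and let $T$ be the projection onto the constant vectors. Checking the intertwining equation on the two generators then suffices:

```python
# Verify rho(g) T = T rho(g) on the two generators of S_3; by the argument
# above, T then intertwines the action of every element of S_3.
import numpy as np

s = np.array([[0, 1, 0],    # permutation matrix of the transposition (1 2)
              [1, 0, 0],
              [0, 0, 1]])
c = np.array([[0, 0, 1],    # permutation matrix of the 3-cycle (1 2 3)
              [1, 0, 0],
              [0, 1, 0]])

T = np.full((3, 3), 1 / 3)  # projection onto span{(1, 1, 1)}

for g in (s, c):
    assert np.allclose(g @ T, T @ g)
print("T intertwines the generators, hence all of S_3")
```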
There are also deep connections between $G$-morphisms and natural transformations, in the categorical viewpoint. Those who are really interested in that can dig into the archives a bit.
http://math.stackexchange.com/questions/183893/rkhs-concepts-connection-with-matrix-trace-function
# RKHS concepts. Connection with matrix trace function.
With my basic linear algebra background, I am trying to connect Reproducing Kernel Hilbert Space (RKHS) concepts to example functions over matrices, as I gradually learn about this new concept.
Question: Can $\operatorname{Tr}(X^TKX)$ be connected to an RKHS in any way? Here $K$ is a p.s.d. kernel matrix and $X$ is a matrix of reals.
I know that $\operatorname{Tr}(X^TKX)$ can be represented as $\operatorname{Tr}[(SX)^T(SX)]=\|SX\|^2_{HS}$, using the Hilbert-Schmidt norm, where $S$ is the p.s.d. square root of $K$ (i.e. $S=U\Lambda^{1/2}U^T$, where $K=U\Lambda U^T$ is the eigendecomposition of $K$). But I am trying really hard, with my narrow perspective, to see the connections of this matrix function with the concept of RKHS. Do shed some light!
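The identity itself is easy to verify numerically; here is a quick NumPy sketch using the symmetric square root $S=U\Lambda^{1/2}U^T$, which satisfies $S^TS=K$:

```python
# Check Tr(X^T K X) = ||S X||_HS^2 for a random p.s.d. K and S = sqrt(K).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
K = A @ A.T                                    # a p.s.d. kernel matrix
X = rng.standard_normal((5, 3))

w, U = np.linalg.eigh(K)
S = U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.T   # symmetric square root

lhs = np.trace(X.T @ K @ X)
rhs = np.linalg.norm(S @ X, 'fro') ** 2        # Hilbert-Schmidt (Frobenius) norm
assert np.isclose(lhs, rhs)
```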
-
http://math.stackexchange.com/questions/109733/equivalent-norms?answertab=votes
# Equivalent norms
Whenever two norms are equivalent in the sense that $||x||_1\le c_1\cdot ||x||_2$ and $||x||_2\le c_2\cdot ||x||_1$, they generate the same topology. Is the reverse also true, i.e. if a topology is generated by two different norms, are the norms equivalent in the above sense? We know this to be true for $\mathrm{R}^n$, but is it generally true, and if not, what are some counterexamples?
What about a (EDIT:Hausdorff, translationally invariant vector) topology generated by two different metrics, are the metrics equivalent in the sense $d_1(x,y)\le c_1\cdot d_2(x,y)$ and vice versa?
-
The answer here is "no." Consider the following two metrics generating the same topology on the positive real halfline: $d_1(x,y) = |x - y|$, and $d_2(x,y) = |1/x - 1/y|$. Because $x\mapsto 1/x$ is not Lipschitz continuous near the origin, the metrics are not equivalent (try to prove this). – William Feb 15 '12 at 20:11
WNY: Sorry, I meant a translationally invariant metric. One rather trivial example is $|x-y|/(1+|x-y|)$, but that's because it is a bounded metric. – Ivan Feb 15 '12 at 20:22
David Mitra: I am not completely sure about why a norm on space X should induce a norm on a homeomorphic space Y? It'd be good for me to have an example of non-equivalent norms on a Banach space that induce the same topology (and no, this isn't a homework question, it's for my own elucidation:-)) – Ivan Feb 15 '12 at 20:27
I don't see why two norms on two different Banach spaces will induce two different norms on one of the spaces? – Ivan Feb 15 '12 at 20:33
## 1 Answer
First, let's have a look at the case of a metric space $X$ and two metrics $d_1$ and $d_2$ on it. If these are equivalent, meaning $d_1(x,y) \leq c_1 d_2(x,y)$ and $d_2(x,y)\leq c_2 d_1(x,y)$ for some real numbers $c_1,c_2 > 0$ and all $x,y \in X$, we get the following inclusions for open balls $B_{r}^i(x) = \{y \in X \:|\: d_i(x,y) < r\}$: $$B^2_{\frac{r}{c_1}}(x) \subseteq B^1_r(x)$$ $$B^1_{\frac{r}{c_2}}(x) \subseteq B^2_r(x)$$ for all $x \in X$ and $r > 0$. As every open subset is a union of open balls, it follows that the metrics induce the same topology. The case for norms is a corollary now.
Conversely, if the topologies induced by two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ on a vector space $X$ are identical, we have an inclusion $B_r^1(0) \subset B^2_1(0)$ for some $r > 0$. Let $x \in X\setminus{\{0\}}$ and set $y = \frac{rx}{2\|x\|_1}$. Then $\|y\|_1 = \frac{r}{2} < r$, hence $\|y\|_2 < 1$ which shows $\|x\|_2 \leq \frac{2}{r} \|x\|_1$. By symmetry, you get the desired equivalence.
Now the bad news: two metrics inducing the same topology do not have to be equivalent. Take $X = \mathbb{R}$ with $d_1$ the usual metric induced by the norm, and set $d_2 = \frac{d_1}{d_1 + 1}$. It is not difficult to show that $d_1$ and $d_2$ induce the same topology, but there is no $c > 0$ with $d_1(x,y) \leq c \cdot d_2(x,y)$, as the inequality $n \leq c \frac{n}{n + 1}$ fails for $n > c$.
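Numerically, the failure of equivalence shows up as an unbounded ratio $d_1/d_2$; a two-line check:

```python
# The ratio d1(0, n) / d2(0, n) = 1 + n grows without bound, so no constant c
# with d1 <= c * d2 exists, even though d1 and d2 induce the same topology.
d1 = lambda x, y: abs(x - y)
d2 = lambda x, y: d1(x, y) / (1 + d1(x, y))
for n in (1, 10, 100, 1000):
    print(n, d1(0, n) / d2(0, n))
```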
-
Thank you, a great argument!:-) – Ivan Feb 15 '12 at 20:49
You've probably seen this argument in the proof that a linear operator between normed spaces is continuous iff it is bounded. This also gives an easy proof for normed spaces: If two norms $||\cdot||_1, ||\cdot||_2$ generate the same topology on $X$ then the identity operator $I : (X, ||\cdot||_1) \to (X, ||\cdot||_2)$ is a homeomorphism. So $I$ and $I^{-1}$ are continuous, hence bounded, and if you write down the definition of bounded you see that it means that the two norms are equivalent. The same argument in reverse also gives the converse. – Nate Eldredge Feb 15 '12 at 21:03
http://gilkalai.wordpress.com/2008/08/31/a-diameter-problem-2/
Gil Kalai’s blog
## A Diameter Problem (2)
Posted on August 31, 2008 by Gil Kalai
### 2. The connection with Hirsch’s Conjecture
The Hirsch Conjecture asserts that the diameter of the graph G(P) of a d-polytope P with n facets is at most n-d. Not even a polynomial upper bound for the diameter in terms of d and n is known. Finding good upper bounds for the diameter of graphs of d-polytopes is one of the central open problems in the study of convex polytopes. If d is fixed then a linear bound in n is known, and the best bound in terms of d and n is $n^{\log d+1}$. We will come back to these results later.
One basic fact to remember is that for every d-polytope P, G(P) is a connected graph. As a matter of fact, a theorem of Balinski asserts that $G(P)$ is d-connected.
The combinatorial diameter problem I mentioned in an earlier post (and which is repeated below) is closely related. Let me now explain the connection.
Let P be a simple d-polytope. Suppose that P is determined by n inequalities, and that each inequality describes a facet of P. Now we can define a family $\cal F$ of subsets of {1,2,…,n} as follows. Let $E_1,E_2,\dots,E_n$ be the n inequalities defining the polytope P, and let $F_1,F_2,\dots, F_n$ be the n corresponding facets. Every vertex v of P belongs to precisely d facets (this is equivalent to P being a simple polytope). Let $S_v$ be the indices of the facets containing v, or, equivalently, the indices of the inequalities which are satisfied as equalities at v. Now, let $\cal F$ be the family of all sets $S_v$ for all vertices of the polytope P.
The following observations are easy.
(1) Two vertices v and w of P are adjacent in the graph of P if and only if $|S_v \cap S_w|=d-1$. Therefore, $G(P)=G({\cal F})$.
(2) If $A$ is a set of indices, the vertices $v$ of $P$ such that $A \subset S_v$ are precisely the vertices of a lower-dimensional face of $P$. This face is described by all the vertices of $P$ which satisfy all the inequalities indexed by $i \in A$, or equivalently all vertices of $P$ which belong to the intersection of the facets $F_i$ for $i \in A$.
Therefore, for every $A \subset N$ if ${\cal F}[A]$ is not empty the graph $G({\cal F}[A])$ is connected – this graph is just the graph of some lower dimensional polytope. This was the main assumption in our abstract problem.
Remark: It is known that the assertion of the Hirsch Conjecture fails for the abstract setting. There are examples of families where the diameter is as large as n-(4/5)d.
### 1. A Diameter problem for families of sets
Let me draw your attention to the following problem:
Consider a family $\cal F$ of subsets of size d of the set N={1,2,…,n}.
Associate to $\cal F$ a graph $G({\cal F})$ as follows: The vertices of $G({\cal F})$ are simply the sets in $\cal F$. Two vertices $S$ and $T$ are adjacent if $|S \cap T|=d-1$.
For a subset $A \subset N$ let ${\cal F}[A]$ denote the subfamily of all subsets of $\cal F$ which contain $A$.
MAIN ASSUMPTION: Suppose that for every $A$ for which ${\cal F}[A]$ is not empty $G({\cal F}[A])$ is connected.
MAIN QUESTION: How large can the diameter of $G({\cal F})$ be in terms of $d$ and $n$.
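As a toy illustration of these definitions (an added example, not from the post): the family of all $d$-subsets of $N$ satisfies the main assumption, its graph $G({\cal F})$ is the Johnson graph $J(n,d)$, and its diameter is $\min(d, n-d)$. A short BFS check:

```python
# Build G(F) for F = all d-subsets of {1,...,n} (the Johnson graph J(n, d))
# and compute its diameter by breadth-first search from every vertex.
from itertools import combinations
from collections import deque

def diameter(n, d):
    verts = [frozenset(c) for c in combinations(range(1, n + 1), d)]
    adj = {v: [w for w in verts if len(v & w) == d - 1] for v in verts}
    best = 0
    for s in verts:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        best = max(best, max(dist.values()))
    return best

print(diameter(6, 3))  # 3, which equals min(d, n - d)
```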
### 5 Responses to A Diameter Problem (2)
1. jozsef says:
Dear Gil,
Thanks for the exciting post! I have a question: don't you have a stronger assumption (if needed), that G(F[A]) is at least |F[A]|-connected?
2. Andy D says:
Hi Gil,
Yes, very nice posts!
By the ‘abstract setting’ for which HC fails, do you mean the ‘Main Question’ at the end of the post? If not, what are best known bounds for the Main Question?
Thanks!
3. Gil Kalai says:
Dear Jozsef, yes, we can make this stronger assumption but I do not know how to use it. (I am also not entirely sure it is really stronger.)
Dear Andy, right, there are examples for the “main question” where the diameter is n-4d/5 (or so). These examples arise from graphs of unbounded simple polyhedra.
4. Pingback: A Diameter Problem (5) « Combinatorics and more
5. Pingback: A Diameter problem (7): The Best Known Bound « Combinatorics and more
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 34, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9218447804450989, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/73121/recent-claim-that-inaccessibles-are-inconsistent-with-zf
## Recent claim that inaccessibles are inconsistent with ZF
Here it is mentioned that someone claims to have proven that there are no weakly inaccessibles in ZF.
Question 1: What reasons are there to believe that weakly inaccessibles exist?
Question(s) 2: Since all large cardinals are weakly inaccessible, this would have a profound effect on set theory. What are some of the most significant results whose only known proof assumes the existence of weakly inaccessibles? Might any of the arguments go through without their existence? For example, I've heard that the original proof of Fermat's Last Theorem (FLT) assumed (something equivalent to) a large cardinal, but it was then shown that the argument went through without such an assumption.
Edit. I just added the phrase "whose only known proof" to Question 2 above, which is what I intended originally. The point of course, is that I want to know which results, if any, would be "lost" if weakly inaccessibles were lost. FLT is not an example of that, but would have been before it was known that weakly inaccessibles are not necessary in its proof.
We try to avoid rumors, rendering judgment on preprints, etc. The arguments between users are no less damaging to a website for involving professionals. For whatever reason, FOM welcomes controversy. Please post there. – Will Jagy Aug 18 2011 at 3:54
Address here, either post yourself when you would, or give it a few minutes and read what others may post: meta.mathoverflow.net/discussion/1117/… – Will Jagy Aug 18 2011 at 4:09
My personal opinion is that inaccessible cardinals exist (in great profusion) and therefore I expect that there is an error in the claimed proof. If I had nothing more urgent to do, I might look for the (or an) error, but I do have more urgent things to do, and the two papers (in the version I saw) add up to about 250 pages, so I'm not planning to scrutinize Kiselev's claim soon. (That could change fast if somebody like Solovay said this should be taken seriously.) – Andreas Blass Aug 18 2011 at 4:38
Since Will's comment doesn't make this very clear: this question has a meta thread - meta.mathoverflow.net/discussion/1117/… – François G. Dorais♦ Aug 18 2011 at 4:51
We already had an excellent discussion of FLT and large cardinals, see mathoverflow.net/questions/35746/… . May I propose that we not focus on that particular question here? – David Speyer Aug 18 2011 at 23:29
## 2 Answers
As I pointed out in the meta thread, this question overlaps with a bunch of older MO questions.
However, none of these questions directly address the particular case of the existence of inaccessible cardinals, which is of special interest as this is the weakest of all large cardinal hypotheses. This answer focuses on that case.
Penelope Maddy gives several answers to Question 1 in §III of Believing the Axioms, I [JSL 53 (1988), 481-511, MR0947855]. In this wonderful paper, Maddy justifies many set theoretic axioms and hypotheses using five widely believed "rules of thumb": maximize, inexhaustibility, uniformity, whimsical identity, and reflection. Here is a brief summary of these five arguments as it pertains to the existence of inaccessible cardinals.
• The maximization argument. The maximize rule of thumb is perhaps best understood as the opposite of Occam's Razor. However, blind application of this easily leads to contradictions. Thus, the rule is generally understood as a pair of statements: thickness — powersets are very large; and tallness — there are lots and lots of ordinals. The second easily leads to the existence of inaccessibles.
• The inexhaustibility argument. Maddy describes this one very well: "The universe of sets is too complex to be exhausted by any handful of operations, in particular by power set and replacement, the two given by the axioms of Zermelo and Fraenkel. Thus there must be an ordinal number after all the ordinals generated by replacement and power set. This is an inaccessible." (p. 502)
• The uniformity argument. Uniformity basically states that the richness of the universe should not concentrate in a small region, that if a certain property is found at a certain level of the cumulative hierarchy then analogue properties should also be found higher up. Thus, there should be many cardinals that share the same properties as $\aleph_0$, such as the fact that $2^k < \aleph_0$ for every $k < \aleph_0$. Combined with regularity, this leads to the existence of inaccessibles.
• The whimsical identity argument. This rule of thumb states that there should be no accidental identities, "like the identity between 'human' and 'featherless biped'." (p. 499) It seems unlikely that $\aleph_0$ should be characterized as the unique regular cardinal $\kappa$ such that $2^\mu < \kappa$ for every $\mu < \kappa$. Therefore, there must be inaccessible cardinals.
• The reflection argument. This powerful rule of thumb is a generalization of Montague's Reflection Theorem, which states that for every first-order formula $\phi(\bar{x})$, if $V \vDash \phi(\bar{x})$ then there are arbitrarily large ordinals $\alpha$ such that $V_\alpha \vDash \phi(\bar{x})$. The Reflection Principle generalizes this from first-order properties to arbitrary properties. Thus, since $V$ is closed under replacement and powerset, there must be arbitrarily large ordinals $\alpha$ such that $V_\alpha$ is also closed under replacement and powerset. These ordinals are inaccessibles.
These five arguments have a lot in common, but the basic principles behind them are quite different. I would contend that these are five distinct justifications for the existence of inaccessibles.
Note that Maddy's paper has a sequel, Believing the Axioms, II [JSL 53 (1988), 736-764, MR0960996]. Let me also point out another highly relevant paper: Kanamori and Magidor, The evolution of large cardinal axioms in set theory [LNM 669, 99-275, MR0520190]. Of course, detailed information can be found in Kanamori's The Higher Infinite [Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1994].
@François This is a nice answer to Question 1; thanks. I don't think any of the other MO questions you linked are what I'm asking in Question 2, either. Question 2 is intended to mean: "Are there any theorems whose only known proof relies on a weakly inaccessible, but which people suspect to be provable without?" FLT was an example of this before it was verified (sketchily, at least) that the proof goes through without weakly inaccessibles. I will edit Question 2 to make this clear. – Quinn Culver Aug 19 2011 at 12:43
Maddy's inexhaustibility argument only works if she means second order replacement. – mbsq Aug 23 2011 at 1:31
I think the wording Maddy uses for inexhaustibility works as is. To say "closed under replacement" is not at all the same as saying "satisfies the replacement scheme" since the word closed usually refers to the outside world, the universe $V$ in this case. What Maddy says entails that some transitive set $X$ is closed under powerset and replacement from the point of view of $V$; this does imply that $X = V_\kappa$ where $\kappa$ is inaccessible. – François G. Dorais♦ Feb 25 at 16:52
François has excellently addressed your question 1; allow me to address question 2. I understand the question to be: what will be the mathematical effects if someone were to show that there are no (weakly) inaccessible cardinals? A similar question would apply to any of several large cardinals. So let me list some consequences.
First, let me note that the existence of a weakly inaccessible cardinal is provably equiconsistent with the existence of a (strongly) inaccessible cardinal, since any weakly inaccessible cardinal is strongly inaccessible in $L$, and so the issue about weakly or strongly inaccessible is entirely irrelevant when it comes to consistency.
Second, let me note that set theorists are not generally satisfied by claims of the sort "the only known proof uses such-and-such," but rather they use the concepts of consistency strength and equiconsistency, which allow for precise claims to be proved about exactly which large cardinals are required to prove which statements. The situation is that for many mathematical assertions, we can prove that any proof must use a certain type of large cardinal or something just as strong, in the sense that the consistency of the statement itself implies the consistency of the large cardinal in question. In this way, we avoid any problematic issue about knowledge concerning whether a better proof is simply not yet discovered.
As a result, if inaccessible cardinals should be refuted, then using the known results we immediately gain an enormous number of positive theorems. So it isn't really a case of losing theorems, but rather gaining.
Theorem. If inaccessible cardinals are inconsistent, then (we can prove that) we can construct a non-Lebesgue measurable set of reals without using the axiom of choice.
This follows from the fact that Solovay and Shelah have proved that the possibility of constructing a non-Lebesgue measurable set of reals (in the context of ZF+DC) without using AC is exactly equivalent to the inconsistency of inaccessible cardinals.
Most people believe that one must use AC in any Vitali-type construction of a non-Lebesgue measurable set, and the theorem above shows that this belief is provably equivalent to the consistency of inaccessible cardinals. Perhaps many mathematicians would find their confidence in the consistency of inaccessible cardinals to increase upon learning of this, and in this sense, this is also an answer to question 1. In any case, many well-known set theorists have emphasized enormous confidence in the consistency of large cardinals, and have stated quite explicitly that if inaccessible cardinals should become known to be inconsistent, then we should expect further inconsistency much lower in ZFC itself or in the low levels of PA.
Theorem. If inaccessible cardinals are inconsistent (and even merely if we can refute infinitely many Woodin cardinals), then (we can prove that) there is a projective set of reals $A\subset\mathbb{R}$ whose corresponding two-person game of perfect information has no winning strategy for either player. In other words, the infinitary de Morgan law $$\neg\forall n_0\exists n_1\forall n_2\exists n_3\cdots A(\vec n)\iff\exists n_0\forall n_1\exists n_2\forall n_3\cdots\neg A(\vec n)$$ will fail for some projective set $A$.
The projective sets of reals are those sets of reals that are definable by a property involving quantification only over real numbers and integers. The reason for the theorem is that projective determinacy is equiconsistent over ZFC with infinitely many Woodin cardinals, and so if we refute the large cardinals in ZFC, then we similarly refute projective determinacy.
Theorem. If inaccessible cardinals are inconsistent (and even if merely measurable cardinals are inconsistent), then (we can prove that) there is an analytic set (a continuous image of a Borel set) that is not determined.
Theorem. If inaccessible cardinals are inconsistent, then we can prove that the full set-theoretic universe is very close to the constructible universe in the sense of covering. In particular, $L$ computes the successors of singular cardinals correctly.
This shocking conclusion follows in this case from Jensen's covering lemma, since refuting inaccessible cardinals implies a refutation of $0^\sharp$.
Theorem. If inaccessible cardinals are inconsistent, then on no set is there a countably complete real-valued measure measuring all subsets of the set and giving points no mass.
This is simply because any real-valued measurable cardinal is measurable and hence inaccessible in an inner model.
Theorem. If inaccessible cardinals are inconsistent, then (we can prove that) there are no uncountable Grothendieck universes and the axiom of universes in category theory is false.
An uncountable Grothendieck universe is exactly $H_\kappa$ for an inaccessible cardinal $\kappa$, and the axiom of universes asserts that every set is in such a universe.
There are many more examples. (I invite any knowledgeable person to edit the answer with additional examples.)
@Joel Thanks for the great answer. Now I have two great answers, each to a different one of my questions, and so I don't know whose to accept. Is there a MO policy or tradition (e.g. accept the first one) for this type of situation? – Quinn Culver Aug 20 2011 at 2:05
Quinn, it really doesn't matter. I would encourage you to accept François's answer, since it is a really great summary of those philosophical arguments for inaccessible cardinals, and these ideas are not so widely known. – Joel David Hamkins Aug 20 2011 at 2:12
Joel, analytic sets are universally measurable, so I'm a bit confused by the sentence "If inaccessible cardinals are inconsistent (and even if merely measurable cardinals are inconsistent), then (we can prove that) there is an analytic set (a continuous image of a Borel set) that is not determined, not measurable, and etc." – Clinton Conley Aug 20 2011 at 2:12
Clinton, oops, the claim should just be about determinacy, and I have now edited. The measurability issue arises a bit higher in the descriptive set-theoretic hierarchy, I think at $\Sigma^1_2$, but I have to check. – Joel David Hamkins Aug 20 2011 at 2:48
Joel, I think you only pick up large cardinal strength from measurability of $\Sigma^1_3$ sets. Measurability of $\Sigma^1_2$ sets should follow from MA + not CH. – Clinton Conley Aug 20 2011 at 18:39
http://darrenjw.wordpress.com/
# Darren Wilkinson's research blog
Statistics, computing, Bayes, stochastic modelling, systems biology and bioinformatics
## Introduction to Approximate Bayesian Computation (ABC)
31/03/2013
Many of the posts in this blog have been concerned with using MCMC based methods for Bayesian inference. These methods are typically “exact” in the sense that they have the exact posterior distribution of interest as their target equilibrium distribution, but are obviously “approximate”, in that for any finite amount of computing time, we can only generate a finite sample of correlated realisations from a Markov chain that we hope is close to equilibrium.
Approximate Bayesian Computation (ABC) methods go a step further, and generate samples from a distribution which is not the true posterior distribution of interest, but a distribution which is hoped to be close to the real posterior distribution of interest. There are many variants on ABC, and I won’t get around to explaining all of them in this blog. The wikipedia page on ABC is a good starting point for further reading. In this post I’ll explain the most basic rejection sampling version of ABC, and in a subsequent post, I’ll explain a sequential refinement, often referred to as ABC-SMC. As usual, I’ll use R code to illustrate the ideas.
#### Basic idea
There is a close connection between “likelihood free” MCMC methods and those of approximate Bayesian computation (ABC). To keep things simple, consider the case of a perfectly observed system, so that there is no latent variable layer. Then there are model parameters $\theta$ described by a prior $\pi(\theta)$, and a forwards-simulation model for the data $x$, defined by $\pi(x|\theta)$. It is clear that a simple algorithm for simulating from the desired posterior $\pi(\theta|x)$ can be obtained as follows. First simulate from the joint distribution $\pi(\theta,x)$ by simulating $\theta^\star\sim\pi(\theta)$ and then $x^\star\sim \pi(x|\theta^\star)$. This gives a sample $(\theta^\star,x^\star)$ from the joint distribution. A simple rejection algorithm which rejects the proposed pair unless $x^\star$ matches the true data $x$ clearly gives a sample from the required posterior distribution.
#### Exact rejection sampling
• 1. Sample $\theta^\star \sim \pi(\theta^\star)$
• 2. Sample $x^\star\sim \pi(x^\star|\theta^\star)$
• 3. If $x^\star=x$, keep $\theta^\star$ as a sample from $\pi(\theta|x)$, otherwise reject.
• 4. Return to step 1.
This algorithm is exact, and for discrete $x$ will have a non-zero acceptance rate. However, in most interesting problems the rejection rate will be intolerably high. In particular, the acceptance rate will typically be zero for continuous valued $x$.
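As a tiny concrete illustration (my own toy example, not from the original post), consider inferring a binomial success probability from 7 successes in 10 trials under a uniform prior; here the data are discrete, so exact matching works:
```
N=1e5
x=7                       # observed number of successes out of 10 trials
theta=runif(N)            # theta* ~ pi(theta) = U(0,1)
xstar=rbinom(N,10,theta)  # x* ~ pi(x|theta*)
post=theta[xstar==x]      # keep theta* only when x* matches x exactly
hist(post,freq=FALSE)
curve(dbeta(x,8,4),add=TRUE,col=2)  # true posterior is Beta(8,4)
```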
#### ABC rejection sampling
The ABC “approximation” is to accept values provided that $x^\star$ is “sufficiently close” to $x$. In the first instance, we can formulate this as follows.
• 1. Sample $\theta^\star \sim \pi(\theta^\star)$
• 2. Sample $x^\star\sim \pi(x^\star|\theta^\star)$
• 3. If $\Vert x^\star-x\Vert< \epsilon$, keep $\theta^\star$ as a sample from $\pi(\theta|x)$, otherwise reject.
• 4. Return to step 1.
Euclidean distance is usually chosen as the norm, though any norm can be used. This procedure is “honest”, in the sense that it produces exact realisations from
$\theta^\star\big|\Vert x^\star-x\Vert < \epsilon.$
For a suitably small choice of $\epsilon$, this will closely approximate the true posterior. However, smaller choices of $\epsilon$ will lead to higher rejection rates. This will be a particular problem in the context of high-dimensional $x$, where it is often unrealistic to expect a close match between all components of $x$ and the simulated data $x^\star$, even for a good choice of $\theta^\star$. In this case, it makes more sense to look for good agreement between particular aspects of $x$, such as the mean, or variance, or auto-correlation, depending on the exact problem and context.
In the simplest case, this is done by forming a (vector of) summary statistic(s), $s(x^\star)$ (ideally a sufficient statistic), and accepting provided that $\Vert s(x^\star)-s(x)\Vert<\epsilon$ for some suitable choice of metric and $\epsilon$. We will return to this issue in a subsequent post.
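For example (a hypothetical sketch of my own, anticipating a later post; the summaries in ss below are illustrative, not the ones the original uses), such a summary-based distance might look like:
```
# illustrative summary statistics: componentwise means and log-variances
ss=function(ts) c(colMeans(ts),log(apply(ts,2,var)))
# squared Euclidean distance between summaries of simulated and observed series
distance=function(ts,obs) sum((ss(ts)-ss(obs))^2)
```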
#### Inference for an intractable Markov process
I’ll illustrate ABC in the context of parameter inference for a Markov process with an intractable transition kernel: the discrete stochastic Lotka-Volterra model. A function for simulating exact realisations from the intractable kernel is included in the smfsb CRAN package discussed in a previous post. Using pMCMC to solve the parameter inference problem is discussed in another post. It may be helpful to skim through those posts quickly to become familiar with this problem before proceeding.
So, for a given proposed set of parameters, realisations from the process can be sampled using the functions simTs and stepLV (from the smfsb package). We will use the sample data set LVperfect (from the LVdata dataset) as our “true”, or “target” data, and try to find parameters for the process which are consistent with this data. A fairly minimal R script for this problem is given below.
```require(smfsb)
data(LVdata)
N=1e5
message(paste("N =",N))
prior=cbind(th1=exp(runif(N,-4,2)),th2=exp(runif(N,-4,2)),th3=exp(runif(N,-4,2)))
rows=lapply(1:N,function(i){prior[i,]})
message("starting simulation")
samples=lapply(rows,function(th){simTs(c(50,100),0,30,2,stepLVc,th)})
message("finished simulation")
distance<-function(ts)
{
diff=ts-LVperfect
sum(diff*diff)
}
message("computing distances")
dist=lapply(samples,distance)
message("distances computed")
dist=sapply(dist,c)
cutoff=quantile(dist,1000/N)
post=prior[dist<cutoff,]
op=par(mfrow=c(2,3))
apply(post,2,hist,30)
apply(log(post),2,hist,30)
par(op)
```
This script should take 5-10 minutes to run on a decent laptop, and will result in histograms of the posterior marginals for the components of $\theta$ and $\log(\theta)$. Note that I have deliberately adopted a functional programming style, making use of the lapply function for the most computationally intensive steps. The reason for this will soon become apparent. Note that rather than pre-specifying a cutoff $\epsilon$, I’ve instead picked a quantile of the distance distribution. This is common practice in scenarios where it is difficult to have good intuition about the distance. In fact here I’ve gone a step further and chosen a quantile to give a final sample of size 1000. Obviously, in this case I could have just selected out the top 1000 directly, but I wanted to illustrate the quantile based approach.
One problem with the above script is that all proposed samples are stored in memory at once. This is problematic for problems involving large numbers of samples. However, it is convenient to do simulations in large batches, both for computation of quantiles, and also for efficient parallelisation. The script below illustrates how to implement a batch parallelisation strategy for this problem. Samples are generated in batches of size 10^4, and only the best fitting samples are stored before the next batch is processed. This strategy can be used to get a good sized sample based on a more stringent acceptance criterion at the cost of additional simulation time. Note that the parallelisation code will only work with recent versions of R, and works by replacing calls to lapply with the parallel version, mclapply. You should notice an appreciable speed-up on a multicore machine.
```require(smfsb)
require(parallel)
options(mc.cores=4)
data(LVdata)
N=1e5
bs=1e4
batches=N/bs
message(paste("N =",N," | bs =",bs," | batches =",batches))
distance<-function(ts)
{
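# note: unlike the first script, this distance is based only on the prey component (LVprey) of the data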
diff=ts[,1]-LVprey
sum(diff*diff)
}
post=NULL
for (i in 1:batches) {
message(paste("batch",i,"of",batches))
prior=cbind(th1=exp(runif(bs,-4,2)),th2=exp(runif(bs,-4,2)),th3=exp(runif(bs,-4,2)))
rows=lapply(1:bs,function(i){prior[i,]})
samples=mclapply(rows,function(th){simTs(c(50,100),0,30,2,stepLVc,th)})
dist=mclapply(samples,distance)
dist=sapply(dist,c)
cutoff=quantile(dist,1000/N)
post=rbind(post,prior[dist<cutoff,])
}
message(paste("Finished. Kept",dim(post)[1],"simulations"))
op=par(mfrow=c(2,3))
apply(post,2,hist,30)
apply(log(post),2,hist,30)
par(op)
```
Note that there is an additional approximation here, since the top 100 samples from each of 10 batches of simulations won’t correspond exactly to the top 1000 samples overall, but given all of the other approximations going on in ABC, this one is likely to be the least of your worries.
Now, if you compare the approximate posteriors obtained here with the “true” posteriors obtained in an earlier post using pMCMC, you will see that these posteriors are really quite poor. However, this isn’t a very fair comparison, since we’ve only done 10^5 simulations. Jacking N up to 10^7 gives the ABC posterior below.
ABC posterior from 10^7 samples
This is a bit better, but really not great. There are two basic problems with the simplistic ABC strategy adopted here, one related to the dimensionality of the data and the other the dimensionality of the parameter space. The most basic problem that we have here is the dimensionality of the data. We have 16 bivariate observations, so we want our stochastic simulation to shoot at a point in 32-dimensional space. That’s a tough call. The standard way to address this problem is to reduce the dimension of the data by introducing a few carefully chosen summary statistics and then just attempting to hit those. I’ll illustrate this in a subsequent post. The other problem is that often the prior and posterior over the parameters are quite different, and this problem too is exacerbated as the dimension of the parameter space increases. The standard way to deal with this is to sequentially adapt from the prior through a sequence of ABC posteriors. I’ll examine this in a future post as well, once I’ve also posted an introduction to the use of sequential Monte Carlo (SMC) samplers for static problems.
#### Further reading
For further reading, I suggest browsing the reference list of the Wikipedia page for ABC. Also look through the list of software on that page. In particular, note that there is a CRAN package, abc, providing R support for ABC. There is a vignette for this package which should be sufficient to get started.
## Getting started with Bayesian variable selection using JAGS and rjags
20/11/2012
#### Bayesian variable selection
In a previous post I gave a quick introduction to using the rjags R package to access the JAGS Bayesian inference from within R. In this post I want to give a quick guide to using rjags for Bayesian variable selection. I intend to use this post as a starting point for future posts on Bayesian model and variable selection using more sophisticated approaches.
I will use the simple example of multiple linear regression to illustrate the ideas, but it should be noted that I’m just using that as an example. It turns out that in the context of linear regression there are lots of algebraic and computational tricks which can be used to simplify the variable selection problem. The approach I give here is therefore rather inefficient for linear regression, but generalises to more complex (non-linear) problems where analytical and computational short-cuts can’t be used so easily.
Consider a linear regression problem with n observations and p covariates, which we can write in matrix form as
$y = \alpha \mathbf{1} + X\beta + \varepsilon,$
where $X$ is an $n\times p$ matrix. The idea of variable selection is that probably not all of the p covariates are useful for predicting y, and therefore it would be useful to identify the variables which are, and just use those. Clearly each combination of variables corresponds to a different model, and so the variable selection amounts to choosing among the $2^p$ possible models. For large values of p it won’t be practical to consider each possible model separately, and so the idea of Bayesian variable selection is to consider a model containing all of the possible model combinations as sub-models, and the variable selection problem as just another aspect of the model which must be estimated from data. I’m simplifying and glossing over lots of details here, but there is a very nice review paper by O’Hara and Sillanpaa (2009), to which the reader is referred for further details.
The simplest and most natural way to tackle the variable selection problem from a Bayesian perspective is to introduce an indicator random variable $I_i$ for each covariate, and introduce these into the model in order to “zero out” inactive covariates. That is we write the ith regression coefficient $\beta_i$ as $\beta_i=I_i\beta^\star_i$, so that $\beta^\star_i$ is the regression coefficient when $I_i=1$, and “doesn’t matter” when $I_i=0$. There are various ways to choose the prior over $I_i$ and $\beta^\star_i$, but the simplest and most natural choice is to make them independent. This approach was used in Kuo and Mallick (1998), and hence is referred to as the Kuo and Mallick approach in O’Hara and Sillanpaa.
#### Simulating some data
In order to see how things work, let’s first simulate some data from a regression model with geometrically decaying regression coefficients.
```n=500
p=20
X=matrix(rnorm(n*p),ncol=p)
beta=2^(0:(1-p))
print(beta)
alpha=3
tau=2
eps=rnorm(n,0,1/sqrt(tau))
y=alpha+as.vector(X%*%beta + eps)
```
Let’s also fit the model by least squares.
```mod=lm(y~X)
print(summary(mod))
```
This should give output something like the following.
```Call:
lm(formula = y ~ X)
Residuals:
Min 1Q Median 3Q Max
-1.62390 -0.48917 -0.02355 0.45683 2.35448
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.0565406 0.0332104 92.036 < 2e-16 ***
X1 0.9676415 0.0322847 29.972 < 2e-16 ***
X2 0.4840052 0.0333444 14.515 < 2e-16 ***
X3 0.2680482 0.0320577 8.361 6.8e-16 ***
X4 0.1127954 0.0314472 3.587 0.000369 ***
X5 0.0781860 0.0334818 2.335 0.019946 *
X6 0.0136591 0.0335817 0.407 0.684379
X7 0.0035329 0.0321935 0.110 0.912662
X8 0.0445844 0.0329189 1.354 0.176257
X9 0.0269504 0.0318558 0.846 0.397968
X10 0.0114942 0.0326022 0.353 0.724575
X11 -0.0045308 0.0330039 -0.137 0.890868
X12 0.0111247 0.0342482 0.325 0.745455
X13 -0.0584796 0.0317723 -1.841 0.066301 .
X14 -0.0005005 0.0343499 -0.015 0.988381
X15 -0.0410424 0.0334723 -1.226 0.220742
X16 0.0084832 0.0329650 0.257 0.797026
X17 0.0346331 0.0327433 1.058 0.290718
X18 0.0013258 0.0328920 0.040 0.967865
X19 -0.0086980 0.0354804 -0.245 0.806446
X20 0.0093156 0.0342376 0.272 0.785671
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.7251 on 479 degrees of freedom
Multiple R-squared: 0.7187, Adjusted R-squared: 0.707
F-statistic: 61.2 on 20 and 479 DF, p-value: < 2.2e-16
```
The first 4 variables are “highly significant” and the 5th is borderline.
#### Saturated model
We can fit the saturated model using JAGS with the following code.
```require(rjags)
data=list(y=y,X=X,n=n,p=p)
init=list(tau=1,alpha=0,beta=rep(0,p))
modelstring="
model {
for (i in 1:n) {
mean[i]<-alpha+inprod(X[i,],beta)
y[i]~dnorm(mean[i],tau)
}
for (j in 1:p) {
beta[j]~dnorm(0,0.001)
}
alpha~dnorm(0,0.0001)
tau~dgamma(1,0.001)
}
"
model=jags.model(textConnection(modelstring),
data=data,inits=init)
update(model,n.iter=100)
output=coda.samples(model=model,variable.names=c("alpha","beta","tau"),
n.iter=10000,thin=1)
print(summary(output))
plot(output)
```
I’ve hard-coded various hyper-parameters in the script which are vaguely reasonable for this kind of problem. I won’t include all of the output in this post, but this works fine and gives sensible results. However, it does not address the variable selection problem.
#### Basic variable selection
Let’s now modify the above script to do basic variable selection in the style of Kuo and Mallick.
```data=list(y=y,X=X,n=n,p=p)
init=list(tau=1,alpha=0,betaT=rep(0,p),ind=rep(0,p))
modelstring="
model {
for (i in 1:n) {
mean[i]<-alpha+inprod(X[i,],beta)
y[i]~dnorm(mean[i],tau)
}
for (j in 1:p) {
ind[j]~dbern(0.2)
betaT[j]~dnorm(0,0.001)
beta[j]<-ind[j]*betaT[j]
}
alpha~dnorm(0,0.0001)
tau~dgamma(1,0.001)
}
"
model=jags.model(textConnection(modelstring),
data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,
variable.names=c("alpha","beta","ind","tau"),
n.iter=10000,thin=1)
print(summary(output))
plot(output)
```
Note that I’ve hard-coded an expectation that around 20% of variables should be included in the model. Again, I won’t include all of the output here, but the posterior mean of the indicator variables can be interpreted as posterior probabilities that the variables should be included in the model. Inspecting the output then reveals that the first three variables have a posterior probability of very close to one, the 4th variable has a small but non-negligible probability of inclusion, and the other variables all have very small probabilities of inclusion.
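As a minimal sketch (my own, assuming the output object produced by the script above), the posterior inclusion probabilities can be pulled out of the coda summary like this:
```
# posterior means of the ind[] variables = posterior inclusion probabilities
smry=summary(output)
postmean=smry$statistics[,"Mean"]
print(postmean[grep("^ind",names(postmean))])
```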
This is fine so far as it goes, but is not entirely satisfactory. One problem is that the choice of a “fixed effects” prior for the regression coefficients of the included variables is likely to lead to a Lindley’s paradox type situation, and a consequent under-selection of variables. It is arguably better to model the distribution of included variables using a “random effects” approach, leading to a more appropriate distribution for the included variables.
#### Variable selection with random effects
Adopting a random effects distribution for the included coefficients that is normal with mean zero and unknown variance helps to combat Lindley’s paradox, and can be implemented as follows.
```data=list(y=y,X=X,n=n,p=p)
init=list(tau=1,taub=1,alpha=0,betaT=rep(0,p),ind=rep(0,p))
modelstring="
model {
for (i in 1:n) {
mean[i]<-alpha+inprod(X[i,],beta)
y[i]~dnorm(mean[i],tau)
}
for (j in 1:p) {
ind[j]~dbern(0.2)
betaT[j]~dnorm(0,taub)
beta[j]<-ind[j]*betaT[j]
}
alpha~dnorm(0,0.0001)
tau~dgamma(1,0.001)
taub~dgamma(1,0.001)
}
"
model=jags.model(textConnection(modelstring),
data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,
variable.names=c("alpha","beta","ind","tau","taub"),
n.iter=10000,thin=1)
print(summary(output))
plot(output)
```
This leads to a large inclusion probability for the 4th variable, and non-negligible inclusion probabilities for the next few (it is obviously somewhat dependent on the simulated data set). This random effects variable selection modelling approach generally performs better, but it still has the potentially undesirable feature of hard-coding the probability of variable inclusion. Under the prior model, the number of variables included is binomial, and the binomial distribution is rather concentrated about its mean. Where there is a general desire to control the degree of sparsity in the model, this is a good thing, but if there is considerable uncertainty about the degree of sparsity that is anticipated, then a more flexible model may be desirable.
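To get a feel for how concentrated the binomial prior on the number of included variables is, compared with the beta-binomial induced by the prior of the next section, one can simulate (a quick check of my own):
```
p=20
N=1e5
fixed=rbinom(N,p,0.2)   # number included with a fixed inclusion probability of 0.2
pind=rbeta(N,2,8)       # Beta(2,8) prior on the inclusion probability (mean 0.2)
flex=rbinom(N,p,pind)   # induced beta-binomial count
c(var(fixed),var(flex)) # the beta-binomial count is much more dispersed
```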
#### Variable selection with random effects and a prior on the inclusion probability
The previous model can be modified by introducing a Beta prior for the model inclusion probability. This induces a distribution for the number of included variables which has longer tails than the binomial distribution, allowing the model to learn about the degree of sparsity.
```data=list(y=y,X=X,n=n,p=p)
init=list(tau=1,taub=1,pind=0.5,alpha=0,betaT=rep(0,p),ind=rep(0,p))
modelstring="
model {
for (i in 1:n) {
mean[i]<-alpha+inprod(X[i,],beta)
y[i]~dnorm(mean[i],tau)
}
for (j in 1:p) {
ind[j]~dbern(pind)
betaT[j]~dnorm(0,taub)
beta[j]<-ind[j]*betaT[j]
}
alpha~dnorm(0,0.0001)
tau~dgamma(1,0.001)
taub~dgamma(1,0.001)
pind~dbeta(2,8)
}
"
model=jags.model(textConnection(modelstring),
data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,
variable.names=c("alpha","beta","ind","tau","taub","pind"),
n.iter=10000,thin=1)
print(summary(output))
plot(output)
```
It turns out that for this particular problem the posterior distribution is not very different to the previous case, as for this problem the hard-coded choice of 20% is quite consistent with the data. However, the variable inclusion probabilities can be rather sensitive to the choice of hard-coded proportion.
#### Conclusion
Bayesian variable selection (and model selection more generally) is a very delicate topic, and there is much more to say about it. In this post I’ve concentrated on the practicalities of introducing variable selection into JAGS models. For further reading, I highly recommend the review of O’Hara and Sillanpaa (2009), which discusses other computational algorithms for variable selection. I intend to discuss some of the other methods in future posts.
#### References
O’Hara, R. and Sillanpaa, M. (2009) A review of Bayesian variable selection methods: what, how and which. Bayesian Analysis, 4(1):85-118. [DOI, PDF, Supp, BUGS Code]
Kuo, L. and Mallick, B. (1998) Variable selection for regression models. Sankhya B, 60(1):65-81.
## Keeping R up to date on Ubuntu linux
10/11/2012
R is included as part of the standard Ubuntu distribution, and can be installed with a command like
```sudo apt-get install r-base
```
Obviously the software included as part of the standard distribution usually lags a little behind the latest version, and this is usually quite acceptable for most users most of the time. However, R is evolving quite quickly at the moment, and for various reasons I have decided to skip Ubuntu 12.10 (quantal) and stick with Ubuntu 12.04 (precise) for the time being. Since R 2.14 is included with Ubuntu 12.04, and I’d rather use R 2.15, I’d like to run with the latest R builds on my Ubuntu system.
Fortunately this is very easy, as there is a maintained repository for Ubuntu builds of R on CRAN. Full instructions are provided on CRAN, but here is the quick summary. First you need to know your nearest CRAN mirror – there is a list of mirrors on CRAN. I generally use the Bristol mirror, and so I will use it in the following.
```sudo su
echo "deb http://www.stats.bris.ac.uk/R/bin/linux/ubuntu precise/" >> /etc/apt/sources.list
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E084DAB9
apt-get update
apt-get upgrade
```
That’s it. You are updated to the latest version of R, and your system will check for updates in the usual way. There are just two things you may need to edit in line 2 above. The first is the address of the CRAN mirror (here “www.stats.bris.ac.uk”). The second is the name of the Ubuntu distro you are running (here “precise”).
## Inlining JAGS models in R scripts for rjags
02/10/2012
JAGS (Just Another Gibbs Sampler) is a general purpose MCMC engine similar to WinBUGS and OpenBUGS. I have a slight preference for JAGS as it is free and portable, works well on Linux, and interfaces well with R. It is tempting to write a tutorial introduction to JAGS and the corresponding R package, rjags, but there is a lot of material freely available on-line already, so it isn’t really necessary. If you are new to JAGS, I suggest starting with Getting Started with JAGS, rjags, and Bayesian Modelling. In this post I want to focus specifically on the problem of inlining JAGS models in R scripts as it can be very useful, and is usually skipped in introductory material.
#### JAGS and rjags on Ubuntu Linux
On recent versions of Ubuntu, assuming that R is already installed, the simplest way to install JAGS and rjags is using the command
```sudo apt-get install jags r-cran-rjags
```
Now rjags is a CRAN package, so it can be installed in the usual way with install.packages("rjags"). However, taking JAGS and rjags direct from the Ubuntu repos should help to ensure that the versions of JAGS and rjags are in sync, which is a good thing.
#### Toy model
For this post, I will use a trivial toy example of inference for the mean and precision of a normal random sample. That is, we will assume data
$X_i \sim N(\mu,1/\tau),\quad i=1,2,\ldots,n,$
with priors on $\mu$ and $\tau$ of the form
$\tau\sim Ga(a,b),\quad \mu \sim N(c,1/d).$
#### Separate model file
The usual way to fit this model in R using rjags is to first create a separate file containing the model
``` model {
for (i in 1:n) {
x[i]~dnorm(mu,tau)
}
mu~dnorm(cc,d)
tau~dgamma(a,b)
}
```
Then, supposing that this file is called jags1.jags, an R session to fit the model could be constructed as follows:
```require(rjags)
x=rnorm(15,25,2)
data=list(x=x,n=length(x))
hyper=list(a=3,b=11,cc=10,d=1/100)
init=list(mu=0,tau=1)
model=jags.model("jags1.jags",data=append(data,hyper), inits=init)
update(model,n.iter=100)
output=coda.samples(model=model,variable.names=c("mu", "tau"), n.iter=10000, thin=1)
print(summary(output))
plot(output)
```
This is all fine, and it can be very useful to have the model declared in a separate file, especially if the model is large and complex, and you might want to use it from outside R. However, very often for simple models it can be quite inconvenient to have the model separate from the R script which runs it. In particular, people often have issues with naming files correctly, making sure R is looking in the correct directory, moving the model with the R script, etc. So it would be nice to be able to just inline the JAGS model within an R script, to keep the model, the data, and the analysis all together in one place.
#### Using a temporary file
What we want to do is declare the JAGS model within a text string inside an R script and then somehow pass this into the call to jags.model(). The obvious way to do this is to write the string to a text file, and then pass the name of that text file into jags.model(). This works fine, but some care needs to be taken to make sure this works in a generic platform independent way. For example, you need to write to a file that you know doesn’t exist, in a directory that is writable, using a filename that is valid on the OS on which the script is being run. For this purpose R has an excellent little function called tempfile() which solves exactly this naming problem. It should always return the name of a file which does not exist, in a writable directory within the standard temporary file location on the OS on which R is being run. This function is exceedingly useful for all kinds of things, but doesn’t seem to be very well known by newcomers to R. Using this we can construct a stand-alone R script to fit the model as follows:
```require(rjags)
x=rnorm(15,25,2)
data=list(x=x,n=length(x))
hyper=list(a=3,b=11,cc=10,d=1/100)
init=list(mu=0,tau=1)
modelstring="
model {
for (i in 1:n) {
x[i]~dnorm(mu,tau)
}
mu~dnorm(cc,d)
tau~dgamma(a,b)
}
"
tmpf=tempfile()
tmps=file(tmpf,"w")
cat(modelstring,file=tmps)
close(tmps)
model=jags.model(tmpf,data=append(data,hyper), inits=init)
update(model,n.iter=100)
output=coda.samples(model=model,variable.names=c("mu", "tau"), n.iter=10000, thin=1)
print(summary(output))
plot(output)
```
Now, although there is a file containing the model temporarily involved, the script is stand-alone and portable.
#### Using a text connection
The solution above works fine, but still involves writing a file to disk and reading it back in again, which is a bit pointless in this case. We can solve this by using another under-appreciated R function, textConnection(). Many R functions which take a file as an argument will work fine if instead passed a textConnection object, and the rjags function jags.model() is no exception. Here, instead of writing the model string to disk, we can turn it into a textConnection object and then pass that directly into jags.model() without ever actually writing the model file to disk. This is faster, neater and cleaner. An R session which takes this approach is given below.
```require(rjags)
x=rnorm(15,25,2)
data=list(x=x,n=length(x))
hyper=list(a=3,b=11,cc=10,d=1/100)
init=list(mu=0,tau=1)
modelstring="
model {
for (i in 1:n) {
x[i]~dnorm(mu,tau)
}
mu~dnorm(cc,d)
tau~dgamma(a,b)
}
"
model=jags.model(textConnection(modelstring), data=append(data,hyper), inits=init)
update(model,n.iter=100)
output=coda.samples(model=model,variable.names=c("mu", "tau"), n.iter=10000, thin=1)
print(summary(output))
plot(output)
```
This is my preferred way to use rjags. Note again that textConnection objects have many and varied uses and applications that have nothing to do with rjags.
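For example (a trivial illustration of my own), the same trick lets you keep a small data set inline in a script:
```
df=read.csv(textConnection("x,y
1,2
3,4"))
print(df)
```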
## MCMC on the Raspberry Pi
07/07/2012
I’ve recently taken delivery of a Raspberry Pi mini computer. For anyone who doesn’t know, this is a low cost, low power machine, costing around 20 GBP (25 USD) and consuming around 2.5 Watts of power (it is powered by micro-USB). This amazing little device can run linux very adequately, and so naturally I’ve been interested to see if I can get MCMC codes to run on it, and to see how fast they run.
Now, I’m fairly sure that the majority of readers of this blog won’t want to be swamped with lots of Raspberry Pi related posts, so I’ve re-kindled my old personal blog for this purpose. Apart from this post, I’ll try not to write about my experiences with the Pi here on my main blog. Consequently, if you are interested in my ramblings about the Pi, you may wish to consider subscribing to my personal blog in addition to this one. Of course I’m not guaranteeing that the occasional Raspberry-flavoured post won’t find its way onto this blog, but I’ll try only to do so if it has strong relevance to statistical computing or one of the other core topics of this blog.
In order to get started with MCMC on the Pi, I’ve taken the C code gibbs.c for a simple Gibbs sampler described in a previous post (on this blog) and run it on a couple of laptops I have available, in addition to the Pi, and looked at timings. The full details of the experiment are recorded in this post over on my other blog, to which interested parties are referred. Here I will just give the “executive summary”.
The code runs fine on the Pi (running Raspbian), at around half the speed of my Intel Atom based netbook (running Ubuntu). My netbook in turn runs at around one fifth the speed of my Intel i7 based laptop. So the code runs at around one tenth of the speed of the fastest machine I have conveniently available.
As discussed over on my other blog, although the Pi is relatively slow, its low cost and low power consumption mean that it has a bang-for-buck comparable with high-end laptops and desktops. Further, a small cluster of Pis (known as a bramble) seems like a good, low cost way to learn about parallel and distributed statistical computing.
## Metropolis Hastings MCMC when the proposal and target have differing support
04/06/2012
### Introduction
Very often it is desirable to use Metropolis Hastings MCMC for a target distribution which does not have full support (for example, it may correspond to a non-negative random variable), using a proposal distribution which does (for example, a Gaussian random walk proposal). This isn’t a problem at all, but on more than one occasion now I have come across students getting this wrong, so I thought it might be useful to have a brief post on how to do it right, see what people sometimes get wrong, and why, and then think about correcting the wrong method in order to make it right…
### A simple example
For this post we will consider a simple $Ga(2,1)$ target distribution, with density
$\pi(x) = xe^{-x},\quad x\geq 0.$
Of course this is a very simple distribution, and there are many straightforward ways to simulate it directly, but for this post we will use a random walk Metropolis-Hastings (MH) scheme with standard Gaussian innovations. So, if the current state of the chain is $x$, a proposed new value $x^\star$ will be generated from
$f(x^\star|x) = \phi(x^\star-x),$
where $\phi(\cdot)$ is the standard normal density. This proposed new value is accepted with probability $\min\{1,A\}$, where
$\displaystyle A = \frac{\pi(x^\star)}{\pi(x)} \frac{f(x|x^\star)}{f(x^\star|x)} = \frac{\pi(x^\star)}{\pi(x)} \frac{\phi(x-x^\star)}{\phi(x^\star-x)} = \frac{\pi(x^\star)}{\pi(x)} ,$
since the standard normal density is symmetric.
#### Correct implementation
We can easily implement this using R as follows:
```met1=function(iters)
{
xvec=numeric(iters)
x=1
for (i in 1:iters) {
xs=x+rnorm(1)
A=dgamma(xs,2,1)/dgamma(x,2,1)
if (runif(1)<A)
x=xs
xvec[i]=x
}
return(xvec)
}
```
We can run it, plot the results and check it against the true target with the following commands.
```iters=1000000
out=met1(iters)
hist(out,100,freq=FALSE,main="met1")
curve(dgamma(x,2,1),add=TRUE,col=2,lwd=2)
```
If you have a slow computer, you may prefer to use iters=100000. The above code uses R’s built-in gamma density. Alternatively, we can hard-code the density as follows.
```met2=function(iters)
{
xvec=numeric(iters)
x=1
for (i in 1:iters) {
xs=x+rnorm(1)
A=xs*exp(-xs)/(x*exp(-x))
if (runif(1)<A)
x=xs
xvec[i]=x
}
return(xvec)
}
```
We can run this code using the following commands, to verify that it does work as expected.
```out=met2(iters)
hist(out,100,freq=FALSE,main="met2")
curve(dgamma(x,2,1),add=TRUE,col=2,lwd=2)
```
However, there is a potential problem with the above code that we have got away with in this instance, which often catches people out. We have hard-coded the density for $x>0$ without checking the sign of the proposed value. Here we get away with it as a negative proposal will lead to a negative acceptance ratio that we will reject straight away. This is not always the case (consider, for example, a $Ga(3,1)$ distribution). So really we should check the sign of $x^\star$ and reject immediately if it is not within the support of the target.
Although this problem often catches people out, it tends not to be a big issue in practice, as it typically leads to an obviously incorrect sampler, or a sampler which crashes, and is relatively simple to debug and fix.
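For concreteness, here is a minimal sketch (my own, not from the original post) of the hard-coded sampler with the support check made explicit:
```
met2b=function(iters)
{
  xvec=numeric(iters)
  x=1
  for (i in 1:iters) {
    xs=x+rnorm(1)
    if (xs>0) {  # only evaluate the acceptance ratio inside the support
      A=xs*exp(-xs)/(x*exp(-x))
      if (runif(1)<A)
        x=xs
    }            # a proposal outside the support is rejected outright
    xvec[i]=x
  }
  return(xvec)
}
```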
#### An incorrect sampler
The problem I want to focus on here is more subtle, but closely related. It is clear that any $x^\star<0$ should be rejected. With the above code, such values are indeed rejected, and the sampler advances to the next iteration. However, in more complex samplers, where an update like this might be one tiny part of a massive sampler with a very high-dimensional state space, it seems like a bit of a "waste" of an MH move to just propose a negative value, throw it away, and move on. It can therefore seem tempting to keep on sampling $x^\star$ values until a non-negative value is obtained, and then evaluate the acceptance ratio and decide whether or not to accept. We could code up this sampler as follows.
```met3=function(iters)
{
xvec=numeric(iters)
x=1
for (i in 1:iters) {
repeat {
xs=x+rnorm(1)
if (xs>0)
break
}
A=xs*exp(-xs)/(x*exp(-x))
if (runif(1)<A)
x=xs
xvec[i]=x
}
return(xvec)
}
```
As reasonable as this idea may at first seem, it does not lead to a sampler having the desired target, as can be verified using the following commands.
```out=met3(iters)
hist(out,100,freq=FALSE,main="met3")
curve(dgamma(x,2,1),add=TRUE,col=2,lwd=2)
```
So, this sampler seems to be sampling something close to the desired target, but not the same. This raises a couple of questions. First and most important, can we fix this sampler so that it does sample the correct target (yes), and second, can we figure out what target density the incorrect sampler is actually sampling (again, yes)? Let’s start with the issue of how to fix the sampler, as this will also help us to understand what the incorrect sampler is doing.
#### Fixing the truncated sampler
By repeatedly sampling from the proposal until we obtain a non-negative value, we are actually implementing a rejection sampler for sampling from the proposal distribution truncated at zero. This is a perfectly reasonable proposal distribution, so we can use it provided that we use the correct MH acceptance ratio. Now, the truncated proposal has the same density as the untruncated one, apart from the differing support and a normalising constant. Indeed, this may be why people often assume this method will work, because normalising constants often don’t matter in MH schemes. However, the normalising constant only doesn’t matter if it is independent of the state, and here it is not… Explicitly, we have
$f(x^\star|x) \propto \phi(x^\star-x),\quad x^\star>0.$
Including the normalising constant we have
$\displaystyle f(x^\star|x) = \frac{\phi(x^\star-x)}{\Phi(x)},\quad x^\star>0,$
where $\Phi(x)$ is the standard normal CDF. Consequently, the correct acceptance ratio to use with this proposal is
$\displaystyle A = \frac{\pi(x^\star)}{\pi(x)} \frac{\phi(x-x^\star)}{\phi(x^\star-x)}\frac{\Phi(x)}{\Phi(x^\star)} = \frac{\pi(x^\star)}{\pi(x)}\frac{\Phi(x)}{\Phi(x^\star)},$
where we see that the normalising constants do not cancel out. We can modify the previous sampler to use the correct acceptance ratio as follows.
```met4=function(iters)
{
xvec=numeric(iters)
x=1
for (i in 1:iters) {
repeat {
xs=x+rnorm(1)
if (xs>0)
break
}
A=xs*exp(-xs)/(x*exp(-x))
A=A*pnorm(x)/pnorm(xs)
if (runif(1)<A)
x=xs
xvec[i]=x
}
return(xvec)
}
```
We can verify that this sampler leads to the correct target with the following commands.
```out=met4(iters)
hist(out,100,freq=FALSE,main="met4")
curve(dgamma(x,2,1),add=TRUE,col=2,lwd=2)
```
So, truncating the proposal at zero is fine, provided that you modify the acceptance ratio accordingly.
#### What does the incorrect sampler target?
Now that we understand why the naive truncated sampler was wrong and how to fix it, we can, out of curiosity, wonder what distribution that sampler actually targets. Since we now understand what proposal we are actually using, we can re-write the acceptance ratio as
$\displaystyle A = \frac{\pi(x^\star)\Phi(x^\star)}{\pi(x)\Phi(x)}\frac{\frac{\phi(x-x^\star)}{\Phi(x^\star)}}{\frac{\phi(x^\star-x)}{\Phi(x)}},$
from which it is clear that the actual target of this chain is
$\tilde\pi(x) \propto \pi(x)\Phi(x),$
or
$\tilde\pi(x)\propto xe^{-x}\Phi(x),\quad x\geq 0.$
The constant of proportionality is not immediately obvious, but is tractable, and turns out to be a nice undergraduate exercise in integration by parts, leading to
$\displaystyle \tilde\pi(x) = \frac{2\sqrt{2\pi}}{2+\sqrt{2\pi}}xe^{-x}\Phi(x),\quad x\geq 0.$
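For the curious, here is one way the integration by parts can go (my own working, not spelled out in the original post). Taking $u=\Phi(x)$ and $dv=xe^{-x}\,dx$, so that $v=-(1+x)e^{-x}$, and using the identity $e^{-x}\phi(x)=e^{1/2}\phi(x+1)$,
$\displaystyle \int_0^\infty xe^{-x}\Phi(x)\,dx = \Big[-(1+x)e^{-x}\Phi(x)\Big]_0^\infty + \int_0^\infty (1+x)e^{-x}\phi(x)\,dx = \frac{1}{2} + e^{1/2}\int_1^\infty t\,\phi(t)\,dt = \frac{1}{2} + \frac{1}{\sqrt{2\pi}} = \frac{2+\sqrt{2\pi}}{2\sqrt{2\pi}},$
and the reciprocal of this integral is the normalising constant above.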
We can verify this using the following commands.
```out=met3(iters)
hist(out,100,freq=FALSE,main="met3")
curve(dgamma(x,2,1)*pnorm(x)*2*sqrt(2*pi)/(sqrt(2*pi)+2),add=TRUE,col=3,lwd=2)
```
Now we know the actual target of the incorrect sampler, we can compare it with the correct target as follows.
```curve(dgamma(x,2,1),0,10,col=2,lwd=2,main="Densities")
curve(dgamma(x,2,1)*pnorm(x)*2*sqrt(2*pi)/(sqrt(2*pi)+2),add=TRUE,col=3,lwd=2)
```
So we see that the distributions are different, but not so different that one would immediately suspect an error on the basis of a sample of output. This makes it a difficult bug to track down.
### Summary
There is no problem in principle using a proposal with full support for a target with limited support in MH algorithms. However, it is important to check whether a proposed value is within the support of the target and reject the proposed move if it is not. If you are concerned that such a scheme might be inefficient, it is possible to use a truncated proposal provided that you modify the MH acceptance ratio to include the relevant normalisation constants. If you don’t modify the acceptance probability, you will get a sampler which targets the wrong distribution, but it will often be quite similar to the correct target, making it a difficult bug to spot and track down.
## Gibbs sampling a Gaussian Markov random field (GMRF) using Java
01/06/2012
### Introduction
As I’ve explained previously, I’m gradually coming around to the idea of using Java for the development of MCMC codes, and I’m starting to build up a collection of simple examples for getting started. One of the advantages of Java is that it includes a standard cross-platform GUI library. This might not seem like the most important requirement for MCMC, but can actually be very handy in several contexts, particularly for monitoring convergence. One obvious context is that of image analysis, where it can be useful to monitor image reconstructions as the sampler is running. In this post I’ll show three very small simple Java classes which together provide an application for running a Gibbs sampler on a (non-stationary, unconditioned) Gaussian Markov random field.
The model is essentially that the distribution of each pixel is defined intrinsically, dependent only on its four nearest neighbours on a rectangular lattice, and here the distribution will be Gaussian with mean equal to the sample mean of the four neighbouring pixels and a fixed (unit) variance. On its own this isn’t especially useful, but it is a key component of many image analysis applications.
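In symbols (my paraphrase of the description above), the full conditional for an interior pixel is
$\displaystyle x_{i,j}\,|\,\cdot \sim N\left(\tfrac{1}{4}(x_{i-1,j}+x_{i+1,j}+x_{i,j-1}+x_{i,j+1}),\,1\right),$
with the obvious modifications at the edges and corners of the lattice.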
### A simple Java implementation
We will start with the class MrfApp containing the main method for the application:
MrfApp.java
```import java.io.*;
class MrfApp {
public static void main(String[] arg)
throws IOException
{
Mrf mrf;
System.out.println("started program");
mrf=new Mrf(800,600);
System.out.println("created mrf object");
mrf.update(1000);
System.out.println("done updates");
mrf.saveImage("mrf.png");
System.out.println("finished program");
mrf.frame.dispose();
System.exit(0);
}
}
```
Hopefully this code is largely self-explanatory, but relies on a class called Mrf which contains all of the logic associated with the GMRF.
Mrf.java
```import java.io.*;
import java.util.*;
import java.awt.image.*;
import javax.swing.*;
import javax.imageio.ImageIO;
class Mrf
{
int n,m;
double[][] cells;
Random rng;
BufferedImage bi;
WritableRaster wr;
JFrame frame;
ImagePanel ip;
Mrf(int n_arg,int m_arg)
{
n=n_arg;
m=m_arg;
cells=new double[n][m];
rng=new Random();
bi=new BufferedImage(n,m,BufferedImage.TYPE_BYTE_GRAY);
wr=bi.getRaster();
frame=new JFrame("MRF");
frame.setSize(n,m);
frame.add(new ImagePanel(bi));
frame.setVisible(true);
}
public void saveImage(String filename)
throws IOException
{
ImageIO.write(bi,"PNG",new File(filename));
}
public void updateImage()
{
double mx=-1e+100;
double mn=1e+100;
for (int i=0;i<n;i++) {
for (int j=0;j<m;j++) {
if (cells[i][j]>mx) { mx=cells[i][j]; }
if (cells[i][j]<mn) { mn=cells[i][j]; }
}
}
for (int i=0;i<n;i++) {
for (int j=0;j<m;j++) {
int level=(int) (255*(cells[i][j]-mn)/(mx-mn));
wr.setSample(i,j,0,level);
}
}
frame.repaint();
}
public void update(int num)
{
for (int i=0;i<num;i++) {
updateOnce();
}
}
private void updateOnce()
{
double mean;
for (int i=0;i<n;i++) {
for (int j=0;j<m;j++) {
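// full conditional mean: the average of the available neighbours,
// with special cases for the edges and corners of the lattice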
if (i==0) {
if (j==0) {
mean=0.5*(cells[0][1]+cells[1][0]);
}
else if (j==m-1) {
mean=0.5*(cells[0][j-1]+cells[1][j]);
}
else {
mean=(cells[0][j-1]+cells[0][j+1]+cells[1][j])/3.0;
}
}
else if (i==n-1) {
if (j==0) {
mean=0.5*(cells[i][1]+cells[i-1][0]);
}
else if (j==m-1) {
mean=0.5*(cells[i][j-1]+cells[i-1][j]);
}
else {
mean=(cells[i][j-1]+cells[i][j+1]+cells[i-1][j])/3.0;
}
}
else if (j==0) {
mean=(cells[i-1][0]+cells[i+1][0]+cells[i][1])/3.0;
}
else if (j==m-1) {
mean=(cells[i-1][j]+cells[i+1][j]+cells[i][j-1])/3.0;
}
else {
mean=0.25*(cells[i][j-1]+cells[i][j+1]+cells[i+1][j]
+cells[i-1][j]);
}
cells[i][j]=mean+rng.nextGaussian(); // Gibbs update: sample from N(mean,1)
}
}
updateImage();
}
}
```
This class contains a few simple methods for creating and updating the GMRF, and also for maintaining and updating a graphical view of the GMRF as the sampler is running. The Gibbs sampler update itself is encoded in the final method, updateOnce, and most of the code is to deal with edge and corner cases (in the literal rather than metaphorical sense!). This is called repeatedly by the method update for the required number of iterations. At the end of each iteration, the method updateOnce triggers updateImage, which updates the image associated with the GMRF. The GMRF itself is stored in a 2-dimensional array of doubles, but an image pixel typically consists of a grayscale value represented by an unsigned byte – that is, an integer from 0 to 255. So updateImage scans through the GMRF to find the maximum and minimum values and then maps the GMRF values onto the 0 to 255 scale. The image itself is set up by the constructor method, Mrf. This class relies on an additional class called ImagePanel, which is a simple GUI panel for displaying images:
ImagePanel.java
```import java.awt.*;
import java.awt.image.*;
import javax.swing.*;
class ImagePanel extends JPanel {
protected BufferedImage image;
public ImagePanel(BufferedImage image) {
this.image=image;
Dimension dim=new Dimension(image.getWidth(),image.getHeight());
setPreferredSize(dim);
setMinimumSize(dim);
revalidate();
repaint();
}
public void paintComponent(Graphics g) {
g.drawImage(image,0,0,this);
}
}
```
This completes the application, which can be compiled and run from the command line with
```javac *.java
java MrfApp
```
This should compile the code and run the application, which will show a GMRF updating for 1000 iterations. When the 1000 iterations are complete, the application writes the final image to a file and then quits.
### Using Parallel COLT
The above classes are very convenient, as they should work with any standard Java installation. However, in more complex scenarios, it is likely that a math library such as Parallel COLT will be required. In this case it will make sense to make use of features in the COLT library, such as random number generators and 2d matrix objects. We can adapt the above application by replacing the MrfApp and Mrf classes with the following versions (the ImagePanel class remains unchanged):
MrfApp.java
```import java.io.*;
import cern.jet.random.tdouble.engine.*;
class MrfApp {
public static void main(String[] arg)
throws IOException
{
Mrf mrf;
int seed=1234;
System.out.println("started program");
DoubleRandomEngine rngEngine=new DoubleMersenneTwister(seed);
mrf=new Mrf(800,600,rngEngine);
System.out.println("created mrf object");
mrf.update(1000);
System.out.println("done updates");
mrf.saveImage("mrf.png");
System.out.println("finished program");
mrf.frame.dispose();
System.exit(0);
}
}
```
Mrf.java
```import java.io.*;
import java.util.*;
import java.awt.image.*;
import javax.swing.*;
import javax.imageio.ImageIO;
import cern.jet.random.tdouble.*;
import cern.jet.random.tdouble.engine.*;
import cern.colt.matrix.tdouble.impl.*;
class Mrf
{
int n,m;
DenseDoubleMatrix2D cells;
DoubleRandomEngine rng;
Normal rngN;
BufferedImage bi;
WritableRaster wr;
JFrame frame;
ImagePanel ip;
Mrf(int n_arg,int m_arg,DoubleRandomEngine rng)
{
n=n_arg;
m=m_arg;
cells=new DenseDoubleMatrix2D(n,m);
this.rng=rng;
rngN=new Normal(0.0,1.0,rng);
bi=new BufferedImage(n,m,BufferedImage.TYPE_BYTE_GRAY);
wr=bi.getRaster();
frame=new JFrame("MRF");
frame.setSize(n,m);
frame.add(new ImagePanel(bi));
frame.setVisible(true);
}
public void saveImage(String filename)
throws IOException
{
ImageIO.write(bi,"PNG",new File(filename));
}
public void updateImage()
{
double mx=-1e+100;
double mn=1e+100;
for (int i=0;i<n;i++) {
for (int j=0;j<m;j++) {
if (cells.getQuick(i,j)>mx) { mx=cells.getQuick(i,j); }
if (cells.getQuick(i,j)<mn) { mn=cells.getQuick(i,j); }
}
}
for (int i=0;i<n;i++) {
for (int j=0;j<m;j++) {
int level=(int) (255*(cells.getQuick(i,j)-mn)/(mx-mn));
wr.setSample(i,j,0,level);
}
}
frame.repaint();
}
public void update(int num)
{
for (int i=0;i<num;i++) {
updateOnce();
}
}
private void updateOnce()
{
double mean;
for (int i=0;i<n;i++) {
for (int j=0;j<m;j++) {
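// full conditional mean: the average of the available neighbours,
// with special cases for the edges and corners of the lattice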
if (i==0) {
if (j==0) {
mean=0.5*(cells.getQuick(0,1)+cells.getQuick(1,0));
}
else if (j==m-1) {
mean=0.5*(cells.getQuick(0,j-1)+cells.getQuick(1,j));
}
else {
mean=(cells.getQuick(0,j-1)+cells.getQuick(0,j+1)+cells.getQuick(1,j))/3.0;
}
}
else if (i==n-1) {
if (j==0) {
mean=0.5*(cells.getQuick(i,1)+cells.getQuick(i-1,0));
}
else if (j==m-1) {
mean=0.5*(cells.getQuick(i,j-1)+cells.getQuick(i-1,j));
}
else {
mean=(cells.getQuick(i,j-1)+cells.getQuick(i,j+1)+cells.getQuick(i-1,j))/3.0;
}
}
else if (j==0) {
mean=(cells.getQuick(i-1,0)+cells.getQuick(i+1,0)+cells.getQuick(i,1))/3.0;
}
else if (j==m-1) {
mean=(cells.getQuick(i-1,j)+cells.getQuick(i+1,j)+cells.getQuick(i,j-1))/3.0;
}
else {
mean=0.25*(cells.getQuick(i,j-1)+cells.getQuick(i,j+1)+cells.getQuick(i+1,j)
+cells.getQuick(i-1,j));
}
cells.setQuick(i,j,mean+rngN.nextDouble()); // Gibbs update: sample from N(mean,1)
}
}
updateImage();
}
}
```
Again, the code should be reasonably self-explanatory, and will compile and run in the same way provided that Parallel COLT is installed and in your classpath. This version runs approximately twice as fast as the previous version on all of the machines I’ve tried it on.
### Reference
I have found the following book very useful for understanding how to work with images in Java:
Hunt, K.A. (2010) The Art of Image Processing with Java, A K Peters/CRC Press.
Tags: analysis, field, gaussian, Gibbs, gmrf, gui, image, Java, Markov, MCMC, mrf, processing, random, sampler, sampling
## Multivariate data analysis (using R)
29/05/2012
I’ve been very quiet on-line in the last few months, due mainly to the fact that I’ve been writing a new undergraduate course on multivariate data analysis. Although there are many books and on-line notes on the general topic of multivariate statistics, I wanted to do something a little bit different from any text I have yet discovered. First, I wanted to have a strong emphasis on using techniques in practice on example data sets of reasonable size. For this, I found Hastie et al (2009) to be very useful, as it covered some interesting example data sets which have been bundled in the CRAN R package, ElemStatLearn. I used several of the data sets from this package as running examples throughout the course. In fact my initial plan was to use Hastie et al as the main course text, but it turned out that this text was in some places overly technical and in many places far too terse to be good as an undergraduate text. I would still recommend the book for researchers who want a good overview of the interface between statistics and machine learning, but with hindsight I’m not convinced it is ideal for typical statistics undergraduate students.
I also wanted to have a strong emphasis on numerical linear algebra as the basis for multivariate statistical computation. Again, this is a bit different from “old school” multivariate statistics (which reminds me, John Marden has produced a great text available freely on-line on old school multivariate analysis, which isn’t quite as “old school” as the title might suggest). I wanted to spend some time talking about linear systems and matrix factorisations, explaining, for example, how the LU decomposition, the Cholesky factorisation and the QR factorisation are related, and why the latter two are both fundamental to multivariate data analysis, and how the singular value decomposition (SVD) is related to the spectral decomposition, and why it is generally better to construct principal components from the SVD of the centred data matrix than the eigen-decomposition of the sample variance matrix, etc. These sorts of topics are not often covered in undergraduate statistics courses, but they are crucial to understanding how to analyse large multivariate data sets in a numerically stable way.
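As a small illustration of the SVD point (a sketch on simulated data, not an excerpt from the course notes):
```set.seed(1)
X=matrix(rnorm(200),ncol=10)
Xc=sweep(X,2,colMeans(X)) # centred data matrix
V1=eigen(var(X))$vectors # eigen-decomposition of the sample variance matrix
V2=svd(Xc)$v # right singular vectors of the centred data matrix
max(abs(abs(V1)-abs(V2))) # columns agree up to sign, so this is numerically zero
```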
I also wanted to downplay distribution theory as much as possible, as multivariate distribution theory is quite difficult, and not necessary for understanding most of the essential concepts in multivariate data analysis. Also, it is not obviously very useful. Essentially all introductory courses are based around the multivariate normal distribution, but I have yet to see a real non-trivial multivariate data set for which an assumption of multivariate normality is appropriate. Consequently I delayed the introduction of the multivariate normal until well into the course, and didn’t bother with the Wishart distribution, or testing for multivariate normality. Like much frequentist testing, it is really just a matter of seeing if you have yet collected a large enough sample to reject the null hypothesis – I just don’t see the point (null)!
Finally, I wanted to use R to illustrate all of the methods in practice as they were introduced. We use R throughout our undergraduate statistics programme, and I think it is a good language for learning about statistical methods, algorithms and concepts. In most cases I begin by showing how to carry out analyses using “elementary” operations (such as matrix manipulations), and then go on to show how to accomplish the same task more simply using higher-level R functions and packages. Again, I think it really helps understanding to first see the mathematical description directly translated into computer code before jumping to high-level data analysis functions.
There are several aspects of the course that I would like to distil out into self-contained blog posts, but I have a busy summer schedule, and a couple of other things I want to write about before I’ll have a chance to get around to it, so in the mean time, anyone interested is welcome to download a copy of the course notes (PDF, with hyperlinks). This is the student version, containing gaps, but the gaps mainly correspond to bits of standard theory and examples designed to be worked through by hand. All of the essential theory and context and all of the R examples are present in this version of the notes. There are seven chapters: Introduction to multivariate data; PCA and matrix factorisations; Inference, the MVN and multivariate regression; Cluster analysis and unsupervised learning; Discrimination and classification; Graphical modelling; Variable selection and multiple testing.
Tags: algebra, analysis, analytics, big, Cholesky, data, large, linear, matrix, multivariate, numerical, QR, R, rstats, statistics, SVD
## Catalogue of my first 25 blog posts
30/12/2011
This is my 25th blog post, so this seems like a good time to provide an index to those first 25 posts for ease of reference. I’ve covered a range of topics over my first two years of blogging, and managed to average almost one post per month, as suggested in my first post. Due to the rather occasional nature of my posting, most regular readers subscribe to my RSS feed using some kind of RSS feed reader. I use Google Reader for following blogs and other RSS feeds, which I find very convenient as it is web based and therefore synced across all the machines and devices I use, but there are plenty of other options. Alternatively, you can follow me on twitter, where I am @darrenjw, or my Google+ feed, as I always announce new posts on those platforms.
### Blog posts
• 2. Hypercubes in R (getting started with programming in R): Constructing, rotating and plotting (2d projections of) hypercubes in order to illustrate some elementary R programming concepts.
• 3. Systems biology, mathematical modelling and mountains: My review of an excellent workshop I participated in at BIRS on Multi-scale Stochastic Modeling of Cell Dynamics.
• 4. (Yet another) introduction to R and Bioconductor: A very quick tutorial on basic R concepts and getting started with Bioconductor – very first steps.
• 5. MCMC programming in R, Python, Java and C: this post showed how to implement a very simple bivariate Gibbs sampler in four different programming languages, and compared the speeds. The post has now been superseded by post number 18.
• 6. The last Valencia meeting on Bayesian Statistics and the future of Bayesian computation: My impressions of Bayes 9, together with some thoughts on Bayesian computing in the context of multicore CPUs, GPUs, clusters and “Big Data”.
• 7. Metropolis-Hastings MCMC algorithms: A quick introduction to the Metropolis algorithm, with example code in R, discussing implementation issues.
• 8. The pseudo-marginal approach to “exact approximate” MCMC algorithms: a simple explanation of the “pseudo-marginal” idea, which has many potential applications in Bayesian inference.
• 9. Introduction to the processing of short read next generation sequencing data: a quick introduction to high-throughput sequencing data, the FASTQ file format, and the use of Unix and command-line tools for initial processing and analysis of FASTQ files.
• 10. A quick introduction to the Bioconductor ShortRead package for the analysis of NGS data: a follow-on from post 9. where I show how to get started with the analysis of FASTQ sequencing data using R and Bioconductor.
• 11. Getting started with parallel MCMC: an introduction to parallel Monte Carlo algorithms and their implementation using C, the GSL, and MPI.
• 12. Calling C code from R: how to call a Gibbs sampler written in C from R.
• 13. Calling Java code from R: how to call a Gibbs sampler written in Java from R.
• 14. Parallel Monte Carlo with an Intel i7 Quad Core: a quick look at the potential speed-ups possible exploiting parallelisation on a laptop with a nice Quad-core CPU.
• 15. MCMC, Monte Carlo likelihood estimation, and the bootstrap particle filter: how the pseudo-marginal idea discussed in post number 8. can be exploited for state-space models by using a simple particle filter to construct an unbiased estimate of marginal likelihood.
• 16. The particle marginal Metropolis-Hastings (PMMH) particle MCMC algorithm: following on from the previous post, an explanation of the full PMMH pMCMC algorithm for simultaneous estimation of parameters and state for state-space models.
• 17. Java math libraries and Monte Carlo simulation codes: a post bemoaning the lack of anything quite like the GSL C library in Java, but highlighting some reasonable alternatives (COLT, Parallel COLT and Apache Commons Math).
• 18. Gibbs sampler in various languages (revisited): an updated version of post number 5, including detailed timings. I also take the opportunity to include new languages PyPy, Groovy and Scala.
• 19. Faster Gibbs sampling MCMC from within R: How to call MCMC code written in C, C++ and Java from R, with timing details.
• 20. Stochastic Modelling for Systems Biology, second edition: A quick introduction to the 2nd edition of my book, together with a tutorial introduction to the associated CRAN R package, smfsb, for simulation and inference of stochastic kinetic network models and other Markov processes.
• 21. Particle filtering and pMCMC using R: code for particle filtering and particle MCMC for Markov processes, in R.
• 22. Review of “Parallel R” by McCallum and Weston: my somewhat critical review of this book.
• 23. Lexical scope and function closures in R: an introduction to notions of variable scope and closure in the context of R.
• 24. Parallel particle filtering and pMCMC using R and multicore: a discussion of the parallelisation of the particle filtering code from post 21. using R’s high-level parallelisation constructs.
• 25. Catalogue of my first 25 blog posts: this post!
Should I ever manage another 25 posts, I’ll do a similar review of the next 25 at post number 50…
## Parallel particle filtering and pMCMC using R and multicore
29/12/2011
### Introduction
In a previous post I showed how to construct a PMMH pMCMC algorithm for parameter estimation with partially observed Markov processes. The inner loop of a pMCMC algorithm consists of running a particle filter to construct an unbiased estimate of marginal likelihood. This inner loop is the place where the code spends almost all of its time, and so speeding up the particle filter will result in dramatic speedup of the pMCMC algorithm. This is fortunate, since as previously discussed, MCMC algorithms are difficult to parallelise other than on a per iteration basis. Here, each iteration can be speeded up if we can effectively parallelise a particle filter. Particle filters are much easier to parallelise than MCMC algorithms, and so it is tempting to try and exploit this within R. In fact, although it is the case that it is possible to effectively parallelise particle filters in efficient languages using low-level parallelisation tools (say, using C with MPI, or Java concurrency tools), it is not so easy to speed up R-based particle filters using R’s high-level parallelisation constructs, as we shall see.
### Particle filters
In the previous post we looked at the function pfMLLik within the CRAN package smfsb. As a reminder, the source code is
```pfMLLik <- function (n, simx0, t0, stepFun, dataLik, data)
{
times = c(t0, as.numeric(rownames(data)))
deltas = diff(times)
return(function(...) {
xmat = simx0(n, t0, ...)
ll = 0
for (i in 1:length(deltas)) {
xmat = t(apply(xmat, 1, stepFun, t0 = times[i], deltat = deltas[i], ...))
w = apply(xmat, 1, dataLik, t = times[i + 1], y = data[i,], log = FALSE, ...)
if (max(w) < 1e-20) {
warning("Particle filter bombed")
return(-1e+99)
}
ll = ll + log(mean(w))
rows = sample(1:n, n, replace = TRUE, prob = w)
xmat = xmat[rows, ]
}
ll
})
}
```
The function itself doesn’t actually run a particle filter, but instead returns a function closure which does (see the previous post for a discussion of lexical scope and function closures in R). There are obviously several different steps within the particle filter, and several of these are amenable to parallelisation. However, for complex models, forward simulation from the model will be the rate-limiting step, where the vast majority of CPU cycles will be spent. Line 9 in the above code is where forward simulation takes place, and in particular, the key function call is the apply call:
```apply(xmat, 1, stepFun, t0 = times[i], deltat = deltas[i], ...)
```
This call applies the forward simulation algorithm stepFun to each row of the matrix xmat independently. Since there are no dependencies between the function calls, this is in principle very straightforward to parallelise on multicore hardware.
### Multicore support in R
I’m writing this post on a laptop with an Intel i7 quad core chip, running the 64 bit version of Ubuntu 11.10. R has support for multicore processing on this platform – it is just a simple matter of installing the relevant packages. However, things are changing rapidly regarding multicore support in R right now, so YMMV. Ubuntu 11.10 has R 2.13 by default, but the multicore support is slightly different in the recently released R 2.14. I’m still using R 2.13. I may update this post (or comment) when I move to R 2.14. The main difference is that the package multicore has been replaced by the package parallel. There are a few other minor changes, but it should be easy to adapt what is presented here to 2.14.
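For anyone already on R 2.14, the analogous set-up using the new parallel package looks roughly like this (an untested sketch on my part):
```library(parallel)
detectCores() # exported directly, so no ::: trick needed
unlist(mclapply(1:4,function(i) i*i,mc.cores=4))
```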
There is a new O’Reilly book called Parallel R. I’ve got a copy of it. It does cover the new parallel package in R 2.14, as well as other parallel R topics, but the book is a bit lightweight, to say the least, and I reviewed it on this blog. Please read my review for further details before you buy it.
If you haven’t used multicore in R previously, then
```install.packages(c("multicore","doMC"))
```
should get you started (again, I’m assuming that your R version is strictly < 2.14). You can test it has worked with:
```library(multicore)
multicore:::detectCores()
```
When I do this, I get the answer 8 (I have 4 cores, each of which is hyper-threaded). To begin with, I want to tell R to use just 4 process threads, and I can do this with
```library(doMC)
registerDoMC(4)
```
Replacing the second line with registerDoMC() will set things up to use all detected cores (in my case, 8). There are a couple of different strategies we could use to parallelise this. One strategy for parallelising the apply call discussed above is to replace it with a foreach / %dopar% loop. This is best illustrated by example. Start with line 9 from the function pfMLLik:
```xmat = t(apply(xmat, 1, stepFun, t0 = times[i], deltat = deltas[i], ...))
```
We can produce a parallelised version by replacing this line with the following block of code:
```res=foreach(j=1:dim(xmat)[1]) %dopar% {
stepFun(xmat[j,], t0 = times[i], deltat = deltas[i], ...)
}
xmat=t(sapply(res,cbind))
```
Each iteration of the foreach loop is executed independently (possibly using multiple cores), and the result of each iteration is returned as a list, and captured in res. This list of return vectors is then coerced back into a matrix with the final line.
In fact, we can improve on this by using the .combine argument to foreach, which describes how to combine the results from each iteration. Here we can just use rbind to combine the results into a matrix, using:
```xmat=foreach(j=1:dim(xmat)[1], .combine="rbind") %dopar% {
stepFun(xmat[j,], t0 = times[i], deltat = deltas[i], ...)
}
```
This code is much neater, and in principle ought to be a bit faster, though I haven’t noticed much difference in practice.
In fact, it is not necessary to use the foreach construct at all. The multicore package provides the mclapply function, which is a multicore version of lapply. To use mclapply (or, indeed, lapply) here, we first need to split our matrix into a list of rows, which we can do using the split command. So in fact, our apply call can be replaced with the single line:
```xmat=t(sapply(mclapply(split(xmat,row(xmat)), stepFun, t0=times[i], deltat=deltas[i], ...),cbind))
```
This is actually a much cleaner solution than the method using foreach, but it does require grokking a bit more R. Note that mclapply uses a different method to specify the number of threads to use than foreach/doMC. Here you can either use the named argument to mclapply, mc.cores, or use options(), e.g. options(cores=4).
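As a concrete sketch (reusing the loop variables from the filter code above; note that mc.cores is consumed by mclapply itself rather than being passed on to stepFun):
```options(cores=4) # set a default for multicore functions
# or, equivalently, per call:
xmat=t(sapply(mclapply(split(xmat,row(xmat)), stepFun, t0=times[i], deltat=deltas[i], mc.cores=4),cbind))
```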
As well as being much cleaner, I find that the mclapply approach is much faster than the foreach/dopar approach for this problem. I’m guessing that this is because foreach doesn’t pre-schedule tasks by default, whereas mclapply does, but I haven’t had a chance to dig into this in detail yet.
### A parallelised particle filter
We can now splice the parallelised forward simulation step (using mclapply) back into our particle filter function to get:
```require(multicore)
pfMLLik <- function (n, simx0, t0, stepFun, dataLik, data)
{
times = c(t0, as.numeric(rownames(data)))
deltas = diff(times)
return(function(...) {
xmat = simx0(n, t0, ...)
ll = 0
for (i in 1:length(deltas)) {
xmat=t(sapply(mclapply(split(xmat,row(xmat)), stepFun, t0=times[i], deltat=deltas[i], ...),cbind))
w = apply(xmat, 1, dataLik, t = times[i + 1], y = data[i,], log = FALSE, ...)
if (max(w) < 1e-20) {
warning("Particle filter bombed")
return(-1e+99)
}
ll = ll + log(mean(w))
rows = sample(1:n, n, replace = TRUE, prob = w)
xmat = xmat[rows, ]
}
ll
})
}
```
This can be used in place of the version supplied with the smfsb package for slow simulation algorithms running on modern multicore machines.
There is an issue regarding Monte Carlo simulations such as this and the multicore package (whether you use mclapply or foreach/dopar) in that it adopts a “different seeds” approach to parallel random number generation, rather than a true parallel random number generator. This probably isn’t worth worrying too much about now, since it is fixed in the new parallel package in R 2.14, but is something to be aware of. I discuss parallel random number generation issues in Wilkinson (2005).
### Granularity
The above code is now a parallel particle filter, and can now be used in place of the serial version that is part of the smfsb package. However, if you try it out on a simple example, you will most likely be disappointed. In particular, if you use it for the pMCMC example discussed in the previous post, you will see that the parallel version of the example actually runs much slower than the serial version (at least, it does for me). However, that is because the forward simulator stepFun, used in that example, was actually a very fast simulation algorithm, stepLVc, written in C. In this case, the overhead of setting up and closing down the threads, and distributing the tasks, and collating the results from the worker threads back in the master thread, etc., outweighs the advantage of running the individual tasks in parallel. This is why parallel programming is difficult. What is needed here is for the individual tasks to be sufficiently computationally intensive that the overheads associated with parallelisation are easily outweighed by the ability to run the tasks in parallel. In the context of particle filtering, this is particularly problematic, as if the forward simulator is very slow, running a reasonable particle filter is going to be very, very slow, and then you probably don’t want to be working in R anyway… Parallelising a particle filter written in C using MPI is much more likely to be successful, as it offers much more fine grained control of exactly how the tasks and processes are managed, but at the cost of increased development time. In a previous post I gave an introduction to parallel Monte Carlo with C and MPI, and I’ve written more extensively about parallel MCMC in Wilkinson (2005). It also looks as though the new parallel package in R 2.14 offers more control of parallelisation, so that also might help. However, if you are using a particle filter as part of a pMCMC algorithm, there is another strategy you can use at a higher level of granularity which might be useful even within R in some situations.
### Multiple particle filters and pMCMC
Let’s look again at the main loop of the pMCMC algorithm discussed in the previous post:
```for (i in 1:iters) {
message(paste(i,""),appendLF=FALSE)
for (j in 1:thin) {
thprop=th*exp(rnorm(p,0,tune))
llprop=mLLik(thprop)
if (log(runif(1)) < llprop - ll) {
th=thprop
ll=llprop
}
}
thmat[i,]=th
}
```
It is clear that the main computational bottleneck of this code is the call to mLLik on line 5, as this is the call which runs the particle filter. The purpose of making the call is to obtain an unbiased estimate of marginal likelihood. However, there are plenty of other ways that we can obtain such estimates than by running a single particle filter. In particular, we could run multiple particle filters and average the results. So, let’s look at how to do this in the multicore setting. Let’s start by thinking about running 4 particle filters. We could just replace the line
```llprop=mLLik(thprop)
```
with the code
```llprop=0.25*foreach(i=1:4, .combine="+") %dopar% {
mLLik(thprop)
}
```
Now, there are at least 2 issues with this. The first is that we are now just running 4 particle filters rather than 1, and so even with perfect parallelisation, it will run no quicker than the code we started with. However, the idea is that by running 4 particle filters we ought to be able to get away with each particle filter using fewer particles, though it isn’t trivial to figure out exactly how many. For example, averaging the results from 4 particle filters, each of which uses 25 particles, is not as good as running a single particle filter with 100 particles. In practice, some trial and error is likely to be required. The second problem is that we have computed the mean of the log of the likelihoods, and not the likelihoods themselves. This will almost certainly work fine in practice, as the resulting estimate will in most cases be very close to unbiased, but it will not be exactly unbiased, and so will not lead to an “exact” approximate algorithm. In principle, this can be fixed by instead using
```res=foreach(i=1:4) %dopar% {
mLLik(thprop)
}
llprop=log(mean(sapply(res,exp)))
```
but in practice this is likely to be subject to numerical underflow problems, as it involves manipulating raw likelihood values, which is generally a bad idea. It is possible to compute the log of the mean of the likelihoods in a more numerically stable way, but that is left as an exercise for the reader, as this post is way too long already… However, one additional tip worth mentioning is that the foreach package includes a convenience function called times for situations like the above, where the argument is not varying over calls. So the above code can be replaced with
```res=times(4) %dopar% mLLik(thprop)
llprop=log(mean(sapply(res,exp)))
```
which is a bit cleaner and more readable.
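For completeness, the numerically stable computation alluded to above is just the standard log-sum-exp trick; a minimal sketch:
```logmeanexp=function(ll) {
  m=max(ll)
  m+log(mean(exp(ll-m))) # subtract the max before exponentiating to avoid underflow
}
# so that, e.g.:
# llprop=logmeanexp(unlist(times(4) %dopar% mLLik(thprop)))
```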
Using this approach to parallelisation, there is now a much better chance of getting some speedup on multicore architectures, as the granularity of the tasks being parallelised is now much larger. Consider the example from the previous post, where at each iteration we ran a particle filter with 100 particles. If we now re-run that example, but instead use 4 particle filters each using 25 particles, we do get a slight speedup. However, on my laptop, the speedup is only around a factor of 1.6 using 4 cores, and as already discussed, 4 filters each with 25 particles isn’t actually quite as good as a single filter with 100 particles anyway. So, the benefits are rather modest here, but will be much better with less trivial examples (slower simulators). For completeness, a complete runnable demo script is included after the references. Also, it is probably worth emphasising that if your pMCMC algorithm has a short burn-in period, you may well get much better overall speed-ups by just running parallel MCMC chains. Depressing, perhaps, but true.
### References
• McCallum, E., Weston, S. (2011) Parallel R, O’Reilly.
• Wilkinson, D. J. (2005) Parallel Bayesian Computation, Chapter 16 in E. J. Kontoghiorghes (ed.) Handbook of Parallel Computing and Statistics, Marcel Dekker/CRC Press, 481-512.
• Wilkinson, D. J. (2011) Stochastic Modelling for Systems Biology, second edition, Boca Raton, Florida: Chapman & Hall/CRC Press.
### Demo script
```require(smfsb)
data(LVdata)
require(multicore)
require(doMC)
registerDoMC(4)
# set up data likelihood
noiseSD=10
dataLik <- function(x,t,y,log=TRUE,...)
{
ll=sum(dnorm(y,x,noiseSD,log=TRUE))
if (log)
return(ll)
else
return(exp(ll))
}
# now define a sampler for the prior on the initial state
simx0 <- function(N,t0,...)
{
mat=cbind(rpois(N,50),rpois(N,100))
colnames(mat)=c("x1","x2")
mat
}
# convert the time series to a timed data matrix
LVdata=as.timedData(LVnoise10)
# create marginal log-likelihood functions, based on a particle filter
# use 25 particles instead of 100
mLLik=pfMLLik(25,simx0,0,stepLVc,dataLik,LVdata)
iters=1000
tune=0.01
thin=10
th=c(th1 = 1, th2 = 0.005, th3 = 0.6)
p=length(th)
ll=-1e99
thmat=matrix(0,nrow=iters,ncol=p)
colnames(thmat)=names(th)
# Main pMCMC loop
for (i in 1:iters) {
message(paste(i,""),appendLF=FALSE)
for (j in 1:thin) {
thprop=th*exp(rnorm(p,0,tune))
res=times(4) %dopar% mLLik(thprop)
llprop=log(mean(sapply(res,exp)))
if (log(runif(1)) < llprop - ll) {
th=thprop
ll=llprop
}
}
thmat[i,]=th
}
message("Done!")
# Compute and plot some basic summaries
mcmcSummary(thmat)
```
Tags: bootstrap, dynamics, filter, Lotka-Volterra, Markov, MCMC, multicore, parallel, particle, PMCMC, PMMH, population, process, R, rstats, SMC, transition
http://mathhelpforum.com/algebra/78099-computer-game-question-mk-ii.html
# Thread:
1. ## Computer game question Mk II
EDIT: I'm editing this post as even though it's notched up a few views no-one has yet replied. If I could just ask whether it's because I've posted this in the wrong forum, whether the problem is too hard, whether I haven't described the problem very well or if this kind of problem isn't the kind of thing that should be asked in these forums? Any response is much appreciated.
Hi,
I'm looking for some help for a computer game I am trying to write and not having done much in the way of complex math (when I say complex I mean anything more complicated than checking my credit card statement) for about 15 years my math is a little rusty - seems trial and error isn't a good way to go when trying to solve math problems! Anyway my frustration and ineptitude have got the better of me and I came across this forum in my search for help.
The easiest way to describe what I'm doing (or trying to do) is to have objects travel from the top of the screen to the bottom in the same way that the notes in Guitar Hero do. i.e. They not only travel down, but also out from the center to give the effect of perspective.
The screen has the dimensions of 800 pixels across by 450 down with the objects traveling down a "road" (or fretboard for Guitar Hero aficionados) that is placed in the horizontal center of the screen. The road is 200 pixels wide at the top widening to 800 at the bottom. While the horizontal center of the road is located at 400 pixels I have decided to refer to it as 0 (x position = x position - 400) since any object placed to the left of this point falls down and to the left, anything placed to the right of this point falls down and to the right, and any object placed directly at the center (0) will fall straight down.
The initial position of each object is y:1 and x: a random number between -100 and 100. I have the objects falling down in a straight line by calculating the next y position using NEXTy = OLDy * 1.2. This makes the object fall faster as it gets closer to the bottom, which is what I want so as to give the effect of getting closer.
What I need help with is creating a formula that calculates the NEXTx position as well. I'm guessing it will involve the initial x position, as it is this value (how far it is from the center) that determines how sharp an angle the object falls at, and also the y position as well.
I hope I've explained my problem ok. Here's a few examples just to help clarify:
An object placed at (x,y) (50, 1) will fall in a straight line to (200, 450).
An object placed at (x,y) (-50, 1) will fall in a straight line to (-200, 450).
An object placed at (x,y) (100, 1) will fall in a straight line to (400, 450).
An object placed at (x,y) (-100, 1) will fall in a straight line to (-400, 450).
An object placed at (x,y) (0, 1) will fall in a straight line to (0, 450).
So the information I have is:
Initial x position of the object.
Initial y position of the object.
Next y position of the object.
And what I need is a formula that calculates the Next x position of the object.
Any help would be MUCH appreciated.
I know I've rambled on a lot compared to other problems on this site but I just wanted to make sure I was being clear and I'm not sure how to get my problem across in mathematical terms - I hope you'll forgive me. I also hope I've posted this in the correct forum. If not, let me know.
Thanks.
PS. If it makes it easier I can shorten the height of the playing area from 450 to 400 pixels.
2. ## Computer game question Mk II
Hi,
I'm rephrasing my original post in the hopes it generates more replies than my original long-winded description. Ok, here goes my attempt at brevity.
Consider a symmetrical trapezoid with a top side measuring 200, bottom side measuring 800 and height of 450.
Placing this trapezoid on a grid with the top left being considered (0,0) and the bottom right being (800, 450) will give the horizontal center of the trapezoid at x:400.
Assuming the initial x value is a random number in the range 300-400 and the initial y value is 1, I need a formula that can give the x value on a graph based on an increasing y value and the original x value. (Man, this is hard to explain succinctly.)
The initial x value also needs to influence the calculation as it is this number that directly affects the result. The angle should also increase as the value of the initial x moves away from x:400 (the horizontal center).
e.g. When the initial xy = 400,1 then x should remain 400 as y increases (as 400 is the center and effectively 0 for my purposes). But as the initial x increases to a maximum of 500, then when y = 450, x should equal 800. When initial x = 450, then when y = 450, x should equal 200.
Hope that makes sense. If not read my essay above.
Thanks.
3. I had a look at a video from Guitar Hero on YouTube, and your explanation became crystal-clear! (or so I hope. You'll be able to judge from the following...)
The right side of the trapezoid, if extended, crosses x=0 when y=-150 (this can be proved with Thales' theorem, but a sketch would do fine).
Then the equations of the trajectories are $x=\lambda(y+150)$ for some $\lambda$ to be determined (this is the equation of a line, and when y=-150 this gives x=0).
It remains to find $\lambda$. We have the initial situation: when $y=1$, then $x=x_0$ (the initial position), which means $x_0=\lambda\cdot 151$, hence $\lambda=\frac{x_0}{151}$.
As a conclusion, the equation of the trajectory starting at $(x=x_0,y=1)$ is given by $x=\frac{x_0}{151}(y+150)$.
I think this is what you need.
Edit: actually there is a little problem coming from the fact that we start at y=1 and not at y=0; the equation won't give you exactly what you want at y=450 for x=0, 50 and 100, and it can't. In the same way, I think x should actually go from 0 to 199; given that the screen width is even, there is not 1 pixel at the middle, but 2, so there are two initial positions that give vertical lines. Anyway, nobody will look very close at the screen to count pixels, so you can use the previous formula.
If you want to adapt your formula to another situation, here's a useful equation to remember: the equation of the line that goes through $(x_1,y_1)$ and $(x_2,y_2)$ is $x=x_1+\frac{x_2-x_1}{y_2-y_1}(y-y_1)$.
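As an illustration of the formula in code (an R sketch, purely illustrative since the original poster's language isn't specified; it takes the start at y=0, so that $\lambda=x_0/150$, which the follow-up below suggests is fine):
```# trajectories are straight lines through the vanishing point (0,-150),
# in coordinates centred so that x=0 is the middle of the road
nextX=function(x0,y) (x0/150)*(y+150)
nextX(100,450) # 400, i.e. the bottom-right of the road
nextX(-50,450) # -200, matching one of the examples in the original post
```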
4. Hi,
Firstly Laurent I'd like to thank you for taking the time to try and decipher my post and respond - it puts you in a very select group! It also sounds like you've nailed my problem. With my limited math skills it might take me some time to figure out your response, which I hope to do tonight. Will let you know how I go.
BTW the only reason I used 1 for the initial y position is so I could apply a formula to work out the subsequent y positions, and using 0 would just return 0 for next y = current y * 1.2, so the object would never move from y = 0. I think your suggestion to use 0 for the purposes of the x position formula would be fine, since the objects won't actually be placed at y=0 and it won't be noticeable to the eye on screen.
Thanks again!! Stay tuned for good news (hopefully).
Darren.
5. Hi,
Good news. Your formula did the trick!! Can't thank you enough. I really appreciate it.
Now if I can only get the rest of program working.
Darren
http://mathhelpforum.com/calculus/24059-ln-limit.html
# Thread:
1. ## ln Limit
How do you solve this limit, $\displaystyle\lim_{x\to{0}}\frac{\ln(\sqrt{x^5+3x+1})}{x}$, without l'Hopital's Rule? Using l'Hopital's rule is too easy.
2. ## Re: Limit
First, you can use the properties of logarithms to take $\ln (\sqrt{1+3x+x^5}) = \tfrac{1}{2}\ln (1+3x+x^5)$. Then, you can use the fact that for small x ( $|x| \ll 1$), $\ln (1+x) \approx x - \dfrac{x^2}{2} + \dfrac{x^3}{3} - \cdots$ (the Mercator series), and thus in the limit as x approaches zero, $\ln (1+3x+x^5)$ approaches $3x-\tfrac{9}{2}x^2+\mathcal{O}(x^3)$, so thus
$\displaystyle\lim_{x\to{0}}\frac{\ln(\sqrt{x^5+3x+1})}{x} = \tfrac{1}{2}\displaystyle\lim_{x\to{0}}\frac{3x-\tfrac{9}{2}x^2+\cdots}{x} = \frac{3}{2}$
Hope this is illuminating.
--Kevin C.
3. Originally Posted by TwistedOne151
Is there any other way...? I'm really unfamiliar with the series (Mercator, Taylor, etc.) stuff.
4. Originally Posted by polymerase
Is there any other way...? I really unfamilar with the series (Mercator, taylor etc.) stuff.
L'Hopital's rule is a nice trick to get past the tough work of using series and other definitions. BTW, why would you not want to use the rule?
5. ## Re: Limit
Not really. The presence of the logarithm ultimately requires either some form of series about the value to which its argument converges, or else l'Hopital's rule (which essentially derives from the Taylor series for the numerator and denominator for the 0/0 indeterminate form). Ultimately you have to "know" in some fashion the behavior of ln(x) about x=1.
6. Originally Posted by colby2152
L'Hopital's rule is a nice trick to get past the tough work of using series and other definitions. BTW, why would you not want to use the rule?
If this was simply "find the answer" then of course I'd use that...but I'm trying to find "techniques" to employ for hard limits, because most of the time my prof won't let us use l'Hopital...when he does...the function is crazy.
7. In many cases, like this one, it is enough to consider the following:
$\lim_{u\to{0}}\frac{\ln(u+1)}{u}=\lim_{u\to{0}}\ln\left((1+u)^{1/u}\right)=1$
Because: $\lim_{u\to{0}}(1+u)^{1/u}=e$
Thus: $\lim_{x\to{0}}\frac{\ln(\sqrt{1+3x+x^5})}{x}=\lim_{x\to{0}}\frac{\frac{1}{2}\ln(1+3x+x^5)}{x}=\frac{1}{2}\lim_{x\to{0}}\frac{\frac{\ln(1+3x+x^5)}{3x+x^5}}{\frac{x}{3x+x^5}}$
$=\frac{1}{2}\lim_{x\to{0}}\frac{3x+x^5}{x}=\frac{3}{2}$
Since $\frac{\ln(1+3x+x^5)}{3x+x^5}\rightarrow{1}$
8. Originally Posted by PaulRS
Nice
9. Originally Posted by polymerase
Nice
Just be aware that, in general
$\lim_{x \to a}(f(u(x))) \neq f \left ( \lim_{x \to a}u(x) \right )$
It works for this problem, but it can't be generalized to all functions.
-Dan
10. Originally Posted by topsquark
In this case it is true because of the continuity of the logarithm
11. An interesting observation to make is that, in general,
$\lim_{x\rightarrow{0}}\frac{\ln(ax^{2}+bx+1)}{cx}=\frac{b}{c}$
In this case, b/c = 3/2.
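(For the record, this observation follows from the expansion used earlier in the thread: $\ln(1+bx+ax^{2})=bx+O(x^{2})$ as $x\to 0$, so dividing by $cx$ leaves $b/c$ in the limit, provided $c\neq 0$.)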
12. Originally Posted by galactus
You should make some restrictions on the coefficients. Like $a\not = 0$ and that sort of stuff.
http://mathhelpforum.com/differential-equations/164336-laplaces-equation-semi-infinite-strip-0-x-y-0-a-print.html
# Laplace's equation in the semi-infinite strip 0 < x < a, y > 0
• November 25th 2010, 12:53 AM
mukmar
Laplace's equation in the semi-infinite strip 0 < x < a, y > 0
Problem:
$u_{xx} + u_{yy} = 0, 0 < x < a, y > 0$
$u(0, y) = u(a, y) = 0$ and $u(x, 0) = f(x)$
$u$ bounded as $y \to \infty$
Find $u(x, y)$ using separation of variables for general $f(x)$ and write the solution in the form:
Eq. 1
$u(x,y) = \displaystyle\int_0^a G(x, y, s) f(s)\, ds$
Find $u(x, y)$ in as explicit a form as you can, i.e., sum the series.
Attempt at solution:
After using the method of separation of variables I came to the solution:
$u(x, y) = \displaystyle\sum_{n = 1}^\infty c_n e^{-n\pi y / a}\sin{\frac{n\pi x}{a}}$
Where $c_n = \frac{2}{a}\displaystyle\int_0^a f(x) \sin{\frac{n\pi x}{a}} dx$
I'm just not sure how I'm supposed to combine these two pieces of information to be able to express $u(x, y)$ in the form outlined in Eq. 1.
Any assistance would be greatly appreciated!
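For reference, one way to make the connection: substituting the expression for $c_n$ (with $s$ as the integration variable) into the series and formally interchanging summation and integration gives
$u(x, y) = \displaystyle\int_0^a \left[\frac{2}{a}\sum_{n = 1}^\infty e^{-n\pi y / a}\sin{\frac{n\pi x}{a}}\sin{\frac{n\pi s}{a}}\right] f(s)\, ds,$
so $G(x,y,s)$ is the bracketed sum; it can then be summed explicitly by writing $2\sin A\sin B = \cos(A-B)-\cos(A+B)$ and recognising geometric-type series $\sum_{n\geq 1} r^n\cos n\theta$ with $r=e^{-\pi y/a}$.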
http://physics.stackexchange.com/questions/tagged/quarks?sort=active&pagesize=30
# Tagged Questions
The quarks tag has no wiki summary.
learn more… | top users | synonyms (1)
1answer
53 views
### Asymptotic Freedom - Qualitative Explanation
I am doing a (mostly qualitative) course on Particle Physics, and am confused about the concept of asymptotic freedom. The lecture notes basically say that a quark may experience no force/be "unbound" ...
2answers
218 views
### About free quarks and confinement
I simply know that a single free quark does not exist. What is the reason that we can not get a free quark? If we can't get a free quark then what is single-top-quark?
1answer
356 views
### Do color-neutral gluons exist?
If I'm correct a quark can change color by emitting a gluon. For example a blue up quark $u_b$ can change into a red up quark by emitting a gluon: $$u_b \longrightarrow u_r + g_{b\overline{r}}$$ ...
1answer
45 views
### Baryon wave function symmetry
If a baryon wavefunction is $\Psi = \psi_{spatial} \psi_{colour} \psi_{flavour} \psi_{spin}$, and we consider the ground state (L=0) only. We know that the whole thing has to be antisymmetric under ...
4answers
4k views
### What are quarks made of?
So atoms are formed from protons and neutrons, which are formed from quarks. But where do these quarks come from? What makes them?
2answers
494 views
### What Quark and Anti-quark are electrically neutral Pions made out of?
A positive pion is an up and an anti-down. A negative pion is a down and an anti-up. What's a pion with an electrical charge of 0?
3answers
877 views
### What does it mean that the neutral pion is a mixture of quarks?
The quark composition of the neutral pion ($\pi^0$) is $\frac{u\bar{u} - d\bar{d}}{\sqrt{2}}$. What does this actually mean? I think it's bizarre that a particle doesn't have a definite composition. ...
2answers
242 views
### Quark compositions in $\pi^+$ to $\pi^0$ pion decay
Pions can undergo a rare beta-like decay into leptons: Pion beta decay (with probability of about $10^{−8}$) into a neutral pion plus an electron and electron antineutrino (or for positive ...
1answer
452 views
### Neutral Pion Decay
While studying C-symmetry, a question about neutral pion decay came up. The most probable channels in which neutral pion $\pi^0$ decays are: $\pi^0\longrightarrow\gamma+\gamma$ (98%) ...
1answer
70 views
### Why is a pion so light compared to a neutron or proton?
A pion is made out of a pair of up and/or down quarks. A neutron or proton is three up or down quarks. So naively I'd expect a pion to be about 2/3 the mass of a nucleon. In fact it's less than 1/6 ...
2answers
146 views
### How can a pion have a mass, given it's a “field mediator” and created/destroyed continuously?
Maybe some of my assumptions here are basically wrong, but isn't it true that pion is the "mediator" for the strong force field. the quantum field theory basically says that there are no fields, ...
2answers
984 views
### Is it pions or gluons that mediate the strong force between nucleons?
From my recent experience teaching high school students I've found that they are taught that the strong force between nucleons is mediated by virtual-pion exchange, whereas between quarks it's gluons. ...
1answer
130 views
### Magnetic monopoles
I am a non-expert in this field, just have a layman's interest in the subject. Has anyone ever considered the possibility of magnetic monopoles (one positive and one negative charge) being confined ...
0answers
25 views
### What is the process that gives quarks fractional electric charge? [duplicate]
I've heard always that quarks has fractional electric charge, How do we know that quarks has fractional electric charge? what is the process that gives quarks its fractional electric charge? Ok ...
1answer
84 views
### Second baryon octet
Let's temporarily ignore spin. If 3 denotes the standard representation of SU(3), 1 the trivial rep, 8 the adjoint rep and 10 the symmetric cube then it's well-known that 3 x 3 x 3 = 1 + 8 + 8 + 10 ...
3answers
132 views
### How quark electric charge directly have been measured?
How quarks electric charge directly have been measured when quarks never directly observed in isolation? (Due to a phenomenon known as color confinement.)
1answer
84 views
### Facts About Quarks Electric Charge [duplicate]
Quarks have the unusual characteristic of having a fractional electric charge. here there is a new model that suggests maybe an up Quark has no electric charge and infact down Quark has electric ...
1answer
62 views
### What would be the effect of an excess of up quarks on stellar formation?
Suppose you had 80% up quarks, and only 20% down quarks. How would this affect stellar formation?
2answers
147 views
### So there are 6 quarks, what are anti-quarks considered then?
I just recently got into particle physics and the quantum world and I love it. So my first big question is. I watch all these videos and people explain the quarks (up, down, top, bottom, strange, ...
0answers
66 views
### Are quarks the limits? The end of the fundamental science. For collisions on more higher energies will lead to black holes after all?
Are quarks the limit and the Plank scale is believed to be the limit of distance when the very concept of space and length cease to exist (10^-19) Any attempt to research the existence of shorter ...
2answers
321 views
### If quarks didn't have mass, could protons (and neutrons) exist?
I read here (mass of a proton) that the mass of a proton is mostly (99%) due to the energy of the strong nuclear force which binds the quarks together, and not the actual mass of the quarks. My ...
0answers
99 views
### Strong decays of baryons via quark-antiquark pairs
I have the doubly charmed $\Xi_{cc}^{++}$ consisting of ccu quarks. This is meant to decay via strong force, producing a light baryon (cud/uuc/udc etc...) and a quark-antiquark pair along with a ...
1answer
61 views
### The fractional model of Quarks electric charge was found before discovery of the $\Delta^{++}$, or after it?
From Wikipedia: Existence of the $\Delta^{++}$ , with its unusual +2 electric charge, was a crucial clue in the development of the quark model. the fractional model of Quarks electric charge was ...
2answers
106 views
### What is mass of free up and down Quark?
Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei. Due to a phenomenon known as color confinement, quarks ...
2answers
237 views
### What IS Color Charge?
This question has been asked twice already, with very detailed answers. After reading those answers, I am left with one more question: what is color charge? It has nothing to do with colored light, ...
2answers
81 views
### What reason(s) exist to suppose that all degeneracy pressures can be overcome in Black-Hole formation?
In models of stellar collapse to a black hole, it is a given that density increases without bound towards a singularity. Electron degeneracy I get. Neutron degeneracy I get. I assume there's some ...
1answer
91 views
### A strange particle, $X$, decays in the following way: $X → π^– + p$. State what interaction is involved in this decay
A strange particle, $X$, decays in the following way: $X → π^– + p$. State what interaction is involved in this decay. I know the answer to be weak interaction, but why is it weak interaction? What ...
3answers
188 views
### Origin of lepton/quark generations?
What theoretical explanations exist for the fact that there are three generations of leptons and quarks? I'm not so much asking why there are exactly 3 generations, but rather what makes electron, ...
1answer
53 views
### Measuring nucleons using electron beams
sorry if the question is too elementary. From: The Britannica Guide to Particle Physics: The sizes of atoms, nuclei, and nucleons are measured by firing a beam of electrons at an appropriate target. ...
1answer
209 views
### Why isn't Hydrogen's electron pulled into the nucleus? [duplicate]
Possible Duplicate: Why do electrons occupy the space around nuclei, and not collide with them? Why don’t electrons crash into the nuclei they “orbit”? From what I learned in chemistry, ...
1answer
257 views
### How does Delta baryon decay conserve angular momentum?
I'm a chemist so bear with me: I understand the Delta baryons $\Delta^{+}$ and $\Delta^{0}$ to be in some sense spin (and isospin) quartet states of the proton and neutron. These can decay straight ...
1answer
90 views
### $sss$ decay and violation of strangeness
Why can the hyperon $\Omega^{-}$ not decay by strong interaction? It seems that strangeness must be violated, but why is it the only way?
2answers
241 views
### Cramer's rule, Origin of Quarks Fractional electric charge? [closed]
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns. 2u+1d=1 1u+2d=0 $$a_1d+b_1u=c_1$$ $$a_2d+b_2u=c_2$$ ...
1answer
187 views
### What was the first discovery of the delta baryon $\Delta^{++}$?
The delta baryons (also called delta resonances) are a family of subatomic hadron particles which have the symbols $\Delta^{++}$, $\Delta^{+}$, $\Delta^{0}$, and $\Delta^{−}$ and electric charges +2, ...
2answers
425 views
### Could the fractional model of Quarks electric charge turn out to be false? [closed]
The delta baryons (also called delta resonances) are a family of subatomic hadron particles which have the symbols $\Delta^{++}$, $\Delta^{+}$, $\Delta^{0}$, and $\Delta^{−}$ and electric charges +2, ...
1answer
227 views
### Pion decay: why does the fractional electric charge disappear?
Since charged pions decay into two particles, a muon and a muon neutrino, the fractional electric charge has disappeared. Why? The decay proceeds by the weak interaction $W^{+}$ and can be visualized in ...
1answer
260 views
### What is the relationship between the Higgs field and quarks?
I have some difficulty considering the relative size of each and the meaning behind the shape of Higgs boson. I ask relating to the structures of both the Higgs field and quarks. How is it that the ...
5answers
245 views
### Could Quark model turn out to be false?
Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei. Due to a phenomenon known as color confinement, quarks ...
1answer
98 views
### Relating theta_QCD to neutron EDM
How do I relate the topological $\theta_\text{QCD}$ parameter to the electric dipole moment (EDM) of the neutron? I am very familiar with chiral perturbation theory. I just need to know how to take ...
4answers
1k views
### Could the Periodic Table have been done using group theory?
These three questions are phrased as alternative-history questions, but my real intent is to understand better how well different modeling approaches fit the phenomena they are used to describe; see 1 ...
1answer
89 views
### Could quarks and leptons mix if they carried flavor charges?
If quarks and leptons carried flavor charges that differed across generations (as they do in some theories), then could mixing take place?
4answers
2k views
### Why do electron and proton have the same but opposite electric charge?
What is the explanation between equality of proton and electron charges (up to a sign)? This is connected to the gauge invariance and renormalization of charge is connected to the renormalization of ...
3answers
82 views
### Top quark production questions
I am just looking at top-quark production via proton–antiproton collision and the strong interaction. There seem to be three basic possibilities. $q + \bar q \rightarrow Gluon \rightarrow t +\bar t ...
1answer
283 views
### Quark Radius Upper Bound
If quarks had internal structure (contradicting current beliefs), what is the lowest upper bound on their "radius" based on current experimental results? If possible, I'd prefer to only consider ...
1answer
456 views
### What is the difference between 'running' and 'current' quark mass?
When looking at the PDG, there is a difference between the 'running' and the 'current' quark masses. Does anyone know which is the difference between these two?
1answer
186 views
### What is meant by the rest energy of non-composite particle?
When talking about the rest energy of a composite particle such as a proton, part of the rest energy is accounted for by the internal kinetic energy of its constituent quarks. But what is physically ...
1answer
75 views
### Future of colliders and technical limitations
Are there any technical limitations (theoretical or technological) that prevent quark-based colliders? I.e., colliding two quarks together.
1answer
85 views
### Is there a tb meson?
I was wandering around the Particle Data Group page for mesons and couldn't find a meson for top–bottom, which from symmetry you would expect. Q1: Is this because it hasn't been found? Q2: There is ...
1answer
129 views
### Particle mixing and indistinguishability
Neutral kaons have two flavor combinations: $\mathrm{d}\bar{\mathrm{s}}$ and $\mathrm{s}\bar{\mathrm{d}}$. They can also be weak eigenstates: $\mathrm{\frac{d\bar{s} \pm s\bar{d}}{\sqrt{2}}}$. But ...
2answers
203 views
### What is the interaction with Higgs field(s) that give the quarks so much different masses?
The masses of quarks are: mu 2∼3 MeV, md 4∼6 MeV, mc ∼1.3 GeV, ms 80∼130 MeV, mt ∼173 GeV, mb 4∼5 GeV
http://physics.stackexchange.com/questions/29903/why-do-people-recommend-wider-tyres-in-car-for-better-road-grip
# Why do people recommend wider tyres in car for better road grip?
Tyre companies boast of their wider tyres for better grip on the road. Also, F1 cars have broad tyres for better grip. But as far as I know, friction does not depend on the surface area of contact between the materials. Even the formula says so: $F=\mu mg$ (where $F$ = force of friction, $\mu$ = coefficient of friction, $m$ = mass, $g$ = gravity)
Can anyone please tell me the relation between broad tyres and road grip?
-
5
"But as far as I know Friction does not depend on the surface area of contact between the materials." This is true in physics 101 and is how we teach it, but it breaks down when there is non-trivial deformation of either material. Tires are well known as a "everyday situation" where this rule is not a good guide. I believe we have some users who know this matter in detail and they may give you a good answer. – dmckee♦ Jun 11 '12 at 13:38
A simplistic answer is to consider the finite material strength of the rubber/tarmac. The larger the contact area, the greater the amount of bonds available to absorb the force before tearing. – Nic Jun 11 '12 at 13:43
## 1 Answer
It's a surprisingly complicated question. Given your mention of friction, probably the main point is that for a car tyre the friction is not linearly dependent on load. Wikipedia has some information about this here.
If you had perfectly smooth surfaces the friction is actually proportional to the area of contact and independent of the load. This is because friction is an adhesive effect between atoms/molecules on the surfaces that are in contact. However in the real world surfaces are not smooth. If you touch two metal surfaces together the contact is between high spots on the two surfaces, so the area that is in contact is much less than the apparent area of contact. If you increase the load you deform these high spots and broaden them, so the effect of load is to increase the real area of contact. The real area of contact is approximately proportional to the load, and the friction is proportional to the area of contact, so the friction ends up being approximately proportional to the load.
However a rubber tyre is a lot softer than metal, and a road is a lot rougher than a metal plate. Even at low loads the tyre deforms to key into the irregularities in the road, so increasing the load has a lesser effect. That's why you get the sub-linear dependence described in the Wikipedia article.
But this is only the start of the complexity. If you use a wider tyre the contact patch area isn't necessarily bigger. A wider tyre has a wider shorter contact patch while a narrow tyre has a narrower longer contact patch. The contact patch area depends on the tyre pressure, the deformation of the sidewalls and probably lots of other things I can't think of at the moment.
And anyway, if by "grip" you mean grip when cornering, the grip isn't just controlled by the contact patch area. When a car is cornering the contact patch is being twisted. This is known as the slip angle. The wider shorter contact patch on a wide tyre has a smaller slip angle and as a result grips better.
-
Thanks a lot. It has cleared a large part of my confusions. – Luftwaffe Jun 12 '12 at 7:23
I didn't see a mention of temperature. Tire grip also varies with the temperature of the tire; a wider tire might not produce more grip than a skinnier one if there isn't sufficient friction (generated typically by driving fast enough to cause a significant slip angle) to heat the tire to its target heat range. This is particularly true of tires intended to work at higher temperatures, such as race tires. – james creasy Apr 13 at 0:21
http://math.stackexchange.com/questions/187966/express-real-number-in-exponential-form?answertab=active
# Express real number in exponential form
Express $z=-32$ in exponential form.
My reasoning:
1. $z=-32$ is the same as $z=-32+0i$
2. Exponential form should look like $z=Re^{\theta i}$
3. $R =\sqrt{(-32)^2 + 0^2} = 32$
4. $\theta = \tan^{-1}(\frac{0}{-32}) = 0$
5. So answer becomes: $z = 32e^{0i} = 32$
But, obviously, $-32 \ne 32$
What am I missing?
-
## 5 Answers
Since $z=re^{i \theta}$, with $r\geq 0$, you must have $|z| = r$. So $r=32$. Then you need to solve $-32 = 32 e^{i \theta} = 32 (\cos\theta + i \sin \theta)$. Since the imaginary part is zero, you must have $\theta = n \pi$ for some integer $n$. Choosing $n$ to be odd gives all solutions, since $\cos n \pi = -1$ iff $n$ is odd.
Consequently $-32 = 32 e^{i n \pi}$, where $n$ is odd.
-
$-32 = -32(1 + 0 i) = -32 e^{i2n\pi}$
Or, $32(-1 + 0 i) = 32( \cos \pi + i \sin \pi) = 32 e^{i \pi}$
-
Since $R\cos\theta=-32<0$ we need $\cos\theta<0$, and $\sin\theta=0$, so $\theta$ lies on the negative real axis.
Now $\tan^{-1}(0)=n\pi$ where $n$ is any integer, and the condition $\cos\theta<0$ selects the odd multiples.
So here, $\theta=(2m+1)\pi$ where $m$ is any integer.
So, the general value of $-32=32e^{i(2m+1)\pi}$, the principal value being $32e^{i\pi}$
-
The problem is with the inverse tangent that you're using to find the value of the argument of $-32$. Remember that the inverse tangent will give you values between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$ so it is not giving you the correct argument.
In this case since $-32$ is in the negative real axis, its argument should be $\pi$ instead. Which will give you the "exponential form"
$$-32 = 32e^{\pi i}$$
-
The equation $\tan\theta = 0$ is solved by both $\theta = 0$ and $\theta = \pi$, and in your case the second one fits.
$-32 = 32 e^{i\pi}$.
-
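A quick numerical cross-check of the above (Python's `cmath.polar` computes the principal argument with `atan2`, which is what avoids the quadrant pitfall in step 4 of the question):
```python
import cmath
import math

r, theta = cmath.polar(-32)      # modulus and principal argument
print(r, theta)                  # 32.0 3.141592653589793, i.e. -32 = 32 e^{i*pi}

# atan2 keeps track of the signs of both components, unlike atan(y/x):
print(math.atan2(0.0, -32.0))    # pi
print(math.atan(0.0 / -32.0))    # -0.0 -- the naive arctan branch from step 4
```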
http://math.stackexchange.com/questions/tagged/control-theory+integral
# Tagged Questions
1answer
65 views
### How to find discrete integral given different time intervals.
I want to implement a PID controller and I'm unsure of how to find the integration part. Normally the integral is calculated as $\sum_{n=1}^{t} e_{n}$, where $e_n$ is the sample error at time n. ...
2answers
142 views
### Is the integral in a PID controller definite or indefinite?
Would the integral calculated by the PID controller be considered definite or indefinite?
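For the first question in this tag, a minimal sketch of one common answer (not the only one): weight each error sample by its own time step, e.g. with the trapezoidal rule, instead of assuming a fixed sampling interval. The helper `pid_step` below is hypothetical, written just to illustrate the bookkeeping.
```python
def pid_step(e, dt, state, kp=1.0, ki=0.5, kd=0.1):
    """One PID update for unevenly spaced samples.

    `state` carries (running integral, previous error); each sample's
    contribution to the integral is weighted by its own dt (trapezoidal rule).
    """
    integral, prev_e = state
    integral += 0.5 * (e + prev_e) * dt           # handles uneven dt directly
    deriv = (e - prev_e) / dt if dt > 0 else 0.0
    u = kp * e + ki * integral + kd * deriv
    return u, (integral, e)

state, t_prev = (0.0, 0.0), 0.0
for t, e in [(0.1, 1.0), (0.25, 0.8), (0.6, 0.5)]:   # (time, error) samples
    u, state = pid_step(e, t - t_prev, state)
    t_prev = t
    print(t, u)
```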
http://physics.stackexchange.com/questions/22087/neutron-electric-dipole-moment-and-t-symmetry-violation/22092
# Neutron electric dipole moment and T symmetry violation
Our textbook (and other sources I have found) says that non-zero electric dipole moment of neutron would violate T symmetry. They prove this statement by first assuming $\boldsymbol{D}=\beta\boldsymbol{J}$, where $\boldsymbol{D}$ is the dipole moment, $\boldsymbol{J}$ is the angular momentum, and $\beta$ is a constant.
But why? Why is $\boldsymbol{D}$ proportional to $\boldsymbol{J}$? Why is $\boldsymbol{D}$ related to $\boldsymbol{J}$ at all? And how can't this argument be applied to other composite particles such as atoms and molecules, thereby breaking T symmetry for most of the world?
-
In classical mechanics, we have this identity for spinning bodies of charge: $\frac{\mu}{L}=\frac{q}{2m}$, $\mu$ is dipole moment, $L$ is angular momentum. I dunno how this translates to particle physics, but it may help.. – Manishearth♦ Mar 8 '12 at 16:19
@Manishearth: I'm talking about electric dipole moment, where as $\mu$ is magnetic dipole moment. – C.R. Mar 9 '12 at 4:26
Aah, my bad. Didn't see the 'electric' in the question and i'm not familiar with your usage of symbols (I use p for electric dipole) – Manishearth♦ Mar 9 '12 at 4:46
## 1 Answer
As the neutron is not point-like, consider it has a continuous distribution of charge $\rho(\mathbf{r})$ confined in a volume $\Omega$. The dipole electric moment is then given by
$\mathbf{D}=\int_\Omega \mathbf{r}'\,\rho(\mathbf{r}')\,d^3r'$
where the coordinates are measured from the centre of mass of the distribution. For a charged particle, this definition implies that for $\mathbf{D} \neq\mathbf{0}$ the "centre of charge" is displaced from the centre of mass of the distribution. For a distribution which has no net charge, that is
$Q=\int_\Omega \rho(\mathbf{r}) d^3r=0$
this definition implies that there is a greater positive charge on one side of your distribution and a correspondingly greater negative charge on the other side.
Consider now that your particle has angular momentum $\mathbf{J}$ and that its orientation is given by $m$ (the eigenvalue of the $\hat{J}_z$ operator) relative to the $\hat{\mathbf{z}}$ axis. Notice that the only way to know the orientation of your charge distribution ("particle") is by the orientation of the angular momentum.
As a consequence, both $\mathbf{J}$ and $\mathbf{D}$ must transform equally under parity $P$ and time reversal $T$ if $\mathbf{D} \neq \mathbf{0}$ and if there is $P$ and $T$ symmetries. But $\mathbf{D}$ changes its sign under $P$ whereas $\mathbf{J}$ does not so $\mathbf{D}$ must vanish if there is $P$ symmetry. In a similar way, $\mathbf{D}$ does not change sign under $T$ but $\mathbf{J}$ does, so $\mathbf{D}$ has to vanish if there is $T$ symmetry. Hence if the neutron electric dipole is not zero we will have a violation of $PT$ symmetry.
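In compact form, the transformation properties used here are
$P:\ \mathbf{D}\to-\mathbf{D},\quad \mathbf{J}\to+\mathbf{J};\qquad T:\ \mathbf{D}\to+\mathbf{D},\quad \mathbf{J}\to-\mathbf{J},$
so a relation of the form $\mathbf{D}=\beta\mathbf{J}$ forces $\beta=0$ whenever either $P$ or $T$ is a good symmetry.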
Remark: This argument only applies to particles with non-zero dipole moment.
Experimental searches of the neutron electric dipole moment can be found:
• Smith et al. Phys. Rev. 108, 120 (1957) [link to paper].
• Baker et al. Phys. Rev. Lett. 97, 131801 (2006) [link to paper].
The upper bound in the last one for $|\mathbf{D}|$ is $2.9 \cdot 10^{-26}$ e cm.
D.
EDIT: As David said below, there is no $CPT$ violation in the hypothetical case of having $PT$ violation [= existence of a non-zero electric dipole moment].
-
The neutron is composed of $\mathrm{udd}$ valence quarks so charge conjugation would switch it to $\mathrm{\bar u\bar d\bar d}$ - it's not the identity operation. So a neutron electric dipole moment wouldn't automatically violate $CPT$ symmetry. (But other than that, good answer!) – David Zaslavsky♦ Mar 8 '12 at 19:52
Well, thank you! I tried to do my best, even though I am not an expert in nuclear/high energy physics. – DaniH Mar 8 '12 at 21:49
"Notice that the only way to know the orientation of your charge distribution ("particle") is by the orientation of the angular momentum." This is exactly the part I don't understand. Electric dipole moment requires only uneven charge distribution, but it does not require those charges to move. Also why are non-zero EDM of atoms not violation of T symmetry? – C.R. Mar 9 '12 at 4:27
Concerning the orientation of the charge distribution, perhaps it is more precise to say "orientation of the electric dipole moment". The electric dipole moment is a vector and to apply, for instance, a parity transformation you need to place it in a reference frame. The $\hat{\mathbf{z}}$ direction in this reference frame is given by the z- component of the angular momentum, $\mathbf{J}=\mathbf{L}+\mathbf{S}$. – DaniH Mar 9 '12 at 9:05
About non-zero EDM of a molecule: it is indeed interesting to think why this not violates $T$ symmetry. This is because the molecule/atoms have non-zero ground states that are invariant under parity so that $T$ needs not to be broken to give non-zero $\mathbf{D}$. – DaniH Mar 9 '12 at 9:16
show 4 more comments
http://mathoverflow.net/questions/48913/uneven-spaced-time-series/48948
## uneven spaced time series
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $(t_k), k \in \mathbb{N}$, be an increasing sequence of real numbers ($t_{k-1} < t_k$) and $(X_{t_k}$) be a sequence of real numbers indexed by $(t_k)$. Such a sequence is sometimes called a time series.
The idea is that this series represents a sequence of measurements of some sort, like, for example, the average temperature of some location at time $t_k$.
The analysis of time series is an established area of statistics. In concrete applications, for example in climate science, there are two common problems when applying statistical algorithms to time series:
1. The time series are finite, which produces artefacts in statistical algorithms that are designed for infinite time series. This problem is well known and there exist several approaches to handle it.
2. The time series are unevenly spaced, that is, $t_k - t_{k-1}$ is not independent of $k$.
I don't know of any textbook, algorithm or paper that explicitly addresses the latter problem. My question is therefore: Is this not a problem, is the solution trivial or, if not, are there any treatments?
Of course it is possible to interpolate missing values to generate a time series with an even time spacing $\min_k (t_k - t_{k-1})$, but it seems to me that this is not a solution, because algorithms like the fast Fourier transform, nonlinear regression analysis or wavelet transforms would produce artefacts that depend on the kind of interpolation (linear, cubic splines, whatever). And therefore an explicit explanation of why the kind of interpolation one uses does not produce any artefacts in the analysis of the time series seems to be warranted to me, but I have never seen one in the literature.
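(Edit: for spectral analysis specifically, the Lomb–Scargle periodogram is one standard tool that is defined directly on uneven sample times, with no interpolation step. A minimal sketch using SciPy's implementation, on illustrative synthetic data:)
```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))        # unevenly spaced t_k
x = np.sin(2 * np.pi * 1.5 * t) + 0.2 * rng.standard_normal(t.size)

omega = np.linspace(0.1, 4 * np.pi, 2000)       # angular frequencies to test
pgram = lombscargle(t, x, omega)                # no resampling required

print(omega[np.argmax(pgram)] / (2 * np.pi))    # close to the true 1.5 Hz
```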
-
## 3 Answers
Unfortunately the problem is not trivial. Right now, there is virtually no theory for analyzing unevenly-spaced time series in their unaltered form. I have been working extensively on the problem over the past year and have typed up some notes that might be helpful (they can be found at http://www.eckner.com/research.html)
-
Thanks! Keep me up to date about what you find out this year :-) – Tim van Beek Jan 3 2011 at 8:47
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Fourier transforms depend upon the fact that the modeled signal is infinite in time-span and time-extent. While it is possible to get a very good approximation of a time-limited signal by using a finite set of Fourier coefficients, the finite-Fourier-coefficient approximation always ends up with "ringing artefacts" at any high-frequency edges beyond the bandwidth-limited approximation. These artefacts arise from the fact that Fourier decomposition uses the infinite-time-extent sine wave as its base component.
This type of problem in representing limited-time-span signals is what led to the concepts of wavelets and wavelet transforms, using limited-time-span base components such as the Haar wavelet. This is a slightly different problem from having non-equally-spaced-in-time samples extracted from a time series, but even in these cases there is the assumption that the underlying time series is continuous over time, or is composed of the superposition of multiple discrete events occurring as Bernoulli or Poisson processes over time, with some convolution of the discrete events by a smoothing factor (volcano eruption or geyser spouting, with the effluent "smoothed out" by prevailing winds or water currents).
-
All the algorithms that I know assume an even spacing of the time series. While it is true that these algorithms are developed from the assumption of a continuous process, maybe with a certain class of stochastic processes in mind, which may be invalid or only approximatly valid, I'm looking for a solution to the very down-to-earth practical problem that I cannot use any code I know with an uneven spaced time series, I have to convert it to an even spaced first. And I don't know how this transformation will affect the ensuing analysis. – Tim van Beek Dec 10 2010 at 14:46
... This is a problem, there is no trivial solution, just because you cannot and must not solve both problems (the interpolation problem and your problem of interest) separately: you must solve them jointly.
However, if adapting the algorithm of interest to uneven spaced times or solving both problem jointly is too difficult, you may consider resorting to "Poincaré-Jaynes-Bretthorst" interpolation that can be easily adapted to handle uneven spaced times. Please see my question
http://mathoverflow.net/questions/48913/uneven-spaced-time-series
for references.
"Poincaré-Jaynes-Bretthorst" interpolation is in some extent exact: it is necessary (according to Poincaré) and also sufficient, provided that you equip yourself with Jaynes' Principle of Maximum Entropy. Essentially, you "just" need to choose the order of the derivative to constrain (that makes no big difference in some cases.)
-
@Pascal-Orosco, you accidentally pasted the wrong pointer for your question. The correct pointer is mathoverflow.net/questions/47675/… for mathoverflow.net/questions/47675/ You have a very interesting question. – sleepless in beantown Dec 11 2010 at 6:19
@sleepless: sorry for the wrong link. I also believe that my question is important. I will give it a second chance by extracting the problem from Bretthorst' paper... – Pascal Orosco Dec 11 2010 at 13:09
Thanx, I'll need to reserve some time to look into that. – Tim van Beek Dec 13 2010 at 15:57
http://physics.stackexchange.com/questions/21603/have-red-shifted-photons-lost-energy-and-where-did-it-go
# Have red shifted photons lost energy and where did it go?
I think the title says it. Did expansion of the universe steal the energy somehow?
-
– Qmechanic♦ Feb 28 '12 at 23:05
– Michael Luciuk Feb 29 '12 at 13:00
## 2 Answers
Energy isn't a nice concept in GR, so all I'm giving is an intuitive way of looking at it.
For gravitationally redshifted stuff: A photon has energy, thus it gravitates (as energy can gravitate analogous to mass from $E=mc^2$), thus it has some (negative) gravitational potential energy when on the surface of a planet. If it's emitted, its GPE eventually becomes 0. So, this increase in GPE had to come from somewhere: the photon's redshift gave the energy. It's pretty much the same thing that happens when you throw a ball up. It loses kinetic energy (slows down).
The GPE in relativity is basically related to the energy stored in spacetime curvature; in a complicated way that I don't know.
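In the weak-field limit this energy bookkeeping is tiny but computable: the fractional redshift climbing out of a body's potential well is roughly $GM/(Rc^2)$. A one-line check with Earth's numbers (constants rounded):
```python
G, c = 6.674e-11, 2.998e8            # SI units
M, R = 5.972e24, 6.371e6             # Earth's mass (kg) and radius (m)
print(G * M / (R * c**2))            # ~7e-10 fractional photon energy loss
```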
For a normally redshifted photon from a moving body: Energy need not be conserved if you switch frames. Energy is different in each reference frame.
See the answers to the question provided by Qmechanic above as well. Over there, they're talking about the entire universe, though, which leads to additional issues.
-
You know, it's like why's it expanding? No one say's oh yes, of course, why wouldn't it be. – sonardude Feb 29 '12 at 2:28
You can write GR in so many ways that there's no rightish way to look at it. – sonardude Feb 29 '12 at 2:32
@sonardude That's why every way of looking at it is wrongish :P. – Manishearth♦ Feb 29 '12 at 2:40
1
This is not wrongish, but correct. Still, many people will tell you that it is wrong, even though it is correct, because they are uncomfortable with the energy in GR, because it is a pseudotensor, which is only globally interesting, and only uncontroversial for asymptotically flat spaces. But whatever. +1. – Ron Maimon Feb 29 '12 at 4:58
@RonMaimon Didn't know that, thanks. You've managed to point out that I'm wrong when I think I'm right, and right when I think I'm wrong. Weird. :P – Manishearth♦ Feb 29 '12 at 12:58
show 2 more comments
The short answer is "yes". The energy lost from the photons is taken up by the energy in the gravitational field. Of course energy is a relative concept but if you take the simplest case of a spatially flat homogeneous cosmology with no cosmological constant then the equation for energy in an expanding volume $V(t) = a(t)^3$ is
$E = Mc^2 + \frac{P}{a} - \frac{3a}{\kappa} (\frac{da}{dt})^2 = 0$
$M$ is the fixed mass of cold matter in the volume, $\frac{P}{a}$ is the decreasing radiation energy in the volume with $P$ constant, and the third term is the gravitational energy in the volume which is negative. The rate of expansion $\frac{da}{dt}$ will evolve in such a way that the (negative) gravitational energy increases to keep the total constant and zero.
For a more general discussion of energy conservation in general relativity see my paper http://vixra.org/abs/1305.0034
-
http://mathoverflow.net/questions/41984/smooth-bijection-between-non-diffeomorphic-smooth-manifolds/41993
## Smooth bijection between non-diffeomorphic smooth manifolds?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The "textbook" example of a smooth bijection between smooth manifolds that is not a diffeomorphism is the map $\mathbb{R} \rightarrow \mathbb{R}$ sending $x \mapsto x^3$. However, in this example, the source and target manifolds are diffeomorphic -- just not by the given map. Is there an example of a smooth bijection $X \rightarrow Y$ of smooth manifolds where $X,Y$ are not diffeomorphic at all? (and if so, what?)
(For instance, is it possible to arrange a smooth bijection from a sphere to an exotic sphere, failing to be a diffeomorphism because of the existence of critical points? or do homeomorphisms between different smooth structures on spheres fail to be everywhere smooth in some catastrophic way?)
-
1
I guess the word "manifold" forbids me from just rolling the half-open interval around the circle, and perhaps also from sticking continuum-many discrete points on the line.... – Theo Johnson-Freyd Oct 13 2010 at 8:00
Alternately, maybe you don't want to rest that much on the word "manifold", and instead mean to ask for smooth homeomorphisms that are not diffeomorphisms? – Theo Johnson-Freyd Oct 13 2010 at 8:01
Hi Theo, yes, "manifold" with (what I think of as) the default meaning -- second countable, without boundary, etc.... – D. Savitt Oct 13 2010 at 8:07
(and of course a smooth homeomorphism would be even better, but I'd be happy with just a smooth bijection) – D. Savitt Oct 13 2010 at 8:09
1
In the compact case, every continuous bijection is a homeomorphism. – Greg Kuperberg Oct 13 2010 at 8:36
## 1 Answer
Every smooth manifold has a smooth triangulation, which yields a pseudofunctor from the category of smooth manifolds to the category of PL manifolds. (There is no actual functor; that would be crazy.) If two smooth manifolds are PL isomorphic, then the answer is yes. You can start with the PL isomorphism, and then build a homeomorphism that follows it and that has the property that all derivatives vanish in all directions perpendicular to every simplex. You can build the homeomorphism by induction from the $k$-skeleton to the $(k+1)$-skeleton using bump functions.
The PL Poincaré conjecture is true in dimensions other than 4, so all exotic spheres in the same dimension $n \ge 5$ are PL homeomorphic. (Exotic spheres first appear in dimension 7.) In dimension 4, by contrast, every PL manifold has a unique smooth structure, and it is not known whether there are any exotic spheres.
On the other hand, if the manifolds are homeomorphic but not even PL homeomorphic, then I don't know. It is known that every manifold of dimension $n \ge 5$ has a unique Lipschitz structure, but I do not know a Lipschitz version of the above argument. On the positive side, passing from smooth to Lipschitz is an actual functor, so the answer to a modified question, is there a Lipschitz-smooth homeomorphism, is yes, and you can even make it bi-Lipschitz.
-
Perhaps a side issue, but I don't understand what you mean by pseudofunctor here. To me, a pseudofunctor is a type of map between 2-category or bicategory structures, but I am not aware of any interesting 2-category structure on the category of PL manifolds. – Todd Trimble Oct 13 2010 at 13:43
1
It looks like I used the wrong word; I just didn't check the definition. What I meant is a "mock functor", a partial functor defined for objects and isomorphisms that has no reasonable behavior for general morphisms. And even compositions of isomorphisms are somewhat problematic. – Greg Kuperberg Oct 13 2010 at 14:07
Nice. Thanks, Greg! – D. Savitt Oct 13 2010 at 14:26
Thanks for that response, Greg. – Todd Trimble Oct 13 2010 at 15:52
http://mathhelpforum.com/calculus/188335-integral-definition-gamma-function-problem.html
# Thread:
1. ## Integral definition of gamma function problem
Hey guys got a tricky question (for me at least) that i'm not sure about. Calculus was never really my strong point.
Not sure how to use the math tags very well either so i just attached a picture of my exam revision sheet. I get the basic gist of integration by parts and Laplace Transform functions but I can't seem to get this one going......
I seem to get Gamma Function Symbol (Can't remember the name) equals zero, now i know that's not right haha.
Some help would be great if you guys could manage.
Cheers
EDIT: Attached file : )
2. ## Re: Integral definition of the gamma function problem
Originally Posted by Potato
Hey guys got a tricky question (for me at least) that i'm not sure about. Calculus was never really my strong point.
Not sure how to use the math tags very well either so i just attached a picture of my exam revision sheet. I get the basic gist of integration by parts and Laplace Transform functions but I can't seem to get this one going......
I seem to get Gamma Function Symbol (Can't remember the name) equals zero, now i know that's not right haha.
Some help would be great if you guys could manage.
Cheers
EDIT: Attached file : )
Why is the subject line "Laplace transform"? This is not an LT question.
For (a) try integration by parts:
$\Gamma(x)=\int_0^{\infty} t^{x-1}e^{-t}\; dt= \int_0^{\infty} \left[t^{x-1}\right] \left[e^{-t}\right]\; dt$
If you have any problems with that post a follow up question in this thread.
CB
3. ## Re: Integral definition of gamma function problem
Sorry for my late reply,
Thank you very much, a simple solution, I should have seen that straight away, funny how that works.
And I do apologise for the inaccurate title, I originally recognised the function as a Laplace Transform which is again, my mistake.
Thanks
4. ## Re: Integral definition of gamma function problem
can you please put the solution steps
5. ## Re: Integral definition of gamma function problem
Originally Posted by prime
can you please put the solution steps
What problems are you having with the integration by parts suggested up thread?
(We are here to help you not do your work for you)
CB
6. ## Re: Integral definition of gamma function problem
For the second problem, you know that $\Gamma (x)=(x-1)\Gamma(x-1)$; that means $\Gamma(x-1)=(x-2)\Gamma(x-2)$, and so on. Therefore you get:
$\Gamma(x)=(x-1)\Gamma(x-1)=(x-1)(x-2)\Gamma(x-2)=(x-1)(x-2)(x-3)\Gamma(x-3)$
Thus, when $x$ is a positive integer the descent terminates and you get:
$\Gamma(x)=(x-1)(x-2)(x-3)\cdots 3\cdot 2\cdot\Gamma(1)$
The only thing you have to do now is to prove that $\Gamma(1)=1$
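For completeness, here are the two remaining steps as a sketch. Integrating by parts with $u = t^{x-1}$ and $dv = e^{-t}\,dt$:
$\Gamma(x)=\Big[-t^{x-1}e^{-t}\Big]_0^{\infty}+(x-1)\int_0^{\infty}t^{x-2}e^{-t}\,dt=(x-1)\Gamma(x-1)$
and the base case is a direct integral:
$\Gamma(1)=\int_0^{\infty}e^{-t}\,dt=\Big[-e^{-t}\Big]_0^{\infty}=1$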
http://en.wikipedia.org/wiki/Semigroup
Semigroup
In mathematics, a semigroup is an algebraic structure consisting of a set together with an associative binary operation. A semigroup generalizes a monoid in that a semigroup need not have an identity element. It also (originally) generalized a group (a monoid with all inverses) to a structure in which not every element need have an inverse; hence the name semigroup.
The binary operation of a semigroup is most often denoted multiplicatively: $x \cdot y$, or simply $xy$, denotes the result of applying the semigroup operation to the ordered pair $(x,y)$. The operation is required to be associative so that $(x \cdot y) \cdot z = x \cdot (y \cdot z)$ for all x, y and z, but need not be commutative so that $x \cdot y$ does not have to equal $y \cdot x$ (contrast to the standard multiplication operator on real numbers, where xy = yx).
By definition, a semigroup is an associative magma. A semigroup with an identity element is called a monoid. A group is then a monoid in which every element has an inverse element. Semigroups must not be confused with quasigroups which are sets with a not necessarily associative binary operation such that division is always possible.
The formal study of semigroups began in the early 20th century. Semigroups are important in many areas of mathematics because they are the abstract algebraic underpinning of "memoryless" systems: time-dependent systems that start from scratch at each iteration. In applied mathematics, semigroups are fundamental models for linear time-invariant systems. In partial differential equations, a semigroup is associated to any equation whose spatial evolution is independent of time. The theory of finite semigroups has been of particular importance in theoretical computer science since the 1950s because of the natural link between finite semigroups and finite automata. In probability theory, semigroups are associated with Markov processes (Feller 1971).
Definition
A semigroup is a set $S$ together with a binary operation "$\cdot$" (that is, a function $\cdot:S\times S\rightarrow S$) that satisfies the associative property:
For all $a,b,c\in S$, the equation $(a\cdot b)\cdot c = a\cdot(b\cdot c)$ holds.
More succinctly, a semigroup is an associative magma.
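Because the axioms are finitely checkable on a finite carrier set, the definition can be tested directly. The following is a small illustrative sketch (Python, brute force), not anything from the literature:
```python
from itertools import product

def is_semigroup(elements, op):
    """Brute-force check that (elements, op) is a semigroup."""
    elems = list(elements)
    total = all(op(a, b) in elems for a, b in product(elems, repeat=2))
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elems, repeat=3))
    return total and assoc

print(is_semigroup({0, 1}, min))                    # True: a semilattice
print(is_semigroup({0, 1, 2}, lambda a, b: a - b))  # False: not even closed
```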
Examples of semigroups
• Empty semigroup: the empty set forms a semigroup with the empty function as the binary operation.
• Semigroup with one element: there is essentially just one, the singleton {a} with operation a · a = a.
• Semigroup with two elements: there are five which are essentially different.
• The set of positive integers with addition.
• Square nonnegative matrices of a given size with matrix multiplication.
• Any ideal of a ring with the multiplication of the ring.
• The set of all finite strings over a fixed alphabet Σ with concatenation of strings as the semigroup operation — the so-called "free semigroup over Σ". With the empty string included, this semigroup becomes the free monoid over Σ.
• A probability distribution F together with all convolution powers of F, with convolution as operation. This is called a convolution semigroup.
• A monoid is a semigroup with an identity element.
• A group is a monoid in which every element has an inverse element.
• Transformation semigroups and monoids
Basic concepts
Identity and zero
Every semigroup, in fact every magma, has at most one identity element. A semigroup with identity is called a monoid. A semigroup without identity may be embedded into a monoid simply by adjoining an element $e \notin S$ to $S$ and defining $e \cdot s = s \cdot e = s$ for all $s \in S \cup \{e\}$.[1][2] The notation $S^1$ denotes a monoid obtained from $S$ by adjoining an identity if necessary ($S^1 = S$ for a monoid).[2]
Similarly, every magma has at most one absorbing element, which in semigroup theory is called a zero. Analogous to the above construction, for every semigroup $S$, one defines $S^0$, a semigroup with 0 that embeds $S$.
Subsemigroups and ideals
The semigroup operation induces an operation on the collection of its subsets: given subsets A and B of a semigroup S, their product A*B, written commonly as AB, is the set { ab | a in A and b in B }. In terms of this operation, a subset A is called
• a subsemigroup if AA is a subset of A,
• a right ideal if AS is a subset of A, and
• a left ideal if SA is a subset of A.
If A is both a left ideal and a right ideal then it is called an ideal (or a two-sided ideal).
If S is a semigroup, then the intersection of any collection of subsemigroups of S is also a subsemigroup of S. So the subsemigroups of S form a complete lattice.
An example of semigroup with no minimal ideal is the set of positive integers under addition. The minimal ideal of a commutative semigroup, when it exists, is a group.
Green's relations, a set of five equivalence relations that characterise the elements in terms of the principal ideals they generate, are important tools for analysing the ideals of a semigroup and related notions of structure.
Homomorphisms and congruences
A semigroup homomorphism is a function that preserves semigroup structure. A function f: S → T between two semigroups is a homomorphism if the equation
f(ab) = f(a)f(b).
holds for all elements a, b in S, i.e. the result is the same when performing the semigroup operation after or before applying the map f.
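A concrete instance: string length is a semigroup homomorphism from the free semigroup of nonempty strings under concatenation to the positive integers under addition, since len(st) = len(s) + len(t). A one-line check:
```python
s, t = "abc", "de"
# Length turns concatenation into addition: a semigroup homomorphism.
assert len(s + t) == len(s) + len(t)
```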
A semigroup homomorphism between monoids preserves identity if it is a monoid homomorphism. But there are semigroup homomorphisms which are not monoid homomorphisms, e.g. the canonical embedding of a semigroup $S$ without identity into $S^1$. Conditions characterizing monoid homomorphisms are discussed further. Let $f:S_0\to S_1$ be a semigroup homomorphism. The image of $f$ is also a semigroup. If $S_0$ is a monoid with an identity element $e_0$, then $f(e_0)$ is the identity element in the image of $f$. If $S_1$ is also a monoid with an identity element $e_1$ and $e_1$ belongs to the image of $f$, then $f(e_0)=e_1$, i.e. $f$ is a monoid homomorphism. Particularly, if $f$ is surjective, then it is a monoid homomorphism.
Two semigroups S and T are said to be isomorphic if there is a bijection f : S ↔ T with the property that, for any elements a, b in S, f(ab) = f(a)f(b). Isomorphic semigroups have the same structure.
A semigroup congruence $\sim$ is an equivalence relation that is compatible with the semigroup operation. That is, a subset $\sim\;\subseteq S\times S$ that is an equivalence relation and $x\sim y\,$ and $u\sim v\,$ implies $xu\sim yv\,$ for every $x,y,u,v$ in S. Like any equivalence relation, a semigroup congruence $\sim$ induces congruence classes
$[a]_\sim = \{x\in S\vert\; x\sim a\}$
and the semigroup operation induces a binary operation $\circ$ on the congruence classes:
$[u]_\sim\circ [v]_\sim = [uv]_\sim$
Because $\sim$ is a congruence, the set of all congruence classes of $\sim$ forms a semigroup with $\circ$, called the quotient semigroup or factor semigroup, and denoted $S/\sim$. The mapping $x \mapsto [x]_\sim$ is a semigroup homomorphism, called the quotient map, canonical surjection or projection; if S is a monoid then quotient semigroup is a monoid with identity $[1]_\sim$. Conversely, the kernel of any semigroup homomorphism is a semigroup congruence. These results are nothing more than a particularization of the first isomorphism theorem in universal algebra. Congruence classes and factor monoids are the objects of study in string rewriting systems.
A nuclear congruence on S is one which is the kernel of an endomorphism of S.[3]
A semigroup S satisfies the maximal condition on congruences if any family of congruences on S, ordered by inclusion, has a maximal element. By Zorn's lemma, this is equivalent to saying that the ascending chain condition holds: there is no infinite strictly ascending chain of congruences on S.[4]
Every ideal I of a semigroup induces a subsemigroup, the Rees factor semigroup via the congruence x ρ y ⇔ either x = y or both x and y are in I.
Structure of semigroups
For any subset A of S there is a smallest subsemigroup T of S which contains A, and we say that A generates T. A single element x of S generates the subsemigroup { $x^n$ | $n$ is a positive integer }. If this is finite, then x is said to be of finite order, otherwise it is of infinite order. A semigroup is said to be periodic if all of its elements are of finite order. A semigroup generated by a single element is said to be monogenic (or cyclic). If a monogenic semigroup is infinite then it is isomorphic to the semigroup of positive integers with the operation of addition. If it is finite and nonempty, then it must contain at least one idempotent. It follows that every nonempty periodic semigroup has at least one idempotent.
A subsemigroup which is also a group is called a subgroup. There is a close relationship between the subgroups of a semigroup and its idempotents. Each subgroup contains exactly one idempotent, namely the identity element of the subgroup. For each idempotent e of the semigroup there is a unique maximal subgroup containing e. Each maximal subgroup arises in this way, so there is a one-to-one correspondence between idempotents and maximal subgroups. Here the term maximal subgroup differs from its standard use in group theory.
More can often be said when the order is finite. For example, every nonempty finite semigroup is periodic, and has a minimal ideal and at least one idempotent. For more on the structure of finite semigroups, see Krohn–Rhodes theory.
Special classes of semigroups
Main article: Special classes of semigroups
• A monoid is a semigroup with identity.
• A subsemigroup is a subset of a semigroup that is closed under the semigroup operation.
• A band is a semigroup the operation of which is idempotent.
• A cancellative semigroup is one having the cancellation property:[5] a · b = a · c implies b = c and similarly for b · a = c · a.
• A semilattice is a semigroup whose operation is idempotent and commutative.
• 0-simple semigroups.
• Transformation semigroups: any finite semigroup S can be represented by transformations of a (state-) set Q of at most |S|+1 states. Each element x of S then maps Q into itself x: Q → Q and sequence xy is defined by q(xy) = (qx)y for each q in Q. Sequencing clearly is an associative operation, here equivalent to function composition. This representation is basic for any automaton or finite state machine (FSM). (A small computational sketch follows this list.)
• The bicyclic semigroup is in fact a monoid, which can be described as the free semigroup on two generators p and q, under the relation p q = 1.
• C0-semigroups.
• Regular semigroups. Every element x has at least one inverse y satisfying xyx=x and yxy=y; the elements x and y are sometimes called "mutually inverse".
• Inverse semigroups are regular semigroups where every element has exactly one inverse. Alternatively, a regular semigroup is inverse if and only if any two idempotents commute.
• Affine semigroup: a semigroup that is isomorphic to a finitely-generated subsemigroup of Zd. These semigroups have applications to commutative algebra.
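As referenced in the transformation-semigroups item above, here is an illustrative sketch (not from any particular reference) that encodes transformations of Q = {0, ..., n−1} as tuples of images and closes a generating set under the composition q(xy) = (qx)y:
```python
def compose(f, g):
    """Right-action composition: q(fg) = (qf)g, with f, g tuples of images."""
    return tuple(g[q] for q in f)

def generate(gens):
    """Close a set of transformations under composition."""
    sg, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(f, g) for f in sg for g in frontier}
        new |= {compose(f, g) for f in frontier for g in sg}
        frontier = new - sg
        sg |= frontier
    return sg

a = (1, 2, 0)   # a 3-cycle on {0, 1, 2}
b = (0, 0, 2)   # an idempotent that collapses 1 onto 0
print(len(generate({a, b})))   # order of the transformation semigroup <a, b>
```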
Group of fractions
The group of fractions of a semigroup S is the group G = G(S) generated by the elements of S as generators and all equations xy=z which hold true in S as relations.[6] This has a universal property for morphisms from S to a group.[7] There is an obvious map from S to G(S) by sending each element of S to the corresponding generator.
An important question is to characterize those semigroups for which this map is an embedding. This need not always be the case: for example, take S to be the semigroup of subsets of some set X with set-theoretic intersection as the binary operation (this is an example of a semilattice). Since A.A = A holds for all elements of S, this must be true for all generators of G(S) as well: which is therefore the trivial group. It is clearly necessary for embeddability that S have the cancellation property. When S is commutative this condition is also sufficient[8] and the Grothendieck group of the semigroup provides a construction of the group of fractions. The problem for non-commutative semigroups can be traced to the first substantial paper on semigroups, (Suschkewitsch 1928).[9] Anatoly Maltsev gave necessary and sufficient conditions for embeddability in 1937.[10]
Semigroup methods in partial differential equations
Further information: C0-semigroup
Semigroup theory can be used to study some problems in the field of partial differential equations. Roughly speaking, the semigroup approach is to regard a time-dependent partial differential equation as an ordinary differential equation on a function space. For example, consider the following initial/boundary value problem for the heat equation on the spatial interval (0, 1) ⊂ R and times t ≥ 0:
$\begin{cases} \partial_{t} u(t, x) = \partial_{x}^{2} u(t, x), & x \in (0, 1), t > 0; \\ u(t, x) = 0, & x \in \{ 0, 1 \}, t > 0; \\ u(t, x) = u_{0} (x), & x \in (0, 1), t = 0. \end{cases}$
Let X be the Lp space L2((0, 1); R) and let A be the second-derivative operator with domain
$D(A) = \big\{ u \in H^{2} ((0, 1); \mathbf{R}) \big| u(0) = u(1) = 0 \big\}.$
Then the above initial/boundary value problem can be interpreted as an initial value problem for an ordinary differential equation on the space X:
$\begin{cases} \dot{u}(t) = A u (t); \\ u(0) = u_{0}. \end{cases}$
On a heuristic level, the solution to this problem "ought" to be u(t) = exp(tA)u0. However, for a rigorous treatment, a meaning must be given to the exponential of tA. As a function of t, exp(tA) is a semigroup of operators from X to itself, taking the initial state u0 at time t = 0 to the state u(t) = exp(tA)u0 at time t. The operator A is said to be the infinitesimal generator of the semigroup.
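A minimal numerical sketch of this picture, assuming NumPy/SciPy: discretize A as the standard second-difference matrix with Dirichlet boundary conditions, and let the matrix exponential stand in for exp(tA):
```python
import numpy as np
from scipy.linalg import expm

n = 50                                # interior grid points on (0, 1)
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

x = np.linspace(h, 1.0 - h, n)
u0 = np.sin(np.pi * x)                # near-eigenfunction of the discrete A

t = 0.1
u_t = expm(t * A) @ u0                # the semigroup exp(tA) acting on u0
# Exact heat-equation solution for this u0 is exp(-pi^2 t) * sin(pi x):
print(np.max(np.abs(u_t - np.exp(-np.pi**2 * t) * u0)))  # small (O(h^2))
```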
History
The study of semigroups trailed behind that of other algebraic structures with more complex axioms such as groups or rings. A number of sources[11][12] attribute the first use of the term (in French) to J.-A. de Séguier in Élements de la Théorie des Groupes Abstraits (Elements of the Theory of Abstract Groups) in 1904. The term is used in English in 1908 in Harold Hinton's Theory of Groups of Finite Order.
Anton Suschkewitsch obtained the first non-trivial results about semigroups. His 1928 paper Über die endlichen Gruppen ohne das Gesetz der eindeutigen Umkehrbarkeit (On finite groups without the rule of unique invertibility) determined the structure of finite simple semigroups and showed that the minimal ideal (or Green's relations J-class) of a finite semigroup is simple.[12] From that point on, the foundations of semigroup theory were further laid by David Rees, James Alexander Green, Evgenii Sergeevich Lyapin, Alfred H. Clifford and Gordon Preston. The latter two published a two-volume monograph on semigroup theory in 1961 and 1967 respectively. In 1970, a new periodical called Semigroup Forum (currently edited by Springer Verlag) became one of the few mathematical journals devoted entirely to semigroup theory.
In recent years researchers in the field have become more specialized with dedicated monographs appearing on important classes of semigroups, like inverse semigroups, as well as monographs focusing on applications in algebraic automata theory, particularly for finite automata, and also in functional analysis.
Generalizations
Group-like structures
| | Totality* | Associativity | Identity | Inverses | Commutativity |
|------------------|-----------|---------------|----------|----------|---------------|
| Magma | Yes | No | No | No | No |
| Semigroup | Yes | Yes | No | No | No |
| Monoid | Yes | Yes | Yes | No | No |
| Group | Yes | Yes | Yes | Yes | No |
| Abelian group | Yes | Yes | Yes | Yes | Yes |
| Loop | Yes | No | Yes | Yes | No |
| Quasigroup | Yes | No | No | Yes | No |
| Groupoid | No | Yes | Yes | Yes | No |
| Category | No | Yes | Yes | No | No |
| Semicategory | No | Yes | No | No | No |
*Closure, which is used in many sources to define group-like structures, is an equivalent axiom to totality, though defined differently.
If the associativity axiom of a semigroup is dropped, the result is a magma, which is nothing more than a set M equipped with a binary operation M × M → M.
Generalizing in a different direction, an n-ary semigroup (also n-semigroup, polyadic semigroup or multiary semigroup) is a generalization of a semigroup to a set G with an n-ary operation instead of a binary operation.[13] The associative law is generalized as follows: ternary associativity is (abc)de = a(bcd)e = ab(cde), i.e. the string abcde with any three adjacent elements bracketed. n-ary associativity is a string of length n + (n − 1) with any n adjacent elements bracketed. A 2-ary semigroup is just a semigroup. Further axioms lead to an n-ary group.
A third generalization is the semigroupoid, in which the requirement that the binary operation be total is lifted. As categories generalize monoids in the same way, a semigroupoid behaves much like a category but lacks identities.
Notes
1. Jacobson (2009), p. 30, ex. 5
2. ^ a b Lawson (1998)
3. Lothaire (2011) p.463
4. Lothaire (2011) p.465
5. B. Farb, Problems on mapping class groups and related topics (Amer. Math. Soc., 2006) page 357. ISBN 0-8218-3838-5
6. M. Auslander and D.A. Buchsbaum, Groups, rings, modules (Harper&Row, 1974) page 50. ISBN 0-06-040378-X
7. G. B. Preston (1990). "Personal reminiscences of the early history of semigroups". Retrieved 2009-05-12.
8. Maltsev, A. (1937), "On the immersion of an algebraic ring into a field", Math. Annalen 113: 686–691, doi:10.1007/BF01571659.
9. ^ a b Suschkewitsch (1928)
10. Dudek, W.A. (2001), "On some old problems in n-ary groups", Quasigroups and Related Systems 8: 15–36
References
General references
• Howie, John M. (1995), Fundamentals of Semigroup Theory, Clarendon Press, ISBN 0-19-851194-9 .
• Clifford, A. H.; Preston, G. B. (1961), The algebraic theory of semigroups, volume 1, American Mathematical Society .
• Clifford, A. H.; Preston, G. B. (1967), The algebraic theory of semigroups, volume 2, American Mathematical Society .
• Grillet, Pierre Antoine (1995), Semigroups: an introduction to the structure theory, Marcel Dekker, Inc.
Specific references
• Feller, William (1971), An introduction to probability theory and its applications. Vol. II., Second edition, New York: John Wiley & Sons, MR 0270403 .
• Hille, Einar; Phillips, Ralph S. (1974), Functional analysis and semi-groups, Providence, R.I.: American Mathematical Society, MR 0423094 .
• Suschkewitsch, Anton (1928), "Über die endlichen Gruppen ohne das Gesetz der eindeutigen Umkehrbarkeit", Mathematische Annalen 99 (1): 30–50, doi:10.1007/BF01459084, ISSN 0025-5831, MR 1512437 .
• Kantorovitz, Shmuel (2010), Topics in Operator Semigroups., Boston, MA: Birkhauser .
• Jacobson, Nathan (2009), Basic algebra 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1
• Lawson, M.V. (1998), Inverse semigroups: the theory of partial symmetries, World Scientific, ISBN 978-981-02-3316-7
• Lothaire, M. (2011), Algebraic combinatorics on words, Encyclopedia of Mathematics and Its Applications 90, With preface by Jean Berstel and Dominique Perrin (Reprint of the 2002 hardback ed.), Cambridge University Press, ISBN 978-0-521-18071-9, Zbl 1221.68183
http://mathoverflow.net/questions/14717?sort=oldest
## Mittag-Leffler condition: what’s the origin of its name?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Why is the Mittag-Leffler condition on an inverse system of, say, abelian groups, which ensures that the first derived functor of the inverse limit vanishes, so named?
-
## 1 Answer
The wording of your question suggests that you're familiar with the "classical" Mittag-Leffler theorem from complex analysis, which assures us that meromorphic functions can be constructed with prescribed poles (as long as the specified points don't accumulate in the region).
It turns out - or so I'm told, I must admit to never working through the details - that parts of the proof can be abstracted, and from this point of view a key ingredient (implicit or explicit in the proof, according to taste) is the vanishing of a certain $\lim_1$ group -- as assured by the "abstract" ML-theorem that you mention.
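For reference, here is the statement in question (standard, and not tied to any particular source): an inverse system of abelian groups $\cdots \to A_{n+1} \to A_n \to \cdots \to A_1$ satisfies the Mittag-Leffler condition if the images stabilize, i.e.

$$\text{for every } n \text{ there is } m \geq n \text{ such that } \operatorname{im}(A_{m'} \to A_n) = \operatorname{im}(A_m \to A_n) \text{ for all } m' \geq m,$$

and the "abstract" theorem is that this stabilization forces $\varprojlim{}^1 A_n = 0$.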
I'm not sure where this was first recorded - I hesitate to say "folklore" since that's just another way of saying "I don't really know and am not a historian". One place this is discussed is in Runde's book A taste of topology: see Google Books for the relevant part.
IIRC, Runde says that the use of the "abstract" Mittag-Leffler theorem to prove the "classical" one, and to prove things like the Baire category theorem, can be found in Bourbaki. Perhaps someone better versed in the mathematical literature (or at least better versed in the works of Bourbaki) can confirm or refute this?
-
Gopal Prasad said in class that the abstract M-L condition was discovered by Bourbaki along with a very slick proof. – Harry Gindi Feb 9 2010 at 0:53
Thanks Yemon. I followed your suggestion and took a look at Runde's book (Appendix A) and the "abstract" Bourbaki version of M-L for a sequence of complete metric spaces and continuous functions $f_n:X_n\rightarrow X_{n-1}$ with dense image. It looks like this can be further abstracted to the "algebraic" M-L. I'll do the details later on to see if this is the case. By the way, Runde quotes Bourbaki's "General Topology" volume. I don't have access to that one right now but I'll check it tomorrow. Thanks again. – F Zaldivar Feb 9 2010 at 1:26
This is in Bourbaki's General Topology, Chapter II, section 3.5. The main theorem is attributed to Mittag-Leffler, and is concerned with inverse systems of "complete Hausdorff uniform spaces". The Mittag Leffler condition mentioned there says the functions in the system have dense image. The usual theorem about inverse limits is a corollary, for sets with the 'discrete uniformity'. Classical Mittag-Leffler is given as an example of the main theorem. The spaces there are essentially holomorphic functions on balls centred at 0, continuous on the boundary, with the uniform metric. – Zavosh Feb 9 2010 at 2:15
The condition for groups that the question asks about is in Bourbaki's Algebra. It appears that the name for the condition on groups comes from the name of the condition on complete hausdorff uniform spaces, which indeed comes from the classical case. So it appears that Bourbaki named it the M-L condition because they wrote the book on topology first (yes, I know it is book 3, but it was published before book 2). – Harry Gindi Feb 9 2010 at 2:24
http://math.stackexchange.com/questions/261069/two-steps-away-from-the-hamilton-cycle?answertab=active
# Two Steps away from the Hamilton Cycle
Assume an at least $2$-vertex connected, cubic, bipartite, planar graph $G$ that contains a Hamilton cycle (HC) $abcdefg\dots yx\dots za$ (in fact $G$ would then have at least four HCs, see here; it is $yx$ due to the picture I've drawn below and an assumption disproven here). My question is:
How can one deviate from a given Hamilton cycle in such a way that one introduces exactly two errors? Or put differently: does introducing two errors rely on certain subgraphs?
By errors I mean, that two vertices are met twice, while two others are not met at all.
EDIT: To be a little more specific: The Hamilton cycle and the deviation from it with two errors both start and end at the same vertex and are of the same length. We don't add or remove edges.
With the notation above, the deviated cycle would be e.g. $adef\color{red}{ef}\dots yx \dots za$, so the $\color{grey}{cd}$-edge is missing. I found three possibilities that are PAINTed below. Let the edges of the HC be coloured in $\color{green}{green}$ and edges that introduce errors in $\color{red}{red}$. $\color{black}{Black}$ edges don't contribute to the case under investigation.
Figure 1: We can introduce two errors if we go directly from $a$ to $d$, thereby miss $bc$, and then:
1. use backtracking along the HC $ad\dots ef\color{red}{ef}\dots yx$ $\scriptstyle \text{(this can happen at any edge along the HC)}$
2. use backtracking aside the HC $ad\dots e\color{red}{xe}f\dots yx$ $\scriptstyle \text{(this also can happen everywhere on the HC)}$ or
3. make a detour at a second square $ad\dots e\color{red}{xy}f\dots yx$ $\scriptstyle \text{(only here we need a 2nd square)}$.
EDIT For Figure 1)ii. I found another version without backtracking:
Let the part of the HC be $\dots abcd\dots ef \dots yx\dots$, so that there is another cycle part $C_{fy}$ joining $f$ and $y$. Then two errors can be introduced as follows:
• make the usual shortcut at a square, missing the $bc$-edge
• go from $e$ to $x$
• walk along $C_{fy}$ in the opposite direction
• again go from $e$ to $x$, which is traversed twice
You may think of backtracking as a $2$-cycle. This variant may also happen everywhere.
Figure 2: "$\color{blue}4$-$\color{goldenrod}2$-hexagon": Again the original HC $\underbrace{abcd}_{\color{blue}4} \dots \underbrace{yx}_{\color{goldenrod}2}$ is depicted in green. Two errors can be introduced by $a\color{red}{xy}d\dots yx$.
My question rephrased:
$\hskip1.3in$ Are these subgraphs the only ones to introduce two errors?
Some remarks:
• In Figure 1 the vertices $d$ and $e$ don't need to be adjacent as indicated by the dashed line.
• In both figures, I think (not sure) that I could have also chosen $yx$ at the end.
• I tried to use the "$2$-$2$-$2$-hexagon" (where Hamilton runs through like $ab...cd...yx$, so without the $bc$-, $dy$- and $xa$-edges) without success
• and sorry for the PAINT work (feel free to improve it!)...
.. and I'm pretty sure that the answer is no, see below...
EDIT: Some more remarks concerning a general solution
I also thought about a more general approach by using $n$th powers of adjacency (sub)matrices, which corresponds to doing $n$ steps on the subgraphs. It's even possible to exclude backtracking as you can read here.
But since the set of adjacency (sub)matrices, i.e. the (sub)graphs, should be the result of such an approach, I admit that for now I don't know how to work this out. How does the HC come into play then?
Thanks for your help and special thanks to Brian Rushton,...
-
What's the motivation for this? It seems like something that underlies an interesting problem in another area. How did you come to it? – Brian Rushton Dec 30 '12 at 4:30
Would $abeafbg\ldots za$ count as a two-error path? – Hagen von Eitzen Jan 1 at 22:52
@Hagen Yes, you miss cd and get ab twice. Do you have another subgraph? Sorry for the late answer... – draks ... Jan 3 at 13:05
## 2 Answers
Ok, I think one can split the problem into two parts on two subgraphs:
1. Miss two vertices and
2. meet two vertices twice.
This was already indicated in Fig. 1, where the upper square subgraph was always the same and only the lower one varied. When you miss two vertices, the number of edges used in that subgraph drops by two as well (two additional edges are used in the "meet twice" case).
I think I found another way to miss two vertices:
Combine this with, let's say, backtracking or any other option to meet vertices twice from the question's figure 1 (edit: or the example below). I think it is really different from the way shown above, since the vertices that are missing are not connected by an edge as above.
Let the 10-vertex Hamiltonian path be $abcdefghij$; then the two-error variant is 8 vertices long and may be written as $afghicdj$. It should be possible to extend the inner hexagon to larger structures by introducing additional edges into the outer squares.
EDIT
Here's another example of how to meet two vertices twice:
Let the part of the HC be $abcd$; then the path on the rhs is $abadcd$, where $a$ and $d$ are met twice. The difference from example 1.iii. (lower part) is the HC we start out with.
So the ones shown above are not the only ones...
Now for something analytical. Think of $A$ as the adjacency matrix of the subgraph with $n$ vertices (without the black edges going out of the subgraph itself, so it might not be cubic any longer, which is not important for the moment). A Hamiltonian path would need $n$ steps to traverse it. Starting from $\vec{v}_0=(1,0,0,\dots ,0)^T$ we get $\vec{v}_n=A^n\vec{v}_0=(a_1,a_2,\dots,a_n)^T$, where $a_k$ denotes the number of ways from the starting vertex to vertex $k$. So $a_n$ would contain the Hamiltonian paths alongside other paths that reach vertex $n$ in $n$ steps but maybe don't touch all other points.
To miss two vertices we need to check whether it's possible to reach $n$ after $n-2$ steps as well and meanwhile meet $n-2$ vertices. Similarly for the "meet twice" case.
Instead of using $A^n$, one could use propagation without backtracking (see here) to get rid of some of the obsolete paths (at least in the "missing" case; in the "meeting" case backtracking should be allowed). But all in all, I personally think that one has to check infinitely many subgraphs for the property sketched above, which is not possible in finite time.
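As a small illustration of the matrix-power bookkeeping (a made-up toy example in R, not part of the argument above):

```r
# Entry [i, j] of A %^% m counts walks of length m from vertex i to vertex j,
# including walks that backtrack or revisit vertices -- exactly the
# over-counting discussed above.
library(expm)                      # provides the matrix power operator %^%
A <- matrix(c(0, 1, 0, 1,
              1, 0, 1, 0,
              0, 1, 0, 1,
              1, 0, 1, 0), nrow = 4, byrow = TRUE)   # a square a-b-c-d
v0 <- c(1, 0, 0, 0)                # start all walks at vertex a
as.vector((A %^% 3) %*% v0)        # length-3 walks from a: 0 4 0 4
```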
The question remains whether these are all possible subgraphs if the largest face degree is $6$.
-
Just to clarify, it seems like you want your deviation to still be an edge path (no jumping from one vertex to another). Is that right? In fact, it seems you want a closed edge path, one that starts and stops at the same vertex.
When you introduce your first kind of error (missing two vertices completely), you have to remove three line segments, the ones going into, between, and out of the pair of points (call them $a$ and $b$). Removing these breaks the closed circuit into a single arc, whose end points must be connected by some new path to form a new closed circuit. Any such path will either:
1. Go directly from $a$ to $b$, or
2. Go through some even number of other points on the way to $b$ (even by bipartiteness).
In case 1, the three deleted edges and the new edge start and end at the same points, and so they form a simple closed curve separating the plane. So we get a square, possibly with part of the graph inside and part outside. I do not know if your conditions prevent this possibility. We then need to go through two vertices an extra time. So we can only do this by adding a new edge somewhere. We can either follow this new edge by its inverse, and continue onward (i.e. backtracking) or not. If not, call the vertex we landed at x. We must go over some subsequent edge to a point y. However, I believe your figure 3 is false, as x and y need not be connected by an edge of the Hamiltonian cycle. In any case, this can't go on forever, as we can't visit any other vertex twice, so we must return immediately to the place where we left off (f in your picture). As above, this does give a square, although it may be non-empty inside.
In case 2, our new path can only go through two other vertices, since otherwise we added too many errors. Thus, we must have a hexagon, although, again, some of the graph might be inside of the hexagon and some outside.
So, the only things not covered by your pictures are:
1. The square/hexagon might contain parts of the graph inside and outside at the same time.
2. The new edge added between x and y might not be an edge in the old Hamiltonian cycle.
This proof method should generalize easily, although I haven't thought it through. Adding twice as many errors either gives copies of the above errors, or octagon errors, etc. I may post more. Thanks for sharing your problem! Let me know if there are any problems in my analysis.
-
Brian, thanks for your answer, here are mine: Yes, the path should be closed (it was/is a cycle) and of the same length as the original HC. Not sure if this is called an edge path, but it sounds right. (i) But I don't want to remove/add edges to the graph. (ii) I admit the drawings are a little wrong since some of the vertices miss the cubic property (I will update it). (iii) Also "2 squares" doesn't really fit. I removed the $ex$/$fy$-edge(s) (see the post revisions), because you don't need them to introduce two errors. – draks ... Dec 30 '12 at 15:54
(iv) How is Figure 3 false? When the original HC is $abcd..ef..yx$ and you have exactly these two squares (here the subtitle is correct), you could introduce two errors if you take the path $ad\dots e\color{red}{xy}f\dots xy$. Maybe a picture of yours would help. Thanks for your time so far... I will put some of my motivation in the next edit together with some more clarifications... – draks ... Dec 30 '12 at 15:58
Figure three isn't false, it just splits into two cases. The points x and y might not have a green edge between them; you can think of the path going into x, diving down and to the left and coming back up and to the right to y in a kind of "hairpin" turn. – Brian Rushton Dec 31 '12 at 13:18
What if the original path is abcd...ef...y...x and the new path is abcd...exyf...y...x? That's what I was saying; I should draw a picture, but I'm doing this on an iPad. – Brian Rushton Jan 2 at 1:46
Where do you go from the second $y$? To $f$,$x$ or back to where you came from (call it $t$), but anyway you introduce more than two errors. Draw the picture by hand, make a photo and post that. – draks ... Jan 2 at 19:00
show 1 more comment
http://mathoverflow.net/questions/16991/what-are-the-connections-between-pi-and-prime-numbers/19501
## What are the connections between pi and prime numbers?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I watched a video that said the probability for Gaussian integers to be relatively prime is an expression in $\pi$, and I also know about $\zeta(2) = \pi^2/6$ but I am wondering what are more connections between $\pi$ and prime numbers?
-
You can use the fact that zeta(2)=pi^2/6 to prove the infinitude of primes. If there were finitely many, then the Euler product for zeta(2) would be a rational number, contradicting the irrationality of pi. – Ben Linowitz Mar 3 2010 at 19:54
This question should in my opinion be Community Wiki. – Grétar Amazeen Mar 3 2010 at 20:56
More generally, for all positive integers $n$, $\zeta(2n)$ is a rational multiple of $\pi^{2n}$. – Gerry Myerson Mar 3 2010 at 22:42
See also this related question - mathoverflow.net/questions/21367/… – François G. Dorais♦ Jun 19 2010 at 14:52
## 7 Answers
Well, first of all, $\pi$ is not just a random real number. Almost every real number is transcendental so how can we make the notion "$\pi$ is special" (in a number-theoretical sense) more precise?
Start by noticing that $$\pi=\int_{-\infty}^{\infty}\frac{dx}{1+x^2}$$ This already tells us that $\pi$ has something to do with rational numbers. It can be expressed as "a complex number whose real and imaginary parts are values of absolutely convergent integrals of rational functions with rational coefficients, over domains in $\mathbb{R}^n$ given by polynomial inequalities with rational coefficients." Such numbers are called periods. Coming back to the identity $$\zeta(2)=\frac{\pi^2}{6}$$ There is a very nice proof of this (that at first seems very unnatural) due to Calabi, it shows that $$\frac{3\zeta(2)}{4}=\int_0^1\int_0^1\frac{dxdy}{1-x^2y^2}$$ by expanding the corresponding geometric series, and then evaluates the integral to $\pi^2/8$ (So yes, $\pi^2$ and all other powers of $\pi$ are periods.) But the story doesn't end here as it is believed that there are truly deep connections between values of zeta functions (or L-functions) and certain evaluations involving periods, such as $\pi$. Another famous problem about primes is Sylvester's problem of which primes can be written as a sum of two rational cubes. So one studies the Elliptic curve $$E_p: p=x^3+y^3$$ and one wants to know if there is one rational solution, the central value of the corresponding L-function will again involve $\pi$ up to some integer factor and some Gamma factor. Next, periods are also values of multiple zeta functions: $$\zeta(s_1,s_2,\dots,s_k)=\sum_{n_1>n_2>\cdots>n_k\geq 1}\frac{1}{n_1^{s_1}\cdots n_k^{s_k}}$$ And they also appear in other very important conjectures such as the Birch and Swinnerton-Dyer conjecture. But of course all of this is really hard to explain without using appropriate terminology, the language of motives etc. So, though, this answer doesn't mean much, it's trying to show that there is an answer to your question out there, and if you study a lot of modern number theory, it might just be satisfactory :-).
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
The probability that two Gaussian integers are relatively prime is $6/(\pi^2 K) = 0.66370080461385348\cdots$, where $K= 1 - \frac{1}{3^2}+\frac{1}{5^2}-\frac{1}{7^2}+\cdots$ (Catalan's constant). There is no known simple expression for $K$ in terms of $\pi$. See http://www.springerlink.com/content/y826m64747254t87.
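As a quick numerical sanity check of that constant (an illustrative sketch in R; the truncation point is arbitrary):

```r
# Catalan's constant from its alternating series, then the quoted 6/(pi^2 K).
k <- 0:1e6
K <- sum((-1)^k / (2 * k + 1)^2)   # ~ 0.9159655942
6 / (pi^2 * K)                     # ~ 0.6637008046, matching the value above
```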
-
A formula that surely belongs here linking $\pi$ and the primes is `$$2.3.5.7...=4\pi^2.$$` This is obtained via a zeta regularization in a similar way to the more well-known $\infty!=\sqrt{2\pi}$ (see e.g. here for a short discussion of this). However, to find the product of the primes, one uses the prime zeta function `$$\sum_{p\; prime} \frac{1}{p^s}$$` which has the unfortunate property of having infinitely many singularities between 0 and 1 which breaks the standard regularization procedure. E. Muñoz García and R. Pérez Marco circumvent this problem (literally) by adding in an extra variable and taking the limit from a different direction.
-
What is funny is that they consider 1 to be a prime number... (although this makes no difference for the product). – ACL Mar 15 at 17:20
I certainly would not say that that divergent product has a value. Writing the equation that way causes much more harm, in my opinion, than the slight good that it communicates to people who already know exactly what is trying to be communicated. – Greg Martin Mar 16 at 6:18
There are a few formulas relating $\pi$ to arithmetic functions. For example, if $\sigma(n)$ is the sum of the divisors of $n$, then $\sum_{k=1}^n\sigma(k)=\pi^2n^2/12+O(n\log n)$. If $d(n)$ is the number of divisors of $n$, then $\sum_{n=1}^{\infty}n^{-2}d(n)=\pi^4/36$. If $\phi(n)$ is the Euler phi-function, then $\sum_{k=1}^n\phi(k)=3n^2\pi^{-2}+O(n\log n)$. These all appear in Section 3.5 of Eymard and Lafon, The Number $\pi$.
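A quick numerical illustration of the first asymptotic (a brute-force sketch; the cutoff is arbitrary):

```r
# Compare sum of sigma(k) for k <= n against pi^2 n^2 / 12; the ratio
# should approach 1 as n grows.
n <- 2000
sigma <- sapply(1:n, function(m) sum((1:m)[m %% (1:m) == 0]))
sum(sigma) / (pi^2 * n^2 / 12)
```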
-
By the way, do you happen to know what constant appears if we change $\sigma(n)$ into the sum $\sigma'$ of all Gaussian integer divisors of n with positive real parts? I.e. this sequence: oeis.org/A078930 Computations show that $\sum_1^n\sigma'(i)\approx Cn^2$, where C=1.7972... – mathreader Oct 18 at 9:44
There is a nice story, initiated by L. Van Hamme, which relates several of Ramanujan's formulas for $\pi$ to supercongruences modulo powers of primes. The simplest way to see this route is to take a look at (my) http://arxiv.org/abs/0805.2788 .
-
More elementary: the probability that two positive integers have GCD $1$ is $6/\pi^2 = 1/\zeta(2)$, because the probability that a prime $p$ divides the GCD is $1/p^2$, by considering each $p \times p$ block of pairs of positive integers. More generally, the probability that $k$ positive integers have GCD $1$ is $1/\zeta(k)$ by a similar argument.
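This is easy to check empirically (a quick simulation sketch; the range and sample size are arbitrary):

```r
# Estimate P(gcd(a, b) = 1) for uniform random integers, compare with 6/pi^2.
set.seed(1)
n <- 1e5
a <- sample.int(1e6, n, replace = TRUE)
b <- sample.int(1e6, n, replace = TRUE)
gcd <- function(x, y) if (y == 0) x else gcd(y, x %% y)
mean(mapply(gcd, a, b) == 1)   # close to 0.608
6 / pi^2                       # 0.6079271...
```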
-
Here is an example of a way to use $\pi$ to prove the infinitude of primes without calculating its value, or using the relatively deep fact that $\pi$ is irrational, but starting from the knowledge of $\zeta(2)$ and $\zeta(4).$ Suppose that there were only finitely many prime numbers $2= p_{1}, 3= p_{2}, \ldots, p_{k-1},p_{k}.$ From the formulae $\sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6}$ and $\sum_{n=1}^{\infty} \frac{1}{n^{4}} = \frac{\pi^{4}}{90}$, we may conclude after the fashion of Euler that (respectively) we have $\prod_{j=1}^{k} \frac{p_{j}^{2}}{p_{j}^{2}-1} = \frac{\pi^{2}}{6}$ and $\prod_{j=1}^{k} \frac{p_{j}^{4}}{p_{j}^{4}-1} = \frac{\pi^{4}}{90}.$ Squaring the first equation and dividing by the second leads quickly to $\prod_{j=1}^{k} \frac{p_{j}^{2}+1}{p_{j}^{2}-1} = \frac{5}{2}$, so $5\prod_{j=1}^{k} (p_{j}^{2}-1) = 2 \prod_{j=1}^{k}(p_{j}^{2}+1).$ This is a contradiction, since the product on the left is certainly divisible by $3$, whereas every term in the rightmost product except that for $j = 2$ is congruent to $-1$ (mod $3$), so we obtain $0 \equiv (-1)^{k}$ (mod $3$), which is absurd. (I would be grateful if anyone knows a reference for a proof like this. I can't believe that I am the first person to think of it.)
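For the record, the constant $5/2$ above is just the ratio
$$\frac{\zeta(2)^2}{\zeta(4)} = \frac{(\pi^{2}/6)^{2}}{\pi^{4}/90} = \frac{90}{36} = \frac{5}{2}.$$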
-
Neat. Is there an easier way to see that $\zeta(4)/\zeta(2) = 5/2$? – François G. Dorais♦ Mar 16 at 0:18
@Francois: I do not know. I think that there are quite a few instances in number theory where computations which eventually have a rational answer require $\pi$ in an apparently essential fashion along the way. – Geoff Robinson Mar 16 at 1:01
@Geoff: Another way of seeing this: $\sum_{(a,b)=1} \frac{1}{a^2b^2}=\frac{5}{2}$. – i707107 Mar 16 at 4:22
@Francois: A typo, should be $\zeta(4)/\zeta^2(2)$. – i707107 Mar 16 at 4:24
@i707107: Are you saying that you can derive that formula without using values of $\zeta$ at all, or that you can calculate it from $\zeta(2)$ alone? – Geoff Robinson Mar 16 at 14:51
show 2 more comments
http://mathoverflow.net/revisions/64576/list
In addition to what has been said, I would like to mention the following aspect: a group action usually does not come alone. When $G$ acts on a set $X$ you will almost surely be interested also in the induced action of $G$ on certain kinds of functions on $X$: think of a smooth manifold $M$ and of smooth functions, or of smooth tensor fields on $M$, or what not. Then the induced group action can either be viewed as a right action when $G$ acts from the left on $X$, or you have to plug in a $^{-1}$ to turn things again into a left action. In various situations (say equivariant maps in the dual of a Lie algebra etc.) it might become a notational disaster if you want to specify every kind of action with a separate symbol. However, it might be important to keep track of whether you have a left or a right action.
So my habit is to denote the action of a group element $g$ on some object $x \in X$ by $g \triangleright x$ if it is a left action and by $g \triangleleft x$ or better $x \triangleleft g$ if it is a right action. Then you don't have to bother whether you have already included the $^{-1}$ in the coadjoint action to make it a left action or just stay with a right action :)
http://leftcensored.skepsi.net/2011/04/02/a-very-short-and-unoriginal-introduction-to-snow/
# A very short and unoriginal introduction to snow
Posted on April 2, 2011 by
As Jian-Feng rightly pointed out in a comment on my guide to setting up snow on the OSC cluster, it was probably somewhat cavalier of me to say:
Getting `snow` to run properly on single machines, or even with a cluster of machines via `ssh` connections, is fairly trivial.
In an effort to redeem myself, I provide this very short and unoriginal introduction to using `snow`. But first a caveat: to make the most of parallel processing in R, or any other environment, the problem you are trying to solve must be amenable to being broken up into smaller, (mostly) independent pieces. In other words, the results from one piece should not be dependent on the results from another. In statistics, depending on the problem at hand, this may or may not apply. Bootstrapping, a simple example of which I provide below, is one place where parallel processing can provide excellent returns from parallelization. On the other hand, a typical maximum likelihood estimate using, for instance, a BFGS optimization routine would gain little from parallel processing since step $$n+1$$ is dependent on the results of step $$n$$. (Unsurprisingly, things are a bit more complicated than this, and if you are really interested in learning about parallel processing, you may want to start with reading the Wikipedia entry.)
This simple example demonstrates how to calculate bootstrapped sample means of a given vector in parallel across a cluster. First, load the `snow` and `rlecuyer` libraries. Of course, `snow` is what provides the parallel processing, but `rlecuyer` is equally important as it guarantees the random numbers generated in each process are independent (`snow` also supports the `rsprng` library).
```
> library(snow)
> library(rlecuyer)
```
Now set up some sample data. Here I take 100 random draws, with replacement, from the integers in $$[0,5]$$.
```
> x <- sample(0:5, 100, replace = TRUE)
> mean(x)
[1] 2.64
```
Define a simple function to calculate a single bootstrapped mean from a given vector:
```
> bs.mean <- function(v) {
+   s <- sample(v, length(v), replace = TRUE)
+   mean(s)
+ }
```
Now it’s time to set up the cluster. Here I set up a SOCK-type connection, which can be used to set up multiple R instances on the local machine and/or to set up R instances on remote machines through `ssh` connections. `snow` offers other connection options that may be more convenient or necessary depending on your environment (for instance, MPI was needed on the OSC cluster).
`> cl <- makeCluster(c("localhost", "localhost"), type = "SOCK")`
Here, `c("localhost", "localhost")` tells snow where to set up the R instances, while `type = "SOCK"` is obviously the connection type. If I also wanted to run a single instance on a remote machine named `chuck`, I could specify `c("localhost", "localhost", "chuck")`. In this case, I would be prompted for my `ssh` password for `chuck`, though `snow` would take care of the rest once the connection was authenticated.
Once the connections are set up, you will want to provide unique random seeds on each of the instances.
```
> clusterSetupRNG(cl)
[1] "RNGstream"
```
The return value, `RNGstream`, just tells you what type of RNG was set up. Finally, it’s time to do some work.
```
> clusterCall(cl, bs.mean, x)
[[1]]
[1] 2.81

[[2]]
[1] 2.61
```
`clusterCall` instructs all instances in `cl` to execute the function `bs.mean` on the vector `x`, both of which we defined above. The results are returned in a list with a length equal to the number of instances; e.g., had we included `chuck` in our call to `makeCluster`, `clusterCall` would have returned a list of three bootstrapped means. Because `bs.mean` doesn’t depend on anything calculated by the other processes, these bootstrapped means are calculated in parallel.
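One natural extension, not covered in the original post (so treat it as a sketch): rather than asking each worker for a single bootstrapped mean, ship `bs.mean` and `x` to the workers once with `clusterExport`, then let each worker compute a whole batch of replicates with `clusterApply`.

```
> clusterExport(cl, c("bs.mean", "x"))
> reps <- clusterApply(cl, rep(250, 2), function(k) replicate(k, bs.mean(x)))
> boot.means <- unlist(reps)
> length(boot.means)
[1] 500
```

Here each of the two workers computes 250 replicates, and `unlist` collects the 500 bootstrapped means into a single vector.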
When you are done with the cluster, you should always stop it. Otherwise, you may have to kill R instances by hand.
`> stopCluster(cl)`
Like I said at the outset, this was just a very short and unoriginal introduction to parallel processing with `snow`. There are many other examples available online, a couple of which I provide links to below.
• Luke Tierney’s (the author of `snow`) detailed guide can be found here: http://www.stat.uiowa.edu/~luke/R/cluster/cluster.html
• snow Simplified: http://www.sfu.ca/~sblay/R/snow.html
• Some REvolution Computing alternatives to snow are introduced here.
• Tal Galili provides a guide for parallel processing on Windows over at R-statistics.
This entry was posted in R. Bookmark the permalink.
### 4 Responses to A very short and unoriginal introduction to snow
1. Tal Galili says:
Hi there,
I wrote about this topic (for windows) some time ago, here:
http://www.r-statistics.com/2010/04/parallel-multicore-processing-with-r-on-windows/
And I have a question: is there an (easy) way to have snow use different cores?
I remember that when I played with it, it simply streamed all of my SOCKets into the same CPU.
p.s: please add the “subscribe to comments” plugin
Cheers,
Tal
• Jason says:
Great. Thanks for the reference, Tal. I’ve added a link to your page. As for guaranteeing that R uses all available cores, I am not sure. I operate almost exclusively in a Unix environment, where cores are used as needed (the OS does the scheduling). Fairly recently, I successfully used `snow` with `rgenoud` and 4 cores on Windows without any modification to the batch file I run on Unix systems. I believe the above example would work on Windows as well. If I remember, I will try it out some time this week.
2. Jian-Feng, Mao says:
Dear Jason, it is very GREAT. Your introduction will ease my way into implementing parallel computation with R, though I do not have much knowledge of computers. Thanks a lot.
• Jason says:
No problem. Though it’s quite introductory, it should get you started with the basics. I will also try to add more links as I run across them. I am sure there are many very good introductions available.
http://mathhelpforum.com/math-topics/145324-fluid-flow-pipe.html
# Thread:
1. ## Fluid flow in pipe.
Hi,
I have what I'm sure is an easy question for a maths genius. However I need to know the answer for an insurance claim:-
"How much water in litres will flow through a 20mm pipe in one hour with the water pressure at 3 bar"
Any answers will be much appreciated.
Thank you
welshman2010
p.s. I hope I've posted this in the right section
2. You might be able to find a table somewhere that gives flow rates for pipes of various sizes and various pressures. Once you get that, you just have to convert the units.
I think you need to know the length of the pipe. The water will flow more slowly through a longer pipe, right?
If you can't find a flow rate online, I think it's possible to calculate it. You'd need someone who knows more about fluid dynamics than I do.
- Hollywood
3. Hello welshman2010
Welcome to Math Help Forum!
Originally Posted by welshman2010
Hi,
I have what I'm sure is an easy question for a maths genius. However I need to know the answer for an insurance claim:-
"How much water in litres will flow through a 20mm pipe in one hour with the water pressure at 3 bar"
Any answers will be much appreciated.
Thank you
welshman2010
p.s. I hope I've posted this in the right section
The equation that gives the volume, $V$, of fluid flowing through a pipe in time $t$ is:
$V = \frac{\pi P r^4 t}{8\eta l}$
where:
$P$ is the pressure difference between the ends of the pipe
$r$ is the radius of the pipe
$\eta$ is the viscosity of the fluid
$l$ is the length of the pipe
This is known as Poiseuille's Law.
So, you will need to know the length of the pipe and, to get an exact answer, the approximate temperature of the water (since this affects its viscosity).
You need to use the correct units to get a sensible answer. In the case you mention:
Pressure $P=3$ bar = $3\times 10^6$ dynes per square cm
$r = 1$ cm (assuming the $20$ mm refers to the diameter of the pipe)
$t = 3600$ seconds
At $20^o$ C, $\eta \approx 1\times 10^{-2}$ dyne-sec per square cm
So I reckon that for a 1 metre length of pipe, this works out at about $4.2\times 10^9 \text{ cm}^3 = 4.2\times 10^6$ litres per hour.
If you double the length of the pipe, you'll halve the rate of flow.
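For concreteness, here is the same calculation as a short R sketch (same CGS values and assumptions as above; note that Poiseuille's law presumes laminar flow):

```r
# V = pi * P * r^4 * t / (8 * eta * l), everything in CGS units
P   <- 3e6    # pressure difference, dyn/cm^2 (3 bar)
r   <- 1      # pipe radius, cm (20 mm diameter)
t   <- 3600   # time, s (one hour)
eta <- 1e-2   # viscosity of water at about 20 C, poise
l   <- 100    # pipe length, cm (1 m)
V <- pi * P * r^4 * t / (8 * eta * l)   # volume in cm^3, about 4.2e9
V / 1000                                # litres, about 4.2e6
```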
Grandad
4. Thanks for stepping in with the formula I needed.
- Hollywood
http://www.physicsforums.com/showthread.php?p=4273619
## The principle of equivalence
What’s your understanding of the principle of equivalence?
In the literature, I find two meanings:
(1) Gravitational mass is numerically equal to inertial “mass.” (This is a postulate.)
(2) A mass at rest in a frame is equivalent to being in a “gravitational field” in an accelerated frame. (This one is a pseudogravity.)
General relativity does not seem to explain either!
Quote by SinghRP What’s your understanding of the principle of equivalence? In the literature, I find two meanings: (1) Gravitational mass is numerically equal to inertial “mass.” (This is a postulate.) (2) A mass at rest in a frame is equivalent to being in a “gravitational field” in an accelerated frame. (This one is a pseudogravity.) General relativity does not seem to explain either!
There is experimental evidence for (1) See http://en.wikipedia.org/wiki/E%C3%B6...B6s_experiment
Quote by SinghRP (1) Gravitational mass is numerically equal to inertial “mass.” (This is a postulate.)
It's a postulate as far as the theory is concerned, but it's also an observed fact, at least to as high an accuracy as we have tested. See the link Mentz114 gave.
The key thing about this version of the principle is that it enables us to view gravity as due to spacetime curvature; if the ratio of gravitational mass to inertial mass were different for different objects, we could not do that.
Quote by SinghRP (2) A mass at rest in a frame is equivalent to being in a “gravitational field” in an accelerated frame. (This one is a pseudogravity.)
This is not quite correct as a statement of this version of the principle. The correct statement is: Being at rest in an accelerated frame (for example, inside a rocket whose engines are firing), and feeling a certain force (say 1 g), is equivalent to being at rest in a gravitational field and feeling the same force (for example, being at rest on the surface of the Earth and feeling a 1 g force).
Quote by SinghRP General relativity does not seem to explain either!
What makes you think that?
Quote by SinghRP What’s your understanding of the principle of equivalence? In the literature, I find two meanings: (1) Gravitational mass is numerically equal to inertial “mass.” (This is a postulate.) (2) A mass at rest in a frame is equivalent to being in a “gravitational field” in an accelerated frame. (This one is a pseudogravity.) General relativity does not seem to explain either!
The way I would put it is a variation of number (2):
(2') An object in freefall in a gravitational field is locally equivalent to an object travelling inertially in gravity-free space.
In other words, the local effects of gravity can be made to vanish by choosing freefall coordinates.
For example, in gravity-free space, a tiny charged particle will obey an equation of motion given by:
$m \dfrac{dU^\mu}{d\tau} = q F^\mu_\nu U^\nu$
where $U^\mu$ is 4-velocity, $\tau$ is proper time, $q$ is the charge, and $F^\mu_\nu$ is the electromagnetic field strength tensor (which incorporates both the electric and magnetic fields).
The exact same equation will hold locally in the presence of a gravitational field, provided that you choose "freefall" coordinates.
I believe there are a couple of equivalence principles known as weak (WEP) and strong (EEP). The weak is the one about the equivalence of masses, the strong is really a statement about co-ordinate independence, gauge, diffeomorphisms etc.
Quote by stevendaryl (2') An object in freefall in a gravitational field is locally equivalent to an object travelling inertially in gravity-free space.
I realize this is largely a matter of definition, but to me this is part of (1), not (2). The reason we can make the local effects of gravity vanish is that inertial and gravitational mass are equal; if they weren't we couldn't do that. This is another way of saying that we can view gravity as a manifestation of spacetime curvature because inertial and gravitational mass are equal.
(2) is talking about something different: what happens when objects are *not* in free fall. The key point about (2) is that equal proper acceleration is what defines "equivalent" states of motion. The free fall case, zero proper acceleration, can be viewed as a special case of this; but that special case alone is not enough. We need the full principle of equivalence, covering *all* possible values of proper acceleration (not just zero), to justify the full machinery of GR for dealing with all kinds of motion in curved spacetime, not just inertial motion.
To PeterDonis, answering "What makes you think that?": I get lost in the mathematical jungle. I am a physicist, and I like to think in terms of physical models. I recall Feynman also: the glory of mathematics is that you don't have to say what you are talking about.

The genesis of the POE is in: (Gravitational mass) (Gravitational field intensity) = (Inertial mass) (Acceleration). (I agree with your "corrected" statement.) But my statement is just an interpretation of the above.

Another expression: (Electrical charge) (Electrical field intensity) = (Inertial mass) (Acceleration). I may interpret it as: an electrical charge at rest in a frame is equivalent to being in an "electrical field" in an accelerating frame.
Quote by SinghRP Another expression: (Electical charge) (Electrical field intensity) = (Inertial mass) (Acceleration). I may interpret it as: an electrical charge at rest in a frame is equivalent to being in an "electrical field" in an acclerating frame.
You cannot say that, because the resistance to motion of the charge is supplied by the matter, and the mass does not cancel from the acceleration equation as it does with gravity.
See post#10 below.
You're completely right, what I wrote was not as general as I had intended. What I meant to say was that physics in a gravitational field when described by freefall coordinates is equivalent to physics in a gravity-free space when described by inertial Cartesian coordinates. That's not what I said.
Quote by SinghRP To PeterDonis, answering "What makes you think that?": I get lost in the mathematical jungle. I am a physicist, and I like to think in terms of physical models. I recall Feynman also: the glory of mathematics is that you don't have to say what you are talking about. The genesis of the POE is in: (Gravitational mass) (Gravitational field intensity) = (Inertial mass) (Acceleration). (I agree with your "corrected" statement.) But my statement is just an interpretation of the above. Another expression: (Electrical charge) (Electrical field intensity) = (Inertial mass) (Acceleration). I may interpret it as: an electrical charge at rest in a frame is equivalent to being in an "electrical field" in an accelerating frame.
Not quite. Mathematically,
$M_{inertial}\ \vec{A} = M_{grav}\ \vec{g}_{grav}$
where $\vec{g}_{grav}$ is the gravitational field.
Since $M_{grav} = M_{inertial}$, you can divide through by $M_{grav}$ to get:
$\vec{A} = \vec{g}_{grav}$
Which means that the acceleration due to gravity is universal, the same for all objects, regardless of their mass, or what they are made out of, or whatever. That's the principal characteristic of "fictitious" or "inertial" forces such as the "g" forces that arise in an accelerating rocket. This means that gravity can be understood as locally equivalent to a fictitious or inertial force.
In contrast, if you start with the force due to electric fields, you have:
$M_{inertial}\ \vec{A} = Q \vec{E}$
where $Q$ is the electric charge, and $\vec{E}$ is the electric field.
If you do the same trick of dividing through by $M_{inertial}$, you find:
$\vec{A} = \dfrac{Q}{M_{inertial}} \vec{E}$
So the acceleration due to electrical forces is not universal; the acceleration depends on the charge-to-mass ratio. So electrical forces can't be interpreted as inertial, or fictitious forces, since they accelerate different objects in different ways.
Quote by PeterDonis I realize this is largely a matter of definition, but to me this is part of (1), not (2). The reason we can make the local effects of gravity vanish is that inertial and gravitational mass are equal; if they weren't we couldn't do that. This is another way of saying that we can view gravity as a manifestation of spacetime curvature because inertial and gravitational mass are equal. (2) is talking about something different: what happens when objects are *not* in free fall. The key point about (2) is that equal proper acceleration is what defines "equivalent" states of motion. The free fall case, zero proper acceleration, can be viewed as a special case of this; but that special case alone is not enough. We need the full principle of equivalence, covering *all* possible values of proper acceleration (not just zero), to justify the full machinery of GR for dealing with all kinds of motion in curved spacetime, not just inertial motion.
Doh! I misread my own post. You're exactly right, and I was wrong. What I meant to say was that physics in a gravitational field described using freefall coordinates is locally equivalent to physics in gravity-free space described by inertial coordinates.
[Edit: Looks like our posts crossed in the mail, so to speak. The following may be superfluous, but I'll leave it in case there is any further comment.]
Quote by stevendaryl I didn't say what is being described is in free fall, I said that you're using freefall coordinates to describe it. You can use inertial coordinates to describe the motion of an object that is accelerating, and you can similarly use freefall coordinates to describe the motion of an object that is not in freefall.
As long as inertial and gravitational mass are equal. If they're not, then trying to set up freefall coordinates in the presence of gravity will fail. That's why I said I view the use of freefall coordinates as an aspect of (1). But it is, as I said, a matter of definition; what's really important is the physics and how we can usefully model the physics, and I agree that local inertial frames in curved spacetime are the key to doing that.
Quote by stevendaryl If you know how the physics works in one coordinate system (freefall coordinates), you can certainly do a coordinate transformation to find out what the equations look like in another coordinate system (such as a coordinate system in which an object on the surface of the Earth is considered at rest). It's just a coordinate change, and that's just mathematics, not physics.
Agreed. But there still remains the question of what the result will actually look like when you do this, which is what the (2) part of the principle as the OP stated it is talking about, IMO. See below.
Quote by stevendaryl The physical content is exhausted by saying that freefall coordinates are the same (locally, which basically means ignoring the variation of the metric tensor with location) as inertial coordinates.
No, this isn't enough by itself, because this doesn't tell you what, specifically, being at rest in a gravitational field looks like in freefall coordinates. That's what the (2) part of the principle is for: it tells you that being at rest in a gravitational field is equivalent to following a particular kind of accelerated worldline in a local inertial frame.
Remember that Einstein originally enunciated the equivalence principle before he had derived the Field Equation. Today we would first solve the Field Equation to derive the Schwarzschild solution, and then observe that an object at rest in Schwarzschild coordinates follows an accelerated hyperbola in a local inertial frame; but Einstein couldn't do that yet. So one way of looking at (2) is that it was Einstein trying to guess what a particular solution would look like to a field equation he hadn't yet derived.
Well, if a free-fall frame has the standard physics of SR locally, then it follows that any other frame will behave like an accelerated frame in SR. This is why modern authors use different (but essentially similar) EPs than Einstein's. There are many categorizations used. The following is a common one, from http://relativity.livingreviews.org/.../fulltext.html :

Weak Equivalence Principle: "the trajectory of a freely falling "test" body (one not acted upon by such forces as electromagnetism and too small to be affected by tidal gravitational forces) is independent of its internal structure and composition."

Einstein Equivalence Principle: "1) WEP is valid. 2) The outcome of any local non-gravitational experiment is independent of the velocity of the freely-falling reference frame in which it is performed. 3) The outcome of any local non-gravitational experiment is independent of where and when in the universe it is performed."

Strong Equivalence Principle: "1) WEP is valid for self-gravitating bodies as well as for test bodies. 2) The outcome of any local test experiment is independent of the velocity of the (freely falling) apparatus. 3) The outcome of any local test experiment is independent of where and when in the universe it is performed."
To PeterDonis - After rereading my statement (2) and then reading yours, I think I found the problem. I restate (2) as follows: a mass at rest in a frame is equivalent to being in a "gravitational field" when the frame is accelerating.

To PAllen: I will read your post and come back to you after I understand it.

To all: I will work out a text to explain, without invoking the POE: the bending of a light beam, the speed of time, and the length of a rod in an accelerating rocket far from any mass; centrifugal force; and weightlessness. I will post it tomorrow.

The clearest exposition of the POE is given by George Gamow in -- guess what -- one of his popularization-series books: Gravity, ch. 9, Dover Pubs, 2002. Thank you all.
Quote by SinghRP A mass at rest in a frame is equivalent to being in a "gravitational field" when the frame is acclerating.
Yes, I think this is OK.
As I said before, here is my take on POE-related topics.

The principle of equivalence

The genesis of the principle of equivalence is in the following equation:

(Gravitational mass) ∙ (Gravitational field intensity) = (Inertial mass) ∙ (Acceleration) (1)

That is, a mass in a frame is equivalent to being in a "gravitational field" when the frame is accelerating. We re-state (1) for an electrical charge:

(Electrical charge) ∙ (Electrical field intensity) = (Inertial mass) ∙ (Acceleration) (2)

That is, an electrical charge in a frame is similar to being in an "electrical field" when the frame is accelerating. We make the following inferences: (1) An observer studying a phenomenon in an accelerating frame, in the absence of external information, may need to invent a pseudofield in opposition to the acceleration to make sense of the phenomenon. (This is what classical physics says about a non-inertial observer applying Newton's law of motion.) (2) Gravitational mass and gravitational field are respectively analogous to electrical charge and electrical field.

Bending of a light beam

We take an accelerating rocket far from any mass. The rocket has an observer in it. We then examine what happens to a beam of light propagating across the rocket chamber from one wall to the other – from the point of view of the observer. As the light beam traverses the rocket chamber, the observer is accelerating toward it. Relatively, the observer finds the light beam tracing a parabolic path and may conclude that gravity is present. In reality, however, the light beam itself never changed its direction, as no gravity was mediated! It is not necessary to invent pseudogravity opposite to the applied acceleration to explain the bending of light beams.

Weightlessness

There are two aspects to weight. In one aspect, the weight of a body is the force with which the earth attracts it. In the other aspect, weight is the "feeling" a body gets in a gravitational field from the reaction force of the "floor" on which it rests; weightlessness is the feeling the body has when that reaction force is unavailable. Weight may be increased or decreased by adding to or subtracting from the reaction force. In an ascending elevator, an observer feels heavier as an external upward force adds to the reaction force, and may infer that more downward gravity is present. In reality, however, no new downward gravity was ever mediated! While the elevator is in controlled (or free) descent, the gravitational attraction of the earth is being "employed" partially (or wholly) in accelerating it downward; the observer in it gets reduced (or zero) reaction force, feels lighter (or weightless), and may infer that the earth's downward gravity has been reduced (or cancelled) by a mysterious upward gravity. In reality, however, no new upward gravity was ever mediated! It is not necessary to invent pseudogravity opposite to the applied acceleration to explain weightlessness. Weight is real; weightlessness is apparent.

Centrifugal force

A force normal to a body's uniform velocity keeps the body in a circular orbit; that force is called centripetal force. A body in a satellite around the earth is under the centripetal force of the earth's gravity. The centripetal force is being "employed" wholly in keeping the satellite and the body in orbit; the body gets no reaction force and feels weightlessness.

A body on a merry-go-round must have three reaction agents to keep it in place: a seat to push it up against the downward gravity; a backrest to accelerate it to the needed uniform tangential velocity; and a side rest to push it toward the center, providing the needed centripetal force to keep it along the circle. When the centripetal force is turned off, the body moves along the tangent, not the radius, with the current velocity. It is practical but not necessary to invent pseudogravity (centrifugal force) to cancel the centripetal force in order to avoid radial motion. A centripetal force may be of any type, such as electromagnetic. Centripetal force is real; centrifugal force is fictitious.

Speed of time and length of rod

In an accelerating rocket far from any mass, an observer detects no changes in the run of the time of an atomic clock and in the length of a material rod, because no gravity is being mediated.
To stevendaryl at Post 694: The choice of the word "equivalent" was unfortunate. As a matter of fact, this word is not quite correct in the case of mass either. In the case of mass, the pseudo gravity field is equal in magnitude but opposite in direction to the acceleration. In the case of electric charge, the pseudo electric field is not equal in magnitude but is opposite in direction to the acceleration. Thanks!

To PAllen at Post 3,628: I understand the principles now. But I am still figuring out their utility. Thanks!
http://mathoverflow.net/questions/109771/limits-of-von-neumann-algebras/109791
## Limits of von Neumann algebras
Consider (abstract) von Neumann algebras topologised by their weak*-topology arising from the unique predual.
In the theory of topological vector spaces, there is a natural notion of an inductive limit: recall that $E$ is an inductive limit of a directed system $(E_\alpha, \beta_\alpha)$ when the limit vector space $E$ is endowed with the final topology making all the embeddings $\hat{\beta_\alpha} \colon E_\alpha \to E$ continuous.
Is there an internal definition of an inductive limit of von Neumann algebras? By internal I mean a construction which does not appeal to the WOT-closure of the limit *-algebra represented on some Hilbert space. For example, can we perform the construction of the $2^\infty$-factor as an inductive limit in the language of topological vector spaces?
-
## 2 Answers
It sounds like you want a representation-independent construction, but did you know that every von Neumann algebra has an essentially unique canonical representation (the "standard form")? This is done in Takesaki vol. II. I guess it's pretty straightforward to take the direct limit of a system of standard forms, and I suspect that should do what you want.
-
Yes, see Proposition 5.5 and Proposition 5.7 in Andre Kornell's paper Quantum Collections.
-
http://noncommutativeanalysis.wordpress.com/2012/12/04/the-remarkable-hilbert-space-h2-part-i-definition-and-interpolation-theory/
# Noncommutative Analysis
### The remarkable Hilbert space H^2 (Part I – definition and interpolation theory)
#### by Orr Shalit
This series of posts is based on the colloquium talk that I was supposed to give on November 20, at our department. As fate had it, that week studies were cancelled.
Several people in our department thought that it would be a nice idea if alongside the usual colloquium talks given by invited speakers which highlight their recent achievements, we would also have some talks by department members that will be more of an exposition to the fields they work in. So my talk was supposed to be an exposition to the setting in which much of the research I do goes on.
The topic of the “talk” is the Hilbert space $H^2_d$. There will be three parts to this series:
1. Definition and interpolation theory.
2. Multivariate operator theory and model theory.
3. Current research problems.
#### 1. Introduction
What is $H^2$?
$H^2$ is a Hilbert space. One may ask: what could be remarkable about a Hilbert space? A Hilbert space is a Hilbert space, and they are all isomorphic, are they not?
This is a real question, and I was actually asked this question by a member of my department a week before this talk, after the abstract was published. I have two answers to this question.
The first answer is that $H^2$ is not just a Hilbert space, it is a Hilbert function space, so it has a much richer structure than a mere Hilbert space. The function theory that arises in the context of the space $H^2$ connects in a very fruitful way with the Hilbert space structure. More on this soon.
The second answer is that whenever we pick a particular construction of Hilbert space to work in, we are choosing a representation for some object of interest. In other words, the particular Hilbert space we choose to work with comes along with a set of natural operators. As an example, consider a countable group $G$, and consider $\ell^2(G)$. This is the same Hilbert space as $\ell^2(\mathbb{N})$, but $\ell^2(G)$ invites us to represent $G$ on it in a very natural way, while $\ell^2(\mathbb{N})$ makes no such invitation. The operator theory that naturally arises in the context of the space $H^2$ is what makes this Hilbert space special. More on this later.
#### 2. The function space $H^2$
In fact, there is a sequence of spaces $H^2_d$ which interest us. Let $d$ be a positive integer or $\infty$. Let $B_d$ denote the open unit ball in $\mathbb{C}^d$ (where $\mathbb{C}^d$ is understood to be $\ell^2$ when $d=\infty$). I ask of you, for this talk, don’t get bogged down with questions about the $d = \infty$ case; in fact, if you are not an operator theorist you may as well take $d=1$, things are interesting enough. We define $H^2_d$ to be the space of holomorphic functions $f : B_d \rightarrow \mathbb{C}$ with Taylor series
$f(z) = \sum_{n \in \mathbb{N}^d} a_n z^n$
which satisfies
(*) $\|f\|^2 = \sum_n \frac{n!}{|n|!}|a_n|^2 < \infty$.
Here we are using the standard multi–index notation: for $n = (n_1, \ldots, n_d) \in \mathbb{N}^d$ we put $n! = n_1! n_2! \cdots n_d!$ and $|n| = n_1 + \ldots + n_d$ (and of course $\mathbb{N} = \{0,1,2,\ldots\}$; also, $\mathbb{N}^\infty$ denotes the direct sum, not the product, so each multi-index has only finitely many nonzero entries).
Equation (*) defines a Hilbert space norm on $H^2_d$ and it is very easy to figure out what the inner product has to be.
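For the record (a line I am adding here to make this explicit): if $g(z) = \sum_n b_n z^n$ is another element of $H^2_d$, then the inner product forced by (*) is

$\langle f, g \rangle = \sum_{n \in \mathbb{N}^d} \frac{n!}{|n|!} a_n \overline{b_n} .$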
So $H^2_d$ can be very naturally identified with a weighted $\ell^2$–space, but we really want to think about it as a space of functions. These are not equivalence classes of functions, like we have when we look at the space $L^2[0,1]$, these are honest–to–God analytic functions that have well defined values at every point. The crucial fact is that point evaluation is a bounded functional on $H^2_d$. The easiest way to show this is to exhibit for every $w \in B_d$, an element $k_w(z) \in H^2_d$ such that for all $f$,
$\langle f, k_w \rangle = f(w) .$
A simple computation (using the orthogonality of the monomials) shows that the unique function that satisfies this is
$k_w(z) = \frac{1}{1 - \langle z, w \rangle} .$
Let us carry out the computation in the case $d = 1$. Let $f(z) = \sum a_n z^n$, and $k_w(z) = \frac{1}{1-z \overline{w}} = \sum (z\overline{w})^n$. Then
$\langle f, k_w \rangle = \langle \sum a_n z^n, \sum \overline{w}^n z^n \rangle = \sum a_n w^n = f(w).$
The fact that point evaluation is a bounded linear functional is the starting point of an intimate relationship between the Hilbert space structure and the operator theory of $H^2_d$, on the one hand, and the function theory of $H^2_d$, on the other hand. It is remarkable that both sides have benefited from this relationship.
I personally find the more interesting (or surprising) side of this story to be that operator theory has applications to complex function theory. I will tell you about my favorite example.
#### 3. Nevanlinna–Pick interpolation
Let $z_1, \ldots, z_k$ be points in the unit disc $D$, and let $w_1, \ldots, w_k$ be complex numbers. One can always find an analytic function $f :D \rightarrow \mathbb{C}$ that interpolates this data, meaning that $f(z_i) = w_i$ for $i=1, \ldots, k$. This is easy to do with polynomials. However, for some applications such as control theory (and also for the glory of humankind) it is desirable to find an optimal solution to this interpolating problem. For example, one would like to find an analytic function $f :D \rightarrow \mathbb{C}$ that interpolates the data and has the smallest possible sup norm
$\|f\|_\infty = \sup_{z \in D}|f(z)| .$
It is not hard to see that we will be able to figure out what is the minimal norm of an interpolating function if we know how to solve the following problem. Denote by $H^\infty = H^\infty(D)$ the Banach algebra of bounded analytic functions on the disc with the sup norm.
Nevanlinna–Pick interpolation problem: Given $z_1, \ldots, z_k,\in D$ and $w_1, \ldots, w_k \in D$, does there exist $f \in H^\infty$, with $\|f\|_\infty \leq 1$ that satisfies $f(z_i) = w_i$ for $i = 1, \ldots, k$?
G. Pick (1916) and R. Nevanlinna (1919) independently solved this problem. They provided the following very satisfying solution.
Theorem 1: The Nevanlinna–Pick problem has a solution if and only if the matrix
$\left(\frac{1-w_i \overline{w}_j}{1-z_i \overline{z}_j} \right)_{i,j=1}^k .$
is positive definite.
This is a very satisfying solution because given the data you can actually form this matrix and check whether or not it is positive definite.
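To make “you can actually check” concrete, here is a minimal numerical sketch (my own addition, not from the talk; the helper names `pick_matrix` and `np_problem_solvable` and the tolerance are my choices) that forms the Pick matrix for given data in the open disc and tests, numerically, whether it is positive semidefinite:

```python
import numpy as np

def pick_matrix(z, w):
    """Form the Pick matrix (1 - w_i conj(w_j)) / (1 - z_i conj(z_j))."""
    z = np.asarray(z, dtype=complex)  # nodes, assumed inside the open disc
    w = np.asarray(w, dtype=complex)  # target values
    num = 1 - np.outer(w, w.conj())
    den = 1 - np.outer(z, z.conj())
    return num / den

def np_problem_solvable(z, w, tol=1e-12):
    """Test the Pick criterion numerically, up to a small tolerance."""
    P = pick_matrix(z, w)
    eigs = np.linalg.eigvalsh(P)  # P is Hermitian by construction
    return bool(np.all(eigs >= -tol))

# Example: interpolate f(0) = 0, f(1/2) = 1/2 (solvable: f(z) = z works)
print(np_problem_solvable([0, 0.5], [0, 0.5]))  # True
```

Here `eigvalsh` is appropriate because the Pick matrix is Hermitian by construction; in the example the data is interpolated by $f(z)=z$, a borderline case, and indeed the matrix has a zero eigenvalue.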
In 1967 D. Sarason introduced a new approach to this problem, which could simultaneously treat Nevanlinna–Pick interpolation problems as well as other interpolation problems of interest. His approach used operator theory on $H^2$ (the case $d=1$) in an essential way, and among other things, it gave the following result (first proved by Sz.-Nagy–Korányi, also by operator theoretic techniques).
Theorem 2: Given $z_1, \ldots, z_k,\in D$ and $W_1, \ldots, W_k \in M_n(\mathbb{C})$, there exists a bounded analytic matrix valued function $f$, with $\|f\|_\infty \leq 1$ that satisfies $f(z_i) = W_i$ for $i = 1, \ldots, k$, if and only if the $nk \times nk$ matrix
$\left(\frac{1-W_i {W}^*_j}{1-z_i \overline{z}_j} \right)_{i,j=1}^k$
is positive definite.
So what do these beautiful theorems have to do with our space? It seems as if the problem is in the wrong space: we just introduced the Hilbert space $H^2$, but in the NP problem we are looking for a function in $H^\infty$. It turns out that $H^\infty$ is very closely related to $H^2$. $H^\infty$ is equal to the so–called multiplier algebra of $H^2$, that is,
$H^\infty = \{f : D \rightarrow \mathbb{C} : \forall h \in H^2 . fh \in H^2 \} .$
Moreover, the space of bounded analytic $n \times n$ matrix valued functions is the multiplier algebra of the space of vector valued functions $H^2 \otimes \mathbb{C}^n$. This simple connection allows us to harness all the power of operator theory to the function theoretic NP problem.
#### 4. Reproducing kernel Hilbert spaces and complete NP kernels
Our discussion fits in a larger framework.
Definition 3: Let $X$ be a set. A reproducing kernel Hilbert space (RKHS for short, also called a Hilbert function space) is a Hilbert space $H$ that consists of functions $X \rightarrow \mathbb{C}$, in which point evaluation at any point $x \in X$ is a bounded functional on $H$.
Since point evaluation is bounded, we have, for any $x \in X$, a unique $k_x \in H$ such that
(*) $\forall f \in H . f(x) = \langle f, k_x \rangle .$
So $k_x$ is itself a function on $X$. The functions $k_x$ are called kernel functions. Denote $k(x,y) = k_y(x) = \langle k_y, k_x \rangle$. The function $k: X \times X \rightarrow \mathbb{C}$ satisfies that for every $x_1, \ldots, x_n \in X$, the matrix
$\big(k(x_i,x_j) \big)_{i,j=1}^n$
is positive definite. Such a function is said to be a positive definite kernel. It is also referred to as a reproducing kernel, because the kernel functions “reproduce” the functions in $H$ by (*). It is a fact (known as Aronszajn’s Theorem) that every positive definite kernel is the reproducing kernel of a RKHS. If $k$ is a positive definite kernel, one sometimes denotes by $H(k)$ the RKHS that has $k$ as its reproducing kernel.
Every RKHS $H$ has a multiplier algebra, defined
$Mult(H) = \{ f: X \rightarrow \mathbb{C} : \forall h \in H . f h \in H\} .$
The multiplier algebra has a natural norm:
$\|f\|_{Mult(H)} = \sup \{\|fh\|_H : h \in H, \|h\| \leq 1\} .$
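Let me record one small consequence here (an addition of mine to this exposition): multipliers are automatically bounded functions. For any $h \in H$ we have $\langle h, M_f^* k_x \rangle = \langle fh, k_x \rangle = f(x)h(x)$, where $M_f$ denotes the operator of multiplication by $f$. Hence

$M_f^* k_x = \overline{f(x)} k_x ,$

and taking norms gives $|f(x)| \leq \|f\|_{Mult(H)}$ for every $x \in X$.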
The matrix valued NP problem makes sense in any multiplier algebra:
Matrix valued NP interpolation problem: Given $x_1, \ldots, x_n,\in X$ and $W_1, \ldots, W_n \in M_N(\mathbb{C})$, does there exist $f \in Mult(H) \otimes M_N(\mathbb{C})$, with $\|f\| \leq 1$ that satisfies $f(x_i) = W_i$ for $i = 1, \ldots, n$?
To clarify, $Mult(H) \otimes M_N(\mathbb{C})$ can be simply considered as the algebra of $N \times N$ matrices with entries in $Mult(H)$. This algebra acts naturally on the Hilbert space $H \oplus \ldots \oplus H$ ($N$ times), and the norm is the operator norm.
Many RKHS are known, and many have been studied. In some of them there is a nice solution to the NP interpolation problem, in some of them there is a solution but it is not nice, and in some of them nobody knows a characterization of when the problem is solvable. The most favorable case is the following one:
Definition 4: A kernel $k$ is said to be a complete Nevanlinna–Pick kernel (or, for short, a complete NP kernel) if for all $N$, the matrix valued NP interpolation problem for $x_1, \ldots, x_n,\in X$ and $W_1, \ldots, W_n \in M_N(\mathbb{C})$, has a solution in $Mult(H(k)) \otimes M_N(\mathbb{C})$ of norm less than or equal to $1$ if and only if the matrix
$\left( (1-W_i W_j^* ) k(x_i, x_j) \right)_{i,j=1}^n$
is positive definite. In this case, $H(k)$ is said to be a complete NP space. A multiplier algebra of complete NP space is said to be a complete NP algebra.
I hope nobody will confuse “complete NP” with “NP complete”.
Remark: Sometimes one uses the terminology “complete Pick” instead of “complete Nevanlinna–Pick”.
Theorem 2 can be restated by saying that the kernel $k(z,w) = \frac{1}{1-z\overline{w}}$ (known as the Szego kernel) is a complete NP kernel. Now, this kernel is the kernel for $H^2_1$. It is a fact (proved by Arias–Popescu, Davidson–Pitts, and Agler–McCarthy) that the kernel of $H^2_d$ is a complete NP kernel for all $d$.
Theorem 5: For all $d$, $H^2_d$ is a complete NP space.
Thus, NP interpolation problem in $Mult(H^2_d)$ has a very nice solution.
#### 5. Universality of $H^2_d$
Are there any other complete NP spaces besides $H^2_d$? Yes, there are. The Sobolev space $W^{1,2}([0,1])$ as well as the Dirichlet space are complete NP spaces, for example. These spaces look very different from $H^2_d$; especially the Sobolev space, which is not even a space of analytic functions. However, the following remarkable theorem of Agler and McCarthy shows $H^2_d$ is the universal complete NP space.
Let us say that a kernel $k$ is irreducible if for all distinct $x,y \in X$, $k_x$ and $k_y$ are linearly independent, but not orthogonal.
Theorem 6: Let $k$ be an irreducible kernel on the set $X$, and suppose that $H = H(k)$ is separable. Then $k$ has the complete NP property if and only if there exist $d = 1, 2, \ldots, \infty$, an injection $b: X \rightarrow B_d$ and a nowhere vanishing function $c$ on $X$ such that
$k(x,y) = \frac{c(x) \overline{c(y)}}{1 - \langle b(x), b(y) \rangle} .$
The theorem has the following consequence:
Corollary 7: Let $M$ be a complete NP multiplier algebra. Then there is a $d$ and an analytic variety $V \subseteq B_d$ such that
$M \cong Mult(H^2_d)\big|_V = \{f\big|_V : f \in Mult(H^2_d)\} .$
The symbol $\cong$ stands for completely isometrically isomorphic.
http://mathoverflow.net/questions/79218?sort=oldest
## gradient of convex functions
Hello. Can somebody help me with the following question that I have thought over for quite some time, to no avail?
Let $f$ be a smooth function (class $\mathrm{C}^{\infty}$), $f:\mathbb{R}^n \longrightarrow \mathbb{R}$ and suppose that $f$ is a positive convex function and we define $$\varphi: \mathbb{R}^n \longrightarrow \mathbb{R}^n, \varphi(X) = \frac{\operatorname{grad}(f)(X)}{f(X)}$$
My question is this:
Is it true that the image of $\varphi$ is a convex set?
This is not my research area so I will appreciate any help or comment.
-
Where does this question come from? – Igor Rivin Oct 27 2011 at 8:12
Now that a counterexample has been posted: might it still be true if $f$ is logarithmically convex? Equivalently (with $g=\log f\phantom.$): if $g: {\bf R}^n \rightarrow {\bf R}$ is convex, is the image of ${\rm grad}(g)$ a convex subset of ${\bf R}^n$? – Noam D. Elkies Oct 27 2011 at 16:11
## 2 Answers
No. Consider $f(x,y)=e^x+y^2$, then $\varphi(x,y)=(e^x,2y)/(e^x+y^2)$. The image of $\varphi$ has only one point $(1,0)$ on the axis $y=0$. The points $a:=\varphi(0,1)=(\frac12,1)$ and $b:=\varphi(0,-1)=(\frac12,-1)$ belong to the image of $\varphi$ but their midpoint $\frac{a+b}2 = (\frac12,0)$ does not.
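A quick numerical sanity check of this counterexample (my own sketch, simply evaluating $\varphi$ at the points used above; the helper name `phi` is mine):

```python
import numpy as np

def phi(x, y):
    f = np.exp(x) + y**2                     # f(x, y) = e^x + y^2, positive and convex
    return np.array([np.exp(x), 2 * y]) / f  # grad(f) / f

a = phi(0.0, 1.0)              # (1/2,  1)
b = phi(0.0, -1.0)             # (1/2, -1)
print(a, b, (a + b) / 2)       # midpoint (1/2, 0)

# The only way to get second coordinate 0 is y = 0, and then
# phi(x, 0) = (1, 0) for every x -- never (1/2, 0):
print(phi(-3.0, 0.0), phi(0.0, 0.0), phi(3.0, 0.0))
```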
-
Greetings everyone.
I have a similar problem. In my case, the function $f$ is $$f:\mathbb{R}^n \longrightarrow \mathbb{R}, f(X)=\sum_{i=1}^{m}a_i^2 \operatorname{e}^{2\langle X, \alpha_i \rangle}$$ where $a_1, \ldots, a_m$ are nonzero, $\alpha_1, \dots, \alpha_m$ are any vectors in $\mathbb{R}^n$ and $\langle \cdot , \cdot \rangle$ is the usual inner product of $\mathbb{R}^n$.
I know that my problem is related to the Atiyah-Guillemin-Sternberg Convexity theorem, but I don't know how to prove it in my case by using elementary methods. I would really appreciate any comment.
-
Looks like this should be a new question, not an answer! Is the image of the logarithmic gradient here just the convex hull of $\lbrace 2 \alpha_i \rbrace$? If so then it can probably be proved using something like the Brouwer fixed-point theorem; I don't know how elementary that would be for your purposes. – Noam D. Elkies Oct 28 2011 at 16:30
http://mathhelpforum.com/calculus/140183-limit-comparison-concept-help.html
# Thread:
1. ## Limit Comparison Concept Help
I need help understanding an informal principle used when doing these problems.
In my book it says that you can discard all but the leading terms. When can you do this?
For example, one problem has $\sum_{k=1}^{\infty} \frac{3k^3-2k^2+4}{k^7-k^3+2}$,
which can be compared to $\frac{3k^3}{k^7} = \frac{3}{k^4}$ to check for convergence or divergence.
Why can't you get rid of the 3 in the numerator? Basically I don't understand how to use this rule. Can someone explain it in as simple terms as possible?
2. You can get rid of the 3 in the numerator. A constant multiple won't affect the result.
Comparing it to $\frac{1}{k^4}$ is fine.
Basically you are just taking the term that approaches infinity the "fastest" (in layman's terms). If it's a polynomial, then the term of the highest degree (exponent) is approaching infinity the "fastest", therefore we can disregard the other terms. We can do the same for the denominator.
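To spell that out for this example (adding the computation): the limit comparison test applies because

$\lim_{k\to\infty} \frac{(3k^3-2k^2+4)/(k^7-k^3+2)}{1/k^4} = \lim_{k\to\infty} \frac{3k^7-2k^6+4k^4}{k^7-k^3+2} = 3,$

which is finite and nonzero, so the given series converges or diverges together with $\sum \frac{1}{k^4}$; and that one converges, being a $p$-series with $p = 4 > 1$.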
http://math.stackexchange.com/questions/210887/is-the-order-of-the-product-of-two-commutative-finite-order-elements-necessarily
# Is the order of the product of two commutative finite-order elements necessarily finite?
In some group G, can we exhibit an example of two elements $x,y$ that
• commute with each other
• have finite order
but whose product $xy$ (or $yx$, since they commute) has infinite order?
I can give an example where the elements don't commute, say in the group of permutations of $\mathbb{Z}$, by defining $f(x) = 1 -x$, $g(x) = 2-x$. Then $(f \circ g)(x) = x - 1$ and $(g \circ f)(x) = x + 1$, which both have infinite order (while $f$ and $g$ each have order two).
-
## 1 Answer
If $x$ has order $n$, $y$ has order $m$ and $xy=yx$ then $o(xy) \mid nm$. This follows from observing that, since $x$ and $y$ commute, $$(xy)^{nm}=x^{nm}y^{nm}=(x^n)^m(y^m)^n=e.$$
-
Argh, my bad. I fretted unnecessarily over the possibility that the powers of (say) $a$ may not commute with $b$! Thanks for the quick answer. I'll accept your answer in a bit as the system is not letting me do that now. – math1793 Oct 11 '12 at 4:38
http://math.stackexchange.com/questions/221904/how-does-a-myopic-interpret-wieners-tauberian
# How does a myopic interpret Wiener's Tauberian?
I just read about this post on the intuition behind convolution. In Terence Tao's answer convolution is interpreted as the blur of image in near-sighted eyes. In Harald Hanche-Olsen's it is made clearer that such a blur is due to the combining effect of translated images.
I wonder whether this interpretation can be carried on to explain Wiener's Tauberian theorem. I mean the following version
Suppose $\phi\in\mathcal{L}^{\infty}$ and $K\in\mathcal{L}^{1}$ are such that \begin{equation} \lim_{x\to\infty} (\phi*K)(x)=a\hat{K}(0),\end{equation} and \begin{equation} \hat{K}(s)\neq 0\end{equation} for all $s$.
Then \begin{equation} \lim_{x\to\infty} (\phi*f)(x)=a\hat{f}(0)\end{equation} for all $f\in\mathcal{L}^1$.
So the first equation describes the image in a near-sighted eye, and the last one says something about the blurred image in other myopic eyes. But I do not know how to understand the right-hand side of both equations, and how the nonvanishing property relates to eyesight.
Thanks!
-
## 1 Answer
Well, $\hat K(0) = \int_{-\infty}^\infty K(x)\,\mathrm dx =: \int K$, so for a start perhaps one should divide both sides by $\int K$. Then we have $$\lim_{x\to\infty}\left(\phi*\frac K{\int K}\right)(x) = a.$$ Absorbing the division by $\int K$ into $K$ itself, it is sufficient to only consider those $K$ which are already normalized, i.e. which integrate to $1$. (Intuitively, the total amount of light entering your eyes is conserved, not amplified or diminished e.g. by sunglasses.) Similarly, we may do the same thing for $f$.
Then the theorem states the following: Suppose $\phi\in\mathcal L^\infty$ is an infinitely large image, and your blurry vision is described by a kernel $K\in\mathcal L^1$ such that:
1. $\int_{-\infty}^\infty K(x)\,\mathrm dx = 1$, i.e. you are not wearing sunglasses, and
2. $\hat K(s) \ne 0$ for any $s$, i.e. you can always tell a sinusoid from a featureless constant field. (This would not be true if, for example, your $K$ was a box function of width $w$ and $\phi$ was a sinusoid of wavelength exactly $w$: you would see it as a constant.)
Then the theorem states that, even though your perception of the image $\phi$ at any finite $x$ is poor, your perception of the limit as $x\to\infty$ is as good as anyone else's: $$\lim_{x\to\infty}(\phi*K)(x) = \lim_{x\to\infty}(\phi*f)(x)$$ for any $f\in\mathcal L^1$ with $\int f=1$. This feels like an intuitive result: no matter how big your blurring kernel is, it can't be so big that it corrupts your view of "$\phi(\infty)$" with values of $\phi(x)$ at any finite $x$. The worst that could happen is that there are never-ending oscillations to infinity that blur out to constant under your myopia, so you think the limit exists but it doesn't, but this is prevented by condition (2) above.
(Of course, if you or someone else is wearing sunglasses, all you have to do is account for the fact that you or they see the image as dimmer than usual, and scale the expected limit by the corresponding factor.)
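Just to see this numerically, here is a toy sketch of my own (the kernels, the test image, and the sample point are all arbitrary choices): two quite different normalized kernels report essentially the same value once you look far enough to the right.

```python
import numpy as np

x = np.linspace(-60, 60, 6001)
dx = x[1] - x[0]

# An "image" that settles down to a = 2 far to the right
phi = 2 + np.exp(-np.abs(x)) * np.sin(5 * x)

def blur(phi, K):
    K = K / (K.sum() * dx)                     # normalize: integral of K is 1
    return dx * np.convolve(phi, K, mode="same")

gauss = np.exp(-x**2)                          # one blurring kernel
box = (np.abs(x) < 3).astype(float)            # a very different one

i = np.searchsorted(x, 40.0)                   # a point far to the right
print(blur(phi, gauss)[i], blur(phi, box)[i])  # both are close to 2
```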
-
Good one! Thanks! – Hui Yu Oct 30 '12 at 7:59
http://unapologetic.wordpress.com/2010/04/16/an-example-of-a-measure/?like=1&source=post_flair&_wpnonce=966197e62c
# The Unapologetic Mathematician
## An Example of a Measure
At last we can show that the set function we defined on semiclosed intervals is a measure. It’s clearly real-valued and non-negative. We already showed that it’s monotonic, and this will come in handy as we show that it’s countably additive.
So, if $\{E_i\}$ is a countable disjoint sequence of semiclosed intervals whose union is also a semiclosed interval $E$, then our first monotonicity property shows that for any finite $n$ we have
$\displaystyle\sum\limits_{i=1}^n\mu(E_i)\leq\mu(E)$
and so in the limit we must still have
$\displaystyle\sum\limits_{i=1}^\infty\mu(E_i)\leq\mu(E)$
But the sequence $\{E_i\}$ covers $E$, and so our other monotonicity property shows that
$\displaystyle\mu(E)\leq\sum\limits_{i=1}^\infty\mu(E_i)$
which gives us the equality we want.
But this still isn’t quite a measure. Why not? It’s only defined on the collection $\mathcal{P}$ of semiclosed intervals, and not on the ring $\mathcal{R}$ of finite disjoint unions. But we’re in luck: there is a unique finite measure $\bar{\mu}$ on $\mathcal{R}$ extending $\mu$ on $\mathcal{P}$. That is, if $E\in\mathcal{P}$, then $\bar{\mu}(E)=\mu(E)$.
Every set in $\mathcal{R}$ is a finite disjoint union of semiclosed intervals, but not necessarily uniquely. Let’s say we have both
$\displaystyle E=\bigcup\limits_{i=1}^mE_i$
$\displaystyle E=\bigcup\limits_{j=1}^nF_j$
Then for each $i$ we have
$\displaystyle E_i=\bigcup\limits_{j=1}^n(E_i\cap F_j)$
which represents $E_i\in\mathcal{P}$ as a finite disjoint union of other sets in $\mathcal{P}$. Since $\mu$ is finitely additive, we must have
$\displaystyle\sum\limits_{i=1}^m\mu(E_i)=\sum\limits_{i=1}^m\sum\limits_{j=1}^n\mu(E_i\cap F_j)$
and, similarly
$\displaystyle\sum\limits_{j=1}^n\mu(F_j)=\sum\limits_{j=1}^n\sum\limits_{i=1}^m\mu(E_i\cap F_j)$
But since these sums are finite we can switch their order with no trouble. Thus we can unambiguously define
$\bar{\mu}(E)=\sum\limits_{i=1}^m\mu(E_i)$
which doesn’t depend on how we represent $E$ as a finite disjoint union of semiclosed intervals.
This function $\bar{\mu}$ clearly extends $\mu$, since if $E\in\mathcal{P}$ we can just use $E$ itself as our finite disjoint union. It’s also easily seen to be finitely additive, and that there’s not really any other way to define a finitely additive set function to extend $\mu$. But we still need to show countable additivity.
So, let $\{E_i\}$ be a disjoint sequence of sets in $\mathcal{R}$ whose union $E$ is also in $\mathcal{R}$. Then for each $i$ we have
$\displaystyle E_i=\bigcup\limits_{j=1}^{n_i}E_{ij}$
$\displaystyle\bar{\mu}(E_i)=\sum\limits_{j=1}^{n_i}\mu(E_{ij})$
If $E$ happens to be in $\mathcal{P}$, then the collection of all the $E_{ij}$ is countable and disjoint, and we can use the countable additivity of $\mu$ we proved above to show
$\displaystyle\bar{\mu}(E)=\mu(E)=\sum\limits_{i=1}^\infty\sum\limits_{j=1}^{n_i}\mu(E_{ij})=\sum\limits_{i=1}^\infty\bar{\mu}(E_i)$
In general, though, $E$ is a finite disjoint union
$\displaystyle E=\bigcup\limits_{k=1}^nF_k$
and we can apply the previous result to each of the $F_k$:
$\displaystyle\begin{aligned}\bar{\mu}(E)&=\sum\limits_{k=1}^n\bar{\mu}(F_k)\\&=\sum\limits_{k=1}^n\sum\limits_{i=1}^\infty\bar{\mu}(E_i\cap F_k)\\&=\sum\limits_{i=1}^\infty\sum\limits_{k=1}^n\bar{\mu}(E_i\cap F_k)\\&=\sum\limits_{i=1}^\infty\bar{\mu}(E_i)\end{aligned}$
From here on out, we’ll just write $\mu$ instead of $\bar{\mu}$ for this measure on $\mathcal{R}$.
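As a toy illustration of the representation-independence just proved (my own sketch; `mu_bar` is a hypothetical helper that simply totals lengths and trusts the caller to supply genuinely disjoint $[a,b)$ intervals):

```python
def mu_bar(intervals):
    """Measure of a finite disjoint union of semiclosed intervals [a, b)."""
    return sum(b - a for a, b in intervals)

# Two different ways of cutting the same set [0, 2) into semiclosed intervals:
print(mu_bar([(0, 1), (1, 2)]))        # 2
print(mu_bar([(0, 0.5), (0.5, 2)]))    # 2.0
```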
Posted by John Armstrong | Analysis, Measure Theory
http://mathhelpforum.com/advanced-statistics/16609-regression.html
# Thread:
1. ## regression
I do not even know where to begin with this problem...please help. I am in intermediate algebra, summer session, which is moving quite fast. It seems as if there is no time for understanding.
Please explain how would I know when to use log in the equation when I have a word problem and it is only asking to find an equation for f.
Also what is the base b of the function f(t)= ab^t?
How do I find regression equations?
2. Originally Posted by getnaphd
I do not even know where to begin with this problem...please help. I am in intermediate algebra, summer session, which is moving quite fast. It seems as if there is no time for understanding.
Please explain how would I know when to use log in the equation when I have a word problem and it is only asking to find an equation for f.
Also what is the base b of the function f(t)= ab^t?
How do I find regression equations?
First, try to put only one question in a post; you can post any number separately.
Second, can you give an example of the type of word problems you have.
Third, b is a constant in the equation which is raised to the variable power t.
Fourth, look here.
RonL
3. ## summer college math...help
Sorry for the many questions... I was frustrated and up late. Still I have had no sleep.
Here is an example: Before 1997, the Library of Congress sold copies of its 3x5 catalog cards to other libraries. Due to libraries using computer catalogs, the Library of Congress stopped selling such cards in March 1997. The number of cards sold (in millions) for various years is listed in the table below.
| Year | Number of cards sold (millions) |
|------|---------------------------------|
| 1971 | 74.47 |
| 1976 | 39.82 |
| 1981 | 15.64 |
| 1986 | 8.08 |
| 1991 | 2.36 |
| 1996 | 0.57 |
Let f(t) represent the number of cards in millions in the year that is t years since 1970.
a) Find an equation for f.
b) Use your model to estimate the number of cards sold in 1960.
c) What is the base b of your model f(t)=ab^t? What does it mean in terms of the situation?
Thanks
4. Originally Posted by getnaphd
Sorry for the many questions... I was frustrated and up late. Still I have had no sleep.
Here is an example: Before 1997, the Library of Congress sold copies of its 3x5 catalog cards to other libraries. Due to libraries using computer catalogs, the Library of Congress stopped selling such cards in March 1997. The number of cards sold (in millions) for various years is listed in the table below.
| Year | Number of cards sold (millions) |
|------|---------------------------------|
| 1971 | 74.47 |
| 1976 | 39.82 |
| 1981 | 15.64 |
| 1986 | 8.08 |
| 1991 | 2.36 |
| 1996 | 0.57 |
Let f(t) represent the number of cards in millions in the year that is t years since 1970.
a) Find an equation for f.
b) Use your model to estimate the number of cards sold in 1960.
c) What is the base b of your model f(t)=ab^t? What does it mean in terms of the situation?
Thanks
You are going to do an exponential fit (the model $f(t)=a~b^t$ is exponential in $t$, not a power law). The model is of the form:
$f(t)=a~b^t$
Now we know how to do linear regression, so we want to convert this to a linear model, so we take logs to get:
$\log(f(t))=t\log(b)+\log(a)$
So we put $F(t)=\log(f(t))$, $\beta=\log(b)$, and $\alpha=\log(a)$, which gives us the linear model:
$F(t)=\beta\ t+\alpha$
Then take logs of the card counts in the table, and we have a bog standard linear regression problem.
RonL
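For completeness, a minimal sketch of that regression in Python (my own addition, not part of the original reply; `numpy.polyfit` is just one convenient way to do the least-squares fit):

```python
import numpy as np

# t = years since 1970, y = cards sold (millions), from the table above
t = np.array([1, 6, 11, 16, 21, 26], dtype=float)
y = np.array([74.47, 39.82, 15.64, 8.08, 2.36, 0.57])

# Fit log(f(t)) = beta * t + alpha, then undo the logs
beta, alpha = np.polyfit(t, np.log(y), 1)
a, b = np.exp(alpha), np.exp(beta)

print(f"f(t) = {a:.1f} * {b:.3f}^t")
print(f"1960 estimate (t = -10): {a * b**(-10):.0f} million cards")
```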
http://blankonthemap.blogspot.com/
# Blank On The Map
Explorations of physics and whimsy.
## Tuesday, April 9, 2013
### Celebrating Tom Lehrer
This is not a post about physics, but one to mark the birthday today of mathematician, teacher, satirist, lyricist and performer Tom Lehrer. Today he turns 85 – or, since he apparently prefers to measure his age in Centigrade – 29 (I must remember to use that one myself sometime!).
To commemorate the occasion, the BBC ran a half-hour long radio feature on his life and work last Saturday. This is available to listen to here for another four days; do try to catch it before then!
Even readers who have not heard of Lehrer might have heard of some of his better-known songs, such as The Elements Song. Other pieces of simple comedy gold include Lobachevsky, or New Math. But for me the best of Lehrer's songs are the ones with darkly satirical lyrics juxtaposed with curiously uplifting melodies. (These were probably also part of the reason that he never achieved the mainstream popularity he deserved.) So I want to feature one such example here:
Kim Jong-un, I hope you are listening.
## Sunday, April 7, 2013
### Unnecessary spin
A few people have asked me why I have not blogged about the recent announcement about, and publication of, results from the Alpha Magnetic Spectrometer, which were widely touted as a possible breakthrough in the search for dark matter.
The reason I have not is simply that there are many other better informed commenters who have already done so. In case you have not yet read these accounts, you could do worse than going to Résonaances, or Ethan Siegel, or Stuart Clark in the Guardian, who provide commentary at different levels of technical detail. The simple short summary would be: AMS has not provided evidence about the nature of dark matter, nor is it likely to do so in the near future. The dramatic claims to the contrary are spin, pure and simple. Siegel in fact goes so far as to say "calling it misleading is generous, because I personally believe it is deceitful" (emphasis his own).
So I'm not going to make any more comments about that.
However, since this incident brought it up again, I do want to comment on a related piece of annoying spin, which is the habit of physicists in the business of communicating science to the public of making vastly exaggerated claims about the possible practical applications of fundamental physics. The example that caught my attention this week occurred when Maggie Aderin-Pocock – who is apparently a space science research fellow at UCL – appeared on the BBC's Today programme on Thursday to discuss the significance of the AMS findings.
At one point in the discussion the interviewer John Humphrys asked a slightly tricky question: I understand that dark matter and dark energy are endlessly fascinating, he said, and that learning about the composition of the universe is very exciting. But what practical benefits might it bring? The answer Aderin-Pocock gave was that if we understood what dark matter and dark energy were, we might be able to use them to supply ourselves with energy – dark matter as a fuel source.
I'm sorry, but that is just rubbish.
Unfortunately, it's the kind of rubbish that is increasingly commonly voiced by scientists. You may argue that Aderin-Pocock was simply commenting on something she didn't understand – and if you listen to the whole interview (available here for a few days; skip to the segment between 1h 23m and 1h 26m), including the cringe-worthy suggestion that dark matter and dark energy are the same thing really (because $E=mc^2$, apparently), it's hard to avoid that conclusion. But a few weeks ago I thought I heard Andrew Pontzen and Tom Whyntie suggest something similar about the Higgs boson on BBC Radio 5 (unfortunately this episode is no longer on the iPlayer so I can't check the exact words they used). And here is Jon Butterworth seeming to suggest (in the midst of an otherwise reasonable piece) that the Higgs could be used to power interstellar travel ...
Why do people feel the need to do this? It's patently rubbish, and they know better. Do we as a scientific community feel that continued public support of science is so important that we should mislead or deceive the public in order to guarantee our future access to it? Do we feel that there is no convincing honest case to be made instead? Or are we just too lazy to make the honest case, and so rely on the catchy but inaccurate soundbite instead?
I think the sensible answer to the question John Humphrys posed would go something like this. Discovering the nature of dark matter is a fascinating and exciting adventure. Knowing the answer will almost certainly have no practical applications whatsoever. However, on the journey to the answer we will have to develop new technologies and equipment (made of ordinary matter!) which may serendipitously turn out to have spin-off applications that we cannot yet foresee. More importantly, the very fact that the search is fascinating is part of what draws talented and creative young minds to physics – indeed to science – in the first place, from where they go on to enrich our society in a myriad of different ways, none of which may later be connected to dark matter at all. I tried to make this case at greater length here in the early days of this blog.
It's a more subtle argument than just throwing empty phrases about "energy source" around, and it might be hard to reduce to a sound-byte. But it is justifiable, and also honest. And since science is after all about careful argumentation, let's have less spin all round please.
## Wednesday, March 27, 2013
### Explaining Planck by analogy
Explaining physics to the public is hard. Most physicists do a lousy job of conveying a summary of what their research really means and why it is important, without the use of jargon and in terms that can be readily understood. So it is not particularly surprising that occasionally non-experts trying to translate these statements for the benefit of other non-experts come up with misleading headlines such as this, or this.
Just to be clear: Planck has not mapped the universe as it was in the first tiny fraction of a second. (To be fair, most other reports correctly make this distinction, though they differ widely on when inflation is supposed to have occurred.) I think this is an important thing to get right, and I'm going to try to explain why, and what the CMB actually is.
However, I'm going to try to do so with the help of an analogy. This analogy is not my original invention – I heard Simon White use it during the Planck science briefing – but I think it is brilliant, simple to understand and not vastly misleading. So, despite the health warning about analogies above, I'm going to run with it and see how far we get.
## Thursday, March 21, 2013
### What Planck has seen
Update at 16:30 CET: I've now had a chance to listen to the main science briefing, and also to glance at some of the scientific papers released today, albeit very briefly. So here are a few more thoughts, though in actual fact it will take quite some time for cosmologists to fully assess the Planck results.
The first thing to say – and it's something easy to forget to say – is just what a wonderful achievement it is to send a satellite carrying two such precise instruments up into space, station it at L2, cool the instruments to a tenth of a degree above absolute zero with fluctuations of less than one part in a million about that, spin the satellite once per minute, scan the whole sky in 9 different frequency bands, subtract all the messy foreground radiation from our own galaxy and even our solar system, all to obtain this perfect image of the universe as it was nearly 14 billion years ago:
The CMB sky according to Planck.
So congratulations and thanks to the Planck team!
Now I said all that first up because I don't want to now sound churlish when I say that overall the results are a little disappointing for cosmologists. This is because, as I noted earlier in the day, there isn't much by way of exciting new results to challenge our current model of the universe. And of course physicists are more excited by evidence that what they have hitherto believed was wrong than by evidence that it continues to appear to be right.
There are however still some results that will be of interest, and where I think you can expect to see a fair amount of debate and new research in the near future.
Firstly, as I pointed out earlier, Planck sees the same large scale anomalies as WMAP, thus confirming that they are real rather than artifacts of some systematic error or foreground contamination (I believe Planck even account for possible contamination from our own solar system, which WMAP didn't do). These anomalies include not enough power on large angular scales ($\ell\leq30$), an asymmetry between the power in two hemispheres, a colder-than-expected large cold spot, and so on.
The problem with these anomalies is that they lie in the grey zone between being not particularly unusual and being definitely something to worry about. Roughly speaking, they're unlikely at around a 1% level. This means that how seriously you take them depends a lot on your personal ~~prejudices~~ priors. One school of thought – let's call it the "North American school" – tends to downplay the importance of anomalies and question the robustness of the statistical methods by which they were analysed. The other – shall we say "European" – school tends instead to play them up a bit: to highlight the differences with theory and to stress the importance of further investigation. Neither approach is wrong, because as I said this is a grey area. But the Planck team, for what it's worth, seem to be in the "European" camp.
The second surprise is the change in the best-fit values for the parameters of the simplest $\Lambda$CDM model. In particular the Hubble parameter is lower than WMAP's, which was already getting a bit low compared to distance-ladder measurements from supernovae. This will be a cause for concern for the people working on distance-ladder measurements, and potentially something interesting for inventive theorists.
And finally, something close to my own heart. A few days ago I wrote a post about the discrepancy in the integrated Sachs-Wolfe signal seen from very rare structures, and pointed out that this effect had now been confirmed in two independent measurements. Almost immediately I had to change that statement, because one of those independent measurements had been partially retracted.
Well, the Planck team have been on the case (here, paper XIX), and have now filled that gap with a second independent measurement (as well as re-confirming the first one). The effect is definitely there to be seen, and it is still discrepant with $\Lambda$CDM theory (though I'll need to read the paper in more detail before quantifying that statement).
So there's a ray of hope for something exciting.
11:30 am CET: Well, ESA's first press conference to announce the cosmological results from Planck has just concluded. The full scientific papers will be released in about an hour, and there will be a proper technical briefing on the results in the afternoon (this first announcement was aimed primarily at the media). However, here is a very quick summary of what I gathered is going to be presented:
• The standard Lambda Cold Dark Matter Model continues to be a good fit to CMB data
• However, the best fit parameters have changed: in particular, Planck indicates slightly more dark matter and ordinary (baryonic) matter than WMAP did, and slightly less dark energy. (This is possibly not a very fair comparison – my hunch is that the Planck values are obtained from Planck data alone, whereas the "WMAP values" that were quoted were actually the best fit to WMAP plus additional (non-CMB) datasets.)
• The value of the Hubble parameter has decreased a bit, to around 67 km/s/Mpc. Given the error bars this is actually getting a bit far away from the value measured from supernovae, which is around 74 km/s/Mpc. I think the quoted error bars on the measurement from supernovae are underestimated.
• The Planck value of the spectral tilt is a bit smaller than, but consistent with, what WMAP found.
• There is no evidence for extra neutrino-like species.
• There is no evidence for non-zero neutrino masses.
• There is no evidence for non-Gaussianity.
• There is no evidence for deviations from a simple power-law form of the primordial power spectrum.
• No polarisation data, and therefore no evidence of gravitational waves or their absence, for around another year.
• There is evidence for anomalies in the large-scale power, consistent with what was seen in WMAP. We'll have to wait and see how statistically significant this is – the general response to the anomalies WMAP saw could be summarised as "interesting, but inconclusive"; I don't think Planck is going to do a lot better than this (and the bigging-up of it in the press conference might have had more to do with the lack of other truly exciting discoveries), but I'd love to be surprised!
That's about all I got out of the media briefing. Obviously we are all waiting for more details this afternoon!
## Wednesday, March 20, 2013
### The Planck guessing game
At 10 am CET on Thursday morning, the Planck mission will hold a press conference and announce the first cosmology results based on data from their satellite, which has now been in orbit for nearly 1406 days, according to the little clock on their website. (I think the conference information will be available live here, though the website's not as clear as it could be.)
Planck is an incredible instrument, which has been measuring the pattern of cosmic microwave background (CMB) temperature anisotropies with great precision. And the CMB itself is an incredible treasure trove of information about the history of the universe, telling us not only about how it began, but what it consists of, and what might happen to it in the future. When the COBE and WMAP satellites first published detailed data from measurements of the CMB, the result was basically a revolution in cosmology and our understanding of the universe we live in. Planck will provide a great improvement in sensitivity over WMAP, which in turn was a great improvement on everything that came before it.
Another feature of the Planck mission has been the great secrecy with which they have guarded their results. The members of the mission themselves have known most of their results for some time now. Apparently on the morning of March 21st they will release a cache of something like 20 to 30 scientific papers detailing their findings, but so far nobody outside the Planck team itself has much of an idea what will be in them.
So let's have a little guessing game. What do you think they will announce? Dramatic new results, or a mere confirmation of WMAP results and nothing else? I'll list below some of the things they might announce and how likely I think they are (I have no inside information about what they actually have seen). Add your own suggestions via the comments box!
Tensors: Planck is much more sensitive to a primordial tensor perturbation spectrum than the best current limits. If they did see a non-zero tensor-to-scalar ratio, indicative of primordial gravitational waves, this would be pretty big news, because it is a clear smoking gun signal for the theory of inflation. Of course there are other bits of evidence that make us think that inflation probably did happen, but this really would nail it.
Unfortunately, I think it is unlikely that they will see any tensor signal – not least because many (and some would argue the most natural) inflation models predict it should be too small for Planck's sensitivity.
Number of relativistic species: CMB measurements can place constraints on the number of relativistic species in the early universe, usually parameterised as the effective number of neutrino species. I wrote about this a bit here. The current best fit value is $N_{\rm eff}=3.28\pm0.40$ according to an analysis of the latest WMAP, ACT and SPT data combined with measurements of baryon acoustic oscillations and the Hubble parameter (though some other people find a slightly larger number).
I would be very surprised indeed if Planck did not confirm the basic compatibility of the data with the Standard Model value $N_{\rm eff}=3.04$. It will help to resolve the slight differences between the ACT and SPT results and the error bars will probably shrink, but I wouldn't bet on any dramatic results.
Non-Gaussianity: One thing that all theorists would love to hear is that Planck has found strong evidence for non-zero non-Gaussianity of the primordial perturbations. At a stroke this would rule out a large class of models of inflation (and there are far too many models of inflation to choose between), meaning we would have to somehow incorporate non-minimal kinetic terms, multiple scalar fields or complicated violations of slow-roll dynamics during inflation. Not that there is a shortage of these sorts of models either …
Current WMAP and large-scale structure data sort of weakly favour a positive value of the non-Gaussianity parameter $f_{\rm NL}^{\rm local}$ that is larger than the sensitivity claimed for Planck before its launch. So if it lives up to that sensitivity billing we might be in luck. On the other hand, my guess (based on not very much) is it's more likely that they will report a detection of the orthogonal form, $f_{\rm NL}^{\rm ortho}$, which is more difficult – but not impossible – to explain from inflationary models. Let's see.
Neutrino mass: The CMB power spectrum is sensitive to the total mass of all neutrino species, $\Sigma m_\nu$, through a number of different effects. Massive neutrinos form (hot) dark matter, contributing to the total mass density of the universe and affecting the distance scale to the last-scattering surface. They also increase the sound horizon distance at decoupling and increase the early ISW effect by altering the epoch of matter-radiation equality.
WMAP claim a current upper bound of $$\Sigma m_\nu<0.44\;{\rm eV}$$ at 95% confidence from the CMB and baryon acoustic oscillations and the Hubble parameter value. But a more recent SPT analysis suggests that WMAP and SPT data alone give weak indications of a non-zero value, so it is possible that Planck could place a lower bound on $\Sigma m_\nu$. This would be cool from an observational point of view, but it's not really "new" physics, since we know that neutrinos have mass.
Running of the spectral index: Purely based on extrapolating from WMAP results, I expect Planck will find some evidence for non-zero running of the spectral index. But given the difficulty in explaining such a value in most inflationary models, I also expect the community will continue to ignore this, especially since the vanilla model with no running will probably still provide an acceptable fit to the data.
Anything else? Speculate away … we'll find out on Thursday!
Labels: CMB, cosmology, maps, neutrinos, the universe
## Tuesday, March 19, 2013
### A real puzzle in cosmology: part II
(This post continues the discussion of the very puzzling observation of the integrated Sachs-Wolfe effect, the first part of which is here. Part II is a bit more detailed: many of these questions are real ones that have been put to me in seminars and other discussions.)
Update: I've been informed that one of the papers mentioned in this discussion has just been withdrawn by the authors pending some re-investigation of the results. I'll leave the original text of the post up here for comparison, but add some new material in the relevant sections.
Last time you told me about what you said was an unusual observation of the ISW effect made in 2008.
Yes. Granett, Neyrinck and Szapudi looked at the average CMB temperature anisotropies along directions on the sky where they had previously identified large structures in the galaxy distribution. For 100 structures (50 "supervoids" and 50 "superclusters"), after applying a specific aperture photometry method, they found an average temperature shift of almost 10 micro Kelvin, which was more than 4 standard deviations away from zero.
Then you claimed that this observed value was five times too large. That if our understanding of the universe were correct, they should not have seen a value definitely bigger than zero. Theory and observation grossly disagree.
Right again. Our theoretical calculation showed the signal should have been at most around 2 micro Kelvin, which is pretty much the same size as the contamination from random noise.
But you used a simple theoretical model for your calculations. I don't like your model. I think it is too simple. That's the answer to your problem – your calculation is wrong.
That could be true – though I don't think so. Why don't we go over your objections one by one?
## Thursday, March 7, 2013
### Higgs animations
The news recently from the LHC experiments hasn't been very exciting for my colleagues on the particle theory side of things (see for instance here for summaries and discussion). But via the clever chaps at ATLAS we do have a series of very nice gif animations showing how the evidence for the existence of the Higgs changed with time, as they collected more and more data.
This example shows the development of one plot, for the Higgs-to-gamma-gamma channel:
That's pretty cool. Also nice to see the gif format being put to better use than endless animations of cats doing silly things! (Though if you are a PhD student, you might find this use of gifs amusing ... )
Here's another one, this time for the decay channel to 4 leptons:
Note that in this case the scale on the y axis is also changing with time! There's a version of this animation with a fixed axis here, and one of the gamma-gamma channel with a floating axis here.
Labels: CERN, Higgs boson, particle physics
## Tuesday, March 5, 2013
### A real puzzle in cosmology: part I
In a previous post, I wrote about recent updates to the evidence from the cosmic microwave background for extra neutrino species. This was something that a lot of people in cosmology were prepared to get excited about, but I argued that reality turned out to be really rather boring. This is because the new data neither showed anything wrong with the current model of what the universe is made of, nor managed to rule out any competing models.
Today I'd like to write about something else, which currently is a really exciting puzzle. Measurements have been made of a particular cosmological effect, known as the integrated Sachs-Wolfe or ISW effect, and the data show a measured value that is five times larger than it should be if our understanding of gravitational physics, our model of the universe, and our analysis of the experimental method are correct. No one yet knows why this should be so. The point of this post is to try to explain what is going on, and to speculate on how we might hope to solve the puzzle. It has been written in a conversational format with the lay reader in mind, but there should be some useful information even for experts.
Before beginning, I should point out that this is what a lot of my own research is about at the moment. In fact, this was the topic of a seminar I gave at the University of Helsinki last week (and much of this post is taken from the seminar). My host in Helsinki, Shaun Hotchkiss, with whom I have written two papers on this "ISW mystery", has also put up several posts about it at The Trenches of Discovery blog over the last year (see here for parts I, II, III, IV, V, and VI). I will be more concise and limit myself to just two!
Obviously you could view this as a bit of an effort at self-publicity. But at a time when, both in particle physics and cosmology, many experiments are disappointingly failing to provide much guidance on new directions for theorists to follow, this is one of the few results that could do so. (Unlike a lot of the rubbish you might read in other popular science reports, it also has a pretty good chance of being true.) So I won't apologise for it!
What is the integrated Sachs-Wolfe effect?
The entire universe is filled with very cold photons. These photons weren't always very cold; on the contrary, they are leftovers from the time soon after the Big Bang when the universe was still very young and very small and very hot, so hot that all the protons and electrons (and a few helium nuclei) formed a single hot plasma, the photons and electrons bouncing off each other so often that they all had the same temperature. And as the universe was expanding, this plasma was also cooling, until suddenly it was cool enough for the electrons and protons to come together to form hydrogen atoms, without immediately getting swept apart again. And when this happened, the photons stopped bouncing off the electrons, and instead just continued travelling straight through space minding their own business, cooling as the universe continued to expand. (The neutrinos, which only interact weakly with other stuff, had stopped bouncing and started minding their own business some time before this.)
What I've just told you is a cartoon picture of the history of the early universe. These cooling photons streaming through space form the cosmic microwave background radiation, or CMB for short. They fill the universe, and they arrive at Earth from all directions – they even make up about 1% of the 'snow' you see on an (old-school) untuned TV set.
The most important property of the CMB photons is that, to a very great degree of accuracy, they are all at the same temperature, whichever direction they come from. This is how we know that the universe used to be very hot, and how we learned that it has been expanding since then. It is also why we think it is probably very uniform. The second important property of CMB photons is that they are not all at the same temperature – by looking carefully enough with an extremely sensitive instrument, we can see tiny anisotropies in temperature across the sky. These differences in temperature are the signs of the very small inhomogeneities in the early matter-radiation plasma which are responsible for all the structure we see around us in the night sky today. When the photons decoupled from the primordial plasma, they kept the traces of the tiny inhomogeneities as they streamed across the universe. The matter, on the other hand, was subject to gravity, which took the small initial lumpiness and over billions of years caused it to become bigger and lumpier, forming stars, galaxies, clusters of galaxies and vast clusters of clusters.
The CMB sky as seen by the WMAP satellite. The colours represent deviations of the measured CMB temperature from the mean value – the CMB anisotropies (red is hot and blue is cold). This map uses the Mollweide projection to display a sphere in two dimensions. Image credit: NASA / WMAP Science team.
Yes I knew that, but what is the integrated Sachs-Wolfe effect?
Labels: CMB, cosmology, dark energy, physics, the universe
## Thursday, February 28, 2013
### The nature of publications
A paper in the journal of Genome Biology and Evolution has been doing the rounds on the internet recently and was shown to me by a friend. It is titled "On the immortality of television sets: “function” in the human genome according to the evolution-free gospel of ENCODE", by Graur et al. The title is blunt enough, but the abstract is extraordinarily so. Let me quote the entire thing here:
A recent slew of ENCODE Consortium publications, specifically the article signed by all Consortium members, put forward the idea that more than 80% of the human genome is functional. This claim flies in the face of current estimates according to which the fraction of the genome that is evolutionarily conserved through purifying selection is under 10%. Thus, according to the ENCODE Consortium, a biological function can be maintained indefinitely without selection, which implies that at least 80 – 10 = 70% of the genome is perfectly invulnerable to deleterious mutations, either because no mutation can ever occur in these “functional” regions, or because no mutation in these regions can ever be deleterious. This absurd conclusion was reached through various means, chiefly (1) by employing the seldom used “causal role” definition of biological function and then applying it inconsistently to different biochemical properties, (2) by committing a logical fallacy known as “affirming the consequent,” (3) by failing to appreciate the crucial difference between “junk DNA” and “garbage DNA,” (4) by using analytical methods that yield biased errors and inflate estimates of functionality, (5) by favoring statistical sensitivity over specificity, and (6) by emphasizing statistical significance rather than the magnitude of the effect. Here, we detail the many logical and methodological transgressions involved in assigning functionality to almost every nucleotide in the human genome. The ENCODE results were predicted by one of its authors to necessitate the rewriting of textbooks. We agree, many textbooks dealing with marketing, mass-media hype, and public relations may well have to be rewritten.
Ouch.
The paper that Graur et al. implicitly deride as "marketing, mass-media hype and public relations" is one of a series of publications in Nature (link here for those interested) by the ENCODE consortium. I'm not going to claim any expertise in genetics, though the arguments put forward by Graur appear sensible and convincing.1 But I do think it is interesting that the ENCODE papers were published in Nature.
Nature is of course a very prestigious journal to publish in. In some fields, the presence or lack of a Nature article on a young researcher's CV can make or break their career chances. It is very selective in accepting articles: not only must contributions meet all the usual requirements of peer-review, they should also be judged to be in "the five most significant papers" published in that discipline that year. It has a very high Impact Factor rating, probably one of the highest of all science journals. In fact it is apparently one of the very few journals that does better on citation counts than the arXiv, which accepts everything.
But among some cosmologists, Nature has a reputation for often publishing claims that are over-exaggerated, describe dramatic results that turn out to be less dramatic in subsequent experiments, or are just plain wrong.2 One professor even once told me – and he was only half-joking – that he wouldn't believe a particular result because it had been published in Nature.
It is easy to see how such things can happen. The immense benefit of a high-profile Nature publication to a scientist's career leads to a pressure to find results that are dramatic enough to pass the "significance test" imposed by the journal, or to exaggerate the interpretation of results that are not quite dramatic enough. On the other hand, if a particular result does start to look interesting enough for Nature, the authors may be – perhaps unwittingly – less likely to subject it to the same level of close scrutiny they would otherwise give it. The journal is then more reliant on its referees to provide the scrutiny to weed out the hype from the substance, but even with the most efficient refereeing system in the world, given enough submitted papers full of earth-shattering results, some amount of rubbish will always slip through.
I was thinking along these lines after seeing Graur et al.'s paper, and I was reminded of a post by Sabine Hossenfelder at the Backreaction blog, which linked to this recent pre-print on the arXiv titled "Deep Impact: Unintended Consequences of Journal Rank". As Sabine discusses, the authors point to quite a few undesirable aspects of the ranking of journals according to "impact factor", and the consequent rush to try to publish in the top-ranked journals. The publication bias effect (and in some cases, the subsequent retractions that follow) appears to be influenced to a degree by the impact factor of the journal in which the study is published. Another thing that might be interesting (though probably hard to check) is the link between the likelihood of scientists holding a press conference or issuing a press release to announce a result, and the likelihood of that result being wrong. I'd guess the correlation is quite high!
Of course the only real reason that the impact factor of the journal in which your paper is published matters is that it can be used as a proxy indication of the quality of your work for the benefit of people who can't be bothered, or are unable, to read the original work and judge it on merit.
The other yardstick by which researchers are often judged is the number of citations their papers receive, which at least has the (relative) merit of being based on those papers alone, rather than other people's papers. Combining impact factor and citation count is even sillier – unless they are counted in opposition, so that a paper that is highly cited despite being in a low-impact journal gets more credit, and a moderately cited one in a high-impact journal gets less!
Anyway, bear these things in mind if you ever find yourself making a reflexive judgement about the quality of a paper you haven't read based on where it was published.
1The paper includes a quote which pretty well sums up the problem for ENCODE:
"The onion test is a simple reality check for anyone who thinks they can assign a function to every nucleotide in the human genome. Whatever your proposed functions are, ask yourself this question: Why does an onion need a genome that is about five times larger than ours?"
2Cosmologists (the theoretical ones, at any rate) actually hardly ever publish in Nature. Even observational cosmology is rarely included. So you might regard this as a bit of a case of sour grapes. I don't think that is the case, simply because it isn't really relevant to us. Not having a Nature publication is not a career-defining gap for a cosmologist: it's just normal.
Labels: biology, peer review, publishing, science
## Tuesday, February 19, 2013
### Things to Read, 19th February
I have just arrived in Helsinki, where I am visiting current collaborators and future colleagues at the Helsinki Institute of Physics for a few days. I will give a talk next Wednesday, about which more later. In the meantime though, a quick selection of interesting things I have read recently:
• Did you know that about 6 million years ago, the Mediterranean sea is believed to have basically evaporated, leaving a dry seabed? This is called the Messinian Salinity Crisis, which I first learned about from this blog. There's also an animated video showing a hypothesised course of events leading to the drying up:
Very soon after, the Atlantic probably came flooding back in over the straits of Gibraltar – an event known as the Zanclean Flood – and, according to some models, could have refilled the whole basin back up in a very short time. Spare a thought for the poor hippopotamuses that got stuck on the seabed ...
• A long feature in next month's issue of National Geographic Magazine is called The Drones Come Home, by John Horgan. Horgan has written a blog piece about this at Scientific American, which he has titled 'Why Drones Should Make You Afraid'. In the blog piece he has a bullet-point summary of the most disturbing facts about unmanned aircraft (military or otherwise) taken from the main piece. Some of these include:
- "The Air Force has produced [a video showing] possible applications of Micro Air Vehicles [...] swarming out of the belly of a plane and descending on a city, where [they] stalk and kill a suspect."
- "The Obama regime has quietly compiled legal arguments for assassinations of American citizens without a trial"
- "The enthusiasm of the U.S. for drones has triggered an international arms race. More than 50 other nations now possess drones, as well as non-governmental militant groups such as Hezbollah."
Scary stuff; worth reading the whole thing.
• I wrote some time ago about Niall Ferguson's argument about economics with Paul Krugman (this was in the context of a lot of nonsense Ferguson was coming up with at the time, both in his Reith lectures for the BBC, and in other publications). I just learned (via a post by Krugman, who also just learned) that Ferguson had already apparently admitted that he got it wrong, about a year ago. Krugman's response to that is here; I'd add that I notice this admission didn't seem to stop Ferguson continuing the same economic reasoning in his Reith lectures a few months later!
• A review of John Lanchester's new novel Capital, by Michael Lewis in the New York Review of Books. Almost always with the NYRB, I read reviews of books before I have read the actual book. In this case the result was to make me resolve to buy a copy.
Labels: BBC, economics and politics, geology, Krugman
http://physics.stackexchange.com/questions/9041/how-to-think-physically-about-basic-fields?answertab=oldest
How to think physically about basic “fields”
"Field" is a name for associating a value with each point in space. This value can be a scalar, vector or tensor etc. I read the wikipedia article and got that much, but then it goes it into more unfamiliar concepts.
My question is how to interpret a basic field. Let's say there is a field with momentum and energy. Does that mean that any "object" which can interact with that field can borrow that momentum and energy from the field and give it back as well? Much like an electron in some (not its own) electromagnetic field, right? Now how do I extend this understanding to concepts like:
"$U_\lambda$ is the radiation energy density per unit wavelength of a thermodynamic equilibrium radiation field trapped in some cavity."
Does that mean that the "numbers" that make up the field at each point in space stay constant with temperature or time (or both, I'm not sure)? And if there are objects in that field which can interact with it, for example electrons, then they can take energy from that field or lose energy to that field. Then the author calls this field a "photon gas" without explanation. So does that mean a bag of photons in a cavity is mathematically equivalent to specifying some numbers in space? Or something else.
-
3 Answers
Think about this: a function that maps points on a 2D space to numbers can describe the shape of terrain, but I wouldn't say that it is the terrain. In the same way, a mapping of points to objects (scalars, vectors, tensors, etc.) is the mathematical description of a field, but if you think of the field as just the mapping, you're missing out.
Fields can have various physical properties. For example, just as a particle can have a certain amount of energy, so can an electromagnetic field. The difference is, since the field is spread throughout space, so is the energy; therefore, it makes more sense to talk about the density of energy rather than the amount. Same applies for momentum, or any other physical quantity carried by the field.
Just as the field could be described by a mapping of points to vectors, so the energy density can be described by a mapping of points to numbers. Given the vector value of the field at any point, you can calculate the numeric value of the energy density at that point. But remember that these numbers (i.e. the mapping) are just a mathematical description of the energy density.
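To make this concrete (my addition, not part of the original answer): for the electromagnetic field in SI units, the energy density at each point is computed directly from the field values there, $$u(\mathbf{x}) = \frac{\epsilon_0}{2}\,|\mathbf{E}(\mathbf{x})|^2 + \frac{1}{2\mu_0}\,|\mathbf{B}(\mathbf{x})|^2.$$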
Now, you may notice that the mapping that describes the energy density ($u(x)$) satisfies the naive definition of a mathematical description of a field: it associates a number with each point in space. But physicists wouldn't normally call that mapping a "field," because in a sense, it's not really independent. Mathematically, you can calculate $u(x)$ from $A(x)$; physically, the energy "field" is completely determined by the EM field. In physics, we tend to reserve the term "field" to talk about something that can't be obtained by a simple calculation from some other field.
Does that mean, that the "numbers" that make up the field in each point in space stay constant with temperature or time (or both, I'm not sure.)
I'm not sure how you got that from the quote you listed... no, the numbers that make up the mathematical description of the EM field do not stay constant with either time or temperature. In fact, one of the things that characterizes a physical field is that it has dynamics - mathematically, this means that the numbers (or whatever) making up the field change with respect to time and space, but in a predictable manner which can be described with differential equations.
But there are things you can calculate from a field that do stay constant. For instance, you can calculate the total energy stored in the field by calculating the energy density and then integrating it over the volume of the field. You could also calculate the temperature of the field, by some mathematical procedure. In many cases, these quantities are more closely based on the manner in which the field changes than the actual values that describe it. (In fact, in some sense, it turns out that you can describe a field by the way that the numbers change, just as well as you can with the numbers themselves. Read up on the Fourier transform and momentum space if you are interested.)
-
I'll start here -- "So does that mean a bag of photons in cavity is mathematically equivalent to specifying some numbers in space?"
It means much more than that. First, you define a field -- in the electromagnetic case, it's a set of vectors everywhere in space. Then you allow it to be dynamic, that is you write a Lagrangian for it that leads to classical equations of motion. Then you quantize it, which leads to (1) the idea of a vacuum, that is a state of the field that contains no excitations and (2) particles -- excitations of the field that can carry momentum and energy around.
So a photon gas is much more than just a simple set of numbers everywhere. The field is dynamic. It's not in the vacuum state and so it contains particles. And finally, it is in a very precise kind of state - the photons are distributed in energy according to the Bose-Einstein distribution at a particular temperature.
How did it get into that equilibrium state? To answer that, you need to extend your field from the plain EM one to include coupling to other matter fields, say the electron. So the electron will have its own kind of field and your combined Lagrangian will contain a term that couples the two fields together so that they can interact. This is how "objects" like electrons can borrow and lend energy to the EM field, through the coupling between the electron field (which by the way is a spinor field, and not a vector field) and the EM field.
You can have fields for all the other kinds of particles and have interaction terms for all the other known forces. So now you can have a cavity made of matter that interacts with the EM field. And through complicated dynamics that depend crucially on the Second Law of Thermodynamics, the photon gas can come into equilibrium with the matter cavity, all this mediated by the couplings in the Lagrangian.
-
dbrane and DZ have given Useful conventional Answers.
dbrane's characterization of the vacuum as "contains no excitations", however, is not very helpful because there is no clear Correspondence with the classical idea of no excitations, the everywhere zero classical field. Local measurements of the vacuum in general give non-zero results.
IMO (not a conventional Answer, so Useful only with care for the purposes of exams, etc.) it's more helpful to think of particles as modulations of the vacuum, which is abstractly constructed as a Poincaré invariant state over an algebra of observables. An alternative perspective is that a free field can be constructed from its Wightman functions, which are essentially correlations between the measured local values of the field when the localizations are at space-like separation [it's more usual to construct the Wightman functions from the quantum field, $W(x_1,x_2,...x_n)=\left<0\right|\hat\phi(x_1)\hat\phi(x_2)...\hat\phi(x_n)\left|0\right>$, but the other way round works too; note that, as generalized correlations, the Wightman functions are functions on the (symmetrized) space $M\oplus (M\times M)\oplus(M\times M\times M)\oplus...$, not on Minkowski space $M$ simpliciter]. The Wightman functions are not zero for the vacuum state.
Non-vacuum states have Wightman functions that are different from the vacuum state. In particular, they are not Poincaré invariant (which is why we can say that they "carry momentum and energy around"). Nonetheless, they are systematic deformations of the vacuum state, and hence of the Wightman functions, constructed by the action of the quantum field on the vacuum vector, which I choose to call modulations because I wish to emphasize the signal processing aspect of quantum field theory. [This doesn't make noncommutativity of measurements go away, but the relationship between quantum fields and classical signal processing is substantially different from the relationship between quantum mechanics and classical particle physics.] Although there is a continuum of possible modulations, there is also a discrete structure of states that can be constructed by the action of $1, 2, ...$ field operators on the vacuum vector, which we call the number of particles in the state. Unless there are superselection rules in place, we can construct (1) linear compositions of vectors and (2) linear compositions of states (superpositions and mixtures, respectively).
Note that by the vacuum state I mean a linear map from the space of operators $\omega_0:\mathcal{A}\rightarrow\mathbb{C};\omega_0(\hat A)=\left<0\right|\hat A\left|0\right>$, where I've used the vacuum vector $\left|0\right>$ to construct the vacuum state $\omega_0$. This is now a fairly universal distinction in mathematical physics, but not, I think, in Physics generally.
Now to your Question's more specific points. Interacting quantum fields defined in terms of deformed Lagrangians and Hamiltonians are only as well-defined as the degree of your acceptance of the mathematics of renormalization. From moment to moment, unitary evolution of a state preserves the Hilbert space norm (by definition), but may or may not conserve the number of particles in the state, nor even the energy and momentum if we construct an ad-hoc Hamiltonian (a thermal field in a box is an ad-hoc system, insofar as the Hamiltonian is not translation or boost invariant). Unitary evolutions are generalizations of sinusoidal motion to higher dimensional spaces, so insofar as we think of sinusoidal motion as borrowing potential energy to create kinetic energy, sure, it's "borrowing", but there are other, and I think better ways to think about such models.
From a field perspective that starts from the vacuum, a thermal state is a thermodynamic limit of mixtures of $0,1,2,...,n$ particle states (thermodynamic in the sense that $n\rightarrow\infty$), with weight for different states that is determined by the energy. The energy and the thermal state constructed using it are invariant under translations and under rotations, but not under boosts. Thermal states are special because the limit is not in the Hilbert space of bounded states. I regret that I don't have the time (I'm not sure if it's an hour, a week, or a Ph.D. thesis) to construct a field theoretic analysis of your thermal field in a box example, I'll have to leave it to you, expanding upon the principled approach I've laid out above.
Yayu, I've sent you to my papers before, so I won't do so again. I've riffed on your interestingly asked Question more than I've Answered it; I think of it as more up to you to make it Useful rather than Useful in itself. Best wishes.
-
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Inverse_function
# Inverse function
In mathematics, an inverse function is in simple terms a function which "does the reverse" of a given function. More formally, if f is a function with domain X, then $f^{-1}$ is its inverse function if and only if for every $x \in X$ we have:

$f^{-1}(f(x)) = f(f^{-1}(x)) = x.$
For example, if the function x → 3x + 2 is given, then its inverse function is x → (x - 2) / 3. This is usually written as:
$f\colon x\to 3x+2$
$f^{-1}\colon x\to(x-2)/3$
The superscript "−1" is not an exponent. Similarly, as long as we are not in trigonometry, f²(x) means "do f twice", that is f(f(x)), not the square of f(x). For example, if f : x → 3x + 2, then f² : x → 3(3x + 2) + 2, or 9x + 8. However, in trigonometry, for historical reasons, sin²(x) usually does mean the square of sin(x). As such, the prefix arc is sometimes used to denote inverse trigonometric functions, e.g. arcsin x for the inverse of sin(x).
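As an illustration (not part of the original article), here is a minimal Haskell sketch checking these identities numerically; the names `f`, `fInv` and `f2` are mine:

```haskell
f :: Double -> Double
f x = 3 * x + 2

-- the inverse from the text: x -> (x - 2) / 3
fInv :: Double -> Double
fInv x = (x - 2) / 3

-- "do f twice": f2 x = 9x + 8, as computed above
f2 :: Double -> Double
f2 = f . f

main :: IO ()
main = do
  print (fInv (f 5))  -- 5.0, since fInv (f x) = x
  print (f (fInv 7))  -- 7.0, since f (fInv x) = x
  print (f2 1)        -- 17.0 = 9 * 1 + 8
```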
### Simplifying rule
Generally, if f(x) is any function, and g is its inverse, then g(f(x)) = x and f(g(x)) = x. In other words, an inverse function undoes what the original function does. In the above example, we can prove $f^{-1}$ is the inverse by substituting (x - 2) / 3 into f, so

3((x - 2) / 3) + 2 = x.

Similarly this can be shown by substituting f into $f^{-1}$.
Indeed, an alternative definition of an inverse function g of f is to require that $g \circ f$ (resp. $f \circ g$) be the identity function on the domain (resp. codomain) of f.
### Existence
For a function f to have a valid inverse, it must be a bijection, that is:
• each element in the codomain must be "hit" by f: otherwise there would be no way of defining the inverse of f for some elements
• each element in the codomain must be "hit" by f only once: otherwise the inverse function would have to send that element back to more than one value.
If f is a real-valued function, then for f to have a valid inverse it must pass the horizontal line test: every horizontal line y = k must intersect the graph of f exactly once, for all real k.
It is possible to work around this condition, by redefining f's codomain to be precisely its range, and by admitting a multi-valued function as an inverse.
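For example (an illustration added here), $f(x) = x^2$ on all of $\mathbb{R}$ fails the horizontal line test, since the line $y = 4$ meets the graph at both $x = 2$ and $x = -2$. Restricting the domain to $x \geq 0$ and the codomain to the range $[0, \infty)$ makes f a bijection, with inverse $f^{-1}(x) = \sqrt{x}$.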
If one represents the function f graphically in an x-y coordinate system, then the graph of $f^{-1}$ is the reflection of the graph of f across the line y = x.
Algebraically, one computes the inverse function of f by solving the equation
y = f(x)
for x, and then exchanging y and x to get
$y = f^{-1}(x)$
This is not always easy; if the function f(x) is analytic, the Lagrange inversion theorem may be used.
The symbol $f^{-1}$ is also used for the (set-valued) function associating to an element or a subset of the codomain the inverse image of this subset (or element, seen as a singleton).
http://physics.stackexchange.com/questions/22204/lorentz-invariance-of-the-3-1-decomposition-of-spacetime
# Lorentz invariance of the 3 + 1 decomposition of spacetime
Why is it allowed to decompose the spacetime metric into a spatial part plus a temporal part, like this for example
$$ds^2 ~=~ (-N^2 + N_aN^a)dt^2 + 2N_adtdx^a + q_{ab}dx^adx^b$$
($N$ is called lapse, $N_a$ is the shift vector and $q_{ab}$ is the spatial part of the metric.)
in order to arrive at a Hamiltonian formulation of GR? How is a breaking of Lorentz invariance avoided by doing this?
Sorry if this is a dumb question; maybe I should just read on to get it, but I'm curious about this now ... :-)
-
NB: really we should be discussing "diffeomorphism invariance" when working with general relativity. In the special case of Minkowski flat spacetime, the diffeomorphism invariance amounts to Lorentz invariance. (I'm a mathematician: I'm pathological about these terminological issues!) – Alex Nelson Jun 24 '12 at 16:16
## 2 Answers
Well, perhaps one should consider reading The Hamiltonian formulation of General Relativity: myths and reality for further mathematical details. But I would like to remind you that with most constrained Hamiltonian systems, the Poisson bracket of the constraints generates gauge transformations.
For General Relativity, foliating spacetime $\mathcal{M}$ as $\mathbb{R}\times\Sigma$ ends up producing diffeomorphism constraints $\mathcal{H}^{i}\approx0$ and a Hamiltonian constraint $\mathcal{H}\approx 0$. Note I denote weak equalities as $\approx$.
This was first considered in Peter G. Bergmann and Arthur Komar's "The coordinate group symmetries of general relativity", Int. J. Theor. Phys. 5 no. 1 (1972) pp 15-28.
Since you asked, I'll give you a few exercises to consider!
Exercise 1: Lie Derivative of the Metric
The Lie derivative of the metric along a vector $\xi^{a}$ is $$\mathcal{L}_{\xi}g_{ab}=g_{ac}\partial_{b}\xi^{c}+g_{bc}\partial_{a}\xi^{c}+\xi^{c}\partial_{c}g_{ab}$$ Show that this may be rewritten as $$\mathcal{L}_{\xi}g_{ab}=\nabla_{a}\xi_{b}+\nabla_{b}\xi_{a}$$ where $\nabla$ is the standard covariant derivative.
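(A sketch of one standard route through Exercise 1, added here for readers who want a hint; it is not part of the original exercise statement.) Expanding the covariant derivatives, $$\nabla_{a}\xi_{b}+\nabla_{b}\xi_{a}=\partial_{a}\xi_{b}+\partial_{b}\xi_{a}-2\Gamma^{c}{}_{ab}\xi_{c},$$ and lowering the index via $\xi_{b}=g_{bc}\xi^{c}$ gives $\partial_{a}\xi_{b}=g_{bc}\partial_{a}\xi^{c}+\xi^{c}\partial_{a}g_{bc}$ (and similarly with $a\leftrightarrow b$). Substituting $2\Gamma^{c}{}_{ab}\xi_{c}=\xi^{d}\left(\partial_{a}g_{db}+\partial_{b}g_{da}-\partial_{d}g_{ab}\right)$ then cancels the $\partial_{a}g_{bc}$ and $\partial_{b}g_{ac}$ terms and leaves $$\nabla_{a}\xi_{b}+\nabla_{b}\xi_{a}=g_{ac}\partial_{b}\xi^{c}+g_{bc}\partial_{a}\xi^{c}+\xi^{c}\partial_{c}g_{ab}=\mathcal{L}_{\xi}g_{ab}.$$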
Exercise 2: Constraints generate diffeomorphisms
Recall that the Hamiltonian and momentum constraints are $$\mathcal{H} = \frac{16\pi G}{\sqrt{q}}\left(\pi_{ij}\pi^{ij}-\frac{1}{2}\pi^{2}\right)-\frac{\sqrt{q}}{16\pi G}{}^{(3)}\!R,\quad\mathcal{H}^{i} = -2D_{j}\pi^{ij}$$ and $\pi^{ij}=\displaystyle\frac{1}{16\pi G}\sqrt{q}(K^{ij}-q^{ij}K)$ with $K_{ij}=\displaystyle\frac{1}{2N}(\partial_{t}q_{ij}-D_{i}N_{j}-D_{j}N_{i})$. Let $$H[\hat{\xi}] = \int d^{3}x\left[\hat{\xi}^{\bot}\mathcal{H}+\hat{\xi}^{i}\mathcal{H}_{i} \right]$$ Show that $H[\hat{\xi}]$ generates (spacetime) diffeomorphisms of $q_{ij}$, that is, $$\left\{H[\hat{\xi}],q_{ij}\right\}=(\mathcal{L}_{\xi}g)_{ij}$$ where $\mathcal{L}_{\xi}$ is the full spacetime Lie derivative and the spacetime vector field $\xi^{\mu}$ is given by $$\hat{\xi}^{\bot}=N\xi^{0}, \quad \hat{\xi}^{i}=\xi^{i}+N^{i}\xi^{0}$$ The parameters $\{\hat{\xi}^{\bot},\hat{\xi}^{i}\}$ are known as "surface deformation" parameters.
(Hint: use problem 1 and express the Lie derivative of the spacetime metric in terms of the ADM decomposition.)
Addendum: I'd like to give a few more references on the relation between the diffeomorphism group and the Bergmann-Komar group.
From the Hamiltonian formalism, there are a few references:
1. C.J. Isham, K.V. Kuchar "Representations of spacetime diffeomorphisms. I. Canonical parametrized field theories". Annals of Physics 164 2 (1985) pp 288–315
2. C.J. Isham, K.V. Kuchar "Representations of spacetime diffeomorphisms. II. Canonical geometrodynamics" Ann. Phys. 164 2 (1985) pp 316–333
The Lagrangian analysis of the symmetries are presented in:
1. Josep M Pons, "Generally covariant theories: the Noether obstruction for realizing certain space-time diffeomorphisms in phase space." Classical and Quantum Gravity 20 (2003) 3279-3294; arXiv:gr-qc/0306035
2. J.M. Pons, D.C. Salisbury, L.C. Shepley, "Gauge transformations in the Lagrangian and Hamiltonian formalisms of generally covariant theories". Phys. Rev. D 55 (1997) pp 658–668; arXiv:gr-qc/9612037
3. J. Antonio García, J. M. Pons "Lagrangian Noether symmetries as canonical transformations." Int.J.Mod.Phys. A 16 (2001) pp. 3897-3914; arXiv:hep-th/0012094
For more on the hypersurface deformation algebra, it was first really investigated in Hojman, Kuchar, and Teitelboim's "Geometrodynamics Regained" (Annals of Physics 96 1 (1976) pp.88-135).
-
Thanks @AlexNelson for this rich answer; since I've not yet penetrated deep enough into the Hamiltonian formulation of GR etc. it will keep me busy for a while. And it is a nice extension of things only briefly explained in the book I am reading :-) – Dilaton Jun 24 '12 at 8:42
@Nemo: you might want to also read Bojowald's book Canonical Gravity and Applications: Cosmology, Black Holes, and Quantum Gravity which is, perhaps, one of the better books on the ADM formalism. Another good book (with a numerical focus) is Baumgarte and Shapiro's Numerical Relativity. – Alex Nelson Jun 24 '12 at 15:07
@AlexNelson Quick question: did you omit a $^3R$ term out of the formula for the momentum constraint $\mathcal{H}$? – twistor59 Apr 13 at 12:05
@twistor59: hmm...lemme double check that later today. But for now, I've edited it in. – Alex Nelson Apr 15 at 16:43
As I understand it, you're right that splitting the metric into spatial and temporal parts does break the Lorentz symmetry of the metric. When you define a lapse function and shift vector, you start by foliating spacetime into a bunch of spacelike "slices," and these slices can be used to identify a particular reference frame.
However, the key is that you can do this in any manner you want. There's no one "special" choice of the lapse function and shift vector. The trick to showing that the Hamiltonian formulation is Lorentz invariant is making sure that the conclusions you get are equally valid no matter which foliation of spacetime you choose.
-
Thanks @DavidZaslavsky, I could have bet that there must be a quite "simple" explanation. Your answer makes a lot of sense to me and in addition it gives me some other keywords to look up :-) – Dilaton Mar 11 '12 at 9:08
Thanks, but do keep in mind that I'm not a GR expert, so other people might have something to add here - perhaps you could get a better answer from someone who is more familiar with this stuff. – David Zaslavsky♦ Mar 11 '12 at 11:31
Jep, additional (and more detailed ?) answers are still welcomed and appreciated of course. – Dilaton Mar 11 '12 at 11:48
http://math.stackexchange.com/questions/201477/what-is-the-easiest-way-to-see-langle-sigma-x-y-rangle-cong-langle-x-omeg
# What is the easiest way to see $\langle \Sigma X, Y \rangle\cong \langle X,\Omega Y\rangle$
Let $X$ and $Y$ be topological spaces. Let $\langle X,Y\rangle$ denote the homotopy classes of maps from $X$ to $Y$. The reduced suspension $\Sigma(-)$ is left adjoint to the loop space functor $\Omega(-)$. In other words, we have $$\langle \Sigma X, Y \rangle\cong \langle X,\Omega Y\rangle$$ for all $X$ and $Y$.
I am always confused about which side I should put $\Sigma$ on. What is the easiest or most intuitive way to think of this isomorphism? Is there a good way to memorize this formula?
-
I think of the suspension as being made-up of a bunch of "loops" that crash through $X$. So a map out of $\Sigma X$ to $Y$ is a map from $X$ to $\Omega Y$. – Ryan Budney Sep 24 '12 at 6:37
I agree with Ryan. Given a map $f:\Sigma X \rightarrow Y$, one can get a loop $f_{x}:\{x\}\times I\rightarrow Y$ for each $x\in X$, where $I$ is the unit interval appearing in the definition of reduced suspension. – M. K. Sep 24 '12 at 8:00
## 1 Answer
The easiest way is to think about the adjunctions you do know. I like to think about the Hom-Tensor adjunction. That is all the above is (in the category of pointed spaces). Let me know if you need me to elaborate.
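To spell out how (a standard sketch, added here; it assumes the usual point-set hypotheses on the pointed spaces): the reduced suspension is a smash product with the circle, $\Sigma X = X \wedge S^1$, and the loop space is a pointed mapping space, $\Omega Y = \mathrm{Map}_*(S^1, Y)$. The exponential law for the smash product, which is precisely the Hom-Tensor adjunction with $\wedge$ playing the role of the tensor product, gives $$\mathrm{Map}_*(X \wedge S^1, Y) \cong \mathrm{Map}_*(X, \mathrm{Map}_*(S^1, Y)),$$ and applying $\pi_0$ to both sides yields $\langle \Sigma X, Y \rangle \cong \langle X, \Omega Y \rangle$.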
-
I would like to see how tensor appears in this context. – M. K. Sep 24 '12 at 8:01
My problem is why the smash product plays the role of the tensor product in the category of pointed spaces. – M. K. Sep 25 '12 at 7:24
Thanks, Juan. I will think about it more. – M. K. Sep 26 '12 at 3:12
http://physics.stackexchange.com/questions/31808/schwinger-representation-of-operators-for-n-particle-2-mode-symmetric-states
# Schwinger representation of operators for n-particle 2-mode symmetric states
A bosonic (i.e. permutation-symmetric) state of $n$ particles in $2$ modes can be written as a homogenous polynomial in the creation operators, that is $$\left(c_0 \hat{a}^{\dagger n} + c_1 \hat{a}^{\dagger (n-1)} \hat{b}^{\dagger} +c_2 \hat{a}^{\dagger (n-2)} \hat{b}^{\dagger2} + \ldots+ c_n \hat{b}^{\dagger n}\right)|\Omega\rangle,$$ where $\hat{a}$ and $\hat{b}$ are the annihilation operators, $c_i$ are complex coefficients and $|\Omega\rangle$ is the vacuum state.
Alternatively, one can express the same state as a state in the fully permutation symmetric subspace of $n$ qubits (equivalently - as a state of the maximal total angular momentum, that is, $n/2$).
The question is the following - for a general symmetric operator $$\sum_{perm} (\mathbb{I})^{\otimes (n-n_x-n_y-n_z)} \otimes (\sigma^x)^{\otimes n_x} \otimes (\sigma^y)^{\otimes n_y} \otimes (\sigma^z)^{\otimes n_z},$$ what is its equivalent in terms of creation and annihilation operators?
Partial solution:
For the simplest cases (i.e. $(n_x,n_y,n_z)\in\{(1,0,0),(0,1,0),(0,0,1)\}$ we get the following: $$\sum_{i=1}^n \sigma^x_i \cong \hat{a}^\dagger \hat{b} + \hat{b}^\dagger \hat{a}$$ $$\sum_{i=1}^n \sigma^y_i \cong -i\hat{a}^\dagger \hat{b} + i \hat{b}^\dagger \hat{a}$$ $$\sum_{i=1}^n \sigma^z_i \cong \hat{a}^\dagger \hat{a} - \hat{b}^\dagger \hat{b}$$ (AFAIR it is called the Schwinger representation). It can be checked directly on Dicke states, i.e. (${n \choose k}^{-1/2}\hat{a}^{\dagger (n-k)} \hat{b}^{\dagger k}|\Omega\rangle$).
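As a concrete instance of that check (my addition): on the Dicke state $|n-k,k\rangle \propto \hat{a}^{\dagger (n-k)} \hat{b}^{\dagger k}|\Omega\rangle$ one has $$\left(\hat{a}^\dagger \hat{a} - \hat{b}^\dagger \hat{b}\right)|n-k,k\rangle = (n-2k)\,|n-k,k\rangle,$$ which matches $\sum_{i=1}^n \sigma^z_i$ acting on the symmetric $n$-qubit state with $k$ spins down, since each up spin contributes $+1$ and each down spin $-1$, for a total of $(n-k)-k = n-2k$.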
For the general case it seems that we get $$: \left( \hat{a}^\dagger \hat{b} + \hat{b}^\dagger \hat{a} \right)^{n_x} \left( - i \hat{a}^\dagger \hat{b} + i \hat{b}^\dagger \hat{a} \right)^{n_y} \left( \hat{a}^\dagger \hat{a} - \hat{b}^\dagger \hat{b}\right)^{n_z} :,$$ where $:\mathrm{expr}:$ stands for normal ordering, i.e. putting the creation operators on the left and the annihilation operators on the right. However, it's neither checked (beyond correlators for 1-2 particles) nor proven.
Of course one can construct the operators recursively, e.g. $$\sum_{i\neq j}^n \sigma^x_i \otimes \sigma^y_j = \left(\sum_{i=1}^n \sigma^x_i\right)\left( \sum_{i=1}^n \sigma^y_i\right) - i \sum_{i=1}^n \sigma^z_i \\ \cong \left( \hat{a}^\dagger \hat{b} + \hat{b}^\dagger \hat{a} \right)\left( -i\hat{a}^\dagger \hat{b} + i \hat{b}^\dagger \hat{a} \right) - i \left( \hat{a}^\dagger \hat{a} - \hat{b}^\dagger \hat{b} \right),$$ but the question is on a general closed-form result.
-
## 1 Answer
Previous answer completely rewritten
It seems to me that your hypothesis is true, up to a constant correction: $$\sum_{\textstyle{\pi: \{1,2,\ldots,n\} \to \{\mathbb{I}, \sigma_x, \sigma_y, \sigma_z\} \atop \forall i \in \{x,y,z\}:\ \mathrm{card}(\pi^{-1}(\sigma_i))=n_i}} \!\!\bigotimes_{i=1}^n\ \ \pi(i) \cong \frac1{n_x! n_y! n_z!} \mathopen{:} \left( a^\dagger b + b^\dagger a \right)^{n_x} \left( - i a^\dagger b + i b^\dagger a \right)^{n_y} \left( a^\dagger a - b^\dagger b\right)^{n_z} \mathclose{:},$$ and that this can be proven using multivariate induction.
For generic $n_x, n_y, n_z$, let $$[\![n_x, n_y, n_z]\!]$$ symbolically denote the operator on the left-hand side, in the qubit picture.
The induction step may be based upon the observation $$[\![1,0,0]\!][\![n_x, n_y, n_z]\!] = (n_x+1)[\![n_x+1, n_y, n_z]\!] - i(n_y+1)[\![n_x, n_y+1, n_z-1]\!] + i(n_z+1)[\![n_x, n_y-1, n_z+1]\!] + (n-n_x-n_y-n_z+1)[\![n_x-1, n_y, n_z]\!].$$ (It's a matter of simple combinatorics to find the multipliers.) There are perfectly analogous relations for multiplying by $[\![0,1,0]\!]$ or $[\![0,0,1]\!]$ from the left, too. Defining the double square brackets to be zero when either of the components is negative, this set of relations holds universally and the sufficient set of anchor points is the Wigner representation of $[\![1,0,0]\!]$, $[\![0,1,0]\!]$, $[\![0,0,1]\!]$, where we prove the equivalence to the Fock picture formulas directly.
Now one would just prove either of the relations (arguing by symmetry) to hold for the right-hand side, using re-ordering of the products after a multiplication by $$[\![1,0,0]\!] \cong a^\dagger b + ab^\dagger$$ and $$n \cong a^\dagger a + b^\dagger b$$ from the left. It requires a certain amount of work but should be doable. (I tried but ran into some kind of a numerical error.)
With a future edit on my mind, I will try to finish the proof of the induction step to see if I am right about the additional factor.
Assuming the relation is true, I doubt that there is any more "closed form" than the expansion $$\sum_{j=0}^{n_x} \sum_{k=0}^{n_y} \sum_{l=0}^{n_z} \frac{(-1)^{k+n_z-l} i^{n_y}}{j!k!l!(n_x-j)!(n_y-k)!(n_z-l)!} (a^\dagger)^{j+k+l} a^{n_x+n_y-j-k+l} (b^\dagger)^{n_x+n_y+n_z-j-k-l} b^{j+k+n_z-l}$$ as this does not seem to have any factorisation except for the one employing the normal reordering brackets.
-
Yeah, I generally feel it. However, the quest is to actually prove it (or provide a counterexample). – Piotr Migdal Sep 19 '12 at 15:29
http://cdsmith.wordpress.com/2009/11/02/monads-from-two-perspectives/
November 2, 2009 / cdsmith
Just random scribblings. This is pretty standard stuff, but I worked this out for someone recently, so I’m typing it out.
## Monads For Mathematicians
If you talk to a category theorist and ask for a definition of a monad, it looks something like this (taken from Wikipedia): a monad is a triple $(F, \eta, \mu)$ consisting of an endofunctor $F$, and natural transformations $\eta : 1 \to F$ and $\mu : F^2 \to F$, satisfying the following laws:
• $\mu \circ F \mu = \mu \circ \mu F$
• $\mu \circ F \eta = \mu \circ \eta F = 1_F$
Enough mumbo-jumbo. It's worth taking a second to think about what that means. The two natural transformations just give us a single way to treat any of the infinite family of objects $F^n X$ (for any integer $n \geq 0$) as just $F X$. We can flatten them, in essence. If there are too few $F$'s, you apply $\eta$. If there are too many, you apply $\mu$. The monad laws above just guarantee that it doesn't matter at what level (number of applications of the functor) you work; you'll get the same result.
It’s definitely time for an example. Suppose you have a fixed monoid $M$. Now, in the category $\mathbf{Set}$, there is a functor $F: X \mapsto X \times M$, which just takes any set to the cartesian product of itself and the monoid – i.e., $F X$ consists of values from the underlying set, plus a “side value” from the monoid. We can also define natural transformations $\eta(x) = (x,1_M)$, and $\mu((x, a), b) = (x, a \cdot b)$ where multiplication is taken from the monoid structure. In other words, if you don’t give me a side value (that’s $F^0 X$), then I’ll take the side value to be the identity from the monoid. If you give me several side values in a given order (that’s $F^n X, n > 1$) I’ll multiply them in the monoid to decide what the side value is.
Verifying the monad laws, stated above, is fairly straightforward. Really, the tricky part is just to understand the meaning of the pieces. Generally speaking, for a natural transformation $\tau$, think of $F \tau$ as applying the transformation on the left hand side of a tuple. Then think of $\tau F$ as being the specialization of $\tau$ to values of type $F X$. Here’s what happens:
• If $\mu : ((x,a),b) \mapsto (x, a \cdot b)$, then $F \mu : (((x,a),b),c) \mapsto ((x, a \cdot b), c)$. Then $\mu \circ F \mu : (((x,a),b), c) \mapsto (x, a \cdot b \cdot c)$.
• If $\mu : ((x,a),b) \mapsto (x, a \cdot b)$, then $\mu F : (((x,a),b),c) \mapsto ((x, a), b \cdot c)$. Then $\mu \circ \mu F : (((x,a),b),c) \mapsto (x, a \cdot b \cdot c)$. By associativity in the monoid, this is the same as the above, satisfying the first law.
• If $\eta : x \mapsto (x, 1_M)$, then $F \eta : (x, a) \mapsto ((x, 1_M), a)$ and $\eta F : (x,a) \mapsto ((x,a), 1_M)$. Verifying the second monad law is left as an exercise (just write out what the function composition means, and apply $\mu$); a worked version follows below.
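(Filling in that exercise, for readers who want it spelled out:) $\mu \circ F \eta$ sends $(x, a) \mapsto ((x, 1_M), a) \mapsto (x, 1_M \cdot a) = (x, a)$, while $\mu \circ \eta F$ sends $(x, a) \mapsto ((x, a), 1_M) \mapsto (x, a \cdot 1_M) = (x, a)$; both equal $1_F$ by the identity law of the monoid.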
## Monads for Haskell Programmers
In Haskell, of course, we prefer to think of monads differently, in terms of a type constructor and two functions “return” and “(>>=)”, satisfying the following laws:
```return a >>= f = f a
m >>= return = m
(m >>= f) >>= g = m >>= (\x -> f x >>= g)```
The “return” function takes any value, and wraps it in the monad. The second operation, often read as “bind”, is a little trickier. It takes one value wrapped in the monad, and a function from a plain value of the first type to a second value wrapped in the monad, and produces the second value wrapped in the monad. Here are a few examples:
1. Containers can be seen as monads. In this case, if I have a crate of thingamabobs, and each of those thingamabobs can be unpacked into a crate of whatsits, then I may as well (by unpacking the thingamabobs as needed) have a very large crate full of whatsits.
2. Famously, I/O actions are monads. If I have an I/O action that produces a character string, and for any character string, I could get an I/O action that produces a number… then I can string them together into a single interaction producing a number. (The resulting I/O action will get the character string, compute the second action, and perform it to get the number, all in one go.)
A lot of this corresponds exactly to the mathematical definition above. The type constructor corresponds with an endofunctor (in the category of Haskell types), and the “return” function with the natural transformation $\eta$. That is, “return” takes a value of any possible type T, and turns it into a value of type M T (M being our monad type constructor). The idea of a natural transformation is partially captured by polymorphic functions in Haskell. (Partially because we’d also need to verify that the transformation commutes with morphisms in the category; i.e., that “return (f x)” is the same as “fmap f (return x)” for all values of f and x.)
It turns out that $\mu$ is actually the standard library operation called “join”, which is defined in terms of the above, as follows:
`join a = a >>= id`
In other words, if I have a value of type (M (M T)), then I can turn it into a value of type (M T), by grabbing the encapsulated value of type (M T) and, well, doing nothing at all with it.
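Two quick checks of that flattening behaviour (my examples, not the author’s):

```
import Control.Monad (join)

flatList :: [Int]
flatList = join [[1, 2], [3]]      -- [1,2,3]

flatMaybe :: Maybe Int
flatMaybe = join (Just (Just 5))   -- Just 5
```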
There’s actually one more piece we’re missing. In mathematical language, a functor simultaneously maps objects to other objects in a category, and also morphisms to other morphisms. So to recover a functor from the Haskell definition of a monad, we also need to decide how to map functions from type X to type Y into other functions from type M X into type M Y. It turns out we can recover this from the two operations, return and bind, that are standard for monads.
`fmap f x = x >>= (return . f)`
In other words, if I have a function $f : A \to B$, then I can easily obtain a function $M f : M A \to M B$ as follows. Use the bind operation to grab the encapsulated value, apply the function f to it, and then wrap it back up again with the return operation.
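A quick sanity check (my code, assuming the monad instances obey the laws above) that this derived map agrees with the usual fmap:

```
fmapViaBind :: Monad m => (a -> b) -> m a -> m b
fmapViaBind f x = x >>= (return . f)

checksOut :: Bool
checksOut = fmapViaBind (+ 1) (Just 2) == fmap (+ 1) (Just 2)
         && fmapViaBind (* 2) [1, 2, 3] == fmap (* 2) [1, 2, 3]
```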
In the other direction, given an appropriate “fmap” and “join”, the bind operation is easy to implement as well:
`a >>= f = join (fmap f a)`
Basically, the idea here is that because M is a functor, a function $f : A \to M B$ can be turned into a function $M f : M A \to M (M B)$. Then one can just apply the join ($\mu$) operation on the result to obtain the desired result.
## The Equivalence Between the Two
It is a little tedious, but not really difficult, to show that these two structures are actually completely equivalent. In other words, using the definitions and correspondences above, if you have a Haskell monad satisfying the monad laws, then you also have a mathematical monad for the category of Haskell types. Similarly, if you have a mathematical monad for the category of Haskell types, then you also have a Haskell monad. In fact, as many people probably noticed, my first example of a monad at the beginning of this article is none other than Haskell’s Writer monad.
If you haven’t done it before, it’s really worth doing.
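Since the post closes by naming the Writer monad, here is a minimal sketch of it (my code, not the author’s; modern GHC also requires the Functor and Applicative instances):

```
newtype Writer w a = Writer { runWriter :: (a, w) }

instance Functor (Writer w) where
  fmap f (Writer (x, a)) = Writer (f x, a)

instance Monoid w => Applicative (Writer w) where
  pure x = Writer (x, mempty)                    -- eta: attach the identity
  Writer (f, a) <*> Writer (x, b) = Writer (f x, a <> b)

instance Monoid w => Monad (Writer w) where
  Writer (x, a) >>= f = let Writer (y, b) = f x
                        in  Writer (y, a <> b)   -- mu: multiply side values
```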
http://mathoverflow.net/questions/41483?sort=newest
## Proving boundedness of continuous images of [0,1] in WKL0
I've been reading about reverse mathematics (mostly on Wikipedia), and I had been thinking that I understood how to prove the equivalences to WKL0 and ACA0 mentioned in its article. However, I now realize that my idea of how WKL0 proves that every continuous real valued function on [0,1] is bounded doesn't work. My idea would have started "since f is continuous, f is locally bounded near each point, so there is an open cover of [0,1] such that f is bounded on each member of the cover", but I can't figure out how to express "f is bounded on the interval with rational endpoints (q,r)" as a Sigma_1 property, and I can't figure out how to get around this issue, either.
How does WKL0 prove that every continuous real valued function on [0,1] is bounded?
## 1 Answer
Here is a sketch of how the proof might go.
If $f:[0,1]\rightarrow\mathbb R$ is continuous but not bounded then the sets $S_n=[0,1]\setminus f^{-1}[(-n,n)]$ are closed with $S_n\ne\emptyset$ for all $n\in\mathbb N$. According to the definition of continuous function the sets $f^{-1}[(-n,n)]$ are represented as $\Sigma_1$ sets of (endpoints of) rational intervals ($\Sigma_1$ definable in the model). So the closed dyadic rational intervals intersecting $S_n$ for $n\in\mathbb N$ can be represented as an infinite $\Pi_1$ (or by a standard trick, equivalently $\Delta_1$) tree, which by Weak König's Lemma must have a path, so $\cap_n S_n\ne\emptyset$ which is absurd.
How does the definition of continuity show that the sets $f^{-1}[(-n,n)]$ are represented as the union of $\Sigma_1$ collections of open intervals with rational endpoints? (I'm also not sure how it's provable that those collections are sets, but that's not needed.) – Ricky Demer Oct 8 2010 at 20:43
I think that's basically the definition of continuity. Since every object in reverse math that belongs to the model must be represented by an element or a subset of $\mathbb N$, one cannot just use the set-theoretical definition of a function. You're right, a set being $\Sigma_1$ definable in the model does not imply the set is in the model, just that there is a $\Sigma_1$ formula with parameters from the model that defines the set. – Bjørn Kjos-Hanssen Oct 8 2010 at 21:40
The definition of continuity gives "for all x in [0,1], there exists an open interval with rational endpoints such that x is in the interval and for all z in [0,1], if z is in the interval then -n < f(z) < n". I don't see how that can be used to show that "there exists a sequence of open intervals with rational endpoints such that for all x in [0,1], if -n < f(x) < n, then x is in one of the intervals". – Ricky Demer Oct 8 2010 at 22:35
"Continuous function" has a special definition in reverse mathematics which does not seem to mentioned in Wikipedia yet, but which I allude to in my answer. For in-depth study of this subject one should get hold of Simpson's book Subsystems of Second Order Arithmetic. – Bjørn Kjos-Hanssen Oct 8 2010 at 23:19
I can't vote up comments on an iPhone, apparently, but Bjørn has the right point. In reverse mathematics a continuous function is accompanied by its coded representation. That coded representation is key to defining sets relative to the continuous function. – Carl Mummert Oct 9 2010 at 13:22
http://mathhelpforum.com/pre-calculus/161405-fraction-lowest-terms-print.html
# Fraction in Lowest terms
• October 28th 2010, 07:55 PM
I-Think
Fraction in Lowest terms
Prove $\frac{n^3+2n}{n^4+3n^2+1}$ is a fraction in its lowest terms
Rewriting
$\frac{n(n^2+2)}{(n^2+1)^2+n^2}$
The numerator is a product, so we shall consider its factors separately
$\frac{n^2+2}{(n^2+1)^2+n^2}$
Let $z=n^2+1$
$\frac{z+1}{z^2+z-1}$. Numerator and denominator share no common factors
Consider
$\frac{n}{(n^2+1)^2+n^2}$
This also has no common factors
Hence
$\frac{n^3+2n}{n^4+3n^2+1}$
has no common factors
QED
Is this proof valid?
• October 29th 2010, 05:43 AM
emakarov
I believe it is correct, but whether it is acceptable to an instructor depends on your course. You may have studied an algorithm for proving similar claims, and in that case you only need to go over its steps without justifying them. However, for this to look like a proof to an outsider, it should include not only computational steps, but also invocation of some lemmas that prove that the computations indeed show what you claim they do.
Quote:
Originally Posted by I-Think
Rewriting
$\frac{n(n^2+2)}{(n^2+1)^2+n^2}$
The numerator is a product, so we shall consider them separately
Here one should say that $n$ and $n^2+2$ are irreducible (why?) and so, if some polynomial divides $n(n^2+2)$, then it is either $n$ or $n^2+2$, using something like Euclid's lemma for polynomials. Therefore, it is sufficient to check whether the denominator is divisible by these two polynomials. Then one should say a word or two why $z+1$ does not divide $z^2+z-1$ and $n$ does not divide $(n^2+1)^2+n^2$.
• October 29th 2010, 05:44 AM
TheCoffeeMachine
I think that's correct. (Yes) Here is another way to approach it:
Write $n^3+2n = n(n^2+2)$. Since $n^2+2$ is irreducible over $\mathbb{Q}$, either $n$ or $n^2+2$ must be a factor of $f(n) = n^4+3n^2+1$ for the fraction not to be in its lowest form. It's obvious that $n$ isn't a factor of $f(n)$. For $n^2+2$ to be a factor, we must have $n^4+3n^2+1 = (n^2+2)(an^2+bn+c)$, for some $a, b, c \in\mathbb{R}$. Expanding the RHS we have $n^4+3n^2+1 = an^4+bn^3+(c+2a)n^2+2bn+2c$. Comparing the coefficients we have (from the coefficient of $n^4$) $a = 1$, and (from the coefficient of $n^2$) $c+2a = 3$, so $c = 3-2a = 1$. But (from the coefficient of $n^0$) $2c = 1$, so $c = \frac{1}{2}$. A contradiction.
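As a cross-check (an addition, not in the original thread): the Euclidean algorithm gives the conclusion directly, since $n^4+3n^2+1 = n(n^3+2n) + (n^2+1)$, then $n^3+2n = n(n^2+1) + n$, and $n^2+1 = n\cdot n + 1$, so $\gcd(n^4+3n^2+1,\, n^3+2n) = 1$ and the fraction is in lowest terms for every $n$.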
http://stats.stackexchange.com/questions/33848/is-the-theory-of-minimum-variance-unbiased-estimation-overemphasized-in-graduate
# Is the theory of minimum variance unbiased estimation overemphasized in graduate school?
Recently I was very embarrassed when I gave an off the cuff answer about minimum variance unbiased estimates for parameters of a uniform distribution that was completely wrong. Fortunately I was immediately corrected by cardinal and Henry with Henry providing the correct answers for the OP.
This got me thinking though. I learned the theory of best unbiased estimators in my graduate math stat class at Stanford some 37 years ago. I have recollections of the Rao-Blackwell theorem, the Cramér-Rao lower bound and the Lehmann-Scheffé theorem. But as an applied statistician I don't think very much about UMVUEs in my daily life, whereas maximum likelihood estimation comes up a lot.
Why is that? Do we overemphasize the UMVUE theory too much in graduate school? I think so. First of all unbiasedness is not a crucial property. Many perfectly good MLEs are biased. Stein shrinkage estimators are biased but dominate the unbiased MLE in terms of mean square error loss. It is a very beautiful theory (UMVUE estimation), but very incomplete and I think not very useful. What do others think?
(+1) I concur that this would make a good question for the main site and will upvote it. It is somewhat subjective, so it might be best as a CW question. (Also, there is no reason to be embarrassed.) – cardinal Aug 7 '12 at 13:17
I do not think that, in general, this sort of estimation is overemphasised. I remember that my professors used to focus more on examples where UMVUE are "silly". People tend to use point estimators belonging to popular theories, for the sake of safety, but there is a complete theory of estimating equations. Some professors focus on UMVUE because they are a good source of difficult problems for homework. I think that bias reduction is a more popular and useful theory nowadays than finding the UMVUE (which not always exist). – user10525 Aug 7 '12 at 14:59
We see a lot of questions here on UMVUE I guess because they make good homework problems. Maybe this is more of a problem with undergraduate and masters level statistics programs than with PhD programs. – Michael Chernick Aug 7 '12 at 15:08
Well, UMVU estimation is a classic idea, so maybe it should be taught for that reason? And it is a good starting point for discussing/criticizing criteria such as unbiasedness! Just because they are not much used in practice is, in itself, no reason not to teach them. – kjetil b halvorsen Aug 8 '12 at 4:21
The emphasis is likely to vary across time and departments. My department presents the material in the first year math stat course, but after that it is gone, so I couldn't reasonably say that it is overemphasized (even in the PhD inference course, it typically isn't taught, in favor of more time with Bayesian and minimax estimators, admissibility, and multivariate estimation), even though I wish there was more of an emphasis on why bias is a useful thing and hence why unbiased estimation is an unnecessarily extreme paradigm. – guy Aug 10 '12 at 4:18
## 1 Answer
We know that
If $X_1,X_2,\dots,X_n$ is a random sample from $\mathrm{Poisson}(\lambda)$, then for any $\alpha \in (0,1)$, $T_\alpha =\alpha \bar X+(1-\alpha)S^2$ is an unbiased estimator (UE) of $\lambda$.
Hence there exist infinitely many UEs of $\lambda$. Now a question occurs: which of these should we choose? This is why we single out the UMVUE. Unbiasedness alone is not a compelling property, but being the UMVUE is a good one. Still, it is not unconditionally the best.
If $X_1,X_2,\dots,X_n$ is a random sample from $N(\mu, \sigma^2)$, then the minimum-MSE estimator of $\sigma^2$ of the form $T_\alpha =\alpha S^2$, with $(n-1)S^2=\sum_{i=1}^{n}(X_i-\bar X)^2$, is $\frac{n-1}{n+1}S^2=\frac{1}{n+1}\sum_{i=1}^{n}(X_i-\bar X)^2$. But it is biased, that is, it is not the UMVUE, even though it is best in terms of MSE.
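To see where the factor $\frac{n-1}{n+1}$ comes from (a standard computation, added here for completeness): for normal samples $\operatorname{Var}(S^2)=\frac{2\sigma^4}{n-1}$ and the bias of $\alpha S^2$ is $(\alpha-1)\sigma^2$, so $$\operatorname{MSE}(\alpha S^2)=\alpha^2\frac{2\sigma^4}{n-1}+(\alpha-1)^2\sigma^4,$$ and setting the derivative with respect to $\alpha$ to zero gives $\frac{2\alpha}{n-1}+\alpha-1=0$, that is, $\alpha=\frac{n-1}{n+1}$.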
Note that the Rao-Blackwell theorem says that to find the UMVUE we can concentrate only on those UEs which are functions of a sufficient statistic; that is, the UMVUE is the estimator with minimum variance among all UEs which are functions of a sufficient statistic. Hence the UMVUE is necessarily a function of a sufficient statistic.
The MLE and the UMVUE are each good from some point of view, but we can never say that one of them is uniformly better than the other. In statistics we deal with uncertain and random data, so there is always scope for improvement: we may find an estimator better than both the MLE and the UMVUE.
I don't think we overemphasize UMVUE theory in graduate school; that is purely my personal view. Graduate school is a learning stage, so a graduate student needs to build a good foundation in the UMVUE alongside other estimators.
I think any valid theory of inference is good to know. While unbiasedness can be a good property, bias is not necessarily bad. When emphasis is put on UMVUEs there can be a tendency to attribute "optimality" to it. But there may not be any very good estimators in the class of unbiased estimators. Accuracy is important and it involves both bias and variance. What is better about the MLE is that there are conditions under which it can be shown to be asymptotically efficient. – Michael Chernick Aug 11 '12 at 13:48
http://math.stackexchange.com/questions/82402/proof-a-cup-b-omega-longleftrightarrowac-subseteq-b/82404
# Proof: $A\cup B = \Omega \Longleftrightarrow{A^c \subseteq B}$
I need to prove $$A\cup B = \Omega \Longleftrightarrow{A^c \subseteq B}$$
How can I do it? This is what I've got so far, but I don't know if it is a valid demonstration:
$A\cup B = \Omega \Longleftrightarrow{x\in{A}\vee x\in{B}}\Longleftrightarrow{x\notin{A}\rightarrow{x\in{B}}}\Longleftrightarrow{A^c\subseteq B}$
Is this correct? If not, how can I prove it? Thanks in advance.
There is the problem of the missing quantifiers, identified by Arturo Magidin. Also, if you have any doubts, don't do things in one line. You need to prove two things: $A \cup B=\Omega\rightarrow A^c\subseteq B$, and the implication that runs the other way. Prove them separately; you will retain better control of the logic. Now for the first direction. Suppose that $A\cup B=\Omega$. Let $x\in A^c$. So $x$ is not in $A$. Since $x\in A\cup B$, $x$ must be in $B$. I leave the other direction to you. – André Nicolas Nov 15 '11 at 18:17
## 1 Answer
The second clause should really be "For all $x$, $x\in A\lor x\in B$" (or $\forall x(x\in A\lor x\in B)$), and the quantifier should be repeated in the next one; otherwise, it looks correct.
(Except that you are proving, not proofing).
http://math.stackexchange.com/questions/33917/when-does-boundedness-imply-totally-boundedness-in-a-metric-space/33921
# When does boundedness imply total boundedness in a metric space
For a subset of a metric space, quoted from Wikipedia:
Total boundedness implies boundedness. For subsets of $\mathbb{R}^n$ the two are equivalent.
I was wondering what are some conditions on a metric space, more general than being $\mathbb{R}^n$, under which boundedness implies total boundedness?
Thanks and regards!
Compactness certainly does the trick. – JSchlather Apr 19 '11 at 22:14
## 1 Answer
A metric space is totally bounded if and only if its completion is compact. A subset of a complete metric space is totally bounded if and only if its closure is compact. A metric space $X$ has the property that its bounded subsets are totally bounded if and only if the completion of $X$ has the property that its closed and bounded subsets are compact, a property sometimes called the Heine-Borel property.
Montel spaces are examples of these.
Here's an open access article by Williamson and Janos you may find interesting. For example, Theorem 1 (which they credit to a 1937 paper of Vaughan) says that a metrizable, $\sigma$-compact, locally compact topological space has a compatible metric with the Heine-Borel property.
http://mathematica.stackexchange.com/questions/tagged/numerics+vector
# Tagged Questions
### How to fix errors in Gram-Schmidt process when using random vectors?
I first make a function to get a random vector on unit sphere in a swath around the equator. That is what the parameter $\gamma$ controls; if $\gamma = 1/2$, the vectors can be chosen anywhere on the ...
http://quant.stackexchange.com/questions/2954/is-there-any-measure-that-is-a-non-trivial-combination-of-vwap-and-twap?answertab=active
# Is there any measure that is a non-trivial combination of VWAP and TWAP?
Is there any measure that is a non-trivial combination of VWAP and TWAP? For example:
\begin{equation} \textrm{VTWAP} = \frac{\textrm{VWAP}+\textrm{TWAP}}{2} \end{equation}
I'm thinking about something like this:
\begin{equation} \textrm{VTWAP}_{\textrm{exp}}(\alpha,T) = \frac{\sum{P_i * V_i * e^{-i*\alpha}}}{\sum{V_i * e^{-i*\alpha}}} \end{equation}
where $P_i$ is the price at time $T-i+1$ and $V_i$ is the volume at time $T-i+1$.
Influence of past volumes is exponentially decayed with factor $\alpha$.
We can see that $\textrm{VTWAP}_{\textrm{exp}}(0,T)=\textrm{VWAP}(T)$.
I think a good place to start analysing this problem is to find out what types of TWAPs already exist.
Second part of the question:
Are there any mathematical requirements or equations that measures like TWAP and VWAP should meet?
Something like that, but more advanced: $\textrm{VWAP}(T+1)=\textrm{VWAP}(T)$ for $V_T=0$, which states that there was no trade at time $T$.
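For concreteness, here is a small sketch of the proposed $\textrm{VTWAP}_{\textrm{exp}}$ (my code, not from the thread; `vtwapExp` is an illustrative name):

```
-- `pvs` is a list of (price, volume) pairs in chronological order,
-- so the last pair is time T and i = 1 is the most recent trade.
vtwapExp :: Double -> [(Double, Double)] -> Double
vtwapExp alpha pvs = num / den
  where
    decayed = [ (p * w, w)
              | ((p, v), i) <- zip (reverse pvs) [1 :: Int ..]
              , let w = v * exp (negate alpha * fromIntegral i) ]
    num = sum (map fst decayed)
    den = sum (map snd decayed)
-- With alpha = 0 this reduces to the ordinary VWAP, as noted above.
```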
Not sure whether such a measure exists; looking at the definitions, they are two different concepts. What kind of applications could such a measure have? – JohnAndrews Apr 13 '12 at 0:45
## 1 Answer
In fact if you make the time change $$t\rightarrow \int_{\tau\leq t} V_\tau d\tau$$ a TWAP is a VWAP.
So just define the FWAP associated to a transform $F$ (you should require $F$ to be an adapted stochastic process if you want to use models): $$t\rightarrow \int_{\tau\leq t} F(\tau) d\tau$$
You will have a new benchmark.
The real question is "what do you want to capture?"
You can also see a VWAP as the expectation of a volume at price density $d\mu(P)$: $${\rm VWAP} = \mathbb{E}_\mu (P)$$
In such a case just define a GWAP (associating a measure to a measure) as: $${\rm GWAP} = \mathbb{E}_G(\mu) (P)$$
For $G$ transforming a measure into the uniform one over its support: a GWAP is a TWAP (and for G being identity, it is a VWAP).
Nice contributions last couple of days! @lehalle – Quant Guy Apr 19 '12 at 5:47
You are welcome. I am surprised that this stackexchange is not more active. It is far more useful than a forum or wikipedia. – lehalle Apr 19 '12 at 10:12
http://math.stackexchange.com/questions/293725/homogeneous-linear-equation-general-solution/293732
# Homogeneous Linear Equation General Solution
I’m having some difficulty understanding the solution to the following differential equation problem.
Find a general solution to the given differential equation
$4y’’ – 4y’ + y = 0$
The steps I’ve taken in solving this problem were to first find the auxiliary equation and then factor to find the roots. I listed the steps below:
$4r^2 – 4r + 1$
$(2r – 1) \cdot (2r-1)$
$\therefore r = \frac{1}{2} \text{is the root}$
Given this information, I supposed that the general solution to the differential equation would be as follows:
$y(t) = c_{1} \cdot e^{\frac{1}{2} t}$
But when I look at the back of my textbook, the correct answer is supposed to be
$y(t) = c_{1} \cdot e^{\frac{1}{2} t} + c_{2} \cdot te^{\frac{1}{2} t}$
Now I know that understanding the correct solution has something to do with linear independence, but I’m having a hard time getting a deep understanding of what’s going on. Any help would be appreciated in understanding the solution.
## 1 Answer
The story behind what is going on here is exactly the same one we always see when searching for a basis of a vector space $V$ over a field $K$. There, we look for a set of linearly independent vectors which can generate the whole space. In the space of all solutions of an ODE, we do the same. For any homogeneous linear ODE with constant coefficients, there is a routine way to find the solutions, and you did it right for this one. When the auxiliary equation has a single root of multiplicity two, one solution is, as you noted, $y_1(t)=\exp(0.5t)$, but we don't yet have another solution. So we should find another solution, independent of the first one; the number of solutions in a fundamental set equals the order of the ODE, which is two here. It means that we need one solution $y_2$ such that the set $$\{\exp(0.5t),y_2(t)\}$$ is a fundamental set of solutions. For doing this you can use the method of Reduction of Order to find $$y_2(t)=t\exp(0.5t)$$
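For completeness, here is the reduction-of-order computation (my addition, not in the original answer). Substituting $y_2(t)=v(t)e^{t/2}$ gives $y_2'=(v'+\tfrac{1}{2}v)e^{t/2}$ and $y_2''=(v''+v'+\tfrac{1}{4}v)e^{t/2}$, so $$4y_2''-4y_2'+y_2=4v''e^{t/2}=0.$$ Hence $v''=0$, so $v(t)=c_1+c_2t$, recovering $y_2(t)=te^{t/2}$ as the second, independent solution.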
I like your thought process - it helps to show the "why", not just "what" the answer is! +1 – amWhy Feb 4 at 1:03
http://en.wikipedia.org/wiki/Square_wave
# Square wave
Sine, square, triangle, and sawtooth waveforms
A square wave is a non-sinusoidal periodic waveform (which can be represented as an infinite summation of sinusoidal waves), in which the amplitude alternates at a steady frequency between fixed minimum and maximum values, with the same duration at minimum and maximum. The transition from minimum to maximum is instantaneous for an ideal square wave; this is not realisable in physical systems. Square waves are often encountered in electronics and signal processing. Its stochastic counterpart is a two-state trajectory. A similar but not necessarily symmetrical wave, with different durations at minimum and maximum, is called a rectangular wave (of which the square wave is a special case).
## Origins and uses
Square waves are universally encountered in digital switching circuits and are naturally generated by binary (two-level) logic devices. They are used as timing references or "clock signals", because their fast transitions are suitable for triggering synchronous logic circuits at precisely determined intervals. However, as the frequency-domain graph shows, square waves contain a wide range of harmonics; these can generate electromagnetic radiation or pulses of current that interfere with other nearby circuits, causing noise or errors. To avoid this problem in very sensitive circuits such as precision analog-to-digital converters, sine waves are used instead of square waves as timing references.
In musical terms, they are often described as sounding hollow, and are therefore used as the basis for wind instrument sounds created using subtractive synthesis. Additionally, the distortion effect used on electric guitars clips the outermost regions of the waveform, causing it to increasingly resemble a square wave as more distortion is applied.
Simple two-level Rademacher functions are square waves.
## Examining the square wave
(odd) harmonics of a square wave with 1000 Hz
Using Fourier expansion with cycle frequency f over time t, we can represent an ideal square wave as an infinite series of the form
$\begin{align} x_{\mathrm{square}}(t) & {} = \frac{4}{\pi} \sum_{k=1}^\infty {\sin{\left (2\pi (2k-1) ft \right )}\over(2k-1)} \\ & {} = \frac{4}{\pi}\left (\sin(2\pi ft) + {1\over3}\sin(6\pi ft) + {1\over5}\sin(10\pi ft) + \cdots\right ) \end{align}$
The ideal square wave contains only components of odd-integer harmonic frequencies (of the form 2π(2k-1)f). Sawtooth waves and real-world signals contain all integer harmonics.
A curiosity of the convergence of the Fourier series representation of the square wave is the Gibbs phenomenon. Ringing artifacts in non-ideal square waves can be shown to be related to this phenomenon. The Gibbs phenomenon can be prevented by the use of σ-approximation, which uses the Lanczos sigma factors to help the sequence converge more smoothly.
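As an illustration (a sketch of my own, not part of the article), here is the truncated Fourier sum above in Haskell, with an optional Lanczos sigma factor to tame the Gibbs ringing; `squarePartial` is an illustrative name:

```
-- Partial Fourier sum of the ideal square wave with m odd harmonics;
-- when useSigma is True, each term is damped by sinc(k/m).
squarePartial :: Int -> Bool -> Double -> Double -> Double
squarePartial m useSigma f t = (4 / pi) * sum
  [ damp k * sin (2 * pi * fromIntegral (2 * k - 1) * f * t)
           / fromIntegral (2 * k - 1)
  | k <- [1 .. m] ]
  where
    damp k | useSigma  = sinc (fromIntegral k / fromIntegral m)
           | otherwise = 1
    sinc x = if x == 0 then 1 else sin (pi * x) / (pi * x)
```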
An ideal mathematical square wave changes between the high and the low state instantaneously, and without under- or over-shooting. This is impossible to achieve in physical systems, as it would require infinite bandwidth.
Animation of the additive synthesis of a square wave with an increasing number of harmonics
Square-waves in physical systems have only finite bandwidth, and often exhibit ringing effects similar to those of the Gibbs phenomenon, or ripple effects similar to those of the σ-approximation.
For a reasonable approximation to the square-wave shape, at least the fundamental and third harmonic need to be present, with the fifth harmonic being desirable. These bandwidth requirements are important in digital electronics, where finite-bandwidth analog approximations to square-wave-like waveforms are used. (The ringing transients are an important electronic consideration here, as they may go beyond the electrical rating limits of a circuit or cause a badly positioned threshold to be crossed multiple times.)
The ratio of the high period to the total period of any rectangular wave is called the duty cycle. A true square wave has a 50% duty cycle - equal high and low periods. The average level of a rectangular wave is also given by the duty cycle, so by varying the on and off periods and then averaging it is possible to represent any value between the two limiting levels. This is the basis of pulse width modulation.
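Explicitly, a rectangular wave with levels $V_{hi}$ and $V_{lo}$ and duty cycle $d$ has average level $d\,V_{hi}+(1-d)\,V_{lo}$, so sweeping $d$ from 0 to 1 sweeps the average across the whole range between the two levels.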
(Audio: 5 seconds of square wave at 1 kHz.)
## Characteristics of imperfect square waves
As already mentioned, an ideal square wave has instantaneous transitions between the high and low levels. In practice, this is never achieved because of physical limitations of the system that generates the waveform. The times taken for the signal to rise from the low level to the high level and back again are called the rise time and the fall time respectively.
If the system is overdamped, then the waveform may never actually reach the theoretical high and low levels, and if the system is underdamped, it will oscillate about the high and low levels before settling down. In these cases, the rise and fall times are measured between specified intermediate levels, such as 5% and 95%, or 10% and 90%. The bandwidth of a system is related to the transition times of the waveform; there are formulas allowing one to be determined approximately from the other.
## Other definitions
The square wave in mathematics has many definitions, which are equivalent except at the discontinuities:
It can be defined as simply the sign function of a periodic function, an example being a sinusoid:
$\ x(t) = \sgn(\sin[t])$
$\ v(t) = \sgn(\cos[t])$
which will be 1 when the sinusoid is positive, −1 when the sinusoid is negative, and 0 at the discontinuities. The sinusoid can be replaced by any other periodic function, like the modulo operation:
$\ x(t) = \sgn\left(\mod\left(t-\frac{\lambda}{2},\lambda\right)-\frac{\lambda}{2}\right)$
It can also be defined with respect to the Heaviside step function u(t) or the rectangular function ⊓(t):
$\ x(t) = \sum_{n=-\infty}^{+\infty} \sqcap(t - nT) = \sum_{n=-\infty}^{+\infty} \left( u \left[t - nT + {1 \over 2} \right] - u \left[t - nT - {1 \over 2} \right] \right)$
T is 2 for a 50% duty cycle. It can also be defined in a piecewise way:
$\ x(t) = \begin{cases} 1, & |t| < T_1 \\ 0, & T_1 < |t| \leq {1 \over 2}T \end{cases}$
when
$\ x(t + T) = x(t)$
## See also
• List of periodic functions
• Rectangular function
• Pulse wave
• Sine wave
• Triangle wave
• Sawtooth wave
• Waveform
• Sound
• Multivibrator
• Ronchi ruling, a square-wave stripe target used in imaging.
http://mathoverflow.net/questions/5351/whats-an-example-of-a-space-that-needs-the-hahn-banach-theorem
## What’s an example of a space that needs the Hahn-Banach Theorem?
The Hahn-Banach theorem is rightly seen as one of the Big Theorems in functional analysis. Indeed, it can be said to be where functional analysis really starts. But it's one of those "there exists ..." theorems that doesn't give you any information as to how to find the object; indeed, it's quite usual when teaching it to introduce the separable case first (which is reasonably constructive) before going on to the full theorem. So its real use is in situations where just knowing the functional exists is enough: if you can write down a functional that does the job then there's no need for the Hahn-Banach Theorem.
So my question is: what's a good example of a space where you need the Hahn-Banach theorem?
Ideally the space itself shouldn't be too difficult to express, and normed vector spaces are preferable to non-normed ones (a good non-normed vector space would still be nice to know but would be of less use pedagogically).
Edit: It seems wrong to accept one of these answers as "the" answer so I'm not going to do that. If forced, I would say that $\ell^\infty$ is the best example: it's probably the easiest non-separable space to think about and, as I've learnt, it does need the Hahn-Banach theorem.
Incidentally, one thing that wasn't said, and which I forgot about when asking the question, was that such an example is by necessity going to be non-separable since countable Hahn-Banach is provable merely with induction.
There are other classes of Banach spaces (e.g., uniformly convex spaces) for which Hahn-Banach can also be proved constructively. As it happens, $\ell^\infty$ is one of the first non-examples to come to mind for those other classes as well. So although it's certainly not "the" answer, it's a pretty good one from many points of view. – Mark Meckes Nov 20 2009 at 19:03
## 7 Answers
I'd like to summarize the answer that has developed from Eric Schechter's book, via Mark Meckes, plus the remark from Gerald Edgar. Since it's not really my answer, I'm making this a community answer.
1. The Hahn-Banach theorem is really the Hahn-Banach axiom. Like the axiom of choice, Hahn-Banach cannot be proved from ZF. What Hahn and Banach proved is that AC implies HB. The converse is not true: Logicians have constructed axiom sets that contradict HB, and they have constructed reasonable axioms strictly between AC and HB. So a version of Andrew's question is, is there a natural Banach space that requires the HB axiom? For the question, let's take HB to say that every Banach space $X$ embeds in its second dual `$X^{**}$`.
2. As Schechter explains, Shelah showed the relative consistency of ZF + DC + BP (dependent choice plus Baire property). As he also explains, these axioms imply that `$(\ell^\infty)^* = \ell^1$`. This is contrary to the Hahn-Banach theorem, as explained in the next point. A striking way to phrase the conclusion is that $\ell^1$ and its dual $\ell^\infty$ become reflexive Banach spaces.
3. `$c_0$` is the closed subspace of $\ell^\infty$ consisting of sequences that converge to 0. The quotient `$\ell^\infty/c_0$` is an eminently natural Banach space in which the norm of a sequence is $\max(\lim \sup,-\lim \inf)$. (Another example is $c$, the subspace of convergent sequences. In $\ell^\infty/c$, the norm is half of $\lim \sup - \lim \inf$.) The inner product between $\ell^1$ and `$c_0$` is non-degenerate, so in Shelah's axiom system, `$(\ell^\infty/c_0)^* = 0$`. Without the Hahn-Banach axiom, the Banach space `$\ell^\infty/c_0$` need not have any non-zero bounded functionals at all.
There is another logical tidbit here.
Hahn-Banach (HB) is strictly weaker than the Axiom of Choice (AC), meaning (under the assumption of consistency, as usual) that in ZF, AC implies HB but not the other way around. Another intermediate theorem of functional analysis is the Krein-Milman theorem: "A compact convex nonempty set in a locally convex space has an extreme point." Call it KM. In ZF, AC implies KM but not the other way around. And the interesting point is that, taken together, they do give AC. So in ZF, HB+KM implies and is implied by AC.
I particularly like this answer, not because it gives an example but because it answers something closely related that I'd pondered about the relationship of AC to HB. – Andrew Stacey Nov 17 2009 at 14:40
I'm not sure exactly what you have in mind by "need the Hahn-Banach theorem". One standard example of something pretty concrete for which Hahn-Banach in some form is needed is to show that there are linear functionals on $\ell^\infty$ which are not represented by elements of $\ell^1$. It's a standard exercise to show that $\ell^1$ acts as the dual of sequences converging to 0; since that is a proper closed subspace of $\ell^\infty$, Hahn-Banach produces further linear functionals on $\ell^\infty$.
This is closely related to the subject of Banach limits.
Couldn't you just write down a functional on $\ell^\infty$ that clearly wasn't in $\ell^1$? – Andrew Stacey Nov 13 2009 at 15:27
I don't think so - try it and see how long it takes you to give up in frustration. More seriously, I believe that Eric Schechter's book "Handbook of Analysis and its Foundations" contains a proof that you can't do this - some form of AC is necessary. For some closely related issues, take a look at Greg Kuperberg's question "Basis of l^infinity" and his own answer to it. – Mark Meckes Nov 13 2009 at 15:54
I do not know of any that can be written down explicitly. The canonical example begins by defining, on the subspace of convergent sequences, the functional that assigns to each sequence its limit, and then extends it to $\ell^\infty$ by the Hahn-Banach theorem. The extended functional cannot be represented by a sequence in $\ell^1$. – Julián Aguirre Nov 13 2009 at 16:04
Actually it wasn't my question, I just retagged it. – Greg Kuperberg Nov 13 2009 at 18:48
And if $l^\infty$ has no continuous linear functionals other than those from $l^1$, then the interesting Banach space $l^\infty/c_0$, although non-separable, has no continuous linear functionals at all (except $0$). – Gerald Edgar Nov 13 2009 at 21:47
The Gelfand-Naimark theorem says that any $C^\ast$-algebra is isomorphic to a norm closed $\ast$-subalgebra of $B(H)$ for a suitable Hilbert space $H$. In order to prove this theorem, it is imperative that our $C^\ast$-algebra have states, which uses the Hahn-Banach theorem.
One can also use the Hahn-Banach theorem instead of Tychonoff's theorem to construct $\beta X$, the Stone-Cech compactification of $X$. This is closely related to @Mark Meckes' answer.
If $G$ is a group, Bavard showed that the stable commutator length vanishes on $[G,G]$ if and only if $G$ admits no nontrivial "homogeneous quasimorphisms". These functions (on the space of group $1$-boundaries) are constructed using the Hahn-Banach theorem, but are usually very hard (or impossible) to write down explicitly.
Another example: let $M$ be a triangulated manifold, and suppose we orient every edge of every simplex in such a way that the orientations come from a "total order" on each triangle. We would like to assign positive "lengths" to each edge in such a way that on each triangle, the sum of the values on the "short edges" is equal to the value on the "long edge" (where "short" and "long" are defined according to the orientations). The (finite-dimensional!) Hahn-Banach theorem tells us we can do this if and only if every oriented loop in the 1-skeleton is homologically essential; i.e. "homological positivity" can be improved to "chain positivity". Of course the finite-dimensional Hahn-Banach theorem is just a psychological crutch, but versions of this construction in other categories need the "real" Hahn-Banach theorem (applied to certain spaces of de Rham currents).
I don't fully understand what you're asking, but the one proof I know of the fact that (up to topological isomorphism) the only complete Archimedean fields are $\mathbb{C}$ and $\mathbb{R}$ uses the Hahn-Banach theorem.
The simplest example I know is the real numbers as a vector space over the rationals. The Hahn-Banach theorem asserts the existence of $\mathbb{Q}$-linear functionals other than scalar multiples of the identity, for instance $f$ with $f(1)=1, f(\pi)=0$.
Does the Hahn-Banach theorem allow you to construct Hamel bases? – Qiaochu Yuan Nov 14 2009 at 4:59
The existence of a Hamel basis for every vector space is equivalent to the axiom of choice, and strictly stronger than Hahn Banach. Even if Hahn Banach is logically strong enough to imply that there is a Hamel basis for the reals over the rationals, it is not true that Hahn-Banach "asserts" such a thing. There are various forms of Hahn-Banach, but none I have seen are relevant to this example. – Jonas Meyer Feb 19 2010 at 8:53
http://math.stackexchange.com/questions/217982/finding-the-power-of-a-matrix/217984
# Finding the power of a matrix
I am trying to solve this question
$D$ is the matrix given by $$D = \begin{pmatrix} h & 0 \\ 0 & k \end{pmatrix},$$ where $h = 36$ and $k = -3$. What is the lower-right entry in the matrix $D^n$, where $n = 6$?
This is my attempt, but I am stuck half-way as it just doesn't seem right
So rather than trying to compute a few powers, you filled in random numbers and began to compute the eigenvalues? Strange... – rschwieb Oct 21 '12 at 13:04
Oh, well that clears things up a bit: you were trying to diagonalize a diagonal matrix. Also strange... The usefulness of diagonalization here is that if you can diagonalize the matrix, then the powers of the matrix are easy to compute, because diagonal matrices are easy to exponentiate. Anyhow, the lesson to be learned here is to definitely spend less time randomly applying things you see in the text, and more time looking at the problem itself. – rschwieb Oct 21 '12 at 13:06
There's also an error in the computation of the eigenvalues: $(-3 - \lambda) = 0$ if and only if $\lambda = -3$ and not $\lambda = 3$. – TMM Oct 21 '12 at 13:18
## 3 Answers
What do you need the determinant for? It's way easier: check (by induction, for example) that
$$\begin{pmatrix}h&0\\0&k\end{pmatrix}^n=\begin{pmatrix}h^n&0\\0&k^n\end{pmatrix}$$
no matter what $\,h,k,n\,$ are
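(A note added here, not part of the original answer: in the problem at hand this gives the lower-right entry of $D^6$ as $k^6 = (-3)^6 = 729$.)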
clearly this is way easier to do! – JackyBoi Oct 21 '12 at 13:03
For a diagonal matrix, the entries of $D^n$ are precisely the $n$th powers of the entries of $D$.
And the eigenvalues of a diagonal matrix are precisely its diagonal elements, by definition.
How does this give the $n^{\text{th}}$ power of $D$? – robjohn♦ Oct 21 '12 at 13:59
Well, it does not, but in my opinion it is useful for the OP to know, given that he started by computing eigenvalues of the diagonal matrix. Anyway, having a better understanding does no harm, don't you think? – arkadiy Oct 21 '12 at 18:26
Then this might better be a comment to the question. – robjohn♦ Oct 21 '12 at 20:53
http://math.stackexchange.com/questions/89913/showing-that-a-group-g-as-many-right-cosets-as-there-are-left-cosets
# Showing that a group $G$ has as many right cosets as there are left cosets.
Let $G$ be a group. I want to show that there's a bijection $f$ from the set of $G$'s right cosets to the set of $G$'s left cosets, such that $f(Ha) = a^{-1}H$. I thought about it for a while, but I'm not sure where to begin. Any hints?
Can you show that $aH=bH$ $\Leftrightarrow$ $b^{-1}a\in H$? Can you show that $Hc=Hd$ $\Leftrightarrow$ $cd^{-1}\in H$? I think this could help. – Martin Sleziak Dec 9 '11 at 15:50
## 1 Answer
You need this: $g_1H=g_2H\Leftrightarrow Hg_1^{-1}=Hg_2^{-1}$. Then your $f$ is a bijection.
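Spelling this out (an addition, using the criteria $aH=bH\Leftrightarrow b^{-1}a\in H$ and $Hc=Hd\Leftrightarrow cd^{-1}\in H$ from the comment above): $g_1H=g_2H\Leftrightarrow g_2^{-1}g_1\in H$, while $Hg_1^{-1}=Hg_2^{-1}\Leftrightarrow g_1^{-1}(g_2^{-1})^{-1}=g_1^{-1}g_2\in H$; since $H$ is closed under inverses, $g_2^{-1}g_1\in H\Leftrightarrow g_1^{-1}g_2\in H$, so the two conditions are equivalent.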
Thanks, that's very helpful. I didn't know this. – mahin Dec 9 '11 at 15:59
That shows that your function is well-defined and 1-1. It is onto because $g\to g^{-1}$ is a bijection of $G$. – Thomas Andrews Dec 9 '11 at 16:55
http://terrytao.wordpress.com/tag/local-lie-groups/
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
## Associativity of the Baker-Campbell-Hausdorff formula
29 October, 2011 in expository, math.GR, math.RA | Tags: Baker-Campbell-Hausdorff formula, Lie algebras, Lie groups, local Lie groups | by Terence Tao | 5 comments
Let ${{\mathfrak g}}$ be a finite-dimensional Lie algebra (over the reals). Given two sufficiently small elements ${x, y}$ of ${{\mathfrak g}}$, define the right Baker-Campbell-Hausdorff-Dynkin law
$\displaystyle R_y(x) := x + \int_0^1 F_R( \hbox{Ad}_x \hbox{Ad}_{ty} ) y \ dt \ \ \ \ \ (1)$
where ${\hbox{Ad}_x := \exp(\hbox{ad}_x)}$, ${\hbox{ad}_x: {\mathfrak g} \rightarrow {\mathfrak g}}$ is the adjoint map ${\hbox{ad}_x(y) := [x,y]}$, and ${F_R}$ is the function ${F_R(z) := \frac{z \log z}{z-1}}$, which is analytic for ${z}$ near ${1}$. Similarly, define the left Baker-Campbell-Hausdorff-Dynkin law
$\displaystyle L_x(y) := y + \int_0^1 F_L( \hbox{Ad}_{tx} \hbox{Ad}_y ) x\ dt \ \ \ \ \ (2)$
where ${F_L(z) := \frac{\log z}{z-1}}$. One easily verifies that these expressions are well-defined (and depend smoothly on ${x}$ and ${y}$) when ${x}$ and ${y}$ are sufficiently small.
We have the famous Baker-Campbell-Hausdorff-Dynkin formula:
Theorem 1 (BCH formula) Let ${G}$ be a finite-dimensional Lie group over the reals with Lie algebra ${{\mathfrak g}}$. Let ${\log}$ be a local inverse of the exponential map ${\exp: {\mathfrak g} \rightarrow G}$, defined in a neighbourhood of the identity. Then for sufficiently small ${x, y \in {\mathfrak g}}$, one has
$\displaystyle \log( \exp(x) \exp(y) ) = R_y(x) = L_x(y).$
See for instance these notes of mine for a proof of this formula (it is for ${R_y}$, but one easily obtains a similar proof for ${L_x}$).
In particular, one can give a neighbourhood of the identity in ${{\mathfrak g}}$ the structure of a local Lie group by defining the group operation ${\ast}$ as
$\displaystyle x \ast y := R_y(x) = L_x(y) \ \ \ \ \ (3)$
for sufficiently small ${x, y}$, and the inverse operation by ${x^{-1} := -x}$ (one easily verifies that ${R_x(-x) = L_x(-x) = 0}$ for all small ${x}$).
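As a consistency check (my addition, not part of the original post), expanding either law for small ${x, y}$ recovers the familiar leading terms of the BCH series

$\displaystyle x \ast y = x + y + \frac{1}{2}[x,y] + \frac{1}{12}[x,[x,y]] + \frac{1}{12}[y,[y,x]] + \cdots,$

which is a convenient way to sanity-check any implementation of (3).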
It is tempting to reverse the BCH formula and conclude (the local form of) Lie’s third theorem, that every finite-dimensional Lie algebra is isomorphic to the Lie algebra of some local Lie group, by using (3) to define a smooth local group structure on a neighbourhood of the identity. (See this previous post for a definition of a local Lie group.) The main difficulty in doing so is in verifying that the definition (3) is well-defined (i.e. that ${R_y(x)}$ is always equal to ${L_x(y)}$) and locally associative. The well-definedness issue can be trivially disposed of by using just one of the expressions ${R_y(x)}$ or ${L_x(y)}$ as the definition of ${\ast}$ (though, as we shall see, it will be very convenient to use both of them simultaneously). However, the associativity is not obvious at all.
With the assistance of Ado’s theorem, which places ${{\mathfrak g}}$ inside the general linear Lie algebra ${\mathfrak{gl}_n({\bf R})}$ for some ${n}$, one can deduce both the well-definedness and associativity of (3) from the Baker-Campbell-Hausdorff formula for ${\mathfrak{gl}_n({\bf R})}$. However, Ado’s theorem is rather difficult to prove (see for instance this previous blog post for a proof), and it is natural to ask whether there is a way to establish these facts without Ado’s theorem.
After playing around with this for some time, I managed to extract a direct proof of well-definedness and local associativity of (3), giving a proof of Lie’s third theorem independent of Ado’s theorem. This is not a new result by any means, (indeed, the original proofs of Lie and Cartan of Lie’s third theorem did not use Ado’s theorem), but I found it an instructive exercise to work out the details, and so I am putting it up on this blog in case anyone else is interested (and also because I want to be able to find the argument again if I ever need it in the future).
http://mathoverflow.net/questions/75381?sort=oldest
## Relative version of Symplectic Thom conjecture.
Ozsváth and Szabó proved the Symplectic Thom conjecture [Annals of Mathematics, 151 (2000), 93-124]. It states: an embedded symplectic surface in a closed symplectic 4-manifold is genus-minimizing in its homology class.

Does a suitable relative version of the above hold true? More specifically, suppose we start with an embedded symplectic surface $\Sigma$ with boundary in a symplectic 4-manifold with contact-type boundary; is it then true that $\Sigma$ is genus-minimizing in its (relative) homology class?
-
## 2 Answers
I think this must be a consequence of the version of the slice-Bennequin inequality proved by Mrowka and Rollin (but I might be wrong). Perhaps the argument also requires the boundary to have the "strong filling" property (Stein near the boundary).
Given a Legendrian knot $L$ in (to start with) the 3-sphere $S^3$, that inequality says $$2 g_*(L) - 1 \ge tb(L) - r(L).$$ Every transverse knot $K$ is a push-off of a Legendrian approximation $L$, and the self-linking number of $K$ is related to the invariants of $L$ by $$sl(K) = tb(L) - r(L).$$ So the slice-Bennequin inequality says $$2 g_*(K) - 1 \ge sl(K).$$ Unless I'm mistaken, this inequality is an equality for the case of a transverse knot bounding a symplectic surface in the 4-ball.
All this generalizes to the case of a symplectic 4-manifold $X$ that is Stein near its (contact) boundary. Given a Legendrian knot $L$ in the boundary, and a homology class $s$ of surfaces in $X$ with boundary $L$, one has invariants $tb(L,s)$ and $r(L,s)$. Then there is an inequality (as in Mrowka-Rollin), $$2 g_*(L,s) - 1 \ge tb(L,s) - r(L,s),$$ where $g_*(L,s)$ is the smallest possible genus of a surface in the class $s$ with boundary $L$. In terms of a transverse push-off $K$, one again has $$2 g_*(K,s) - 1 \ge sl(K,s).$$ If $K$ is actually the transverse boundary of a symplectic surface, then one has equality, this being (I think) just the adjunction formula in a relative version.
The Mrowka-Rollin version of the result can today be deduced from the existence of concave fillings (caps). We may assume from the outset that $tb(L,s)$ is positive: if it is not, we may sum in a bunch of Legendrian trefoils until it is. Now enlarge $X$ by first adding a 2-handle along $L$ (standard contact surgery) and then closing it up with a concave filling. The inequality one wants is just the adjunction inequality applied to the homology class formed from $s$ and the 2-disk in the core of the handle.
So with less notation, the answer to the original question is supposed to be: take a Legendrian approximation to the transverse knot and alter things so as to make $tb$ positive; then add a 2-handle and a cap to get a closed symplectic manifold. Then apply the adjunction inequality to the homology class formed from the symplectic surface and the Lagrangian 2-disk.
-
Two other thoughts: (1) You probably don't need the "strong" contact condition at the boundary in order to add the contact handle, so just contact-type boundary may be fine here. (2) Another take on the argument might be that one can apply the symplectic Thom conjecture to surfaces that are only semi-symplectic (the form restricted to the surface is non-negative), provided that the homology class has positive self-intersection. This is because you can alter the symplectic form in the tubular neighborhood, to make it positive on the surface. – Peter Kronheimer Oct 10 2011 at 15:36
This is a natural question, and I'm a bit startled to realise that, in this generality, I can't locate a reference for it.
To frame it precisely, let's suppose that $X$ is a compact symplectic 4-manifold with convex contact-type boundary $Y$, and ask whether a compact symplectic surface $\Sigma$ in $X$, transverse to $Y$ and bounding a link $L\subset Y$ transverse to the contact structure, minimises minus the Euler characteristic among surfaces bounding $L$ and homologous to $\Sigma$ relative to $L$.
There's lots in the literature about Bennequin-type inequalities for Legendrian links, notably Mrowka-Rollin's adjunction inequality: http://arxiv.org/abs/math/0410559. But when considering boundaries of symplectic surfaces it seems more natural to take $L$ transverse to the contact structure.
A sufficient condition. Suppose that we can cap $X$ to a closed symplectic manifold $Z$, and cap $\Sigma$ inside $Z$ to a closed symplectic surface $S$. It then follows from the symplectic Thom conjecture in $Z$ that $\Sigma$ is genus-minimizing in the sense I indicated.
A famous example is Kronheimer-Mrowka's proof (see http://www.math.harvard.edu/~kronheim/thom1.pdf) of the Milnor conjecture about the slice genus of algebraic links, in which one completes the (blown up) 4-ball to the (blown up) projective plane and applies the Thom conjecture there. [Experts will spot an anachronism in this summary.]
We do know that any $X$ can be closed up symplectically; see, for instance, Eliashberg's article http://arxiv.org/pdf/math/0311459. It seems plausible that every pair $(X,\Sigma)$, where the boundary of $\Sigma$ is a transverse link, can be closed to a pair $(Z,S)$. Perhaps Eliashberg's argument can be refined to accomplish this.
-
I am not quite sure that every pair $(X,\Sigma)$, where the boundary of $\Sigma$ is a transverse link, can be closed to a pair $(Z,S)$ as you indicate. I seem to have a potential counter-example to it (but I am not sure of it either! :) ); I will post it here if it turns out to be, actually, a counter-example. – Dheeraj Kulkarni Sep 17 2011 at 17:58
Ah, so you're serious! ;) Well, if you have an obstruction to finding such a cap, that would be interesting. – Tim Perutz Sep 17 2011 at 19:48
http://mathoverflow.net/questions/120819?sort=oldest
## open problems in Seiberg-Witten Theory on 4-Manifolds
What are some of the open problems in Seiberg-Witten theory on 4-manifolds? I tried googling, but couldn't find any resources. Pointers to a survey or review would be welcome.
-
Since you are interested in Ncg, I want to make the remark that a former fellow PhD student of mine - Vadim Alekseev - has studied a non-commutative generalization of Seiberg-Witten invariants, i.e., defined in the context of spectral triples. Here is the link to his PhD thesis: webdoc.sub.gwdg.de/diss/2011/alekseev – Marc Palm Feb 5 at 5:29
this was precisely the thing I was wondering about after glossing through Nicolaescu's book on Seiberg-Witten theory on 4-manifolds – Koushik Feb 5 at 11:35
## 3 Answers
One basic structural problem about the SW invariants is the question of simple type: suppose that $X$ is a simply connected 4-manifold with $b^+>1$, and $\mathfrak{s}$ a $\mathrm{Spin}^c$-structure such that $SW_X(\mathfrak{s})\neq 0$. Must $\mathfrak{s}$ arise from an almost complex structure? This is true when $X$ is symplectic (Taubes in "$SW\Rightarrow Gr$") but open in general.
The 11/8-conjecture (that for a closed Spin 4-manifold $X$ of signature $\sigma$, one has $b_2(X)\geq 11|\sigma|/8$) is open. SW theory has yielded strong results in this direction (Furuta's 10/8 theorem); proving the conjecture via SW theory is very hard but might be possible.
Essentially all of the fundamental questions about the classification of smooth 4-manifolds, or about the existence and uniqueness of symplectic structures on them, are open. We do not know how much Seiberg-Witten theory sees. For instance:
Suppose $X$ is a closed 4-manifold with an almost complex structure $J$. Let $w\in H^2(X;\mathbb{R})$ be a class with $w^2>0$. Is there a symplectic form $\omega$ with compatible almost complex structure homotopic to $J$ and symplectic class $w$? The "Taubes constraints" are the following necessary conditions, which constrain the SW invariants in terms of $w$ and $c=c_1(TX,J)$ (see e.g. Donaldson's survey on the SW equations): (i) $SW(\mathfrak{s}_{can})=\pm 1$ (the sign can be made precise) where $\mathfrak{s}_{can}$ is the $\mathrm{Spin}^c$-structure arising from $J$; (ii) $-c\cdot w\geq 0$; and (iii) if $SW(\mathfrak{s})\neq 0$ then $|c_1(\mathfrak{s})\cdot [\omega]| \leq -c \cdot [\omega]$, with equality iff $\mathfrak{s}$ is isomorphic to $\mathfrak{s}_{can}$ or its conjugate. The question is: if $X$ is simply connected, are these sufficient conditions? (Example: Fintushel-Stern knot surgery on an elliptically fibered K3 surface along a knot with monic Alexander polynomial.)
-
I omitted to say that the Taubes constraints apply when $b^+>1$, and the question I mentioned at the end concerns that case. – Tim Perutz Feb 15 at 2:36
One basic problem is determining the relationship between Seiberg-Witten invariants and Donaldson invariants of $4$-manifolds. Witten himself proposed the precise relationship between the two in the original paper Monopoles and 4-Manifolds, but as far as I know the relationship has not been proven in general. Witten's conjecture has been proven in many cases, however. See the answer to this question for a good overview of the current status of this problem.
-
It's just amazing to see Witten's works. Thanks for the link. – Koushik Feb 5 at 3:25
It might be useful to generalize a theorem of Donaldson and Sullivan, that the Donaldson invariants are defined for quasi-conformal 4-manifolds, to the category of Seiberg-Witten invariants. More generally, one would like to know which smooth invariants of 4-manifolds are defined for quasi-conformal 4-manifolds.
-
http://mathhelpforum.com/differential-geometry/177362-analytic-functions.html
1. ## Analytic functions
The question
Let z = x + iy, with x and y real
If $f(z) = x^3 + iy^3$, find the value of the derivative of f at every point z where the derivative exists. Where is f analytic?
My attempt:
I used the Cauchy-Riemann equations as follows,
$u_x = 3x^2$
$u_y = 0$
$v_x = 0$
$v_y = 3y^2$
The equations hold when $3x^2 = 3y^2$, i.e. when $|x| = |y|$, which is a pair of intersecting lines through the origin of the x-y plane. So the function has a derivative only on these intersecting lines. If my understanding of analyticity is correct, this function is analytic nowhere, since no point has an epsilon neighbourhood throughout which the function is differentiable.
My issue is, how do I find the value of the derivative? Do I use the definition of a derivative, or is there an easy way to find it using my result above?
Thanks.
2. Originally Posted by Glitch
The question
If my understanding of analyticity is correct, this function is analytic nowhere since there's no epsilon neighbourhood where the function is differentiable.
Right: according to the definition of an analytic function, $f$ must be complex differentiable on a region $R\subset\mathbb{C}$.
Just restrict h to real values when applying the limit definition of the derivative. Since you know that f is differentiable where |x|=|y|, the real-restricted limit must equal the complex limit. We end up getting
$f'(x+iy)=u_x(x,y)+iv_x(x,y)$.
By the way, the reason we know f is differentiable on those lines is because all the first-order partial derivatives (1) satisfy the Cauchy-Riemann equations; and (2) are continuous. You mentioned (1) but don't forget that (2) is also required.
4. Thanks guys. So should I be using the definition of the derivative, or is there a faster way to compute it using the partial derivatives?
5. Well it's generally true that $f'(x+iy)=u_x(x,y)+iv_x(x,y)$. So if all you need to do is compute it, then your task is easy. But if you need to prove it, then the definition of the derivative will have to do, unless you have a theorem at your disposal.
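For completeness, here is a quick check of the above discussion with a computer algebra system (my addition, not part of the thread; sympy is assumed):

```python
# A quick sympy check (an addition, not from the thread) for
# f(z) = x**3 + i*y**3: the Cauchy-Riemann equations and the value
# f'(x+iy) = u_x + i*v_x on the lines |x| = |y|.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u, v = x**3, y**3

ux, uy = sp.diff(u, x), sp.diff(u, y)
vx, vy = sp.diff(v, x), sp.diff(v, y)

print(sp.Eq(ux, vy))               # Eq(3*x**2, 3*y**2): CR holds iff |x| = |y|
print(sp.Eq(uy, -vx))              # True: this equation always holds
print(ux + sp.I * vx)              # derivative where it exists: 3*x**2
print((ux + sp.I * vx).subs(y, x)) # on the line y = x: 3*x**2
```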
6. Thank you again. That saves me a lot of time!
http://en.wikisource.org/wiki/Page:A_Treatise_on_Electricity_and_Magnetism_-_Volume_1.djvu/246
Page:A Treatise on Electricity and Magnetism - Volume 1.djvu/246
For the alternate images $P_{1},P_{2},P_{3}$ are ranged round the circle at angular intervals equal to $2AOB$, and the intermediate images $Q_{1},Q_{2},Q_{3}$ are at intervals of the same magnitude. Hence, if $2AOB$ is a submultiple of $2\pi$, there will be a finite number of images, and none of these will fall within the angle $AOB$. If, however, $AOB$ is not a submultiple of $\pi$, it will be impossible to represent the actual electrification as the result of a finite series of electrified points.
If $AOB=\frac{\pi}{n}$, there will be $n$ negative images $Q_{1},Q_{2},$ &c., each equal and of opposite sign to $P$, and $n-1$ positive images $P_{2}, P_{3}$, &c., each equal to $P$, and of the same sign.
The angle between successive images of the same sign is $\tfrac{2\pi}{n}$. If we consider either of the conducting planes as a plane of symmetry, we shall find the positive and negative images placed symmetrically with regard to that plane, so that for every positive image there is a negative image in the same normal, and at an equal distance on the opposite side of the plane.
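A modern aside, not part of Maxwell's text: in the cross-sectional plane the full image system is easy to enumerate. With the two grounded planes along angles $0$ and $\pi/n$ and a unit charge at angle $\theta$ (taken here at unit distance from the edge, an assumption of the sketch), repeated reflection places the same-sign images at angles $\theta + 2\pi k/n$ and the opposite-sign images at $-\theta + 2\pi k/n$:

```python
# Sketch (not part of the original text): enumerate the system of images for a
# unit point charge between two grounded half-planes meeting at angle pi/n.
# Repeated reflection gives n positive images (angles theta + 2*pi*k/n,
# including the charge itself) and n negative images (angles -theta + 2*pi*k/n).
import math

def images(n, theta, r=1.0):
    """Return a list of (x, y, sign) for the 2n image charges."""
    pts = []
    for k in range(n):
        for sign, ang in ((+1, theta + 2 * math.pi * k / n),
                          (-1, -theta + 2 * math.pi * k / n)):
            pts.append((r * math.cos(ang), r * math.sin(ang), sign))
    return pts

for x, y, s in images(n=3, theta=0.3):
    print(f"{'+' if s > 0 else '-'}q at ({x:+.3f}, {y:+.3f})")
```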
If we now invert this system with respect to any point, the two planes become two spheres, or a sphere and a plane intersecting at an angle $\tfrac{\pi}{n}$, the influencing point $P$ being within this angle.
The successive images lie on the circle which passes through $P$ and intersects both spheres at right angles.
To find the position of the images we may make use of the principle that a point and its image are in the same radius of the sphere, and draw successive chords of the circle beginning at $P$ and passing through the centres of the two spheres alternately.
To find the charge which must be attributed to each image, take any point in the circle of intersection, then the charge of each image is proportional to its distance from this point, and its sign is positive or negative according as it belongs to the first or the second system.
166.] We have thus found the distribution of the images when any space bounded by a conductor consisting of two spherical surfaces meeting at an angle $\tfrac{\pi}{n}$, and kept at potential zero, is influenced by an electrified point.
We may by inversion deduce the case of a conductor consisting
http://math.stackexchange.com/questions/59768/has-abstract-algebra-ever-been-of-service-to-analysis
# Has Abstract Algebra ever been of service to Analysis?
I’m not saying that it ought to be. I was just wondering whether it has. What I have in mind is that it would have been of material help in proving, say, the Hahn-Banach Theorem, or some such. If it has, what is the most important/impressive instance of this?
-
I suppose it's hard to answer your question without setting some strict boundaries for what those subjects are. For example, the subject of Lie groups (just to name one) or abstract harmonic analysis, straddle both abstract algebra and analysis pretty much any way you define them. In that regard I view it as only an occasionally convenient fiction that "analysis" and "abstract algebra" are in any way distinct things. – Ryan Budney Aug 25 '11 at 21:38
– Akhil Mathew Aug 25 '11 at 21:57
A cheeky way to make my comment would be "Q: Has red ever been of service to green? A: Yes, to make yellow!" i.e. terms like "Algebra" and "Analysis" tend to be used more to describe how mathematics is flavoured, rather than what it actually is. At least, that's how I understand these words. – Ryan Budney Aug 25 '11 at 23:00
Personally, I'd like to see an instance of the opposite phenomenon... (Other than the Fundamental Theorem of Algebra) – Jesse Madnick Aug 26 '11 at 2:09
@Jesse: the Poincare conjecture is known to be equivalent to problems in combinatorial group theory (group presentations). Since the proof of the Poincare conjecture largely lies in the realm of analysis, that would be yet another example. – Ryan Budney Sep 2 '11 at 22:14
## 6 Answers
Speaking as someone who is basically an algebraist, I think of algebra as using structure to help understand or simplify a mathematical situation. The common algebraic objects (groups, rings, Lie algebras, etc.) reflect common structures that appear in many different contexts.
Now analysis often seems to have a certain slipperiness that makes it hard to pin down precise structures that meaningfully persist across different problems and contexts, and hence seems to have been somewhat resistant to methods of algebra in general. (This is an outsider's impression, and shouldn't be taken too seriously. But it does honestly reflect my impression ... .)
On the other hand, there do seem to be places where algebra can sneak in and play a role. One is mentioned by Qiaochu: Wiener proved that if $f$ is a nowhere zero periodic function with absolutely convergent Fourier series, then $1/f$ again has an absolutely convergent Fourier series. Wiener's proof was by direct harmonic analysis, but a conceptually simpler proof (in a much more general setting) was supplied by Gelfand (I believe) using the theory of Banach algebras, which hinges on algebraic concepts such as maximal ideals and radicals. Wiener proved his result as a step along the way to proving his general Tauberian theorem, and this result again admits a conceptually simpler proof, and generalization, via Banach algebra methods.
Another, more recent, example is the work of Green and Tao on asymptotics for the Hardy--Littlewood problem of solving linear equations in primes. Here they introduced algebraic ideas related to nilpotent Lie groups, which play a key role in understanding and analyzing the complexity and solubility of such equations.
-
Gelfand gave the conceptually simpler proof (or at least a sketch) in "To the theory of normed rings II" (title of the English translation), 1939. (I'm mentioning this because of the "I believe"; it was indeed Gelfand.) – Jonas Meyer Aug 26 '11 at 0:49
@Jonas: Dear Jonas, Thanks for the confirmation! Regards, – Matt E Aug 26 '11 at 11:05
Thanks. That's the kind of thing I was fishing for. I've upvoted your answer and accepted it. – Mike Jones Aug 26 '11 at 19:10
@Mike Jones: Dear Mike, You're welcome; I'm glad that my answer was of some use. Regards, – Matt E Aug 29 '11 at 2:46
Yes, differential equations is, intuitively speaking, probably about as far as you can get from abstract algebra in the realm of analysis. Yet algebra still manages to rear its ugly head there :).
One example is the entire subject of D modules (see also Sato's algebraic analysis that Akhil mentioned in his comment.)
But my favourite example of the somewhat surprising application of algebra to analysis is that of lacunas for hyperbolic differential operators. Given a constant coefficient hyperbolic partial differential operator $P(D)$ on $\mathbb{R}^n$, it has associated to it a fundamental solution $E$, which solves the equation $P(D)E = \delta$, where $\delta$ is the Dirac distribution. Using the fundamental solution we can write the solution to the inhomogeneous initial value problem $P(D)u = f$, $(u, \partial_t u)|_{t=0} = (u_0,u_1)$ in integral form. A Petrowsky lacuna of $P(D)$ is a region in which the fundamental solution $E$ vanishes.
Now, by hyperbolicity, the fundamental solution is supported within a bi-cone with vertex at the origin, a condition known as finite speed of propagation. So that forms a trivial region in which $E$ vanishes. But other than that the situation can be complicated. Just consider the fundamental solution for the linear wave equation. In odd number of spatial dimensions, the fundamental solution is supported precisely on the set $\{t^2 = r^2\}$. So the region $\{t^2 > r^2\}$ form lacunas for $P(D)$. On the other hand, in even number of spatial dimensions, the fundamental solution is supported on the whole set $\{ t^2 \geq r^2\}$: the equations look almost exactly the same, but the difference between even and odd dimensions is huge!
Petrowsky wrote a paper in 1945 giving precise "topological" conditions for the existence and characterisation of lacunas. Later on, Atiyah, Bott, and Garding wrote two papers revisiting this problem, in which the theorem(s) of Petrowsky are proved using an algebraic geometric framework. One can read more about this in Atiyah's Seminaire Bourbaki notes.
-
I guess there are going to be times when I wish I could accept more than one answer, and this is one of those times. I've accepted the answer of Matt E, but I would ALSO accept your answer if I could. This, too, is the kind of thing that I was fishing for. Anyway, I've upvoted your answer. – Mike Jones Aug 26 '11 at 19:13
I am not sure how to respond to this question. Functional analysis is a big part of analysis, and functional analysis is largely concerned with topological vector spaces; do vector spaces count as "abstract algebra"?
Does Fourier analysis count as "service"? It and its more general companions surely count as the most important intersection of algebra and analysis both in pure and applied mathematics.
How about the theory of Banach algebras? They are the natural setting for spectral theory, which is surely an important analytic topic, and they can also be used to prove an important lemma of Wiener and study quantum mechanics and all sorts of other things.
-
I think you mean Wiener, at least if you are referring to the proof of Gelfand that if $f$ is a nonvanishing continuous complex-valued function on the circle whose sequence of Fourier coefficients is in $\ell^1$, then the sequence of Fourier coefficients of $1/f$ is also in $\ell^1$. I am skeptical about it being accurate to say that Gelfand invented the theory for this purpose. This is a good answer. – Jonas Meyer Aug 25 '11 at 23:29
@Jonas: yes, for some reason I thought "Wiener" and wrote "Wigner." I may be misremembering something. I'll edit. I suppose Gelfand wanted to study spectral theory. – Qiaochu Yuan Aug 26 '11 at 1:58
You can make a very good case that functional analysis was born the day someone decided it was ok to mix these 2 seemingly unrelated areas of mathematics. – Mathemagician1234 Aug 26 '11 at 4:40
I admit that the question is a bit fuzzy, but it occurred to me in an odd moment and seemed to me like a question that ought to be asked / considered at least one time in the history of the human race, and so now we can let it rest in peace:-) – Mike Jones Aug 26 '11 at 19:15
Lie-theoretic methods play a big role in the group-theoretic approach to special functions and separation of variables in differential equations. For example, Willard Miller showed that the powerful Infeld-Hull factorization / ladder method - widely exploited by physicists - is equivalent to the representation theory of four local Lie groups. This Lie-theoretic approach served to powerfully unify and "explain" all prior similar attempts to provide a unified theory of such classes of special functions, e.g. Truesdell's influential book An Essay Toward a Unified Theory of Special Functions. Below is the first paragraph of the introduction to Willard Miller's classic monograph Lie theory and special functions
This monograph is the result of an attempt to understand the role played by special function theory in the formalism of mathematical physics. It demonstrates explicitly that special functions which arise in the study of mathematical models of physical phenomena and the identities which these functions obey are in many cases dictated by symmetry groups admitted by the models. In particular it will be shown that the factorization method, a powerful tool for computing eigenvalues and recurrence relations for solutions of second order ordinary differential equations (Infeld and Hull), is equivalent to the representation theory of four local Lie groups. A detailed study of these four groups and their Lie algebras leads to a unified treatment of a significant proportion of special function theory, especially that part of the theory which is most useful in mathematical physics.
See also Miller's sequel Symmetry and Separation of Variables. Again I quote from the preface:
This book is concerned with the relationship between symmetries of a linear second-order partial differential equation of mathematical physics, the coordinate systems in which the equation admits solutions via separation of variables, and the properties of the special functions that arise in this manner. It is an introduction intended for anyone with experience in partial differential equations, special functions, or Lie group theory, such as group theorists, applied mathematicians, theoretical physicists and chemists, and electrical engineers. We will exhibit some modern group-theoretic twists in the ancient method of separation of variables that can be used to provide a foundation for much of special function theory. In particular, we will show explicitly that all special functions that arise via separation of variables in the equations of mathematical physics can be studied using group theory. These include the functions of Lame, Ince, Mathieu, and others, as well as those of hypergeometric type.
This is a very critical time in the history of group-theoretic methods in special function theory. The basic relations between Lie groups, special functions, and the method of separation of variables have recently been clarified. One can now construct a group-theoretic machine that, when applied to a given differential equation of mathematical physics, describes in a rational manner the possible coordinate systems in which the equation admits solutions via separation of variables and the various expansion theorems relating the separable (special function) solutions in distinct coordinate systems. Indeed for the most important linear equations, the separated solutions are characterized as common eigenfunctions of sets of second-order commuting elements in the universal enveloping algebra of the Lie symmetry algebra corresponding to the equation. The problem of expanding one set of separable solutions in terms of another reduces to a problem in the representation theory of the Lie symmetry algebra.
See Koornwinder's review of this book for a very nice concise introduction to the group-theoretic approach to separation of variables.
-
Wow. Like Bertram, I said it before, and I'll say it again: Wow. (in order to meet the minimum number of characters requirement:-) – Mike Jones Sep 3 '11 at 7:28
I am really thankful you referenced me to that Essay Bill! Thanks! – Peter Tamaroff Jun 9 '12 at 19:16
Here is how Galois theory is applied to differential equations. Although I know nothing about this subject, here is an excerpt from Rotman's book on Galois theory.
From Rotman's Galois theory book.
• There is Galois theory in differential equations, due to Ritt and Kolchin. A derivation of a field $F$ is an additive homomorphism $D: F \to F$ with $D(xy) = xD(y) + D(x)y$; an ordered pair $(F,D)$ is called a differential field. Given a differential field $(F,D)$ with $F$ a (possibly infinite) extension of $\mathbb{C}$, its differential Galois group is the subgroup of $\text{Gal}(F/\mathbb{C})$ consisting of all $\sigma$ commuting with $D$. If this group is suitably topologized and if the extension $F/\mathbb{C}$ satisfies conditions analogous to being a Galois extension (it is called a Picard-Vessiot extension), then there is a bijection between the intermediate differential fields and the closed subgroups of the differential Galois group. The latest developments are in "$\text{A. Magid}$, Lectures on differential Galois theory," published by the American Mathematical Society.
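As a small aside (my addition, not part of the answer), the prototypical example of a derivation in the above sense is $d/dx$ on a rational function field; a sympy sketch verifying additivity and the Leibniz rule:

```python
# Minimal sketch (an addition): d/dx is a derivation on the field Q(x) in the
# sense of the excerpt above, i.e. it is additive and satisfies the Leibniz
# rule D(fg) = f*D(g) + D(f)*g.
import sympy as sp

x = sp.symbols('x')
D = lambda h: sp.diff(h, x)

f = (x**2 + 1) / (x - 3)
g = 1 / (x**5 + x + 1)

print(sp.simplify(D(f + g) - (D(f) + D(g))))      # 0: additivity
print(sp.simplify(D(f * g) - (f*D(g) + D(f)*g)))  # 0: Leibniz rule
```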
-
I found Rosenlicht's a very readable introduction to these ideas. Here's also a sci.math post containing many references. Finally, I'd like to point to Risch's algorithm. – t.b. Sep 4 '11 at 12:38
It is much like studying infinite Galois extensions (since the Galois group is not finite and assumes some topology, similar to Krull's, to retain the bijective correspondence as in the finite case). – Dinesh Sep 15 '11 at 10:24
Developing analysis beyond Euclidean space (e.g., on manifolds that don't a priori have a natural embedding into $\mathbb R^n$) requires a fair dose of multilinear algebra in order to define differential forms, which are the objects that are integrated. I think integration on manifolds counts as something of basic interest to analysts. :)
-
Excellent. I find I'm always in danger of overlooking the obvious. Anyway, I've up-voted your answer. – Mike Jones Sep 5 '11 at 19:49
http://mathoverflow.net/questions/91284/the-number-of-sylow-subgroups-in-groups-of-lie-type
## The number of Sylow subgroups in groups of Lie type
Dear friends,
I want to know how we can find the number of Sylow subgroups in groups of Lie type. For example, in the group $PSL(n,q)$, where $q=p^s$, how can we find the number of Sylow $p$-subgroups, or the number of Sylow $r$-subgroups where $r \in \pi(G)$?
Best regards
-
## 2 Answers
There are two distinct questions here. As pointed out by gauss, the "equicharacteristic" case (when you look at Sylow $p$-subgroups for the defining characteristic $p$) is fairly straightforward, since it's easy to compute the index of a Borel subgroup in a finite group of Lie type. Standard structure theory (in terms of BN-pairs) found in many sources shows that a Borel subgroup is the Sylow normalizer in this case.

For other primes $r$ dividing the group order, it's much harder to make general statements about the number or structure of Sylow $r$-subgroups. Here it's very useful to study a comprehensive summary of properties of the known finite simple groups: Number 3 (1998) in the series of AMS monographs by Gorenstein, Lyons, Solomon The Classification of the Finite Simple Groups. See in particular sections 3.3 and 4.10 for the two types of primes. This volume has lots of other information about the groups of Lie type, including their orders.
-
Any Sylow $p$-subgroup $P$ of a group of Lie type is contained in a Borel subgroup $B$, and if $P$ is a Sylow $p$-subgroup in $B$ then $B$ is its normalizer in $G$. Since all Borel subgroups are conjugate, we conclude that the number of Sylow $p$-subgroups is exactly the number of Borel subgroups of $G$.
Now for $PSL(2,q)$, it is well known that its Sylow $p$-subgroups are elementary abelian. So $P$ can be considered as an $s$-dimensional vector space over $F_{p}$, where $q = p^{s}$. So the number of $p^{k}$-subgroups of $P$ is exactly the number of $k$-dimensional subspaces of $P$.
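An added sketch, not part of the answer: for small odd primes one can verify the count $p+1$ by brute force in $SL(2,p)$ (the count is the same in the quotient $PSL(2,p)$), whose Sylow $p$-subgroups have order $p$; counting elements of order $p$ and dividing by $p-1$ gives the number of Sylow $p$-subgroups.

```python
# Brute-force sketch (an addition, not from the answer): for small odd p, count
# the Sylow p-subgroups of SL(2, p) directly. Each has order p, so it is cyclic
# and contains p-1 elements of order p; hence
#   #Sylow_p = #{elements of order p} / (p - 1),
# which should equal p + 1, the number of Borel subgroups.
from itertools import product

def sylow_p_count(p):
    count_order_p = 0
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p != 1:
            continue  # not in SL(2, p)
        # For odd p, a non-identity element of SL(2, p) has order p iff it is
        # unipotent, i.e. its trace is 2 (mod p); we just check the trace.
        if (a, b, c, d) != (1, 0, 0, 1) and (a + d) % p == 2 % p:
            count_order_p += 1
    return count_order_p // (p - 1)

for p in (3, 5, 7):
    print(p, sylow_p_count(p), "expected", p + 1)
```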
-
I am confused by your second paragraph - in what sense Sylow $p$-subgroups of $PSL_n(p^s)$ are abelian? In $GL_n(p^s)$, a Sylow $p$-subgroup is conjugate to unitriangular matrices... – Vladimir Dotsenko Mar 15 2012 at 15:28
That is incorrect, and only correct for $n = 2.$ For example, ${\rm SL}(3,2)$ has dihedral Sylow $2$-subgroups of order $8,$ and these are self-normalizing, so there are $21$ of them. – Geoff Robinson Mar 15 2012 at 15:37
Yes, you are right: for n=2 my answer is correct, but in the other cases it is incorrect. Thanks for your comments. What caused the mistake is thinking of $PSL_{2}$ as a doubly transitive permutation group; in this case the subgroup of it that fixes one point contains a unique Sylow p-subgroup, which is elementary abelian of order $p^{n-1}$. – gauss Mar 15 2012 at 16:20
http://math.stackexchange.com/questions/52199/find-the-complex-fourier-coefficients?answertab=active
# Find the Complex Fourier coefficients
This is a revision question I've been working on.
Show that if a $2\pi$-periodic function $f$ has the complex Fourier coefficients $c_{k}$ and $g(t)=f(t+a)$, where $a$ is a constant, then the Fourier coefficients $y_{k}$ of $g$ are given by $y_{k}=e^{ika}c_{k}$.
Now suppose $f$ has Fourier coefficients $c_{k}=e^{-k^{2}}$ and $g$ has Fourier coefficients $p_{k}=(1+k^{2})^{-1}$. Define $h(t)=2f(t+1)-g(t-2)-3$. Find the complex Fourier coefficients of $h$.
-
What have you tried? I mean, there is not much more to do here than plugging in the definitions. – t.b. Jul 18 '11 at 16:52
## 1 Answer
We have, from definitions, $$c_k = \frac{1}{2\pi}\int_{-\pi}^\pi f(t)e^{-ikt}\,dt.$$
What happens if you substitute $t + a$ for $t$ in this formula? Recall that $e^{a+b} = e^a e^b$.
For the next part, you can use some facts you now know to find $2f(t+1)$ and $-g(t-2).$ It just remains to find the complex Fourier coefficients of 3 (hint: this will not be hard!)
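A numerical check (my addition, not part of the answer): one can verify the shift rule $y_k = e^{ika}c_k$ for a sample smooth $2\pi$-periodic function, approximating the coefficient integral by an equispaced average.

```python
# Numeric sketch (an addition): check the shift rule y_k = e^{ika} c_k for a
# sample 2*pi-periodic f, approximating the coefficient integral by an
# equispaced average (spectrally accurate for smooth periodic functions).
import numpy as np

def coeff(f, k, n=4096):
    t = np.linspace(-np.pi, np.pi, n, endpoint=False)
    return np.mean(f(t) * np.exp(-1j * k * t))

f = lambda t: np.exp(np.cos(t)) + 0.5 * np.sin(3 * t)
a = 0.7
g = lambda t: f(t + a)

for k in (-2, 0, 1, 3):
    ck, yk = coeff(f, k), coeff(g, k)
    print(k, np.allclose(yk, np.exp(1j * k * a) * ck))
```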
-
http://mathoverflow.net/questions/13380?sort=newest
## Sparse approximate representation of a collection of vectors
Suppose I have a collection of $n$ vectors $C \subset \mathbb{F}_2^n$. They are of course spanned by the canonical set of $n$ basis vectors.
What I would like to find is a much smaller (~ $\log n$) collection of basis vectors that span a collection of vectors which well approximate $C$. That is, I would like basis vectors $b_1,\ldots,b_k$ such that for every $v \in C$, there exists a $u \in span(b_1,\ldots,b_k)$ such that $||u-v||_1 \leq \epsilon$.
When is this possible? Is there a property that $C$ might posses to allow such a sparse approximation?
-
Does $\|u-v\|<\varepsilon$ mean that $u$ and $v$ differ in at most $\varepsilon n$ positions, or do you normalize the norm in some other way? – fedja Jan 29 2010 at 18:33
That's the Hamming metric which is also the same as the $L^1$ norm and counts the number of coordinates where they differ. So they differ at $\epsilon$ positions. I'm wondering what values of $\epsilon$ the OP has in mind though. The problem is interesting, but quite broad as it is. – Sonia Balagopalan Jan 29 2010 at 19:14
Is there a particular reason why you want the size of $span(b_1,\ldots,b_k)$ to be roughly the same as the size of $C$, which is the dimension of the space. Just curious! – Sonia Balagopalan Jan 29 2010 at 19:25
I'm interested in the $\ell_1$ norm, not necessarily the $\ell_\infty$ norm (Hamming). I am willing to let $b_i \in \mathbb{R}^n$, so they are not necessarily the same. $||u-v||_1 = \sum_{i=1}^n|u_i-v_i|$. I'm thinking of values of $\epsilon = c\cdot n$ for some constant $c < 1$, the smaller the better. Sonia -- I'm thinking of approximating the value of an exponentially sized set of functions over a domain given a polynomially sized set of evaluations (of possibly different functions) over the same domain. – Donald Jan 29 2010 at 19:58
## 1 Answer
By triangle inequality, preserving the property you wish for means that you can find "representatives" for each $v$ so that the $\ell_1$ distances between any $v, v'$ are preserved to within 2$\epsilon$ additive error.
There is a general result by Brinkman and Charikar that says that in general, for a collection of $n$ vectors in an $\ell_1$ space, there's no way to construct a set of $n$ vectors in a smaller (e.g. $\log n$)-dimensional space that preserves distances approximately even multiplicatively (let alone additively). This distinction is important if you have vectors in the original space that are $O(\epsilon)$ apart, but otherwise doesn't matter greatly.
Brinkman, B. and Charikar, M. 2005. On the impossibility of dimension reduction in l1. J. ACM 52, 5 (Sep. 2005), 766-788. DOI= http://doi.acm.org/10.1145/1089023.1089026
So I'm guessing that the answer to your first question should be no.
-
The Brinkman-Charikar examples are the diamond graphs, which live in the Hamming cube. The only positive result for $\ell_1$ is that $n$ vectors embed well into $\ell_1^k$ with $k$ of order $n\log n$! Probably you know that you can do what you want if you are in $\ell_2$; if not, Google "Johnson-Lindenstrauss Lemma". – Bill Johnson Jan 30 2010 at 18:31
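An added illustration (not part of the thread): the $\ell_2$ situation alluded to in the last comment is easy to demo. A random Gaussian projection to $k = O(\log n/\epsilon^2)$ dimensions typically preserves sampled pairwise distances to within a $1 \pm \epsilon$ factor (Johnson-Lindenstrauss); the constant 8 below is a conservative, unoptimized choice.

```python
# Sketch (an addition): a random Gaussian projection preserves pairwise l_2
# distances up to 1 +/- eps (Johnson-Lindenstrauss); no analogous dimension
# reduction is possible in l_1 (Brinkman-Charikar).
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 200, 1000, 0.25
k = int(8 * np.log(n) / eps**2)          # a standard (unoptimized) choice

X = rng.standard_normal((n, d))
P = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ P

i, j = rng.integers(n, size=100), rng.integers(n, size=100)
orig = np.linalg.norm(X[i] - X[j], axis=1)
proj = np.linalg.norm(Y[i] - Y[j], axis=1)
mask = orig > 0                           # skip accidental i == j pairs
print(np.max(np.abs(proj[mask] / orig[mask] - 1)))  # typically < eps
```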
http://nrich.maths.org/5468
Factors and Multiples Game
This is a game for two players.
The first player chooses a positive even number that is less than $50$, and crosses it out on the grid.
The second player chooses a number to cross out. The number must be a factor or multiple of the first number.
Players continue to take it in turns to cross out numbers, at each stage choosing a number that is a factor or multiple of the number just crossed out by the other player.
The first person who is unable to cross out a number loses.
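A sketch of the move rule in code (an addition; the board is assumed here to run from 1 to 49, since the text only says the opening number is an even number below 50, and the actual grid on the site may differ):

```python
# Tiny sketch (an addition) of the move rule: given the last number crossed
# out and the set already used, list the legal replies. The board size is an
# assumption of this sketch.
def legal_moves(last, used, board_max=49):
    return [m for m in range(1, board_max + 1)
            if m not in used and (last % m == 0 or m % last == 0)]

used = {24}
print(legal_moves(24, used))   # factors and multiples of 24 still available
used.add(8)
print(legal_moves(8, used))
```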
Here is an interactive version of the game in which you drag the numbers from the left hand grid and drop them on the right hand grid. Alternatively, click on a number in the left hand grid and it will transport to the earliest empty location in the right hand grid. You can rearrange the numbers in the right hand grid by dragging and dropping them in position. The integer in the top right hand corner grows with the number of factors/multiples you have in a row.
An extension to the game, or a suitable activity for just one person, is suggested in the Possible extension in the Teachers' Notes
http://mishabucko.wordpress.com/
# Seeking objectiveness
## Prime notion
As I am dying, I would like to cross-investigate the notion of a prime. Prime notion yields the notion of a prime.
A number defines the quantity of objects, i.e. the power of the set of objects. To discuss a prime, we therefore have to talk about sets of objects. As for the notion of a set, we observe that if we talk about a specific set, then we change the relation between objects within this set and all objects outside the set, and here we need to assume that this relation does not influence the choice of the set.
If such a relation were itself able to influence the choice of a set, then what would define the choice of relations, and sets of relations, etc.? After all, there might be a predefined structure defining how the data is to be aggregated. In this sense, the idea behind decomposing the information of quantity into a multidimensional vector using prime numbers is appealing.
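An added illustration of this idea (mine, not the author's): unique factorization realizes exactly such a decomposition, sending a positive integer to its vector of prime exponents, under which multiplication becomes coordinatewise addition.

```python
# Sketch (an addition): unique factorization maps a positive integer to its
# vector of prime exponents, and multiplication becomes coordinatewise addition.
from sympy import factorint, primerange

def exponent_vector(n, primes):
    f = factorint(n)
    return [f.get(p, 0) for p in primes]

primes = list(primerange(2, 20))
print(exponent_vector(360, primes))   # 360 = 2^3 * 3^2 * 5
a, b = 12, 45
va, vb = exponent_vector(a, primes), exponent_vector(b, primes)
print([x + y for x, y in zip(va, vb)] == exponent_vector(a * b, primes))
```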
The very decomposition, though, is not about geometry as such, but about spreading the information into a vector of information while preserving the character of the quantity-based information within the number. So now, again, back to the crux: the character of the information hidden in the number.
A number contains information about quantity. There is a 1-1 correspondence between the quantity of elements and the number. However, the objects need to be of the same type. Going deeper into the objects, we observe that the objects consist of yet other objects, and also that the connection is not a "set" type of connection, but rather a more sophisticated structure. That shows that counting the objects may not be as simple as using the integers.
Then, we notice that numerous number systems and number types have been introduced, like complex or rational numbers. Those numbers have certain characteristics and ought to describe 1-1 the relation that is hidden in the real world. Still, in order to achieve that, we would have to understand what is there to be modelled, i.e. what type of information we need to address.
Rational numbers address the idea that the line starting from point A and ending at point B might consist of an infinite number of points. So, the idea that there is this infinity between two points. But then we would need to have this infinity somehow tackled.
Complex numbers address the idea that we might use the unit i, which does not exist among the integers, to look at the numbers on a 2d plane. There, we also notice that all of the points on a circle have the same distance to the center point, i.e. (0,0). Such an approach could be used to put the numbers into an infinite number of planes; for all of the planes we could have circles, and then generalized circles for multidimensional spaces.
Still, is it this notion of having the same distance from a certain point that should shape the rationale for a number? Does it allow enough connection between these numbers? If not, have we thought about a notion of number where the numbers would be very strongly interconnected, i.e. interconnected in a way that would resemble some sort of multidimensional structure, as if we were looking at a crystal structure, but multidimensional?
Now, let's suppose that we number the elements by "incrementing" this generalized notion of quantity, i.e. counting along the geometries of objects consisting of yet other objects, etc. What could such numbers look like?
Now, think again about prime numbers. If you multiply a number by a prime (a prime that is not a divisor of the very number), then you somehow save the information about that object and give it a specific character. That could be used for handling an object that has some structure, i.e. using the notion of a prime we might model the geometrical (not visual) structure of objects and connections between objects. In this sense, objects with common divisors should be connected.
To even talk about such an ordering, we would have to start from the low level, i.e. the particle level. Still, it is very likely that our first attempts would not be good enough, which means we would have to make the numbering scalable and allow changes within its structure. For such modelling we could use a generalized notion of graphs, assigning primes to relation types and nodes. In the final round we would have it with primes only and could forget about the notion of a graph.
This new enhanced notion of graph could be created with prime numbers only. A few questions that arise involve:
- are we capable of making it fully scalable and allowing changes to its structure?
- what would counting then look like, i.e. adding objects? It would probably have to look like addition in chemistry, i.e. adding H2 to O2 would give water, in some sense?
We observe that having an infinite number of primes allows us to develop this system infinitely. Building this system must be more like sketching and constant improvement, rather than making everything depict the universe perfectly from the beginning. It has been addressed in one of the previous posts, though, that having low-level-based strategies for understanding the functioning of the universe is unlikely to bring true results. Low-level strategies are more interconnected, having graphs in mind, with low-level adaptations such as algorithms, while the higher level is more about physics and end-products, more about engineering and experimentation. Connecting two dots, one of very low mathematical level, let's say a prime, and one of very high abstraction, like the expansion of the universe, is unlikely to be true; the mathematical-level theorem would just give a physical-level background and, as such, might also often be developed without that much low-level abstraction.
New ideas of how to connect high-level dots in compact systems, but with a high parameter space, involve a lot of decomposition and machine learning, and still require a lot from mankind. Don't be too proud, humans.
## Knowledge of the perceived only
1. Based on the model of time and space which we currently use, the universe expands. Here, our apparatus includes "G=T" and topology, therein partial, Lie and covariant derivatives, multiple operators and products (e.g. tensor, wedge, exterior, etc.). We can no longer think that mass generates a gravitational field according to Poisson's equation and that a gravitational field creates acceleration. That would mean that a geometrical n->1 (information-losing) notion of mass would allow us to understand the change in movement (velocity). At the same time, keep in mind that the definitions used in this sentence are arguable.
2. It is built of information-carrying elements. Two arbitrary elements are potentially different, i.e. to learn about all of them we have O(n), assuming we might enumerate them in at most O(n). For that we have the numbering (integers), and therefore everything that comes with the integers, including the primes. A prime allows picturing an arbitrary number in a (brain-understandable) geometry that is, in a sense, not arbitrary, as it allows no loss of generality. The question would be whether the notion of integer itself results in a loss of information.
3. In problem solving, due to the size of the parameter space (which we now understand even better from machine learning), the process of creating a model of the perceived (everything we might access, whether by eyes, or a set of senses, brain, etc., may be referred to as the perceived) should involve a possibly strict picture of what we perceive as correct (from axioms to lemmas and theorems in mathematics). Then, the strategic approach would be to find the places where the theorems most likely connect to build new knowledge. Building working products rather than low-level algorithms then involves connecting the engineering dots rather than low-level dots, due to the parameter space. Finding the places of dense connections should also be as effective as possible, and that has been thoroughly studied in machine learning.
4. All elements of engineering and low-level development of abstract thinking may be automated. One current advantage of humans over machines boils down to building models of high resolution and then applying fuzzier logic based on pattern recognition, where two "similar" patterns might be only slightly connected (inspiration) or a bit more, still slightly (rough guessing). Therefore, using newly implemented pattern recognition, we might also automate higher-level knowledge-building through fuzzier pattern recognition.
5. Knowledge is confined to the perceived, as such, as it is built on the perceived. Also, it cannot be built on anything other than the perceived. The "good" side of it is that it allows learning about the entire perceivable. The "bad" side is the same, i.e. it is only the perceivable that might be learnt. Defining what is perceivable and what is not remains an open question. For now, we could start by saying that everything we can imagine is perceivable. Therein, assuming that we could imagine a human, we could create one ourselves after the process of learning how to do it. To make it clearer: if we can imagine a notion of human (or, in general, a notion of being, if a human is one of the beings not distinguished by having a non-perceivable element), then we will be able to create it ourselves.
6. Claiming the existence of the non-perceivable cannot be supported, by its definition, i.e. we assume that we can imagine an object, therefore we might be able to create it. Any development within the perceivable exists in the perceivable, thus our current methodology of knowledge building. Claiming the existence of the non-perceivable might make sense, but cannot help us develop better unless we also contain an element of the unperceivable, being unperceivable ourselves, i.e. being capable of understanding the non-perceivable, a contradiction.
7. From 6. we learnt that there might exist new ways of cognition, which could let us learn about an element that does not belong to the plane containing our potential learnings. Therefore, the non-perceivable would be non-perceived only in the context of the learning methodology. Improving it might allow us to go further.
8. Going further than in 7., what if we could develop our cognition to learn about the things that lie beyond our current perception? Imagine that you are blind, thinking about how the world must look. You see it black, but then fall or bump into a wall, perceive different temperatures throughout the day, become more tired after a long period of running. You then assign a model for describing new vectors of information: temperature, time, etc. The model involves a number of parameters, which then turn out to be somewhat connected. We are that blind, i.e. this is exactly our situation.
9. Based on the learnings from 8., we observe that finding more description vectors and their connections, as well as automation, might allow a faster development of the unperceived in the context of "previous" learning capabilities. Assuming that there is nothing globally unperceivable, we could learn everything. Otherwise, if we assume an element of the globally unperceivable within us, then we cannot perceive the entire "I", and therefore cannot take responsibility for "I". So, we assume there is nothing globally unperceivable within us.
## Elaboration on two known issues in the context of finding an automated approach
Each post marked with “known” in the title is about known results and concerns my notes; many of the notes are not thoroughly checked and the solutions are non-rigorous. In this post, we are going to start with the problems, take an informal look at them and at their surroundings, and then try to elaborate on a potentially interesting direction in problem solving. The key point is a multi-variable approach, i.e. converting the variables of the original task into a multivariable task, then introducing strategies that take advantage of contradiction search, and then letting second-price-auction problem solvers bid for their chance, which would finally lead to more structured problem solving. This post is very informal and remains part of my internal analysis of certain problems; the reader might find it difficult to read.
$\cos 1$ is irrational (the argument is in degrees)
As many of you know the answer to this problem, I will give a short introduction on how to handle problems in a generic way. What we currently know is that we will be dealing with a function. Will we be analyzing its values in general, or only one value? We will be analyzing its values in general unless we notice something special about the value at argument 1. Here, we are dealing with a known function, and students know that the cosine function also takes irrational values. Assuming we will not pursue the path claiming that argument 1 is special in some way, we will focus on taking a closer look at more values of the cosine function.
Knowing that we will focus on more values of the cosine function, let's think about what to do next. Here we know a lot about the function itself; in generic cases that is not so, and we should first learn more about the function. That is indeed the very first thing to do, as it might show us that, for instance, argument 1 is a special case. Here, we don't claim that.
As we know that the cosine function takes irrational values, we might try to build a chain of implications based on the character of the cosine function to connect $\cos 1$ with $\cos 30$, whose value $\frac{\sqrt{3}}{2}$ is irrational. If that were possible, we would be able to prove the claim.
Let us now assume, for contradiction, that $\cos 1$ is rational. From the character of the cosine function we have $\cos(2x)=2\cos^2(x)-1$, which leads to the conclusion that $\cos 2$ is also rational. Knowing that $\cos (n+1)+\cos (n-1)=2\cos n\cos 1$, we see that for $n=2$ we obtain a rational value of $\cos 3$. Iterating, we obtain a rational value of $\cos 30=\frac{\sqrt{3}}{2}$, which is a contradiction, making our assumption false, q.e.d.
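Since the whole argument rests on the two identities above, it is cheap to sanity-check them numerically before building the chain. A minimal sketch, assuming only the standard library and working in degrees:

```python
import math

def cosd(x):
    """Cosine with the argument given in degrees."""
    return math.cos(math.radians(x))

# cos(2x) = 2*cos^2(x) - 1
for x in (1, 7, 13):
    assert abs(cosd(2 * x) - (2 * cosd(x) ** 2 - 1)) < 1e-12

# cos(n+1) + cos(n-1) = 2*cos(n)*cos(1), the step that propagates rationality
for n in range(2, 30):
    assert abs(cosd(n + 1) + cosd(n - 1) - 2 * cosd(n) * cosd(1)) < 1e-12

print("both identities hold numerically up to n = 30")
```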
What we managed to do was to find a chain of implications, based on the character of the cosine function, showing that rationality of $\cos 1$ would imply rationality of $\cos 30$. But why did we choose this solution?
Let's take a closer look at the problem again. We have a special case. If we don't claim it is indeed special, we must know how to connect the values of $\cos 30$ and $\cos 1$. At the same time, we don't know whether $\cos 1, \cos 2, \ldots, \cos 89$ are rational or not. If we were not able to connect these two values, we would have to find something special about $\cos 1$.
Reductio ad absurdum is, as such, an assumption of the opposite, so what matters in using it is finding the place where the logical chain leads to a contradiction with the very assumption. So, before diving into the line of the proof, we need to be able to tell how we might find the contradiction.
A good example of this is the proof of irrationality of $\pi$ or of $e$. In the first proof, we assume (for the sake of contradiction) that $\pi =\frac{p}{q}$, where $p, q$ are coprime integers, and then consider $I_n =\int_0^{p/q}\frac{x^n(p-qx)^n}{n!}\sin x \, dx$, $n\in \mathbb{N}$, showing that $I_n$ is a positive integer, hence $I_n \ge 1$, while $I_n$ converges to 0, which is a contradiction. In the second proof, for any natural $n$ we have $e =\sum_{k=0}^n\frac {1}{k!}+\int_0^1\frac{(1-t)^n}{n!}e^t \, dt$ and, for any $n$, $0 < \int_0^1\frac{(1-t)^n}{n!}e^t \, dt <\frac {3}{(n+1)!}$; assuming $e=\frac{p}{q}$ and multiplying by $n!$ for $n\ge q$ makes both $n!\,e$ and the partial sum integers, while the remainder term lies strictly between 0 and 1, a contradiction. To conclude, in both cases we made an assumption of the opposite once we had a strategy for finding a contradiction, by analyzing a function (another presentation of the same value) that involves the element in question, namely $\pi$ or $e$. With that in mind, before diving into the very line of contradiction, we should think about how to embed the element in question into an environment in which we feel more comfortable, i.e. one where we can foresee potential contradictions. To find the contradiction, we then analyze different aspects of the function in the given environment.
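The identity for $e$ and its remainder bound are easy to check numerically before relying on them. A minimal sketch, assuming only the standard library:

```python
import math

# e = sum_{k=0}^{n} 1/k! + R_n, with 0 < R_n < 3/(n+1)!
for n in range(1, 12):
    partial = sum(1 / math.factorial(k) for k in range(n + 1))
    remainder = math.e - partial   # equals the integral remainder term
    assert 0 < remainder < 3 / math.factorial(n + 1)

print("remainder bounds verified for n = 1..11")
```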
Prove that $ab+cd$ is not prime: an example of looking at the same thing from different perspectives
Let $a, b, c, d$ be integers with $a>b>c>d>0$. Suppose that $ac+bd = k(b+d+a-c)$ for some $k \in \mathbb{Z}$. Prove that $ab+cd$ is not prime. Before going further with the proof, I will address a couple of issues.
Firstly, we don’t know what the most crucial elements of a given task are. Even if such elements do not exist, the more we know about the problem, the more likely it is that a reasonably interesting solution or new theory will be created. A new perspective enables a different decomposition of the problem, and a new decomposition allows new attack strategies. To increase the amount of information we have about a specific problem, we are going to use computers for gathering information and for deduction. In the long run computers will be used even more for deduction, so the need to redefine the position of humans appears on the horizon.
Secondly, in our task we have an equation of a kind we are used to, so the very first idea for many is to rearrange it. That, combined with the fact that the equation contains an additional parameter, suggests a proof with multiple cases. Still, such proofs are rarely elegant, since instead of one logical chain they are a set of “if” clauses handled with smaller chains. For automated theorem proving such an approach has the potential to give good results, thanks to the computational capabilities of computers, although in practice heuristics are often enough for computers to assume associations without a legitimate rigorous proof.
Rigorous proofs are still required for developing strategic thinking, while it is often the case that computers can take advantage of assumptions, which in certain cases turn into reductio ad absurdum. For problems in automated theorem proving or computational complexity, automated conjecturing of assumptions cuts the domain of definition effectively enough to enable further heuristics. Given a certain amount of pure iteration, we arrive at very useful assumptions and a decision tree with certain branches handled by proofs by contradiction (those branches are pruned later on).
Let's now get back to our initial question. Firstly, there are many trivial cases in our task; those will be omitted in the analysis. We notice that the number we will be checking for primality is a sum, and we don't have any tools for verifying primality of sums. (I have made a proposal for research in the field of number spectral analysis and the R-sequence, but it is omitted here.) So, for now, we have no method for verifying primality of a sum of two numbers. Now, another set of words from the given task: “not prime”. To show that an integer is not prime, we need to show that it has a divisor other than 1 and itself. Let's assume the number in question is prime. We have $ab+cd=(a+d)c+(b-c)a$, so $g=(a+d,b-c)$ divides $ab+cd$, i.e. $ab+cd=mg$ for some $m\in \mathbb{N}$. For $m=1$, i.e. $g=ab+cd$, we would have $ab+cd=(a+d,b-c)\le a+d<ab+cd$, a contradiction; since $ab+cd$ is prime and $g$ divides it, we are left with the case $(a+d,b-c)=1$. By hypothesis $b+d+a-c \mid ac+bd$, i.e. there exists $p$ such that $ac+bd=p(b+d+a-c)$. Also, $ac+bd=(a+d)b-(b-c)a=p(b+d+a-c)=p(a+d)+p(b-c)$, thus $(b-p)(a+d)=(a+p)(b-c)$. From the assumption of this case, $(a+d,b-c)=1$, we have $a+p=k(a+d)$ for some positive integer $k$, hence $b-p=k(b-c)$ and, adding, $a+b=k(a+b+d-c)$. If $k=1$ then $c=d$, which contradicts $c>d$. And for $k\ge 2$ we have $a+b\ge 2(a+b+d-c)$, i.e. $2c\ge a+b+2d$, which again is a contradiction since $a>c$ and $b>c$. Having covered all the cases, we are done.
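Before (or after) such a case analysis, it costs little to test the statement mechanically. A brute-force sketch; the search bound LIMIT is an arbitrary assumption:

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

LIMIT = 30
found = 0
for a in range(4, LIMIT):
    for b in range(3, a):
        for c in range(2, b):
            for d in range(1, c):
                # hypothesis: (b+d+a-c) divides ac+bd
                if (a * c + b * d) % (b + d + a - c) == 0:
                    found += 1
                    assert not is_prime(a * b + c * d), (a, b, c, d)

print(f"{found} qualifying tuples checked; ab+cd composite in every case")
```

For instance $(a,b,c,d)=(11,9,6,1)$ satisfies the hypothesis ($ac+bd=75=5\cdot 15$, with $b+d+a-c=15$) and gives $ab+cd=105=3\cdot 5\cdot 7$.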
From the perspective of automated theorem proving we would have many special cases and verifications of contradictions, so a computer could do our work here; this is why I chose this task. Another thing is to try to understand what the equation might mean. Geometrically, if the sum of two areas (a carrot per lattice point) equals the area on the RHS, i.e. the product, we already see that the LHS's area can be drawn as a 2d region over the integers. And knowing that this equation holds, we have to prove that yet another sum of two areas can also be drawn in 2d (not being prime means admitting a factorization into two divisors both greater than 1, and exhibiting such a rectangle is already enough).
And now, knowing the result from the proof with many special cases, we already know the answer, and based on that we might try to notice geometrical connections. These could be used for looking at such proofs from yet another perspective, for instance seeing $p_1p_2$ as the area of a rectangle. Like in the example of a rectangle with sides $a$ and $b$ arbitrarily cut into $n$ squares with sides $x_{1},...,x_{n}$, where we need to show that $\frac{x_{i}}{a}\in \mathbb{Q}$ and $\frac{x_{i}}{b}\in \mathbb{Q}$ for all $i\in\{1,..., n\}$. At first sight we see that it is an interesting result connecting geometry and numbers. The generally known solution to this task is to express the intersections in the square tiling as a grid, then use a basis of the set of lengths of the squares as a representation for constructing two lemmas about the grid structure, and finally deduce the answer by computing the area of the rectangle as the sum of all the squares in the grid.
The point
Now, if we take a closer look at both issues, we notice that for building long chains of implications while keeping the argument quite self-contained, it might be effective to introduce more variables that describe the known definition somewhat differently, and then look for potential contradictions. In the case of $\cos 1$ this was relatively easier than in the case of $\pi$ or $e$. Still, in the general case we would face a situation quite similar to the latter, also in many variables. Combining that with a visual representation described by the newly introduced variables would give us a chance to deal with the problem twofold, i.e. by visually “noticing” contradictions (using a finite computational resource, either our head or a computer) or by finding the contradictions as indicated in the first examples and many known results. That, combined with a more computational approach to set theory, would let us use finitary formulations of infinitary statements and a rather different approach to mathematical objects (one more finitary than the other, via oracles), and that could be used for automated problem solving using problem decomposition (from the examples), contradiction search, and convex optimization for minimizing the time in which a task is solved.
So, for yet another problem, we would introduce a model based on new variables that describe the known facts and at the same time allow contradiction search. Such a multi-variable approach would also allow more effective problem handling by a computer. Then we would have different problem-solving strategies ready, based on different problem decompositions. The strategies would bid based on the decomposition of the problem, as not all might have enough resources to work simultaneously, and would bid truthful values (truthful bidding being a dominant strategy in a second-price auction), taking the required resources into consideration. Then we'd let the winner work first, trying to look for contradictions based on the pre-programmed knowledge. The ideas for doing that we can derive from these examples, i.e. deduce the most trivial knowledge about the character of the two sides of the equalities of a specific problem. A toy sketch of the bidding step follows below.
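In the sketch, the strategy names and bid values are hypothetical; only the second-price rule itself is the point:

```python
def run_auction(bids):
    """bids: dict strategy -> reported value of attacking the decomposition.
    Returns the winner and the second-highest bid it 'pays' (Vickrey rule),
    under which truthful reporting is a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

bids = {"contradiction_search": 0.9, "case_split": 0.6, "geometric_model": 0.4}
winner, price = run_auction(bids)
print(f"{winner} works first, at scheduling price {price}")
```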
Also, regarding the relative construction of geometrical objects: we could decompose such problems too, or decompose other problems into such problems, so that our human brain would be more likely to assist in finding vulnerable points. It is crucial to note that, as of now, human brains are better at finding patterns than computers, because knowledge based on numerous senses is, by default, far more structured and the patterns are likely more blurred, i.e. the comparison of two extensive branches of a tree is less accurate but gives useful hints and approximations. Such approximations could be used to build the problem-solving strategy before launching the contradiction-search process, again because of the parameter space.
## From the idea of M-tree to number M transformation
Firstly, a short elaboration on the decimal system and similar positional systems (e.g. binary) in general. A primitive of a number written in such a system carries information about its value (quantity), i.e. it does not carry information about the connections between this primitive and other primitives (digits). From what we currently understand about digits, everything we learn from them is absolute quantitative information. Still, say, Alice has 4 eggs and Bob has 3 eggs. Could we learn more?
Take 2838195719. Divide the digits into subgroups: (28)(38)(19)(57)(19). For each subgroup find its prime-divisor list (excluding 1): (2, 2^2, 7)(2, 19)(19)(3, 19)(19). Then see whether you can find a connection between the divisors of the subgroups across the entire number.
For instance (the illustrating figure is not reproduced here): the M-transformation returns as output the path built from the numbers connected with a red line. Can this information help us decide about primality?
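My reading of the clustering step can be made concrete as follows; the fixed cluster width is an assumption, and the factor lists are multisets rather than the (divisor, power) notation used above:

```python
def prime_factors(n):
    """Prime factorization of n as a list with multiplicity, e.g. 28 -> [2, 2, 7]."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def cluster_factors(number, width):
    """Split the digit string into width-sized subgroups and factor each one."""
    s = str(number)
    groups = [int(s[i:i + width]) for i in range(0, len(s), width)]
    return [(g, prime_factors(g)) for g in groups]

for g, f in cluster_factors(2838195719, 2):
    print(g, f)   # 28 [2, 2, 7], 38 [2, 19], 19 [19], 57 [3, 19], 19 [19]
```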
## Prime exponential k-function: a task about the symmetry of numbers
Let $f_k(x,y,z,t) =2^x+3^y+5^z+7^t$, $x,y,z,t \in \mathbb{N}$, where $k=4$ defines the number of consecutive primes used in the function, i.e. here $\{2,3,5,7\}$. For every prime $p$, find all tuples $(x,y,z,t)$ such that $f_4(x,y,z,t)=p$ (if any exist).
An additional question: how does the task change for $f_2(x,y)=2^x+3^y$ (with the rest of the task adjusted accordingly)?
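A brute-force probe of the task; the exponent bound and the convention $0\in\mathbb{N}$ are my assumptions:

```python
from itertools import product

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

MAX_EXP = 6   # arbitrary cut-off to keep the search finite
hits = {}
for x, y, z, t in product(range(MAX_EXP + 1), repeat=4):
    v = 2**x + 3**y + 5**z + 7**t
    if is_prime(v):
        hits.setdefault(v, []).append((x, y, z, t))

for p in sorted(hits)[:10]:
    print(p, hits[p])   # e.g. 5 has the representation (1, 0, 0, 0)
```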
## Logical struggles regarding the vagueness and partial knowledge
Some of my recent struggles concern logic. Therein, embedded in the logic that we know, we deal with a binary assessment of a logical value based on the environment of claims surrounding the sentence in question. What does that mean? We could say that a certain sentence is true given certain knowledge about the universe that surrounds it. However, gaining more knowledge about the universe could make the same sentence false. In this context we could mention set-theory axioms that faced numerous contradictions in the past. An example would be, for instance, the enumeration of all integers (countable). The enumeration takes place in an environment where we can afford to enumerate them all and the structure of the universe does not contradict that capability. If we think of the very enumeration as enumeration in naive set theory, i.e. the enumeration of a set of integers, it might be the case that not all the integers are in that set, etc.
Dr. Tao refers to “infinitary” and “more infinitary” sets and their corresponding computational requirements, which clearly indicates the problem behind the word “naive” that should be attached to all the theories we know. Under very specific conditions a certain theory might be true, but at the same time it might be that the larger picture would allow us to find a leak within the very theorem. Yet another thing is that the level of formality used to build the model behind the proof of a theorem might involve low-level problems that we fail to see. For instance, in the standard axiom system of set theory (ZFC), we currently don't know of any paradoxes; at the same time, it is possible that some exist. Enumerating them is also a function of tree traversal and (problem-)space partitioning, and therefore has its own computational requirements.
Based on the aforementioned, it is often the case that objects we create cannot really exist and only appear to exist due to vagueness. Detecting that vagueness requires in-depth verification of each object used in a proof. Each object is constructed using a language that is very likely to leak more as we get to know more about the universe. And it is the set, i.e. the construction of “multiple” primitives carried in a bag that exists next to the primitives (numbers) and is widely used in the language of mathematics, where we have found so many problems. The same applies to understanding the structure of numbers.
All that, combined with Tarski's undefinability theorem and impredicativity, leaves room for rethinking how numbers as primitives and sets as aggregates of primitives should describe the universe as we currently know it, i.e. one we know so little about, so that we don't have to rewrite too much in case of fundamental contradictions in our understanding of the universe. An interesting approach to thinking about sets has been presented by Dr. Tao, who recast the Banach-Tarski paradox and Cantor's theorem in a “finitary” manner, using the notion of an oracle. It might be the case that we should think about a mathematical tool for finding contradictions within the mathematics we currently use, i.e. an analyser of the formal language used for constructing mathematical objects and proofs.
Question list:
1. Is there a way to describe the movement of planets, assuming there are $k\ge 2$ planets (a $k$-body example), using the general theory of relativity?
2. From Mr Lipton’s blog:
“GLL: How can the same system be complete and incomplete? How can making something stronger make it incomplete?
Gödel: Ach—your words in English are too short. We have longer words so we think more before finishing them. Less confusion.”
The question is: could we redefine all the words and use only very accurate wording for whatever we want to state?
3. What are good examples of proofs that take advantage of induction and at the same time use an ordering other than the integer one ($n\to n+1$)? What about $p(n)\to p(n+1)$?
4. Could we run an induction using a Turing machine built out of logical statements, with the ordering defined by how deep we are in the tree (using a tree traversal rather than the usual sequential traversal)?
5. In a complex game where it is difficult to define the goal and a certain number of constraints keep us alive in the game (e.g. life), should we always look for implied odds rather than the odds of a particular decision?
6. How is it nowadays possible for a single person to conduct real astronomical research that connects mathematical modelling with large amounts of data?
Now the second part of this post, as I am a little tired of logic.
Whenever I think of the theorems regarding inequalities (Muirhead, Jensen, weighted power mean, Hölder, rearrangement, Chebyshev, Schur, Maclaurin, majorization, Bernoulli, etc.), I think about three different things:
- how a certain function acts given a specific “extensive” argument,
- what the types of interesting “extensive” arguments are (such an “extensive” argument could be either $A = x_1+x_2+\cdots+x_n$ or $B = w_1 x_1 + w_2 x_2 + \cdots + w_n x_n$); then we could ask whether we can generalize the relation between $f(A^B)$ and $f(B^A)$,
- ways to settle generalizations about the function $f(C) = {{f(A)}\over{f(B)}}$ (or similar) using (human) analytic capabilities, automated (computational) analytic capabilities, or heuristics.
What comes to mind is that for creating such theorems we are using:
- analytic (non-combinatoric) analysis of $f(C)$; from there we can proceed towards a more automated analytic approach using problem solvers, and here we would also use some sort of theory-creating tool that describes a feature of the function and then investigates it for $f(C)$,
- permutations; our brain will only succeed in finding the easiest-to-spot connections, whereas what we need is a tool that will permute and test (see the sketch after the next paragraph),
When it comes to the analytic approach, we have manifold notions including smoothing (and unsmoothing), convexity (concavity), extrema, constrained extrema, the derivative test, the Hessian test, etc. When it comes to permutations, we have majorization, symmetric sums, etc. It might also be possible to reduce the dimensionality of the problem, i.e. decrease the number of variables used in it.
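A sketch of the permute-and-test idea; the rearrangement inequality is used here only as a stand-in for whatever candidate relation one wants to probe:

```python
import itertools
import random

def dot(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

random.seed(0)
for _ in range(500):
    a = sorted(random.uniform(0, 10) for _ in range(4))
    b = sorted(random.uniform(0, 10) for _ in range(4))
    best = dot(a, b)   # similarly-ordered pairing
    for perm in itertools.permutations(b):
        # rearrangement inequality: no permutation should beat the sorted pairing
        assert best >= dot(a, perm) - 1e-9

print("rearrangement inequality survived 500 random trials")
```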
## Measuring number symmetry and introduction of M(k)-tree
Starting from the last digit of every prime number greater than 5 (1, 3, 7 or 9), we might iteratively append digits on the left side of the last digit (the first one added). This is a way to create all odd numbers not divisible by 5. Now, to decrease the probability of constructing a non-prime number, we might think of a new way to handle divisibility. We notice that divisibility by 3 and 9 is connected with digit sums. Because of the character of the sum, for a number with digits abcdefg, whether we cluster (ab)(cdef)(g) or (abc)(defg), the sum of the clusters always has the same remainder modulo 3 and 9 as the number itself, so for a prime the clusters cannot add up to a number divisible by 3 or 9. Not only the remainders but potentially also the primality of the clusters matters. Or symmetry. This post is just an indication that I would like to think about the symmetry of numbers: how to measure the symmetry of a number in order to say whether it is prime or not, and what interesting clusterings of a number might look like.
As for the symmetry, I would like to introduce yet another abstract idea that came out of a dream. Take 1042341 and let's draw its M(1)-tree.
1042341
We start from the last digit, as this is the digit that starts the number and might already tell us that we are not dealing with a prime, and take successive subtractions: (n+1)-th digit minus n-th digit, reading from the right.
1. We have 1 (black).
2. The next digit is 4, so 4 - 1 = 3 (green, as positive).
3. Then 3 - 4 = -1 (red, as negative).
4. Continuing in the same way: -1, 2, -4, 1.
So, all in all: 1 black, then 3 green, -1 red, -1 red, 2 green, -4 red, 1 green. (The painting itself is not reproduced here.)
We could paint this for any number; for M(1) the path length is the length of the number minus 1. In the general case, we cluster the digits into clusters of the same length, treat each cluster as a number, and take consecutive differences in the same way. For instance, for 1042341 we have only one option, as the number of digits is prime, but for 10423411 we might have (10)(42)(34)(11) or (1042)(3411). In the first case the result is 23, 8, -32; in the second, -2369.
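The transformation is easy to mechanize. A sketch of my reading of M(k), with clusters cut from the left and differences taken from the last cluster backwards, as in the worked examples:

```python
def m_transform(number, width=1):
    """Consecutive differences of the digit clusters, read from the right."""
    s = str(number)
    assert len(s) % width == 0, "digit count must be divisible by the width"
    clusters = [int(s[i:i + width]) for i in range(0, len(s), width)]
    rev = clusters[::-1]          # start from the last cluster
    return [rev[i + 1] - rev[i] for i in range(len(rev) - 1)]

print(m_transform(1042341))      # [3, -1, -1, 2, -4, 1]
print(m_transform(10423411, 2))  # [23, 8, -32]
print(m_transform(10423411, 4))  # [-2369]
```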
And the question: does the primality of the number of digits of a number influence the probability of the number being prime?
## Primes modulo 3 are either 1 or 2, i.e. concatenating for prime finding
Divisibility by 3, 5, 7, 11, etc. is not new to us; we understand it for many numbers. For a concatenation of quarcs to be prime, the number cannot be divisible by any of those numbers, in particular by 3. For this we should understand that there are two types of primes: those congruent to 1 and those congruent to 2 modulo 3. When concatenating, that must be borne in mind. That's a good start. Also, the last quarc factor cannot be divisible by 5, i.e. the last digit of the last quarc factor cannot be 5, etc. These are the rules. Without a generalization of divisibility rules, one cannot do much here to find big primes by hand; therefore yet another type of approach is suggested.
211, 311 and 811 are primes. 211 = (2)(11) = (21)(1). In the first splitting both parts are prime; in the second there is 1 and the composite 21. For 311 we have 311 = (3)(11) = (31)(1), i.e. in the first splitting both parts are prime and in the second there is 1 and the prime 31. In the third example we have 811 = (8)(11) = (81)(1): here 11 is prime while 8, 81 and 1 are not. 1091, 1559 and 2053 are all prime; 1091 = (109)(1) = (10)(91), etc. A small enumeration sketch follows below.
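As promised, a sketch enumerating all two-cluster digit splits, with a primality flag for each part:

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def splits(number):
    """All ways to cut the decimal digits into two contiguous clusters."""
    s = str(number)
    for cut in range(1, len(s)):
        left, right = int(s[:cut]), int(s[cut:])
        yield (left, is_prime(left)), (right, is_prime(right))

for n in (211, 311, 811, 1091):
    print(n, list(splits(n)))
# 211 -> ((2, True), (11, True)) and ((21, False), (1, False)), etc.
```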
And now we could do the following:
- for different pattern-based number construction models check if a number is prime,
- (A) cluster the results
- (B) use matrix factorization with kernelling for disclosing the pattern,
- conjecture a linear problem based on the results,
- try out convex optimization to redefine the clusters and go back to point (B)
Below is just a poetic infographic of my thoughts (not reproduced here). Mostly, it is about choosing one's own goals and passing the needy in the streets.
As for the lessons from Euler, who was indeed mathematically gifted for his era, I will refer to them in a longer book; that would be better than referring to all his papers. As for the letters in the infographic, it says GOEDEL, as it was his idea to use contradictions to get rid of “wrong paths” within the thought process. Therefore, we should decide about the structure of life. Indeed, we could start from memento mori. But it is right there that we notice that we cannot really be dead, as we have influenced the world and it is also part of us, not only the physical body. For those who focus on physical “being”, such being, as is, would require accepting its gradual loss.
http://math.stackexchange.com/questions/264799/calculating-the-total-number-of-surjective-functions
# Calculating the total number of surjective functions
It is quite easy to calculate the total number of functions from a set $X$ with $m$ elements to a set $Y$ with $n$ elements ($n^{m}$), and also the total number of injective functions ($n^{\underline{m}}$, denoting the falling factorial). But I am thinking about how to calculate the total number of surjective functions $f\colon X \twoheadrightarrow Y$.
The way I thought of doing this is as follows: firstly, since all $n$ elements of the codomain $Y$ need to be mapped to, you choose any $n$ elements from the $m$ elements of the set $X$ to be mapped one-to-one with the $n$ elements of $Y$. This results in $n!$ possible pairings. But the number of ways of choosing $n$ elements from $m$ elements is $\frac{m!}{(m-n)!\,n!}$, so the total number of ways of matching $n$ elements in $X$ to be one-to-one with the $n$ elements of $Y$ is $\frac{m!}{(m-n)!\,n!} \times n! = \frac{m!}{(m-n)!}$.
Now we have 'covered' the codomain $Y$ with $n$ elements from $X$, the remaining unpaired $m-n$ elements from $X$ can be mapped to any of the elements of $Y$, so there are $n^{m-n}$ ways of doing this. Therefore I think that the total number of surjective functions should be $\frac{m!}{(m-n)!} \, n^{m-n}$.
Is this anything like correct or have I made a major mistake here?
-
## 2 Answers
Consider $f^{-1}(y)$, $y \in Y$. This set must be non-empty, regardless of $y$. What you're asking for is the number of ways to distribute the elements of $X$ into these sets.
The number of ways to distribute m elements into n non-empty sets is given by the Stirling numbers of the second kind, $S(m,n)$. However, each element of $Y$ can be associated with any of these sets, so you pick up an extra factor of $n!$: the total number should be $S(m,n) n!$
The Stirling numbers have interesting properties. They're worth checking out for their own sake.
-
I think this is why combinatorics is so interesting, you have to find just the right way of looking at the problem to solve it. I hadn't heard of the Stirling numbers, I wonder why they are not included more often in texts about functions? Is it not as useful to know how many surjective functions there are as opposed to how many functions in total or how many injective functions? – user50229 Dec 25 '12 at 13:02
Certainly. Often (as in this case) there will not be an easy closed-form expression for the quantity you're looking for, but if you set up the problem in a specific way, you can develop recurrence relations, generating functions, asymptotics, and lots of other tools to help you calculate what you need, and this is basically just as good. For instance, once you look at this as distributing m things into n boxes, you can ask (inductively) what happens if you add one more thing, to derive the recurrence $S(m+1,n) = nS(m,n) + S(m,n-1)$, and from there you're off to the races. – AndrewG Dec 25 '12 at 22:46
Can someone explain the statement "However, each element of $Y$ can be associated with any of these sets, so you pick up an extra factor of $n!$ – CodeKingPlusPlus Jan 23 at 5:26
@CodeKingPlusPlus everything is done up to permutation. You can think of each element of Y as a "label" on a corresponding "box" containing some elements of X. The labeling itself is arbitrary, and there are n! different ways to do it. – AndrewG Jan 23 at 7:12
This gives an overcount of the surjective functions, because your construction can produce the same onto function in more than one way.
Consider a simple case, $m=3$ and $n=2$. There are six nonempty proper subsets of the domain, and any of these can be the preimage of (say) the first element of the range, thereafter assigning the remaining elements of the domain to the second element of the range. In other words there are six surjective functions in this case.
But your formula gives $\frac{3!}{1!} 2^{3-2} = 12$.
Added: A correct count of surjective functions is tantamount to computing Stirling numbers of the second kind. The Wikipedia section under Twelvefold way has details. For small values of $m,n$ one can use counting by inclusion/exclusion as explained in the final portion of these lecture notes.
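For completeness, here is a quick computational cross-check (my addition, not part of either answer) using the inclusion/exclusion formula $\sum_{k=0}^{n}(-1)^k\binom{n}{k}(n-k)^m$, which equals $S(m,n)\,n!$:

```python
from math import comb

def surjections(m, n):
    """Number of onto functions from an m-set to an n-set, by inclusion/exclusion."""
    return sum((-1) ** k * comb(n, k) * (n - k) ** m for k in range(n + 1))

print(surjections(3, 2))  # 6, as argued above (the proposed formula gave 12)
print(surjections(4, 2))  # 14 = S(4,2) * 2! = 7 * 2
```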
-
1
Thanks for the useful links. I suppose the moral here is I should try simple cases to see if they fit the formula! – user50229 Dec 25 '12 at 13:04
http://mathoverflow.net/questions/49971/gromov-witten-invariants-counting-curves-passing-through-two-points
## Gromov-Witten invariants counting curves passing through two points
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let us say that a closed symplectic manifold $X$ is $GW_g$-connected if there is a nonvanishing Gromov-Witten invariant of the form $GW_{g,n}^{X,A}(\beta,point, point,\alpha_3,\ldots,\alpha_n)$ --in other words a nonvanishing invariant that formally counts (pseudo-)holomorphic curves of genus g passing through two generic points and satisfying some other constraints $\beta$ coming from $\bar{M}_{g,n}$ and $\alpha_i$ coming from other incidence conditions in $X$.
When $g=0$ this is something like saying that $X$ is rationally connected in the algebro-geometric sense, and there's been recent work (such as arXiv:1006.2486) relating to the question of whether the notions are the same. But in higher genus an analogous statement should fail--for instance in the product X of two elliptic curves there's a (reducible) genus two curve passing through any two points, but X is certainly not $GW_2$-connected.
Question: For which symplectic four-manifolds (or Kahler surfaces) $X$ does there exist g such that $X$ is $GW_g$-connected?
My personal motivation for this question comes from the fact that if $X$ is $GW_g$-connected for some g then by a result of Lu $X$ has finite Hofer-Zehnder capacity; however the question seems reasonably interesting aside from that. I restrict to dimension four here only because I expect doing so to make the question more tractable; insights into higher-dimensional cases would also be welcome.
Here are some preliminary observations in the direction of an answer:
It's an easy consequence of a result of McDuff that the only symplectic four-manifolds that are $GW_0$-connected are the rational ones (i.e. those related to $\mathbb{C}P^2$ by blowups and blowdowns).
For larger g, I've convinced myself that it's likely that any ruled surface over a curve of genus g ought to be $GW_g$-connected, though I haven't written down a careful proof--if someone knows where one can be found or knows that I'm wrong about this I'd be glad to hear about it.
I'd expect that symplectic four-manifolds with $b^+>1$ (for complex surfaces this means $p_g>0$) should rarely if ever have this property, since they typically don't have GW invariants counting curves with nontrivial incidence constraints. In fact for Kahler surfaces with $p_g>0$ this follows from a result of Lee and Parker.
For symplectic manifolds with $b^+=1$ which are not rational or ruled I'm not really sure what to expect. These usually have a decent supply of nontrivial Gromov-Witten invariants (as can be seen from Taubes-Seiberg-Witten theory), but it's not clear to me in general whether one should expect a nonvanishing invariant with two point constraints.
EDIT: Since originally posting this question I looked a little more carefully at the literature on four-manifolds with $b^+=1$, and found that work of Li, Liu, and others based on Taubes-Seiberg-Witten theory is enough to show that any closed symplectic four-manifold with $b^+=1$ is $GW_g$-connected for some $g$. I've provided details of the argument in the appendix of this preprint.
So it seems likely that the answer to the original question is that a closed symplectic four-manifold is $GW_g$-connected iff it has $b^+=1$: the backward implication is always true (by Li-Liu), and the forward implication is definitely true if one restricts to Kahler surfaces (by Lee-Parker), and there are also many non-Kahler examples for which it can be checked. It seems a good deal harder to say anything about higher-dimensional cases.
-
## 1 Answer
I am not familiar with symplectic geometry, so let's assume everything here is at least Kähler.
If $g=1$, then the condition $\langle [pt], [pt], \ldots \rangle^X_{1, [C]}\neq 0$ implies that the variety is uniruled, which is equivalent to $\langle [pt], \ldots \rangle^X_{0, [C']}\neq 0$ for some class $[C']$.
I hope it is true that for a rationally connected fibration over a curve of any genus, your condition $\langle \beta, [pt], [pt], \ldots \rangle^X_{g, [C]}\neq 0$ is always true. And it is true when the fiber dimension is at most $2$. Basically as long as you know that there is a section which gives non-zero GW invariant, you can glue this section with curves in a general fiber which is minimal among all curves with non-vanishing GW invariant $\langle [pt], [pt], \ldots \rangle$.
For ruled surface, what you said is true. The methods used in the paper here certainly work.
-
Thanks, Zhiyu! I wasn't aware of the fact that the condition when g=1 implied uniruledness. Is there a simple explanation for why this is? Any reason to think it might be true for higher genera? – Mike Usher Dec 21 2010 at 2:44
1
In the case of $g=1$, the condition implies that there is a curve of arithmetic genus 1 passing through 2 general points. If this is an irreducible embedded curve, then you can deform it with 1 point fixed. Then bend-and-break tells you that there is a rational curve through the fixed point. If this is not an irreducible smooth embedded curve, then there are components of genus 0 passing through one of the two general points. In any case, there is a rational curve through a general point. In general, I think you need a genus g curve with g+1 points. – Zhiyu Dec 21 2010 at 15:55
1
Sorry but you need to be a little bit more careful about the bend-and-break. It only works if you can deform the elliptic curve with a fixed complex structure. But if you can vary the complex structure, then it will specialize to a nodal rational curve. In this case you are still fine. – Zhiyu Dec 21 2010 at 16:28
http://en.wikipedia.org/wiki/Deceleration_parameter
# Deceleration parameter
The deceleration parameter $\! q$ in cosmology is a dimensionless measure of the cosmic acceleration of the expansion of space in a Friedmann–Lemaître–Robertson–Walker universe. It is defined by:
$q \ \stackrel{\mathrm{def}}{=}\ -\frac{\ddot{a} a }{\dot{a}^2}$
where $\! a$ is the scale factor of the universe and the dots indicate derivatives by proper time. The expansion of the universe is said to be "accelerating" if $\ddot{a}$ is positive (recent measurements suggest it is), and in this case the deceleration parameter will be negative.[1] The minus sign and name "deceleration parameter" are historical; at the time of definition $\! q$ was thought to be positive, now it is believed to be negative.
The Friedmann acceleration equation can be written as
$3\frac{\ddot{a}}{a} =-4 \pi G (\rho+3p)=-4\pi G(1+3w)\rho,$
where $\! \rho$ is the energy density of the universe, $\! p$ is its pressure, and $\! w$ is the equation of state of the universe.
This can be rewritten as
$q=\frac{1}{2}(1+3w)\left(1+K/(aH)^2\right)$
by using the first Friedmann equation, where $\! H$ is the Hubble parameter and $\! K=1,0$ or $\! -1$ depending on whether the universe is hyperspherical, flat or hyperbolic respectively.
The derivative of the Hubble parameter can be written in terms of the deceleration parameter:
$\frac{\dot{H}}{H^2}=-(1+q).$
Except in the speculative case of phantom energy (which violates all the energy conditions), all postulated forms of matter yield a deceleration parameter $\! q \ge -1$. Thus, any expanding universe should have a decreasing Hubble parameter and the local expansion of space is always slowing (or, in the case of a cosmological constant, proceeds at a constant rate, as in de Sitter space).
Observations of the cosmic microwave background demonstrate that the universe is very nearly flat, so:
$q=\frac{1}{2}(1+3w)$
This implies that the universe is decelerating for any cosmic fluid with equation of state $\! w$ greater than $\! -1/3$ (any fluid satisfying the strong energy condition does so, as does any form of matter present in the Standard Model, but excluding inflation). However observations of distant type Ia supernovae indicate that $\! q$ is negative; the expansion of the universe is accelerating. This is an indication that the gravitational attraction of matter, on the cosmological scale, is more than counteracted by negative pressure dark energy, in the form of either quintessence or a positive cosmological constant.
Before the first indications of an accelerating universe, in 1998, it was thought that the universe was dominated by dust with negligible equation of state, $\! w \approx 0$. This had suggested that the deceleration parameter was equal to one half; the experimental effort to confirm this prediction led to the discovery of possible acceleration.
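A small numerical illustration (not part of the original article) of $q=\frac{1}{2}(1+3w)$ for a flat universe, covering the dust case mentioned above:

```python
# q = (1 + 3w)/2 for a spatially flat universe
cases = [("dust", 0.0), ("radiation", 1.0 / 3.0), ("cosmological constant", -1.0)]
for name, w in cases:
    q = 0.5 * (1.0 + 3.0 * w)
    print(f"{name:22s} w = {w:+.2f}   q = {q:+.2f}")
```

Dust reproduces the historically expected $q=1/2$, while $w=-1$ gives $q=-1$, consistent with a de Sitter-like expansion at a constant rate.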
## References
1. Jones, Mark H.; Robert J. Lambourne (2004). An Introduction to Galaxies and Cosmology. Cambridge University Press. p. 244. ISBN 978-0-521-83738-5.
http://nrich.maths.org/277/index
### Root to Poly
Find the polynomial p(x) with integer coefficients such that one solution of the equation p(x)=0 is $1+\sqrt 2+\sqrt 3$.
### Common Divisor
Find the largest integer which divides every member of the sequence $1^5-1,\ 2^5-2,\ 3^5-3,\ \ldots,\ n^5-n$.
### Janine's Conjecture
Janine noticed, while studying some cube numbers, that if you take three consecutive whole numbers and multiply them together and then add the middle number of the three, you get the cube of the middle number. Does this always work? Can you prove or disprove this conjecture?
# Two Cubes
##### Stage: 4 Challenge Level:
Two cubes, each with integral side lengths, have a combined volume equal to the total of the lengths of their edges. How big are the cubes? [If you find a result by 'trial and error' you'll need to prove you have found all possible solutions.]
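One might first probe the equation behind the problem, $a^3+b^3=12(a+b)$ (each cube has 12 edges), by brute force. The sketch below only finds candidates within an assumed bound; it does not replace the completeness proof the problem asks for:

```python
# search for integer sides with combined volume equal to total edge length
LIMIT = 100   # arbitrary search bound
for a in range(1, LIMIT):
    for b in range(a, LIMIT):
        if a**3 + b**3 == 12 * (a + b):
            print(a, b)
```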
http://physics.stackexchange.com/users/4552/ben-crowell?tab=activity
# Ben Crowell
reputation 6,002
badges 1,234
website lightandmatter.com
member for 1 year, 10 months
seen 11 mins ago
profile views 935
I teach physics at Fullerton College, a community college in Southern California. I have an undergrad degree in math and physics from Berkeley and a PhD in physics from Yale. Back when I was doing research, my field was experimental low-energy nuclear physics.
# 983 Actions
| When | Action | Detail |
|-----|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 4m | comment | What causes the permittivity and permeability of vacuum?@KoenVanDamme: I don't think the interpretation given in your comment is correct. Materials don't slow down electromagnetic waves by partially blocking them. When an EM wave encounters a material, the charges in the material oscillate and produce a secondary wave. The superposition of the two waves is a wave that moves at less than $c$. |
| 15m | comment | What counts as a measurement? |
| 21m | comment | Is every quantum measurement reducible to measurements of position and time?@DanStahlke: The "ink" argument is cute, but the nontrivial issue here is that particle spins live in their own vector space, which has no obvious connection to the t-x-y-z vector space of the spacetime that classical physics describes and that we experience in daily life. There are two steps: (1) showing that it is possible to connect the spin vector space to our experience in any way at all, and (2) showing that any such experience reduces to a position measurement. The ink argument takes care of #2, but it's #1 that's nontrivial. |
| 2h | comment | What are units actually?You can actually get away with taking the log of something that has units. Changing the units just introduces an arbitrary additive constant. In a context where an additive constant doesn't matter, there's no problem with doing this. |
| 2h | revised | Doubts about the definition of massadded 83 characters in body |
| 3h | revised | Doubts about the definition of massadded 101 characters in body |
| 3h | answered | Doubts about the definition of mass |
| 3h | comment | Why distinguish between row and column vectors? |
| 19h | comment | Is every quantum measurement reducible to measurements of position and time?@Lagerbaer: Beat me to it! |
| 19h | comment | Is every quantum measurement reducible to measurements of position and time?I don't see how it applies to spin measurements. Maybe the assumption is that you can impose an external field, and then do something like Stern-Gerlach? |
| 19h | comment | How can I determine whether the mass of an object is evenly distributed?The non-rigid version is possibly useful in practice, but doesn't seem likely to lend itself to a definite answer that could be given on this site. BTW, it's not obvious to me how to generalize this to relativity, since you basically can't have rigid objects in relativity. |
| 20h | comment | How can I determine whether the mass of an object is evenly distributed?In GR you could spin the sphere. Then Birkhoff wouldn't apply, and I think you'd get geodetic and frame-dragging effects that might be different in the two cases. Even in SR, the rotational dynamics are different, and in any case it's not possible for it to be perfectly rigid as assumed. |
| 20h | revised | How can I determine whether the mass of an object is evenly distributed?added 102 characters in body |
| 20h | comment | How can I determine whether the mass of an object is evenly distributed?I'm construing the question to mean that we assume the object is rigid. Otherwise we could shake it, probe it with ultrasound, or find out that it contained gyroscopes or Mexican jumping beans. |
| 21h | comment | How can I determine whether the mass of an object is evenly distributed?@Mike: By the parallel axis theorem, you can't get any additional information that way. |
| 21h | revised | How can I determine whether the mass of an object is evenly distributed?simpler, better example with same shape |
| 21h | comment | How can I determine whether the mass of an object is evenly distributed?Nice. This suggests a way of simplifying my own example, which I'll do. |
| 21h | comment | How can I determine whether the mass of an object is evenly distributed?This doesn't work. The buoyant force only depends on the weight of the displaced water. |
| 22h | comment | How can I determine whether the mass of an object is evenly distributed?@Manishearth: The boxes can be made to have the same uniform density as the uniform object. I've edited the answer to show this explicitly. |
| 22h | revised | How can I determine whether the mass of an object is evenly distributed?modified to clarify issues raised by Manishearth |
http://www.nag.com/numeric/CL/nagdoc_cl23/html/E02/e02bbc.html
# NAG Library Function Documentnag_1d_spline_evaluate (e02bbc)
## 1 Purpose
nag_1d_spline_evaluate (e02bbc) evaluates a cubic spline from its B-spline representation.
## 2 Specification
#include <nag.h>
#include <nage02.h>
void nag_1d_spline_evaluate (double x, double *s, Nag_Spline *spline, NagError *fail)
## 3 Description
nag_1d_spline_evaluate (e02bbc) evaluates the cubic spline $s\left(x\right)$ at a prescribed argument $x$ from its augmented knot set ${\lambda }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,\stackrel{-}{n}+7$, (see nag_1d_spline_fit_knots (e02bac)) and from the coefficients ${c}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,q$, in its B-spline representation
$s\left(x\right)=\sum_{i=1}^{q}c_{i}N_{i}\left(x\right)$
Here $q=\stackrel{-}{n}+3$, where $\stackrel{-}{n}$ is the number of intervals of the spline, and ${N}_{i}\left(x\right)$ denotes the normalized B-spline of degree 3 defined upon the knots ${\lambda }_{i},{\lambda }_{i+1},\dots ,{\lambda }_{i+4}$. The prescribed argument $x$ must satisfy ${\lambda }_{4}\le x\le {\lambda }_{\stackrel{-}{n}+4}$.
It is assumed that ${\lambda }_{\mathit{j}}\ge {\lambda }_{\mathit{j}-1}$, for $\mathit{j}=2,3,\dots ,\stackrel{-}{n}+7$, and ${\lambda }_{\stackrel{-}{n}+4}>{\lambda }_{4}$.
The method employed is that of evaluation by taking convex combinations due to de Boor (1972). For further details of the algorithm and its use see Cox (1972) and Cox (1978).
It is expected that a common use of nag_1d_spline_evaluate (e02bbc) will be the evaluation of the cubic spline approximations produced by nag_1d_spline_fit_knots (e02bac). A generalization of nag_1d_spline_evaluate (e02bbc) which also forms the derivative of $s\left(x\right)$ is nag_1d_spline_deriv (e02bcc). nag_1d_spline_deriv (e02bcc) takes about 50% longer than nag_1d_spline_evaluate (e02bbc).
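The de Boor recurrence is simple enough to sketch in a few lines. The following is a pure-Python illustration of the convex-combination scheme, not the NAG C implementation; the knots and coefficients are taken from the example in Section 9:

```python
def find_interval(t, x, p):
    """Largest knot index j with t[j] <= x, clamped to a valid interval."""
    n = len(t) - p - 1            # number of B-spline coefficients
    for j in range(p, n):
        if x < t[j + 1]:
            return j
    return n - 1                  # right endpoint falls in the last interval

def de_boor(x, t, c, p=3):
    """Evaluate the spline at x by repeated convex combinations."""
    j = find_interval(t, x, p)
    d = [c[i + j - p] for i in range(p + 1)]
    for r in range(1, p + 1):
        for i in range(p, r - 1, -1):
            alpha = (x - t[i + j - p]) / (t[i + 1 + j - r] - t[i + j - p])
            d[i] = (1.0 - alpha) * d[i - 1] + alpha * d[i]
    return d[p]

lamda = [1.0, 1.0, 1.0, 1.0, 3.0, 6.0, 8.0, 9.0, 9.0, 9.0, 9.0]
coeff = [1.0, 2.0, 4.0, 7.0, 6.0, 4.0, 3.0]
for x in range(1, 10):
    print(x, de_boor(float(x), lamda, coeff))
```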
## 4 References
Cox M G (1972) The numerical evaluation of B-splines J. Inst. Math. Appl. 10 134–149
Cox M G (1978) The numerical evaluation of a spline from its B-spline representation J. Inst. Math. Appl. 21 135–143
Cox M G and Hayes J G (1973) Curve fitting: a guide and suite of algorithms for the non-specialist user NPL Report NAC26 National Physical Laboratory
de Boor C (1972) On calculating with B-splines J. Approx. Theory 6 50–62
## 5 Arguments
1: x – doubleInput
On entry: the argument $x$ at which the cubic spline is to be evaluated.
Constraint: $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[3\right]\le {\mathbf{x}}\le \mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\mathbf{spline}\mathbf{\to }\mathbf{n}-4\right]$.
2: s – double *Output
On exit: the value of the spline, $s\left(x\right)$.
3: spline – Nag_Spline *
Pointer to structure of type Nag_Spline with the following members:
n – IntegerInput
On entry: $\stackrel{-}{n}+7$, where $\stackrel{-}{n}$ is the number of intervals (one greater than the number of interior knots, i.e., the knots strictly within the range ${\lambda }_{4}$ to ${\lambda }_{\stackrel{-}{n}+4}$) over which the spline is defined.
Constraint: $\mathbf{spline}\mathbf{\to }\mathbf{n}\ge 8$.
lamda – doubleInput
On entry: a pointer to which memory of size $\mathbf{spline}\mathbf{\to }\mathbf{n}$ must be allocated. $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[j-1\right]$ must be set to the value of the $j$th member of the complete set of knots, ${\lambda }_{j}$ for $j=1,2,\dots ,\stackrel{-}{n}+7$.
Constraint: the ${\lambda }_{j}$ must be in non-decreasing order with $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\mathbf{spline}\mathbf{\to }\mathbf{n}-4\right]>\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[3\right]$.
c – doubleInput
On entry: a pointer to which memory of size $\mathbf{spline}\mathbf{\to }\mathbf{n}-4$ must be allocated. $\mathbf{spline}\mathbf{\to }\mathbf{c}$ holds the coefficient ${c}_{\mathit{i}}$ of the B-spline ${N}_{\mathit{i}}\left(x\right)$, for $\mathit{i}=1,2,\dots ,\stackrel{-}{n}+3$.
Under normal usage, the call to nag_1d_spline_evaluate (e02bbc) will follow a call to nag_1d_spline_fit_knots (e02bac), nag_1d_spline_interpolant (e01bac) or nag_1d_spline_fit (e02bec). In that case, the structure spline will have been set up correctly for input to nag_1d_spline_evaluate (e02bbc).
4: fail – NagError *Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_ABSCI_OUTSIDE_KNOT_INTVL
On entry, x must satisfy $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[3\right]\le {\mathbf{x}}\le \mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\mathbf{spline}\mathbf{\to }\mathbf{n}-4\right]$:
$\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[3\right]=〈\mathit{\text{value}}〉$, ${\mathbf{x}}=〈\mathit{\text{value}}〉$, $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[〈\mathit{\text{value}}〉\right]=〈\mathit{\text{value}}〉$.
In this case s is set arbitrarily to zero.
NE_INT_ARG_LT
On entry, $\mathbf{spline}\mathbf{\to }\mathbf{n}$ must not be less than 8: $\mathbf{spline}\mathbf{\to }\mathbf{n}=〈\mathit{\text{value}}〉$.
## 7 Accuracy
The computed value of $s\left(x\right)$ has negligible error in most practical situations. Specifically, this value has an absolute error bounded in modulus by $18×{c}_{\mathrm{max}}×$ machine precision, where ${c}_{\mathrm{max}}$ is the largest in modulus of ${c}_{j},{c}_{j+1},{c}_{j+2}$ and ${c}_{j+3}$, and $j$ is an integer such that ${\lambda }_{j+3}\le x\le {\lambda }_{j+4}$. If ${c}_{j},{c}_{j+1},{c}_{j+2}$ and ${c}_{j+3}$ are all of the same sign, then the computed value of $s\left(x\right)$ has a relative error not exceeding $20×$ machine precision in modulus. For further details see Cox (1978).
## 8 Further Comments
The time taken by nag_1d_spline_evaluate (e02bbc) is approximately $C×\left(1+0.1×\mathrm{log}\left(\stackrel{-}{n}+7\right)\right)$ seconds, where $C$ is a machine-dependent constant.
Note: the function does not test all the conditions on the knots given in the description of $\mathbf{spline}\mathbf{\to }\mathbf{lamda}$ in Section 5, since to do this would result in a computation time approximately linear in $\stackrel{-}{n}+7$ instead of $\mathrm{log}\left(\stackrel{-}{n}+7\right)$. All the conditions are tested in nag_1d_spline_fit_knots (e02bac), however, and the knots returned by nag_1d_spline_interpolant (e01bac) or nag_1d_spline_fit (e02bec) will satisfy the conditions.
## 9 Example
Evaluate at 9 equally-spaced points in the interval $1.0\le x\le 9.0$ the cubic spline with (augmented) knots 1.0, 1.0, 1.0, 1.0, 3.0, 6.0, 8.0, 9.0, 9.0, 9.0, 9.0 and normalized cubic B-spline coefficients 1.0, 2.0, 4.0, 7.0, 6.0, 4.0, 3.0.
The example program is written in a general form that will enable a cubic spline with $\stackrel{-}{n}$ intervals, in its normalized cubic B-spline form, to be evaluated at $m$ equally-spaced points in the interval $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[3\right]\le x\le \mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\stackrel{-}{n}+3\right]$. The program is self-starting in that any number of datasets may be supplied.
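As an informal cross-check outside the library, the spline of this example can also be evaluated with SciPy, assuming SciPy's B-spline basis coincides with the normalized B-splines used here (a sketch, not the e02bbce example program itself):

```python
import numpy as np
from scipy.interpolate import BSpline

t = [1, 1, 1, 1, 3, 6, 8, 9, 9, 9, 9]   # augmented knots from this example
c = [1, 2, 4, 7, 6, 4, 3]               # normalized cubic B-spline coefficients
s = BSpline(t, c, k=3)                  # len(t) == len(c) + k + 1, as required

x = np.linspace(1.0, 9.0, 9)            # 9 equally spaced evaluation points
print(np.column_stack([x, s(x)]))       # s(1) = 1 and s(9) = 3 at the clamped ends
```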
### 9.1 Program Text
Program Text (e02bbce.c)
### 9.2 Program Data
Program Data (e02bbce.d)
### 9.3 Program Results
Program Results (e02bbce.r)
http://physics.stackexchange.com/questions/tagged/universe?page=4&sort=newest&pagesize=15
# Tagged Questions
The universe tag has no wiki summary.
2answers
46 views
### Is there evidence of a larger universe?
While mapping out the position, distance and movement of galaxies and quasars at the limits of the observable universe, have astronomers ever observed the motion of these objects due to gravitational ...
2answers
102 views
### Is there a consensus on the fate of our universe?
We all know that our universe is inflating from what is known as the Big Bang. However, will our universe continue to inflate at the current rate? Or after reaching a maximum size, will it collapse in ...
3answers
2k views
### Why is the universe so big?
The Universe is approximately 13.7 billion years old. But yet it is 80 billion light years across. Isn't this a contradiction?
0answers
214 views
### Presence Of an Another Universe [duplicate]
Possible Duplicate: Experimental evidence for parallel universes. Is there a universe similar to ours somewhere else? I mean, I had heard from Einstein's theories that, actually, when I am ...
3answers
1k views
### What has been proved about the big bang, and what has not?
Ok so the universe is in constant expansion, that has been proven, right? And that means that it was smaller in the past.. But what's the smallest size we can be sure the universe has ever had? I ...
3answers
104 views
### What is meant when it is said that the universe is homogeneous and isotropic?
It is sometimes said that the universe is homogeneous and isotropic. What is meant by each of these descriptions? Are they mutually exclusive, or does one require the other? And what implications rise ...
3answers
45 views
### Seeing cosmic activity now, really means it happens millions/billions of years ago?
A recent report describes a cosmic burst 3.8 billion light years away. It is written as though it is happening now. However, my question is: if the event is 3.8 billion light years away, doesn't that mean ...
1answer
335 views
### Universe Expansion as an absolute time reference
Why do we call the Hubble "constant" a constant? If the universe were really expanding, then the Hubble "constant" should change, becoming smaller and smaller with "time". Other example/view ...
2answers
1k views
### Is reality discrete at the quantum level? (…and what does it imply not only mathematically?)
On a quantum scale the smallest unit is the Planck scale, which is a discrete measure. There are several questions that come to mind: Does that mean that particles can only live in a discrete grid-like ...
1answer
958 views
### How can something finite become infinite?
How can the universe become infinite in spatial extent if it started as a singularity, wouldn't it take infinite time to expand into an infinite universe?
7answers
1k views
### How many bits are needed to simulate the universe?
This is not the same as: How many bytes can the observable universe store? The Bekenstein bound tells us how many bits of data can be stored in a space. Using this value, we can determine the ...
2answers
239 views
### Expanding universe
If we know the universe is expanding in whatever direction we look, can't we reasonably estimate where the 'center' of the Universe is? Is the rate of expansions in all directions the same?
4answers
719 views
### How do we know the size of the universe?
Ok, from astronomical observations we can tell that the observable matter is separating - so rewind the clock about 13.7 billion years and it was all at a single point. However, how do we distinguish ...
4answers
354 views
### Origins of the universe questions
If the universe is expanding, what is it expanding into? Similarly when the big bang happened where and how did it occur? - Where did the energy come from? Energy can not be created or destroyed does ...
1answer
447 views
### Alternative theories to the big bang?
Hey all, are there any theories out there on the origins (or infinite existence of) the universe beside the big bang that actually adhere to current scientific knowledge and fact?
2answers
463 views
### Entropy of an empty universe
After watching the first episode of wonders of the solar system, one question came up which is not explained. Bryan Cox says that ultimately the universe will be devoid of matter, so not even a ...
1answer
312 views
### Is it possible for one side of the universe to “meet” the other?
I've variously heard the shape of the universe being described as multi-dimensional, like a helix or mobius strip, and super string theorem seems to say there are lots of universes all piled up next ...
1answer
173 views
### Gamma Ray Bubble at the center of our galaxy seen by Fermi Telescope
How could we measure high energy photons without measuring them? I can't understand how we can "see" those Gamma Ray Bubbles if they are not reaching here. In this graph from NASA you can see ...
1answer
1k views
### List of theories supporting origin of universe [closed]
Big Bang theory is widely accepted theory when it comes to origin of universe. What other really compelling theories are out there explaining/supporting the origin of universe. I know many people ...
3answers
184 views
### Is the “far” universe expanding more quickly?
I'm reading this silly Time article: http://www.time.com/time/health/article/0,8599,2044517,00.html And they say "Even at its best, the 20-year-old telescope never had the acuity to peer so far into ...
1answer
247 views
### What happened to the light from all the galaxies visible from an Earth telescope?
Supposing it's possible to see some distant galaxies with an Earth telescope, then, at the tip of the telescope lens there are photons coming from the distant galaxy... So, if I extend my hand in a ...
6answers
840 views
### What are the most realistic ways of high speed space propulsion?
Liquid and solid chemical fuels in rockets are very expensive and inefficient. I have heard of solar sails but what are the most realistic space travel fuels that will be used in the future to get ...
5answers
732 views
### How many times has the “stuff” in our solar system been recycled from previous stars?
Is there a cosmologist in the house? I've got a basic understanding (with some degree of error) of some simple facts: The Universe is a little over 13 billion years old. Our galaxy is almost that ...
2answers
123 views
### can a system with nested building blocks (atoms,cells) NOT be “fine tuned”? [closed]
Never mind whether the universe is "fine tuned" for anything in particular; just the idea that there is a nested hierarchy seems incredibly constraining on the outcome, and anything but accidental.
8answers
569 views
### Is relativity necessary for the existence of life?
If the universe didn't have the relativity principle, would it be able to support life? Life consists of very complicated organisms. The operation of these organisms depends on the laws of physics. ...
3answers
342 views
### Why did decoherence start in the first place?
We learn that the quantum wave function $\Psi$ collapses when it interacts with a classic object (measurement). My question is: Why are there classic objects after all, how did it all start? In a ...
4answers
913 views
### Shape of the universe?
What is the exact shape of the universe? I know of the balloon analogy, and the bread with raisins in it. These clarify some points, like how the universe can have no centre, and how it can expand ...
3answers
1k views
### How long will the Universe's hydrogen reserves last for?
I recently became really interested in learning about physics and cosmology, but I still know very little. Hopefully someone with more knowledge will be able to shed some light on my questions. Here ...
4answers
1k views
### Did time exist before the creation of matter in the universe?
Does time stretch all the way back for infinity or was there a point when time appears to start in the universe? I remember reading long ago somewhere that according to one theory time began shortly ...
5answers
2k views
### Does red shift evidence necessarily imply that the universe started from a singularity?
We are taught that the universe began as a singularity - an infinitely small and infinitely dense point. At the beginning of time there was a 'Big Bang' or, more accurately, 'Inflation'. The main ...
1answer
554 views
### Supermassive black holes with the density of the Universe
This question was inspired by the answer to the question "If the universe were compressed into a super massive black hole, how big would it be" Assume that we have a matter with a uniform density ...
0answers
278 views
### What is the name of the physical space enveloping all universes? [closed]
Please excuse my question, as i don't come from a physics background, but i was really wondering. Assuming we are in one of numerous universes which all have physical dimensions: What is the name of ...
0answers
2k views
### What's the relationship between mass and time? [closed]
[This question has arisen from a wish to understand an end-of-universe scenario: heat death] Are time and mass intrinsically linked? If so, does time "run slower" (whatever that may mean) in a ...
6answers
1k views
### What is known about the topological structure of spacetime?
General relativity says that spacetime is a Lorentzian 4-manifold $M$ whose metric satisfies Einstein's field equations. I have two questions: What topological restrictions do Einstein's equations ...
6answers
2k views
### Why is the mapped universe shaped like an hourglass?
I've watched a video from the American National History Museum entitled The Known Universe. The video shows a continuous animation zooming out from earth to the entire known universe. It claims to ...
1answer
138 views
### Four-dimensionalism vs energy economy
Four-dimensionalism claims that the universe is basically one huge space-time worm and that everything exists at once (however you want to say that since "internal time" is then just another ...
1answer
282 views
### What if the Universe is made of recursions? [closed]
What do you guys think of Scott Adams' theory that the Universe might be made of recursions, created by observation? I find it fascinating but have no strong opinion yet if it makes sense.
7answers
2k views
### Does Quantum Physics really suggests this universe as a computer simulation? [closed]
I was reading about interesting article here which suggests that our universe is a big computer simulation and the proof of it is a Quantum Physics. I know quantum physics tries to provide some ...
6answers
981 views
### Does anything exist in the intergalactic space?
I am a part time physics enthusiast and I seldom wonder about the intergalactic space. First, it is my perception that all(almost all) the objects in the universe are organized in the forms of ...
http://mathoverflow.net/questions/67171/hilbert-mumford-criterion-and-closedness/67197
## Hilbert-Mumford criterion and closedness
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
A version of the Hilbert-Mumford criterion states the following: Let $G$ be a linearly reductive group and $V$ a representation of $G$ over a field $k$ (alg. closed, char. zero). Suppose that $y \in \overline{Gx} - Gx$. Then, there is a one-parameter subgroup $\lambda : k^\times \to G$ such that $$\lim_{t\to 0} \lambda(t)x \in \overline{Gy}.$$
My question is: Is there an example where every one-parameter subgroup misses the orbit of $y$? I.e., is there an example where, for every $\lambda: k^\times \to G$, $$\lim_{t\to 0} \lambda(t)x \in \overline{Gy} \implies \lim_{t\to 0} \lambda(t)x \in \overline{Gy}-Gy?$$ If $G$ is a torus the answer is "no". What if $V$ is replaced by a more general scheme $X$ that is not itself a representation?
-
## 1 Answer
I have a counterexample now, thanks to some notes of Zinovy Reichstein I found. I think the counterexample is paraphrased as follows: Let $V$ be an irreducible representation of $G$ and suppose $x$ does not have a highest weight vector $y$ in its orbit, but $y \in \overline{Gx}$. There will be no way to get to $y$ from $x$ using only semi-simple elements.
To be precise (and to take the example from Reichstein) let $G = {\rm SL}_2(\mathbb{C})$ and $V = {\rm{Sym}} ^n \mathbb{C}^2$, $n \geq 2$. If $a$ and $b$ are the standard basis vectors of $\mathbb{C}^2$ then $a^{n-1} b \in V$ has the highest weight vector $a^n$ in its orbit closure, but not its orbit. Now check that there is no one-parameter subgroup $\lambda(t)$ such that $\lambda(t) a^{n-1} b \in G a^n$.
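Here is a sketch of that last verification, using the standard fact that every one-parameter subgroup of ${\rm SL}_2(\mathbb{C})$ is conjugate to $\mu_m(t)=\mathrm{diag}(t^m,t^{-m})$, with the convention $\mu_m(t)\cdot a = t^m a$ and $\mu_m(t)\cdot b = t^{-m} b$. Since $\lim_{t\to 0} g\mu_m(t)g^{-1}x = g\cdot\lim_{t\to 0}\mu_m(t)(g^{-1}x)$ and $G\,a^n$ is $G$-stable, it suffices to consider diagonal subgroups acting on an arbitrary element $f=\sum_{i=0}^n c_i\,a^i b^{n-i}$ of the orbit $G\,a^{n-1}b$. Then $$\mu_m(t)\cdot f = \sum_{i=0}^n c_i\,t^{m(2i-n)}\,a^i b^{n-i},$$ so whenever the limit as $t\to 0$ exists, it is either $0$, or a multiple of $a^{n/2}b^{n/2}$ (possible only for $m\neq 0$ and $n$ even), or $f$ itself (for $m=0$). None of these lies in $G\,a^n$: that orbit consists of $n$-th powers of nonzero linear forms, while $a^{n/2}b^{n/2}$ has two distinct roots for $n\geq 2$, and $f$ stays in the orbit of $a^{n-1}b$.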
-
http://mathoverflow.net/questions/75413/angle-btween-coordinate-vector-and-normal-vector-of-facet-in-a-convex-polytope-a/75432
## Definitions
Let $\mathcal{C}$ be a convex polytope in $\mathbb{R}^{D}$ with $K$ facets $F_{1},\ldots,F_{K}$. I denote the normal vector of the $k^\mathrm{th}$ facet as $\mathbf{w}_k=(w_{k1},\ldots,w_{kD})$.
In the sequel, I will use $k$ as the index of $K$ facets and $d$ as the index of $D$ dimensions. Namely, $d\in \{1,\ldots,D\}$ and $k\in \{1,\ldots,K\}$.
Let $\mathbf{p}=(p_{1},\ldots,p_{D})$ be a point in $\mathbb{R}^{D}$. Define
$L_{d}=\{\mathbf{p}+\theta\mathbf{u}_{d}|\theta\in \mathbb{R}\},$
where $\mathbf{u}_{d}$ is the vector of the form $(0,\ldots,0,1,0,\ldots,0)$ with a $1$ only at the $d^{\mathrm{th}}$ dimension.
For $k=1,\ldots, K$, define
$G_{k}=\{d|L_{d}\cap F_{k}\neq \emptyset\}.$
Define $f:\mathbb{R}^{D}\times\mathbb{R}^{D}\rightarrow [0,1]$ as
$f(\mathbf{x},\mathbf{y})=\frac{|\mathbf{x}^\mathrm{T}\mathbf{y}|}{\left\|\mathbf{x}\right\|\left\|\mathbf{y}\right\|}.$
## My conjecture
For any $\mathbf{p}\in \mathrm{int}\mathcal{C}$, there exist $d$ and $k$ such that $d\in G_{k}$ and $f(\mathbf{u}_{d},\mathbf{w}_{k})=\max \{f(\mathbf{u}_{1},\mathbf{w}_{k}),\ldots,f(\mathbf{u}_{D},\mathbf{w}_{k})\}$.
Can anyone provide a counterexample?
### An illustrative example in $\mathbb{R}^2$
In particular, if we restrict ourselves to $\mathbb{R}^2$, the above conjecture can be restated as follows:
Let $p$ be a point in the interior of a convex polygon $\mathcal{C}$. Let $L_x$ and $L_y$ be two lines through $p$, parallel to the $x$-axis and $y$-axis respectively. Among all acute angles at the intersections of $L_x$, $L_y$ with $\partial \mathcal{C}$, there is at least one angle $\geq 45°$.
I haven't found any counterexample in $\mathbb{R}^2$, and that's why I'm considering generalising this conjecture to higher-dimensional spaces.
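Here is a minimal numerical sanity check of the 2D statement (a sketch assuming SciPy is available; `acute_angle_deg` and `crossing_angles` are ad hoc helper names):

```python
import numpy as np
from scipy.spatial import ConvexHull

def acute_angle_deg(u, e):
    """Acute angle between a line with direction u and an edge with direction e."""
    c = abs(u @ e) / (np.linalg.norm(u) * np.linalg.norm(e))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

def crossing_angles(verts, p):
    """Acute angles where the axis-parallel lines through p cross the polygon boundary."""
    angles = []
    for a, b in zip(verts, np.roll(verts, -1, axis=0)):
        if (a[1] - p[1]) * (b[1] - p[1]) < 0:      # horizontal line L_x crosses edge ab
            angles.append(acute_angle_deg(np.array([1.0, 0.0]), b - a))
        if (a[0] - p[0]) * (b[0] - p[0]) < 0:      # vertical line L_y crosses edge ab
            angles.append(acute_angle_deg(np.array([0.0, 1.0]), b - a))
    return angles

rng = np.random.default_rng(0)
for _ in range(10000):
    pts = rng.standard_normal((10, 2))
    verts = pts[ConvexHull(pts).vertices]          # hull vertices in boundary order
    p = verts.mean(axis=0)                         # a point in the interior
    assert max(crossing_angles(verts, p)) >= 45.0 - 1e-9   # no counterexample found
```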
Finally, any problem reformulation is also welcome.
-
1
Your conjecture (at this writing) asserts what appears to me to be an obvious equality, and says nothing about the maximum value with respect to $d$. You might edit your conjecture to be more in line with your example. Gerhard "Ask Me About System Design" Paseman, 2011.09.14 – Gerhard Paseman Sep 14 2011 at 15:46
In case Gerhard's point isn't clear, your conjecture has the form, $c = \max \lbrace a, b, c, d, e, \ldots \rbrace$: it only states that the max of a finite set of numbers is one of the numbers. – Joseph O'Rourke Sep 14 2011 at 16:04
## 1 Answer
$\def\u{{\bf u}}\def\p{{\bf p}}\def\q{{\bf q}}$ Consider all the points of intersection of the lines $L_d$ with the hyperplanes $H_k$ defining the facets $F_k$. Let $\q$ be the one closest to $\p$; suppose $\q=L_d\cap H_k$. Then $(d,k)$ is a desired pair.
Firstly, $\q$ should belong to $F_k$, otherwise the segment $[\p,\q]$ would intersect the boundary of a polytope at a point on another facet; thus $d\in G_k$. Next, let $\q_1,\dots,\q_D$ be the intersection points of the hyperplane $H_k$ with the lines $L_1,\dots,L_D$ (some of these points may be ideal). Then $\|\p-\q\|=\min_i\|\p-\q_i\|$ which is equivalent to your relation.
EDIT: Surely, the convexity condition IS necessary.
-
Thanks Ilya Bogdanov for this nice and clear solution. – han Sep 15 2011 at 9:55
http://physics.stackexchange.com/questions/tagged/discrete
# Tagged Questions
The discrete tag has no wiki summary.
1answer
20 views
### How do we know that time and distance are not discrete?
I know that it is believed that energy is discrete, in that it travels in quanta. I was wondering if there is any evidence which either proves or disproves something similar with both time and ...
0answers
27 views
### Problem with Discrete Parseval's Theorem [migrated]
I think I must be missing something obvious, but I can't for the life of me see what it is. The discrete version of Parseval's theorem can be written like this: $\sum_{n=0}^{N-1} |x[n]|^2 = \dots$
1answer
52 views
### Topological vs. non-topological noetherian charges
What (if any) is the relationship between the conserved (non-topological) noetherian charges and topological charges? Namely, is there any "generalization" of the Noether's first theorem that includes ...
1answer
55 views
### Gauging discrete symmetries
I read somewhere what performing an orbifolding (i.e. imposing a discrete symmetry on what would otherwise be a compactification torus) is equivalent to "gauging the discrete symmetry". Can anybody ...
2answers
144 views
### Integer physics
Are there interesting (aspects of) problems in modern physics that can be expressed solely in terms of integer numbers? Bonus points for quantum mechanics.
0answers
39 views
### Has Time in the Universe been found to be Discrete or Continuous? [duplicate]
I have a question: has the Universe been found to come in discrete quanta, as in quantum physics, or is it continuous in nature? I was wondering if time was like a continuum, like the fluid in a soft ...
0answers
16 views
### Implementing Explicit formulation of 1D wave equation in Matlab #FiniteElements #FiniteDifferences [migrated]
So the theory is straightforward. We have: $\frac{d^2U}{(\Delta t)^2}=c^2 \frac{d^2U}{(\Delta x)^2}$. Discretizing it gives: $\frac{U(i+1,j)- 2U(i,j) + U(i-1,j)}{(\Delta t)^2} = c^2 \dots$
0answers
66 views
### What were Feynman's objection(s) to a cubic lattice universe? [duplicate]
In this video of Feynman discussing the scientific method, starting at around eight minutes and 30 seconds, Feynman describes the proposition that space consists of a cubic lattice of points (as ...
1answer
69 views
### At the smallest level, how do things move?
When we see something moving on a screen it's usually just pixels being turned off at one location and turned on at another. For example: This would render a dot moving from A to C. Turn on pixel ...
0answers
78 views
### Do semiclassical GR and charge quantisation imply magnetic monopoles?
Assuming charge quantisation and semiclassical gravity, would the absence of magnetically charged black holes lead to a violation of locality, or some other inconsistency? If so, how? (I am not ...
1answer
111 views
### Dirac magnetic monopoles and electric charge quantization
Wikipedia describes how assuming the existence of a single magnetic monopole leads to electric charge quantization. But what if there's more than one? The same argument would apply to each of them ...
2answers
144 views
### A universe of angular momentum?
I read this on Wikipedia: [...] That most tangible way of expressing the essence of quantum mechanics is that we live in a universe of quantized angular momentum and the Planck constant is the ...
1answer
168 views
### Is Space-Time Quantisation necessary or even meaningful?
It is believed among people working on Quantum Gravity that space-time must be quantised at the Planck scale. Although it is very hard to verify such a proposition, it is interesting from a ...
3answers
187 views
### Is velocity quantized?
If velocity is not quantized, then do moving objects have 'infinitely decimal place' velocities which we just can't measure to infinite decimal places? From my understanding the quantization of ...
1answer
76 views
### Is there a finite unit of distance that we cannot divide past?
If distance could be divided into an infinite number of units or points, then it seems to me that motion would be impossible, since a meter, for instance, having an infinite number of points within it (and the ...
4answers
181 views
### What entities in Quantum Mechanics are known to be “not quantized”?
Since all the traditional "continuous" quantities like time, energy, momentum, etc. are taken to be quantized implying that derived quantities will also be quantized, I was wondering if Quantum ...
0answers
79 views
### Division algebras $(\mathbb{R,C,H,O})$ and discrete symmetry [closed]
I once saw a statement about the relation between division algebra(which means you can define a division in this algebra, there is a theorem saying we only have 4 kinds of division algebra, real R, ...
4answers
565 views
### Reason for the discreteness arising in quantum mechanics?
What is the most essential reason that actually leads to the quantization. I am reading the book on quantum mechanics by Griffiths. The quanta in the infinite potential well for e.g. arise due to the ...
2answers
417 views
### Applying velocity Verlet algorithm
I want to implement a simple particules system using the velocity form of the Verlet algorithm as integrator. Initial conditions at $t=0$ for a given particule $p$: mass: $m$ position: ...
5answers
961 views
### Why position is not quantized in quantum mechanics?
Usually in all the standard examples in quantum mechanics textbooks the spectrum of the position operator is continuous. Are there (nontrivial) examples where position is quantized? or position ...
0answers
97 views
### Discrete sum over an exponential with imaginary argument, considering only every second lattice site?
Let's say I sum an exponential function $e^{\imath \left(k-k^{\prime}\right) x_{i}}$ over a chain system where every member of the chain is of the same type, e.g., A-A-A-...-A-A (total of N sites) ...
2answers
271 views
### Derivation of the Lagrangian method using discretized time axis
I'm watching this video lecture by Leonard Susskind of Stanford: http://www.youtube.com/watch?v=3apIZCpmdls After some preliminaries, at 34 minutes he jumps into a discretization of the time axis ...
1answer
216 views
### What is “charge discreteness”?
I assume it is some kind of quantity. Google only made things more confusing. I get that it has something to do with circuits. I also get what a discrete charge is. In fact, I thought charges ...
2answers
1k views
### Is reality discrete at the quantum level? (…and what does it imply not only mathematically?)
On a quantum scale the smallest unit is the Planck scale, which is a discrete measure. There are several questions that come to mind: Does that mean that particles can only live in a discrete grid-like ...
3answers
519 views
### How could spacetime become discretised at the Planck scale?
I didn't have much luck getting a response to this question before so I have tried to reword and expand it a little: In early 2010 I attended this inaugural lecture by string theorist- Prof. ...
3answers
346 views
### Are there any quantities in the physical world that are inherently rational/algebraic?
Whenever we measure something, it is usually inexact. For example, the mass of a baseball will never be measured exactly on a scale in any unit of measurement besides "mass in baseballs that are ...
3answers
624 views
### What are some approaches to discrete space-time used in modern physics?
This thought gave rise to some new questions in my mind. What are the consequences for: How would it affect duality i.e. particle, wave property of photons? How does this statement affect the ...
7answers
2k views
### Is there something similar to Noether's theorem for discrete symmetries?
Noether's theorem states that, for every continuous symmetry of a system, there exists a conserved quantity, e.g. energy conservation for time invariance, charge conservation for $U(1)$. Is there any ...
http://mathhelpforum.com/algebra/209607-market-equilibrium-problem-depth-problem.html
# Thread:
1. ## Market Equilibrium Problem and Depth Problem
I have two problems. I'm not sure how to set up the first one, and for the second one I got an answer but I'm not sure if it's correct.
1. A fissure in the earth appeared after an earthquake. To measure its vertical depth, a stone was dropped into it, and the sound of the stone's impact was heard 4.3 seconds later. The distance (in feet) that the stone fell is given by $S = 16t_1^2$, and the distance (in feet) that the sound traveled is given by $S = 1090t_2$. In these equations, the distances traveled by the sound and the stone are the same, but their times are not. Using the fact that the total time is 4.3 seconds, find the depth of the fissure.
2. If the supply and demand functions for a commodity are given by $4p - q = 42$ and $(p+2)q = 2100$, respectively, find the price that will result in market equilibrium.
I got...\$9.59
2. ## Re: Market Equilibrium Problem and Depth Problem
A fissure in the earth appeared after an earthquake. To measure it vertical depth, a stone was dropped into it and the sound of the stone's impact was heard 4.3 seconds later.
let $t$ = time for the stone to reach impact
$d = 16t^2$
$t + \frac{d}{1090} = 4.3$
sub $16t^2$ for $d$ in the second equation, solve the resulting quadratic equation for $t$ ... then calculate $d = 16t^2$
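A quick symbolic run of these steps, as a sketch with SymPy (variable names are ad hoc):

```python
from sympy import symbols, solve, Rational

t = symbols('t', positive=True)              # time for the stone to fall
eq = t + 16*t**2/1090 - Rational(43, 10)     # fall time plus the sound's return time = 4.3 s
t_fall = solve(eq, t)[0]
depth = 16*t_fall**2
print(float(t_fall), float(depth))           # about 4.06 s and about 263.5 ft
```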
3. ## Re: Market Equilibrium Problem and Depth Problem
Okay, I got it. What about the second problem?
4. ## Re: Market Equilibrium Problem and Depth Problem
Originally Posted by Jperk94
Okay, i got it. What about the second problem?
I'm not familiar with those terms ... sorry.
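For what it's worth, a SymPy sketch of the second problem (solving the two equations simultaneously and keeping the positive root) suggests the equilibrium price is \$28 rather than \$9.59:

```python
from sympy import symbols, solve

p, q = symbols('p q', positive=True)
sol = solve([4*p - q - 42, (p + 2)*q - 2100], [p, q], dict=True)
print(sol)   # [{p: 28, q: 70}], i.e. an equilibrium price of $28
```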
http://math.stackexchange.com/questions/65390/order-of-convergence?answertab=active
Order of convergence
Would I be right in thinking that $x^ab^x\to0$ as $x\to \infty\,\,\forall a\in \mathbb R$, where $b\in [0,1)$? I think that $b^x$ decays faster than $x^a$ grows, but how might I prove that?
-
3
Yes, but how to prove it? Do you know some result that might help? Have you already done some case (like $xe^{-x}$) so that you can examine the proof and see what may apply in this case? In summary, SHOW YOUR ATTEMPT, don't expect us to do it for you. – GEdgar Sep 18 '11 at 0:28
@GEdgar In fairness, the OP has explained what he thinks will happen and why he thinks that will be the case. It's not like it's a verbatim copy of a homework question. – Fly by Night Mar 6 at 18:35
2 Answers
It's obvious for $a \le 0$, so take $a>0$. Then we have an indeterminate form and may use l'Hôpital's rule.
We change $x^a b^x$ into $\frac{x^a}{b^{-x}}$ and it is of the form $\frac{\infty}{\infty}$.
The strategy is to show it is of this form for some number of applications of l'Hôpital's rule, until it becomes $\frac{0}{\infty}\to 0$, and then the result will be proved.
Now you can prove by induction that $\lim_{x \to \infty} \frac{d^n}{dx^n} b^{-x} = \infty$ for all $n$.
We also know we can choose $n$ such that $a-n <0$, and if we differentiate $x^a$ $n$ times we will have $\lim_{x \to \infty} \frac{d^n}{dx^n}x^a=0$.
So if we take a minimal $n$, then all previous applications of l'Hôpital's rule were justified and the limit is indeed $0$.
-
Write $\lim_{x \to \infty}x^ab^x = \lim_{x \to \infty} \frac{x^a}{B^x}$ where $B=\frac{1}{b}>1$. Taking logs, the log of the numerator is $O(\log x)$ while the log of the denominator grows like $x$, so $\lim_{x \to \infty} \frac{x^a}{B^x} = 0$.
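A quick numerical illustration (not a proof; the sample values are chosen here):

```python
import numpy as np

a, b = 5.0, 0.9                    # sample exponent and base with 0 <= b < 1
for x in [1e1, 1e2, 1e3, 1e4]:
    print(x, x**a * b**x)          # rises at first, then decays toward 0 (the last value underflows)
```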
http://physics.stackexchange.com/questions/56550/what-is-the-minimum-optical-power-detectable-by-human-eye
# What is the minimum optical power detectable by human eye?
If one is in complete darkness, what is the minimum optical power that the eye can "see" (let's say in the 500-600 nm range)?
I found that for 510 nm, 90 photons can be detected (http://en.wikipedia.org/wiki/Absolute_threshold).
Calculating the energy of these photons, and considering 150 fs pulses at an 80 MHz repetition rate, results in an average power of ~3 nW.
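That estimate is straightforward to reproduce; a sketch (the 150 fs pulse duration drops out, since the average power is just photons per pulse times photon energy times repetition rate):

```python
from scipy.constants import h, c

wavelength = 510e-9     # m, from the absolute-threshold figure
n_photons = 90          # photons per pulse at the detection threshold
rep_rate = 80e6         # Hz, the assumed repetition rate

pulse_energy = n_photons * h * c / wavelength    # joules per pulse
print(pulse_energy * rep_rate)                   # ~2.8e-9 W, i.e. about 3 nW
```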
-
There is actually rather a lot of ambiguity in this question right now. The sensing cells in your eye can respond to single photons (though their QE is not high). However, there is a preprocessing layer (which as a particle experimenter I would describe as a hardware trigger) which discards isolated hits so they never register in the conscious mind. I assume you mean to ask about the dimmest light which can be used to make decisions. – dmckee♦ Mar 11 at 22:01
@dmckee Ya, he probably means the minimum number of photons required to judge whether light is coming or not. Citing the same example, 90 photons being detected at 510 nm implies that if you detect about 50 photons at 510 nm, your eyes wouldn't know that you have light from the 510 nm range. – Cheeku Mar 11 at 23:26
Sorry for being ambiguous; by detection I meant what excites the brain and leads to the sensation of vision. I was curious whether there were other experiments similar to the one in the Wikipedia link above. – Silviu Mar 12 at 19:47
## 2 Answers
Albert Rose studied this question in the 1940s and developed the Rose criterion, which states that the signal-to-noise ratio
$$SNR=\dfrac{\mu}{\sigma}$$
required for $100\%$ identification of an object by the human eye is $SNR \approx 5$. He based this on quantum arguments: he looked at the average number of photons per unit area in a photographic image and gave the equation $\Delta N = kN^{1/2}$, where $\Delta N$ is the smallest perceptible change, $N$ is the average number of quanta absorbed in a pixel, and $k$ is the $SNR$ (see eq. 1, 1a and figure 1).
Using this ratio, if an average of $100$ quanta are absorbed in each of two independent pixels, then one pixel can be distinguished from the other when it absorbs $50$ quanta more than the other. This relationship has an obvious limit when the average number of quanta is $> 7$ and all of the photons ($> 14$ in total for the two pixels) are found in one pixel and none in the other.
-
Many years ago I read about experiments of Vavilov and collaborators, who claimed to have registered single photons with the human eye. I cannot, however, recall the source.
-
This is the same as the info from Wikipedia from the link in the question. – Silviu Mar 12 at 20:34
http://mathoverflow.net/questions/53816?sort=newest
## Fundamental group of a product of two curves
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $S$ be a complex surface whose fundamental group is isomorphic to the fundamental group of a product of two curves of genera $>1$. Does $S$ have to be a product of two curves?
-
Do you know anything about the higher homotopy groups? If they are non-zero then it isn't the product of two curves. If they are all zero then you have a fighting chance. – David Roberts Jan 30 2011 at 20:30
1
What do you mean by 'complex surface'? Is it compact? projective? Is $\mathbb{C}^2$ minus a suitable submanifold a candidate? – J.C. Ottem Jan 30 2011 at 20:40
6
Since the fundamental group is a birational invariant, I assume that you want the word "be" to mean "is birational to", right? Otherwise you could eg take the product of two curves and blow it up at some point. I suspect that the answer to the corrected question is "no", but I don't know an example off the top of my head. – Andy Putman Jan 30 2011 at 20:43
## 2 Answers
The answer in no, because of the following result:
Theorem 1. Let $X$ be a non-ruled minimal surface. Then there exists a finite ramified covering $S \to X$ of degree $>1$, such that $S$ is minimal of general type with $K_S$ very ample, $\pi_1(S) \cong \pi_1(X)$ and $S$ is not birationally equivalent to $X$. We can moreover assume that $S$ has negative index, i.e. $K_S^2 - 8 \chi(\mathcal{O}_S) <0$.
So the fundamental group $\pi_1(X)$ alone does not determine the birational type of $X$, and in general not even its diffeomorphism type.
When $X$ is the product of two curves, however, something more can be said, provided that one also knows the topological Euler number. More precisely one proves the following
Theorem 2. Let $C_1$, $C_2$ be smooth curves of genus $g_1$, $g_2$, with $g_i \geq 2$, and let $X=C_1 \times C_2$. Then any surface $S$ such that $\pi_1(S) \cong \pi_1(X)$ and $e(S)=e(X)$ is isomorphic to a product of two curves of the same genera.
Theorems 1 and 2 were proven by F. Catanese in his paper Fibred surfaces, varieties isogenous to a product and related moduli spaces, which considers the more general situation $X=(C_1 \times C_2)/G$, where $G$ is a finite group acting freely on the product $C_1 \times C_2$.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
All two-dimensional complex tori $T$ have the same fundamental group, because such a torus is homeomorphic to a product of four copies of the unit circle $S^1$. Among them there are all the products of $E_1 \times E_2$ of elliptic curves. Since each elliptic curve is homeomorphic to a product of two copies of $S^1$, the fundamental groups of $T$ and $E_1 \times E_2$ are isomorphic (for all $T, E_1,E_2$). However, almost all two-dimensional complex tori are not biholomorphically isomorphic to a product of elliptic curves.
Shafarevich's "Basic algebraic geometry" contains examples of two-dimensional complex tori that do not contain complete complex curves at all and therefore are not the products of two curves. One may also get an explicit example of such a torus (without curves), starting with a totally complex quartic number field $F$ that does not contain an imaginary quadratic subfield, choosing a rank 4 discrete lattice $\Gamma$ in the realification $F_R$ of $F$ and putting $T=F_R/\Gamma$ (Math. Ann. 303 (1995), 11--29).
As for complex abelian surfaces $A$ (i.e., algebraizable two-dimensional complex tori), almost all of them are also not isomorphic to a product of elliptic curves. An explicit example is provided by the jacobian $J(C)$ of the genus 2 curve $C:y^2=x^5-x-1$. Actually, it is known (arXiv:math/9909052 [math.AG]) that $J(C)$ has no nontrivial endomorphisms and therefore is not isomorphic to a product of elliptic curves.
-
1
OOPS! Sorry, somehow I did not notice that the curves must have genus greater than 1. – Yuri Zarhin Jan 31 2011 at 2:06
http://mathhelpforum.com/calculus/89505-more-basic-calculus.html
# Thread:
1. ## more basic calculus
Can someone show me these step by step, please?
Thanks.
2. Show you what? I don't see anything. Are you talking about how you want someone to show you "Step-by-Step" the sitcom? I hated that show.
Nevermind, I can see it now.
3. f) $\frac{1}{4}x^4+C$ The only step you need for this is the power rule. Do you know the power rule? Oh and algebra too. $dy=x^3dx\Longleftrightarrow\frac{dy}{dx}=x^3$
4. Use the power rule for g too...........
$\frac{1}{3}x^3+x^2-3x+C$
You do know the power rule right?
5. Thanks,
but can you show me how to do it for all of them?
Thanks.
6. j) Let me do j for you step by step.
$\int \frac{x}{x^2 + 2}\,dx$
Let $u = x^2 + 2$
Then $du = 2xdx$
Notice that our $du$ is the same as the numerator of our given except of the numerical coefficient $2$. Do you see it?
Now, we can put the $2$ by doing this.
$\frac{1}{2}\int{\frac{2xdx}{x^2+2}}$ Is this equal to the original given? Of course yes. So we can continue.
Then
$\frac{1}{2}\int{\frac{2xdx}{x^2+2}} = \frac{1}{2}\int{\frac{du}{u}}$.
There is a formula in integral calculus that states that the $\int{\frac{du}{u}} = \ln\left|u\right| + C$.
Hence in our problem..
$\frac{1}{2}\int{\frac{2xdx}{x^2+2}} = \frac{1}{2}\int{\frac{du}{u}} = \frac{1}{2}\ln\left|u\right| + C$.
Plugging in $u$.
$\int \frac{x}{x^2 + 2}\,dx = \frac{1}{2}\ln\left|x^2 + 2\right| + C$
You can check by getting the derivative of our final answer. If it results to the given, then we're pretty much correct.
I hope this helps.
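The suggested check can also be done symbolically; a minimal SymPy sketch (note that integrate omits the constant $C$):

```python
from sympy import symbols, integrate, diff, simplify

x = symbols('x')
F = integrate(x / (x**2 + 2), x)
print(F)                        # log(x**2 + 2)/2, matching the answer above
print(simplify(diff(F, x)))     # x/(x**2 + 2), so differentiating recovers the integrand
```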
7. Thanks, bro.
Can you help me with the other ones as well?
Cheers.
8. For h, let me ask first, is it a Definite Integral? With limits 2 and 3? Do you know how to evaluate Definite Integrals?
The problem is quite easy and you can try it by yourself.
Try to continue this, just feel free to ask if you have some questions.
$2\int_2^3 d\theta + \frac{1}{2}\int_2^3 \cos 2\theta \,(2\,d\theta)$
9. I'm lost.
10. Can you tell me what part of solving the problem you find difficulties?
11. Hey darth! If I were you, I would change my signature. You should write the work function so that it doesn't make you look like you don't know that the limit of a constant is that constant.
I'm just joking with you. Sometimes people can't pick up on humor through text.
The limit as x tends to infinity of work is, guess what?
MORE WORK! Get it?
12. Thanks. I'll think about changing my signature. Maybe I can come up with a better one. Haha.
http://mathoverflow.net/questions/10216/if-a-quadratic-form-is-positive-definite-on-a-convex-set-is-it-convex-on-that-se
## If a quadratic form is positive definite on a convex set, is it convex on that set?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Consider a real symmetric matrix $A\in\mathbb{R}^{n \times n}$. The associated quadratic form $x^T A x$ is a convex function on all of $\mathbb{R}^n$ iff $A$ is positive semidefinite, i.e., if $x^T A x \geq 0$ for all $x \in \mathbb{R}^n$.
Now suppose we have a convex subset $\Phi$ of $\mathbb{R}^n$ such that $x \in \Phi$ implies $x^T A x \geq 0$. Is $x^T A x$ a convex function on $\Phi$ (even if $A$ is not positive definite)? Of course, the answer in general is "no," but we can still ask about the most inclusive conditions under which convexity holds for a given $A$ and $\Phi$. In particular I'm interested in the question:
Suppose we have a quadratic form $Q:\mathbb{R}^{n \times n} \rightarrow \mathbb{R}$. What is the weakest condition on $Q$ that guarantees it will be convex when restricted to the set of positive semidefinite matrices?
-
$Q$ must be positive semidefinite on the set of all symmetric matrices. By the way, if you really need help with a mathematical question going beyond routine homeworks and think that it is too "specific" or "trivial" for MO, try College Playground on AoPS. We welcome there almost everything with mathematical content (no pure philosophy, please!) Just use your common sense when choosing the subforum and stick to some other easy to guess rules. You may not get as qualified answer there as on MO but you won't be told that your question is of no interest either. – fedja Dec 31 2009 at 5:06
Ok, thanks for the help! :) – fuzzytron Dec 31 2009 at 16:19
## 2 Answers
$x^2-y^2$ is positive on $[2,3]\times [-1,1]$ but not convex there. This creates problems for any convex sets not containing the origin. You are, probably, after something else not so obviously false. Why don't you just tell us what it is?
Edit: Even then it is false: just take $B_{11}B_{22}$. By the way, for a pure quadratic form, convexity on an open set and convexity on the entire space are the same thing.
-
Thanks -- I've added some details. – fuzzytron Dec 31 2009 at 3:04
And so did I :) – fedja Dec 31 2009 at 3:24
Regarding "for a pure quadratic form, convexity on an open set and convexity on the entire space are the same thing": thanks for pointing that out. Unfortunately, the cone of positive semidefinite matrices is not an open subset, so that doesn't give us an easy answer. – fuzzytron Dec 31 2009 at 4:50
This answer also covers Edit 2 of the question, since the set of positive definite matrices is open. – Harald Hanche-Olsen Dec 31 2009 at 5:07
1
Just an elaboration on fedja's answer. Let $\Phi\subset{\mathbb R}^n$ be any convex (nonempty) set. Then there is unique subspace $V\subset{\mathbb R}^n$ such that $\Phi$ contains an open subset of the coset $V+x$ for some $x\in{\mathbb R}^n$ ($V$ is spanned by the set $\Phi-x$ for any fixed vector $x$). In your example, $\Phi$ consists of positive semi-definite matrices, and $V$ consists of all symmetric matrices. Now if $Q$ is any quadratic form, it is convex on $\Phi$ if and only if it is convex (positive semi-definite) on $V$. This gives an explicit criterion. – t3suji Dec 31 2009 at 5:14
show 1 more comment
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
The answer to the edited question is no. Let $Q\colon\Phi\to\mathbb{R}$ be the quadratic form $Q(B)=B_{11}B_{22}$. Clearly $Q>0$ on the set $\Phi$ of positive definite matrices. Equally clearly, this function is concave on the subset $\{B\in\Phi\colon B_{11}+B_{22}=1\}$.
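A short numerical illustration of that concavity (a sketch; the names are chosen ad hoc):

```python
import numpy as np

Q = lambda B: B[0, 0] * B[1, 1]      # the quadratic form from the answer
A = np.diag([0.2, 0.8])              # positive definite with B11 + B22 = 1
B = np.diag([0.8, 0.2])
mid = (A + B) / 2
print(Q(mid), (Q(A) + Q(B)) / 2)     # 0.25 vs 0.16: the midpoint value exceeds the chord
```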
-
Oops, only on coming back here did I realize that I had duplicated the result of the second paragraph of fedja's answer. Sorry about that. (Or did fedja's edit happen while I was typing in my answer?) – Harald Hanche-Olsen Dec 31 2009 at 5:10
http://unapologetic.wordpress.com/2009/06/24/matrices-and-forms-i/?like=1&_wpnonce=1e4d043b68
# The Unapologetic Mathematician
## Matrices and Forms I
Yesterday, we defined a Hermitian matrix to be the matrix-theoretic analogue of a self-adjoint transformation. So why should we separate out the two concepts? Well, it turns out that there are more things we can do with a matrix than represent a linear transformation. In fact, we can use matrices to represent forms, as follows.
Let’s start with either a bilinear or a sesquilinear form $B\left(\underline{\hphantom{X}},\underline{\hphantom{X}}\right)$ on the vector space $V$. Let’s also pick an arbitrary basis $\left\{e_i\right\}$ of $V$. I want to emphasize that this basis is arbitrary, since recently we’ve been accustomed to automatically picking orthonormal bases. But notice that I’m not assuming that our form is even an inner product to begin with.
Now we can define a matrix $b_{ij}=B(e_i,e_j)$. This completely specifies the form, by either bilinearity or sesquilinearity. And properties of such forms are reflected in their matrices.
For example, suppose that $H$ is a conjugate-symmetric sesquilinear form. That is, $H(v,w)=\overline{H(w,v)}$. Then we look at the matrix and find
$\displaystyle\begin{aligned}h_{ij}&=H\left(e_i,e_j\right)\\&=\overline{H\left(e_j,e_i\right)}\\&=\overline{h_{ji}}\end{aligned}$
so $H$ is a Hermitian matrix!
Now the secret here is that the matrix of a form secretly is the matrix of a linear transformation. It's the transformation that takes us from $V$ to $V^*$ by acting on one slot of the form, written in terms of the basis $e_i$ and its dual. Let me be a little more explicit.
When we feed a basis vector into our form $B$, we get a linear functional $B(e_i,\underline{\hphantom{X}})$. We want to write that out in terms of the dual basis $\left\{\epsilon^j\right\}$ as a linear combination
$\displaystyle B(e_i,\underline{\hphantom{X}})=b_{ik}\epsilon^k$
So how do we read off these coefficients? Stick another basis vector into the form!
$\displaystyle\begin{aligned}B(e_i,e_j)&=b_{ik}\epsilon^k(e_j)\\&=b_{ik}\delta^k_j\\&=b_{ij}\end{aligned}$
which is just the same matrix as we found before.
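Here is a small numerical sketch of this correspondence (using the convention that the sesquilinear form conjugates its first slot; the Hermitian matrix M is an arbitrary sample):

```python
import numpy as np

M = np.array([[2, 1j], [-1j, 3]])       # an arbitrary Hermitian sample matrix
B = lambda v, w: np.vdot(v, M @ w)      # sesquilinear form, conjugating the first slot

e = np.eye(2)
b = np.array([[B(e[i], e[j]) for j in range(2)] for i in range(2)])
print(np.allclose(b, M))                # True: b_ij = B(e_i, e_j) recovers M
print(np.allclose(b, b.conj().T))       # True: conjugate symmetry gives a Hermitian matrix
```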
Posted by John Armstrong | Algebra, Linear Algebra
## 12 Comments »
1. At last: something for Physics.
Comment by | June 24, 2009 | Reply
2. DO you mean “Hamiltonian matrix” or “Hermitian matrix”?
Comment by Blaise Pascal | June 24, 2009 | Reply
3. Sorry, Blaise I have no idea where that came from.. fixing.
Comment by | June 24, 2009 | Reply
4. Have you treated (or do you plan to treat) forms (p-forms, differential forms)? Your style of brief explanations with lots of examples would be very helpful in providing a basic understanding of this often-overlooked tool.
Comment by Charlie C | June 25, 2009 | Reply
5. Charlie, I haven’t even done calculus in more than one variable yet.
But! That’s because I want some linear algebra at hand to use where it comes up in multivariable calculus, including just the sort of subjects you mention.
Comment by | June 25, 2009 | Reply
6. Shouldn’t $B(e_i,_)=\sum_k b_{ik}\epsilon^k$ or is that the standard way to write it (some kind of Einstein’s notation)?
Comment by | June 25, 2009 | Reply
7. Yes, and I just noticed that I forgot to hit “publish” on today’s post, which mentions something closer to that.
Comment by | June 25, 2009 | Reply
http://mathhelpforum.com/pre-calculus/104106-largest-rectangle.html
# Thread:
1. ## Largest rectangle
I really have no idea where to even begin with this problem. I did some guess-and-check, but I'm sure there's some method I'm supposed to use that we haven't covered. I understand what it's asking and all, but I'm at a loss as to how to solve it.
Consider the graph of $y = \sqrt{x}$, limited to the domain $(0, 9)$.
Find the largest rectangle that can be constructed inside of that shape, with sides parallel to the x- and y-axes and the right edge of the rectangle being the vertical line x = 9.
2. This is a relatively simple calculus problem, but being as this was posted in pre-calc, am I to understand you are not familiar with differentiation?
3. Yeah, I only recently started precalculus, so I don't know much yet.
Edit: Oh, and since we haven't learned differentiation, I'm assuming we're not supposed to use it. We're currently using graphs and maximums and such.
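For readers who know a little programming, the problem can also be seen numerically. If the rectangle's left edge sits at $x$, its height is limited by the curve at that left edge, so the area is $A(x) = (9 - x)\sqrt{x}$. The sketch below (purely illustrative; the course presumably expects a graphing approach, and the helper name `area` is my own) just tabulates $A$ on a fine grid:

```python
# Tabulate A(x) = (9 - x) * sqrt(x) on (0, 9) and pick the largest value.
import math

def area(x):
    return (9 - x) * math.sqrt(x)

xs = [i / 1000 for i in range(1, 9000)]   # grid over the open interval (0, 9)
best_x = max(xs, key=area)
print(best_x, area(best_x))   # ~3.0  ~10.392
```

The exact optimum, obtainable later with differentiation, is $x = 3$ with area $6\sqrt{3} \approx 10.39$.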
http://physics.stackexchange.com/questions/10182/convergence-of-periodic-single-fermion-operators/19131
Convergence of periodic single fermion operators
First, a quick remark: I'm a mathematician, now working on some problems coming from physics (in particular Ising models on quasiperiodic chains). I find a few things rather mysterious and would appreciate your help.
For the purpose of generality, let's consider the following Ising model on a chain of $N$ nodes.
$$H_N = - \sum_{i = 1}^N J_i\sigma_i^{(x)}\sigma_{i+1}^{(x)} - \sum_{i = 1}^N\sigma_i^{(z)},$$
with $J_i$ depending on the node $i$ (we assume no particular order for generality), and $\sigma_i^{(x),(z)}$ the Pauli matrices. By Jordan-Wigner, we can consider the corresponding Fermionic operator given by
$$\widehat{H}_N = \sum_{i,j}\left[c_i^{\dagger}A_{ij}c_j + \frac{1}{2}\left(c_i^{\dagger}B_{ij}c_j^{\dagger}+ H.c.\right)\right],$$
where $c_i$, $1\leq i \leq N$ are anticommuting Fermionic operators and $\left\{A_{ij}\right\},\left\{B_{ij}\right\}$, $1\leq i, j \leq N$ are the elements of appropriately chosen matrices $A, B$, which depend on $\left\{J_i\right\}_{1\leq i \leq N}$.
We use periodic boundary conditions. Now we can extend $\widehat{H}_N$ to a lattice of infinite size, by gluing the unit cell of size $N$ infinitely many times. Let us call this new extension $\tilde{H}_N$. Now the questions:
1) What is $H.c.$?
2) I am interested in the thermodynamic limit $N\rightarrow\infty$. Is it obvious whether the sequence of operators $\left\{\tilde{H}_N\right\}$ converges, say in strong operator topology, to some well-defined operator $\tilde{H}$ as $N\rightarrow\infty$?
Let me motivate the second question: For a certain sequence $\left\{J_i\right\}$, constructed deterministically with certain properties (so-called quasi-periodic sequence), I believe I can say something about what Physicists call the "energy-spectrum in the thermodynamic limit". I'm interested to know whether this energy spectrum is the spectrum (in the usual functional-analytic sense) of some operator $\tilde{H}$.
Thanks for any help!
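A minimal numerical sketch of the finite-$N$ spin Hamiltonian above, for small chains (the helper names and the uniform couplings $J_i \equiv 1$ are illustrative assumptions, not anything from the question):

```python
# Build H_N = -sum_i J_i sx_i sx_{i+1} - sum_i sz_i as a dense 2^N x 2^N
# matrix with periodic boundary conditions (only feasible for small N).
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])    # Pauli x
sz = np.array([[1., 0.], [0., -1.]])   # Pauli z

def site_op(op, i, N):
    """Tensor the 2x2 operator `op` into slot i of an N-site chain."""
    out = np.eye(1)
    for j in range(N):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

def H(J):
    N = len(J)
    h = np.zeros((2**N, 2**N))
    for i in range(N):
        h -= J[i] * site_op(sx, i, N) @ site_op(sx, (i + 1) % N, N)
        h -= site_op(sz, i, N)
    return h

HN = H([1.0] * 6)                      # uniform couplings, N = 6
assert np.allclose(HN, HN.T)           # Hermitian (real symmetric here)
print(np.linalg.eigvalsh(HN)[:3])      # bottom of the spectrum
```

Diagonalizing such finite truncations for a quasiperiodic sequence $\{J_i\}$ is one concrete way to watch the "energy spectrum in the thermodynamic limit" emerge, though it says nothing by itself about operator convergence.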
H.c. stands for Hermitian conjugate, meaning the Hermitian conjugate of whatever directly precedes H.c. It's just a way to save space – Greg P May 21 '11 at 6:36
Thank you, Greg. I am still puzzled by the second question. It shouldn't be too difficult to check, but I want to know first whether this is something standard. – William May 21 '11 at 7:12
The second question is really a math question rather than a physics question. – Peter Shor May 21 '11 at 9:36
The "thermodynamic limit" usually means that one solves the problem (in principle at least) for finite N, thus obtaining thermodynamic quantities of interest, and then lets N go to infinity. In that case one never actually discusses an infinite lattice Ising model, and rather than issues of convergence of operators (which physicists are unlikely to worry about anyway) we are just dealing with convergence of functions. – Greg P May 21 '11 at 16:26
If you're just concerned about the existence of a well defined $N\rightarrow\infty$ limit, I can't tell you how to prove it -- but TFIMs with quasiperiodic couplings have been studied in the literature in the past, so if you have something neat to say, don't let that concern stop you! – wsc May 21 '11 at 16:37
1 Answer
Well, if it is not too immodest to answer my own question by referring to my own papers, I'd like to suggest reading the following papers, where the answer is essentially spelled out for a specific model (the sequence of interactions $\{J_i\}$ generated by a quasiperiodic substitution). This, I suspect, can be extended to the general case.
I'd like to note that these papers were written some months after I posed the question above.
http://arxiv.org/abs/1203.2221
http://arxiv.org/abs/1110.6894
http://mathoverflow.net/revisions/84341/list
## Return to Answer

You can do Monsky's theorem, that a square cannot be divided into an odd number of equal-area triangles. On the way you will have to do

1. p-adic numbers,
2. Sperner's lemma,
3. presenting $\mathbb{R}$ as a vector space over $\mathbb{Q}$.

In case you have more time, you could

• use Sperner's lemma to prove Brouwer's fixed point theorem,
• use (3) to do Dehn invariants,
• and I am sure you can find what to do with p-adic numbers.
http://physics.stackexchange.com/questions/11041/limit-on-geothermal-energy-that-could-be-extracted-before-the-earths-magnetic-f/11043
# Limit on geothermal energy that could be extracted before the earth's magnetic field collapsed?
This is more of a theoretical thought-experiment question.
Basically, how much geothermal energy can we extract before the loss of the magnetic field makes it a terribly bad idea?
Will the geothermal energy inside the earth now, which is lost through "natural processes", last until the end of the Sun (say, another 5 billion years)? And if so, how much excess is left over?
a) do we know how much thermal energy there is? Call this: A
b) do we know at what point the core would stop producing a magnetic field? (ie: is it when the iron in the core is no longer molten?) Call this: B
c) do we know the rate that heat is radiated out naturally? Call the annual rate: C
So I guess: A − B − (5.0 × 10^9 × C) is the amount of heat available for "safe" geothermal extraction?
also: Considering such extreme timescales -- are there any other factors adding heat into the Earth? Maybe tidal interactions with Sun/Moon/etc add anything over billions of years?
Thanks!
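For scale, here is the kind of arithmetic the question is asking for, using rough literature values rather than anything exact ($A \sim 10^{31}$ J for the Earth's internal heat content, $C \sim 47$ TW for the natural surface heat flow); the cutoff $B$ is genuinely uncertain and is set to $A/2$ purely as a placeholder:

```python
# Order-of-magnitude version of A - B - (5e9 * C); all inputs are rough
# literature estimates or outright placeholders, not measured values.
A = 1e31                    # J, total internal thermal energy (rough)
C = 47e12 * 3.156e7         # J/yr: 47 TW of natural heat flow
B = 0.5 * A                 # J, placeholder for the dynamo cutoff
loss_5gyr = 5e9 * C
print(f"natural loss over 5 Gyr: {loss_5gyr:.1e} J")          # ~7.4e30 J
print(f"surplus A - B - loss:    {A - B - loss_5gyr:.1e} J")  # negative here
```

With these crude numbers the surplus comes out negative, which mostly reflects that roughly half of today's heat flow is replenished by radioactive decay rather than drawn down from stored heat, so the naive budget overstates the losses.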
An interesting side note is that the total geothermal power is very similar to the total power utilization of Terran civilization right now. – dmckee♦ Jun 14 '11 at 1:25
## 2 Answers
The amount of energy we can capture as geothermal is only a very tiny fraction of the amount that leaks naturally through the earth's crust. In addition, the distance from the earth's surface to the generator is so very long that the time constant for heat movement at one end to cause heat change at the other is very long.
To get an idea of the calculation for this last point, note that we can measure the climate going back 80,000 years by looking at the temperatures in the Kola Superdeep Borehole, which goes down 12 km. When digging such a hole, the temperature at the top of the hole is the current temperature. Deeper temperatures correspond to averages of temperatures over longer periods of time.
The temperature T at depth z at time t is given by: $$T(z,t) = \int_{-\infty}^{t}\frac{z\;\exp(-z^2/4\alpha(t-\tau))\;f(\tau)\;d\tau}{\sqrt{4\pi\alpha(t-\tau)^3}}$$ where $f(\tau)$ is the surface temperature and $\alpha$ is the thermal diffusivity. From this, we see that the temperature change at the surface is exponentially damped at a depth $z$ by $\sqrt{t}$.
From the borehole paper above, we have that it takes about 80,000 years to change the temperature of 12km of rock through conduction. The distance to the earth's center is around 6400 km, maybe 6000 km to the magnetic core. So the earliest time we can expect to see a (just barely measurable) change to the earth's magnetic core is around $$80,000\;\textrm{years}\;\times \frac{6000^2}{12^2} = 20\; \textrm{billion years}.$$
In actual fact, the earth's core will cool down a lot sooner than this. The reason for the discrepancy is that I've ignored convection. But in any case, there's nothing to worry about.
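The scaling in that last step is just "diffusion time grows like distance squared", $t \sim z^2/\alpha$; as a quick sanity check (a one-liner, nothing more):

```python
# Scale the 80,000-year / 12 km borehole figure up to ~6000 km to the core.
t_core = 8.0e4 * (6000 / 12) ** 2
print(f"{t_core:.1e} years")   # 2.0e+10 -> about 20 billion years
```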
Carl you seem to have looked at this. Since natural reactors have been found in deposits in Africa, is it completely out of the question that some fission is contributing to the central heat source? – anna v Jun 13 '11 at 6:05
It's generally believed that (in addition to the remaining heat from gravitation) the earth's core has a lot of heating due to radioactivity some of which is uranium and so is fission. The argument is about whether or not there could be natural reactors down there (where the heating rate would be enhanced by neutron induced fission). – Carl Brannen Jun 13 '11 at 18:57
Pretty much an echo of Carl. We can really only mine heat from the top few kilometers. The impedance from the deep earth will be dominated by the vast distances to the deep. There really is a huge amount of heat down there; the core's temperature is similar to the surface temperature of the sun. Also, we still have radioactive decay going on. I have read that a limit that will hit earlier (with or without geothermal heat mining) is that mantle convection will decrease and no longer support plate tectonics. Without plate tectonics, the recycling of important chemical elements stops, and important ones (for life) would get trapped in sediment. But that is a few billion years off. Solar evolution (the Sun is growing more luminous at about 10% per billion years at the moment) will cause a runaway greenhouse long before then.
I wouldn't say geothermal extraction can't exceed the current rate of geothermal flux. It is heat mining, i.e. using up heat that took millions of years to accumulate in, say, a hundred years (at a given locality). It really should be considered more as an energy store than as a renewable source, as any aggressive rate of exploitation would greatly exceed the rate of recharge.
http://mathhelpforum.com/pre-calculus/18163-math-help-functions.html
# Thread:
1. ## math help:functions
$y = e^{-x^2}$
Determine whether the function is symmetric about the y-axis, the origin, or neither.
This is the last problem and I'm kind of stuck.
2. Originally Posted by nurseme
$y = e^{-x^2}$
Determine whether the function is symmetric about the y-axis, the origin, or neither.
This is the last problem and I'm kind of stuck.
The tests for symmetry needed are as follows:
A function $f(x)$ is symmetric about the y-axis if $f(-x) = f(x)$;
that is, replace x with -x in the function; if it simplifies to the original function, then it is symmetric about the y-axis.
A function $f(x)$ is symmetric about the origin if $f(-x) = -f(x)$;
that is, replace x with -x; if the function simplifies to the negative of the original, then it is symmetric about the origin.
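Both tests can also be checked mechanically; a quick sympy sketch (illustrative only, the exercise expects it done by hand):

```python
# Apply the two symmetry tests to y = e^(-x^2).
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-x**2)

print(sp.simplify(f.subs(x, -x) - f) == 0)   # True:  f(-x) = f(x)
print(sp.simplify(f.subs(x, -x) + f) == 0)   # False: f(-x) != -f(x)
```

So the function is symmetric about the y-axis.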
http://mathhelpforum.com/new-users/207924-need-help-roots-equation.html
# Thread:
1. ## Need help with roots of an equation
Question:
Given that $\alpha$ and $\beta$ are roots of the equation $2x^2 + 5x - 7 = 0$,
find, without solving the equation, the value of $(\alpha - \beta)^2$.
I need to calculate the sum of the roots and the product of the roots.
2. ## Re: Need help with roots of an equation
Vieta's formulas give us for the quadratic $ax^2+bx+c$ having roots $\alpha,\,\beta$:
$\alpha+\beta=-\frac{b}{a}$
$\alpha\beta=\frac{c}{a}$
3. ## Re: Need help with roots of an equation
MarkFL meant to write $ax^2+ bx+ c$
4. ## Re: Need help with roots of an equation
I know that part; I need to find the quadratic equation with the roots $(\alpha - \beta)^2$.
5. ## Re: Need help with roots of an equation
No, you need to find the value of $(\alpha-\beta)^2=(\alpha+\beta)^2-4\alpha\beta$.
6. ## Re: Need help with roots of an equation
Yeah, I have that part as well; the part I am not sure about is the sum of the roots.
7. ## Re: Need help with roots of an equation
Since you are new here, I am going to give you a few tips:
1.) Don't double post. You have posted this topic in 2 different forums. This can lead to duplication of effort on the part of our contributors. You should choose the forum most appropriate for your question, and post it once there.
2.) Let us know what you have already done, and exactly where you are stuck. This avoids us telling you what you already know, and expedites the process.
So, you need the sum of the roots...you say you know Vieta's formulas, but these formulas give you just what you need, i.e., the sum and product of the roots.
8. ## Re: Need help with roots of an equation
Well, the double posting was not intentional; I thought I had placed the original post in the wrong forum where no one would see it.
I found $\alpha + \beta = -b/a = -5/2$ while $\alpha\beta = c/a = -7/2$.
For the requested expression: $(\alpha - \beta)^2 = (\alpha - \beta)(\alpha - \beta) = (\alpha + \beta)^2 - 4\alpha\beta$; when you substitute the values where appropriate, the final answer is 20.25.
The sum of the roots is $(\alpha - \beta) + (\alpha - \beta) = 2\alpha - 2\beta$, and this is the part where I am stuck.
9. ## Re: Need help with roots of an equation
In the future if you find you have posted in the wrong forum, you may use the report post function at the lower left of each post to report the post to the moderators and ask them to move the topic to the appropriate forum.
You have found that the requested expression is 20.25, which is correct. I don't understand why you are doing the additional step.
What is the problem in its entirety, exactly as given?
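For completeness, the whole computation can be verified in a few lines of sympy (illustrative; the point of the exercise is of course to avoid solving):

```python
# Vieta for 2x^2 + 5x - 7: sum = -5/2, product = -7/2,
# and (a - b)^2 = (a + b)^2 - 4ab.
import sympy as sp

s = sp.Rational(-5, 2)           # alpha + beta = -b/a
p = sp.Rational(-7, 2)           # alpha * beta =  c/a
print(s**2 - 4*p)                # 81/4, i.e. 20.25

x = sp.symbols('x')
r1, r2 = sp.solve(2*x**2 + 5*x - 7, x)   # cross-check by solving anyway
print(sp.expand((r1 - r2)**2))           # 81/4
```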
http://math.stackexchange.com/questions/tagged/lie-groups?page=4&sort=newest&pagesize=15
# Tagged Questions
A Lie group is a group (in the sense of abstract algebra) that is also a differentiable manifold, such that the group operations (multiplication and inversion) are smooth, and so we can study them with differential calculus. They are a special type of topological group. Consider using with the ...
0answers
57 views
### Representations of non-semisimple Lie algebras
Let $G$ be a compact Lie group with Lie algebra $\mathfrak{g}$, and suppose $\mathfrak{g}$ is semisimple. An integral weight for $G$ is an element $\lambda \in \mathfrak{t}^*$ with ...
1answer
29 views
### maximal tori and principal $N(T)$-bundles.
Let $U(n)$ be the unitary group and $T^{n}= S^{1} \times \cdots \times S^{1}$ a maximal torus in $U(n)$. Let $N(T^{n})$ be the normalizer in $U(n)$ of $T^{n}$. How can I prove that \$U(n) \rightarrow ...
1answer
70 views
### $U(n)/U(n-1)$ as homogeneous space
How can I prove that the quotient $U(n)/U(n-1) \simeq S^{2n-1}$ (where $U(n)$ is the unitary group)? Is it related to the theory of homogeneous spaces?
2answers
98 views
### why the orthogonal group $O(k,l)$ is homotopy equivalent to $SO(k)\times SO(l)$
I want to prove that the orthogonal group $O(k,l)$ (http://en.wikipedia.org/wiki/Indefinite_orthogonal_group)is homotopy equivalent to $SO(k)\times SO(l)$, so that ...
0answers
40 views
### Covering space, Weyl group, flag manifold.
Let $G$ be a compact Lie group and $T$ a maximal torus in $G$. We define the Weyl group $W$ as the quotient space ${N_{G}}(T)/T$, where ${N_{G}}(T)$ is the normalizer of $T$ in $G$. We ...
0answers
66 views
### Covering space (Lie groups and their maximal tori)
Let $G$ be a compact Lie group and $T$ a maximal torus in $G$. We define the Weyl group $W$ as the quotient space ${N_{G}}(T)/T$, where ${N_{G}}(T)$ is the normalizer of $T$ in $G$. We ...
1answer
32 views
### a neighbourhood of the identity U generates G, where G is a connected Lie group
Let G be a connected Lie group and U any neighbourhood of the identity element. How to prove that U generates G.
0answers
28 views
### Invariants of representation theory of Lie groups
How to compute the determinant of a representation of an element of the special linear group? How do I argue that it doesn't change? (@Marek: @rschwieb: Yes well, given one representation (with ...
2answers
103 views
### An example of a Lie group
I have trouble learning Lie groups --- I have no canonical example to imagine while thinking of a Lie group. When I imagine a manifold it is usually some kind of a 2D blanket or a circle/curve or a ...
1answer
58 views
### Weyl group, permutation group
Let $U(n)$ be the unitary group, $T$ its maximal torus (the group of diagonal matrices), and $N(T)$ the normalizer of $T$ in $U(n)$. Why is $N(T)/T$ the permutation group $S_{n}$?
1answer
64 views
### Defining Lie groups without the notion of a manifold
I like to introduce (matrix) Lie groups without the notion of manifolds. (And I am happy to sacrifice the "few" Lie groups which are not matrix groups in favor of a simpler definition.) I was thinking ...
1answer
60 views
### Dimension of Lie algebra according to root system
I was wondering how it is possible to find the dimension of a semisimple Lie algebra $L$ if its corresponding root system is (let's make it simple) of type $B_2$. We can find the number of roots and ...
2answers
60 views
### Nilpotent Lie group that is not simply connected nor a product of Lie groups?
I have been trying for days to find a non-abelian nilpotent Lie group that is not simply connected nor a product of Lie groups, but haven't been able to succeed. Is there an example of this, or hints to ...
1answer
34 views
### Differential action on a complex manifold
Let $M$ be a complex manifold of dimension $n$. Furthermore assume that we have a action of a Lie-Group $G$ on $M$ i.e. $G \times M \rightarrow M$, which is differential, meaning that for every \$g \in ...
1answer
76 views
### Invariant inner products on infite-dimensional representations
Let $G$ be a compact group and let $V$ be its continuous representation. It is well known that if $V$ is finite-dimensional, then there is a $G$-invariant inner product on $V$. I haven't found a ...
2answers
128 views
### When does the $\mathfrak g$-invariance of the symplectic form imply $G$-invariance?
Let $G$ be a Lie group with Lie algebra $\mathfrak g$, and let $M$ be a smooth manifold. Suppose $G$ acts on $M$, $G \to \text{Diff}(M)$. This naturally induces an action $\mathfrak g \times M \to M$ ...
0answers
29 views
### Is it possible to further simplify the product of three exponentials $e^A e^B e^C$ when $[A,C]=kB$ (k is a scalar)
The background is calculation of the little group elements of Poincare group for massless particles. I start with a bunch of exponentials of operators, and the end goal is to crunch them into the ...
1answer
47 views
### Nilpotence of Lie Algebra
I am trying to show that if $L$ is a Lie algebra and $L/Z(L)$ is nilpotent then $L$ is also nilpotent. Can someone please help me? I tried to first show by induction: $(L/Z(L))^k=L^k/Z(L)$. Is it ...
1answer
184 views
### Geometric interpretation of the map $SO(4) \to SO(3)$
Let me first explain the background of my question. As is well known, the group $SO(n+1)$ acts transitively on the sphere $S^n$, and the stabilizer is the group $SO(n)$, so that we get a fibration ...
1answer
198 views
### Finding All Irreducible Representations of $SO(3)$
I've read that one may prove that all irreducible representations of $SO(3)$ are tensor product representations of the fundamental representation (or tensor product representations of the spin 1/2 ...
2answers
207 views
### $SU(2)$ Representation of $SO(3)$
I've often seen it written that $SU(2)$ is a "two-valued representation" of $SO(3)$ (in theoretical physics books mainly). I have a major conceptual issue with this however. I know there is a Lie ...
2answers
145 views
### Lie group and SO3 visualisation
Maybe I'm asking a very vague question, but I'd like to know if there are some visualisation tools available already that explain the Lie algebra exponential map or logarithm? I'd like to be able to ...
1answer
47 views
### Schur's first lemma for finitely generated continuous groups of $SU(d)$
Suppose that a finite set $S$ of $d\times d$ special unitary matrices densely generates a representation $\rho$ of a continuous subgroup $G$ of $SU(d)$. That is, for every $\epsilon>0$ and ...
0answers
34 views
### How to find the induced Lie algebra homomorphism
Consider the quaternions $H=\{a+bi+cj+dk : a,b,c,d \in \mathbb{R}\}$ and the norm $\|h\|=\sqrt{h^*h}$, which gives a Lie group homomorphism between $H^*$ and $\mathbb R^*$. How can I find the Lie algebra ...
0answers
42 views
### Is there a general expression for the adjoint representation of $U(N)$ or $u(N)$?
At least for low values of $N$ like $2$ or $3$ and such I would like to know if there are explicit matrices known giving the representation of $u(N)$ or $U(N)$ in the adjoint? (..a related query: ...
0answers
44 views
### How to prove that a lie group is simply connected
I need to prove that $O(3,19)/SO(2)\times O(1,19)$ is simply connected. In particular $O(n_{+},n_{-})$ denotes the orthogonal group of $\mathbb{R}^{n_{+}+n_{-}}$ endowed with the diagonal quadratic ...
1answer
62 views
### Quaternions as group of rotation and scaling
It is very well known that unit quaternions are well suited to represent rotations in 3D. In particular, the group of unit quaternions forms a double cover of the special orthogonal group $SO(3)$. ...
0answers
27 views
### Dynkin diagrams
Let $\gamma$ be a triple-edged graph that is associated with an admissible set in a real inner product space. Please, how do I show that $\gamma$ is the Coxeter graph of the Dynkin diagram $G_2$?
1answer
38 views
### Lie algebra 3 Dimensional with 2 Dimensional derived lie algebra #2
I read the book of Karin Erdmann and Mark Wildon, "An Introduction to Lie Algebras". On page 22 they say that: If $\dim (L) = 3$, $\dim (L') = 2$ then (a) $L'$ is abelian and (b) \$\operatorname{ad} x ...
0answers
60 views
### Does the $O(n)$ bundle of a manifold depend on the metric?
Let $g_1$ and $g_2$ be two Riemannian metrics on a manifold $M$. These induce two $O(n)$ bundles on $M$, whose fibers over each point $x\in M$ are the groups of orthogonal transformations of $T_x M$ ...
0answers
31 views
### Detail in polar action
I am reading a paper "Tits geometry and positive curvature - Fang, Grove, and Thorbergsson" See the following site http://arxiv.org/pdf/1205.6222.pdf In page 7, the 9-th line from the bottom ...
1answer
77 views
### establishing an isomorphism
I request help with this question from Introduction to Lie Algebras by Erdmann and Wildon. The question asks to show that \$so(4,\mathbf{C})\cong sl(2,\mathbf{C}) \oplus ...
1answer
104 views
### $3\times 3$ symmetric matrix with signature $(2,1)$
I need to show the set of $3\times 3$ real symmetric matrices with signature $(2,1)$ is an open connected subset in the usual topology of $\mathbb{R}^6$. To show connectedness I did like the ...
0answers
63 views
### Product of all rotation matrices in $\mathrm{SO}(3)$
Out of curiosity, I'm trying to prove whether the product of all rotation matrices in $\mathrm{SO}(3)$ is the identity regardless of multiplication order. As each rotation matrix in $\mathrm{SO}(3)$ ...
1answer
33 views
### How can I show that $ASL_n(F)$ is acting 2-transitively?
One of my friends asked me to ask this question here. This is a question from his last exam: Let ASL_n(F)=\{T_{A,v}:V_n(F)\to V_n(F)\mid\exists A\in SL_n(F), \exists v\in V_n(F), ...
0answers
36 views
### Smooth Action of a Finite Group
Suppose $H$ is a finite group acting smoothly on a smooth connected manifold $M$. The action is trivially proper, as $H$ is discrete. If the action of $H$ were also known to be free, i.e. \$h\cdot ...
1answer
34 views
### Examples of Pansu differentiable maps
I would like to know some not-too-trivial examples of Pansu-differentiable maps between stratified groups (real ones, not $\mathbb{R}^n$, pun intended). For example, can anyone name a ...
2answers
68 views
### Elements of finite order in compact abelian Lie Group
If $G$ is a compact abelian Lie group, why does the $n$th power map from $G$ to $G$ form a finite covering? I cannot see why the kernel must be finite.
1answer
71 views
### Isometries from Diffeomorphisms
Given a compact, connected Lie group G of diffeomorphisms on a manifold M, how to construct a Riemannian metric on M such that elements of G are isometries of M?
1answer
261 views
### Given a group $G$, how many topological/Lie group structures does $G$ have?
Given any abstract group $G$, how much is known about which types of topological/Lie group structures it might have? Any abstract group $G$ will have the structure of a discrete topological group ...
2answers
75 views
### SO(5)-invariant metrics on the 4-sphere
Are there any examples of Riemannian metrics on $S^{4} \subset \mathbb{R}^{5}$ that are not SO(5)-invariant? Or are all metrics on the 4-sphere SO(5)-invariant? Hope my question is not too trivial ...
1answer
50 views
### Symmetry, change of variables
I am having trouble understanding a section in these notes. It is on page 3. Section 3 -- Discretization of the Korteweg-de Vries equation. I don't understand why $$V_4=x∂_x+3t∂_t-2u∂_u$$ generates a ...
1answer
67 views
### Quotient group $S^3/\{+I,-I\}$
How can I prove that the quotient group $S^3/\{+I,-I\}$ is isomorphic to $SO_3$ and that the group $S^3$ is not isomorphic to $SO_3$? Here $S^3$ is the subgroup of the quaternion group: ...
1answer
55 views
### Error in Weyl character formula computation.
I need someone with a keen eye for errors. I am trying to use the Weyl character formula for the symplectic group Sp$(4,\mathbb{C})$ on certain matrices coming from 2x2 quaternion matrices. Summing ...
1answer
66 views
### Dimension of the GL-orbit of d-forms in one less variable
Let $V:=k[x_0,\ldots,x_n]_d$ be the $k$-vector space of homogeneous polynomials of degree $d$. Let $G:=\mathrm{Gl}(n+1,k)$ act on $V$ induced by the canonical action on the linear forms: For ...
0answers
68 views
### Submanifold of a Lie group - tangent space
Let $G$ be a compact Lie group and $H, H' \leq G$ Lie subgroups. Consider the set $M = H' \cdot H = \{h\cdot h' \ \vert \ h \in H, h' \in H'\}$. Is it possible to describe explicitly the tangent space ...
0answers
35 views
### Group of affine transformation in plane is unimodular
I am trying to do an exercise in the book "Analysis on Lie Groups" as follows: Let $G$ be the group of all affine transformations in the plane, i.e. $G$ contains all the mappings of the form \$(x,y)\mapsto ...
0answers
124 views
### Integrating angular velocity to obtain orientation
Suppose that $\gamma:[0,1]\to \operatorname{SO}(3)$ is a path in the space of orientation preserving rotations of $\mathbb R^3$. It is classical that we can find a corresponding \$\omega:[0,1]\to ...
1answer
57 views
### A question about Lie algebras corresponding to Lie groups and algebraic groups
Lie groups and algebraic groups both correspond to Lie algebras, which are by definition the left-invariant vector fields. But the topologies of Lie groups and algebraic groups are different. Are their ...
2answers
129 views
### Lie algebra 3 Dimensional with 2 Dimensional derived lie algebra
I read in Mark Wildon's book, An Introduction to Lie Algebras, on page 22: Suppose that dim $L$ = 3 and dim $L'$ = 2. We shall see that, over $\mathbb{C}$ at least, there are infinitely many ...
http://mathoverflow.net/questions/43826/quadratic-reciprocity-in-characteristic-2
## Quadratic reciprocity in characteristic 2
Given a polynomial $a(t)\in F_2[t]$, for which irreducible $f(t)\in F_2[t]$ does the trace of $a(t)\bmod f(t)$ equal 0/1? By Artin reciprocity applied to the extension defined by $x^2-x-a(t)$ it should depend only on the residue of $f(t)$ modulo some conductor, but what does this reciprocity law look like explicitly for this case?
Is math.uconn.edu/~kconrad/blurbs/ugradnumthy/… of any help? (Sorry, I have absolutely no time to check myself.) – darij grinberg Oct 27 2010 at 16:32
Gee Darij, I hope it is. :) – KConrad Oct 27 2010 at 19:09
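While waiting for a reference, one can at least compute the data the question is about. The sketch below (the integer encoding and helper names are my own, purely for experimentation) packs $F_2[t]$ polynomials into bitmasks and evaluates the absolute trace $a + a^2 + \cdots + a^{2^{n-1}} \bmod f$ by repeated squaring. For the simplest case $a(t) = t$ the trace is just the coefficient of $t^{n-1}$ in $f$ (the sum of the roots of $f$), which is exactly the kind of explicit dependence on $f$ one would like to generalize.

```python
# F_2[t] polynomials as ints: bit i is the coefficient of t^i.
def pmod(a, f):
    """Reduce a modulo f in F_2[t]."""
    n = f.bit_length() - 1
    while a.bit_length() - 1 >= n:
        a ^= f << (a.bit_length() - 1 - n)
    return a

def pmulmod(x, y, f):
    """Multiply x*y modulo f in F_2[t] (shift-and-xor)."""
    n = f.bit_length() - 1
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if (x >> n) & 1:
            x ^= f
    return r

def trace(a, f):
    """Absolute trace of a mod f; returns 0 or 1 when f is irreducible."""
    s, p = 0, pmod(a, f)
    for _ in range(f.bit_length() - 1):
        s ^= p
        p = pmulmod(p, p, f)
    return s

# a(t) = t against a few small irreducibles (hardcoded for brevity).
for f in (0b111, 0b1011, 0b1101, 0b10011, 0b11001, 0b11111):
    print(f"f = {f:08b}  Tr(t mod f) = {trace(a=0b10, f=f)}")
```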