http://gowers.wordpress.com/2010/02/24/edp9-a-change-of-focus/
# Gowers's Weblog

Mathematics related discussions

## EDP9 — a change of focus

The discussion in the last thread has noticeably moved on to new topics. In particular, multiplicative functions have been much less in the spotlight. Some progress has been made on the question of whether the Fourier transform of a sequence of bounded discrepancy must be very large somewhere, though the question is far from answered, and it is not even clear that the answer is yes. (One might suggest that the answer is trivially yes if EDP is true, but that is to misunderstand the question. An advantage of this question is that there could in theory be a positive answer not just for $\pm 1$-valued functions but also for $[-1,1]$-valued functions with $L_2$ norm at least $c>0$, say.)

Another question that has been investigated, mostly by Sune, is the question of what happens if one takes another structure (consisting of "pseudointegers") for which EDP makes sense. The motivation for this is either to find a more general statement that seems to be true or to find a more general statement that seems to be false. In the first case, one would see that certain features of $\mathbb{N}$ were not crucial to the problem, which would decrease the size of the "proof space" in which one was searching (since now one would try to find proofs that did not use these incidental features of $\mathbb{N}$). In the second case, one would see that certain features of $\mathbb{N}$ were crucial to the problem (since without them the answer would be negative), which would again decrease the size of the proof space.

Perhaps the least satisfactory outcome of these investigations would be an example of a system that was very similar to $\mathbb{N}$ where it was possible to prove EDP. For example, perhaps one could find a system of real numbers $X$ that was closed under multiplication and had a counting function very similar to that of $\mathbb{N}$, but that was very far from closed under addition. That might mean that there were no troublesome additive examples, and one might even be able to prove a more general result (that applied, e.g., to $[-1,1]$-valued functions). This would be interesting, but the proof, if it worked, would be succeeding by getting rid of the difficulties rather than dealing with them. However, even this would have some bearing on EDP itself, I think, as it would be a strong indication that it was indeed necessary to prove EDP by showing that counterexamples had to have certain properties (such as additive periodicity) and then pressing on from there to a contradiction.

A question I have become interested in is understanding the behaviour of the quadratic form with matrix $A_{xy}=\frac{(x,y)}{x+y}$. The derivation of this matrix (as something to be interested in, in connection with EDP) starts with this comment and is completed in this comment. I wondered what the positive eigenvector would look like, and Ian Martin obliged with some very nice plots of it. Here is a link to where these plots start. It seems to be a function with a number-theoretic formula (that is, with a value at $n$ that strongly depends on the prime factorization of $n$ — as one would of course expect), but we have not yet managed to guess what that formula is. I now want to try to understand this quadratic form in Fourier space. That is, for any pair of real numbers $(\alpha,\beta)\in [0,1)^2$ I want to calculate $K(\alpha,\beta)=\sum_{x,y}A_{x,y}e(\alpha x-\beta y)$, and I would then like to try to understand the shape of the kernel $K$.
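For readers who want to experiment with the positive eigenvector themselves, here is a minimal numerical sketch (my own, under the assumption that a plain eigendecomposition of a finite truncation is adequate; this is not Ian Martin's code, and the truncation size is arbitrary):

```python
# A sketch: truncate the matrix A_{xy} = gcd(x, y)/(x + y) at size N and
# take its top eigenvector, which Perron-Frobenius lets us choose
# entrywise positive since all entries of A are positive.
import numpy as np
from math import gcd

N = 500  # truncation size, an arbitrary assumption
A = np.array([[gcd(x, y) / (x + y) for y in range(1, N + 1)]
              for x in range(1, N + 1)])

eigvals, eigvecs = np.linalg.eigh(A)  # A is symmetric
v = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
if v.sum() < 0:                       # fix the overall sign
    v = -v

# Compare the values at n with the prime factorization of n.
for n in (1, 2, 3, 4, 6, 8, 9, 12):
    print(n, v[n - 1])
```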
Now looking back at this comment, one can see that $\displaystyle \langle f,Af\rangle=\sum_d w_d\int_0^\infty\Bigl|\sum_{n=1}^\infty e^{-\theta n}f(dn)\Bigr|^2 d\theta.$ Since the bilinear form $K(\alpha,\beta)$ is determined by the quadratic form $K(\alpha,\alpha)$, I'll concentrate on the latter (which in any case is what interests me). So substituting $f(n)=e(\alpha n)$ into the above formula gives me $\displaystyle K(\alpha,\alpha)=\sum_d w_d\int_0^\infty\Bigl|\sum_{n=1}^\infty e^{2\pi i\alpha dn-\theta n}\Bigr|^2 d\theta.$ The infinite sum is a geometric progression, so this simplifies to $\displaystyle K(\alpha,\alpha)=\sum_d w_d\int_0^\infty\Bigl|\frac{e^{2\pi i\alpha d-\theta}}{1-e^{2\pi i\alpha d-\theta}}\Bigr|^2 d\theta.$ Note that for each $d$ the integrand is bounded unless $\alpha$ is a multiple of $1/d$, and more generally is small unless $\alpha$ is close to a multiple of $1/d$ and $\theta$ is close to 0. So we do at least have the condition of being close to a rational with small denominator making an appearance here. (Why small denominator? Because then there will be more $d$ such that $\alpha$ is a multiple of $1/d$.)

I plan to investigate the sequence 1, -1, 0, 1, -1, 0, … from this perspective. It takes the value $\frac 2{\sqrt{3}}\sin(2\pi n/3)$ at $n$. I shall attempt to understand from the Fourier side why this gives a sequence with such small discrepancy.

Before I finish this post, let me also mention a nice question of Alec's, or perhaps it is better to call it a class of questions. It's a little bit like the "entropy" question that I asked about EDP, but it's about multiplicative functions. The question is this: you play a game with an adversary in which you take turns assigning $\pm 1$ values to primes. You want the resulting completely multiplicative function to have as small discrepancy as you can, whereas your adversary wants the discrepancy (that is, the growth of the partial sums) to be large. How well can you do? One can ask many variants, such as what happens if your adversary is forced to choose certain primes (for instance, every other prime), or if your adversary's choices are revealed to you in advance (so now the question is what you can do if you are trying to make a low-discrepancy function but someone else has filled in half the values already and done so as badly as possible), or if you choose your values randomly, etc. So far there don't seem to be any concrete results, and yet it feels as though it ought to be possible to prove at least something non-trivial here.

One other question I'd like to highlight before I finish this post. It seems that we do not know whether EDP is true even if you insist that the HAPs have common differences that are either primes or powers of 2. The powers of 2 rule out all periodic sequences, but for a strange parity reason: for instance, if you have a sequence that's periodic with period 72, then along the HAP with common difference 8 it is periodic with period 9, which means that the sum along each block of 9 is non-zero (being a sum of nine $\pm 1$s, it is odd) and therefore the sums along that HAP grow linearly. Sune points out that the sequence $f(n)=\exp(2\pi i n/6)$ is a simple counterexample over $\mathbb{T}$, but it's not clear what message we can take from that, given that periodic sequences don't work in the $\pm 1$ case.
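The parity obstruction just described is easy to check numerically. Here is a small sketch (the random period-72 sequence is just an arbitrary test case, not anything from the discussion) confirming that every block of nine terms along the difference-8 HAP has an odd, hence non-zero, sum, so the partial sums drift linearly:

```python
# Check the parity argument: a +-1 sequence of period 72, viewed along
# the HAP with common difference 8, has period 9, and every block of
# nine +-1s sums to an odd (hence non-zero) number.
import random

period = [random.choice((1, -1)) for _ in range(72)]  # arbitrary test case
f = lambda n: period[(n - 1) % 72]

hap = [f(8 * k) for k in range(1, 1000)]               # terms along the HAP
blocks = [sum(hap[i:i + 9]) for i in range(0, 990, 9)]

print({s % 2 for s in blocks})                         # {1}: every block sum is odd
print(max(abs(sum(hap[:m])) for m in range(1, 1000)))  # grows linearly with m
```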
I like this question, because finding a counterexample should be easier if there is one, and if there isn't, then proving the stronger theorem might be easier because HAPs with prime common differences are "independent" in a nice way.

Update: I intended, but forgot, to mention also some interesting ideas put forward by Gil in this comment. He has a proposal for trying to use probabilistic methods, and in particular methods that are suited to proving that rare events have non-zero probability when there is sufficient independence around, to show that there are many sequences with slow-growing discrepancy. It is not clear at this stage whether such an argument can be made to work, but it seems very likely that thinking about the problems that arise when one attempts to make it work will be fruitful.

This entry was posted on February 24, 2010 at 12:49 pm and is filed under polymath5.

### 108 Responses to "EDP9 — a change of focus"

1. Sune Kristian Jakobsen Says: February 24, 2010 at 2:34 pm | Reply

I just realized that we have some very integer-like pseudointegers where EDP is false, namely the set of integers not divisible by $p$ for some prime $p$ (the sequence is the Legendre character). I know that this is nothing new; we already knew that these examples told us something about what a proof of EDP could look like, but I didn't think of them as pseudointegers (actually I did once, but the two parts of my brain that knew that "the set of integers not divisible by p can have bounded discrepancy" and "the set of integers not divisible by p is a set of pseudointegers" were too far from each other). I think these examples have discrepancy at most about $\sqrt{p}$ (if I remember correctly), and their density is $d=1-1/p$. The next question is: can we find pseudointegers with density 1 and bounded discrepancy? Or a weaker question: can we find sets of pseudointegers with density arbitrarily close to 1 and discrepancy $<C$ for some fixed $C$?

• Sune Kristian Jakobsen Says: February 24, 2010 at 4:51 pm

I think the following gives a set of pseudointegers with density 1 and a multiplicative sequence with discrepancy 2. Start with the set of integers not divisible by 3. This has discrepancy 1 (I'm using the 1, -1, 0, 1, -1, 0, … sequence) and density 2/3, so we need to increase the density. In order to do so, we add two primes $a$ and $b$ such that $a<b$ but $b-a$ is infinitesimal (so they are not a subset of $\mathbb{R}$; more about this later. I think of $a$ as a real number). We define $x_a=1$ and $x_b=-1$ and let multiplicativity define the rest. So for any $n\in \mathbb{N}$ not divisible by 3 the numbers $na$ and $nb$ are close and $x_{na}$ and $x_{nb}$ have opposite signs, so these don't increase the discrepancy to more than 2. The only problem is at powers of $a$: around $a^2$ we have the numbers $a^2,ab,b^2$ with $x_{a^2}=1$, $x_{ab}=-1$, $x_{b^2}=1$, so we "create" a new prime $c>b^2$ such that $c-a^2$ is infinitesimal, but infinitely larger than $b-a$. Now everything is fine for $a^3$: here $x_{a^3}=1$, $x_{a^2b}=-1$, $x_{ab^2}=1$, $x_{b^3}=-1$, $x_{ac}=-1$, $x_{bc}=1$. But at $a^4$ we get into trouble again, so we need to add a new prime $d>c^2$. Continuing this way, we get a set of pseudointegers with bounded discrepancy. Let's calculate the density. To begin with we have the set of integers not divisible by 3. This has density 2/3.
The set of numbers $an$ or $bn$, where $n$ is an integer not divisible by 3, has density $\frac{2}{3}\frac{2}{a}$. The set of numbers of the form $aan$, $abn$, $bbn$ or $cn$ has density $\frac{2}{3}\frac{4}{a^2}$, and so on. Now the whole set has density $d=\frac{2}{3}\left(1+\frac{2}{a}+\frac{4}{a^2}+\frac{6}{a^3}+\frac{10}{a^4}+\dots\right)$, where the numerators are A000123 (number of partitions of $2n$ into powers of 2). Since the sequence of numerators grows slower than some geometric sequence (e.g. $2^{2n-1}=4^n/2$, using that it is less than the number of ordered partitions of $2n$), we can choose $a$ so that the density is 1. This is of course not a subset of the real numbers, as I used infinitesimals, but if we just choose very small numbers instead, I think we can correct the errors when they arise. I think this shows that a proof of EDP must somehow use the additive structure of the integers.

• Sune Kristian Jakobsen Says: February 24, 2010 at 5:01 pm

@Tim: "Perhaps the least satisfactory outcome of these investigations would be an example of a system that was very similar to $\mathbb{N}$ where it was possible to prove EDP." I'm not sure I understand you correctly. Do you mean that this wouldn't be interesting, or just that we should at least be able to get this result? I think that finding a set of pseudointegers where we can prove EDP is the only interesting pseudointeger question left (when I say interesting I mean with relevance to EDP). Perhaps I have forgotten some questions.

• gowers Says: February 24, 2010 at 6:38 pm

It was a slightly peculiar thing for me to say, because I changed my mind as I was writing it. Initially I thought that finding such a proof wouldn't shed any light on EDP itself, but by the end of the comment I had started to realize that it could do after all.

2. Jason Dyer Says: February 24, 2010 at 3:49 pm | Reply

Going back to my graph-theoretic construction, I wanted to include a meaning for "completely multiplicative". This likely could be more elegant; I apologize for the mess. Define everything as in the comment, including the 4 extra conditions (it might be possible to do without some, but I haven't had time to think about it). I will also call the set of edges from condition #1 the omega set, and the root node of that set the alpha node. A node is called prime if its in-degree is 1; that is, its only incoming edge is from the omega set. A node is a power if its in-degree is 2. Consider all incoming edges to a particular node; trace the edges backwards to their root nodes. These are the divisors of the node. Here is a prime factorization algorithm for the node n:

1. List the k divisors of n (excluding the alpha node) $d_1, d_2, \dots, d_k$. Call the path of a divisor the traversal going from the divisor to n. Given any divisor a that has divisor b in its path, remove a from the list.
2. For any divisor on the list $d_1, d_2, \dots, d_k$ that is not prime and is not a power, connect the divisors of each of $d_1, d_2, \dots, d_k$ as a branching tree (again excluding the alpha node); again, given any divisor a that has divisor b in its path, remove a from the list. For any divisor on the list that is a power, connect its divisor c that is not the alpha node a multiple number of times, $m+1$, where m is the number of times powers occur in the traversal between the divisor of c and c.
3. Repeat #2 until all divisors on the bottom of the tree are prime.
4. The set of divisors at the bottom of the tree is the prime factorization of n.
So, a graph is completely multiplicative if multiplying the values of all the nodes in the prime factorization of a node n (including repeated nodes as necessary) gives the value of the node n.

• Jason Dyer Says: February 24, 2010 at 5:25 pm

There's a bug in the algorithm: if n is already a power, it should jump to the "for any divisor that is a power on the list" process. Also, an extra condition #5 should be added if we want multiplication to be a function: each node can be the root node of only one labelled set of directed edges.

3. Klas Markström Says: February 24, 2010 at 4:47 pm | Reply

Regarding Gil's probabilistic approach http://gowers.wordpress.com/2010/02/19/edp8-what-next/#comment-6278 it might be a good idea to consider separate "bad" events for "sum is less than $-C$" and "sum is greater than $C$". The "less than"-events along a given AP are clearly pairwise positively correlated, and likewise for the "greater than"-events. However, the "less than"-events and "greater than"-events along an AP are negatively correlated. To me, mixing the two types of failure for the discrepancy bound seems to make things harder to keep track of. I guess I'm going more in the direction of the Janson inequalities than the local lemma here.

• Gil Says: February 26, 2010 at 11:09 am

I think what I have in mind is to choose the locations of the zeros of the partial sums. We need to be able to show (1) that we can locate the zeroes of the partial sums where we want them, and then (2) that, conditioned on such locations, the probabilities for "sum is greater than C" or "sum is smaller than -C" are very small. For part (2) the probabilities may perhaps be small enough that we can use union bounds and not worry about dependencies. For the location-of-zeroes part (1), we certainly need to be able to handle dependencies. And it seems that if we want to go below $\sqrt{\log n}$ discrepancy we will need to exploit positive dependencies (for the events of vanishing partial sums along intervals). Indeed, Janson's inequalities may be quite relevant.

4. gowers Says: February 24, 2010 at 7:39 pm | Reply

A couple of observations that will, I hope, make the calculations in the post make a bit more sense. First, note that $\int_0^\infty\frac{e^{-2\theta}}{(1-e^{-\theta})^2}d\theta$ is infinite, since the integrand behaves like $\theta^{-2}$ as $\theta\to 0$. Therefore, $K(\alpha,\alpha)$ blows up when $\alpha$ is equal to a rational number. Next, note that the bilinear form derived from $K(\alpha,\alpha)$ is given by the formula $\displaystyle K(\alpha,\beta)=\sum_dw_d\int_0^\infty \frac{e^{2\pi i\alpha d-\theta}}{1-e^{2\pi i\alpha d-\theta}}\frac{e^{-2\pi i\beta d-\theta}}{1-e^{-2\pi i\beta d-\theta}}d\theta.$ Now let's continue to write $e(x)$ for $e^{2\pi i x}$, and let us also write $c(x)=\cos(2\pi x)=(e(x)+e(-x))/2$ and $s(x)=\sin(2\pi x)=(e(x)-e(-x))/2i$. Our basic example of a bounded-discrepancy sequence is (up to a constant multiple) the sequence $s(n/3)$, which is begging to be understood from a Fourier perspective. Let us call this function $s_3$ and let $e_3(n)=e(n/3)$.
Now $\langle s_3,As_3\rangle=\langle e_3-\overline{e_3},A(e_3-\overline{e_3})\rangle/4$, which equals $\displaystyle (1/4)(K(1/3,1/3)+K(-1/3,-1/3)-K(1/3,-1/3)-K(-1/3,1/3)),$ which, because of easy symmetry properties of $K$, is equal to $\frac 14\sum_dw_d\int_0^\infty \Bigl(\frac{2e^{-2\theta}}{|1-e^{2\pi i\alpha d-\theta}|^2}-\frac{e^{4\pi i\alpha d-2\theta}}{(1-e^{2\pi i\alpha d-\theta})^2}-\frac{e^{-4\pi i\alpha d-2\theta}}{(1-e^{-2\pi i\alpha d-\theta})^2}\Bigr)d\theta,$ where $\alpha=1/3$ (but I'd rather keep it as $\alpha$, as the argument is perfectly general). Every time $\alpha d$ is an integer, this gives us zero, and when $\alpha d$ is not an integer it gives us something fairly small. So in this way we (sort of) see the bounded average discrepancy coming about. I still need to think more about this calculation before I can say I fully understand it, or be confident that it isn't rubbish in some way.

5. Kristal Cantwell Says: February 24, 2010 at 9:27 pm | Reply

If player one and player two alternate assigning signs to primes, with one of them having the objective of forcing a discrepancy of four, then the player trying to force a discrepancy of four will always win.

If player one and player two assign values to the primes of a multiplicative function, player one is trying to force a discrepancy of 4, and player two is trying to prevent this, then player one will win. Player one assigns 2 the value 1; then player two must assign 3 the value -1, or else the sum from 1 to 4 will be 4 once 3 has been assigned the value 1. Then player one assigns 5 the value 1, and since f(1), f(2), f(4), f(5), f(8), f(9) and f(10) are positive, the sum at 10 is at least 4 and we are done.

If player one and player two assign values to the primes of a multiplicative function, player two is trying to force a discrepancy of 4, and player one is trying to prevent this, then player two will win. First we will show that the first player must choose 2. If player one does not choose this, then let the second player choose 2 and assign it the value 1 on player two's first move. Then the first two moves of player one must be assigning 3 and 5 the values -1. If not, then player two will assign one of these the value 1 on player two's second move, and either f(1), f(2), f(4), f(5), f(8), f(9) and f(10) are positive, or f(1), f(2), f(3) and f(4) are positive, and in either case the discrepancy is 4. So the first player must assign f(2) the value -1 on the first player's first move. Let the second player assign f(3) the value 1 on the second player's first move. Now f(1), f(3), f(4), f(5), f(9), f(12), f(15), f(16) are positive and one of f(7) and f(14) is positive. So if the first player does not assign 11 and 13 the value -1 on the second and third moves, then the second player can assign one of them the value 1, and f(1), f(3), f(4), f(5), f(9), f(12), f(15), f(16), one of f(7) and f(14), and the number chosen make 10 positive values less than or equal to 16, which gives a discrepancy of four. So the second and third moves of the first player are choosing 11 and 13 and assigning them the value -1. The second player assigns 7 the value 1 on the third move. Now f(1), f(3), f(4), f(5), f(9), f(12), f(15), f(16), f(7), f(20), f(21), f(22), f(25), f(26), f(27) and f(28) are positive and the discrepancy is at least 4 at 28, and we are done.

6. Polymath5 « Euclidean Ramsey Theory Says: February 24, 2010 at 9:45 pm | Reply

[...] Polymath5 By kristalcantwell There is a new thread for Polymath5. The pace seems to be slowing down a bit. Let me update this: there is another thread. [...]
7. gowers Says: February 25, 2010 at 11:55 am | Reply

I want to follow up on this comment with some rather speculative ideas about why EDP might be true. The annoying example we keep coming up against is 1, -1, 0, 1, -1, 0, …, which, as I have already pointed out, is given by the formula $f(n)=(2/\sqrt{3})s(n/3)$, where $s(x)$ stands for $\sin(2\pi x)$. Now perhaps something like the following is true. Given any $\pm 1$ sequence (or indeed any sequence) we can decompose it as a linear combination of sequences of the form $c(\alpha n)$ and $s(\alpha n)$, where $\alpha$ is rational. A simple proof is as follows. We can produce the characteristic function of the HAP with common difference $d$ as the sum $d^{-1}(1+e(n/d)+e(2n/d)+\dots+e((d-1)n/d))$. And we can produce any function of the natural numbers as a linear combination of HAPs, just by solving an infinite system of linear equations that's in triangular form. (Basically that is the proof of the Möbius inversion formula.)

Let's gloss over the fact that the decomposition is far from unique. It means that what I am about to say is not correct as it stands, but my hope is that it could be made correct somehow. On the face of it, using cosines in a decomposition is likely to lead to large discrepancies, because not only are they periodic, but they are at their maximum at the end of each period, so the HAP whose common difference is that period (which is an integer if we are looking at $c(\alpha n)$ for some rational $\alpha$) is being given a linear discrepancy that we would hope could not be cancelled out. (This is where we would need a much more unique decomposition, since as things stand it can be cancelled out.) So perhaps one could prove that for a $\pm 1$ sequence to have any chance of having small discrepancy, it has to be built out of sines rather than cosines.

Now the problem with sines is that they keep being zero. So one could ask the following question: how large a sum of coefficients do you need if you want to build a $\pm 1$ sequence out of sines? I think the answer to this question may be that to get values of $\pm 1$ all the way up to $N$ requires coefficients that sum to at least $c\log N$. My evidence for that is the weak evidence that the obvious way of getting a $\pm 1$ sequence does give this kind of logarithmic growth. Here is how it works. I know that $s(n/3)$ gives me $\pm 1$ values (after normalization) except at multiples of 3, where it gives me zero. To deal with multiples of 3, I first create the HAP with common difference 3 by taking $(1+c(x/3)+c(2x/3))/3$ (which works because it's the real part of $(1+e(x/3)+e(2x/3))/3$, which also works). That, however, is not allowed because I've used cosines. So I'll multiply it by $s(x/9)$. The addition formulae for trigonometric functions tell us that $s(\alpha x)c(\beta x)=(s((\alpha+\beta)x)+s((\alpha-\beta)x))/2$, so this results in a sum of sines with coefficients adding up to 1 (or $2/\sqrt{3}$ when we do the normalization). But this new sequence has gaps at multiples of 9. Continuing this process, we find that for each power of 3 we need coefficients totalling 1 (or $2/\sqrt{3}$) to fill the gaps at multiples of that power, which gives us a logarithmic bound.

So the following might be a programme for proving EDP, which has the nice feature of using the $\pm 1$ hypothesis in a big way.

1. Show that any sequence that involves cosines in a significant way must have unbounded discrepancy. (One might add functions such as $s(\alpha x)$ where $\alpha$ is irrational.)
2. Show that any sequence involving sines must have a discrepancy at least as big as the sum of the coefficients of those sines.
3. Show that to make a $\pm 1$ sequence out of sines one must have coefficients that grow at least logarithmically.

As I say, before even starting to try to prove something like this, one would need to restrict the class of possible decompositions, so some preliminary thought is required before one can attempt anything like 1 or 2. Can anyone come up with a precise conjecture that isn't obviously false? As for 3, it may be that one can already attempt to prove what I suggested above, that to get a $\pm 1$ sequence all the way up to $N$ requires coefficients with absolute values that sum to at least $c\log N$.

• Sune Kristian Jakobsen Says: February 25, 2010 at 7:03 pm

A small remark: if we only show (2) for finite sums of sines, it might still be possible for an infinite sum of sines whose coefficients don't converge to have bounded discrepancy. A sequence with bounded discrepancy can easily be written as a limit of sequences with unbounded discrepancy.

• gowers Says: February 25, 2010 at 7:07 pm

I agree. For an approach like this to work there definitely needs to be some extra ingredient, which I do not yet see (and it is not clear that it even exists), that restricts the way you are allowed to decompose a sequence.

• Sune Kristian Jakobsen Says: February 25, 2010 at 9:25 pm

This sounds interesting, but I have to ask: do we have any reason to prefer sine functions to characteristic functions of APs?

• gowers Says: February 25, 2010 at 9:52 pm

I've wondered about that, and will try to come up with a convincing justification for thinking about sines. However, that doesn't mean that characteristic functions of APs shouldn't be tried too. If we did, then perhaps the idea would be to show that using HAPs themselves is a disaster, and using other APs requires too many coefficients, or something like that. On a not quite related note, does anyone know whether EDP becomes true if for each $p$ you are allowed to choose some shift of the HAP with common difference $p$? (We considered a related problem earlier, but here I do not insist that the shifts are consistent: e.g., you can shift the 2-HAP by 1 and the 4-HAP by 2.)

• Gil Says: February 26, 2010 at 10:54 am

I suppose the following concrete questions are relevant:

Conjecture 1: If a sequence of +1, -1, 0 has bounded discrepancy then, except for a set of density 0, it is periodic.

I do not know a counterexample to:

Conjecture 2: If a multiplicative sequence of +1, -1, 0 has bounded discrepancy then it is periodic.

A periodic sequence with period $r$ has bounded discrepancy if, for every $d$ which divides $r$, the sum of the elements with indices $di$ is 0. (In particular, $x_r=0$.) So it makes sense to check the suggested conjecture 3, or fine-tune it, on such periodic sequences.

8. Gil Says: February 25, 2010 at 4:29 pm | Reply

How about the periodic sequence whose period is 1 -1 1 1 -1 -1 1 -1 0?

• gowers Says: February 25, 2010 at 5:15 pm

• Gil Says: February 25, 2010 at 6:02 pm

I was just curious about the sine decomposition (or sine/cosine decomposition) for the basic sequence $\mu_3$ and for the "r-truncated $\mu_3$", where you make the sequence 0 at integers divisible by $3^r$. (You mentioned the r=1 case and I wondered especially about r=2.)

• gowers Says: February 25, 2010 at 6:13 pm

I would decompose it by the method I mentioned above.
That is, at the first stage I obtain the function 1 -1 0 1 -1 0 … as a multiple of $s(n/3)$. I then obtain 0 0 1 0 0 1 0 0 1 … by taking $(1+c(n/3)+c(2n/3))/3$. I then convert that into 0 0 1 0 0 -1 0 0 0 … by pointwise multiplying it by $s(n/9)$ (or rather $\frac{2}{\sqrt{3}}s(n/9)$). By the addition formulae for sin and cos, that gives you the periodic sequence 1 -1 1 1 -1 -1 1 -1 0 recurring as a sum of sines, where the total sum of coefficients is $4/\sqrt{3}$.

• Gil Says: February 25, 2010 at 6:16 pm

So part 2) of the conjecture works OK for such truncations?

9. Moses Charikar Says: February 25, 2010 at 7:57 pm | Reply

I wanted to mention an approach to proving a lower bound similar to some of the ideas being discussed here. Consider the generalization of EDP to sequences of unit vectors. The problem is to find unit vectors $v_i$ so that $\max_{k,d} \{||v_{d} + v_{2d} + \ldots + v_{kd}||_2^2\}$ is bounded. For sequences of length $n$, this leads to the optimization problem: find unit vectors $v_1,\ldots,v_n$ so as to minimize $\max_{k,d} \{||v_{d} + v_{2d} + \ldots + v_{kd}||_2^2\}$. This can be expressed as a semidefinite program (SDP), a convex optimization problem that we know how to solve to any desired accuracy. Moreover, there is a dual optimization problem such that any feasible solution to the dual gives us a lower bound on the value of the (primal) SDP, and the optimal values of both are equal.

The dual problem to this SDP is the following: find values $c_{k,d}, b_i$ so as to maximize $\sum b_i$ such that $\sum_{k,d} c_{k,d} \leq 1$ and the quadratic form $\sum_{k,d} c_{k,d}(x_{d}+x_{2d}+\ldots+x_{kd})^2 - \sum_i b_i x_i^2$ is positive semidefinite. The semidefinite program is usually referred to as a relaxation of the original optimization question over $\pm 1$ variables. Note that any feasible dual solution gives a valid lower bound for the original $\pm 1$ question. This is easy to see directly. Suppose that $\sum_{k,d} c_{k,d}(x_{d}+x_{2d}+\ldots+x_{kd})^2 - \sum_i b_i x_i^2 \geq 0$ for all $x_1,\ldots,x_n$. In particular, it is non-negative for any $x_i \in \{\pm 1\}$, and for such values $\sum_{k,d} c_{k,d}(x_{d}+x_{2d}+\ldots+x_{kd})^2 \geq \sum_i b_i$. Since $\sum_{k,d}c_{k,d} \leq 1$, there must be some $k,d$ such that $(x_d + x_{2d} + \ldots + x_{kd})^2 \geq \sum b_i$.

Now what is interesting is that for the vector discrepancy question for sequences of length $n$, there is always a lower bound of this form that matches the optimal discrepancy. If the correct answer for vector discrepancy was a slowly growing function of $n$, and if we could figure out good values for $c_{k,d}$ and $b_i$ to prove this, we would have a lower bound for EDP. Now the nice thing is that we can actually solve these semidefinite programs for small values of $n$ and examine their optimal solutions to see if there is any useful structure. Huy Nguyen and I have been looking at some of these solutions. Firstly, here are the optimal values of the SDP for some values of $n$:

n = 128: 0.53683
n = 256: 0.55467
n = 512: 0.56981
n = 1024: 0.58365
n = 1500: 0.59064

(I should mention that we excluded HAPs of length 1 from the objective function of the SDP, otherwise the optimal values would be trivially at least 1.) The discrepancy values are very small, but the good news is that they seem to be growing with $n$, perhaps linearly with $\log n$. The bad news is that it is unlikely we can solve much larger SDPs. (We haven't been able to solve the SDP for 2048 yet.)
The main point of this was to say that there seems to be significant structure in the dual solutions of these SDPs which we should be able to exploit if we understand what's going on. One pattern we discovered in the $c_{k,d}$ values is that the sums of tails of this sequence (for fixed $d$) seem to drop exponentially. More specifically, if $t_{j,d} = \sum_{k\geq j} c_{k,d}$ then $t_{j,d} = C_d e^{-\alpha jd}$ (approximately). It looks like the scaling factor $C_d$ is dependent on $d$, but the factor $\alpha$ in the exponent is not. The $b_i$ values in the dual solution also seem to have interesting structure. The values are not uniform and tend to be higher for numbers with many divisors (not surprising, since they appear in many HAPs). We should figure out the easiest way to share these SDP dual solutions with everyone so others can play with the values as well.

• gowers Says: February 26, 2010 at 10:19 am

This looks like a very interesting idea to pursue. One aspect I do not yet understand is this. It is crucial to EDP that we look at functions that take values $\pm 1$ and not, say, sequences that take values in $[-1,1]$ and are large on average. For example, the sequence 1, -1, 0, 1, -1, 0, … has bounded discrepancy. In your description of the dual problem and how it directly gives a lower bound, it seems to me that any lower bound would also be valid for this more general class of sequences. But perhaps I am wrong about this. If so, then where is the $\pm 1$ hypothesis being used? If it was in fact not being used, that does not stop what you wrote being interesting, since one could still try to find a positive semidefinite matrix and try to extract information from it. For example, it might be that the sequences that one could build out of eigenfunctions with small eigenvalues had properties that meant that it was not possible to build $\pm 1$-valued sequences out of them. (This is the kind of thing I was trying to think about with the quadratic form mentioned in the post.) I have edited your comment and got rid of all the typos I can, but I can't quite work out what you meant to write in the formula $t_{j,d}=C_de^{-\alpha j,d}$.

• Moses Charikar Says: February 26, 2010 at 12:55 pm

Tim, the $\pm 1$ hypothesis is being used in placing the constraint $v_i \cdot v_i = 1$ in the SDP. So a sequence with $\pm 1$ would be a valid solution but one with $\{\pm 1, 0\}$ would not. The variables $b_i$ correspond to this constraint. In fact, it seems sufficient to use the constraint $v_i \cdot v_i \geq 1$ (the optimal values are almost the same). In this case, the dual variables $b_i \geq 0$. In some sense, the value of $b_i$ is a measure of how important the constraint $v_i \cdot v_i \geq 1$ is. While $\{\pm 1,0\}$ sequences are not valid solutions, distributions over such sequences which are large on average on every coordinate are valid solutions to the SDP. So a lower bound on the vector question means that any distribution on sequences which is large on average on every coordinate must contain a sequence with high discrepancy. The expression $t_{j,d} = C_d e^{-\alpha j,d}$ should not have a comma in the exponent, i.e. it should read $t_{j,d} = C_d e^{-\alpha jd}$. Sorry for all the typos! I wish I could visually inspect the comment before it posts (or edit it later).

• gowers Says: February 26, 2010 at 1:32 pm

Ah, I see now. This is indeed a very nice approach, and I hope that we will soon be able to get some experimental investigations going and generate some nice pictures.
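As a hedged illustration of how the relaxation might be set up (the tooling is my assumption; Moses and Huy do not say what solver they used, and the instance size below is deliberately tiny), here is the primal SDP in cvxpy, phrased in terms of the Gram matrix $G_{ij}=v_i\cdot v_j$ with length-1 HAPs excluded as described above:

```python
# Minimize t subject to ||v_d + v_{2d} + ... + v_{kd}||^2 <= t over all
# HAPs of length >= 2, with the unit-vector constraint G_ii = 1.
import cvxpy as cp
import numpy as np

n = 30                                 # small instance; the post used up to 1500
G = cp.Variable((n, n), PSD=True)      # Gram matrix of the unit vectors
t = cp.Variable()

constraints = [cp.diag(G) == 1]        # the v_i are unit vectors
for d in range(1, n // 2 + 1):
    for k in range(2, n // d + 1):     # length-1 HAPs excluded, as in the post
        e = np.zeros(n)
        e[[j * d - 1 for j in range(1, k + 1)]] = 1.0
        # ||sum_j v_{jd}||^2 = e^T G e, which is linear in G
        constraints.append(cp.sum(cp.multiply(np.outer(e, e), G)) <= t)

prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve()
print(prob.value)   # squared vector discrepancy of the best length-n sequence
```

The dual values $c_{k,d}$ and $b_i$ discussed above would then be read off from the optimal dual variables attached to these constraints.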
• Moses Charikar Says: February 26, 2010 at 1:56 pm

I mentioned that it looks like the optimal values of the SDP seem to be growing linearly with $\log n$. If true, this would establish a lower bound of $\sqrt{\log n}$ on the discrepancy of $\pm 1$ sequences. This is because for the vector problem, the objective function is actually the square of the discrepancy for integer sequences. The analog of integer discrepancy would be to look at $||v_{d} + v_{2d} + \ldots + v_{kd}||_2$, but in fact the objective function of the SDP is the maximum of $||v_{d} + v_{2d} + \ldots + v_{kd}||_2^2$. In fact the growth rate for vector sequences could be different from the growth rate for integer sequences. I think we can construct sequences of unit vectors of length $n$ such that the maximum value of $||v_{d} + v_{2d} + \ldots + v_{kd}||_2$ is $\sqrt{\log n}$. This can be done using the familiar 1,-1,0,1,-1,0,… sequence: construct the vectors one coordinate at a time. Every vector in the sequence will have a 1 or -1 in exactly one coordinate and 0's elsewhere. In the first coordinate we place the 1,-1,0,1,-1,0,… sequence. Look at the subsequence of vectors with 0 in the first coordinate. For these, we place the 1,-1,0,1,-1,0,… sequence in the second coordinate. Now look at the subsequence of vectors with 0's in the first two coordinates. For these, we place the 1,-1,0,1,-1,0,… sequence in the third coordinate, and so on. All unspecified coordinate values are 0. The first $n$ vectors in this sequence have non-zero coordinates only amongst the first $\log_3 n$ coordinates. Now for any HAP, the vector sequence has bounded discrepancy in every coordinate. Thus the maximum of $||v_{d} + v_{2d} + \ldots + v_{kd}||_2$ for the first $n$ vectors is bounded by $\sqrt{\log n}$.

• gowers Says: February 26, 2010 at 3:04 pm

I like that observation. One could even think of it as a multiplicative function as follows. We take the function from $\mathbb{N}$ to $L_\infty(\mathbb{T})$ defined by the following properties: (i) if $n$ is congruent to $\pm 1$ mod 3, then $f(n)$ is the constant function $\pm 1$; (ii) $f(3)$ is the function $z$; (iii) $f$ is completely multiplicative (where multiplication in $L_\infty(\mathbb{T})$ is pointwise). To be more explicit about it, to calculate $f(n)$ you write $n$ as $m3^k$, where $m$ is congruent to $\pm 1$ mod 3, and $f(n)$ is then the function $z\mapsto\pm z^k$.

• gowers Says: February 26, 2010 at 3:35 pm

Out of curiosity, let me assume that $c_{k,d}$ is given by a formula of the kind $C_d e^{-\alpha kd}$ (which would give the right sort of behaviour for the tails). What does that give us for $\sum_{k,d}c_{k,d}(x_d+\dots+x_{kd})^2$? Well, if $m\ne n$ then $x_mx_n$ is counted $2\sum c_{k,d}$ times, where the sum is over all common factors $d$ of $m$ and $n$ and over all $k$ that exceed $\max\{m/d,n/d\}$. For each fixed $d$, … OK, I'll change to assuming that the tail of the sum of the $c_{k,d}$ is given by that formula, so we would get $C_d\exp(-\alpha\max\{m,n\})$, and then we'd sum that over all $d$ dividing $(m,n)$. Now let me choose, purely out of a hat, to go for $C_d=\phi(d)$, so that when we sum over $d$ we get $(m,n)\exp(-\alpha\max\{m,n\})$. This is not a million miles away from what I was looking at before, but now the task is much clearer. We don't have to worry about the fact that some functions like 1, -1, 0, 1, -1, 0, … have bounded discrepancy.
Rather, we must find some sequence $b_1,b_2,\dots$ such that subtracting the corresponding diagonal matrix leaves us with something that's still positive semidefinite, in such a way that the sum of the $b_i$ is large. I haven't checked in the above what the sum of the $c_{k,d}$ is, so I don't know how large the sum of the $b_i$ has to be. But, for those who share the anxiety I had earlier, the way we deal with the problem of the bounded-discrepancy sequences is that the sequence $b_i$ will tend to be bigger at numbers with many prime factors, to the point where, for example, the sum of the $b_i$ such that $i$ is not a multiple of 3 will be bounded.

Here's a simple toy problem, but it could be a useful exercise. Find a big supply of positive sequences of real numbers $(b_i)$ such that $\sum_ib_i=\infty$ but for every $m$ the sum of all $b_i$ such that $i$ is not a multiple of $m$ is finite. I've just found one to get the process going: take $b_n$ to be 1 if $n=m!$ for some $m$ and 0 otherwise. So the question is to find more interesting examples, or perhaps even something closer to a characterization of all such sequences.

• gowers Says: February 26, 2010 at 4:05 pm

The more I think about this the more I look forward to your sharing the values of the solutions to the SDP dual problem that you have found, especially if they can also be represented visually. You've basically already said this, but what excites me is that we could then perhaps make a guess at some good choices for the $c_{k,d}$ and the $b_i$ and end up with a much more tractable-looking conjecture than EDP — namely, that some particular matrix is positive semidefinite.

• Alec Edgington Says: February 26, 2010 at 6:52 pm

Regarding the toy problem, here's a slight generalization of your example: let $K_m$ ($m \geq 1$) be any sequence of finite non-empty subsets of $\mathbb{N}$, and let $c_m \geq 0$ be any sequence such that $\sum_m c_m = \infty$. Then the sequence $\sum_m c_m \chi_{m! K_m}$ satisfies the condition (where $\chi_S$ is the characteristic function of $S$).

• Alec Edgington Says: February 26, 2010 at 9:07 pm

To generalize a bit further: let $\beta_{m,r}$ ($m, r \geq 1$) be a matrix of non-negative reals such that $\sum_m \beta_{m,1} = \infty$ and $\sum_r \beta_{m,r} < \infty$ for all $m$. Then let $b_n = \beta_{m,r}$ where $n = m! r$ with $m$ maximal.

• gowers Says: February 26, 2010 at 11:48 pm

One can greedily create such sequences as follows. First, choose a sequence $(a_n)$ of positive reals that sums to infinity. Next, arbitrarily choose a sequence with finite sum that takes the value $a_1$ somewhere, and put it down on the odd numbers. That takes care of the non-multiples of 2. Now we take care of the non-multiples of 3 by choosing a sequence that has finite sum and takes the value $a_2$ somewhere and placing it at the points that equal 2 or 4 mod 6 (that is, the non-multiples of 3 that have not had their values already assigned). Next, we deal with non-multiples of 5 by choosing values for numbers congruent to 6, 12, 18 or 24 mod 30 … and so on.

• Huy Nguyen Says: February 27, 2010 at 4:47 am

I have put some summary data from the SDP solutions at http://www.cs.princeton.edu/~hlnguyen/discrepancy/discrepancy.html The data mostly focus on the tails rather than the $c_{k,d}$.

• Moses Charikar Says: February 27, 2010 at 7:24 am

This comment got posted in the wrong spot earlier. Please delete the other copy. The dual solutions for n = 512, 1024, 1500 and the corresponding positive semidefinite matrices are available here.
Double click on a file to download it. The files dual*.txt have the following format: lines beginning with "b" specify the $b_i$ values ("b i $b_i$"), and lines beginning with "t" specify the tails $t_{k,d}$ ("t k d $t_{k,d}$"). The files matrix*.txt have the following format: the ith line of the file contains the entries of the ith row of the matrix.

10. gowers Says: February 26, 2010 at 12:23 pm | Reply

Gil, I meant to mention your ideas about creating low-discrepancy sequences probabilistically in my latest post, but forgot. I have now updated it. I would like to bring that particular discussion over to here, which is why I am writing this comment.

I am trying to come up with a purely linear-algebraic question that is nevertheless relevant to EDP. Here is one attempt. Let $r$ be an integer, and let us try to build a sequence $(x_n)$ of real numbers such that for every $d$ and every $k$ the sum $x_{((k-1)r+1)d}+\dots+x_{krd}=0$. That is, for every $d$, if you break the HAP with common difference $d$ into chunks of length $r$, then the sum over every chunk is zero. If $r$ is prime, then one way of achieving this is to create a sequence with the following three properties: (i) it is periodic with period $r$; (ii) $x_n=0$ whenever $n$ is a multiple of $r$; (iii) $x_1+\dots+x_{r-1}=0$. If $r$ is composite, then the condition is similar but slightly more complicated. A general condition that covers (ii) and (iii) simultaneously is that for every factor $d$ of $r$ the sum $x_d+x_{2d}+\dots+x_r$ must be zero. The set of all such sequences is a linear subspace of the set of all real sequences. My question is whether every sequence satisfying the first condition (namely summing to zero along chunks of HAPs) must belong to this subspace. I have given no thought to this question, so it may have a simple and uninteresting answer. Let me just remark that it is very important that the "chunks" are not just any sets of $r$ consecutive terms of HAPs, since then periodicity would follow trivially (because when you remove $x_n$ from the beginning of a chunk and add $x_{n+r}$, you would need the sum to be the same). So, for example, if $r=5$, then the conditions imposed on the HAP with common difference 3 are that $x_3+x_6+x_9+x_{12}+x_{15}=0$, that $x_{18}+x_{21}+x_{24}+x_{27}+x_{30}=0$, and so on.

• Gil Kalai Says: February 28, 2010 at 8:22 am

Sure, let's continue discussing it here along with the various other interesting avenues. The idea was to try to impose zeroes as partial sums along intervals of HAPs. If the distance between these zeroes is of order $k$, then in a random such sequence we can hope for discrepancy $\sqrt k$. The value $k=\log n$ appears to be critical. (Perhaps, more accurately, $k=\log n \log\log n$.) When $k$ is larger, the number of conditions is sublinear and the probability for such a condition to hold is roughly $1/\sqrt k$. So when $k=(\log n)^{1.1}$ we can expect that the number of sequences satisfying the conditions is typically $2^{n-o(1)}$. This gives a heuristic prediction of $\sqrt{\log n}$ (up to lower order terms, or perhaps, more accurately, $\sqrt{\log n\log\log n}$) as the maximum discrepancy of a sequence of length $n$. Of course, the evidence that this is the answer is rather small. This idea suggests that randomized constructions may lead to examples with discrepancy roughly $\sqrt{\log n}$. In fact, this may apply to variants of the problem, like the one where we restrict ourselves to square-free integers. I will make some specific suggestions in a different post.
Regarding lower bounds, if $k$ is smaller than $\log n$ then the number of constraints is larger than the number of variables. So this may suggest that even to solve the linear equations might be difficult. As with any lower bound approach, we have to understand the case where we consider only HAPs with odd periods, where we have a sequence of discrepancy 1. It is possible, as Tim suggested, that solutions to the linear algebra problem exist only if the imposed spacings are very structured, which hopefully implies a periodic solution.

11. Moses Charikar Says: February 27, 2010 at 7:14 am | Reply

The dual solutions for n = 512, 1024, 1500 and the corresponding positive semidefinite matrices are available here. Double click on a file to download it. The files dual*.txt have the following format: lines beginning with "b" specify the $b_i$ values ("b $i$ $b_i$"), and lines beginning with "t" specify the tails $t_{k,d}$ ("t $k$ $d$ $t_{k,d}$"). The files matrix*.txt have the following format: the $i$th line of the file contains the entries of the $i$th row of the matrix.

12. gowers Says: February 27, 2010 at 12:52 pm | Reply

Although the mathematics of this comment is entirely contained in the mathematics of Moses Charikar's earlier comment, I think it bears repeating, since it could hold the key to a solution to EDP. Amongst other things, Moses points out that a positive solution to EDP would follow if for large $n$ one could find coefficients $c_{k,d}$ and $b_m$ such that the quadratic form $\displaystyle \sum_{k,d}c_{k,d}(x_d+x_{2d}+\dots+x_{kd})^2-\sum_mb_mx_m^2$ is positive semidefinite, the coefficients $c_{k,d}$ are non-negative and sum to 1, and the coefficients $b_m$ are non-negative and sum to $\omega(n)$, where $\omega$ is some function that tends to infinity. Here, the sums are over all $k,d$ such that $kd\leq n$ and over all $m\leq n$. The proof is simple: if such a quadratic form exists, then when each $x_i=\pm 1$ we have that $\sum_mb_mx_m^2=\sum_mb_m=\omega(n)$, and since the $c_{k,d}$ are non-negative and sum to 1 we know by averaging that there must exist $k,d$ such that $(x_d+\dots+x_{kd})^2\geq\omega(n)$.

(i) The condition that $x_i=\pm 1$ is used in an important way: if the $b_m$ are mainly concentrated on pretty smooth numbers, then we will not be trying to prove false lower bounds for sequences like 1, -1, 0, 1, -1, 0, …, since the sum of the $b_m$ over non-multiples of 3 can easily be at most 1 or something like that.

(ii) We can use semidefinite programming to calculate the best possible quadratic form for fairly large $n$ (as Moses and Huy Nguyen have done already) and try to understand its features. This will help us to make intelligent guesses about what sorts of coefficients $c_{k,d}$ and $b_m$ have a chance of working.

(iii) We don't have to be too precise in our guesses in (ii), since to prove EDP it is not necessary to find the best possible quadratic form. It may be that there is another quadratic form with similar qualitative features that we can design so that various formulae simplify in convenient ways.

(iv) To prove that a quadratic form is positive semidefinite is not a hopeless task: it can be done by expressing the form as a sum of squares.
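To make (iv) concrete, here is a small sketch that assembles the matrix of $\sum_{k,d}c_{k,d}(x_d+\dots+x_{kd})^2-\sum_mb_mx_m^2$ from trial coefficients and checks positive semidefiniteness numerically. The particular $c$ and $b$ below come from normalizing the $n=3$ dual solution $2(x_1+x_2)^2+2(x_1+x_2+x_3)^2-x_3^2$ that Moses reports further down the thread, so that the $c$'s sum to 1:

```python
# Build the matrix of sum_{k,d} c_{k,d} (x_d + ... + x_{kd})^2 - sum_m b_m x_m^2
# and check numerically that it is positive semidefinite, which (by the
# averaging argument above) certifies discrepancy^2 >= sum(b) for +-1 sequences.
import numpy as np

def certificate_matrix(n, c, b):
    Q = -np.diag(b)
    for (k, d), w in c.items():
        e = np.zeros(n)
        e[[j * d - 1 for j in range(1, k + 1)]] = 1.0  # indicator of the HAP
        Q += w * np.outer(e, e)
    return Q

# Normalized n = 3 example: the c's sum to 1 and sum(b) = 1/4.
c = {(2, 1): 0.5, (3, 1): 0.5}
b = np.array([0.0, 0.0, 0.25])
Q = certificate_matrix(3, c, b)
print(np.linalg.eigvalsh(Q).min() >= -1e-9)  # True: squared discrepancy >= 1/4
```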
So we can ask for something more specific: try to find a positive linear combination of squares of linear forms in variables $x_1,\dots,x_n$ such that it equals a sum of the form $\displaystyle \sum_{k,d}c_{k,d}(x_d+x_{2d}+\dots+x_{kd})^2-\sum_mb_mx_m^2.$ To do this, it is not necessary to diagonalize the quadratic form, though that would be one way of expressing it as a sum of squares. In theory, therefore, a simple identity between two sums of squares of linear forms could give a one-line proof of EDP. It's just a question of finding one that will do the job. At this point I'm going to stick my neck out and say that in view of (i)-(iv) I now think that if we continue to work on EDP then it will be only a matter of time before we solve it. That is of course a judgment that I may want to revise in the light of later experience.

• gowers Says: February 27, 2010 at 2:41 pm

Here is a very toy toy problem, just to try to get some intuition. The general aim is to try to produce a sum of squares of linear forms, which will automatically be a positive semidefinite quadratic form, from which it is possible to subtract a diagonal quadratic form and still have something positive semidefinite. Here is a simple example where this can be done. Let us consider the quadratic form $\displaystyle (a+b)^2+(a+c)^2+\lambda(b+c)^2$ in three variables $a,b$ and $c$. Here, $\lambda$ is a small positive constant. Now this form is positive definite, since the only way it could be zero is if $a+b=a+c=b+c=0$, which implies that $a=b=c=0$. But we want to quantify that statement. One possible quantification is as follows. We rewrite the quadratic form as $\displaystyle (1-\lambda)((a+b)^2+(a+c)^2)+\lambda((a+b)^2+(b+c)^2+(a+c)^2)$ and observe that the part in the second bracket can be rewritten as $a^2+b^2+c^2+(a+b+c)^2$. Therefore, the whole quadratic form is bounded below by $\lambda(a^2+b^2+c^2)$. This isn't a complete analysis, since if $a+b+c=a+b=a+c=0$ then $a=b=c=0$, so I haven't subtracted enough. But I have to go.

• gowers Says: February 27, 2010 at 4:15 pm

Actually, for intuition-building purposes, I think the identity $\displaystyle (a+b)^2+(b+c)^2+(a+c)^2=a^2+b^2+c^2+(a+b+c)^2$ is better, because it is very simple, and it shows how the "bunchedupness" on the left-hand side can be traded in for a diagonal part and something that's more spread out. Now all we have to do is work out how to do something similar when we have HAPs on the left-hand side …

• gowers Says: February 27, 2010 at 4:44 pm

Here's an exercise that would be one step harder than the above, but still easier than EDP and possibly quite useful. I'd like to know what the possibilities are for subtracting something diagonal and positive from the infinite quadratic form $\displaystyle ax_1^2+a^2(x_1+x_2)^2+a^3(x_1+x_2+x_3)^2+\dots$ and still ending up with something positive definite, where $a$ is some constant less than 1. That is, I would like to find a way of rewriting the above expression as a sum of squares of linear forms, with as much as possible of the weight of the coefficients being given to squares of single variables. Actually, I've seen one possible way of doing it. Note that $x^2+a(x+y)^2=(1-a)x^2+a(2(x+y/2)^2+y^2/2)$. That allows us to take the terms of the above series in pairs and write each pair as a square plus $a^{2n}x_{2n}^2/2$. So we can subtract off the diagonal form $\displaystyle \frac12(a^2x_2^2+a^4x_4^2+a^6x_6^2+\dots)$ and still have something positive semidefinite.
However, it looks to me as though that is not going to be good enough by any means, because if we sum the coefficients in that kind of expression over all $d$ we are going to get something of comparable size to the sum of all the coefficients, which will be bounded. So we have to take much more account of how the HAPs mix with each other (which is not remotely surprising). So I’d like some better examples to serve as models. • Moses Charikar Says: February 27, 2010 at 5:54 pm If you were able to subtract off a large diagonal term from the expression $\displaystyle ax_1^2+a^2(x_1+x_2)^2+a^3(x_1+x_2+x_3)^2+\dots$ and still end up with something positive semidefinite, then this would serve as a lower bound for the discrepancy of the collection of subsets $\{1\}, \{1,2\}, \{1,2,3\}, \ldots$. But the minimum discrepancy of this collection is 1. Hence the sum of coefficients of the diagonal terms can be no larger than $\sum_{i=1}^{\infty} a^i$. (The ratio of the two quantities is a lower bound on the square discrepancy). The best you can do I suppose is to have $a$ be very small, and subtract $a x_1^2$. • gowers Says: February 27, 2010 at 6:27 pm I didn’t mean that just that form on its own would suffice, but that even if you add up a whole lot of expressions of that kind, one for each $d$, the bound for the sum of the diagonal terms will be bounded in terms of the sum of all the coefficients. But I suppose your remark still applies: that is inevitably the case, or else one could prove for some fixed $d$ that the discrepancy always had to be unbounded on some HAP of common difference $d$, which is obvious nonsense. Let’s define the $d$-part of the quadratic form that interests us to be $\sum_kc_{k,d}(x_d+\dots+x_{kd})^2$. And let’s call it $q_d$. Then it is absolutely essential to subtract a diagonal form $\Delta$ from $\sum_dq_d$ in such a way that we cannot decompose $\sum_dq_d-\Delta$ as a sum $\sum_d(q_d-\Delta_d)$ of positive semi-definite forms. Maybe the next thing to do is try to find a non-trivial example of subtracting a large diagonal from a sum of a small number of $q_d$s. (By non-trivial, I mean something that would beat the bound you get by subtracting the best $\Delta_d$ from each $q_d$ separately.) • Moses Charikar Says: February 27, 2010 at 6:47 pm We could get some inspiration from the SDP solutions for small values of $n$. Note that we explicitly excluded the singleton terms from each HAP because that would trivially give us a bound of 1. So far, the best bound we have (for $n=1500$) from the SDP is still less than 1. Getting a bound that exceeds 1 by this approach is going to require a very large value of $n$. That being said, here are some dual solutions that have clean expressions: $\displaystyle 2(x_1+x_2)^2 + 2(x_1+x_2+x_3)^2 - x_3^2 \geq 0$ This gives a lower bound of $\frac{1}{4}$ on the square discrepancy for $n=3$. $\displaystyle (x_1+x_2+x_3+x_4+x_5)^2 + 2 (x_1+x_2+x_3+x_4+x_5+x_6)^2 + 2 (x_1+x_2+x_3+x_4+x_5+x_6+x_7)^2 + (x_1+x_2+x_3+x_4+x_5+x_6+x_7+x_8)^2 + (x_2+x_4)^2 + (x_2+x_4+x_6)^2 + (x_2+x_4+x_6+x_8)^2 - x_6^2 - x_7^2 - x_8^2 \geq 0$ This gives a lower bound of $\frac{1}{3}$ on the square discrepancy for $n=8$. I don’t have proofs that these quadratic forms are non-negative, except that they ought to be if we believe that the SDP solver is correct. • Moses Charikar Says: February 27, 2010 at 7:23 pm Of course, we can get a sum of squares representation from the Cholesky decomposition of the matrix corresponding to the quadratic form. 
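Moses's $n=8$ form, together with the explicit sum-of-squares decomposition he gives two comments below, can be checked mechanically by comparing the coefficient matrices of the two sides; here is a sketch:

```python
# Verify that the n = 8 dual form above equals the sum of three squares
# in Moses's decomposition, by comparing coefficient matrices.
import numpy as np

def form(terms, n=8):
    M = np.zeros((n, n))
    for w, v in terms:                  # each term is w * (v . x)^2
        e = np.array(v, dtype=float)
        M += w * np.outer(e, e)
    return M

lhs = form([
    (1, [1, 1, 1, 1, 1, 0, 0, 0]), (2, [1, 1, 1, 1, 1, 1, 0, 0]),
    (2, [1, 1, 1, 1, 1, 1, 1, 0]), (1, [1, 1, 1, 1, 1, 1, 1, 1]),
    (1, [0, 1, 0, 1, 0, 0, 0, 0]), (1, [0, 1, 0, 1, 0, 1, 0, 0]),
    (1, [0, 1, 0, 1, 0, 1, 0, 1]),
]) - np.diag([0, 0, 0, 0, 0, 1, 1, 1])

rhs = form([
    (6,   [1, 1, 1, 1, 1, 5/6, 1/2, 1/6]),
    (3,   [0, 1, 0, 1, 0, 2/3, 0, 1/3]),
    (1/2, [0, 0, 0, 0, 0, 1, 1, 1]),
])

print(np.allclose(lhs, rhs))  # True, so the form is a sum of squares
```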
• gowers Says: February 27, 2010 at 8:48 pm

The non-negativity of the first of those forms is a special case of what I said earlier: $x^2+(x+y)^2=2(x+y/2)^2+y^2/2$. I'll think about the other one.

• Moses Charikar Says: February 27, 2010 at 9:55 pm

The second form has the following decomposition: $\displaystyle 6(x_1+x_2+x_3+x_4+x_5+\frac{5}{6}x_6 + \frac{1}{2}x_7 + \frac{1}{6} x_8)^2 + 3(x_2 + x_4 + \frac{2}{3} x_6 + \frac{1}{3} x_8)^2 + \frac{1}{2} (x_6 + x_7 + x_8)^2$ We know that the SDP bound is $\frac{1}{4}$ for $n \leq 7$ and jumps to $\frac13$ for $n=8$ thanks to this quadratic form. Hence this decomposition is non-trivial in that it cannot be decomposed as a sum $(q_1-\Delta_1) + (q_2 -\Delta_2)$ of positive semidefinite forms.

• Huy Nguyen Says: February 28, 2010 at 7:53 am

If we look at the Cholesky decomposition of the matrix $M$ corresponding to the quadratic form computed by the SDP solver for n = 1500 for inspiration for the sum of squares, there seem to be some interesting patterns going on there. Let $R^T R$ be the decomposition, and $R_i$ be the $i$-th row of $R$; then $x^T Mx$ can be rewritten as $\sum_i (R_i x)^2$. $R_i$ seems to put most of its weight on numbers that are multiples of $i$. The weight at the multiples of $i$ decreases quickly as the multiples get larger. Among the non-multiples of $i$, there is also some pattern in the weights of numbers with common divisors with $i$. I have put some plots at http://www.cs.princeton.edu/~hlnguyen/discrepancy/cholesky.html

• gowers Says: February 28, 2010 at 12:09 pm

Here's another way of thinking about the problem. Let's assume that we have chosen (by inspired guesswork based on experimental results, say) the coefficients $c_{k,d}$. So now we have a quadratic form $\sum_{k,d}c_{k,d}(x_d+x_{2d}+\dots+x_{kd})^2$, and we want to write it in a different way, as $\Delta+\sum_iL_i(x)^2$, where $x$ is shorthand for $(x_1,x_2,\dots)$ and, for each $i$, $L_i(x)$ is some linear form $\sum_j a_{ij}x_j$. Also, $\Delta$ is a non-negative diagonal form (that is, one of the form $\sum \delta_ix_i^2$), and our aim is to get the sum of the $\delta_i$ to be as large as possible.

Instead of focusing on $\Delta$, I think it may be more fruitful to focus on the off-diagonal part of the quadratic form. That is, we try to choose the $L_i$ so as to produce the right coefficients at every $x_ix_j$, and we try to do that so efficiently that when we've finished we find that the diagonal part is not big enough — hence the need to add $\Delta$. To explain what I mean about "efficiency" here, let me give an extreme example. Suppose we have a quadratic form in $x_1,\dots,x_n$ and all we are told about it is that the coefficient of every $x_ix_j$ is 2. An inefficient way of achieving this is to take $\sum_{i<j}(x_i+x_j)^2$. If we do this, then the diagonal part is $(n-1)\sum_ix_i^2$. But we can do much much better by taking $(x_1+\dots+x_n)^2$, which gives a diagonal part of $\sum_ix_i^2$. In the HAPs case, what we'd like to do is find ways of re-expressing the sum $\sum_{k,d}c_{k,d}(x_d+\dots+x_{kd})^2$ more efficiently by somehow cleverly combining forms so as to achieve the off-diagonal part with less effort. The fact that there are very long sequences with low discrepancy tells us that this will be a delicate task, but we could perhaps try to save only something very small. For instance, we could try to show that the form was still positive semidefinite even after we subtract $\sum_n n^{-1}x_{n!}^2$.
(This would show $c\sqrt{\log\log n}$ growth in the discrepancy, whereas we are tentatively expecting that it should be possible to get $\sqrt{\log n}$.) • Gil Says: February 28, 2010 at 12:53 pm What would it take, by this method, to show that the discrepancy is > 2? • gowers Says: February 28, 2010 at 1:17 pm If I understand correctly from Wikipedia, the Cholesky decomposition would attempt to solve the problem in my previous comment in a greedy way: it would first make sure that all the coefficients of $x_1x_i$ were correct, leaving a remainder that does not depend on $x_1$. Then it would deal with the $x_2x_i$ terms (with $i\geq 2$), and so on. If this is correct (which it may not be) then it is not at all clear that it will be an efficient method in the sense I discussed above (though in the particular example I gave it happened to give the same decomposition). • gowers Says: February 28, 2010 at 1:23 pm The answer to Gil’s question is that we’d need to choose the coefficients $c_{k,d}$ to sum to 1 and to be able to rewrite the quadratic form $\sum_{k,d}c_{k,d}(x_d+x_{2d}+\dots+x_{kd})^2$ as $\sum_i L_i^2+\Delta$ where the $L_i$ are linear forms and $\Delta$ is a diagonal form $\sum_i d_ix_i^2$ with non-negative coefficients $d_i$ that sum to more than 4. (The 4 is because we then have to take square roots.) Indeed, if we can do this, then we see that if $x_i=\pm 1$ for every $i$, then the quadratic form we started with takes value greater than 4, so by averaging at least one of the $(x_d+x_{2d}+\dots+x_{kd})^2$ is greater than 4. We know from the 1124 examples that this is not going to be easy, but the fact that we need to go up to 4 is encouraging (in the sense that 1124 is not as frighteningly large a function of 4 as it is of 2). • gowers Says: February 28, 2010 at 1:39 pm As a tiny help in thinking about the problem, it is useful to note that the coefficient of $x_mx_n$ in the quadratic form $\sum_{k,d}c_{k,d}(x_d+x_{2d}+\dots+x_{kd})^2$ is $2\sum_{d|(m,n)}\sum_{k:kd\geq m\vee n}c_{k,d}$. If, following Moses, we write $\tau_{k,d}$ for $\sum_{j\geq k}c_{j,d}$, then this becomes $2\sum_{d|(m,n)}\tau_{m\vee n,d}$. It’s a shame that this formula involves the maximum $m\vee n$, but we might be able to deal with that by smoothing the truncations of the HAPs (as I did in the calculations in the EDP9 post). That is, one could try to prove that a sum such as $\sum_r e^{-\alpha r} x_{rd}$ is large, which implies by partial summation that one of the sums $\sum_{r\leq m} x_{rd}$ is large. This too raises technical problems — instead of summing over the coefficients we end up integrating (which isn’t a problem at all) but the integral of the coefficients is infinite. I call this a technical problem because it still doesn’t rule out finding some way of showing that the diagonal coefficients are “more infinite” in some sense, or doing some truncation to make things finite and then dealing with the approximations. • gowers Says: February 28, 2010 at 3:21 pm One further small remark. The suggestion from the experimental evidence is that $\tau_{k,d}$ has the form $C_de^{-\alpha kd}$. However, we are not forced to go for the best possible form. So perhaps we could try out $\tau_{k,d}=C_de^{-\alpha k}$ (and it is not hard to choose the $c_{k,d}$ so as to achieve that). Then $\sum_{d|(m,n)}\tau_{m\vee n,d}$ would equal $e^{-\alpha(m\vee n)}\sum_{d|(m,n)}C_d$. That would leave us free to choose some nice arithmetical function $d\mapsto C_d$.
For example, we could choose $C_d=\Lambda(d)$ and would then end up with $\log(m,n)e^{-\alpha(m\vee n)}$. If we did that, then we would have the following question. Fix a large integer $N$, and work out the sum $S$ of all the coefficients $c_{k,d}$ such that $kd\leq N$. Then try to prove that it is possible to rewrite the quadratic form $\sum_{m,n\leq N}\log(m,n)e^{-\alpha(m\vee n)}x_mx_n$ as a diagonal form plus a positive semidefinite form in such a way that the sum of the diagonal terms is at least $\omega(S)$. There is no guarantee that this particular choice will work, but I imagine that there is some statement about a suitable weighted average of discrepancies that would be equivalent to it, and we might find that that statement looked reasonably plausible. • Moses Charikar Says: February 28, 2010 at 8:44 pm Since $\Lambda(d)$ is non-zero only for prime powers, your choice of $C_d = \Lambda(d)$ would prove that the discrepancy is unbounded even if we restrict ourselves to HAPs with common differences that are prime powers. Certainly plausible. One comment on how large $n$ needs to be for this to prove that the discrepancy is $\geq 2$. If the trend from the SDP values for small $n$ continues, it will take really, really large $n$ for the square discrepancy to exceed even 1! The increment in the value from $n=2^9$ to $n=2^{10}$ was a mere 0.014. At this rate, you would need to multiply by something like $2^{70}$ to get an increment of 1 in the lower bound on square discrepancy. • gowers Says: March 1, 2010 at 12:28 am For exactly this reason, we had a discussion a few days ago about whether EDP could be true even for HAPs with prime-power common differences. Sune observed that it is false for complex numbers, since one can take a sequence such as $x_n=\exp(2\pi in/6)$. However, it is not clear what the right moral from that example is, since no periodic sequence can give a counterexample for the real case. But it shows that any quadratic-forms identity one found would have to be one that could not be extended to the complex numbers. But it seems that such identities do not exist: if we change the real quadratic form $\sum_{d,k}c_{k,d}(x_d+\dots+x_{kd})^2$ into the Hermitian form $\sum_{d,k}c_{k,d}|x_d+\dots+x_{kd}|^2$, the matrix of the form is unchanged, and the coefficient of $x_mx_n$ becomes the coefficient of $\Re(x_m\bar{x}_n)/2$. So if my understanding is correct, even if EDP is true for prime power differences, it cannot be proved by this method, and $C_d=\Lambda(d)$ was therefore a bad choice. • gowers Says: March 1, 2010 at 12:31 am In fact, the reason I chose it was that I had a moment of madness and stopped thinking that $\sum_{d|n}\phi(d)=n$, because I forgot that it was precisely the same as $\sum_{d|n}\phi(n/d)$. (What I mean is that I knew this fact, but temporarily persuaded myself that it wasn’t true.) So I go back to choosing instead $C_d=\phi(d)$, in which case the quadratic form one would try to rewrite would be $\sum_{m,n\leq N}(m,n)e^{-\alpha(m\vee n)}x_mx_n$. • Moses Charikar Says: March 1, 2010 at 12:50 am Another way to see this is that complex numbers can be viewed as 2-dimensional vectors: $e^{i \theta} \equiv (\cos \theta, \sin \theta)$. If EDP does not hold for complex numbers for a subset of HAPs, then it does not hold for vectors for the same subset of HAPs. Hence we cannot get a good lower bound via the quadratic form.
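Sune's example is easy to test numerically. The Python sketch below (mine, not from the thread) computes the largest partial-sum modulus of $x_n=\exp(2\pi in/6)$ along every HAP whose common difference is a prime power; since no prime power is a multiple of 6, each such sum is a geometric series with ratio $\neq 1$, and the printed maximum stays at 2 no matter how large $N$ is taken.

```
import cmath

def is_prime_power(d):
    # d = p^k for a single prime p and some k >= 1?
    for p in range(2, d + 1):
        if d % p == 0:                 # p is the smallest prime factor of d
            while d % p == 0:
                d //= p
            return d == 1
    return False

N = 10000
x = [cmath.exp(2j * cmath.pi * m / 6) for m in range(1, N + 1)]   # |x_m| = 1

worst = 0.0
for d in range(1, N + 1):
    if not is_prime_power(d):
        continue
    s = 0
    for j in range(d, N + 1, d):       # partial sums along d, 2d, 3d, ...
        s += x[j - 1]
        worst = max(worst, abs(s))
print(worst)                           # about 2.0 (attained when d = 1 mod 6)
```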
• Moses Charikar Says: March 1, 2010 at 2:35 am One observation about the $b_i$: If only a subset of the $b_i$ are non-zero, we need to think carefully about where to place these non-zero values. A proof of this form would imply this: Consider any sequence where we place $\pm 1$ values at locations $i$ such that $b_i > 0$, but are free to place arbitrary values (including $0$) at locations $i$ such that $b_i = 0$. Then the discrepancy of any such sequence over HAPs is unbounded. In particular, this rules out having non-zero values for $b_i$ only at $i = n!$, because an alternating $\pm 1$ sequence at these values (and zeroes elsewhere) has bounded discrepancy over all HAPs. One attractive feature of this approach is that the best lower bound achievable via a proof of this form is exactly equal to the discrepancy of vector sequences (for every sequence length $n$). But we ought to admit the possibility that the discrepancy for vector sequences may be bounded (even if it is unbounded for $\pm 1$ sequences). We know at least two instances where we can do something with unit vectors that cannot be done with $\pm 1$ sequences. Are there constructions of sequences of unit vectors with bounded discrepancy? • Moses Charikar Says: March 1, 2010 at 7:47 am This is a minor point, but I think the choice $\tau_{k,d}=C_de^{-\alpha kd}$ actually ensures that the coefficient of $x_mx_n$ is $2e^{-\alpha(m\vee n)}\sum_{d|(m,n)}C_d$. Doesn’t change what we need to prove. 13. Gil Kalai Says: February 28, 2010 at 10:13 am | Reply Here is a proposed probabilistic construction which is quite similar to earlier algorithms, can be tested empirically, and perhaps even analyzed. Let $n$ be the length of the sequence we want to create and let $T$ be a real number. ($T=T(n)$ will be a monotone function of $n$ that we determine later.) You choose the value of $x_i$, $1 \le i \le n$, after you have chosen the earlier values. For every HAP containing $i$ we compute the discrepancy along that HAP, i.e. the absolute value $D$ of its current sum. We ignore those HAPs for which the discrepancy is smaller than $T$. For an HAP whose discrepancy $D$ is larger than $T$: if the sum is negative we give weight $D/T$ to $+1$ and weight 1 to $-1$; if the sum is positive we give weight $D/T$ to $-1$ and weight 1 to $+1$. (We can also replace $D/T$ by a fixed constant, say 2.) Now when we choose $x_i$ we compute the product of these weights for the choice $+1$ and for the choice $-1$, and choose at random with probabilities proportional to these products. I propose to choose $T$ as something like $\sqrt{\log n \log\log n}$ or a little bit higher. We want to find a $T$ so that typically for every $i$ only a small number of HAPs (perhaps typically at most 1) will contribute non-trivial weights. We can experiment with this algorithm for various $n$s and $T$s. • Gil Says: February 28, 2010 at 6:06 pm Actually, when we move from the spacing $k$ to the discrepancy inside the intervals, we do pay a price from time to time. And it seems that if you consider $n$ intervals, as we do, there will be intervals whose discrepancy is $\sqrt{\log n}$ larger than expected. This brings us back to the $\log n$ area. I still propose to experiment with what I suggested in the previous comment, but would expect that this will lead to examples with discrepancy in the order of $\log n$. I see no heuristic argument that we can go below $\log n$ discrepancy. But for certain measures of average discrepancy the heuristic remains at $\sqrt{\log n}$.
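Gil's proposal is easy to prototype. Here is a minimal Python sketch of one reading of it (my code, not Gil's; the threshold test $D\geq T$ and the concrete choice $T=2$ are arbitrary resolutions of details the comment leaves open):

```
import random

def gil_sequence(n, T, seed=0):
    # x_i is chosen with probabilities proportional to the product of the
    # weights D/T coming from HAPs through i whose current sum has
    # absolute value D >= T; the bias pushes unbalanced sums back to 0.
    rng = random.Random(seed)
    sums = [0] * (n + 1)     # sums[d] = current sum along the HAP d, 2d, 3d, ...
    x = [0] * (n + 1)
    disc = 0
    for i in range(1, n + 1):
        divs = [d for d in range(1, i + 1) if i % d == 0]
        w_plus = w_minus = 1.0
        for d in divs:
            D = abs(sums[d])
            if D >= T:                       # only unbalanced HAPs get a say
                if sums[d] < 0:
                    w_plus *= D / T          # favour +1 to pull the sum up
                else:
                    w_minus *= D / T         # favour -1 to pull the sum down
        x[i] = 1 if rng.random() < w_plus / (w_plus + w_minus) else -1
        for d in divs:
            sums[d] += x[i]
            disc = max(disc, abs(sums[d]))
    return x[1:], disc

print(gil_sequence(5000, T=2.0)[1])          # observed discrepancy of one run
```

Running this over a range of $n$ and $T$ (and many seeds) would be one way to test Gil's $\sqrt{\log n \log\log n}$ versus $\log n$ heuristics empirically. 14.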
gowers Says: March 1, 2010 at 9:54 am | Reply This is a response to this comment of Moses, but that subthread was getting rather long, so I’m starting a new one. What you say is of course a good point, and one that, despite its simplicity, I had missed. Let me spell it out once again. If we manage to find an identity of the form $\displaystyle \sum_{kd\leq n}c_{k,d}(x_d+x_{2d}+\dots+x_{kd})^2=\sum_{m\leq n}b_mx_m^2+\sum_iL_i(x)^2,$ with all coefficients non-negative, the $c_{k,d}$ summing to 1 and the $b_m$ summing to $\omega(n)$, then not only will we have proved EDP, but we will have shown a much stronger result that applies to all sequences $x_1,\dots,x_n$ (and even vector sequences, but let me not worry about that for now). To give an example of the kind of implication this has, let us suppose that $A$ is any set such that $\sum_{m\in A}b_m\leq\omega(n)/2$. Then we must be able to prove that EDP holds for sequences that take the value 0 on $A$ and $\pm 1$ on the complement of $A$, since for such a sequence we know that $\displaystyle \sum_{kd\leq n}c_{k,d}(x_d+x_{2d}+\dots+x_{kd})^2\geq \sum_{m\notin A}b_m\geq\omega(n)/2.$ We know that there are subsets on which EDP holds. An obvious example is the set of all even numbers. As Moses points out, the set of all factorials is not even close to being an example. In general we do not have to stick to subsets: the experimental evidence suggests that we should be looking at weighted sets, where the discussion is a bit more complicated. An obvious preliminary problem is to try to come up with a set of integers of density zero such that EDP does not obviously fail for sequences that are $\pm 1$ inside that set and 0 outside. Unfortunately, there is a boring answer to this. If EDP is true, then let $n_k$ be such that all $\pm 1$ sequences of length $n_k$ have discrepancy at least $k$. Now take all integers up to $n_1$, together with all even numbers up to $2n_2$, all multiples of 4 up to $4n_3$, and so on. The density of this set is zero and it has been constructed so that EDP holds for it. However, it may be that this gives us some small clue about the kind of sequence $(b_m)$ we should be looking for. • gowers Says: March 1, 2010 at 5:52 pm Here is another attempt to gain some intuition about what is going on in the SDP approach to the problem. I want to give a moderately interesting example (but only moderately) of a situation where it is possible to remove a diagonal part from a quadratic form and maintain positive semidefiniteness. It is chosen to have a slight resemblance to quadratic forms based on HAPs, which one can think of as sets that have fairly small intersections with each other, but a few points that belong to a larger-than-average number of sets. To model this, let us take a more extreme situation, where we have a collection of sets that are almost disjoint, apart from the fact that one element is common to all of them. To be precise, let us suppose that we have $r$ subsets $A_1,\dots,A_r$ of $\{2,3,\dots,r+1\}$, each of size $q$, with $|A_i\cap A_j|=1$ for every $i\ne j$, and such that for every $x\ne y$ belonging to $\{2,3,\dots,r+1\}$ there is exactly one $A_i$ that contains both $x$ and $y$. Such systems (projective planes) are known to exist, though I may have got the details slightly wrong — I think $r$ works out to be $q^2+q+1$. Now let us consider the quadratic form $\sum_i(x_1+\sum_{j\in A_i}x_j)^2$. Thus, we are adding the element 1 to each set $A_i$ and taking the quadratic form that corresponds to this new set system.
It is not hard to check that the coefficient of $x_1^2$ is $r$, the coefficient of $x_1x_i$ is $2q$ (because for each $j$ there turn out to be exactly $q$ sets $A_i$ that contain $j$), the coefficient of $x_i^2$ is $q$ when $i\geq 2$ (for the same reason) and the coefficient of $x_ix_j$ when $2\leq i<j$ is 2. It follows, if my calculations are correct, that the form can be rewritten as $(qx_1+x_2+\dots+x_{r+1})^2+(r-q^2)x_1^2+(q-1)(x_2^2+\dots+x_{r+1}^2).$ To me this suggests that a more efficient way of representing some average of squares over HAPs may well involve something like putting together bunches of HAPs and replacing the corresponding sum of squares by sums of squares of linear forms that have weights that favour numbers that belong to many of the HAPs — that is, smoother numbers. One problem is that we may have two numbers $m$ and $n$ that are both very smooth but nevertheless $(m,n)=1$. But that may be where the bunching comes in. For example, perhaps it would be useful to take a large smooth number $n$ and look at all HAPs that contain $n$, and try to find a more efficient representation for just that bunch. (That doesn’t feel like a strong enough idea, but exploring its weaknesses might be helpful.) 15. Alec Edgington Says: March 1, 2010 at 10:30 am | Reply While wondering about heuristic arguments for EDP I did a very simple calculation which leaves me somewhat puzzled. Consider the set of all $\pm 1$-valued sequences on $\{ 1, 2, \ldots, n\}$ as a probability space, with all sequences having equal probability $2^{-n}$. Let $M$ be the event ‘sequence is completely multiplicative’, and $K_C$ the event ‘partial sums are bounded by $C$’. Then $\mathrm{P}(M) = 2^{\pi(n) - n}$, and I think that $\mathrm{P}(K_C) \sim k_C / \sqrt{n}$ (for some constant $k_C$). If we were to assume that $M$ and $K_C$ are independent, we would get $\mathrm{P}(M \cap K_C) \sim k_C 2^{\pi(n) - n} / \sqrt{n}$. Since $2^{\pi(n)}$ is much bigger than $\sqrt{n}$, this means we would expect there to be lots and lots of completely multiplicative sequences of length $n$ with discrepancy $C$. What puzzles me is not that the assumption of independence is wrong but just how wrong it is. If EDP is true, then the multiplicativity of a sequence must force the discrepancy to be large in a big way. Of course, we know that it forces $f(a^2 n) = f(n)$, so that the sequence ‘reproduces’ itself across certain subsequences, but this doesn’t seem enough to explain the above. • Gil Says: March 1, 2010 at 10:37 am The probability that the partial sums are bounded is exponentially small, no? • gowers Says: March 1, 2010 at 10:58 am I don’t suppose you’ll be satisfied with this answer, but part of the explanation is surely that if a sequence is multiplicative and has bounded partial sums, then it has bounded partial sums along all HAPs. So if that is impossible, then … I see that that’s not much more than a restatement of the question. Perhaps one way to gain intuition would be to think carefully about the events “has partial sums between -C and C” and “is 2-multiplicative”, where by that I mean that $f(2n)=f(2)f(n)$ for every $n$. Do these two events appear to be negatively correlated? (I can’t face doing the calculation myself.)
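One way to face the calculation is to make a computer do it for a small case. The sketch below (a toy computation of my own; $n=16$ and $C=2$ are chosen only so that all $2^{16}$ sequences can be enumerated exactly) counts the sequences with bounded partial sums, the 2-multiplicative ones, and those that are both, and prints the ratio $\mathrm{P}(M\cap K_C)/\mathrm{P}(M)\mathrm{P}(K_C)$; a ratio below 1 would indicate negative correlation.

```
from itertools import product

n, C = 16, 2

def bounded(x):                      # all partial sums in [-C, C]?
    s = 0
    for v in x:
        s += v
        if abs(s) > C:
            return False
    return True

def two_mult(x):                     # f(2m) = f(2) f(m) whenever 2m <= n
    # x[i-1] is the value at i; the m = 1 case forces f(1) = 1
    return all(x[2 * m - 1] == x[1] * x[m - 1] for m in range(1, n // 2 + 1))

cnt_K = cnt_M = cnt_both = 0
for x in product((1, -1), repeat=n):
    k, m = bounded(x), two_mult(x)
    cnt_K += k
    cnt_M += m
    cnt_both += k and m

total = 2.0 ** n
pK, pM, pKM = cnt_K / total, cnt_M / total, cnt_both / total
print(pK, pM, pKM, pKM / (pK * pM))
```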
• Alec Edgington Says: March 1, 2010 at 12:05 pm Gil: unless I’m misunderstanding something, the probability of the partial sum being within a certain bound goes down as $n^{-\frac{1}{2}}$, corresponding to the height around zero of the density function of a Gaussian variable with mean zero and variance $n$. (We actually want the maximum rather than the final height, but I think this behaves similarly.) Tim: that sounds like a good idea; I’ll look at that… • gowers Says: March 1, 2010 at 12:31 pm Alec, I think Gil is probably right. Intuitively, if you want to force a random walk to stay within [-C,C], then you will have to give it a nudge a positive proportion of the time, which is restricting it by an exponential amount. In the extreme case where C=1 this is clear — every other move of the walk is forced. To put it another way (and in fact this is a proof), you cannot afford to have a subinterval with drift greater than 2C. But for this to happen with positive probability, the subinterval needs to have size a constant times $C^2$, so the probability that it never happens is at most something like $\exp(-cn/C^2)$ for some absolute constant $c$. • Alec Edgington Says: March 1, 2010 at 1:04 pm Sorry, yes, you’re right. I was misinterpreting a result about the maximum value attained (as opposed to the maximum absolute value). That alleviates my puzzlement. 16. Gil Kalai Says: March 1, 2010 at 10:33 am | Reply Another technique which is fairly standard and may be useful in the lower bound direction is the “polynomial technique”. Say you want to prove that every long enough sequence has discrepancy at least 3. Consider polynomials in the variables $x_1,x_2,\dots,x_n$ over a field with 7 elements. Now mod out by the equation $x_i^2=1$. We want to show that a certain polynomial is identically zero. The polynomial is: $\prod \prod (\sum_{i=1}^mx_{ir}-3)(\sum_{i=1}^mx_{ir}+3)$, where the products are over all $r$ and all $m$ smaller than $n/r$. Once we have forced the identity $x_i^2=1$ we can reduce every such polynomial to a square-free polynomial, and what we need to show is that for large enough $n$ this polynomial is identically zero. In case the formula won’t compile: you take the product, over all HAPs, of (the partial sum $-3$) times (the same sum $+3$). (If you prefer to consider polynomials over the field and not mod out by any ideal, you can replace the $x_i$ by $x_i^3$ and add the product of all the $x_i$s as another term.) We use the fact that if the discrepancy becomes greater than 2 then some of the above sums will be +3 or -3 modulo 7. On the one hand, it feels like simply reformulating the problem, but on the other hand, maybe it is not. Perhaps the complicated product can be simplified, or there is some other algebraic reason why such products ultimately vanish. This type of technique is sometimes useful for problems where we can also think of semidefinite/linear programming methods. I also share the feeling that at present Moses’s semidefinite program is the most promising. 17. Klas Markström Says: March 1, 2010 at 2:15 pm | Reply A while back we were looking at multiplicative functions such that $f(3x)=0$ for all $x$, and one question which was raised was whether a function of this type, with bounded discrepancy, must be the function 1, -1, 0, 1, -1, 0, 1, -1, 0,… http://gowers.wordpress.com/2010/02/08/edp7-emergency-post/#comment-6094 I could not see an intuitive reason for why this should be true, apart from just not managing to construct a counterexample. Has anyone looked more at this?
I used one of my old Mathematica program to search for a multiplicative function of this type, with $f(1)=1$, and while I had lunch it found an assignment to the first 115 primes which works up to n=630. Here is the values for the primes {-1,0, 1, -1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, -1, -1, 1, 1, -1, 1, 1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, 1, 1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, -1, -1, -1, 1, 1, 1, -1, -1, 1, 1, 1, 1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, 1, -1, 1, -1, -1, -1, 1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, -1, -1, -1, 1, 1, 1, -1, -1, 1, -1, -1} The function for all x is then {1, -1, 0, 1, 1, 0, -1, -1, 0, -1, 1, 0, -1, 1, 0, 1, 1, 0, -1, 1, 0, \ -1, -1, 0, 1, 1, 0, -1, 1, 0, -1, -1, 0, -1, -1, 0, 1, 1, 0, -1, 1, \ 0, -1, 1, 0, 1, -1, 0, 1, -1, 0, -1, 1, 0, 1, 1, 0, -1, -1, 0, -1, 1, \ 0, 1, -1, 0, -1, 1, 0, 1, -1, 0, 1, -1, 0, -1, -1, 0, 1, 1, 0, -1, \ -1, 0, 1, 1, 0, -1, 1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, 1, \ 1, 0, -1, -1, 0, 1, -1, 0, -1, 1, 0, -1, 1, 0, 1, -1, 0, 1, 1, 0, -1, \ 1, 0, -1, -1, 0, 1, -1, 0, 1, 1, 0, -1, 1, 0, -1, -1, 0, 1, -1, 0, 1, \ -1, 0, 1, -1, 0, -1, 1, 0, 1, -1, 0, 1, -1, 0, -1, 1, 0, -1, 1, 0, 1, \ -1, 0, 1, -1, 0, -1, 1, 0, -1, 1, 0, -1, 1, 0, -1, -1, 0, 1, 1, 0, 1, \ -1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, \ -1, -1, 0, 1, 1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, 1, 1, 0, -1, 1, 0, \ -1, 1, 0, -1, 1, 0, -1, -1, 0, 1, -1, 0, 1, -1, 0, -1, 1, 0, 1, 1, 0, \ -1, -1, 0, -1, 1, 0, 1, 1, 0, -1, -1, 0, 1, -1, 0, 1, -1, 0, -1, 1, \ 0, -1, 1, 0, -1, 1, 0, -1, 1, 0, 1, -1, 0, -1, -1, 0, 1, -1, 0, 1, \ -1, 0, 1, 1, 0, -1, -1, 0, 1, 1, 0, 1, 1, 0, -1, -1, 0, 1, -1, 0, 1, \ 1, 0, -1, -1, 0, 1, -1, 0, 1, 1, 0, -1, -1, 0, -1, 1, 0, -1, 1, 0, 1, \ -1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, -1, 1, 0, -1, 1, 0, 1, 1, 0, -1, \ -1, 0, -1, 1, 0, -1, -1, 0, 1, 1, 0, 1, 1, 0, -1, -1, 0, -1, -1, 0, \ 1, -1, 0, 1, -1, 0, 1, -1, 0, 1, 1, 0, -1, -1, 0, 1, 1, 0, -1, -1, 0, \ 1, 1, 0, -1, -1, 0, 1, -1, 0, 1, -1, 0, 1, 1, 0, -1, -1, 0, 1, 1, 0, \ -1, 1, 0, 1, 1, 0, -1, -1, 0, -1, 1, 0, 1, -1, 0, 1, 1, 0, -1, -1, 0, \ 1, 1, 0, -1, -1, 0, 1, -1, 0, 1, -1, 0, -1, 1, 0, 1, 1, 0, -1, 1, 0, \ -1, 1, 0, -1, -1, 0, -1, 1, 0, -1, -1, 0, 1, 1, 0, 1, -1, 0, -1, -1, \ 0, 1, 1, 0, -1, -1, 0, 1, 1, 0, 1, 1, 0, -1, -1, 0, 1, -1, 0, -1, 1, \ 0, -1, 1, 0, 1, 1, 0, -1, 1, 0, -1, 1, 0, -1, -1, 0, -1, 1, 0, -1, 1, \ 0, 1, 1, 0, -1, -1, 0, 1, -1, 0, 1, -1, 0, 1, -1, 0, -1, 1, 0, -1, 1, \ 0, 1, 1, 0, -1, 1, 0, -1, 1, 0, -1, -1, 0, -1, 1, 0, -1, -1, 0, 1, \ -1, 0, 1, -1, 0, 1, 1, 0, 1, 1, 0, -1, -1, 0, 1, -1, 0, -1, -1, 0, 1, \ 1, 0, 1, -1, 0, -1, -1, 0, 1, 1, 0, 1, 1, 0, -1, -1, 0, -1, 1, 0, 1, \ -1, 0, -1, 1, 0, -1, 1, 0, 1, 1, 0, -1, -1, 0, 1, 1, 0, -1, -1, 0, \ -1, -1, 0, 1, 1, 0, 1, 1, 0 • Klas Markström Says: March 1, 2010 at 2:22 pm The point of the example of course being that it is distinct from the unique example with discrepancy 1 even for small values of \$x\$. For a finite bound $N$ one will typically be able to construct trivial examples defined on the first $N$ integers simply by changing the values at the primes closest to $N$, since they are not affected by the multiplicative requirement. 
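For anyone who wants to replicate this kind of computation without Mathematica, here is a small self-contained Python checker (my own sketch, not Klas's program): it extends a choice of values at the primes to a completely multiplicative function via a smallest-prime-factor sieve and verifies a discrepancy bound over all HAPs up to $N$. As written it checks the discrepancy-1 function (the character mod 3); pasting Klas's 115 prime values into `fp` should instead confirm his example up to 630 with bound 2.

```
def check_discrepancy(f, N, bound):
    # Is |f(d) + f(2d) + ... + f(md)| <= bound for every d and every md <= N?
    for d in range(1, N + 1):
        s = 0
        for j in range(d, N + 1, d):
            s += f[j]
            if abs(s) > bound:
                return False, (d, j)
    return True, None

N = 630
spf = list(range(N + 1))                  # smallest-prime-factor sieve
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:
        for q in range(p * p, N + 1, p):
            if spf[q] == q:
                spf[q] = p

# Values at the primes: here the mod-3 character, i.e. the sequence
# 1, -1, 0, 1, -1, 0, ...; substitute Klas's list to check his function.
fp = {p: (0 if p == 3 else (1 if p % 3 == 1 else -1))
      for p in range(2, N + 1) if spf[p] == p}

f = [0] * (N + 1)
f[1] = 1
for m in range(2, N + 1):                 # complete multiplicativity
    f[m] = fp[spf[m]] * f[m // spf[m]]

print(check_discrepancy(f, N, 1))         # (True, None)
```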
• Alec Edgington Says: March 2, 2010 at 8:22 am One can get at least as far as 886 with a sequence like this of discrepancy 2 sending 5 to $+1$: ```+-0++0--0-+0-+0++0-+0--0++0-+0--0--0++0-+0-+0+-0+-0--0++0-+0-+0+-0-+0++0--0--0++0 --0++0-+0+-0+-0--0+-0++0+-0+-0-+0-+0--0++0-+0+-0+-0++0-+0--0--0++0+-0-+0+-0+-0-+0 -+0+-0+-0-+0-+0-+0--0++0+-0+-0-+0++0--0+-0+-0--0+-0+-0+-0+-0++0-+0++0--0-+0+-0+-0 -+0++0-+0--0++0--0++0--0-+0-+0-+0-+0+-0-+0+-0+-0--0+-0++0++0--0+-0++0--0+-0++0--0 -+0-+0+-0+-0+-0+-0-+0-+0-+0--0++0--0++0+-0+-0-+0+-0+-0+-0++0-+0-+0--0-+0-+0+-0+-0 ++0--0+-0-+0+-0--0++0+-0+-0+-0++0--0+-0+-0-+0++0-+0--0-+0-+0+-0++0--0--0++0--0+-0 ++0-+0+-0--0++0-+0-+0+-0+-0-+0-+0++0--0--0++0+-0-+0-+0++0-+0-+0--0-+0-+0+-0+-0++0 -+0--0+-0--0++0-+0+-0+-0++0--0-+0+-0-+0-+0++0--0++0--0--0++0++0--0++0--0-+0-+0+-0 ++0-+0-+0-+0--0++0--0+-0+-0+-0+-0+-0++0-+0-+0--0++0--0-+0++0--0-+0-+0++0--0-+0+-0 +-0+-0+-0++0-+0--0++0--0+-0+-0++0--0--0++0-+0-+0--0++0++0-+0--0++0-+0--0+-0+-0++0 --0--0++0++0--0+-0-+0-+0--0+-0++0++0--0--0+-0++0--0+-0+-0++0+-0--0++0++0--0+ ``` Here the following primes are sent to $-1$: 2, 7, 13, 19, 23, 31, 43, 47, 53, 61, 67, 73, 83, 97, 101, 107, 131, 139, 149, 151, 163, 167, 181, 191, 193, 199, 233, 239, 271, 277, 281, 283, 293, 313, 317, 349, 353, 359, 397, 401, 409, 419, 421, 431, 439, 443, 457, 463, 467, 509, 523, 547, 563, 571, 577, 587, 607, 613, 619, 631, 643, 647, 661, 677, 683, 691, 701, 709, 751, 787, 811, 823, 827, 829, 839, 859, 863, 883. I can’t see much of a pattern, but there seems to be quite a bit of freedom to assign either $(+1, -1)$ or $(-1, +1)$ at twin-prime pairs $(3k-1, 3k+1)$. • Alec Edgington Says: March 2, 2010 at 10:37 pm That search finally terminated — so that is the longest such sequence we can get that sends 5 to +1. • Klas Markström Says: March 3, 2010 at 9:03 am Alec, could you try that with the value at 7 flipped instead? It would be interesting to see how much further one gets by following the discrepancy 1 function out to different primes. • Alec Edgington Says: March 3, 2010 at 9:54 am OK, I’ll try that when I get back to my computer this evening. 
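The backbone of a search like Alec's is a simple depth-first backtracking over the free primes, with composites forced by multiplicativity and every HAP sum kept within the bound. Here is a stripped-down sketch of that idea (my own, and far slower than Alec's or Klas's programs; the call at the bottom, with the value at 2 pinned to $-1$, is just a made-up illustration):

```
def search(N, fixed):
    # Longest prefix 1..n <= N admitting a completely multiplicative f with
    # f(3) = 0, f agreeing with `fixed` at pinned primes, free choices at
    # the other primes, and all HAP sums bounded by 2 in absolute value.
    spf = list(range(N + 1))                  # smallest-prime-factor sieve
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for q in range(p * p, N + 1, p):
                if spf[q] == q:
                    spf[q] = p

    f = [0] * (N + 1)
    sums = [0] * (N + 1)
    best = [0]

    def place(n):
        best[0] = max(best[0], n - 1)         # prefix of length n-1 is valid
        if n > N:
            return True
        if n > 1 and spf[n] == n and n != 3 and n not in fixed:
            for v in (1, -1):                 # a free prime: branch
                f[n] = v
                if extend(n):
                    return True
            return False
        if n == 1:
            f[n] = 1
        elif n % 3 == 0:
            f[n] = 0                          # f(3) = 0 kills multiples of 3
        elif spf[n] == n:
            f[n] = fixed[n]                   # a pinned prime
        else:
            f[n] = f[spf[n]] * f[n // spf[n]] # composite: value is forced
        return extend(n)

    def extend(n):
        divs = [d for d in range(1, n + 1) if n % d == 0]
        for d in divs:
            sums[d] += f[n]
        ok = all(abs(sums[d]) <= 2 for d in divs)
        result = ok and place(n + 1)
        for d in divs:                        # undo, so backtracking is clean
            sums[d] -= f[n]
        return result

    place(1)
    return best[0]

print(search(120, fixed={2: -1}))             # a small N, so this finishes quickly
```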
• Alec Edgington Says: March 5, 2010 at 7:49 pm In this case (flipping the value at 7) the maximal length of sequence (or rather, the maximal length that is of the form $p-1$ — which is what I should have said above too) is 946: ```+-0+-0--0++0-+0+-0+-0-+0++0--0+-0++0--0+-0-+0-+0+-0-+0-+0+-0+-0++0--0--0++0+-0--0 +-0++0--0++0--0+-0++0++0-+0-+0--0--0++0+-0+-0+-0-+0-+0+-0-+0+-0+-0-+0+-0+-0-+0+-0 +-0+-0+-0-+0-+0+-0+-0-+0-+0+-0+-0+-0+-0-+0+-0-+0-+0-+0-+0-+0-+0+-0-+0++0--0-+0+-0 +-0--0+-0+-0+-0++0-+0-+0--0+-0++0-+0--0+-0++0+-0+-0++0--0+-0+-0+-0+-0++0-+0--0+-0 --0+-0+-0++0+-0++0-+0-+0-+0--0+-0+-0+-0+-0++0--0++0-+0--0+-0+-0+-0--0++0+-0+-0-+0 --0+-0++0++0-+0-+0--0-+0--0++0-+0++0--0++0-+0--0+-0-+0-+0+-0--0++0+-0++0--0+-0+-0 --0+-0++0++0--0++0--0++0--0+-0+-0--0++0--0++0-+0-+0++0--0++0--0--0++0--0++0+-0+-0 +-0+-0-+0--0++0+-0+-0+-0-+0-+0+-0+-0+-0+-0+-0+-0+-0--0++0+-0-+0++0--0-+0+-0+-0+-0 -+0+-0--0++0+-0++0--0-+0+-0++0--0--0++0--0++0-+0++0--0-+0+-0--0++0+-0+-0--0+-0+-0 ++0+-0--0++0++0--0--0++0--0++0+-0+-0++0--0+-0+-0+-0-+0++0--0-+0--0++0--0++0++0--0 -+0+-0--0++0+-0-+0+-0--0++0-+0++0--0-+0+-0++0--0-+0++0--0--0++0-+0+-0--0++0++0--0 -+0++0--0-+0+-0+-0+-0-+0--0++0-+0--0+-0++0++0--0-+0--0+ ``` (Primes sent to -1 in this example: 2, 5, 7, 13, 17, 29, 37, 41, 43, 59, 67, 71, 79, 83, 89, 109, 113, 137, 139, 157, 167, 179, 191, 197, 211, 223, 227, 229, 251, 257, 269, 277, 281, 293, 311, 349, 353, 359, 379, 383, 389, 401, 421, 431, 443, 457, 467, 479, 487, 491, 499, 521, 541, 547, 557, 563, 569, 577, 587, 599, 617, 619, 641, 647, 653, 683, 701, 709, 719, 761, 769, 773, 787, 809, 811, 857, 859, 881, 911, 929, 937.) I’ll kick off a search with 11 flipped, but won’t hold my breath for it to finish! • Alec Edgington Says: March 5, 2010 at 7:55 pm Well, I’ll have to eat my words. It finished almost immediately, having got only as far as 330: ```+-0+-0+-0++0--0+-0+-0--0++0+-0+-0+-0+-0+-0-+0+-0+-0-+0--0+-0+-0++0+-0+-0--0++0+-0 +-0++0--0--0+-0+-0+-0++0-+0-+0+-0+-0+-0+-0+-0+-0--0+-0+-0+-0+-0++0+-0+-0--0+-0+-0 +-0++0+-0--0++0+-0-+0+-0--0+-0+-0++0+-0+-0+-0-+0-+0-+0++0-+0--0+-0+-0+-0+-0+-0+-0 +-0--0++0--0+-0++0+-0--0+-0+-0++0+-0+-0+-0+-0+-0--0+-0++0--0+-0++0+-0+-0++0--0+-0 --0+-0 ``` (Primes sent to -1: 2, 5, 13, 17, 23, 29, 41, 43, 47, 59, 71, 73, 83, 89, 101, 109, 113, 131, 137, 149, 173, 179, 181, 191, 211, 223, 227, 233, 239, 257, 263, 269, 281, 293, 311.) I had assumed that the maximum length would be a monotonic function of the first prime flipped, but evidently not… • Alec Edgington Says: March 5, 2010 at 11:20 pm I’ve now gone a bit further with this, and found: ```min p flipped max n (n+1 prime) 5 886 7 946 11 330 13 >= 1380 17 408 19 >= 2646 23 22 29 >= 546 31 >= 9906 37 >= 27508 41 >= 690 43 >= 24420 47 46 53 52 59 >= 1326 61 >= 15072 ``` It seems that flipping a prime of the form $3k+1$ is much better than flipping a prime of the form $3k-1$. (Perhaps this isn’t too surprising, as it means we don’t immediately require $f(3k+4)$ to change sign to keep the discrepancy at 2, but I’m not sure that fully explains it.) • Klas Markström Says: March 6, 2010 at 9:46 am That’s an interesting sequence. I would not have expected the different $p$s to give such different lengths. However, the fact that one can find sequences as long as the ones you have found for some primes suggests that any proof showing that the maximum length is finite for any $p$ will have to be rather indirect, not giving a good bound for the maximum length. 18.
Moses Charikar Says: March 2, 2010 at 7:40 am | Reply As Tim pointed out earlier, the Cholesky decomposition is greedy and does not necessarily give an efficient decomposition into sum of squares that uses the diagonal sparingly. But I want to propose using it as a proof technique for positive semidefiniteness after we have subtracted out the diagonal terms. If we can prove that the Cholesky decomposition succeeds, this proves that the matrix is positive definite. Put differently, I want to try and figure out what $b_i$ values we can get away with without making the Cholesky decomposition procedure fail. Let’s review how Cholesky works. The goal is to represent a positive definite matrix $A = V V^T$ such that $V$ is a lower triangular matrix. If $V_i$ is the $i$th row of $V$, the only non-zero coordinates of $V_i$ are in the first $i$ positions, and $V_i \cdot V_j = A_{ij}$. The algorithm proceeds thus: $V_{11} = \sqrt{A_{11}}.$ Having determined $V_1, V_2, \ldots, V_{i-1}$, the coordinates of $V_i$ are determined thus: $V_{i,1}$ is completely determined by the condition $V_1 \cdot V_i = A_{1,i}$. Next, $V_{i,2}$ is determined by $V_2 \cdot V_i = A_{2,i}$, and so on. Finally, $V_{i,i}$ is determined by the constraint $\sum_{j \leq i} V_{i,j}^2 = A_{ii}$. The procedure succeeds if, for all $i$, in the computation of $V_{i,i}$, $A_{ii}-\sum_{j < i} V_{i,j}^2\geq 0$. Consider what happens if, for a particular $i$, we subtract $b_i$ from $A_{i,i}$ and compute the new Cholesky decomposition. Since $A_{i,i}$ is smaller, $V_{i,i}$ will become smaller. This in turn will increase the values of $V_{j,i}$ for $j > i$. This sets off a chain reaction and decreases $V_{j,j}$ values for $j > i$. If we subtract too much, we run the risk of the Cholesky decomposition failing. But if it does fail, we know for sure that we have ruined positive definiteness. Now suppose we know the Cholesky decomposition of matrix $A$. This might help us guess what diagonal terms we can safely subtract from $A$. I’m hoping that the large diagonal entries $V_{i,i}$ in the decomposition give us clues for where we can hope to subtract $b_i$ (although there are complex dependencies, and subtracting a small amount from one location changes a whole bunch of other elements in the decomposition). • Moses Charikar Says: March 2, 2010 at 8:24 am Here is a setting I want to think about, with some assumptions on various parameters slightly different from the ones that Tim proposed earlier. I’m going to repeat a few things from previous comments. Recall that we are interested in the quadratic form $\displaystyle \sum_{k,d} c_{k,d} (x_d + x_{2d} + \ldots + x_{kd})^2$ As before $\sum_{j \geq k} c_{j,d} = \tau_{k,d} = C_d e^{-\alpha kd}$. Consider $\alpha$ to be very small. We’ll use this to make some approximations. Say $C_d = 1$ for all $d$. (I hope this will turn out to be an easier setting to analyze and understand.) Note that $\sum_{k,d} c_{k,d} = \sum_d \tau_{1,d} = \sum_d e^{-\alpha d} \approx 1/\alpha$. Consider the symmetric matrix $A$ corresponding to the quadratic form above (i.e. the quadratic form is $x^T A x$). We hope to subtract a large diagonal term $\sum b_i x_i^2$ such that the remainder is still positive semidefinite. For $m \geq n$, the entry $A_{m,n} = e^{-\alpha m} d((m,n))$ where $d(i)$ is the number of divisors of $i$. First, let’s figure out the Cholesky decomposition for $\alpha = 0$. In this case, $A_{m,n} = d((m,n))$. This has a nice form which is obvious in retrospect: $V_{m,n} = 1$ if $n|m$ and $V_{m,n} = 0$ otherwise.
In other words, the $n$th column is the indicator function for the infinite HAP with common difference $n$. It is easy to verify that $V_m \cdot V_n = d((m,n))$ since $V_{m,d} = V_{n,d} = 1$ precisely when $d|m$ and $d|n$. The next step is to figure out what the decomposition is for this setting of parameters with $\alpha > 0$ and very small. I want to obtain approximate expressions for the entries of the Cholesky decomposition as linear expressions in $\alpha$, i.e. by dropping higher order terms from the Taylor series. I’m hoping that the relatively structured form of the matrix will allow us to get closed form approximations for the entries of this decomposition, and further that it will result in diagonal elements $V_{n,n}$ being relatively high for $n$ with many divisors. If so, this should help us subtract large $b_n$ values for such $n$. • Moses Charikar Says: March 2, 2010 at 11:03 am I tried calculating the coefficients of $\alpha$ for the entries of the Cholesky decomposition of the matrix described above. (The constant term for each entry is given by the Cholesky decomposition for $\alpha = 0$ determined above.) The computation of the $\alpha$ coefficients seemed too tedious (and error-prone) to do by hand, so I wrote some code to do it. Here is the output of the code for the first 100 rows of the decomposition. The format of each line of the file is i j $\beta(i,j)$ where $\beta(i,j)$ is the coefficient of $\alpha$ in the Taylor expansion of $V_{i,j}$. It seems to match some initial values I computed by hand, so I hope the code is correct. I can’t say I understand what these values are, but it is reassuring to see that for $n$ with many divisors, $\beta(n,n)$ seems to be large. On the other hand it seems to be $-1/2$ for prime $n$. I hope this means that we can subtract out large values of $b_n$ for $n$ with many divisors while maintaining the property that the Cholesky decomposition continues to proceed successfully. To prove that Cholesky works, we will need to show upper bounds on $|V_{i,j}|$ for $i > j$ and lower bounds on the diagonal terms $V_{i,i}$ (for the decomposition of the matrix with the diagonal form subtracted out). Before we do that, we need to understand what these coefficients $\beta(i,j)$ are and why the values $\beta(n,n)$ are large for $n$ with many divisors. Any ideas? • gowers Says: March 2, 2010 at 11:26 am I’ve just had a look in Sloane’s database to see if there are any sequences that match the values $\beta(n,n)$ but had no luck (even after various transformations, such as multiplying by 2, or multiplying by 2 and adding 1). But it might be worth continuing to look. • gowers Says: March 2, 2010 at 11:35 am A couple of small observations. I think if we continue along these lines we will be able to get a formula for the diagonal values. The observations are that $2^k$ maps to $(k-2)2^{k-2}$ for every $k$ (so far, that is) and that $3^k$ maps to $(k-3/2)3^{k-1}$ for every $k$. I’ll continue to look at powers for a bit and then I’ll move on to more general geometric progressions (an obvious example being 3,6,12,24,48,96) and see what pops out. • gowers Says: March 2, 2010 at 11:45 am A formula that works when $p=2,3,5$ and gives the right value for $k=0$ and $k=1$ is that $p^k$ maps to $((\frac{p-1}2)k-p/2)p^{k-1}$. I suspect it is valid for all prime powers but have not yet checked. Maybe this looks more suggestive if we rewrite it as $\frac 12 ((1-1/p)k-1)p^k$. • Moses Charikar Says: March 2, 2010 at 11:51 am Very interesting.
I’ve put up a file with only the diagonal entries up to 1000 here. It is tedious to look for diagonal entries in the earlier file. I’ve also included prime factorizations in the new file. • gowers Says: March 2, 2010 at 12:39 pm Sorry — had to do something else. But I’ve now checked the GP 3,6,12,… and we get the formula $f(3\cdot 2^k)=2^{k-2}(7k-2)$. Next up, the GP 5,10,20,40,… which gives the formula $f(5\cdot 2^k)=2^{k-2}(11k-2)$. So I’d hazard a guess that $f(p\cdot 2^k)=2^{k-2}((2p+1)k-2)$. Let me do a random check by looking at $f(88)$. If my guess is correct, this should be $2^{3-2}\times 23=46$ and in fact it is … 134. Let me try that again. It should be $2^{3-2}(23\times 3-2)=2\times 67=134$. So I believe that formula now. • gowers Says: March 2, 2010 at 12:50 pm It’s taken me longer than it should have, but I now see that we ought to be looking not at $f(n)$ but at $f(n)/n$. Somehow, this is where the “real pattern” is. When $n=p^k$ we get $f(n)/n=(1/2)((1-1/p)k-1)$, and when $n=2^kp$ we get $f(n)/n=(1/4)((2+1/p)k-2/p)=(1/2)((1+1/2p)k-1/p)$. I think an abstract question is emerging. Suppose we know that, as seems to be the case, for any $a$ and $p$ we have a formula of the kind $f(ap^k)=p^k(uk+v)$. With enough initial conditions, that should determine $f$. We already have a lot of initial conditions, since we know that $f(1)=f(p)=-1/2$ for every prime $p$. Those conditions are enough to determine $f(p^k)$, but they don’t seem to be sufficient to do everything. For that I appear to need to know $f$ at all square-free numbers, or something like that. • gowers Says: March 2, 2010 at 1:03 pm For primes $p,q$ with $p<q$ the formula appears to be $f(pq)=q(p-1)-1/2$. I inducted that from a few small instances, and then checked it on $407=11\times 37$. My formula predicts $370-1/2=369.5$ and that is exactly what I get. In keeping with the general idea of looking at $f(n)/n$, let me also write this formula as $f(pq)/pq=1-1/p-1/2pq$. A small calculation, which I shall now do, will allow me to deduce the formula for $p^aq^b$. • Moses Charikar Says: March 2, 2010 at 1:15 pm Ok, I think we should have the diagonal values figured out soon, although understanding why these numbers arise will take more effort. Looking ahead, the hope was that large numbers are an indication that large $b_i$ can be subtracted out from those entries without sacrificing positive semidefiniteness. At this point, I don’t have a clue about how these $b_i$ should be picked. We can experiment with some guesses later. Question for later: The hope is that for any $C$, for suitably small $\alpha$, we can pick $b_i$ such that $\sum b_i > C/\alpha$. At this point, I haven’t thought through why this should necessarily be the case. Is there a heuristic argument that we have enough mass on the diagonal to pull this through? I am going to have to sign off for a while and get back to this later in the day. • gowers Says: March 2, 2010 at 1:36 pm For $n=p^kq$ with $p<q$ I get $f(n)=kp^{k-1}(p-1)(q+1/2)-p^k/2$ or equivalently $f(n)/n=k(1-1/p)(1+1/2q)-1/2q.$ I’ll add to this comment when I get a formula for $p^aq^b$, which should be easy now. • Alec Edgington Says: March 2, 2010 at 1:50 pm All very interesting. Another thing to note about the $\beta(i,j)$ is that the fractional part is $\frac{1}{2}$ precisely when $j \mid i$ and $4 \nmid j$.
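Moses's table can be reproduced independently of the C code by numerical differentiation. The numpy sketch below (my own, assuming I have read the setup correctly: $A_{m,n}=e^{-\alpha(m\vee n)}d((m,n))$ with $C_d=1$) computes the Cholesky factors at $\alpha=0$ and at a small $\alpha$, takes the finite difference, and prints some diagonal values $\beta(n,n)$ to test against the guessed formulae; it also checks the $\alpha=0$ pattern $V_{m,k}=1$ iff $k\mid m$.

```
import numpy as np
from math import gcd

def hap_matrix(n, alpha):
    # A[m-1, k-1] = exp(-alpha * max(m, k)) * d(gcd(m, k))
    d = [0] * (n + 1)
    for i in range(1, n + 1):                 # divisor counts by sieving
        for j in range(i, n + 1, i):
            d[j] += 1
    A = np.empty((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            A[i - 1, j - 1] = np.exp(-alpha * max(i, j)) * d[gcd(i, j)]
    return A

n, alpha = 50, 1e-6
V0 = np.linalg.cholesky(hap_matrix(n, 0.0))   # the alpha = 0 decomposition
V1 = np.linalg.cholesky(hap_matrix(n, alpha))
beta = (V1 - V0) / alpha                      # first-order coefficients

# At alpha = 0 the k-th column should be the indicator of the HAP with
# common difference k:
I = np.array([[1.0 if m % k == 0 else 0.0 for k in range(1, n + 1)]
              for m in range(1, n + 1)])
assert np.allclose(V0, I)

for m in [2, 4, 8, 16, 9, 27, 6, 12, 24, 48, 30]:
    print(m, round(beta[m - 1, m - 1], 4))    # compare with the guessed formulae
```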
• gowers Says: March 2, 2010 at 2:01 pm I got the following formula for $f(p^aq^b)/p^aq^b$: $\frac 12(1-\frac 1p)a((1+\frac 1q)b+1)+\frac 12((1-\frac 1q)b-1).$ I have tried it out on $225=3^2\cdot 5^2$ and got the answer $f(225)=577.5$, which was correct. So I believe this formula too. Just to repeat what I am doing here, I am assuming that the function $f(n)/n$ takes geometric progressions to arithmetic progressions and deducing what I can whenever I know what two values are. In this way, knowing $f$ for prime powers and for products of two primes was enough to give me $f$ for all numbers of the form $p^aq^b$. Sorry — correction — I have assumed that only for GPs with prime common ratio. I haven’t looked at what happens when the common ratio is composite. Come to think of it, if $f(n)/n$ takes GPs with prime common ratio to APs, then $\exp(f(n)/n)$ takes GPs with prime common ratio to GPs, which is a pretty strong multiplicativity property. That leads me to think that the eventual formula is going to be quite nice. I hope I’ll be able to guess it after working out $f(pqr)/pqr$, or perhaps I’ll have to do $f(p^aq^br^c)/p^aq^br^c$. But first I need some lunch. • gowers Says: March 2, 2010 at 2:04 pm Here’s a slightly more transparent way of writing the formula for $f(p^aq^b)/p^aq^b$: $\frac 12(1-\frac 1p)(1+\frac 1q)ab+\frac 12(1-\frac 1p)a+\frac 12(1-\frac 1q)b-\frac 12.$ • gowers Says: March 2, 2010 at 3:05 pm Just before I dive into the calculations, let me make the remark that the signs are that the formula will be an inhomogeneous multilinear one in the indices. More precisely, I am expecting a formula of the following kind for $f(n)/n$ when $n=p_1^{a_1}\dots p_r^{a_r}$ and the primes $p_1,\dots,p_r$ are in increasing order: $\frac 12\sum_{A\subset\{1,\dots,r\}}\prod_{j\in A}(1+\frac{c_{A,j}}{p_j})a_j-1.$ • gowers Says: March 2, 2010 at 3:52 pm I’ve just wasted some time going about this in a stupid way, which was to calculate $f(pqr)/pqr$ for several values and try to spot a pattern. But I now see that that was unlikely to be easy, since the dependence on $1/p$, $1/q$ and $1/r$ is trilinear. The values look quite strange too, but I have a new plan. That plan is to try to find a formula for $f(6p^k)/6p^k$ when $p$ is a prime greater than 3. That should give me some coefficients of a linear function in $k$. If I repeat the exercise for one or two other small products of two primes, then it should be easyish to spot a formula for the coefficients, and then I’ll have a formula for $f(pqr^k)$, which will be a good start. In fact, it will be more than just a start, as from that and the multilinearity assumption I’ll be able to work out all the rest of the values at products of three prime powers. • gowers Says: March 2, 2010 at 4:04 pm Oh dear, for the first time I have run up against an anomaly in the data that may be hard to fit into a nice pattern. When $p$ is any of 7,11,13,17 or 19 the value of $f(6p)/6p$ is given by the formula $2-(2p-5)/12p$, but when $p=5$ we get instead $2-(2p-7)/12p$. But I’ll continue with this and try not to worry about $f(30)$ too much for now. (But I would love to learn that it was wrong and should have been 57.5 instead of 58.5.) • Moses Charikar Says: March 2, 2010 at 4:08 pm I’ve added an even bigger file with diagonal entries up to 10,000 in this directory in case you are looking for more data points to verify guesses for formulae.
The directory also contains the C code (cholesky.c) used to generate the lower triangular matrix of coefficients and another piece of code (cholesky2.c) to generate the diagonal entries only. I really hope the code is not buggy! • Moses Charikar Says: March 2, 2010 at 4:36 pm Uh oh … I spotted a tiny problem in the code which could potentially affect the calculation for rows and columns greater than $n/2$. Except that for some mysterious reason it doesn’t, i.e. the output is exactly the same after fixing the “bug”. Basically, I was not setting the constant term in the Taylor expansion correctly for diagonal entries > $n/2$. It should have been 1 and was being set to 0 instead. This could potentially affect the calculation of the coefficient of $\alpha$ for all entries in columns > $n/2$, but apparently it does not. I’m checking to see if there are any other issues. • Moses Charikar Says: March 2, 2010 at 4:55 pm I looked over the code and as far as I can tell, it is correct. I now understand why the “bug” above did not really change anything. • gowers Says: March 2, 2010 at 5:16 pm After further investigation, I no longer think the value at 30 is anomalous, but I have discovered (with much more effort than I thought would be needed, because the pattern is clearly subtler than I thought it would be) the following formula for $f(2^kpq)/2^kpq$. We know from earlier that $f(pq)/pq=1-1/p-1/2pq$, so that gives us the value when $k=0$. If I call that value $a_{pq}$, then the formula, which is linear in $k$ as we expect, is $\displaystyle a_{pq}+(1+\frac{p*q}{4pq})k,$ where $*$ is a binary operation that I do not fully understand. Let me tabulate the values that I have calculated so far: $3*5=19, 3*7=21, 3*11=29,3*13=33,$ $5*7=31,5*11=33,5*13=37,$ $7*11=43, 7*13=43,$ $11*13=67.$ I had to calculate a lot of values before I realized that there is a nice formula for this binary operation except if $p$ and $q$ are consecutive primes. The formula is simply $p*q=2(p+q)+1$. But when $p$ and $q$ are consecutive we seem to get a little “kick” and the value is larger. In fact, by the time we get to $11*13$ the kick is not all that little. Anyhow, this observation partially explains the mysterious anomaly at 30, but really it replaces it by a bigger mystery: why should the values be sensitive to consecutiveness of primes? Of course, at this stage I haven’t looked at all that many numbers, so I hope that I’ll be able to ask a more precise question in due course. So far, however, I don’t feel all that close to a formula for $f(p^aq^br^c)$ … • gowers Says: March 2, 2010 at 5:21 pm Correction: the formula does not give the right value for $7*13$ either, so my understanding is worse than I thought. However, in a way that might be good news, since I found the idea of a formula that fails for consecutive primes a bit bizarre. • gowers Says: March 2, 2010 at 5:45 pm At some point I may ask whether somebody can write a piece of code to work out this binary operation that I am working out laboriously by hand. But for now, here are a few more values, which perhaps give us tiny further clues. $7*17=49,$ at which point I think it is a pretty safe conjecture that if $p*q=2(p+q)+1$, then $p*r=2(p+r)+1$ for all $r\geq q$. $11*19=67,$ so we’re not there yet with 11. $11*23=69=2(11+23)+1$ so now we have got there. It looks as though a necessary and sufficient condition for $p*q=2(p+q)+1$ is that $q\geq 2p$. What’s more, it looks as though $p*q$ is constant before that. So the right formula should involve maxes and things.
OK, here is my revised formula: $p*q=\max\{6p+1,2(p+q)+1\}$. Phew! To test the earlier conjecture, let me try $7*19$. It works out to be 53, which is indeed what it should be. And to test the new conjecture, let me try $13*19$. The prediction is 79. What I actually get is … 79. OK, I think I believe this formula now, but it will need quite a bit more slog to get a formula for $f(p^aq^br^c)/p^aq^br^c$ from here. I’ve spent several hours on this: the calculations are certainly routine enough to do on a computer, so I wonder if anyone could take over at this point. For instance, it would be good to give a formula for $f(3^kpq)$. Does it change behaviour according to whether $q<3p$ or $q>3p$? • Alec Edgington Says: March 2, 2010 at 9:16 pm Here’s a plot of $f(n)/n$ for $1 \leq n \leq 10\,000$: http://www.obtext.com/erdos/diagonals_gradient.png 19. gowers Says: March 2, 2010 at 11:12 am | Reply These calculations (of Moses in the previous comment) look very interesting and potentially fruitful. Rather than trying to duplicate them, I will continue to do what I can to refine my speculations about what the coefficients $c_{k,d}$ and $b_m$ might conceivably look like. As has already been mentioned, an important constraint on the $b_m$ (aside from the essential one that the partial sums should be unbounded) is that if $A$ is a set for which EDP fails, then $\sum_{m\in A}b_m<\infty$. Here I am saying that EDP fails for $A$ if there exists a sequence $(x_n)$ that takes $\pm 1$ values on $A$ and arbitrary values on $\mathbb{N}\setminus A$ such that the HAP-discrepancy of $(x_n)$ is bounded. This is a necessary condition on the coefficients $b_m$, since if $\sum_{m\in A}b_m=\infty$ then we know that EDP is true for $A$. (Once again, I am not being careful about the distinction between the large finite case and the infinite case.) In fact, it is better to think about the complex case, or even the vector case, since the SDP proof, if it succeeds, automatically does those cases as well. So let me modify that definition to say that EDP fails for $A$ if there is a bounded-discrepancy sequence of complex numbers (or vectors) that has modulus (or norm) 1 everywhere on $A$. With this in mind, it is of some interest to have a good idea of which sets $A$ are the ones for which EDP succeeds. A trivial example is any HAP (assuming, that is, that EDP is true). An example of a set for which EDP fails is the complement of any HAP. Let me prove that in the case where the HAP has common difference 15 — it will then be clear how the proof works in general. For any $d$, let $\chi_d$ be a non-trivial character of the multiplicative group of residue classes mod $d$ that are coprime to $d$. Now define $f(n)$ to be $\chi_{15}(n)$ if $n$ is coprime to 15, $\chi_5(n/3)$ if $(15,n)=3$, $\chi_3(n/5)$ if $(15,n)=5$, and 0 if $n$ is a multiple of 15. Any HAP will run periodically through either all the residue classes mod 15, or all the residue classes that are multiples of 3, or all the residue classes that are multiples of 5, or will consist solely of multiples of 15. Whatever happens, we know that $f((15m+1)d)+f((15m+2)d)+\dots+f(15(m+1)d)=0$ for every $m$ and every $d$, and that $|f(n)|=1$ for every $n$ that is not a multiple of 15. As Moses pointed out yesterday (to stop me getting carried away), if $A$ is the set of factorials, then EDP fails for $A$.
That is because every HAP intersects $A$ in a set of the form $\{m!,(m+1)!,(m+2)!,\dots\}$, which means that we can define $x_n$ to be 0 if $n$ is not a factorial, and $(-1)^m$ if $n=m!$. The main point of this comment is to give a slightly non-trivial example of a set for which EDP succeeds. By “non-trivial” I don’t mean mathematically hard, but just that the set is rather sparse. It is the set of perfect squares. (The proof generalizes straightforwardly to higher powers, and, I suspect, to a rich class of other sets, but I have not yet tried to verify this suspicion.) To see this, one merely has to note what the intersection of $\{d,2d,3d,4d,\dots\}$ with $\{1,4,9,16,\dots\}$ is. The answer is that it is the set $\{r^2,4r^2,9r^2,16r^2,\dots\}$, where $r^2$ is the smallest multiple of $d$ that is a perfect square. (That is, to get $r$ from $d$ you multiply once by each prime that divides $d$ an odd number of times.) The proof that this is what you get is that a trivial necessary and sufficient condition for $d$ to divide $a^2$ is that $r^2$ should divide $a^2$. Now suppose that EDP failed for the set of perfect squares, and let $y_n$ be the … Hang on, my proof has collapsed. What I am proving is the weaker statement that EDP fails if you insist that the sequence is zero outside $A$. Let me continue with that assumption, though the statement is much less interesting. Now suppose that EDP fails for the set of perfect squares, and let $y_n$ be the sequence that demonstrates this. Thus (because of our new assumption) $y_{m^2}$ has modulus 1 for every positive integer $m$, and $y_n=0$ if $n$ is not a perfect square. But then we can set $x_m=y_{m^2}$ and we have a counterexample to EDP. This comment has ended up as a bit of a damp squib, but let me salvage something by asking the following question. Suppose that the vector version of EDP is true and that $(x_n)$ is a sequence such that $|x_{m^2}|=1$ for every $m$. Does it follow that $(x_n)$ has unbounded discrepancy? This may be a fairly easy question, since the squares are so sparse. (If it is, then the whole point of this comment is rather lost, but a different useful point will have been made instead.) For example, perhaps it is possible to find a sequence that takes the value 1 at every square and 0 or -1 everywhere else and has bounded discrepancy. I think there is an interesting class of questions here. For which sets $A$ does there exist a function $f$ of bounded discrepancy that takes the value 1 everywhere on $A$ and 0, 1 or -1 everywhere else? As I did in the end of this comment, one can paste together different HAPs to produce an example with density zero, but are there any interesting examples? I would expect that as soon as $A$ is reasonably sparse (in some sense that would have to be worked out — it would have to mean something like “sparse in every HAP”), it would be possible to use a greedy algorithm to prove that EDP fails. So here is a yet more specific question. Define a sequence $(x_n)$ as follows. For every perfect square it takes the value 1. For all other $n$ we choose $x_n$ so as to minimize the maximum absolute value of the partial sum along any HAP that ends at $n$, and if we have the choice of taking $x_n=0$ then we do. Thus, the first few terms of the sequence are as follows. 
$x_1=1$ (since 1 is a square), $x_2=0$ (since to minimize the maximum of $|x_1|$ and $|x_1+x_2|$ we can take $x_2$ to be 0 or -1 and we prefer 0), $x_3=0$ (for the same reason), $x_4=1$ (since 4 is a square), $x_5=-1$ (since $x_1+\dots+x_4=2$, so this makes the maximum at 5 equal to 1 rather than 2), $x_6=-1$, $x_7=0$, and so on. Does this algorithm give rise to a bounded-discrepancy sequence? If not, is there some slightly more intelligent algorithm that does?
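The greedy rule just described is easy to prototype. Below is a minimal Python sketch of one reading of it (ties between $+1$ and $-1$, which the comment leaves unspecified, are broken towards $+1$ here); how fast the reported maximum grows with $N$ is exactly the open question.

```
import math

def greedy_squares(N):
    sums = {}                    # sums[d] = current sum along the HAP d, 2d, ...
    x = [0] * (N + 1)
    disc = 0
    for n in range(1, N + 1):
        divs = [d for d in range(1, n + 1) if n % d == 0]
        if math.isqrt(n) ** 2 == n:
            v = 1                # forced: n is a perfect square
        else:                    # minimise the worst HAP sum ending at n,
            v = min((0, 1, -1),  # preferring 0 (min keeps the first minimiser)
                    key=lambda c: max(abs(sums.get(d, 0) + c) for d in divs))
        x[n] = v
        for d in divs:
            sums[d] = sums.get(d, 0) + v
            disc = max(disc, abs(sums[d]))
    return x[1:], disc

xs, disc = greedy_squares(2000)
print(xs[:7], disc)      # compare the first terms with the hand computation above
```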
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 906, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481834769248962, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/lie-algebra+mathematical-physics
# Tagged Questions 2answers 80 views ### When are there enough Casimirs? I know that a Casimir for a Lie algebra $\mathfrak{g}$ is a central element of the universal enveloping algebra. For example in $\mathfrak{so}(3)$ the generators are the angular momentum operators ... 2answers 143 views ### In quantum mechanics(QM), can we define a high-dimensional “spin” angular momentum other than the ordinary 3D one? Inspired by my previous question Questions about angular momentum and 3-dimensional(3D) space? and another relevant question How to define angular momentum in other than three dimensions? , now I get ... 1answer 143 views ### Equivalent Representations of Clifford Algebra I'm reviewing David Tong's excellent QFT lecture notes here and am a little confused by something he writes on page 94. We've considered the standard chiral representation of the Clifford Algebra, ... 1answer 246 views ### Wigner-Eckart projection theorem I'm following the proof of Wigner-Eckart projection theorem which states that: \langle \bf{A} \rangle ~=~ \frac{\langle \bf{A} \cdot \bf{J} \rangle}{\langle {\bf{J}}^2 \rangle} \langle \bf{J} ... 2answers 344 views ### Lie bracket for Lie algebra of $SO(n,m)$ How does one show that the bracket of elements in the Lie algebra of $SO(n,m)$ is given by $$[J_{ab},J_{cd}] ~=~ i(\eta_{ad} J_{bc} + \eta_{bc} J_{ad} - \eta_{ac} J_{bd} - \eta_{bd}J_{ac}),$$ ... 1answer 264 views ### Wigner-Eckart theorem of SU(3) I have just come across the Wigner-Eckart theorem and am not sure on how to apply it. How do I find the matrix elements of $\langle u|T_a|v\rangle$ in terms of tensor components and the Gell-Mann ... 1answer 202 views ### How do I find the tensor components of all weights of a representation of SU(3), e.g. the six dimensional representation (2,0) How do I find the corresponding tensor component v^ij of the six dimensional representation of SU(3) with dynkin label (2,0). 1answer 476 views ### General procedure for Clebsch-Gordan expansions I'm wondering if the Clebsch-Gordan series generalize to any orthonormal set of basis functions? If so, how would one go about deriving an expression for an arbitrary set of basis functions (perhaps ... 2answers 512 views ### Is the G2 Lie algebra useful for anything? Seems like all the simpler Lie algebras have a use in one or another branch of theoretical physics. Even the exceptional E8 comes up in string theory. But G2? I've always wondered about that one. ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8633516430854797, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/57505?sort=votes
## A geometric reference for (affine) Gorenstein varieties and singularities

I would like to ask for a reference to some text that explains in relatively down-to-earth (if possible geometric) terms (for dummies) what a Gorenstein singularity and a Gorenstein variety are (for a person whose knowledge of commutative algebra is minor). What standard things could one try to do to check whether a given scheme is Gorenstein? If the field of definition is $\mathbb C$, even better. I know one source, Eisenbud's "Commutative Algebra", but find it a bit hard.

- 1 In view of your other question (and your alias), asking about Gorenstein is a bit surprising. Unfortunately, I don't think there is an answer which doesn't involve a reasonable amount of commutative algebra. But if you are content with an example, any hypersurface in affine space is Gorenstein. More complicated singularities may fail to be. – Donu Arapura Mar 5 2011 at 23:10
- Donu, thanks a lot for your comment! This question about Gorenstein varieties is related to integrable systems... I know that complete intersections are Gorenstein. But a typical singularity is not a complete intersection... – aglearner Mar 5 2011 at 23:24

## 2 Answers

Hi, I don't think there is an easy way to do this in general. Gorenstein is a fairly homological / commutative-algebraic condition. However, the condition that $K_X$, the canonical divisor, is a Cartier divisor is quite close to the Gorenstein condition, and for some purposes it is just as good. Another algebraic place to read about Gorenstein singularities (besides Eisenbud's book) is Bruns and Herzog's "Cohen-Macaulay Rings". There is also a question you should ask yourself about Gorenstein singularities: which of the following properties of Gorenstein singularities do you want?

1. The fact that Gorenstein singularities are Cohen-Macaulay (and so have well-behaved Serre duality without the need for fancy homological machinery and derived categories; see the Serre duality section in Hartshorne's Algebraic Geometry).
2. The fact that on a Gorenstein variety, the canonical Weil divisor $K_X$ is actually a Cartier divisor.

In fact, a singularity being Gorenstein is equivalent to both conditions 1. and 2. I also think that 1. + 2. is how most geometers think about the Gorenstein condition. Commutative algebraists tend to have a different perspective.

I should also point out perhaps one other large class of rings where you can easily detect whether or not the ring is Gorenstein (besides the already-mentioned complete intersections). Suppose that $X$ is a projective variety with an ample line bundle $\mathcal{L}$. Then the section ring $$\oplus_{n \geq 0} H^0(X, \mathcal{L}^{\otimes n})$$ is Gorenstein if and only if the following two conditions hold.

1. $H^i(X, \mathcal{L}^{\otimes n}) = 0$ for all $i > 0$ and all $n \geq 0$. This is just condition 1. above.
2. $\mathcal{O}_X(K_X)$ is isomorphic to $\mathcal{L}^n$ for some integer $n$. This is condition 2. above.

EDIT: If you have explicit equations, you can often use Macaulay2 to check whether the ring is Gorenstein. Let me know if this would be useful to you.

- Thank you for the detailed answer! I'll think of it. – aglearner Mar 6 2011 at 12:56
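A quick sanity check of the section-ring criterion above (my own worked example, not part of the original answer): take $X=\mathbb{P}^n$ and $\mathcal{L}=\mathcal{O}(1)$, whose section ring is the polynomial ring $k[x_0,\dots,x_n]$.

```latex
% Condition 1: H^i(P^n, O(m)) = 0 for all i > 0 and all m >= 0 (standard).
% Condition 2: the canonical bundle of projective space is
\mathcal{O}_X(K_X) \;=\; \mathcal{O}(-n-1) \;=\; \mathcal{L}^{-n-1},
% i.e. a power of L, so both conditions hold and the polynomial ring is
% Gorenstein, as it must be: regular rings are Gorenstein.
```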
Sorry, tough luck, but most first (and second) algebraic geometry courses don't even touch Cohen-Macaulay rings, let alone Gorenstein. Look, for example, at Definition 4.2 here. So it is unlikely you can find such a reference. Here is an explanation of why you need both Cohen-Macaulayness and the fact that the canonical divisor is Cartier, as mentioned in Karl's answer. The trouble is that there are UFDs (so all divisors are Cartier) which are not Cohen-Macaulay (for instance, the invariant ring of $\mathbb Z_4$ acting by cyclically permuting the variables on the polynomial ring in four variables over a field of char 2). Such examples are not very well known; I remember Sándor Kovács pointed out in a recent comment that most people don't even realize that it could be an issue. But without Cohen-Macaulayness, the canonical sheaf would not be truly dualizing (see the comments here). This is perhaps where the real power of the property lies. Finally, I would like to recommend this survey on Gorenstein rings. You can pick up a lot about them from there, including the very interesting history. To quote from the Introduction: "As we shall see, they could perhaps more justifiably be called Bass rings, or Grothendieck rings, or Rosenlicht rings, or Serre rings." Enjoy!

- Many thanks for the answer and for the link to the article of Craig Huneke! – aglearner Mar 7 2011 at 8:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9266092777252197, "perplexity_flag": "head"}
http://mathoverflow.net/questions/115032?sort=newest
## Non-rigorous reasoning in rigorous mathematics

I was wondering what role non-rigorous, heuristic-type arguments play in rigorous math. Are there examples of rigorous, formal proofs in which a non-rigorous reasoning still plays a central part? Here is an example of what I am thinking of. You want to prove that some formula $f(n)$ holds, and you want to prove this by induction. Based on heuristic arguments, you conjecture what the correct formula is. Then you prove it by induction. But, if you had just given the induction proof on its own, then you would have to pluck this mysterious formula out of thin air. I am interested in situations in which there is a heuristic argument which is valid and can be formalized. I am more interested in cases in which there is a heuristic argument and a separate (or complementary) rigorous argument, but the heuristic argument is more enlightening and more explanatory.

- 5 This is tagged community wiki, but is it actually community wiki? – Brian Rushton Dec 1 at 0:41
- 7 no, but it should be! – unknown (google) Dec 1 at 0:44
- 22 I don't understand this question. When do non-rigorous heuristic-type arguments not play a role when doing research in mathematics? The whole point of doing research in math is to find new proofs and new theorems. If you restrict yourself to rigorous reasoning, how would you ever find anything new except by some kind of process of exhaustion? – Deane Yang Dec 1 at 2:24
- 7 To follow up on Deane's comment, in a paper on the development of integration Lebesgue wrote at one point "... if one were to refuse to have direct, geometric, intuitive insights, if one were reduced to pure logic, which does not permit a choice among everything that is exact, one would hardly think of many questions, and certain notions [...] would escape us completely." – KConrad Dec 1 at 2:58
- 3 "If you restrict yourself to rigorous reasoning, how would you ever find anything new except by some kind of process of exhaustion?" There is a viable alternative to intuition and heuristics, which is "make only (but all) unavoidable steps in your argument". If you've read Littlewood's "Miscellany", you probably remember the passage about a "precisian" who might pick on the absence of the word "finite" in the phrase "covering by intervals" and start "on the road inevitably leading to the Lebesgue measure". This remark deserves more attention than it usually gets, IMHO :-) – fedja Dec 1 at 6:30

## 11 Answers

Close to the requirement in the original question: Waring's problem, which generalizes Lagrange's four-square theorem. Every positive integer can be expressed as a sum of 9 cubes, a sum of 19 fourth powers, etc. For $k$-th powers the number of summands required, denoted $g(k)$, was a heuristic guess, $g(k) = 2^k + \lfloor (\tfrac32)^k \rfloor - 2$, and some variations of this. Though Hilbert proved $g(k)$ is finite before 1910, actual specific values were proved decades later. The reason I know this is because one of the persons who "nailed the last nail into the coffin" of this problem in the 1980s was working where I started my PhD. The number of summands is very high for low numbers because (heuristically) you have only 1 and $2^k$ to use. For fourth powers, 79 is the culprit, needing fifteen 1's and four 16's. So this led to another related natural question: since numbers needing that many summands are small in size, there may be only a finite number of exceptions.
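A quick numerical check of the 79 example (my own illustration, not part of the original answer): a small dynamic program over the fourth powers available below 79, namely $\{1, 16\}$, confirms that 19 summands are needed.

```python
# Fewest fourth powers summing to n, by dynamic programming.
def min_fourth_powers(n):
    powers = [k ** 4 for k in range(1, n + 1) if k ** 4 <= n]
    best = [0] + [n] * n          # best[m] = fewest fourth powers summing to m
    for m in range(1, n + 1):
        best[m] = 1 + min(best[m - p] for p in powers if p <= m)
    return best[n]

print(min_fourth_powers(79))      # 19 = fifteen 1's + four 16's
```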
Define $G(k)$ as the number of summands needed for expressing every sufficiently large integer as a sum of $k$-th powers (i.e. treat 79 as an exception for the case of fourth powers). $G(4)$ is known to be 16.

-

Every geometric problem that has a two-dimensional representation is solved by almost every mathematician by first drawing a diagram, then deriving the correct formal description from this diagram, and then continuing to solve the problem in the algebraic description. These certainly are ubiquitous "situations in which there is a heuristic argument which is valid and can be formalized." Also the heuristic argument is "separate (or complementary)" to the "rigorous argument, but the heuristic argument is more enlightening and more explanatory."

-

In 1965 I had the idea that the proof of the Seifert-van Kampen theorem for the fundamental groupoid generalised to two dimensions, and higher, but lacked the gadget corresponding to a 2-dimensional fundamental groupoid using squares composed in two directions. So this was an idea of a proof in search of a theorem. So I tried for 9 years to define this for a topological space. Finally, in 1974, Philip Higgins and I realised that we could do this for a pair of spaces, i.e. a space $X$ and subspace $A$, mapping a square to $X$ with edges mapped into $A$ and taking homotopy classes of these maps with vertices fixed in the homotopies. Fortunately, lots of work on related algebra had been done in the meanwhile, so the main stuff rolled out, and got published in 1978. Unfortunately, the use of groupoids and double groupoids seemed to arouse hostility, so this and the work in all dimensions was, a colleague remarked, pursued in the teeth of opposition! So that is another possible effect of intuition: to say some work is ridiculous! It's a hard life! But it has been lots of fun pursuing a line of intuition and trying to make it really work. I was lucky in my collaborators, too. Later: There is an example in J.E. Littlewood's "A Mathematician's Miscellany" where a picture contains the essential argument. In higher category theory, there is quite a lot of use of manipulating diagrams, and this is regarded, rightly, as rigorous.

-

There is a well-known connection between parabolic and elliptic partial differential equations and Brownian motion. By now it is very well explored formally (e.g. the probabilistic proof of Hörmander's theorem due to Malliavin), but it used to be the case that people would get their intuition from Brownian motion and then prove a theorem by completely different means. One example is the following quote from Nash's 1958 paper "Continuity of solutions of parabolic and elliptic equations": The methods here were inspired by physical intuition, but the ritual of mathematical exposition tends to hide this natural basis. For parabolic equations, diffusion, Brownian movement, and flow of heat or electrical charge all provide helpful interpretations. As a side note: one of the people who contributed the most to establishing the formal connection between Brownian motion and parabolic/elliptic equations was Joseph Doob. He had done his Ph.D. thesis on harmonic analysis, but couldn't find a job anywhere (this was a couple of years after the Great Depression) until he got offered a post at a probability department. He started working on formal (i.e.
Kolmogorov) foundations of probability and ended up establishing the connection between harmonic functions and martingales. He's one of my favorite mathematicians and I think his contributions are underrated.

-

Have you read Proofs and Refutations by Lakatos? It's all about the dynamic tension between "heuristics" (don't get mad at me, Andrew Stacey!) and rigorous proof, centering particularly on a classroom situation where they are discussing Euler's formula V - E + F = 2. The difference between the first attempted "proofs" or thought experiments and the final rigorous proof involving homology is pretty stark; the first proof is however memorable and explanatory. Even in pre-Robinson days before they were made rigorous, you could say that the original intuitions of infinitesimals in calculus were effective and explanatory (even today, I am told, among certain physicists and engineers who might never learn the rigorous foundations). If you read the introduction to Models of Smooth Infinitesimal Analysis by Moerdijk and Reyes, you will see examples of intuitive reasoning with infinitesimals among geometers like Lie and E. Cartan which were certainly convincing to them, but which had to undergo some distortion to meet the demands of Weierstrassian rigor -- at least that was so until recent years, when the types of reasoning with nilpotent infinitesimals in smooth analysis were clarified and made rigorous through sheaf theory and its internal logic.

- Nice references! – Brian Rushton Dec 2 at 3:31

In elementary calculus, the way I was taught to integrate by partial fractions was to guess the numerators of the fractions, then back-substitute to check the correctness of the guess. Very often the guess would be right for no obvious reason. When it was wrong, the discrepancy immediately suggested (non-rigorously) what the next guess should be, and the second guess was almost always correct.

- I've always called this method of integration "intelligent guessing". It can be used any time you can predict in advance which terms, up to constant factors, will appear in the answer. Many substitution and integration by parts problems, especially those that otherwise require doing it more than once, can also be done more efficiently this way. I think we do a disservice to calculus students by forcing them to do it the longer, more error-prone "rigorous" way. – Deane Yang Dec 1 at 15:13
- I'd never thought of "guess and check". For the case where the denominator of the rational function splits into linear factors, I like to mention to students how each of the coefficients of the partial fractions involves a derivative of the denominator; perhaps that's not too error-prone. – Todd Trimble Dec 4 at 15:25
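For concreteness, a small instance of the guess-and-check method just described (my own illustration, not part of the thread):

```latex
\frac{1}{(x-1)(x+2)} \;=\; \frac{A}{x-1} + \frac{B}{x+2}.
% Back-substituting: A(x+2) + B(x-1) = 1 forces A + B = 0 and 2A - B = 1,
% so the guess A = 1/3, B = -1/3 checks out, and
\int \frac{dx}{(x-1)(x+2)} \;=\; \tfrac{1}{3}\ln|x-1| - \tfrac{1}{3}\ln|x+2| + C.
```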
Unless I am misunderstanding, the Weil conjectures fit into this framework. I believe it took about two decades for the Grothendieck school to formalize Weil's heuristic that his conjectures follow from a Lefschetz fixed point formula for varieties over finite fields. Rather than try to flesh this post out, let me point to Brian Osserman's article for the PCM: http://www.math.ucdavis.edu/~osserman/math/pcm.pdf. The Wikipedia account of the history also seems to be not bad (but I haven't really read it in detail): http://en.wikipedia.org/wiki/Weil_conjectures. I also seem to remember learning about the history and some of the mathematics for the first time from an article by Steven Kleiman, but cannot remember the precise reference.

-

In mathematical statistics people often have experience of some method that works well in practice even though it "shouldn't" in all generality. The game is then to ask what conditions need to be satisfied to explain why the method works. Here is an example which I was not personally involved in, so I can only speculate. This paper by Bickel and Li considers local polynomial regression methods and shows that they work as well as possible (in the sense of asymptotic optimality) when the data they are being used on has low-dimensional structure. The idea is that people were finding that certain regression techniques were giving reasonable generalization performance in prediction problems even when the data was high-dimensional, so they figured that maybe the data wasn't actually high-dimensional in some relevant aspect. But which relevant aspect? That's the challenging part. To my mind, figuring out how to explicitly articulate the minimal conditions under which some "obvious" fact is true is where the discovery and understanding come in. It is a very different process than what a student does on a problem set, where the statement and all the relevant conditions are laid out and the main job is deriving the stated implication. Put another way: research has degrees of freedom on both ends -- you can find/create the answer and the question as pairs, rather than being handed the one and being asked to complete the set. This perspective of course doesn't cover all cases -- notably, that of people chasing down famous open problems. But it is a way in which one can develop a rigorous understanding from "non-rigorous" reasoning. When one first starts thinking vaguely about a problem there is nothing there about which to be rigorous.

-

I think that the following "derivation" of the Prime Number Theorem from the well-known identity $\sum_{d|n}\Lambda(d) = \log n$ is a particularly prominent example of what you are asking. Indeed, it follows from the said identity that $\sum_{n \leq x} \sum_{d|n} \Lambda(d) = \sum_{d\leq x} \Lambda(d)\sum_{n \leq x, d|n} 1 = \sum_{d\leq x}\Lambda(d)\lfloor \frac{x}{d}\rfloor$ and whence $\sum_{n \leq x}\Lambda(n)\lfloor \frac{x}{n} \rfloor = \sum_{n \leq x} \log n \sim x \log x$. Now if we replaced the $\lfloor x/n \rfloor$ in the previous line by $x/n$, we would get $\sum_{n\leq x}\frac{\Lambda(n)}{n} \sim \log x \sim \sum_{n \leq x}\frac{1}{n}$. This might lead us to ascertain that the function $\Lambda$ of von Mangoldt behaves on average like the arithmetical function that is identically equal to $1$, thus $\psi(x):=\sum_{n\leq x} \Lambda(n)\sim x.$ (Voilà!) As to the formal version of the preceding argument, you may want to take a look at sections 9.9 through 9.12 of [2]. You will find there a proof of the Prime Number Theorem (presumably due to Ingham) based on the estimate $\sum_{n \leq x} \psi(\frac{x}{n}) = x\log x - x+ O(\log x), \quad x \geq 1.$ According to Prof. Balanzario (see [1, page 59]): "This demonstration ... is the correct version of our heuristic reasoning [given above]."

References

[1] E. P. Balanzario. Breviario de Teoría Analítica de los Números. SMM, México, 2003.

[2] W. Rudin. Functional Analysis. Tata McGraw Hill Publishing Company Ltd., 1974.

-
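As a numerical companion to this heuristic (my own addition, not part of the original answer), one can sieve the von Mangoldt function and watch $\psi(x)/x$ approach 1:

```python
# psi(x) = sum of Lambda(n) for n <= x, where Lambda(n) = log p if
# n = p^k and 0 otherwise, found via a smallest-prime-factor sieve.
from math import log

def psi_over_x(x):
    spf = list(range(x + 1))              # spf[n] = smallest prime factor of n
    for i in range(2, int(x ** 0.5) + 1):
        if spf[i] == i:                   # i is prime
            for j in range(i * i, x + 1, i):
                if spf[j] == j:
                    spf[j] = i
    psi = 0.0
    for n in range(2, x + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:                        # n = p^k, so Lambda(n) = log p
            psi += log(p)
    return psi / x

for x in (10 ** 3, 10 ** 4, 10 ** 5):
    print(x, round(psi_over_x(x), 4))     # tends to 1 as x grows
```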
I feel that almost all of math is this way. One specific example is the Riemann mapping theorem for annuli (which is equivalent to the standard Riemann mapping theorem). Riemann is said to have conceived of the idea by imagining current flowing from the inside of an annulus to the outside. The current flow lines and equipotentials would form an orthogonal set of coordinates which could be "stretched out" to form a perfect cylinder. Riemann's first proof of this theorem was shown to have an error, but he reportedly commented that it didn't matter, because he knew the theorem was true anyway. (Most of this comes from Jim Cannon's paper The Combinatorial Riemann Mapping Theorem.)

-

You do "pluck the mysterious formula from thin air". That is why there are (not enough) jobs available to mathematicians: doing math is not something you can leave to a computer program. The German word "Ansatz" describes this mental process very well. To solve a system of linear ODEs you assume the solutions are exponentials, and then proceed to find the coefficients. This assumption step is where intuition takes place. Of course this example is old, trivial, and well known, but similar insights are part of all new results. Your intuition shows you the way and THEN you formalize the proof.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9563645720481873, "perplexity_flag": "middle"}
http://samjshah.com/2012/10/25/approximating-the-instantaneous-rate-of-change-in-calculus/
# Approximating the Instantaneous Rate of Change in Calculus

Posted on October 25, 2012

I’ve been trying something new this year in calculus… really having students grapple with the concepts of what they can definitively know, what they can definitively not know, and what they can know with some certainty (but not total certainty) when they are given some information about a car trip. I’m hammering home the conceptual underpinnings of average and instantaneous rates of change. And I’ll blog about that soon, I hope. But today’s post comes from where we went with this…

This week, we got to the point where we were estimating the instantaneous rate of change of a function at a point by using the average rate of change for a small interval near the point. And we’re used to seeing things like this in a textbook:

[table of average rates of change over shrinking intervals omitted]

We’re getting our interval smaller and smaller and seeing the average rate of change get closer to some value. This value it is getting closer and closer to is the instantaneous rate of change. That’s a deep and important thing. And we all know that. But when we were generating a table like this, one of my students asked “Why do we have to do this? Why can’t we just pick two points really really close together instead of doing this horrible calculation like 4 times? Like super close together. Then we only have to do it once if we’re just trying to estimate the instantaneous rate of change.”

Brilliant! Because who wants to do that horrible calculation like 4 times? It’s tedious, even with a calculator. I wasn’t ready to talk about the derivative, but I did want to answer his question. Why do we have to do so many calculations instead of just one? Unfortunately, I fumbled through it. And as always is the case, a genius idea strikes me right after class ends. So I decided to use it for my other section.

In that section, I have them think about what the use is of doing this calculation for smaller and smaller intervals, instead of just one interval… one student came up with the idea that “it gives us more certainty… more data to work with…” but that was ambiguously stated. More certainty about what?

So here’s where the idea came in. I had each student individually use only one small interval of their choice (instead of four) to estimate the instantaneous rate of change of $y=\sin(921,364x)$ at $x=0$. What was great is that some students picked intervals like [0,0.0001] and others [0,0.00001]. Were they similar? Different? WHOA they were very different. Students got VERY VERY different estimates even though everyone used really small intervals. So what’s going on? When we looked at the average rates of change for various intervals, we saw this:

[table of wildly different average rates of change omitted]

So yeah, if you happened to choose two numbers really close to each other, they might not be close enough! You just don’t know. Even if they’re really close. So doing a series of smaller and smaller intervals indeed gives us more certainty that we have a good estimation.

This was just sort of thrown into my lesson, so I don’t know exactly how much they got out of it. But I hope that next year I either use it as a do now or as a new conceptual skill that I add to my calculus Standards Based Grading skill list, and make it a little more formalized [1].
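A quick way to reproduce the classroom experiment (my own sketch; the function and the point are the ones used above): compute the one-sided average rate of change $(f(h)-f(0))/h$ for $f(x)=\sin(921{,}364\,x)$ over shrinking intervals $[0,h]$.

```python
# The period of f is 2*pi/921364 ~ 6.8e-6, so any interval wide compared
# with that gives an essentially meaningless estimate; the true
# instantaneous rate of change at 0 is 921364.
from math import sin

def f(x):
    return sin(921364 * x)

for h in (1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8, 1e-9):
    print(f"h = {h:.0e}:  average rate of change = {(f(h) - f(0)) / h:14.1f}")
```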
Maybe after doing this next year, have a sheet with a few different functions, some which are wildly erratic and fluctuate a lot and some which are nice — and have students pick out, merely from the graph and the point at which I want to estimate the rate of change, whether they can make do with two points “pretty close together” to estimate the instantaneous rate of change, or whether they truly do need two points “very very close together.” That would be a good check to see if they understood the conceptual underpinnings of what’s going on.

[1] Idea. Have a sheet with two columns. In the left column, the function $y=x^2$. In the right column, $y=\sin(921,364x)$. Have them use the interval $[0,0.001]$ to estimate the instantaneous rate of change at $x=0$. Then say: “You have \$5 to bet on which one is closest to the true instantaneous rate of change. What are you going to bet on, and why?” Have groups whiteboard their ideas/thoughts for 5 minutes and present. Then show the graphs of the functions. Have them talk for 2 minutes to see if the graphs change their thoughts. Finish up student discussion.

## 3 thoughts on “Approximating the Instantaneous Rate of Change in Calculus”

1. My intro about the car trip went like this: You use an awesome camera (like this one http://youtu.be/Y_9vd4HWlVA) in your cop car and use it to video a speeding car. Unfortunately, the monitor on your computer is broken. So, to see the car, you have to tell the computer to print out certain frames. Which frames do you want to print out to figure out how fast it was going?
2. Pingback: A Cool Post from Mr. Shah in NY about Calculus - poliquinmath.net
3. Pingback: What does it mean to be going 58 mph at 2:03pm? « Continuous Everywhere but Differentiable Nowhere
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381256103515625, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/6698/effect-of-expansion-of-space-on-cmb
# Effect of expansion of space on CMB

Is it true that the expansion of spacetime causes the CMB to become microwaves from a shorter wavelength? If so, has the amplitude been increased? Seeing as the amplitude has decreased, why hasn't it increased (/"stretched") in the same way the wavelength has?

- I tried to answer the first part of your question, but I can't do a good job at answering the second. I don't know why you would have expected the amplitude to increase, so I can't give a very satisfactory answer to the question "why not". – Ted Bunn Mar 11 '11 at 16:38
- I would have expected it to increase because it "stretches" similar to the wavelength, but I see now from the answers that the amplitude is to do with the energy (which I knew already but didn't completely make the connection). – Jonathan. Mar 11 '11 at 16:45
- 1 Hi Jonathan, imagine that the Universe is the surface of a ball - a sphere. A photon is moving along the equator and has $N$ maxima of the wave on it. Clearly, $N$ will be conserved during expansion. It follows that the wavelength will grow proportionally to the radius of the Universe, and because the photon's energy is inversely proportional to wavelength, the energy of the photon will go down inversely proportionally. You shouldn't imagine the "waves' amplitudes" to be geometric quantities similar to length because they (e.g. the electric field) don't even have the units of length. – Luboš Motl Mar 11 '11 at 17:49

## 3 Answers

Yes to the first part of your question. This phenomenon is known as cosmological redshift. Also, due to the increase in volume from cosmological expansion, the same number of photons as were present when the CMB was emitted now have to occupy a much larger region. Consequently the average temperature ("amplitude" in your words) of such a gas drops from a high of $T \sim 150,000$ deg. Kelvin (corresponding to $T=E/k_B$ with $E$ being the ionization energy of hydrogen (13.6 eV) and $k_B$ Boltzmann's constant, $k_B = 8.6 \times 10^{-5}$ eV/Kelvin) to the currently observed $T'=2.7$ deg. Kelvin.

- If the wavelength of the radiation was increased then why has the amplitude of the waves decreased? – Jonathan. Mar 11 '11 at 16:31
- I agree with every word of this answer except the all-important first one! Don't you mean "No"? The question asks if the amplitude increases, and the answer is that it decreases. (Also, I wouldn't have guessed that "amplitude" meant "temperature," but maybe you're right that that was the intent.) – Ted Bunn Mar 11 '11 at 16:36
- @Ted good point. That is liable to create a big misunderstanding. – user346 Mar 11 '11 at 16:45
- 1 @Jonathan I'm guessing that your reasoning (regarding the amplitude) is based upon some notion of energy conservation applied to the CMB photons. Keep in mind that this energy is not conserved. It is lost in overcoming the (effective) gravitational potential well the photon experiences on its journey from the time of reionization up till now. – user346 Mar 11 '11 at 16:51
- Excellent point, Deepak. – Ted Bunn Mar 11 '11 at 16:59

Just to be clear: by "amplitude" you mean the amplitude of a classical electromagnetic wave -- that is, the peak value of the electric field -- right? In that case, the answer is that the amplitude goes down.
For definiteness, let's consider a wave packet of electromagnetic radiation with some fairly well-defined wavelength. At some early time, it has a wavelength $\lambda_1$ and energy $U_1$. (I'm not calling it $E$ because I want to reserve that for the electric field.) After the Universe has expanded for a while, it has a longer wavelength $\lambda_2$ and a smaller energy $U_2$. (Fine print: wavelengths and energies are measured by a comoving observer -- that is, one who's at rest in the natural coordinates to use.) In fact, the ratios are both just the factor by which the Universe has expanded: $${\lambda_2\over\lambda_1}={U_1\over U_2}={a_2\over a_1}\equiv 1+z,$$ where $a$ is the "scale factor" of the Universe. $1+z$ is the standard notation for this ratio, where $z$ is the redshift. The physical extent of the wave packet is also stretched by the same factor. So the energy density in the wave packet goes down by a factor $(1+z)^2$. What does that mean about the amplitude of the wave? The energy density in the wave packet is proportional to the electric field amplitude squared. So if the energy density has gone down by $(1+z)^2$, the electric field amplitude must have gone down by $(1+z)$. Specifically, if the Universe doubles in size, the wavelength of any given wave packet doubles, and the amplitude (peak value of ${\bf E}$) is cut in half.

- If E and frequency are halved simultaneously, then the energy of the wave packet should be quartered, right? – Georg Mar 11 '11 at 16:44
- Nope. Energy density in an EM wave goes like $E^2$ (independent of frequency). So total energy goes like $E^2L$, where $L$ is the physical extent of the wave packet. $E\propto (1+z)^{-1}$, $L\propto (1+z)$, so $E^2L\propto (1+z)^{-1}$ as desired. – Ted Bunn Mar 11 '11 at 16:51

It is easier to think in terms of a gas of photons. The photons (in thermal equilibrium) have their wavelengths distributed according to the Bose-Einstein (BE) distribution $$n(p) = \frac{2}{\exp(pc/k_BT) - 1}.$$ From the null geodesics in a Friedmann metric we have that the photon momentum decreases as $a^{-1}$ (the wavelength is stretched with the scale factor $a$), and applying the Vlasov equation in a Friedmann metric we have that the temperature also decreases as $a^{-1}$. Using the BE distribution and the results above, one obtains that the total number of photons ($\propto T^3V$) in a volume $V$ is conserved; therefore, the density of photons decreases as $a^{-3}$. This shows that the energy density decreases as $a^{-4}$ ($a^{-1}$ from the stretching of the wavelength and $a^{-3}$ from the decrease in the number density). Another way to obtain this is to use the continuity equation for a gas of relativistic particles in a Friedmann metric, i.e., $$\dot{\rho} + 3\frac{\dot{a}}{a}(\rho+p) = 0,$$ and since $p=\rho/3$ we obtain $\rho \propto a^{-4}$. Finally, since the energy density is proportional to the electric field squared $E^2$, the electric field decreases as $E \propto a^{-2}$.

-
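A back-of-the-envelope sketch of these scalings in code (my own illustration; the redshift of last scattering $z\approx 1090$ and today's CMB values are assumed inputs, not taken from the answers):

```python
z = 1090.0
stretch = 1.0 + z               # a_now / a_then

T_now = 2.725                   # K, CMB temperature today (assumed value)
print("temperature at emission:", T_now * stretch, "K")      # ~ 3000 K

lam_now = 1.06e-3               # m, peak wavelength of the CMB today (assumed)
print("peak wavelength then  :", lam_now / stretch, "m")     # ~ 1 micron

# Per Ted Bunn's answer above: photon energy and E-field amplitude of a
# wave packet scale as (1+z)^-1, and its energy density as (1+z)^-2.
```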
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9505749344825745, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/tagged/estimation+bayesian
Tagged Questions

- **Under what conditions do Bayesian and frequentist point estimators coincide?** With a flat prior, the ML (frequentist -- maximum likelihood) and the MAP (Bayesian -- maximum a posteriori) estimators coincide. More generally, however, I'm talking about point estimators derived ...
- **Pooling asymmetric confidence intervals for proportions?** I have several measurements of proportions (values in [0, 1]) $\theta_1,...,\theta_n$, each with an (asymmetric) 95% confidence interval. The $\theta$'s are repeated measurements of the same variable ...
- **Bayesian parameter estimation of a Poisson process with change/no-change observations at irregular intervals** Consider a Poisson process with unknown parameter $\lambda$. We perform a sequence of $n$ observations at intervals $\overline{t}=t_1,\,t_2,\,\dots,\,t_n$. Each observation is a binary variable $x_i$ ...
- **Estimating the covariance posterior distribution of a multivariate Gaussian** I need to "learn" the distribution of a bivariate Gaussian with few samples, but a good hypothesis on the prior distribution, so I would like to use the Bayesian approach. I defined my prior: ...
- **What is the name of the estimator that takes the mean of likelihood?** Let $X,Y$ be input and output (observed) continuous variables in $\mathbb{R}$. Let $\{y_1,...,y_n\}$ be the set of $n$ observations. Is there a name for the estimator $\hat x = \int_{x \in X} x \dots$
- **Bayesian estimation of Dirichlet distribution parameters** I want to estimate parameters of Dirichlet mixture models using Gibbs sampling and I have some questions about that: Is a mixture of Dirichlet distributions equivalent to a Dirichlet process? What ...
- **What is the name of (and alternatives to) this Bayesian point-estimate?** Assume that we have a Markovian environment that generates at every time step an event $A$ with probability $p^*$ and an event $B$ otherwise. Now suppose you are a Bayesian agent that wants to learn ...
- **Posterior probabilities of variables that aren't included in likelihood** Disclaimer: I'm not a statistician so I apologize if this is a trivial question or written in a way that convolutes ideas and abuses jargon. It seems like a problem that should be common but I ...
- **Estimating conditional mean** Suppose I have a uniform random variable $X$ taking values $\{1,...,n\}$, and two functions $v(X)$ and $w(X)$. I know that $v(X)$ and $w(X)$ are jointly distributed with correlation $\rho$. (And can ...
- **Posterior distribution for multinomial parameter** (topic moved from maths.stackexchange.com) I'm currently developing an application integrating a probabilistic inference engine for Bayesian networks. The Bayesian network integrates some form of ...
- **Inferring from a combination of uncertain and certain data** I am trying to estimate the surface (isochrone), $z_i(x)$, for which $T(x,z)=0$ from noisy measurements of $T(x,z)$ everywhere and 3 almost noise-free control points. Instead of using the $T(x,z)$ data ...
- **Pros/cons of estimating parameters for missing observations?** Some people are playing a game online. Every time a person plays, a new game board is generated randomly. On generation of a new board, the player can also choose a special weapon. (The choice of ...
- **Is there a difference between the "maximum probability" and the "mode" of a parameter?** I am reading a manuscript that provides the "maximum posterior probability" in a Bayesian context as a statistical summary of a parameter. Is the term "maximum posterior probability" equivalent to ...
- **Combining Deterministic and Random Unbiased Estimators** I am trying to compute an expectation $E[f(X;\theta,n)]$ where $\theta$ and $n$ are known parameters. I have an easy-to-compute deterministic function $\tilde{f}(\theta,n)$ that provides an ...
- **Minimax estimator for the mean of a Poisson distribution** I recently took a course on Bayesian statistics based on The Bayesian Choice by C. Robert (aka Xi'an). I couldn't solve one of the exercises regarding minimax estimators and was hoping that someone ...
- **Bayesian updating using $n$ noisy observations of Brownian motion** I am very new to Bayesian inference and can't figure out what may be an elementary problem. Also, please forgive me if I am screwing up the notation -- this is my first foray into Bayesian ...
- **Estimation and functional space** In the first chapter of the book Algebraic Geometry and Statistical Learning Theory, which talks about the convergence of estimations in different functional spaces, it mentions that the Bayesian ...
- **Bayesian estimators** I have a model $f(x|\theta)$ ($\theta$ is a vector) for which I want to specify a prior $\pi(\theta)$. I only know that $\theta$ is in some interval. There are ways to specify an ignorance prior ...
- **Estimating parameters in a model with a periodic design parameter** We recently studied a model with a likelihood function of the form $$\Pr(d_i|\omega,t_i)=(1-d_i)+(2d_i-1)\cos^2(\omega t_i),$$ where $d_i\in\{0,1\}$, $\omega$ is an estimation parameter, and $t_i$ ...
- **Combining two pieces of evidence expressed as probabilities** I have a hidden binary random variable $Z$ that can have a value of either 1 or 0. There is some true probability $P(Z=1) = z$ that I do not know. I also have two separate pieces of "evidence" that give ...
- **Bayesian analysis of data** I have a big dataset in the form $X_1, X_2, X_3, X_4, Y$. All the $X_i, i \in \{1,...,4\}$ come from different unknown distributions and $Y$ follows a Bernoulli distribution, so it can take only values ...
- **Acceptance rates for Metropolis-Hastings with uniform candidate distribution** When running the Metropolis-Hastings algorithm with uniform candidate distributions, what is the rationale of having acceptance rates around 20%? My thinking is: once the true (or close to true) ...
- **Confusion in MLE and EM** I was trying to read through Maximum Likelihood Estimation (MLE) and the Expectation-Maximization (EM) algorithm. But while reading them, I got two interpretations. I am trying to post my questions, ...
- **Bayesian vs Maximum entropy** Suppose that the quantity which we want to infer is a probability distribution. All we know is that the distribution comes from a set $E$ determined, say, by some of its moments, and we have a prior ...
- **Estimating distribution parameters from few data points** Say I'm doing stats on the height of adults from various countries. I assume the heights of adults from one country are normally distributed, and ignore sex differences (I also ignore the fact that ...
- **Regularization and Mean Estimation** Suppose I have some i.i.d. data $x_1, \ldots, x_n \sim N(\mu, \sigma^2)$, where $\sigma^2$ is fixed and $\mu$ is unknown, and I want to estimate $\mu$. Instead of simply giving the MLE of $\mu = \dots$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.900968074798584, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/62193?sort=oldest
## What do poles of differentials on a curve mean?

Let $C$ be a (smooth proper) curve. I think of an element $f$ of the (algebraic) function field of $C$ as, literally, a map from $C$ to $\mathbb{P}^1$. Poles of $f$ literally mean the fiber over infinity, and $\operatorname{div} f$ makes good sense to me. How should I think of a pole of a differential? I.e., let $K(C)$ be the function field of $C$ and define $\Omega(C)$ to be the set of symbols $df$ for $f$ in $K(C)$, modulo the usual relations. Let $w \in \Omega(C)$. I understand quite well the yoga of defining and manipulating $\operatorname{div} w$; my question is: how, exactly, should I think of poles of $w$?

- 1 $\Omega(C)$ is not just stuff of the form $df$. – David Hansen Apr 19 2011 at 5:31

## 4 Answers

The right way to think about this is to look at the whole set of zeroes and poles of the meromorphic 1-form $\omega$, in other words to consider the divisor $\textrm{div}(\omega)=\sum_{p} \textrm{ord}_p(\omega) \cdot p$. Every divisor of this type is called a canonical divisor. Two canonical divisors are always linearly equivalent and their degree is $2g-2$, where $g$ is the genus of the curve. The linear system of all canonical divisors is denoted by $|K|$, and it is the basic tool for the study of algebraic curves. Now let $D$ be an effective divisor on $C$; then the linear space of meromorphic 1-forms having poles at most in the set $D$ is given by the cohomology group $H^0(K+D)$ or, using Serre duality, by $H^1(-D)$. This is pretty basic stuff and you should find it in any textbook dealing with algebraic curves or Riemann surfaces (see for instance Miranda's book).

-

I think of them pretty simply as differential forms with zeros in the denominator of the "coefficient function" upon choosing a local uniformizing parameter (which is really just the definition).

Edit: The following isn't quite right, as pointed out in the comments. See below for an attempt at fixing it.

_____

However, if you want something more akin to your "map to $\mathbb{P}^1$" description, you might try something like this: to the invertible sheaf $\Omega$ you can associate a projective bundle $\mathbb{P}(\Omega)$ equipped with a map $\pi:\mathbb{P}(\Omega)\to C$ whose fibers are the projectivizations of the fibers of $\Omega$. Then a meromorphic differential form should correspond to a section $s$ to $\pi$, and the poles of the form are the points where $s(x)=\infty$. I can't say that I've ever seen exactly this written down, but it seems quite reasonable...

_____

Perhaps one should consider the projectivization of the bundle $\Omega\oplus\mathcal{O}$ instead. Clearly if I take a regular differential form $\omega$, it gives rise to a section of $\mathbb{P}(\Omega\oplus \mathcal{O})$ by considering the image of $\omega\oplus 1$ in the projectivization. Arguing locally, it seems that if I take a meromorphic differential form of the form $t^{-n}u\,dt$ where $u$ is a local unit, I can associate to it the image of $u\,dt\oplus t^n$ in the projectivization. Now glue over the curve to associate a section of this projective bundle to your chosen meromorphic differential form.
If everything glues without incident, it seems that the resulting section should have the desired property that the poles are the points mapping to $\infty$ in the fiber, just by construction. Hopefully this makes more sense than my first attempt :)

- $\mathbb{P}$ of a line bundle is just isomorphic to $C$? – mdeland Apr 19 2011 at 18:33
- Oops. I guess my comment isn't at all right as stated... I still wonder if there is a $\mathbb{P}^1$ bundle over $C$ with the property I mentioned. It clearly isn't $\mathbb{P}(\Omega)$, which as you point out is just $C$! Let me think... – Ramsey Apr 19 2011 at 18:59
- Perhaps you want the following: consider $\Omega$ as a $\mathbb{G}_m$-torsor (the punctured line bundle) and allow $\mathbb{G}_m$ to act on $\mathbb{P}^1$ by $u \cdot [x : y] = [ux : y]$. Take the associated $\mathbb{P}^1$-bundle. – Ryan Reich Apr 19 2011 at 23:56

In the beginning, it is good to gain intuition by considering the case where $C$ is a (compact) Riemann surface. Then you can think of a differential on $C$ as a meromorphic section of some holomorphic line bundle on $C$. More precisely, if you consider the trivial holomorphic line bundle $\mathcal{O}_C$ (which you can think of as the space $C \times \mathbf{C}$ over $C$), then its holomorphic sections are just the holomorphic functions on (open sets of) $C$, while its meromorphic sections are the meromorphic functions on $C$. In the same way, there is a holomorphic line bundle $\Omega^1_C$ over $C$ (called the canonical bundle). The fiber of $\Omega^1_C$ over $p \in C$ is the cotangent space of $C$ at $p$ (this is a $\mathbf{C}$-line). Then the holomorphic sections of $\Omega^1_C$ are just holomorphic differential forms on $C$ (usually denoted by $\Omega^1(C)$), while the meromorphic sections of $\Omega^1_C$ are the differentials you consider. More generally, given any holomorphic line bundle $\mathcal{L}$ on $C$, you can consider its space of holomorphic/meromorphic sections. For every meromorphic section $s$ of $\mathcal{L}$, you can make sense of $s$ being holomorphic at a given point $p \in C$. Moreover, you can define $\operatorname{ord}_p(s) \in \mathbf{Z}$. Finally you can define $\operatorname{div}(s)$, which is a divisor on $C$ and is well-defined up to principal divisors.

-

This is not unlike Francois' answer. First, let's look at the naive description of the order of a zero/pole of a differential $\omega$ on $C$ at some point $p$: write $\omega = f(z)\;dz$ in terms of some local coordinate $z$ near $p$, and then define $\operatorname{ord}_p(\omega) = \operatorname{ord}_p(f)$. Explicitly, this says two things:

• Every differential is proportional to the differential of a local coordinate, locally;
• The differential of a local coordinate is regular where it is defined.

In order for that to make sense, you have to check that for any other local coordinate $w$, the ratio (derivative) $dw/dz$ is regular at $p$. Of course, some precise algebraic computation is necessary, but intuitively, this is just the statement that both $z$ and $w$ have "slope 1" at $p$, so are equal to order one. The fact that you have to choose a local coordinate is what is troubling you (it also troubles me); it comes about because there is no impartial basis for comparison, like there is with rational functions, which you can just compare to the function 1. The way around this, which also frees you from coordinate choices, is to talk about the entire sheaf of differentials rather than individual differentials.
Let's define $\Omega_C$ to be the sheaf of regular (Kähler) differentials on $C$, as defined in any basic algebraic geometry book. You give me $\omega$, a rational section of $\Omega_C$, or in other words, an element of $\Omega_C(U)$ for some open set $U$, and we want to find its divisor. Here is how we restate the first part of the above computations:

• The fact that $\omega$ can be expressed locally in terms of rational functions means that $\Omega_C$ is a line bundle (which it is; the trivializations are choices of local coordinate).

What about the second part? Let's continue: a section of $\Omega_C(U)$ is the same thing as a map $\phi \colon \mathcal{O}_C|_U \to \Omega_C|_U$, sending the rational function 1 to the differential $\omega$. Suppose for the sake of argument that $\omega$ had only zeros but no poles; then around a point $p \in C \setminus U$, it would look like $z^n\;dz$, choosing a local coordinate, and therefore the image of $\phi$ would look like the ideal $(z^n) \subset \mathcal{O}_{C,p}$. More intrinsically, the cokernel of $\phi$ would have length $n$ at $p$. Thus, the convention that $dz$ is regular means that:

• When $\phi$ is an inclusion of sheaves, the order of vanishing of $\omega$ at $p$ is the length of $\operatorname{coker}(\phi)$ at $p$.

Of course, $\omega$ has poles, since you said $C$ is proper. Thus, $\phi$ does not even extend to an inclusion of sheaves. However, we want to think of a pole of something as being like a zero of the inverse, and we know how to find zeros. Suppose that we extend $\phi$ as much as possible, so that its zeros lie in $U$, form the divisor of zeros: $$D_Z = \operatorname{div}(\omega)_Z = \sum_p \ell(\operatorname{coker}(\phi)|_p)p$$ and replace $\phi$ by its induced map $\phi \colon \mathcal{O}_C(-D_Z)|_{U \setminus D_Z} \to \Omega_C|_{U \setminus D_Z}$. Then this new $\phi$ is an isomorphism, and we can invert it; the poles of $\phi$ are by definition the zeros of $\phi^{-1}$. The divisor of poles $D_P = \operatorname{div}(\omega)_P$, defined as the divisor of zeros of $\phi^{-1}$, is disjoint from $D_Z$, because the new $\phi$ already is an isomorphism on $D_Z$, so that, after twisting by $-D_P$, $\phi^{-1}$ extends to all of $C$ (you should convince yourself, by playing with DVRs, that it really does). Then $$\operatorname{div}(\omega) = \operatorname{div}(\omega)_Z - \operatorname{div}(\omega)_P$$ is the canonically defined divisor of $\omega$. The short definition of this divisor is therefore:

• $\operatorname{div}(\omega)$ is the unique divisor $D$ such that the induced map $\phi \colon \mathcal{O}_C(-D)|_U \to \Omega_C|_U$ extends to an isomorphism of line bundles.

This is what the divisor corresponding to a line bundle usually means. Note that none of this is particular to differential forms, but allows you to define the zeros and poles of any rational section of any line bundle.

-
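A concrete instance of all of this (my own worked example, not part of the thread): on $C=\mathbb{P}^1$ (genus $g=0$), take $\omega = dz$ in the affine coordinate $z$.

```latex
% Near infinity use the coordinate w = 1/z, so z = 1/w and
dz \;=\; d\!\left(\tfrac{1}{w}\right) \;=\; -\,\frac{dw}{w^{2}},
% hence ord_infinity(dz) = -2, while dz has no zeros or poles at finite points:
\operatorname{div}(dz) \;=\; -2\,[\infty], \qquad
\deg \operatorname{div}(dz) \;=\; -2 \;=\; 2g - 2.
```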
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 122, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9523226618766785, "perplexity_flag": "head"}
http://quantummoxie.wordpress.com/2012/05/22/some-perils-regarding-interpretations-of-entropy/
# Quantum Moxie

*thoughts of a selective subjectivist*

## Some perils regarding interpretations of entropy

I have been involved in an interesting discussion on Google+ with a few people regarding the proper way to interpret entropy, $S=-k\sum_{i}p_{i}\log p_{i}$. The Bayesian interpretation takes $S$ to be a state of ignorance such that when we update our probabilities, the entropy subsequently updates. The question is, is the updating of the probabilities themselves subjective or objective?

Here's a very simple example. Consider a chamber divided into two sub-chambers of equal volume. This setup can be observed from two sides, with no communication occurring between the two sides, as shown in the figure.

Suppose that Observer One thinks that Gas A = Gas B. On the other hand, suppose Observer Two thinks that Gas A ≠ Gas B. In short, Observer One believes that Gases A and B are indistinguishable. Observer Two believes the opposite. Both these states of knowledge can be reflected in terms of the probabilities for the associated microstates of the gases and thus in the entropies.

Now let us suppose that the partition is removed and the two gases are allowed to mix. Since Observer One thinks that Gas A = Gas B, he will find that the entropy of mixing will be zero. Since Observer Two thinks that Gas A ≠ Gas B, she will find the entropy of mixing to be non-zero. It is important to note here that both observers have correctly carried out their experiments with the equipment and knowledge that they have at their disposal. That is not under debate. What is under debate is how do we interpret these results?
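For reference, the textbook calculation behind the two observers' answers (standard Gibbs-paradox bookkeeping, not derived in the post): with $N$ molecules of each gas, each expanding from volume $V$ into the combined volume $2V$,

```latex
\Delta S_{\text{mix}}
  \;=\; 2\,N k \ln\!\frac{2V}{V}
  \;=\; 2\,N k \ln 2
  \quad \text{if } A \neq B \ \text{(Observer Two)},
\qquad
\Delta S_{\text{mix}} \;=\; 0
  \quad \text{if } A = B \ \text{(Observer One)}.
```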
Now, this can be interpreted in two ways. The usual Bayesian interpretation (or what passes for it), at least when applied to classical situations, would say that reality should inform our knowledge, i.e. with new information we can update our knowledge (in other words, the equal sign is really directional, in a way). On the other hand, it is possible to interpret this as implying that by changing our knowledge of the system we can change reality! While this may be true for certain quantum systems, it nevertheless implies that, for instance, prior to our discovery of the existence of Antarctica, it didn't exist, which is absurd!

I prefer to interpret entropy as a measure of the number of possible configurations of a system. This removes the ambiguity in the classical case and it still works in the quantum case. In quantum situations we assume there is some subjectivity to the measurements (in fact there is in some classical cases as well). But at the quantum level, we become part of the system. As such, there are degrees of freedom (configurations) that the act of measurement can introduce. These degrees of freedom are minor in classical systems, but not so in quantum systems. Nevertheless, they are always still there.

So, yes, to some extent, we do "make" reality, i.e. it is a "participatory universe", as Wheeler has suggested, just not quite in the way that everyone assumes. Or, another way of looking at it is to say that, if we apply Ockham's razor to all the possible interpretations, this is the simplest and most consistent. Are there still issues with it? Sure. No theory is perfect. But until someone comes up with something more consistent, I'm sticking with this one.

### 2 Responses to "Some perils regarding interpretations of entropy"

1. Mike Says: May 22, 2012 at 4:14 pm
". . . the measurements of mechanical work represent a state of reality independent of the Observer's knowledge of the system." This must be equally true for classical and quantum systems. Only in the latter case, it's more complicated.

• quantummoxie Says: May 22, 2012 at 9:46 pm
True enough!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9156728982925415, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/168559/seminorm-exercise
# Seminorm exercise

Can you tell me if my answer is correct? It's another exercise suggested in my lecture notes.

Exercise: Consider $C[-1,1]$ with the sup norm $\|\cdot\|_\infty$. Let $$W = \{f \in C[-1,1] \mid \int_0^1 f \, d\mu = \int_{-1}^0 f \, d\mu = 0 \}$$ Show that $W$ is a closed subspace. Let $f(x) = x$ and calculate $\|f\|_{V/W} = \inf_{w \in W} \|f + w \|_\infty$ and show that the infimum is not achieved.

My answer: To show that $W$ is closed we show that if $f$ is a limit point of $W$ then $f \in W$. So let $f$ be a limit point. Then there is a sequence $w_n \in W$ converging to $f$, i.e. for $\varepsilon > 0$ there is $w_n$ such that $\|f - w_n\|_\infty < \varepsilon$. Hence for $\varepsilon > 0$, $\int_0^1 f \, d\mu = \int_0^1 (f + w_n - w_n ) \, d\mu \leq \int_0^1 |f-w_n| + \int_0^1 w_n = \int_0^1 |f-w_n| \leq \|f-w_n\|_\infty \leq \varepsilon$. Let $\varepsilon \to 0$. Same argument for $[-1,0]$.

Now we compute the norm: $$\|x\|_{V/W} = \inf_{w \in W} \|x + w\|_\infty = \inf_{w \in W} \sup_{x \in [-1,1]} |x + w(x)|$$ $\|x + w\|_\infty$ is smallest for $w(x) = -x$. But $-x \notin W$. I'm not so sure about the second part. Is this what is meant by showing that the infimum is not achieved? "$\|x + w\|$ is smallest for $w(x) = -x$" seems a bit... wobbly. Thanks for your help.

- Make up your post it is full of mistakes and typos, which I think your see – Norbert Jul 9 '12 at 9:08
- @Norbert The only mistake I can spot on this page is in your comment. I have taken a screen shot in case I need evidence in the future. – Matt N. Jul 9 '12 at 9:18
- When you are proving $f\in W$ you are talking about $w_n$. What is $w$? Why don't you write limits of integration? And why $\Vert f+w\Vert_\infty\leq\varepsilon$. You only assumed that $\Vert f-w_n\Vert_\infty<\varepsilon$. – Norbert Jul 9 '12 at 9:24
- This is the only way not to make mistakes, when you have no intuition in new subject. Well my comments are boring but I think they are necessary when you study FA for the first time. – Norbert Jul 9 '12 at 9:32
- Not at all. By the way you can prove that $W$ is closed by the following way. $W$ is an intersection of kernels of continuous functionals $I_1(f)=\int_0^1 f \, d\mu$, $I_2(f)=\int_{-1}^0 f \, d\mu$. It is known that kernels of continuous functionals are closed, and then so is $W$ as an intersection of closed sets. – Norbert Jul 9 '12 at 9:39

## 2 Answers

Your idea for the first part is correct but the inequalities you write are odd. Try $$\left|\,\int_0^1f\,\right|=\left|\int_0^1(f-w_n)\right|\leqslant\int_0^1|f-w_n|\leqslant\|f-w_n\|_\infty\to0,$$ and similarly for the interval $[-1,0]$.

Regarding the second part, one would like to use the function $w_0:x\mapsto x-\frac12\mathrm{sgn}(x)$ to approximate $u:x\mapsto x$ but, despite the fact that $\int\limits_{-1}^0w_0=\int\limits_0^1w_0=0$, one cannot because $w_0$ is not continuous at zero. Hence $w_0$ is not in $W$ but the task is to show that $w_0$ indeed provides the infimum $\|u\|_{V/W}$. Note that $u(x)-w_0(x)=-\frac12\mathrm{sgn}(x)$ for every $x$ hence $\|u-w_0\|_\infty=\frac12$.

Call $W_0\supset W$ the set of integrable functions $w$ such that $\int\limits_{-1}^0w=\int\limits_0^1w=0$. For every $w$ in $W_0$, $\int\limits_0^1(u-w)=\frac12$ hence there exists some $x\geqslant0$ such that $u(x)-w(x)\geqslant\frac12$. This proves that $\|u-w\|_\infty\geqslant\frac12$ for every $w$ in $W_0$, and in particular for every $w$ in $W$, hence $\|u\|_{V/W}\geqslant\frac12$.
Furthermore, for any $w$ in $W$, the condition $\|u-w\|_\infty=\frac12$ implies that $u(x)-w(x)\leqslant\frac12$ for every $x$ in $[0,1]$. Since $\int\limits_0^1(u-w)=\frac12$ and $u-w$ is continuous, $u(x)-w(x)=\frac12$ for every $x$ in $[0,1]$. Likewise, $u(x)-w(x)=-\frac12$ for every $x$ in $[-1,0]$. These two conditions are incompatible at $x=0$ hence there is no function $w$ in $W$ such that $\|u-w\|_\infty=\frac12$.

Finally, one can modify $w_0$ to get some function $w_\varepsilon$ in $W$ such that $\|u-w_\varepsilon\|_\infty\leqslant\|u-w_0\|_\infty+\varepsilon$, hence $\|u\|_{V/W}=\frac12$. For example, one can consider the unique $w_\varepsilon$ in $W$ which is affine on $x\leqslant-\varepsilon$ and on $x\geqslant\varepsilon$ and such that $w_\varepsilon(x)=-x/(2\varepsilon)$ on $|x|\leqslant\varepsilon$.

Edit: For the last step, one could try to use the approximation of $w_0$ by its Fourier series, that is, to consider $w_n(x)=-\sum\limits_{k=1}^n\frac{\sin(2k\pi x)}{\pi k}$. Unfortunately, due to the Gibbs phenomenon, this choice leads to $\|w_n-u\|_\infty$ converging to $\frac12+a$ where $a\approx0.089490$, instead of the desired limit $\frac12$.

- Thank you for this answer, did. I guess the main problem is to see that $\|u\|_{V/W} = \frac12$. How did you see that? I don't know how to see from $$\inf_{w\in W} \sup_{x \in [-1,1]} |x - w(x)|$$ that $w(x)$ should be $x - \frac{1}{2} \operatorname{sgn}{(x)}$. – Matt N. Jul 9 '12 at 11:03
- Here is how. Since $u$ is discontinuous at $0$, I first tried to determine the best possible $w$ on $(-1,0)$ with integral $0$ and the best possible $w$ on $(0,1)$ with integral $0$. For example, if $w\geqslant u-c$ on $(0,1)$ with $c\lt\frac12$, there is no chance the integral of $w$ on $(0,1)$ can be $0$. Once I got those two functions $w$, I saw they were not making a continuous function on $(-1,1)$... et voilà. – Did Jul 9 '12 at 14:35
- Merci! I assume you mean $w_0$ at the beginning of your second sentence. Before I got your comment I'd asked Jonas in chat how to see $\|u\|_{V/W}$ and he said that it's "obvious" if you know the Dirichlet conditions. Which I did not know, of course. – Matt N. Jul 10 '12 at 5:32

Here is an explicit sequence of functions that approaches the infimum: Let $\phi_{-}(f) = \int_{-1}^0 f$, $\phi_{+}(f) = \int_0^{1} f$. (Both $\phi_{-}$ and $\phi_{+}$ are continuous, hence $W = \phi_{-}^{-1}\{0\} \cap \phi_{+}^{-1} \{0\}$ is closed.) Now consider the sequence of functions $$w_n(x) = \left\{ \begin{array}{ll} x-\frac{1}{2}-\alpha_n & \mbox{if } x \in [\frac{1}{n},1] \\ (\frac{1}{n}-\frac{1}{2}-\alpha_n)nx & \mbox{if } x \in [0,\frac{1}{n}] \\ -w_n(-x) & \mbox{if } x \in [-1,0) \end{array} \right.$$ A rather tedious calculation shows that to have $\phi_{+}(w_n)=0$ (and hence $\phi_{-}(w_n)=0$, by oddness), I need to pick $\alpha_n = \frac{1}{4 n -2}$. Furthermore, it is clear that $\|f-w_n\|_{\infty} = \frac{1}{2}+\alpha_n$, from which it follows that $\inf_{w \in W} \|f-w\|_{\infty} = \frac{1}{2}$.

- Nice, thank you copper.hat. I'll read it in more detail later, I'm working on something else right now. – Matt N. Jul 19 '12 at 9:23
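As a quick numerical sanity check (my addition, not part of either answer), the following sketch evaluates copper.hat's sequence $w_n$ on a fine grid and confirms that the two defining integrals vanish and that $\|f-w_n\|_\infty = \frac12+\alpha_n \to \frac12$:

```python
import numpy as np

def w_n(x, n):
    """copper.hat's approximants, extended to [-1, 0] by oddness."""
    a = 1.0 / (4 * n - 2)
    x = np.asarray(x, dtype=float)
    core = np.where(np.abs(x) >= 1.0 / n,
                    np.abs(x) - 0.5 - a,
                    (1.0 / n - 0.5 - a) * n * np.abs(x))
    return np.sign(x) * core

xs = np.linspace(-1.0, 1.0, 400001)
dx = xs[1] - xs[0]
for n in (2, 10, 100):
    w = w_n(xs, n)
    sup = np.max(np.abs(xs - w))          # should be 1/2 + alpha_n
    int_pos = w[xs > 0].sum() * dx        # crude Riemann sum, should be ~0
    int_neg = w[xs < 0].sum() * dx        # crude Riemann sum, should be ~0
    print(n, round(sup, 6), round(int_pos, 8), round(int_neg, 8))
```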
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 125, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9620352983474731, "perplexity_flag": "head"}
http://mathoverflow.net/questions/83192/m-bases-for-ck-spaces-k-scattered
M-bases for $C(K)$-spaces, $K$ scattered

Recall that a biorthogonal system $\{(x_i, x^*_i)\colon i\in I\}$ in a Banach space $E$ is an M-basis if $\{x_i\}_{i\in I}$ is linearly dense in $E$ and $\{x_i^*\}_{i\in I}$ separates points. Let me restrict my attention to $C(K)$-spaces only, where $K$ is a compact, scattered space.

(Naive) Question 1: Does every such space admit an M-basis? (Note that W.B. Johnson observed that $\ell^\infty=C(\beta\omega)$ has no M-basis, but $\beta\omega$ is not scattered.) I believe it is not true, but I cannot produce a counterexample. Of course, when $\alpha$ is an ordinal, then $C([0,\alpha])$ admits a transfinite Schauder basis, from which one may extract an M-basis.

In fact, I am interested in special kinds of M-bases:

Question 2. Suppose that the space $C(K)$ admits an M-basis. Can we extract an M-basis which looks like the canonical Schauder basis in $C([0,\sigma])$ (that is, $\{\mathbf{1}_{[0,\alpha]}\colon \alpha\leq \sigma\}$)? More precisely, can we construct a new M-basis $\{(f_i, \mu_i)\colon i\in I\}$ indexed by some linearly ordered set $(I, <)$ with the property that if $i\leq j$ then $f_j(x) = f_i(x)$ for $x\in \mathrm{supp}(f_i)$?

Any references to papers studying the condition introduced above will be appreciated as well.

1 Answer

I suggest that you read Zizler's article "Nonseparable Banach spaces" in volume 2 of the Handbook of the Geometry of Banach Spaces. Therein he describes the space he calls $JL_0$, constructed by Lindenstrauss and me in "Some remarks on weakly compactly generated spaces," Israel J. Math. 17 (1974), 219-230. $JL_0$ as well as its nonseparable subspaces do not have M-bases, but $JL_0$ is a $C(K)$ space for some scattered $K$; in fact, $JL_0$ is a twisted sum of $c_0(\Gamma)$ with $c_0$, with $\Gamma$ uncountable.

If I understand Question 2, $c(\aleph)$ (that is, the closed span in $\ell_\infty(\aleph)$ of $c_0(\aleph)$ and the constant functions) seems to be a counterexample as long as $\aleph$ is large enough so that every linear order of $\aleph$ has a well ordered (or reverse well ordered) subset of order type $\omega_2$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9437917470932007, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/177098-reflexive-symmetric-antisymmetric-transitive.html
Thread:

1. reflexive, symmetric, antisymmetric, transitive?

For each of these binary relations, determine whether they are reflexive, symmetric, antisymmetric, transitive. Give reasons for your answers and state whether or not they form order relations or equivalence relations.

On the set {audi, ford, bmw, mercedes}, the relation {(audi, audi), (audi, bmw), (bmw, bmw), (ford, ford), (mercedes, mercedes), (audi, mercedes), (audi, ford), (bmw, ford), (mercedes, ford)}.

Let F be the set of all possible filenames consisting of character strings of at least one character. The relation R contains all pairs of names (name1, name2) where the first eight characters of name1 are the same as the first eight characters of name2, or if name1 and name2 have fewer than eight characters and are exactly the same.

2. What have you tried so far?

3. I don't really know where to start.

4. Originally Posted by jander1: I don't really know where to start.
Start with the reflexive. A binary relation $R$ on a set $S$ is said to be reflexive iff $aRa$ for all $a\in S$. Are those relations reflexive?

6. Suppose the parent set is $\{a_1,a_2,a_3,a_4\}$. A relation defined on this set is reflexive if the relation-set contains the element $(a_i,a_i)$ for all $i=1,2,3,4$. Is that the case here?

A relation defined on this set is symmetric if for any $(a_i,a_j)$ belonging to the relation-set, the element $(a_j,a_i)$ is also present in the relation-set. Is that the case here for all $i,j=1,2,3,4$?

A relation defined on this set is transitive if for two elements $(a_i,a_j)$ and $(a_j,a_k)$ belonging to the relation-set, the element $(a_i,a_k)$ is also present in the relation-set. Is that the case here for all $i,j,k=1,2,3,4$?

Proceed step by step; grasp the concept and tell us what you get.
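Not part of the thread, but the four properties for the first relation can be checked mechanically. Here is a small sketch (the expected output is my own claim, worth re-deriving by hand as the exercise asks):

```python
def is_reflexive(rel, s):
    return all((a, a) in rel for a in s)

def is_symmetric(rel):
    return all((b, a) in rel for (a, b) in rel)

def is_antisymmetric(rel):
    return all(a == b for (a, b) in rel if (b, a) in rel)

def is_transitive(rel):
    return all((a, d) in rel
               for (a, b) in rel for (c, d) in rel if b == c)

cars = {"audi", "ford", "bmw", "mercedes"}
R = {("audi", "audi"), ("audi", "bmw"), ("bmw", "bmw"), ("ford", "ford"),
     ("mercedes", "mercedes"), ("audi", "mercedes"), ("audi", "ford"),
     ("bmw", "ford"), ("mercedes", "ford")}

print(is_reflexive(R, cars), is_symmetric(R),
      is_antisymmetric(R), is_transitive(R))
# expected: True False True True -> a (partial) order relation
```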
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9236549735069275, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/127160/independence-of-random-variables-kernel-ica
# Independence of Random Variables (kernel ICA)

In the paper Bach, F. R., & Jordan, M. I. (2002). Kernel Independent Component Analysis. Journal of Machine Learning Research, 3(1), 1-48. doi:10.1162/153244303768966085 I stumbled upon the following claim involving a correlation measure the authors define, the $\mathcal F$-correlation of two univariate random variables $x_1,x_2$ relative to a vector space $\mathcal F$ of functions from $\mathbb R$ to $\mathbb R$, $$\rho_{\mathcal F}= \sup_{f_1,f_2\in\mathcal F} \mathrm{corr}\left(f_1(x_1),f_2(x_2)\right)= \sup_{f_1,f_2\in\mathcal F} \frac{ \mathrm{cov}\left(f_1(x_1),f_2(x_2)\right) }{ \mathrm{var}\left(f_1(x_1)\right)^{1/2} \mathrm{var}\left(f_2(x_2)\right)^{1/2} }.$$ The authors state that if $x_1,x_2$ are independent, then $\rho_\mathcal{F}(x_1,x_2)=0$, but they also claim that the converse ($\rho_{\mathcal F}=0~\implies$ $x_1,x_2$ are independent) also holds when $\mathcal F$ is large enough.

My question: As an example, they say that it is well known that if $\mathcal F$ contains the Fourier basis (i.e. functions $f_\omega(x) = \exp(i\omega x)$ with $\omega \in \mathbb R$) then $\rho_{\mathcal F}=0~\implies$ $x_1\bot\!\!\!\bot x_2$. My problem is that I do not see how this is obviously true, and I also failed at proving it. Unfortunately, there is no reference or proof for that claim in the paper. When I tried to prove it myself, I could not find a good starting point. First, I thought that the proof could be done via properties of the characteristic function, but I did not get far with that. I am explicitly interested in the claim for the Fourier basis and not so much in the more general claim of Bach and Jordan. If anyone could show me how to prove it (or point me to a reference) I would be grateful.

## 2 Answers

You said it yourself: if $\mathcal F$ contains every complex exponential function and if $X$ and $Y$ are such that $\rho_\mathcal F(X,Y)=0$, then $\mathrm E(\mathrm e^{\mathrm i(xX+yY)})=\mathrm E(\mathrm e^{\mathrm ixX})\mathrm E(\mathrm e^{\mathrm iyY})$ for every $(x,y)$ in $\mathbb R^2$. This means the Fourier transform of the distribution $\mathrm P_{(X,Y)}$ of $(X,Y)$ coincides with the Fourier transform of the product distribution $\mathrm P_X\otimes\mathrm P_Y$, hence $\mathrm P_{(X,Y)}=\mathrm P_X\otimes\mathrm P_Y$, which means exactly that $X$ and $Y$ are independent.

- Now I see it (see comment to @bgins). Next time I try to post a question for which your answer does not start with "you said it yourself". – fabee Apr 2 '12 at 10:00
- My first sentence was not meant as a criticism, more as a praise for explaining what you know, what you tried, why it failed (something which is all too rare on the site, although it is explicitly encouraged). – Did Apr 2 '12 at 11:11
- Hmm, I guess my comment was misleading. I didn't take it as criticism. I just thought it was funny that I almost got it (like last time in "How does a function acting on a random variable change the probability density function of that random variable?"), but needed your answer to see the last bit of it, and you even started it with the same phrase. So my comment was meant as "next time I try not to ask something that I almost solved myself" :). In any case: thanks. – fabee Apr 2 '12 at 11:26

Here's the background.
We know that the Pearson correlation coefficient, the quantity we are taking the max or supremum of, is defined iff the transformed random variables (continuing with the authors' use of lowercase) $y_i=f_i(x_i)$ both have a well-defined, finite and nonzero variance, and that when it is defined, it lies in $[-1,1]$ and is zero for independent $y_i$, since the numerator is $$E[(y_1-E[y_1])(y_2-E[y_2])]= E[y_1y_2]-E[y_1]E[y_2].$$

We also know that the converse is not in general true because of limitations of the measure, namely, that it measures linear dependence (a classic example being $x_2=x_1^2$) and positive correlation (the latter limitation is fully circumvented by this definition because $f_1\in\mathcal F\iff-f_1\in\mathcal F$). But this is where our transformation space $\mathcal F$ comes in handy. For example, the dependence of $x_2=x_1^2$ will always be detected if $\mathcal F$ contains $f_1:x\mapsto x$ and $f_2:x\mapsto x^2$. Well, the Fourier basis above is dense in $L^1(\mathbb{R})$, so... do you see where this is going? See @Didier's post.

- Thanks @bgins. While you wrote your answer, I guess I figured it out myself from Didier's answer. Since $\mathcal F$ is a space there must be a positive correlation for each negative one (as you wrote). So $\sup_{f_1,f_2\in \mathcal F}\rho_{\mathcal F}=0$ really means that it is zero for all $f_1,f_2\in\mathcal F$. Then I can use $\mathrm{cov}(f_1(X),f_2(Y))=0$ to get $E[e^{i(xX+yY)}] = E[e^{ixX}] E[e^{iyY}]$ as in Didier's answer. However, I almost forgot that I need that the Fourier basis is dense in $L^1(\mathbb R)$. Thanks for that. If I could accept two answers, I'd accept yours, too. – fabee Apr 2 '12 at 10:09
- Yeah, they go together pretty nicely, don't they! No worries :-) – bgins Apr 2 '12 at 10:25
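Not part of the thread, but the $\mathcal F$-correlation idea is easy to probe empirically with a finite set of Fourier features. A minimal sketch (my addition; the frequency range, sample sizes, and the "clearly larger" claim for the dependent case are assumptions about this toy setup, not statements from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y_indep = rng.normal(size=n)   # independent of x
y_dep = x ** 2                 # nonlinearly dependent; corr(x, y_dep) ~ 0

def sup_fourier_corr(x, y, n_freq=200, scale=3.0):
    """Crude estimate of the F-correlation over a finite Fourier family:
    the max absolute correlation between cos/sin features of x and of y."""
    omegas = rng.uniform(-scale, scale, size=n_freq)
    feats = lambda v: np.concatenate(
        [np.cos(np.outer(omegas, v)), np.sin(np.outer(omegas, v))])
    fx, fy = feats(x), feats(y)
    c = np.corrcoef(np.vstack([fx, fy]))[:2 * n_freq, 2 * n_freq:]
    return np.nanmax(np.abs(c))   # nanmax guards near-constant features

print(sup_fourier_corr(x, y_indep))  # small, on the order of sampling noise
print(sup_fourier_corr(x, y_dep))    # noticeably bounded away from 0
```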
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9558583498001099, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2012/08/30/uses-of-the-jordan-chevalley-decomposition/
# The Unapologetic Mathematician

## Uses of the Jordan-Chevalley Decomposition

Now that we've given the proof, we want to mention a few uses of the Jordan-Chevalley decomposition.

First, we let $A$ be any finite-dimensional $\mathbb{F}$-algebra — associative, Lie, whatever — and remember that $\mathrm{End}_\mathbb{F}(A)$ contains the Lie algebra of derivations $\mathrm{Der}(A)$. I say that if $\delta\in\mathrm{Der}(A)$ then so are its semisimple part $\sigma$ and its nilpotent part $\nu$; it's enough to show that $\sigma$ is. Just like we decomposed $V$ in the proof of the Jordan-Chevalley decomposition, we can break $A$ down into the eigenspaces of $\delta$ — or, equivalently, of $\sigma$. But this time we will index them by the eigenvalue: $A_a$ consists of those $x\in A$ such that $\left[\delta-aI\right]^k(x)=0$ for sufficiently large $k$. Now we have the identity:

$\displaystyle\left[\delta-(a+b)I\right]^n(xy)=\sum\limits_{i=0}^n\binom{n}{i}\left[\delta-aI\right]^{n-i}(x)\left[\delta-bI\right]^i(y)$

which is easily verified. If a sufficiently large power of $\delta-aI$ applied to $x$ and a sufficiently large power of $\delta-bI$ applied to $y$ are both zero, then for sufficiently large $n$ one or the other factor in each term will be zero, and so the entire sum is zero. Thus we verify that $A_aA_b\subseteq A_{a+b}$.

If we take $x\in A_a$ and $y\in A_b$ then $xy\in A_{a+b}$, and thus $\sigma(xy)=(a+b)xy$. On the other hand,

$\displaystyle\begin{aligned}\sigma(x)y+x\sigma(y)&=axy+bxy\\&=(a+b)xy\end{aligned}$

And thus $\sigma$ satisfies the derivation property

$\displaystyle\sigma(xy)=\sigma(x)y+x\sigma(y)$

so $\sigma$ and $\nu$ are both in $\mathrm{Der}(A)$.

For the other side we note that, just as the adjoint of a nilpotent endomorphism is nilpotent, the adjoint of a semisimple endomorphism is semisimple. Indeed, if $\{v_i\}_{i=1}^n$ is a basis of $V$ such that the matrix of $x$ is diagonal with eigenvalues $\{a_i\}$, then we let $e_{ij}$ be the standard basis element of $\mathfrak{gl}(n,\mathbb{F})$, which is isomorphic to $\mathfrak{gl}(V)$ using the basis $\{v_i\}$. It's a straightforward calculation to verify that

$\displaystyle\left[\mathrm{ad}(x)\right](e_{ij})=(a_i-a_j)e_{ij}$

and thus $\mathrm{ad}(x)$ is diagonal with respect to this basis.

So now if $x=x_s+x_n$ is the Jordan-Chevalley decomposition of $x$, then $\mathrm{ad}(x_s)$ is semisimple and $\mathrm{ad}(x_n)$ is nilpotent. They commute, since

$\displaystyle\begin{aligned}\left[\mathrm{ad}(x_s),\mathrm{ad}(x_n)\right]&=\mathrm{ad}\left([x_s,x_n]\right)\\&=\mathrm{ad}(0)=0\end{aligned}$

Since $\mathrm{ad}(x)=\mathrm{ad}(x_s)+\mathrm{ad}(x_n)$ is the decomposition of $\mathrm{ad}(x)$ into a semisimple and a nilpotent part which commute with each other, it is the Jordan-Chevalley decomposition of $\mathrm{ad}(x)$.

Posted by John Armstrong | Algebra, Lie Algebras, Linear Algebra

## 2 Comments »

1. Could you elaborate a bit (or just give me a hint) on the first "identity: … which is easily verified"? I'm not seeing it… Thanks.
Comment | August 30, 2012 | Reply

2. Run an induction on $n$; it works out sort of like the binomial theorem does.
Comment | August 30, 2012 | Reply
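To see the adjoint statements in coordinates, here is a small numeric sketch (my addition, not from the post; the example matrix is hand-picked so that its decomposition $x = x_s + x_n$ is known in advance):

```python
import numpy as np

# A matrix with known Jordan-Chevalley decomposition x = x_s + x_n:
# x_s diagonal, x_n strictly upper triangular, and [x_s, x_n] = 0.
x_s = np.diag([1.0, 1.0, 2.0])
x_n = np.zeros((3, 3)); x_n[0, 1] = 1.0
x = x_s + x_n

def ad(m):
    """Matrix of ad(m): y -> m y - y m, acting on row-major vec'd matrices."""
    i = np.eye(m.shape[0])
    return np.kron(m, i) - np.kron(i, m.T)

# The parts commute, so ad(x) = ad(x_s) + ad(x_n) is itself a
# Jordan-Chevalley decomposition.
assert np.allclose(x_s @ x_n, x_n @ x_s)
assert np.allclose(ad(x), ad(x_s) + ad(x_n))

# ad(x_s) is semisimple: on the basis e_ij its eigenvalues are a_i - a_j,
# and here it is literally a diagonal matrix.
assert np.allclose(ad(x_s), np.diag(np.diag(ad(x_s))))

# ad(x_n) is nilpotent: some power vanishes.
assert np.allclose(np.linalg.matrix_power(ad(x_n), 5), 0)
print("ad(x_s) diagonal, ad(x_n) nilpotent: checks pass")
```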
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 52, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9069902300834656, "perplexity_flag": "head"}
http://en.m.wikipedia.org/wiki/Multivariate_mutual_information
# Multivariate mutual information

In information theory there have been various attempts over the years to extend the definition of mutual information to more than two random variables. These attempts have met with a great deal of confusion and a realization that interactions among many random variables are poorly understood.

## Definition

The conditional mutual information can be used to inductively define a multivariate mutual information (MMI) in a set- or measure-theoretic sense in the context of information diagrams. In this sense we define the multivariate mutual information as follows: $I(X_1;\ldots;X_{n+1}) = I(X_1;\ldots;X_n) - I(X_1;\ldots;X_n|X_{n+1}),$ where $I(X_1;\ldots;X_n|X_{n+1}) = \mathbb E_{X_{n+1}}\big(I(X_1;\ldots;X_n)|X_{n+1}\big).$ This definition is identical to that of interaction information except for a change in sign in the case of an odd number of random variables.

### Properties

Multivariate information and conditional multivariate information can be decomposed into a sum of entropies, where $H(T)$ denotes the joint entropy of the subset $T$ of variables: $I(X_1;\ldots;X_n) = -\sum_{ T \subseteq \{1,\ldots,n\} }(-1)^{|T|}H(T)$ $I(X_1;\ldots;X_n|Y) = -\sum_{T \subseteq \{1,\ldots,n\} } (-1)^{|T|} H(T|Y)$

### Example of positive multivariate mutual information

Positive MMI is typical of common-cause structures. For example, clouds cause rain and also block the sun; therefore, the correlation between rain and darkness is partly accounted for by the presence of clouds, $I(rain;dark|cloud) \leq I(rain;dark)$. The result is positive MMI $I(rain;dark;cloud)$.

### Example of negative multivariate mutual information

The case of negative MMI is infamously non-intuitive. A prototypical example of negative $I(X;Y;Z)$ has $X$ as the output of an XOR gate to which $Y$ and $Z$ are the independent random inputs. In this case $I(Y;Z)$ will be zero, but $I(Y;Z|X)$ will be positive (1 bit) since once output $X$ is known, the value on input $Y$ completely determines the value on input $Z$. Since $I(Y;Z|X)>I(Y;Z)$, the result is negative MMI $I(X;Y;Z)$.

It may seem that this example relies on a peculiar ordering of $X,Y,Z$, but the symmetry of the definition for $I(X;Y;Z)$ indicates that the same result holds regardless of which variable we consider as the interloper or conditioning variable. For example, input $Y$ and output $X$ are also independent until input $Z$ is fixed, at which time they are totally dependent.

This situation is an instance where fixing the common effect $X$ of causes $Y$ and $Z$ induces a dependency among the causes that did not formerly exist. This behavior is colloquially referred to as explaining away and is thoroughly discussed in the Bayesian network literature (e.g., Pearl 1988). Pearl's example is auto diagnostics: a car's engine can fail to start $(X)$ due either to a dead battery $(Y)$ or due to a blocked fuel pump $(Z)$. Ordinarily, we assume that battery death and fuel pump blockage are independent events, because of the essential modularity of such automotive systems. Thus, in the absence of other information, knowing whether or not the battery is dead gives us no information about whether or not the fuel pump is blocked.
However, if we happen to know that the car fails to start (i.e., we fix the common effect $X$), this information induces a dependency between the two causes battery death and fuel blockage. Thus, knowing that the car fails to start, if an inspection shows the battery to be in good health, we conclude the fuel pump is blocked. Battery death and fuel blockage are thus dependent, conditional on their common effect car starting.

The obvious directionality in the common-effect graph belies a deep informational symmetry: if conditioning on a common effect increases the dependency between its two parent causes, then conditioning on one of the causes must create the same increase in dependency between the second cause and the common effect. In Pearl's automotive example, if conditioning on car starts induces $I(X;Y;Z)$ bits of dependency between the two causes battery dead and fuel blocked, then conditioning on fuel blocked must induce $I(X;Y;Z)$ bits of dependency between battery dead and car starts. This may seem odd because battery dead and car starts are governed by the implication battery dead $\rightarrow$ car doesn't start. However, these variables are still not totally correlated because the converse is not true. Conditioning on fuel blocked removes the major alternate cause of failure to start, strengthens the converse relation, and therefore strengthens the association between battery dead and car starts.

## Bounds

The bounds for the 3-variable case are $-\min\{ I(X;Y|Z),\, I(Y;Z|X),\, I(X;Z|Y) \} \leq I(X;Y;Z) \leq \min\{ I(X;Y),\, I(Y;Z),\, I(X;Z) \}$

## Difficulties

A complication is that this multivariate mutual information (as well as the interaction information) can be positive, negative, or zero, which makes this quantity difficult to interpret intuitively. In fact, for $n$ random variables, there are $2^n-1$ degrees of freedom for how they might be correlated in an information-theoretic sense, corresponding to each non-empty subset of these variables. These degrees of freedom are bounded by the various inequalities in information theory.
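To make the XOR example concrete, here is a short sketch (my addition) that evaluates the alternating-sum formula from the Properties section on the exact joint distribution of $(X, Y, Z)$:

```python
import itertools, math

def H(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idx):
    """Marginal pmf over the coordinates listed in idx."""
    out = {}
    for k, p in joint.items():
        key = tuple(k[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

# X = Y xor Z with Y, Z independent fair bits; keys are (x, y, z)
joint = {}
for y, z in itertools.product((0, 1), repeat=2):
    joint[(y ^ z, y, z)] = 0.25

def mmi(joint):
    """I(X1;X2;X3) = -sum over nonempty T of (-1)^|T| H(T)."""
    total = 0.0
    for r in (1, 2, 3):
        for idx in itertools.combinations(range(3), r):
            total -= (-1) ** r * H(marginal(joint, idx))
    return total

print(mmi(joint))  # -1.0: one bit of negative multivariate mutual information
```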
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 34, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9229332208633423, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/114446/nonzero-convex-combinations-of-convex-hull-vertices-to-yield-an-inner-point
Nonzero convex combinations of convex hull vertices to yield an inner point

Two questions:

1) (ALREADY ANSWERED) This is likely to be a very basic question for you folks. Carathéodory's theorem gives us an upper bound for the minimum number of convex hull vertices that can be used in a nonzero convex combination to yield an inner point of the convex hull ($d+1$ in $\mathbb{R}^d$). Is there a result which gives a lower bound on the maximum number of convex hull vertices that can be used in a nonzero convex combination to yield an inner point of the convex hull? (By an inner point I mean one that belongs to the convex hull but does not lie on its boundary.) I am primarily interested in whether any interior point of a convex hull can always be expressed as a nonzero combination of all convex hull vertices. ANSWER: Any interior point of a convex hull can be expressed as a nontrivial convex combination of all hull vertices.

2) Would I be correct in saying that the convex hull of any set of points in a simplex is a Choquet simplex, which implies that in this case not only does such a nontrivial convex combination of convex hull vertices exist, but that the convex combination is unique?

2 Answers

This is indeed easy. Let $p$ be a point that you want to represent, $m$ the barycenter of all vertices and $\varepsilon>0$ so small that the point $q=(1+\varepsilon)p-\varepsilon m=p+\varepsilon(p-m)$ is still in the convex hull. Represent $q$ as a convex combination of some vertices, add $\varepsilon m$ with $m$ represented as the arithmetic mean of all vertices, and finally divide all the coefficients by $1+\varepsilon$.

- That settles it! Any interior point of a convex hull can be expressed as a nontrivial convex combination of the hull vertices. – 4fj Nov 25 at 21:07
- Another question: would I be correct in saying that the convex hull of any set of points in a simplex is a Choquet simplex, which implies that in this case not only does such a nontrivial convex combination of convex hull vertices exist, but that the convex combination is unique? – 4fj Nov 25 at 22:25
- Is this still about finite dimensions? If so, a Choquet simplex is just an ordinary simplex, otherwise you need to consider integrals rather than linear combinations. – Sergei Ivanov Nov 25 at 22:30
- Yes, finite dimensions. What I'm trying to ask is: is the convex hull of a set of points in a finite dimensional probability simplex a Choquet simplex? Please excuse my lack of knowledge in these areas... it's funny where research can land you. – 4fj Nov 25 at 22:44
- Answered my own question; it seems that's only the case when the vertices of the convex hull are affinely independent. – 4fj Nov 25 at 23:15

The following is not a proof (you'll see why in a moment), but is the basis for my belief that there are points which are interior to the convex hull and are nontrivial linear combinations of all the vertices of the convex hull. Take V, the set of all vertices of the convex hull in d dimensions, and do your best to partition it into neighboring sets of size d+1.
For a simplex in d dimensions, some version of barycentric coordinates should make it clear that each neighboring set has a realm of points interior to the hull which contains a simplex worth of points that are nontrivial combinations of each set. The effect of the partition is to replace the problem of size V with one of size roughly V/d: pick one point which is a representative combination of each neighboring set, and form V', while reusing any vertices from V that were left out of the initial partition. You should be able to work your way down to at most 2d interior vertices, and you might even arrange the result to be convex at each step.

This is not a proof because I am not guaranteeing convexity, nor that when you get down to at most 2d vertices things will work out. In my geometric worldview, however, I can imagine lopping off d vertices at a time in a controlled fashion to converge to a particular interior point, so the picture above might be useful in showing which (if not all) interior points are nontrivial combinations of all the vertices.

Gerhard "One Simplex At A Time" Paseman, 2012.11.25

- Actually, the 2d case might work out if you shift your perspective: replace a facet with k vertices by a certain combination that lies on the facet. You could even start with that to reduce V to the right number of vertices mod d (the integer). Gerhard "Ask Me About System Design" Paseman, 2012.11.25 – Gerhard Paseman Nov 25 at 20:55
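Sergei Ivanov's construction is easy to run numerically. Here is a minimal sketch (my addition; the random instance and $\varepsilon=0.1$ are assumptions, and $\varepsilon$ must be small enough that the shifted point stays in the hull, which the asserts guard):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
V = rng.normal(size=(8, 3))        # 8 points in R^3 (assumed to be vertices
                                   # of their own convex hull)
m = V.mean(axis=0)                 # barycenter, interior generically
p = m + 0.01 * rng.normal(size=3)  # an interior point to represent
eps = 0.1
q = (1 + eps) * p - eps * m        # shifted point, still inside for small eps

# Find any convex representation of q: weights >= 0, summing to 1, V^T w = q.
A_eq = np.vstack([V.T, np.ones(len(V))])
b_eq = np.append(q, 1.0)
res = linprog(c=np.zeros(len(V)), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
assert res.success

# Mix in the barycenter (equal weights on all vertices) and renormalize.
w = (res.x + eps / len(V)) / (1 + eps)
assert np.all(w > 0) and np.isclose(w.sum(), 1.0)
assert np.allclose(V.T @ w, p)
print("strictly positive weights:", np.round(w, 4))
```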
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9223981499671936, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/3733/how-to-better-understand-trading-signals?answertab=active
# How to better understand trading signals?

I am looking to get a better understanding of an output from a trading strategy. Basically I have a daily equity curve, let's call it $Y_t$. I have defined a bunch of independent variables $X_{it}$ that I think can explain the movement in the daily PnL. The independent variables are not used in trading signal generation directly.

1) How can I go about deducing which independent variables explain my $Y_t$, assuming that the relationship could be non-linear? I can start out with PCA, but from what I understand it assumes a linear relationship.

2) Using a reduced independent variable set $X_{it}$ from 1), how do I go about defining a non-linear relationship with $Y_t$? Neural nets maybe?

I understand that doing 1) and 2) might result in overfitting, but I just want to understand the equity curve better.

- The problem with principal component analysis is that you might lose information about what really drives your returns. Also, as a rule of thumb, before assuming that you have a non-linear relationship, make sure that a linear method really results in underfitting your problem. If you want to reduce the number of variables you could use shrinkage regression, for example. – Zarbouzou Jul 5 '12 at 8:22
- I agree with @Zarbouzou, is there any particular reason you are so focused on non-linear relationships? What's wrong with PCA? – Tal Fishman Jul 5 '12 at 17:04
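No answer was posted, but one standard way to attack (1) and (2) is model-free dependence screening followed by a flexible fit. A sketch on synthetic data (my addition, not a recommendation; the scikit-learn calls are real APIs, while the threshold and the toy PnL series are made up, and the in-sample score will of course overstate out-of-sample fit):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
T = 1000
X = rng.normal(size=(T, 5))                  # candidate explanatory variables
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=T)  # toy PnL

# (1) model-free screening: mutual information detects non-linear dependence
mi = mutual_info_regression(X, y, random_state=0)
print("MI per feature:", np.round(mi, 3))    # features 0 and 1 should stand out

# (2) a flexible non-linear fit on the retained features
keep = mi > 0.05                             # ad-hoc threshold for the toy data
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:, keep], y)
print("in-sample R^2:", round(model.score(X[:, keep], y), 3))
```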
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439505338668823, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/141081-finiteness-non-commutative-ring.html
# Thread:

1. ## finiteness of a non-commutative ring

Let $R$ be a non-commutative ring. Suppose that the number of non-units of $R$ is finite. Can we say that $R$ is a finite ring?

2. I think that we can say $R$ is a finite ring. The only way it could be infinite is if the group of units, call it $U$, were infinite. But then $rU$ for any $r\notin U$ with $r\neq 0$ would give an infinite number of non-unital elements, which is a contradiction. So I guess we must assume that there is at least one non-zero, non-unital element, otherwise an infinite ring of nothing but units with an additive identity thrown in (such as $\mathbb{R}$) would destroy the argument. In short: yes, unless $R$ is a field!

3. nimon's proof works only for commutative domains, because $rU$ is not necessarily infinite even if $U$ is infinite. The answer to xixi's question is also negative for non-commutative rings. For example $\mathbb{H}$, the ring of quaternions over $\mathbb{R}$, has this property because every non-zero element of $\mathbb{H}$ is a unit. In general, every (infinite) division ring has the property, because every non-zero element of a division ring is a unit. Here's a less trivial version of xixi's problem: let $R$ be an infinite commutative (resp. non-commutative) ring. Suppose that the number of non-unit elements of $R$ is finite. Is $R$ necessarily a field (resp. division ring)?

4. Originally Posted by NonCommAlg: nimon's proof works only for commutative domains [...]
For my question, I meant that there are non-units other than zero and the set of these elements is finite; now, by this assumption, can't we still say that $R$ is finite?
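NonCommAlg's example can be poked at numerically; here is a tiny sketch (my addition, not part of the thread) implementing the quaternion inverse $q^{-1}=\bar q/|q|^2$ to illustrate that every nonzero quaternion is a unit:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qinv(q):
    """Inverse of a nonzero quaternion: conjugate divided by squared norm."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z]) / (w*w + x*x + y*y + z*z)

rng = np.random.default_rng(0)
one = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = rng.normal(size=4)          # nonzero with probability 1
    assert np.allclose(qmul(q, qinv(q)), one)
print("every sampled nonzero quaternion is a unit")
```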
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9289101362228394, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/132373-definition-connectedness.html
# Thread:

1. ## Definition of Connectedness

In my book, "Real Mathematical Analysis" by Charles Pugh, a subset $S$ of a metric space $M$ is disconnected if there exists a proper, clopen subset of $S$, because if the proper, clopen subset of $S$ is $A$, then $S = A \cup A^c$ (where $A^c$ is the complement of $A$ in $S$).

It also says that $S$ is connected if it is not the union of two disjoint non-empty open sets, which seems different from saying that $S$ has no proper clopen subset. Which is the correct definition?

2. Originally Posted by JG89 [...]
To confuse you even more, here is a third definition, given by R. L. Moore c. 1920. Two nonempty sets are separated if neither one contains a point or a limit point of the other. (Of course, Prof. Moore did not say nonempty; he did not believe empty point sets existed.) A set is connected if and only if it is not the union of two separated sets. But here is the kicker: all three are equivalent. You ought to prove it.

3. Thanks! I'll get on that.

4. Be careful, they are not exactly equivalent. The definition by Moore, quoted by Plato, is for connected sets. The other two definitions are for connected spaces. For example, $A= [0, 1]\cup [2, 3]$ is not a connected set in $\mathbb{R}$ (with the "usual" topology), but the only "clopen" sets in $\mathbb{R}$ are the empty set and $\mathbb{R}$ itself. Of course, if you think of $A$, above, as a metric space in its own right, with the topology inherited from $\mathbb{R}$, then $[0,1]$ and $[2,3]$ are "clopen" in $A$.

5. Halls, if I am studying metric spaces, should I use the clopen and open set definitions?

6. Originally Posted by JG89 [...]
It is safe to say that in many textbooks where metric spaces are the central focus, a space is said to be connected if it is not the union of two nonempty disjoint open sets. If a metric space is not connected, then the two separating sets are both open and closed (clopen).

7. Originally Posted by Plato: You ought to prove it.

Definition 1: A metric space $M$ is connected if there is not a proper, clopen subset of $M$.

Definition 2: A metric space $M$ is connected if $M$ is not the union of two disjoint, non-empty open sets.

Proof that the two definitions are equivalent: First suppose that $M$ is not the union of two disjoint, non-empty open sets. We prove that it does not contain a proper, clopen subset. Assume that indeed it does, say $A \subset M$ is clopen. Then $M = A \cup A^c$ and $A \cap A^c = \varnothing$. Both $A$ and $A^c$ are clopen, and thus open, sets that are disjoint and non-empty (we know they are non-empty because they are assumed to be proper subsets), which contradicts the fact that $M$ is not the union of two disjoint, non-empty open sets.

Now we must prove the other direction. We will prove it in contrapositive form. Suppose that $M$ is the union of two disjoint, non-empty open sets, $A$ and $B$. We will prove that one of them is clopen. Take either $A$ or $B$, say $A$. Suppose it is not closed. There exists a convergent sequence $a_n \rightarrow a$ of its elements whose limit is not in $A$, and thus must be in $B$.
Remember that $B$ is open, and so $\exists \delta > 0 : d(a, x) < \delta \Rightarrow x \in B$. Note that at least one point of $\{x \in M : d(a,x) < \delta \}$ comes from the sequence $a_n$, since $a_n \rightarrow a$. But this contradicts the fact that $A \cap B = \varnothing$, and so $A$ is closed; since it is also open, it is clopen. $A$ is also a proper subset, and so there exists a proper, clopen subset of $M$. QED

Is this fine?

8. That proof works. In the first part you need to be sure that the set $A$ is a proper subset. As for the second part, recall that the complement of an open set is closed. Therefore $M\setminus A=B$, so the set $B$ is both open and closed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519768953323364, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/supersymmetry+spin
Tagged Questions

Lorentz spinors of $SO(n,1)$ and conformal spinors of $SO(n,2)$ (1 answer)
It would be great if someone can give me a reference (short enough!) which explains the (spinor) representation theory of the groups $SO(n,1)$ and $SO(n,2)$. I have searched through a few standard ...

Fundamental particles with spin > 1 (1 answer)
I am in undergraduate quantum mechanics, and the TA made an off-hand comment that currently no one knows how to describe fundamental particles with spin > 1 without supersymmetry. I was curious and ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9465339183807373, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/27689/does-kaluza-klein-theory-successfully-unify-gr-and-em-why-cant-it-be-extended/27694
# Does Kaluza-Klein theory successfully unify GR and EM? Why can't it be extended to the Standard Model gauge group?

As a quick disclaimer, I thought this might be a better place to ask than Physics.SE. I already searched there with "kaluza" and "klein" keywords to find an answer, but without luck. As background, I've been reading Walter Isaacson's biography of Einstein, and I reached the part where he briefly mentions the work of Kaluza and Klein. Also, I did my undergraduate degree in theoretical physics, but that was quite a few years ago now...

The way I understand the original work of Kaluza and Klein is that you can construct a theory that looks like a 5-D version of GR that reproduces the Field Equations and the Maxwell Equations. An important bit is that the 4th spatial dimension is the circle group $U(1)$, which we now know as the gauge group of EM. I guess the first part of the question should be whether I've understood that correctly.

Then, if you build a Kaluza-Klein type theory, but use the SM gauge group $U(1)\times SU(2)\times SU(3)$ instead of $U(1)$, what do you get? Is it gravity and the standard model together? If not, then what? The Wiki article on Kaluza-Klein theory says that this logic "flounders on a number of issues". The only issue it explicitly states is that fermions have to be included by hand. But even if this (Kaluza-Klein)+(SM gauge) theory only describes interactions, isn't that okay, or at least a big help?

## 5 Answers

"if you build a Kaluza-Klein type theory, but use the SM gauge group U(1)×SU(2)×SU(3) instead of U(1), what do you get?"

If you want to use U(1)×SU(2)×SU(3), you get gravity over an 11-dimensional manifold, such that the extra seven-dimensional manifold is of the kind produced by quotienting $S^3 \times S^5$ by an orbit of U(1). A particular space in this family is $S^3\times CP^2$; you can learn that the group of isometries of $CP^2$ is SU(3) and the group of isometries of $S^3$ is obviously SO(4), so SU(2)×SU(2). More general spaces of this kind can be obtained by using a generic lens space instead of $S^3$; remember that lens spaces interpolate between $S^3$, as a fibered product of $S^1$ and $S^2$, and the plain product $S^2 \times S^1$. (This is already a bit beyond the answer, but I mention it because my first question on Physics.SE was about whether this interpolation was a kind of Weinberg angle.)

The dimension of a Lie group is equal to its number of generators, so G=U(1)×SU(2)×SU(3) has, as a manifold, dimension 1+3+8=12. Such a manifold admits an action of $G\times G$, which is overkill. So we can quotient the manifold by a maximal non-trivial subgroup of G, in this case H=U(1)×U(1)×SU(2), and use instead the manifold G/H. Thus the number of dimensions that we need is $1+3+8-(1+1+3)=7$. The ways to map H into G are not unique, and in the particular case of the SM group this creates a 3-parameter family of manifolds, and each of them seems to have, according to Salam et al., a 2-parameter family of metrics. In some special cases of this parameter space, such as the aforementioned $S^3\times CP^2$, extra symmetries can appear.

I am not sure, but it seems that before Witten the technique to put "$G$ instead of U(1)" was really to put the whole Lie group manifold in as the compact space, and then act on it with $G\times G$. A particularly intriguing case is when $G$ has the topology of a sphere, and then the maximal possible number of isometries. So $S^1$ and $S^3$ had naturally attracted some attention, and Adams' theorem could have pushed some interest towards $S^7$.
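A back-of-envelope check of the coset dimension count above, $\dim G/H = \dim G - \dim H$ (my addition; just the standard formulas $\dim U(n)=n^2$ and $\dim SU(n)=n^2-1$):

```python
def dim_U(n):   # dim U(n) = n^2
    return n * n

def dim_SU(n):  # dim SU(n) = n^2 - 1
    return n * n - 1

dim_G = dim_U(1) + dim_SU(2) + dim_SU(3)   # 1 + 3 + 8 = 12
dim_H = dim_U(1) + dim_U(1) + dim_SU(2)    # 1 + 1 + 3 = 5
print(dim_G - dim_H)  # 7 extra dimensions; plus 4 of spacetime -> D = 11
```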
"But even if this (Kaluza-Klein)+(SM gauge) theory only describes interactions, isn't that okay, or at least a big help?"

It seems that it does not help, and I am as surprised as you. The question of fermions "by hand" goes beyond the chirality problem. There was a program, led mainly by Salam, holding that an analysis of the compactification manifold and its tangent plane should reveal the charge assignments. For the SM-like manifold in 7 dimensions, the program fails; you cannot find the charge assignments that the standard model has. It was noticed later, by Bailin and Love, that by going to 8 extra dimensions the problem could be solved, but further research was not pursued.

A reasonable inquiry is how the jump to 8 and eventually 9 dimensions relates to Pati-Salam, SU(5) and SO(10). Of course SO(10) needs nine extra dimensions (it is the isometry group of $S^9$), and the projections down to the standard model look very much like recent work of John Huerta. Another interesting question, to me, is whether the extra dimension, from 7 to 8, can really be a local gauge symmetry, given that we have reasons to keep ourselves in D=11 at most. When one notices that the extra dimension is the origin of the $B-L$ charge, that is interesting.

History

You can also check SPIRES for the history of Witten's involvement in Kaluza-Klein: FIND A WITTEN AND K KALUZA-KLEIN (edit: link changed to INSPIRE). He has four papers with the keyword "Kaluza Klein". The first of them is "Realistic Kaluza Klein Theories". It is the start of the KK trend, not the end. All the relevant papers come because of it. Do a FIND K KALUZA-KLEIN AND TOPCITE 50+ AND DATE BEFORE 1990 AND DATE AFTER 1975 and order by increasing date. You will notice the works of Salam et al., Pope, Duff, all of them. The difference from the previous, and later, research is that in this time span KK was considered seriously as in the original proposal, while generic references to KK in modern literature are really about compactification from higher dimensions; in some cases the fields coming from KK are even a nuisance to avoid.

I do not know who invented the late excuse that "Realistic Kaluza Klein" killed the research on KK; it appears very frequently in folk introductions to compactifications in string theory. More rarely, some person notes the contradiction and quotes instead the last paper of Witten on the topic, Shelter Island II, which has a deeper discussion of the chirality problem, and even hints at (or so I read between the lines) the question of singularity or regularity of the manifold, so that the later proposed solution to the fermion problem (see Moshe's answer) is not so surprisingly ironic; really it was there from the start.

The topic of Kaluza-Klein, or more properly of using the gauge fields of Kaluza-Klein as physical fields, was abandoned in 1984 with the first superstring revolution. Ten dimensions were more interesting than eleven, and then you do not have enough room to produce the SM group in a pure way from KK, so why bother? A mix of crowd-thinking with "publish or perish" led to the end of the research, as most of the easy topics on KK had been covered in the interval (from 1981 to 1984), and some others were common to any extra-dimensional theory: compactification, stability, etc. Nobody was even worried about the stability, because at that time it was believed that an AdS spacetime was reasonable, and some compactification mechanisms from $M^{11}$ to $AdS \times M^7$ were known.
An important role in this mechanism was played by the 84-component tensor that is added to the 44-component graviton in supergravity; some years later it would be recognised as the starting point of M-theory. -

I'm slightly torn between this answer and @Moshe's. My impression is that you have very well explained what happens when you build a KK-theory of the kind I asked, and I gather that the failure (aside from chiral fermions) is that one derives incorrect charge assignments (compared to SM)? – Warrick Dec 9 '11 at 9:26 A separate, simpler question is: why does GR+(U(1)xSU(2)xSU(3)) give an 11-dimensional space? Surely the dimension of U(1)xSU(2)xSU(3) is at most 1+2+3=6? Or am I making a basic misunderstanding of the dimensions of these groups and their product spaces? – Warrick Dec 9 '11 at 9:28 @Warrick. OK, I am adding a further edit to explain why seven is the minimum. And yes, the first failures detected were both chirality AND charge assignments; after them, Moshe's discussion follows. – user135 Dec 9 '11 at 10:23 1 – José Figueroa-O'Farrill Dec 10 '11 at 5:59 @JoséFigueroa-O'Farrill done; the guys at INSPIRE have added extra post-1985 papers of Witten to the keyword kaluza-klein, so I was afraid it was more confusing. But it is either this, or not working at all. – user135 Dec 10 '11 at 9:07

The original KK motivation is the derivation of gauge theory from higher-dimensional gravity, so I will assume we are discussing higher-dimensional theories of pure gravity (or perhaps supergravity), with no extra ingredients. I will also assume that the higher-dimensional space is smooth, for the simple reason that otherwise we cannot make definite statements, at least not without more information. Both these caveats are relevant for string theory, which has higher-dimensional gauge fields (not only gravity) and, crucially, in which you can make sense out of singular spaces. Indeed, the singularities give you just the right ingredients to solve the issues below:

Stability of the extra dimensions

The size and shape of the extra dimensions can change from place to place, resulting effectively in light scalar fields (moduli) which are not observed (this is known as the moduli problem). In the simplest examples this typically results in "runaway" behaviour in which the size of the extra dimensions rapidly goes to zero or infinity. To stabilize the size and shape moduli one has to find ways to build a sufficiently complex potential for these moduli, which requires some extra ingredients (such as the ones existing in the KKLT construction in string theory).

Vacuum Decay

Even if the KK compactification is stable to small perturbations, there is a mysterious quantum gravity effect that makes the KK vacuum decay (to "nothing") in non-supersymmetric KK theories (at least ones that, like the original one, contain a circle in the extra dimensions). This is one of the reasons most of the modern work on the subject considers only supersymmetric theories (i.e. supergravity) from the outset.

de Sitter Compactifications

Now that we know that we have a small positive cosmological constant, there is a new argument against higher-dimensional supergravity theories: a no-go theorem shows that smooth compactifications of such theories cannot produce de Sitter vacua. The only known way to circumvent this no-go theorem is using ingredients specific to string theory (e.g. orientifolds). This argument was not known at the time that the modern KK theories were studied and eventually rejected (roughly the early 1980s).
Chiral Fermions

Historically, the main reason the modern KK program was rejected (or at least slowed down) was this paper by Witten. One of the generic difficulties in constructing any model of fundamental physics is that the standard model has chiral fermions: fermions of different chirality have different couplings. This is hard to achieve because fermions tend to come and go in pairs of opposite chirality whose couplings are exactly the same. If you do manage to somehow construct chirally asymmetric models, these models have many more possibilities to be inconsistent (anomalous) and therefore many more consistency checks to pass. This is therefore one of the best, most stringent, tests to which to subject any claim of beyond-the-standard-model physics. What Witten showed is that there is no way to get chiral fermions starting with a higher-dimensional (super)gravity theory on any smooth manifold. This caused a general loss of interest in this research direction. Ironically, it was Witten and various collaborators who demonstrated, about 15 years later, that the problem can be solved (in string theory, using singular manifolds). It turns out that string theory has exactly the right ingredients to make the physics of the required singularities regular, and to pass all the non-trivial consistency checks that accompany any chiral theory. -

Just to be sure, Freund-Rubin solutions are unstable too? My understanding is that they were rejected due to the third point (being AdS instead of dS), but I have not read about their stability and/or decay. – user135 Dec 8 '11 at 13:14 I have defined what I call KK; the reasonable thing to do when the original idea did not work is to try variations, which circumvent some but not all of the problems. – user566 Dec 8 '11 at 16:00 Yes, KK in the broad sense of compactification of extra dimensions. But you should read all of the OP's question, and not only the header... he is interested in what happens if you try to build exactly the SM in, and that fails. The answers are in Witten 1981, Salam, and then Witten 1984. The classification of isometry groups for 7-dimensional manifolds is also done in this interval (some mathematicians reviewed it later), showing that the SM group really appears among a small set of possibilities. – user135 Dec 8 '11 at 16:07 I did not have enough time, but I will at some point edit it to remove any reference to history (which for some reason tends to start food fights, as we are already starting to see), and add something more specific about the SM. – user566 Dec 8 '11 at 16:10 2 Yes, that was the idea. The answer to the questions in the title is no and probably yes. But, before getting into specifics like the gauge group and matter representation, one needs to worry about the basic logic. In the end all the problems can be solved, and higher-dimensional theories can work, but they cannot be just gravity theories. For example they need supersymmetry and a whole bunch of other ingredients, which successively leads to a structure indistinguishable from string theory (which is the real reason KK was subsumed by ST). – user566 Dec 9 '11 at 15:43

Not my domain of expertise, but I would say that Kaluza-Klein, by itself, can't properly unify GR and EM. If we stick to the 5-dimensional approach, the first one studied by KK, we get a few drawbacks right from the starting point.
The deflection of the photon, for instance, would be affected by the presence of a 5-dimensional parameter; we can calculate it for a Schwarzschild solution and we would get an extra term that was never measured in any kind of light deflection. For a metric of the form $$\bar{g} = A^a dt^2 - A^{-(a+b)}dr^2 - A^{1-a-b}r^2d\Omega^2 - A^b dl^2,$$ where $A$ is the usual Schwarzschild factor, we would get a photon deflection $$\delta = (4a+2b)M/r_0$$ if we suppose there is no initial velocity for the fifth coordinate. The experimental data from the 70's would force $b<0.00075$. I guess this is one of the "flounders" mentioned in the article: not actually a real problem, but an annoying one that would force the dimensions to be too tiny, making all kinds of measurements very difficult. However, KK theories went well beyond their starting points, and evolved into Yang-Mills theory, so I guess they are still an active topic of research. There is more of it in "Space, Time, Matter: Modern Kaluza-Klein Theory", by P. S. Wesson. - +1 for the reference to the Wesson book – lurscher Dec 7 '11 at 19:42

This is my oldest question at Physics.SE: Measurement of Kaluza-Klein radion field gradients. I get the impression that the main reason why people don't continue this avenue of research is that there is no known mechanism that explains why the compactified dimensions stay tiny. In that question I argued that not only are the sizes of those dimensions tiny, but their derivatives are as well. Maybe there is some quantization going on that forces each dimension into well-defined sizes, but that's something the experts in the area should address. - 1 There are a zillion papers on moduli stabilization out there. – Aaron Dec 9 '11 at 0:49

Theodor Kaluza's theory successfully unified GR and EM, in 1919. "The unifying feature of this theory was that it unified Einstein's theory of gravitation and Maxwell's electromagnetic theory. As Kaku writes ... this unknown scientist was proposing to combine, in one stroke, the two greatest field theories known to science, Maxwell's and Einstein's, by mixing them in the fifth dimension." - 2 And therein lies the problem with taking popularisations too seriously. – José Figueroa-O'Farrill Dec 8 '11 at 2:04

Jose, you are absolutely correct, but who listens. - string theory. – Terry Giblin Dec 9 '11 at 18:32
http://www.physicsforums.com/showthread.php?p=3459932
Physics Forums

## Apostol 2.13 - #15 Cavalieri Solids (Volume Integration)

A solid has a circular base of radius 2. Each cross section cut by a plane perpendicular to a fixed diameter is an equilateral triangle. Compute the volume of the solid.

First, we find a way to express the length of a chord of the circle perpendicular to the fixed diameter. The equation $y=\sqrt{2^2-x^2}$ from x=-2 to 2 gives half the chord, so 2y is equal to the chord's length. At any point x, the solid's cross section is an equilateral triangle, so all sides must have length equal to the chord of the circle, or 2y. Now the area of an equilateral triangle with side 2y is equal to $(2y)^2\sqrt{3}/4 = y^2\sqrt{3}$. Substituting for y, we have that $Area(x)=(4-x^2)\sqrt{3}$. Integrating, we find that $\int_{-2}^2 A(x) dx=2\int_0^2 \sqrt{3}(4-x^2) dx = \frac{32\sqrt{3}}{3}$. The problem is that the book has $\frac{16\sqrt{3}}{3}$, and I want to make sure I didn't do it incorrectly.

Check the area of your triangular cross-sections again. If the base is 2y, what is the height?

Quote by dynamicsolo Check the area of your triangular cross-sections again. If the base is 2y, what is the height?

I did it by dropping an angle bisector from the top vertex of the equilateral triangle to create two right triangles. Then the base is y, and the hypotenuse is 2y. The Pythagorean theorem yields a height equal to $\sqrt{(2y)^2-y^2}=y\sqrt{3}$, so the area of this right triangle is $\frac{1}{2}y \times y\sqrt{3}$; however, this is just one half of the area of the equilateral triangle. Therefore the area of the equilateral triangle is $y^2\sqrt{3}$. This agrees with the formula for the area of an equilateral triangle given here: http://www.mathwords.com/a/area_equi...l_triangle.htm Taking $s=2y$, we have that the area is equal to $\frac{(2y)^2\sqrt{3}}{4}=y^2\sqrt{3}$.

Sorry, yes: my fault for trying to deal with more than one matter at once. I am wondering if the solver for Apostol used symmetry and forgot to double the volume integration. I am getting the same answer you are. Stewart does this as Example 7 in Section 6.2 with a radius of 1 and gets one-eighth our volume, which is consistent. Back-of-the-book answers aren't 100%...

Quote by dynamicsolo Sorry, yes: my fault for trying to deal with more than one matter at once. I am wondering if the solver for Apostol used symmetry and forgot to double the volume integration. I am getting the same answer you are. Stewart does this as Example 7 in Section 6.2 with a radius of 1 and gets one-eighth our volume, which is consistent. Back-of-the-book answers aren't 100%...

Glad to see that you're getting the same answer as me. I felt pretty solid about this one, but Apostol's answers in the back are better percentage-wise than any other book I've seen. I've done every problem through the first 200 pages or so and only come up with a few legitimate discrepancies.

What edition is Apostol up to now? Generally, Third and later Editions have the error rates in the answer sections down to about 0.25% or less...

Quote by dynamicsolo What edition is Apostol up to now?
Generally, Third and later Editions have the error rates in the answer sections down to about 0.25% or less... The most recent edition is the second, and it's from the 1960s. I don't think any new ones will be out any time soon, but it's a really solid text.

Well, it's supposed to be a classic. But I suspect the percentage of errors in the answers could be somewhere in the 0.25% to 0.5% range (from my long experience with textbooks)... I looked Apostol up and he's 88 this year. I doubt he's going to revise the book (though I've been surprised in the past); he's moved on to other projects.
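As a quick independent check of the thread's conclusion (my addition, not part of the forum exchange), the integral can be verified symbolically; SymPy is assumed to be available:

```python
# Verify the volume computed in the thread: cross-sections are equilateral
# triangles of side 2y, where y = sqrt(4 - x^2), for -2 <= x <= 2.
import sympy as sp

x = sp.symbols('x')
side = 2 * sp.sqrt(4 - x**2)       # chord length at position x
area = sp.sqrt(3) / 4 * side**2    # equilateral triangle area, = sqrt(3)*(4 - x^2)
volume = sp.integrate(area, (x, -2, 2))
print(sp.simplify(volume))         # prints 32*sqrt(3)/3, agreeing with the posters
```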
http://mathhelpforum.com/differential-geometry/138265-continuous-function-measurable.html
# Thread: Continuous function => Measurable

1. If a function f:[a,b]->R is continuous a.e., then show that f is a measurable function.

2. Originally Posted by Chandru1 If a function f:[a,b]->R is continuous a.e., then show that f is a measurable function.

The definition of continuity is that $f^{-1}$ takes open sets to open sets. f is a measurable function if $f^{-1}$ takes open sets to measurable sets. But every open set is measurable. Thus, all that remains is to show that the a.e. condition doesn't affect measurability. Define g:[a,b]-->R by: g(a) = f(a) if f is continuous at a; g(a) = $\lim_{x\to a}{f(x)}$ if f is not continuous at a. Then g is a continuous function, so g is measurable. But g = f a.e.; thus f is also measurable. Note: you may have to show that g is well-defined.

3. Originally Posted by southprkfan1 The definition of continuity is that $f^{-1}$ takes open sets to open sets. f is a measurable function if $f^{-1}$ takes open sets to measurable sets. But, every open set is measurable.

Isn't a function measurable iff the preimage of measurable sets is measurable?
http://mathoverflow.net/questions/101160?sort=oldest
## Axiom to exclude nonstandard natural numbers

In Peano Arithmetic, the induction axiom states that there is no proper subset of the natural numbers that contains 0 and is closed under the successor function. This is intended to rule out the possibility of extra natural numbers beyond the familiar ones. It doesn't accomplish that goal; there remains the possibility that other natural numbers exist and the familiar ones do not form a set. In Internal Set Theory (IST), which is an extension of ZFC that is consistent relative to ZFC, there is a distinction between standard and nonstandard sets, and it can be shown that (1) 0 is standard; (2) if $n \in \mathbb{N}$ is standard, then so is its successor; (3) $\mathbb{N}$ has nonstandard elements. The induction axiom is not violated because the standard natural numbers do not form a set. Is there a way to axiomatize set theory so that no such nonstandard natural numbers can exist? (Note: this question is not about nonstandard models of arithmetic. In IST, $\mathbb{N}$ is a standard set, and within a given model of IST, all models of second-order Peano arithmetic are isomorphic to $\mathbb{N}$.) -

1 The intention of the induction axiom is not so much what you say but more to allow one to prove (standard) statements quantified over all natural numbers, and it does accomplish that goal. – Lee Mosher Jul 2 at 18:51 I just ran across the following article, with the charming title "INEXPRESSIBLE LONGING FOR THE INTENDED MODEL", which provides a rich historical backdrop to various attempts at answering this very question and variants: glli.uni.opole.pl/publications/L_006_en.pdf – Ali Enayat Jul 7 at 2:08

## 2 Answers

As long as you axiomatize set theory in first-order logic, the answer to your question is no. The axioms would be consistent with each finite subset of the following set of sentences involving a new constant symbol $c$: "$c$ is a natural number" and "$c\neq n$" for each (standard name of a) natural number $n$. By compactness, there would be a model of the axioms plus all of these sentences, and in that model $c$ would denote a nonstandard natural number. On the other hand, if you're willing to go beyond first-order logic, then the answer to your question is yes. For example, in second-order logic, you can express the induction axiom as a single sentence and be confident that "set" really means arbitrary set (not "internal set" or anything like that). In other words, once you're sure that "set" has its intended meaning, the induction principle guarantees that "natural number" also has its intended meaning. (To me, this doesn't look very helpful, since the intended meaning of "set" seems more complicated than the intended meaning of "natural number".) For another example, if you're willing to use infinitary logic, then you can formulate the axiom "every natural number is equal to 0 or to 1 or to 2 or to 3, or ..." -

1 Also, the upwards Löwenheim–Skolem theorem states that, in first-order theories of the natural numbers, there are always models with (uncountably many) nonstandard numbers. The proof of that can be done in just the same way as in the first paragraph of this answer. – George Lowther Jul 2 at 18:23
By Gödel's incompleteness theorem, there can't be any such axiom in a first-order, recursively enumerable theory. You can axiomatize $\mathbb N$ by adding an infinitary rule of inference, the Hilbert $\omega$-rule: $$\frac{P(0)\wedge P(1) \wedge P(2) \wedge \cdots}{\forall n\, P(n)}$$ to PA for each arithmetic predicate P. This says: if the predicate P holds for each of the natural numbers (0, 1, 2, ...), then you can deduce the formula $\forall n P(n)$. The resulting system is called ω-logic. Obviously this is something of a "cheat", since you no longer have an effective theory. As one example (maybe there are better ones) of how it can be used, Michael Rathjen's article "The Art of Ordinal Analysis" describes using the $\omega$-rule to analyze stronger and stronger arithmetic theories, and is pretty interesting. -

2 The $\omega$-rule does not prohibit nonstandard models. If you add it to PA, you obtain the first-order theory of all statements in the language of arithmetic true in $\mathbb N$. In general, if you add it to a first-order theory, you obtain another first-order theory, since $\omega$-logic does not change the set of formulas. Thus, the result will still have nonstandard models by Andreas' argument. $\omega$-logic is complete wrt $\omega$-models, and there are no nonstandard $\omega$-models, but this is not any deep property of the theory, but a matter of the definition: ... – Emil Jeřábek Jul 3 at 11:21 1 ... Basically, you first declare that you will only study standard models, and then of course, it should come as no surprise that all such models are standard. – Emil Jeřábek Jul 3 at 11:22
http://cms.math.ca/10.4153/CJM-2003-010-0
Canadian Mathematical Society (www.cms.math.ca), Canad. J. Math. 55 (2003), 225-246. Published: 2003-04-01. Printed: Apr 2003. DOI: http://dx.doi.org/10.4153/CJM-2003-010-0

# Short Kloosterman Sums for Polynomials over Finite Fields

• William D. Banks
• Asma Harcharras
• Igor E. Shparlinski

## Abstract

We extend to the setting of polynomials over a finite field certain estimates for short Kloosterman sums originally due to Karatsuba. Our estimates are then used to establish some uniformity of distribution results in the ring $\mathbb{F}_q[x]/M(x)$ for collections of polynomials either of the form $f^{-1}g^{-1}$ or of the form $f^{-1}g^{-1}+afg$, where $f$ and $g$ are polynomials coprime to $M$ and of very small degree relative to $M$, and $a$ is an arbitrary polynomial. We also give estimates for short Kloosterman sums where the summation runs over products of two irreducible polynomials of small degree. It is likely that this result can be used to give an improvement of the Brun-Titchmarsh theorem for polynomials over finite fields.

MSC Classifications: 11T23 (Exponential sums); 11T06 (Polynomials)
http://physics.stackexchange.com/questions/41173/microsoft-excel-not-graphing-x-y1-2/41252
# Microsoft Excel not graphing $x = y^{1/2}$

The experiment relates the period of one "bounce" to the mass, when you hang a weight on a spring and let it bounce. I have this data here, one column being mass and one being time. The time is an average of 5 trials, each one being an average of 20 bounces, to minimize human error. The mass is the mass that was used in each trial (they aren't going up in exact differences because each weight has a slight difference; nothing is perfect in the real world).

| m (g) | t (s) |
|--------|--------|
| 50.59 | 0.3049 |
| 100.43 | 0.3982 |
| 150.25 | 0.4838 |
| 200.19 | 0.5572 |
| 250.89 | 0.6219 |
| 301.16 | 0.6804 |
| 351.28 | 0.7362 |
| 400.79 | 0.7811 |
| 450.43 | 0.8328 |
| 499.71 | 0.8690 |

My problem is that I need to find the relationship between them. I know $m = \frac{k}{4\pi^2}\times T^2$, so I can work out $k$ like that, but we need to graph it. In this equation, $k$ is the spring constant, $m$ is the mass hanging on the spring and $T$ is the period. I can assume that the relationship is a square-root relation, though I'm not sure of that; it appears to be the reverse of a square. Should it be $\frac{1}{x^2}$ then? Either way my problem is still present. I have tried $\frac{1}{x}$, $\frac{1}{x^2}$, $\sqrt{x}$, $x^2$; none of them produce a straight line. The problem for SU is that when I go to graph the data on Excel, I set the y axis data (which is the weights) and then when I go to set the x axis (which is the time) it just replaces the y axis with what I want to be the x axis; this only happens when I have the sqrt of $m$ as the y axis and I try to set the x axis as the time. The problem of math/physics is: am I even using the right thing? To get a straight line it would need to be $x = y^{1/2}$, right? I thought I was doing the right thing; it is what we were told to do. I'm just not getting anything that looks right. -

I can guess that $m$ is the mass, $T$ is the period you have measured, and $k$ is the spring constant; what are the other variables? What is $\Pi$? What are you plotting as your $x$ and $y$? – Colin McFaul Oct 19 '12 at 2:49 – michielm May 12 at 6:51

## 2 Answers

The equation you present, $m = \frac{k}{4\pi^2}\times T^2$, is of the form $y = \mathrm{slope}\times x + \mathrm{intercept}$ for a straight line. To put it in that form, you can make the identifications $m \mapsto x$, $T^2 \mapsto y$, $\frac{4\pi^2}{k} \mapsto \mathrm{slope}$. Your equation is also predicting $\mathrm{intercept} = 0$. I'm using mass as the x-variable because that is the independent variable you are actually controlling in your experiment. Time is then the dependent variable, so it gets plotted as the y-variable. As I mentioned in my comment, I'm guessing that $k$ is the spring constant, and I don't know what $\Pi$ is. Those are important for the physical interpretation of the slope, but aren't that important for getting the fit correct. I'm plotting mass as my x-variable, and $T^2$ as my y-variable. I didn't try to actually fit that to a straight line, but I did plot a straight line with no y-intercept on the same plot. Eye-balling those together, the fit looks nearly perfect. I don't see any evidence of any deviation from the law you're trying to verify. -

Your balance plate weighed about 6-7 grams and you need to take this into account. In order to get a good relation, you need to add a constant to all the weights. This is not exactly right, because the center of mass also changes a little when you add weights, but it's the major effect.
The only thing you need to do is drop the first point, where there is a 10-15% error due to the weight of the balance. - The base weight was the first one, 50.59g – SmartLemon Oct 18 '12 at 23:38 @SmartLemon: It isn't quite: there is a shift either in the mass or in the length. I found that I had to add an offset to make it match the square-root law, although it is approximately correct. – Ron Maimon Oct 19 '12 at 0:37
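For what it's worth, here is a minimal sketch (my addition, not from the thread) of the fit the answers describe, done with NumPy instead of Excel. Plotting $T^2$ against $m$ should give a line of slope $4\pi^2/k$, and a nonzero intercept picks up Ron's point about the unweighed plate mass:

```python
# Fit T^2 = (4*pi^2/k) * m + offset to the data quoted in the question.
import numpy as np

m = np.array([50.59, 100.43, 150.25, 200.19, 250.89,
              301.16, 351.28, 400.79, 450.43, 499.71]) / 1000.0   # kg
T = np.array([0.3049, 0.3982, 0.4838, 0.5572, 0.6219,
              0.6804, 0.7362, 0.7811, 0.8328, 0.8690])            # s

slope, intercept = np.polyfit(m, T**2, 1)
k = 4 * np.pi**2 / slope            # spring constant from the slope
print("k =", k, "N/m")
print("offset =", intercept, "s^2")  # nonzero if some mass went unweighed
```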
http://mathoverflow.net/questions/79086/finite-generation-of-equivariant-cohomology-rings
## Finite generation of equivariant cohomology rings

Let $G$ be a finite group acting on a topological space $X$. (In my applications, $X$ is the classifying space of a compact Lie group.) Let $H^\ast(-)=H^\ast(-;\mathbb{Z}/2\mathbb{Z})$ denote mod 2 singular cohomology. By the work of Quillen ("The Spectrum of an Equivariant Cohomology Ring: I"), if $H^\ast(X)$ is a finitely generated $\mathbb{Z}/2\mathbb{Z}$-module, then the equivariant cohomology $$H^\ast_G(X) = H^\ast(EG\times_G X)$$ is finitely generated as a $\mathbb{Z}/2\mathbb{Z}$-algebra. My question is: if we only assume that $H^\ast(X)$ is finitely generated as an algebra, does the conclusion still hold, i.e., is the equivariant cohomology $H^\ast_G(X)$ necessarily finitely generated as an algebra?

More background

The standard tool for computing equivariant cohomology is the Leray-Serre spectral sequence of the fibration $X\to EG\times_G X \to BG$, which has $$H^\ast(BG;H^\ast(X)) \Longrightarrow H^\ast(EG\times_G X)$$ as algebras. Now if $G$ acts trivially on $H^\ast(X)$ then the $E_2$-page can be identified with $$H^\ast(BG)\otimes H^\ast(X)$$ as algebras. By a classical result of Evens, the mod 2 cohomology algebra of a finite group is finitely generated. Since the tensor product of finitely generated algebras is again finitely generated, so is this $E_2$-page, and hence so is the equivariant cohomology algebra (but see the comments made by algori and Ralph below). However, in general the coefficients in the $E_2$-page are twisted by the action of $G$ on $H^\ast(X)$ (which in my case happens to be non-trivial), and so I can't see how the argument would go. At the other extreme, if $G$ acts freely on $X$ then we are basically asking if there is a finite covering space $X\to Y$ such that $H^\ast(X)$ is finitely generated as an algebra but $H^\ast(Y)$ is not. Such a beast would give a counter-example. (I'm also rather sure that if the answer to my question was yes, Quillen would have told us so!) -

2 Mark, re "and hence so is the equivariant cohomology algebra": $E_3$ is the quotient of a subalgebra of $E_2$; now, a subalgebra of a finitely generated algebra is not necessarily finitely generated, so I think an extra argument is needed here. – algori Oct 25 2011 at 20:16 1 $E_2$ is always a finitely generated algebra over $k = \mathbb{F}_2$: By Noether normalization $H^\ast(X)$ is a finitely generated module over a polynomial ring $k[x_1,...,x_n]$, and wlog we may assume that the $x_i$ have the same degree. By Dickson invariants, $k[x_1,...,x_n]$ is a finitely generated module over the $k$-polynomial ring $R := k[x_1,...,x_n]^G$. Thus $H^\ast(X)$ is a finitely generated $R$-module. Now by Evens' theorem $H^\ast(BG;R)$ is a finitely generated $R$-algebra and $E_2=H^\ast(BG;H^\ast(X))$ is a finitely generated $H^\ast(BG;R)$-module. – Ralph Oct 26 2011 at 23:21 1 ... Summarizing, $E_2$ is a finitely generated module over a finitely generated $k$-algebra. In particular, $E_2$ is a finitely generated $k$-algebra. --- But as pointed out by algori (and as can be seen in Evens' original proof) this doesn't suffice to show finite generation. – Ralph Oct 26 2011 at 23:21 @Ralph: Is $E_2$ a finitely generated $H^\ast(BG)$-module? If it were, I think this would answer the question, as the ring $H^\ast(BG)$ is commutative and Noetherian, and so subquotients of f.g. modules are f.g.
– Mark Grant Oct 27 2011 at 13:09 2 In this case $H_G^\ast(X;k)$ is a finitely generated $k$-algebra: By a theorem of Nakaoka, $H^\ast(EG \times_G Y^r;k) \cong H^\ast(BG;H^\ast(Y;k)^{\otimes r})$ as $k$-algebras, and the latter is isomorphic to $H^\ast(BG;H^\ast(X;k))$, which is just the $E_2$-term of the spectral sequence discussed above. From the considerations in my first comment, the desired result now follows. --- You can find Nakaoka's theorem in arxiv.org/abs/0711.5017, Theorem 2.1. Note: $X^{\otimes \Omega}$ in that theorem is a typo; it should be $X^\Omega$, as is clear from context or from the proof. – Ralph Oct 27 2011 at 22:48
http://terrytao.wordpress.com/tag/lindeberg-replacement-trick/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # Tag Archive You are currently browsing the tag archive for the ‘Lindeberg replacement trick’ tag. ## 254A, Notes 2: The central limit theorem 5 January, 2010 in 254A - random matrices, math.PR | Tags: Berry-Esseen theorem, central limit theorem, Lindeberg replacement trick, Ornstein-Uhlenbeck process, Stein's method | by Terence Tao | 54 comments Consider the sum ${S_n := X_1+\ldots+X_n}$ of iid real random variables ${X_1,\ldots,X_n \equiv X}$ of finite mean ${\mu}$ and variance ${\sigma^2}$ for some ${\sigma > 0}$. Then the sum ${S_n}$ has mean ${n\mu}$ and variance ${n\sigma^2}$, and so (by Chebyshev’s inequality) we expect ${S_n}$ to usually have size ${n\mu + O(\sqrt{n} \sigma)}$. To put it another way, if we consider the normalised sum $\displaystyle Z_n := \frac{S_n - n \mu}{\sqrt{n} \sigma} \ \ \ \ \ (1)$ then ${Z_n}$ has been normalised to have mean zero and variance ${1}$, and is thus usually of size ${O(1)}$. In the previous set of notes, we were able to establish various tail bounds on ${Z_n}$. For instance, from Chebyshev’s inequality one has $\displaystyle {\bf P}(|Z_n| > \lambda) \leq \lambda^{-2}, \ \ \ \ \ (2)$ and if the original distribution ${X}$ was bounded or subgaussian, we had the much stronger Chernoff bound $\displaystyle {\bf P}(|Z_n| > \lambda) \leq C \exp( - c \lambda^2 ) \ \ \ \ \ (3)$ for some absolute constants ${C, c > 0}$; in other words, the ${Z_n}$ are uniformly subgaussian. Now we look at the distribution of ${Z_n}$. The fundamental central limit theorem tells us the asymptotic behaviour of this distribution: Theorem 1 (Central limit theorem) Let ${X_1,\ldots,X_n \equiv X}$ be iid real random variables of finite mean ${\mu}$ and variance ${\sigma^2}$ for some ${\sigma > 0}$, and let ${Z_n}$ be the normalised sum (1). Then as ${n \rightarrow \infty}$, ${Z_n}$ converges in distribution to the standard normal distribution ${N(0,1)_{\bf R}}$. Exercise 1 Show that ${Z_n}$ does not converge in probability or in the almost sure sense (in the latter case, we think of ${X_1,X_2,\ldots}$ as an infinite sequence of iid random variables). (Hint: the intuition here is that for two very different values ${n_1 \ll n_2}$ of ${n}$, the quantities ${Z_{n_1}}$ and ${Z_{n_2}}$ are almost independent of each other, since the bulk of the sum ${S_{n_2}}$ is determined by those ${X_n}$ with ${n > n_1}$. Now make this intuition precise.) Exercise 2 Use Stirling’s formula from Notes 0a to verify the central limit theorem in the case when ${X}$ is a Bernoulli distribution, taking the values ${0}$ and ${1}$ only. (This is a variant of Exercise 2 from those notes, or Exercise 2 from Notes 1. It is easy to see that once one does this, one can rescale and handle any other two-valued distribution also.) Exercise 3 Use Exercise 9 from Notes 1 to verify the central limit theorem in the case when ${X}$ is gaussian. Note we are only discussing the case of real iid random variables. The case of complex random variables (or more generally, vector-valued random variables) is a little bit more complicated, and will be discussed later in this post. The central limit theorem (and its variants, which we discuss below) are extremely useful tools in random matrix theory, in particular through the control they give on random walks (which arise naturally from linear functionals of random matrices). 
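As a quick numerical aside (my addition, not part of the notes), the convergence asserted in Theorem 1 is easy to watch in simulation; the uniform distribution below is just an arbitrary choice of $X$:

```python
# Simulate the normalised sum Z_n (equation (1)) for iid Uniform(0,1)
# variables and compare with the standard normal distribution.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 50_000
X = rng.uniform(0.0, 1.0, size=(trials, n))       # mean 1/2, variance 1/12
Z = (X.sum(axis=1) - n * 0.5) / np.sqrt(n / 12)   # normalised to mean 0, variance 1
print(Z.mean(), Z.std())        # close to 0 and 1
print(np.mean(np.abs(Z) > 2))   # close to P(|N(0,1)| > 2) = 0.0455
```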
But the central limit theorem can also be viewed as a “commutative” analogue of various spectral results in random matrix theory (in particular, we shall see in later lectures that the Wigner semicircle law can be viewed in some sense as a “noncommutative” or “free” version of the central limit theorem). Because of this, the techniques used to prove the central limit theorem can often be adapted to be useful in random matrix theory. Because of this, we shall use these notes to dwell on several different proofs of the central limit theorem, as this provides a convenient way to showcase some of the basic methods that we will encounter again (in a more sophisticated form) when dealing with random matrices. Read the rest of this entry » ## Random matrices: universality of local eigenvalue statistics 3 June, 2009 in math.PR, paper | Tags: eigenvalues, Lindeberg replacement trick, random matrices, universality, Van Vu, Wigner matrices | by Terence Tao | 21 comments Van Vu and I have just uploaded to the arXiv our paper “Random matrices: universality of local eigenvalue statistics“, submitted to Acta Math..  This paper concerns the eigenvalues $\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)$ of a Wigner matrix $M_n = (\zeta_{ij})_{1 \leq i,j \leq n}$, which we define to be a random Hermitian $n \times n$ matrix whose upper-triangular entries $\zeta_{ij}, 1 \leq i \leq j \leq n$ are independent (and whose strictly upper-triangular entries $\zeta_{ij}, 1 \leq i < j \leq n$ are also identically distributed).  [The lower-triangular entries are of course determined from the upper-triangular ones by the Hermitian property.]  We normalise the matrices so that all the entries have mean zero and variance 1.  Basic examples of Wigner Hermitian matrices include 1. The Gaussian Unitary Ensemble (GUE), in which the upper-triangular entries $\zeta_{ij}, i<j$ are complex gaussian, and the diagonal entries $\zeta_{ii}$ are real gaussians; 2. The Gaussian Orthogonal Ensemble (GOE), in which all entries are real gaussian; 3. The Bernoulli Ensemble, in which all entries take values $\pm 1$ (with equal probability of each). We will make a further distinction into Wigner real symmetric matrices (which are Wigner matrices with real coefficients, such as GOE and the Bernoulli ensemble) and Wigner Hermitian matrices (which are Wigner matrices whose upper-triangular coefficients have real and imaginary parts iid, such as GUE). The GUE and GOE ensembles have a rich algebraic structure (for instance, the GUE distribution is invariant under conjugation by unitary matrices, while the GOE distribution is similarly invariant under conjugation by orthogonal matrices, hence the terminology), and as a consequence their eigenvalue distribution can be computed explicitly.  For instance, the joint distribution of the eigenvalues $\lambda_1(M_n),\ldots,\lambda_n(M_n)$ for GUE is given by the explicit formula $\displaystyle C_n \prod_{1 \leq i<j \leq n} |\lambda_i-\lambda_j|^2 \exp( - \frac{1}{2n} (\lambda_1^2+\ldots+\lambda_n^2))\ d\lambda_1 \ldots d\lambda_n$ (0) for some explicitly computable constant $C_n$ on the orthant $\{ \lambda_1 \leq \ldots \leq \lambda_n\}$ (a result first established by Ginibre).  (A similar formula exists for GOE, but for simplicity we will just discuss GUE here.)  Using this explicit formula one can compute a wide variety of asymptotic eigenvalue statistics.  
For instance, the (bulk) empirical spectral distribution (ESD) measure $\frac{1}{n} \sum_{i=1}^n \delta_{\lambda_i(M_n)/\sqrt{n}}$ for GUE (and indeed for all Wigner matrices, see below) is known to converge (in the vague sense) to the Wigner semicircular law $\displaystyle \frac{1}{2\pi} (4-x^2)_+^{1/2}\ dx =: \rho_{sc}(x)\ dx$ (1) as $n \to \infty$.  Actually, more precise statements are known for GUE; for instance, for $1 \leq i \leq n$, the $i^{th}$ eigenvalue $\lambda_i(M_n)$ is known to equal $\displaystyle \lambda_i(M_n) = \sqrt{n} t(\frac{i}{n}) + O( \frac{\log n}{n} )$ (2) with probability $1-o(1)$, where $t(a) \in [-2,2]$ is the inverse cumulative distribution function of the semicircular law, thus $\displaystyle a = \int_{-2}^{t(a)} \rho_{sc}(x)\ dx$. Furthermore, the distribution of the normalised eigenvalue spacing $\sqrt{n} \rho_{sc}(\frac{i}{n}) (\lambda_{i+1}(M_n) - \lambda_i(M_n))$ is known; in the bulk region $\varepsilon n \leq i \leq (1-\varepsilon) n$ for fixed $\varepsilon > 0$, it converges as $n \to \infty$ to the Gaudin distribution, which can be described explicitly in terms of determinants of the Dyson sine kernel $K(x,y) := \frac{\sin \pi(x-y)}{\pi(x-y)}$.  Many further local statistics of the eigenvalues of GUE are in fact governed by this sine kernel, a result usually proven using the asymptotics of orthogonal polynomials (and specifically, the Hermite polynomials).  (At the edge of the spectrum, say $i = n-O(1)$, the asymptotic distribution is a bit different, being governed instead by the  Tracy-Widom law.) It has been widely believed that these GUE facts enjoy a universality property, in the sense that they should also hold for wide classes of other matrix models. In particular, Wigner matrices should enjoy the same bulk distribution (1), the same asymptotic law (2) for individual eigenvalues, and the same sine kernel statistics as GUE. (The statistics for Wigner symmetric matrices are slightly different, and should obey GOE statistics rather than GUE ones.) There has been a fair body of evidence to support this belief.  The bulk distribution (1) is in fact valid for all Wigner matrices (a result of Pastur, building on the original work of Wigner of course).  The Tracy-Widom statistics on the edge were established for all Wigner Hermitian matrices (assuming that the coefficients had a distribution which was symmetric and decayed exponentially) by Soshnikov (with some further refinements by Soshnikov and Peche).  Soshnikov's arguments were based on an advanced version of the moment method. The sine kernel statistics were established by Johansson for Wigner Hermitian matrices which were gaussian divisible, which means that they could be expressed as a non-trivial linear combination of another Wigner Hermitian matrix and an independent GUE.  (Basically, this means that the distribution of the coefficients is a convolution of some other distribution with a gaussian.  There were some additional technical decay conditions in Johansson's work which were removed in subsequent work of Ben Arous and Peche.)   Johansson's work was based on an explicit formula for the joint distribution for gaussian divisible matrices that generalises (0) (but is significantly more complicated). Just last week, Erdos, Ramirez, Schlein, and Yau established sine kernel statistics for Wigner Hermitian matrices with exponential decay and a high degree of smoothness (roughly speaking, they require  control of up to six derivatives of the Radon-Nikodym derivative of the distribution).
Their method is based on an analysis of the dynamics of the eigenvalues under a smooth transition from a general Wigner Hermitian matrix to GUE, essentially a matrix version of the Ornstein-Uhlenbeck process, whose eigenvalue dynamics are governed by Dyson Brownian motion. In my paper with Van, we establish similar results to that of Erdos et al. under slightly different hypotheses, and by a somewhat different method.  Informally, our main result is as follows: Theorem 1. (Informal version)  Suppose $M_n$ is a Wigner Hermitian matrix whose coefficients have an exponentially decaying distribution, and whose real and imaginary parts are supported on at least three points (basically, this excludes Bernoulli-type distributions only) and have vanishing third moment (which is for instance the case for symmetric distributions).  Then one has the local statistics (2) (but with an error term of $O(n^{-1+\delta})$ for any $\delta>0$ rather than $O(\log n/n)$) and the sine kernel statistics for individual eigenvalue spacings $\sqrt{n} \rho_{sc}(\frac{i}{n}) (\lambda_{i+1}(M_n) - \lambda_i(M_n))$ (as well as higher order correlations) in the bulk. If one removes the vanishing third moment hypothesis, one still has the sine kernel statistics provided one averages over all i. There are analogous results for Wigner real symmetric matrices (see paper for details).  There are also some related results, such as a universal distribution for the least singular value of matrices of the form in Theorem 1, and a crude asymptotic for the determinant (in particular, $\log |\det M_n| = (1+o(1)) \log \sqrt{n!}$ with probability $1-o(1)$). The arguments are based primarily on the Lindeberg replacement strategy, which Van and I also used to obtain universality for the circular law for iid matrices, and for the least singular value for iid matrices, but also rely on other tools, such as some recent arguments of Erdos, Schlein, and Yau, as well as a very useful concentration inequality of Talagrand which lets us tackle both discrete and continuous matrix ensembles in a unified manner.  (I plan to talk about Talagrand's inequality in my next blog post.) Read the rest of this entry »
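To complement the discussion of the empirical spectral distribution above, here is a small simulation (my addition, not part of the post) of the semicircular law for a real symmetric Wigner matrix:

```python
# Compare the ESD of a normalised Wigner matrix with the semicircle density.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
A = rng.standard_normal((n, n))
H = (A + A.T) / np.sqrt(2)                 # real symmetric Wigner matrix
eigs = np.linalg.eigvalsh(H) / np.sqrt(n)  # normalise to the [-2, 2] bulk
hist, edges = np.histogram(eigs, bins=20, range=(-2, 2), density=True)
mids = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(4 - mids**2) / (2 * np.pi)
print(np.max(np.abs(hist - semicircle)))   # small for large n
```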
http://mathoverflow.net/questions/29490/how-many-surjections-are-there-from-a-set-of-size-n
## How many surjections are there from a set of size n?

It is well-known that the number of surjections from a set of size n to a set of size m is quite a bit harder to calculate than the number of functions or the number of injections. (Of course, for surjections I assume that n is at least m and for injections that it is at most m.) It is also well-known that one can get a formula for the number of surjections using inclusion-exclusion, applied to the sets $X_1,...,X_m$, where for each $i$ the set $X_i$ is defined to be the set of functions that never take the value $i$. This gives rise to the following expression: $m^n-\binom m1(m-1)^n+\binom m2(m-2)^n-\binom m3(m-3)^n+\dots$. Let us call this number $S(n,m)$. I'm wondering if anyone can tell me about the asymptotics of $S(n,m)$. A particular question I have is this: for (approximately) what value of $m$ is $S(n,m)$ maximized? It is a little exercise to check that there are more surjections to a set of size $n-1$ than there are to a set of size $n$. (To do it, one calculates $S(n,n-1)$ by exploiting the fact that every surjection must hit exactly one number twice and all the others once.) So the maximum is not attained at $m=1$ or $m=n$. I'm assuming this is known, but a search on the web just seems to lead me to the exact formula. A reference would be great. A proof, or proof sketch, would be even better.

Update. I should have said that my real reason for being interested in the value of m for which S(n,m) is maximized (to use the notation of this post) or m!S(n,m) is maximized (to use the more conventional notation where S(n,m) stands for a Stirling number of the second kind) is that what I care about is the rough size of the sum. The sum is big enough that I think I'm probably not too concerned about a factor of n, so I was prepared to estimate the sum as lying between the maximum and n times the maximum. -

3 Whenever anyone has a question of the form "what is this function f:N-->N" then one very natural thing to do is to compute the first 10 values or so and then type it in to Sloane. research.att.com/~njas/sequences/index.html . If it's there then there will be references where you can read further. If it's not then this is evidence that little is known about it. I tried it with this sequence and got "ceiling of (n/sqrt(2))". Given that this is not the answer (e.g. f(1000)=722 and n/sqrt(2)=707.10678...) one might suspect that no exact formula is known but that n/sqrt(2) is close. – Kevin Buzzard Jun 25 2010 at 12:23 More likely is that it's less than any fixed multiple of $n$ but by a slowly-growing amount, don't you think? – JBL Jun 25 2010 at 12:27 These numbers also have a simple recurrence relation: mathoverflow.net/questions/27071/… – JBL Jun 25 2010 at 12:28 @JBL: I have no idea what the answer to the maths question is. I just thought I'd advertise a general strategy, which arguably failed this time. – Kevin Buzzard Jun 25 2010 at 12:30 See Herbert S. Wilf 'Generatingfunctionology', page 175. – Bruce Arnold Jun 26 2010 at 21:32

## 6 Answers

It seems to be the case that the polynomial $P_n(x) =\sum_{m=1}^n m!S(n,m)x^m$ has only real zeros. (I know it is true that $\sum_{m=1}^n S(n,m)x^m$ has only real zeros.) If this is true, then the value of $m$ maximizing $m!S(n,m)$ is within 1 of $P'_n(1)/P_n(1)$ by a theorem of J. N. Darroch, Ann. Math. Stat. 35 (1964), 1317-1321. See also J. Pitman, J. Combinatorial Theory, Ser.
A 77 (1997), 279-303. By standard combinatorics $$\sum_{n\geq 0} P_n(x) \frac{t^n}{n!} = \frac{1}{1-x(e^t-1)}.$$ Hence $$\sum_{n\geq 0} P_n(1)\frac{t^n}{n!} = \frac{1}{2-e^t}$$ $$\sum_{n\geq 0} P'_n(1)\frac{t^n}{n!} = \frac{e^t-1}{(2-e^t)^2}.$$ Since these functions are meromorphic with smallest singularity at $t=\log 2$, it is routine to work out the asymptotics, though I have not bothered to do this.

Update. It is indeed true that $P_n(x)$ has real zeros. This is because $(x-1)^nP_n(1/(x-1))=A_n(x)/x$, where $A_n(x)$ is an Eulerian polynomial. It is known that $A_n(x)$ has only real zeros, and the operation $P_n(x) \to (x-1)^nP_n(1/(x-1))$ leaves invariant the property of having real zeros. -

1 Given that Tim ultimately only wants to sum m! S(n,m) rather than find its maximum, it is really only P_n(1) which one needs to compute. In principle this is an exercise in the saddle point method, though one which does require a nontrivial amount of effort. – Terry Tao Jun 26 2010 at 19:03 8 You don't need the saddle point method to find the asymptotic rate of growth of the coefficients of $1/(2−e^t)$. The smallest singularity is at $t=\log 2$. It is a simple pole with residue $−1/2$. Hence $$P_n(1)\sim \frac{n!}{2(\log 2)^{n+1}}.$$ Using all the singularities $\log 2+2\pi ik, k\in\mathbb{Z}$, one obtains an asymptotic series for $P_n(1)$. It can be shown that this series actually converges to $P_n(1)$. – Richard Stanley Jun 26 2010 at 19:51 8 I quit being lazy and worked out the asymptotics for $P'_n(1)$. The Laurent expansion of $(e^t-1)/(2-e^t)^2$ about $t=\log 2$ begins $$\frac{e^t-1}{(2-e^t)^2} = \frac{1}{4(t-\log 2)^2} + \frac{1}{4(t-\log 2)}+\cdots$$ $$\qquad = \frac{1}{4(\log 2)^2\left(1-\frac{t}{\log 2}\right)^2} -\frac{1}{4(\log 2)\left(1-\frac{t}{\log 2}\right)}+\cdots,$$ whence $$P'_n(1)= n!\left(\frac{n+1}{4(\log 2)^{n+2}}- \frac{1}{4(\log 2)^{n+1}}+\cdots\right).$$ Thus $P'_n(1)/P_n(1)\sim n/2(\log 2)$. – Richard Stanley Jun 26 2010 at 21:00 3 Ah, I didn't realise that it was so simple to read off asymptotics of a Taylor series from nearby singularities (though, in retrospect, I implicitly knew this in several contexts). Thanks, I learned something today! – Terry Tao Jun 28 2010 at 20:26 1 While we're on the subject, I'd like to recommend Flajolet and Sedgewick for anyone interested in such techniques: algo.inria.fr/flajolet/Publications/books.html – Qiaochu Yuan Jun 29 2010 at 4:26

This looks like the Stirling numbers of the second kind (up to the $m!$ factor). This paper and this one are specifically devoted to the maximal Stirling numbers. It seems that for large $n$ the relevant asymptotic expansion is $$k! S(n,k)= (e^r-1)^k \frac{n!}{r^n}(2\pi k B)^{-1/2}\left(1-\frac{6r^2\theta^2 +6r\theta+1}{12re^r}+O(n^{-2})\right),$$ where $$e^r-1=k+\theta,\quad \theta=O(1),$$ $$B=\frac{re^{2r}-(r^2+r)e^r}{(e^r-1)^2}.$$ -

1 If I understand correctly, what I (purely accidentally) called S(n,m) is m! times the Stirling number of the second kind with parameters n and m, which is conventionally denoted by S(n,m). So if I use the conventional notation, then my question becomes, how does one choose m in order to maximize m!S(n,m), where now S(n,m) is a Stirling number of the second kind? – gowers Jun 25 2010 at 11:08 I've added a reference concerning the maximum Stirling numbers.
For large $n$, $S(n,m)$ is maximized by $m=K_n\sim n/\ln n$. – Andrey Rekalo Jun 25 2010 at 11:26 1 Is it obvious how to get from there to the maximum of m!S(n,m)? – gowers Jun 25 2010 at 11:29 Well, it's not obvious to me. But the computation for $S(n,m)$ seems to be not too complicated and probably can be adapted to deal with $m! S(n,m)$. – Andrey Rekalo Jun 25 2010 at 11:36

If f is an arbitrary surjection from N onto M, then we can think of f as partitioning N into m different groups, each group of inputs representing the same output point in M. The Stirling numbers of the second kind count how many ways there are to partition an N-element set into m groups. But this undercounts the surjections, because any permutation of those m groups defines a different surjection but gets counted the same. There are m! such permutations, so our total number of surjections is m! S(n,m).

To look at the maximum values, define a sequence S_n = n - M_n, where M_n is the m that attains the maximum value for a given n; in other words, S_n is the "distance from the right edge" for the maximum value. Computer-generated tables suggest that this function is constant for 3-4 values of n before increasing by 1. If this is true, then the m coordinate that maximizes m! S(n,m) is bounded by n - ceil(n/3) - 1 and n - floor(n/4) + 1. I have no proof of the above, but it gives you a conjecture to work with in the meantime. -

The paper by Canfield and Pomerance that you quoted has an interesting expansion for $S(n,k+1)/S(n,k)$ at page 5. The corresponding quotient $Q := Sur(n,k+1)/Sur(n,k)$ is just $k+1$ times as big, and should be maximized by $k$ solving Q=1. – Pietro Majer Jun 25 2010 at 14:16 Oops, sorry: the above comment was addressed to Andrey. – Pietro Majer Jun 25 2010 at 21:15

I found this paper of Temme (available here) that gives an explicit but somewhat complicated asymptotic for the Stirling number S(n,m) of the second kind, by the methods alluded to in previous answers (generating functions -> contour integral -> steepest descent). Here's the asymptotic (as copied from that paper). One first sets $t_0 := \frac{n-m}{m}$ and finds the positive real number $x_0$ solving the transcendental equation $\frac{1-e^{-x_0}}{x_0} = \frac{m}{n}$ (one has the asymptotics $x_0 \approx 2(1-m/n)$ when $m/n$ is close to 1, and $x_0 \approx n/m$ when $m/n$ is close to zero.) One then defines $A := \phi(x_0) - m t_0 + (n-m) t_0$ where $\phi(x) := - n \ln x + m \ln(e^x - 1).$ (Note: $x_0$ is the stationary point of $\phi(x)$.) One has an integral representation $S(n,m) = \frac{n!}{m!} \frac{1}{2\pi i} \int e^{\phi(x)} \frac{dx}{x}$ where the integral is a small contour around the origin. The saddle point method then gives $S(n,m) = (1+o(1)) e^A m^{n-m} f(t_0) \binom{n}{m}$ where $f(t_0) := \sqrt{\frac{t_0}{(1+t_0)(x_0-t_0)}}$ and o(1) goes to zero as $n \to \infty$ (uniformly in m, I believe). In principle, one can now approximate $m! S(n,m)$ to within o(1) and compute its maximum in finite time, but this seems somewhat tedious. It does seem though that the maximum is attained when $m/n = c+o(1)$ for some explicit constant $0 < c < 1$.

EDIT: Actually, it's clear that the maximum is going to be obtained in the range $n/e \leq m \leq n$ asymptotically, because $m! S(n,m)$ equals $n! \approx (n/e)^n$ when $m=n$, and on the other hand we have the trivial upper bound $m! S(n,m) \leq m^n$. Among other things, this makes $x_0$ and $t_0$ bounded, and so the f(t_0) term is also bounded and not of major importance to the asymptotics.
The other terms, however, are still exponential in $n$...

EDIT: There is also the identity $\sum_{k=1}^n (k-1)! S(n,k) = (-1)^n Li_{1-n}(2)$, where $Li_s$ is the polylogarithm function. So, up to a factor of $n$, the question is the same as that of obtaining an asymptotic for $Li_{1-n}(2)$ as $n \to \infty$. This seems quite doable (presumably from yet another contour integration and steepest descent method), but a quick search of the extant asymptotics didn't give this immediately.

-

Richard's answer is short, slick, and complete, but I wanted to mention here that there is also a "real variable" approach that is consistent with that answer; it gives weaker bounds at the end, but also tells a bit more about the structure of the "typical" surjection. I'll write the argument in a somewhat informal "physicist" style, but I think it can be made rigorous without significant effort.

Tim's function $Sur(n,m) = m! S(n,m)$ obeys the easily verified recurrence $Sur(n,m) = m ( Sur(n-1,m) + Sur(n-1,m-1) )$, which on expansion becomes $Sur(n,m) = \sum m_1 \cdots m_n = \sum \exp( \sum_{j=1}^n \log m_j )$, where the sum is over all paths $1=m_1 \leq m_2 \leq \ldots \leq m_n = m$ in which each $m_{i+1}$ is equal to either $m_i$ or $m_i+1$; one can interpret $m_i$ as being the size of the image of the first $i$ elements of $\{1,\ldots,n\}$.

If we make the ansatz $m_j \approx n f(j/n)$ for some nice function $f: [0,1] \to {\bf R}^+$ with $f(0)=0$ and $0 \leq f'(t) \leq 1$ for all $t$, and use standard entropy calculations (Stirling's formula and Riemann sums, really), we obtain a contribution to $Sur(n,m)$ of the form

$\exp( n \int_0^1 \log(n f(t))\ dt + n \int_0^1 h(f'(t))\ dt + o(n) )$ (*)

where $h$ is the entropy function $h(\theta) := -\theta \log \theta - (1-\theta) \log (1-\theta)$. So, heuristically at least, the optimal profile comes from maximising the functional $\int_0^1 \log(f(t)) + h(f'(t))\ dt$ subject to the boundary condition $f(0)=0$. (The fact that $h$ is concave will make this maximisation problem nice and elliptic, which makes it very likely that these heuristic arguments can be made rigorous.)

The Euler-Lagrange equation for this problem is $-\frac{f''}{f'(1-f')} = \frac{1}{f}$, while the free boundary at $t=1$ gives us the additional Neumann boundary condition $f'(1)=1/2$. The translation invariance of the Lagrangian gives rise to a conserved quantity; indeed, multiplying the Euler-Lagrange equation by $f'$ and integrating, one gets $\log(1-f') = \log f + C$, which is easily solved as $f = \frac{1}{A} (1 - B e^{-At} )$ for some constants $A, B$. The Dirichlet boundary condition $f(0)=0$ gives $B=1$; the Neumann boundary condition $f'(1)=1/2$ gives $A=\log 2$, thus $f(t) = (1 - 2^{-t}) / \log 2$.

In particular $f(1)=1/(2 \log 2)$, which matches Richard's answer that the maximum occurs when $m/n \approx 1/(2 \log 2)$. To match up with the asymptotic for $Sur(n,m)$ in Richard's answer (up to an error of $\exp(o(n))$), I need to have $\int_0^1 \log f(t) + h(f'(t))\ dt = - 1 - \log \log 2.$ And happily, this turns out to be the case (after a mildly tedious computation).

This calculation reveals more about the structure of a "typical" surjection from $n$ elements to $m$ elements for $m$ free, beyond the fact that $m/n \approx 1/(2 \log 2)$: it shows that for any $0 < t < 1$, the image of the first $tn$ elements has cardinality about $f(t) n$.
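The "mildly tedious computation" above is easy to check numerically. Here is a quick sketch (illustrative Python; the quadrature settings are arbitrary, and the log-singularity of the integrand at $t=0$ is integrable, so `quad` handles it since it never evaluates the endpoints themselves):

```python
import numpy as np
from scipy.integrate import quad

# Optimal profile from the Euler-Lagrange equation: f(t) = (1 - 2^{-t}) / log 2,
# with derivative f'(t) = 2^{-t}.
f  = lambda t: (1.0 - 2.0**(-t)) / np.log(2.0)
fp = lambda t: 2.0**(-t)

# Entropy function h(theta) = -theta log theta - (1 - theta) log(1 - theta).
def h(theta):
    return -theta * np.log(theta) - (1.0 - theta) * np.log(1.0 - theta)

# The Lagrangian log f(t) + h(f'(t)), integrated over [0, 1].
integrand = lambda t: np.log(f(t)) + h(fp(t))

value, err = quad(integrand, 0.0, 1.0)
print(value)                        # about -0.63349
print(-1.0 - np.log(np.log(2.0)))   # about -0.63349, as claimed
```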
If one fixes $m$ rather than lets it be free, then one has a similar description of the surjection, but one needs to adjust the $A$ parameter (it has to solve the transcendental equation $(1-e^{-A})/A = m/n$). With a bit more effort, this type of computation should also reveal the typical distribution of the preimages of the surjection, and suggest a random process that generates something that is within $o(n)$ edits of a random surjection.

It's also interesting to note that the answer $m/n \approx 1/(2\log 2) = 0.72134\ldots$ fits extremely well with Kevin's numerical computation $f(1000)=722$, so we now have several independent confirmations that this is the correct answer...

-

1 I found Terry's latest comment very interesting. Although his argument is not as easy as the complex variable technique and does not give the full asymptotic expansion, it is of much greater generality. It would make a nice expository paper (say for the Monthly) to present this example without any handwaving. – Richard Stanley Jun 29 2010 at 18:59

1 Hmm, not a bad suggestion. I may write a more detailed proof on my blog in the near future. – Terry Tao Jun 30 2010 at 18:13

If I'm not wrong, the asymptotics $m/n\sim 1/(2\log 2)$ is equivalent to $(m+1)^n\sim 4m^n$. Thus, for the maximal $m$, the number of maps from $n$ to $m+1$ is approximately 4 times the number of maps from $n$ to $m$. I wonder if this may be proved by a direct combinatorial argument, yielding another proof of the asymptotics. – Pietro Majer Jul 2 2010 at 19:47

I don't have a precise reference for your problem (given $n$, find "the most surjected" $m$); while waiting for one, I can say that I think the standard starting point should be as follows. To avoid confusion I modify slightly your notation for the number of surjections from an $n$-element set to an $m$-element set, writing it $\mathrm{Sur}(n,m)$. One has the generating function (coming e.g. from the analogous g.f. for Stirling numbers of the second kind) $$(e^x-1)^m\,=\sum_{n\ge m}\ \mathrm{Sur}(n,m)\ \frac{x^n}{n!}\ ,$$ whence by the Cauchy formula, with a simple integration contour around 0, $$\frac{\mathrm{Sur}(n,m)}{n!}={1 \over 2\pi i} \oint \frac{(e^z-1)^m}{z^{n+1}}dz .$$ For a circular path $re^{it}$ we find $$\frac{\mathrm{Sur}(n,m)}{n!}={1 \over 2\pi } \int_{-\pi}^{\pi}\left(\exp(re^{it})-1\right)^m e^{-int} dt\ .$$

This holds for any number $r>0$, and the most convenient one should be chosen according to the stationary phase method; here a change of variable followed by dominated convergence may possibly give a convergent integral, producing an asymptotics: this is e.g. how one can derive the Stirling asymptotics for $n!$. In your case, the problem is: for a given (large) $n$, maximize the integral in $m$, and give asymptotic expansions for the maximal $m$ (the first order should be $\lambda n + O(1)$ with $2/3\leq \lambda\leq 3/4$ according to Michael Burge's exploration). This seems to be tractable; for the moment I leave these few hints hoping they are useful, but I'm very curious to see the final answer.

-

OK, this matches quite well with the formula reported by Andrey Rekalo; the $r$ there is most likely coming from the stationary phase method. – Pietro Majer Jun 25 2010 at 13:52

Pietro, I believe this is very close to how the asymptotic formula was obtained. – Andrey Rekalo Jun 25 2010 at 14:03

Yes, I think the starting point is standard and obligatory.
– Pietro Majer Jun 25 2010 at 21:13

PS: Andrey, the papers you quoted initially were in pay-for journals, and led me to the wrong idea that there was no free version of that standard computation. My fault, I made a computation for nothing. – Pietro Majer Jun 26 2010 at 6:24

Thank you for the comment. I'll try my best to quote free sources whenever I find them available. – Andrey Rekalo Jun 26 2010 at 15:32
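Since several of the answers above predict that the maximizing $m$ satisfies $m/n \approx 1/(2\log 2) \approx 0.7213$, one can also confirm this with exact integer arithmetic from the recurrence $Sur(n,m) = m(Sur(n-1,m) + Sur(n-1,m-1))$ quoted in Terry Tao's answer. A short Python sketch (the cut-off $n = 1000$ is arbitrary) reproduces Kevin's value $f(1000) = 722$:

```python
import math

def argmax_surjections(n):
    """Return the m maximizing Sur(n, m) = m! * S(n, m), computed exactly
    from the recurrence Sur(n, m) = m * (Sur(n-1, m) + Sur(n-1, m-1))."""
    row = [0, 1]                                   # row[m] = Sur(1, m)
    for k in range(2, n + 1):
        new = [0] * (k + 1)
        for m in range(1, k + 1):
            above = row[m] if m < len(row) else 0  # Sur(k-1, m)
            left = row[m - 1]                      # Sur(k-1, m-1)
            new[m] = m * (above + left)
        row = new
    return max(range(1, n + 1), key=lambda m: row[m])

n = 1000
m_star = argmax_surjections(n)      # exact big-integer arithmetic; takes a moment
print(m_star)                       # 722, matching Kevin's f(1000) = 722
print(m_star / n, 1 / (2 * math.log(2)))  # both about 0.7213...
```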
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 142, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9367759227752686, "perplexity_flag": "head"}
http://theoreticalatlas.wordpress.com/2011/02/02/brzezinski-sncg/
# Theoretical Atlas

He had bought a large map representing the sea, / Without the least vestige of land: / And the crew were much pleased when they found it to be / A map they could all understand.

February 2, 2011

## Tomasz Brzezinski – "Toward Synthetic Non-Commutative Geometry"

Posted by Jeffrey Morton under algebra, c*-algebras, category theory, geometry, noncommutative geometry, talks

So there's a lot of preparation going on for the workshop HGTQGR coming up next week at IST, and the program(me) is much more developed – many of the talks are now listed, though the schedule has yet to be finalized. This week we'll be having a "pre-school school" to introduce the local mathematicians to some of the physics viewpoints that will be discussed at the workshop – Aleksandar Mikovic will be introducing Quantum Gravity (from the point of view of the loop/spin-foam approach), and Sebastian Guttenberg will be giving a mathematician's introduction to String theory.

These are by no means the only approaches physicists have taken to the problem of finding a theory that incorporates both General Relativity and Quantum Field Theory. They are, however, two approaches where lots of work has been done, and which appear to be amenable to using the mathematical tools of (higher) category theory which we're going to be talking about at the workshop. These are "higher gauge theory", which very roughly is the analog of gauge theory (which includes both GR and QFT) using categorical groups, and TQFT, which is a very simple type of quantum field theory that has a natural description in terms of categories, and which can be generalized to higher categories.

I'll probably take a few posts after the workshop to write up these, and the many other talks and mini-courses we'll be having, but right now I'd like to say a little bit about another talk we had here recently. Actually, the talk was in Porto, but several of us at IST in Lisbon attended by videoconference. This was the first time I've seen this for a colloquium-style talk, though I did once take a course in General Relativity from Eric Poisson that was split between U of Waterloo and U of Guelph. I thought it was a great idea then, and it worked quite well this time, too. This is the way of the future – and unfortunately it probably will be for some time to come…

Anyway, the talk in question was by Tomasz Brzezinski, about "Synthetic Non-Commutative Geometry" (link points to the slides). The point here is to take two different approaches to extending differential geometry (DG) and combine the two insights. The "synthetic" part refers to synthetic differential geometry (SDG), which is a program for doing DG in a general topos. One aspect of this is that in a topos where the Law of the Excluded Middle doesn't apply, it's possible for the real-numbers object to have infinitesimals: that is, elements which are smaller than any positive element, but bigger than zero. This lets one take things which have to be treated in a roundabout way in ordinary DG, like $dx$, at face value – as an infinitesimal change in $x$. It also means doing geometry in a completely constructive way.

However, these aspects aren't so important here. The important fact about SDG here is that it's based on taking a theory that was originally defined in terms of sets, or topological spaces – that is, in the toposes $Sets$ or $Top$ – and transplanting it to another category.
This is because Brzezinski's goal was to do something similar for a different extension of DG, namely non-commutative geometry (NCG). This is a generalisation of DG which is based on the equivalence $CommAlg^{op} \simeq lCptHaus$ between the category of commutative $C^{\star}$-algebras (and algebra maps, read "backward" as morphisms in $CommAlg^{op}$) and that of locally compact Hausdorff spaces (which, on objects, equates a space $X$ with the algebra $C(X)$ of continuous functions on it, and an algebra $A$ with its spectrum $Spec(A)$, the space of maximal ideals). The generalization of NCG is to take structures defined for $lCptHaus$ that create DG, and make similar definitions in the category $Alg^{op}$ of not-necessarily-commutative $C^{\star}$-algebras.

This category is the one which plays the role of the topos $Top$. It isn't a topos, though: it's some sort of monoidal category. And this is what "synthetic NCG" is about: taking the definitions used in NCG and reproducing them in a generic monoidal category (to be clear, a braided monoidal category). The way he illustrated this is by explaining what a principal bundle would be in such a generic category.

To begin with, we can start by giving a slightly nonstandard definition of the concept in ordinary DG: a principal $G$-bundle $P$ is a manifold with a free action of a (compact Lie) group $G$ on it. The point is that this always looks like a "base space" manifold $B$, with a projection $\pi : P \rightarrow B$ so that the fibre at each point of $B$ looks like $G$. This amounts to saying that $\pi$ is a coequalizer:

$P \times G \rightrightarrows P \stackrel{\pi}{\rightarrow} B$

where the maps from $P \times G$ to $P$ are (a) the action, and (b) the projection onto $P$. (Being a coequalizer means that $\pi$ has the same composite with both maps, and that any other map $\phi$ with the same property factors uniquely through $\pi$.) Another equivalent way to say this is that since $P \times G$ has two maps into $P$, it has a map into the pullback $P \times_B P$ (the pullback of two copies of $P \stackrel{\pi}{\rightarrow} B$), and the claim is that this map is actually an isomorphism.

The main points here are that (a) we take this definition in terms of diagrams and abstract it out of the category $Top$, and (b) when we do so, in general the products will be tensor products. In particular, this means we need a general definition of a group object $G$ in any braided monoidal category (to know what $G$ is supposed to be like). We reproduce the usual definition of a group object, so that $G$ must come equipped with a "multiplication" map $m : G \otimes G \rightarrow G$, an "inverse" map $\iota : G \rightarrow G$ and a "unit" map $u : I \rightarrow G$, where $I$ is the monoidal unit (which takes the role of the terminal object in a topos like $Top$, the unit for $\times$). These need to satisfy the usual properties, such as the monoid property for multiplication:

$m \circ (m \otimes id_G) = m \circ (id_G \otimes m) : G \otimes G \otimes G \rightarrow G$

(usually given as a diagram, but I'm being lazy).

The big "however" is this: in $Sets$ or $Top$, any object $X$ is always a comonoid in a canonical way, and we use this implicitly in defining some of the properties we need.
In particular, there's always the diagonal map $\Delta : X \rightarrow X \times X$, which satisfies the dual of the monoid property:

$(id_X \times \Delta) \circ \Delta = (\Delta \times id_X) \circ \Delta$

There's also a unique counit $\epsilon : X \rightarrow \star$, the map into the terminal object, which makes $(X,\Delta,\epsilon)$ a counital comonoid automatically. But in a general braided monoidal category, we have to impose as a condition that our group object also be equipped with $\Delta : G \rightarrow G \otimes G$ and $\epsilon : G \rightarrow I$ making it a counital comonoid. We need this property to even be able to make sense of the inverse axiom (given in the original post as a diagram). That diagram uses not only $\Delta$ but also the braiding map $\sigma_{G,G} : G \otimes G \rightarrow G \otimes G$ (part of the structure of the braided monoidal category which, in $Top$ or $Sets$, is just the "switch" map). Now, in fact, since any object in $Sets$ or $Top$ is automatically a comonoid, we'll require that this structure be given for anything we look at: the analogs of spaces (like $P$ above), or our group object $G$.

For the group object, we also must, in general, require something which comes for free in the topos world and therefore generally isn't mentioned in the definition of a group. Namely, the comonoid and monoid aspects of $G$ must get along. (This comes for free in a topos essentially because the comonoid structure is given canonically for all objects.) This means that the multiplication and the comultiplication must be compatible, in the bimonoid sense: for a group in $Sets$ or $Top$, this essentially just says that the two ways we can go from $(x,y)$ to $(xy,xy)$ (duplicate, swap, then multiply; or, on the other hand, multiply, then duplicate) are the same.

All these considerations about how honest-to-goodness groups are secretly also comonoids do explain why corresponding structures in noncommutative geometry seem to have more elaborate definitions: they have to explicitly say things that come for free in a topos. So, for instance, a group object in the above sense in the braided monoidal category $Vect = (Vect_{\mathbb{F}}, \otimes_{\mathbb{F}}, \mathbb{F}, flip)$ is a Hopf algebra. This is a nice canonical choice of category. Another is the opposite category $Vect^{op}$ – a standard choice in NCG, since spaces are supposed to be algebras – which would be given the comonoid structure we demanded.

So now, once we know all this, we can reproduce the diagrammatic definition of a principal $G$-bundle above: just replace the product $\times$ with the monoidal operation $\otimes$, the terminal object by $I$, and so forth. The diagrams are understood to be diagrams of comonoids in our braided monoidal category. In particular, we have an action $\rho : P \otimes G \rightarrow P$, which is compatible with the $\Delta$ maps – so in $Vect$ we would say that a noncommutative principal $G$-bundle $P$ is a right-module coalgebra over the Hopf algebra $G$. We can likewise take this (in a suitably abstract sense of "algebra" or "module") to be the definition in any braided monoidal category.

To have the analog of the base space, there needs to be a coequalizer of

$\rho, (id_P \otimes \epsilon) : P \otimes G \rightrightarrows P \stackrel{\pi}{\rightarrow} B$

and the "freeness" condition for the action is likewise defined using a monoidal-category version of the pullback (fibre product) $P \times_B P$.

This was as far as Brzezinski took the idea of synthetic NCG in this particular talk, but the basic idea seems quite nice.
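To make the "groups are secretly comonoids" point concrete, here is a small sketch (an illustrative check in Python/NumPy, not from the talk) verifying the group-object axioms for the group algebra of $\mathbb{Z}/2$ in $Vect$, with $\Delta(g) = g \otimes g$, $\epsilon(g) = 1$, and the "inverse" given by the antipode $S(g) = g^{-1}$ (i.e., the simplest possible Hopf algebra):

```python
import numpy as np

I2 = np.eye(2)

# Group algebra of Z/2 with basis (e, g); tensor basis ordered (e@e, e@g, g@e, g@g).
M = np.array([[1, 0, 0, 1],          # multiplication m : V (x) V -> V
              [0, 1, 1, 0]])         # e*e = e, e*g = g*e = g, g*g = e
D = np.array([[1, 0],                # comultiplication Delta : V -> V (x) V
              [0, 0],
              [0, 0],
              [0, 1]])               # Delta(e) = e@e, Delta(g) = g@g
E = np.array([[1, 1]])               # counit eps : V -> F, eps(e) = eps(g) = 1
U = np.array([[1], [0]])             # unit u : F -> V, u(1) = e
S = I2.copy()                        # antipode ("inverse"): S(g) = g^{-1} = g
SWAP = np.eye(4)[[0, 2, 1, 3]]       # braiding sigma on V (x) V (just the flip)

# Monoid and comonoid axioms.
assert np.array_equal(M @ np.kron(M, I2), M @ np.kron(I2, M))    # associativity
assert np.array_equal(np.kron(D, I2) @ D, np.kron(I2, D) @ D)    # coassociativity

# Bimonoid compatibility: Delta o m = (m (x) m) o (id (x) sigma (x) id) o (Delta (x) Delta).
lhs = D @ M
rhs = np.kron(M, M) @ np.kron(np.kron(I2, SWAP), I2) @ np.kron(D, D)
assert np.array_equal(lhs, rhs)

# Inverse (antipode) axiom: m o (S (x) id) o Delta = u o eps.
assert np.array_equal(M @ np.kron(S, I2) @ D, U @ E)
print("all group-object axioms check out")
```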
In SDG, one can define all sorts of differential geometric structures synthetically, that is, for a general topos: for example, Gonzalo Reyes has gone and defined the Einstein field equations synthetically. Presumably, a lot of what's done in NCG could also be done in this synthetic framework, and transplanted to other categories than the usual choices.

Brzezinski said he was mainly interested in the "usual" choices of category, $Vect$ and $Vect^{op}$ – so for instance in $Vect^{op}$, a "principal $G$-bundle" is what's called a Hopf-Galois extension. Roger Picken did, however, ask an interesting question about other possible candidates for the category to work in. Given that one wants a braided monoidal category, a natural one to look at is the category whose morphisms are braids. This one, as a matter of fact, isn't quite enough (there's no braid $m : n \otimes n \rightarrow n$, because this would be a braid with $2n$ strands in and $n$ strands out – which is impossible). But some sort of category of tangles might make an interestingly abstract setting in which to see what NCG looks like. So far, this doesn't seem to have been done as far as I can see.

### 3 Responses to "Tomasz Brzezinski – "Toward Synthetic Non-Commutative Geometry""

1. Kea Says: February 21, 2011 at 4:41 am

Thanks, the Brzezinski work sounds interesting. Quantum gravity requires both NC and nonassociative structures though, since triality appears in the categorical mass generation mechanism.

1. Jeffrey Morton Says: February 21, 2011 at 2:00 pm

I like Brzezinski's stuff – there are certainly a number of directions in which to take it. Nonassociativity is an interesting one: I tend to think of it as a higher form of noncommutativity (of left- and right-multiplication operators), so whenever NC comes up, it's natural to ask about nonassociativity, too. I haven't spent too much time thinking about nonassociative algebra, though obviously the importance of Lie theory, if nothing else, would imply that one really has to deal with it somewhere. There does appear to be a theory of non-associative geometry (due, as far as I can see, to Sabinin), but I don't know much about it. I imagine it could be treated in a similar synthetic way, with a little extra care in dealing with diagrams – one needs to define left- and right- versions of various concepts, and so on. Generalising the opposite of some category of non-associative algebras would also presumably require a loosening of the type of category in which one is working: the monoidal structure needs a non-trivial associator map, anyway, so possibly it would be better to work in a monoidal 2-category. Has anyone thought about this already?

Triality seems on the surface to be a whole different way to bring non-associative algebras into things, via division algebras, and thus the octonions. This seems to tie in with a theme in higher gerbes (and string theory) of describing algebraic structures by cocycles in the various higher cohomology rings, which are seen as obstructions to making some extension of a given (e.g. Lie) algebra. The higher algebraic gadget uses a cocycle to give a twisting of some operation, with the cocycle condition making things work consistently – as with the associator for the octonions. I don't immediately see how this connects to what I said in the first paragraph.

2. Kea Says: February 21, 2011 at 6:04 pm

No, triality is a means to 'derive' the octonions etc.
It enters at the most basic level of generalizing categorical duality, such as with locales and frames. We have been working with these ideas for many years now, and with some success on the physical side. One must break the monoidal structure to obtain the full parity cube for three qubits. You might like Rios' new 'M theory' paper on Jordan algebras (over the bioctonions, but one can go further). This begins to illustrate the idea that spaces and algebras are united, in contrast to standard NCG where a distinction is maintained. (This paper also puts the final nail in the coffin of string theory, but they don't seem to have noticed yet.) That is, morally speaking, all nice functors are endofunctors, and we should think of Stone duality as an endofunctor for one category (motives) that contains both algebras and spaces. Thus the M theory matrices have an interpretation completely outside linear algebra: a bit like Vaughan Pratt's Chu matrices, but the categorical structure is different. In fact, the 'qudit' ladder is an n-categorical ladder.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 88, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9368216395378113, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/43184/so-do-i-use-this-lorentzs-law-or-which-law-do-i-use/43187
# So do I use this Lorentz law, or which law do I use?

I have difficulty understanding exercise 24 in this document:

Two parallel wires I and II that are near each other carry currents $i$ and $3i$, both in the same direction. Compare the forces that the two wires exert on each other. (a) Wire I exerts a stronger force on wire II than II exerts on I. (b) Wire II exerts a stronger force on wire I than I exerts on II. (c) The wires exert equal magnitude attractive forces on each other. (d) The wires exert equal magnitude repulsive forces on each other. (e) The wires exert no forces on each other.

I think – if you use $F_m=IlB\sin \alpha$, which is the force on a current-carrying wire in a uniform magnetic field – that $F_{II}=I_{II}lB\sin \alpha = 3IlB\sin \alpha > IlB\sin \alpha =F_{I}$, so the answer would be (b). But how is that possible, given Newton's third law (the forces should be equal – but does it even apply here)? And there is no external magnetic field here. So do I use this Lorentz law, or which law do I use?

-

## 3 Answers

The $B$ in these equations refers to the magnetic fields each individual wire creates. These fields are proportional to the individual wires' own currents. In other words, the force of wire I on wire II is $F_{I \to II} = I_{II} \ell B_{I} \sin \alpha$. What is $B_{I}$, how does it relate to $I_I$, and how does it differ from $B_{II}$?

-

So is it $F_{I \rightarrow II} = I_{II}lB_I \sin \alpha = 3I_{I}l \frac{B_{II}}{3} \sin \alpha = I_{I}lB_{II} \sin \alpha = F_{II \rightarrow I}$? – alvoutila Nov 1 '12 at 13:59

So you can use this formula although they are not in any uniform magnetic field? – alvoutila Nov 1 '12 at 14:05

In this case, you can use it because the wires are parallel and equidistant from one another. The magnetic fields only change with distance (not with translation along the wires' axes or rotation), so, essentially, every point on a given wire feels the same magnetic field. – Muphrid Nov 1 '12 at 14:08

So would the answer be (d)? – alvoutila Nov 1 '12 at 14:30

1 All we've talked about to this point are the force magnitudes. What determines the force's direction (attractive vs. repulsive) in this problem? Isn't there a vector form of the Lorentz force law that would give directions to these forces? – Muphrid Nov 1 '12 at 14:33

Keep in mind the following: the current in wire 1 is smaller but interacts with the stronger B field of wire 2. The current in wire 2 is larger but interacts with the weaker B field of wire 1.

-

So it seems like the easiest way to do this problem is to consider three things:

(1) Ampere's law for a wire: $B= \frac{I\mu_0}{2\pi r}$

(2) Lorentz force: $F=qv \times B$

(3) $qv = lq/t= lI \implies F= lI \times B$

Using (1) we can get the magnetic field that will act on each current/wire (a field generated by a current does not act on that same current; a wire will not push itself). Using (3) you can find the force acting on each wire, along with the direction of that force. Good luck!

-
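For what it's worth, the numbers bear this out. Here is a tiny sketch (Python; the current and separation values are invented for illustration) combining Ampere's law $B=\mu_0 I/(2\pi r)$ with $F/\ell = I B$:

```python
import math

mu0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
i = 2.0                # current in wire I, amperes (arbitrary)
r = 0.05               # wire separation, metres (arbitrary)

I1, I2 = i, 3 * i      # the currents i and 3i from the exercise

# Field of each wire at the location of the other: B = mu0 * I / (2 pi r)
B1 = mu0 * I1 / (2 * math.pi * r)   # field of wire I at wire II
B2 = mu0 * I2 / (2 * math.pi * r)   # field of wire II at wire I

# Force per unit length F/l = I * B (current perpendicular to the field)
F_on_II = I2 * B1
F_on_I = I1 * B2

# Both equal mu0 * I1 * I2 / (2 pi r): equal-magnitude attractive forces,
# i.e. answer (c), consistent with Newton's third law.
print(F_on_II, F_on_I)
```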
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9284511804580688, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/88278-solving-power-series-just-seeing-pattern.html
Thread:

1. solving by power series... just seeing the pattern

If I have a denominator that goes 2 4 6 8 ... 2k (all multiplied), what is the pattern and how do I find it? Similarly, if I have a denominator for the odds 3 5 7 ... 2k+1 (all multiplied), what is the pattern for this and how do I get it? Thanks.

2. Originally Posted by pberardi
If I have a denominator that goes 2 4 6 8 ... 2k (all multiplied), what is the pattern and how do I find it? Similarly, if I have a denominator for the odds 3 5 7 ... 2k+1 (all multiplied), what is the pattern for this and how do I get it? Thanks.

Hi. Could you be more precise?

3. This was the diffeq: $y'' + xy' + y = 0$. I have to find a power series solution. After getting a general solution, differentiating, and plugging in, I get a formula. Then I have to start plugging in different values of $n$ to see a pattern so that I can get a power series. When I start plugging in $n$ values, my denominator looks like this for the even values of $a$: $2\cdot 4\cdot 6\cdot 8\cdot 10$. Similarly, for the odd values it is $3\cdot 5\cdot 7\cdot 9$. The numerator is simple so I need not mention it. I need a formula for these two denominators. After some research I know that for the evens the formula is $2^k k!$ and for the odds it involves $(2k+1)!$; however, I do not yet know how to reach that conclusion except by sheer memorization, which doesn't help much. Thanks, and I hope this is clear. If not, please continue to ask if you don't mind. Thank you.

4. I did the problem and think I see what you mean! You can write the coefficients using a number of different notational methods:

$a_{2n+1} = \frac{(-1)^n}{\prod_{k=1}^{n}(2k +1) } \, a_1 = \frac{(-1)^n \, n! \, 2^n}{(2n +1)! } \, a_1 = \frac{(-1)^n }{(2n +1)!! } \, a_1$

and similarly

$a_{2n} = \frac{(-1)^n}{\prod_{k=1}^{n}(2k) } \, a_0 = \frac{(-1)^n }{n! \, 2^n } \, a_0 = \frac{(-1)^n }{(2n)!! } \, a_0 .$

So the three notations used were the product notation (capital Pi), the factorial, and the double factorial, respectively. Hope this is of some help to you!

5. Originally Posted by pberardi
If I have a denominator that goes 2 4 6 8 ... 2k (all multiplied), what is the pattern and how do I find it? Similarly, if I have a denominator for the odds 3 5 7 ... 2k+1 (all multiplied), what is the pattern for this and how do I get it? Thanks.

$2 \cdot 4 \cdot 6 \cdots 2k = 2^k k!$

$3 \cdot 5 \cdot 7 \cdots (2k+1) = \frac{(2k+1)!}{2^k k!}$

CB
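For anyone who wants to double-check the closed forms against the raw products, here is a tiny sketch (Python 3.8+, for `math.prod`):

```python
import math

for k in range(1, 10):
    evens = math.prod(range(2, 2 * k + 1, 2))   # 2 * 4 * 6 * ... * 2k
    odds = math.prod(range(3, 2 * k + 2, 2))    # 3 * 5 * 7 * ... * (2k+1)
    assert evens == 2**k * math.factorial(k)
    assert odds == math.factorial(2 * k + 1) // (2**k * math.factorial(k))
print("both closed forms match the products for k = 1..9")
```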
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9333763122558594, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/94172-normally-distributed-lifespan.html
Thread:

1. normally distributed lifespan

I have the general idea of how to approach this problem; however, I don't understand it completely, so I would like to get some help.

A manufacturer knows that their items have a normally distributed lifespan, with a mean of 10.1 years and a standard deviation of 1.8 years. The lowest 7% of items will last less than how many years?

2. Originally Posted by skorpiox
A manufacturer knows that their items have a normally distributed lifespan, with a mean of 10.1 years and a standard deviation of 1.8 years. The lowest 7% of items will last less than how many years?

You require the value of $x^*$ such that $\Pr(X < x^*) = 0.07$. Get the value of $z^*$ such that $\Pr(Z < z^*) = 0.07$ (from tables or technology). Then it follows from $Z = \frac{X - \mu}{\sigma}$ that $z^* = \frac{x^* - 10.1}{1.8}$. Substitute the value of $z^*$ and solve for $x^*$.

3. Originally Posted by skorpiox
A manufacturer knows that their items have a normally distributed lifespan, with a mean of 10.1 years and a standard deviation of 1.8 years. The lowest 7% of items will last less than how many years?

Compute the z-score for 0.07 using a table or the InvNorm function on a TI graphing calculator. Translate this z-score into $X$ using:

$z = \frac{X - \mu}{\sigma}$, with $\mu = 10.1$, $\sigma = 1.8$, $X$ unknown, $z$ known.

My calculation indicates $X = 7.4435$. Can you confirm?

4. Thank you, guys. By the way, apcalculus, your answer is correct.
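For reference, the same lookup can be done in a couple of lines with an inverse normal CDF (a sketch using SciPy):

```python
from scipy.stats import norm

mu, sigma = 10.1, 1.8
x_star = norm.ppf(0.07, loc=mu, scale=sigma)  # inverse CDF: Pr(X < x*) = 0.07
print(round(x_star, 4))                       # about 7.4436, matching apcalculus
```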
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.962953507900238, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/3567-hey.html
# Thread:

1. ## Hey

I'm in 7th grade and I don't get this at all. This is going to be on our final tomorrow and I'm really confused:

x^2(2x+3y-6) =

2. Originally Posted by Sm0ke
I'm in 7th grade and I don't get this at all. This is going to be on our final tomorrow and I'm really confused:
x^2(2x+3y-6) =

Hello, I'm not quite sure what help you need, but I presume that you should expand this product:

$x^2(2x+3y-6) =2x^3+3x^2y-6x^2$

Greetings, EB

3. Originally Posted by Sm0ke
I'm in 7th grade and I don't get this at all. This is going to be on our final tomorrow and I'm really confused:
x^2(2x+3y-6) =

It's the same thing as the distributive property. Think

$2\left(3+4\right)=2\cdot3+2\cdot4$

Therefore

$x^2(2x+3y-6)=x^2\cdot2x+x^2\cdot3y+x^2\cdot(-6)=2x^3+3x^2y-6x^2$
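If you ever want to double-check an expansion like this, a computer algebra system will do it for you (an optional sketch using SymPy):

```python
from sympy import symbols, expand

x, y = symbols('x y')
print(expand(x**2 * (2*x + 3*y - 6)))   # 2*x**3 + 3*x**2*y - 6*x**2
```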
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9327967762947083, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/2093-conditional-probabiltiy.html
# Thread:

1. ## Conditional probability

Suppose that a bag contains 6 red marbles and 5 yellow marbles. A single marble is picked at random out of the bag and is not put back in the bag; then a second marble is drawn at random out of the bag. Find

a. the probability that both marbles are yellow
b. the probability that the second marble is red
c. the probability that the second marble is red given that the first is red
d. the probability that the second marble is red given that the first is yellow.

I also have to attach a tree diagram with this problem, and I don't understand the multiplying and adding across the diagram.

2. Originally Posted by d.darbyshire
Suppose that a bag contains 6 red marbles and 5 yellow marbles. A single marble is picked at random out of the bag and is not put back in the bag; then a second marble is drawn at random out of the bag. Find a. the probability that both marbles are yellow b. the probability that the second marble is red c. the probability that the second marble is red given that the first is red d. the probability that the second marble is red given that the first is yellow.

1] There are $_5C_2=10$ ways to choose two yellow marbles. There are $_{11}C_2=55$ ways to choose two marbles. Thus the ratio is the probability, which is $\frac{10}{55}=\frac{2}{11}$.

2] Thus, either RR or YR. The probability of choosing a red first is $\frac{6}{11}$ and then another red is $\frac{5}{10}=\frac{1}{2}$; thus the probability of RR is $\frac{6}{11}\cdot\frac{1}{2}=\frac{3}{11}$. The probability of choosing a yellow first is $\frac{5}{11}$ and then choosing a red is $\frac{6}{10}=\frac{3}{5}$; thus the probability of YR is $\frac{5}{11}\cdot\frac{3}{5}=\frac{3}{11}$. Thus, the probability of RR or YR is $\frac{3}{11}+\frac{3}{11}=\frac{6}{11}$.
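As a sanity check on the tree-diagram arithmetic, one can enumerate all ordered draws directly (an illustrative Python sketch covering parts (a) through (d)):

```python
from itertools import permutations
from fractions import Fraction

marbles = 'R' * 6 + 'Y' * 5                        # 6 red, 5 yellow
draws = list(permutations(range(11), 2))           # ordered pairs of distinct marbles

def prob(event, pairs):
    return Fraction(sum(event(marbles[i], marbles[j]) for i, j in pairs), len(pairs))

print(prob(lambda a, b: a == 'Y' and b == 'Y', draws))   # (a) 2/11
print(prob(lambda a, b: b == 'R', draws))                # (b) 6/11

first_red = [(i, j) for i, j in draws if marbles[i] == 'R']
first_yellow = [(i, j) for i, j in draws if marbles[i] == 'Y']
print(prob(lambda a, b: b == 'R', first_red))            # (c) 1/2
print(prob(lambda a, b: b == 'R', first_yellow))         # (d) 3/5
```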
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9593238234519958, "perplexity_flag": "head"}
http://pediaview.com/openpedia/Conjugate_transpose
# Conjugate transpose

In mathematics, the conjugate transpose, Hermitian transpose, Hermitian conjugate, bedaggered matrix, or adjoint matrix of an m-by-n matrix A with complex entries is the n-by-m matrix A* obtained from A by taking the transpose and then taking the complex conjugate of each entry (i.e., negating their imaginary parts but not their real parts). The conjugate transpose is formally defined by

$(\mathbf{A}^*)_{ij} = \overline{\mathbf{A}_{ji}}$

where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and the overbar denotes a scalar complex conjugate. (The complex conjugate of $a + bi$, where a and b are real, is $a - bi$.) This definition can also be written as

$\mathbf{A}^* = (\overline{\mathbf{A}})^\mathrm{T} = \overline{\mathbf{A}^\mathrm{T}}$

where $\mathbf{A}^\mathrm{T}$ denotes the transpose and $\overline{\mathbf{A}}$ denotes the matrix with complex conjugated entries.

Another name for the conjugate transpose of a matrix is the transjugate. The conjugate transpose of a matrix A can be denoted by any of these symbols:

• $\mathbf{A}^*$ or $\mathbf{A}^\mathrm{H}$, commonly used in linear algebra
• $\mathbf{A}^\dagger$ (sometimes pronounced as "A dagger"), universally used in quantum mechanics
• $\mathbf{A}^+$, although this symbol is more commonly used for the Moore–Penrose pseudoinverse

In some contexts, $\mathbf{A}^*$ denotes the matrix with complex conjugated entries, and the conjugate transpose is then denoted by $\mathbf{A}^{*\mathrm{T}}$ or $\mathbf{A}^{\mathrm{T}*}$.

## Example

If $\mathbf{A} = \begin{bmatrix} 3 + i & 5 & -2i \\ 2-2i & i & -7-13i \end{bmatrix}$ then $\mathbf{A}^* = \begin{bmatrix} 3-i & 2+2i \\ 5 & -i \\ 2i & -7+13i\end{bmatrix}$

## Basic remarks

A square matrix A with entries $a_{ij}$ is called

• Hermitian or self-adjoint if A = A*, i.e., $a_{ij}=\overline{a_{ji}}$.
• skew Hermitian or antihermitian if A = −A*, i.e., $a_{ij}=-\overline{a_{ji}}$.
• normal if A*A = AA*.
• unitary if A* = A⁻¹.

Even if A is not square, the two matrices A*A and AA* are both Hermitian and in fact positive semi-definite matrices. The conjugate transpose "adjoint" matrix A* should not be confused with the adjugate adj(A), which is also sometimes called "adjoint". Finding the conjugate transpose of a matrix A with real entries reduces to finding the transpose of A, as the conjugate of a real number is the number itself.

## Motivation

The conjugate transpose can be motivated by noting that complex numbers can be usefully represented by 2×2 real matrices, obeying matrix addition and multiplication:

$a + ib \equiv \Big(\begin{matrix} a & -b \\ b & a \end{matrix}\Big).$

That is, denoting each complex number z by the real 2×2 matrix of the linear transformation on the Argand diagram (viewed as the real vector space $\mathbb{R}^2$) effected by complex z-multiplication on $\mathbb{C}$.

An m-by-n matrix of complex numbers could therefore equally well be represented by a 2m-by-2n matrix of real numbers. The conjugate transpose therefore arises very naturally as the result of simply transposing such a matrix, when viewed back again as an n-by-m matrix made up of complex numbers.

## Properties of the conjugate transpose

• (A + B)* = A* + B* for any two matrices A and B of the same dimensions.
• (rA)* = r*A* for any complex number r and any matrix A. Here r* refers to the complex conjugate of r.
• (AB)* = B*A* for any m-by-n matrix A and any n-by-p matrix B.
Note that the order of the factors is reversed.

• (A*)* = A for any matrix A.
• If A is a square matrix, then det(A*) = (det A)* and tr(A*) = (tr A)*.
• A is invertible if and only if A* is invertible, and in that case (A*)⁻¹ = (A⁻¹)*.
• The eigenvalues of A* are the complex conjugates of the eigenvalues of A.
• $\langle \mathbf{Ax}, \mathbf{y}\rangle = \langle \mathbf{x},\mathbf{A}^* \mathbf{y} \rangle$ for any m-by-n matrix A, any vector x in $\mathbb{C}^n$ and any vector y in $\mathbb{C}^m$. Here, $\langle\cdot,\cdot\rangle$ denotes the standard complex inner product on $\mathbb{C}^m$ and $\mathbb{C}^n$.

## Generalizations

The last property given above shows that if one views A as a linear transformation from the Euclidean Hilbert space $\mathbb{C}^n$ to $\mathbb{C}^m$, then the matrix A* corresponds to the adjoint operator of A. The concept of adjoint operators between Hilbert spaces can thus be seen as a generalization of the conjugate transpose of matrices.

Another generalization is available: suppose A is a linear map from a complex vector space V to another, W; then the complex conjugate linear map as well as the transposed linear map are defined, and we may thus take the conjugate transpose of A to be the complex conjugate of the transpose of A. It maps the conjugate dual of W to the conjugate dual of V.

## Source

Adapted from the Wikipedia article "Conjugate transpose" (http://en.wikipedia.org/w/index.php?title=Conjugate_transpose), available under the Creative Commons Attribution-ShareAlike 3.0 Unported License.
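A short illustration of the definition and of the product-reversal property, as a Python/NumPy sketch:

```python
import numpy as np

A = np.array([[3 + 1j, 5, -2j],
              [2 - 2j, 1j, -7 - 13j]])   # the matrix from the Example section

A_star = A.conj().T                      # conjugate transpose: conjugate, then transpose
print(A_star)                            # matches the 3-by-2 matrix in the Example

# (AB)* = B* A* for a random compatible B; note the reversed factor order.
B = np.random.rand(3, 4) + 1j * np.random.rand(3, 4)
assert np.allclose((A @ B).conj().T, B.conj().T @ A_star)
```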
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8743151426315308, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/68428/why-is-the-output-of-an-lti-system-the-convolution-of-the-input-funtion-and-the-i/68453
## Why is the output of an LTI system the convolution of the input function and the impulse response?

I am looking at the description of LTI systems in the time domain. Intuitively, I'd have guessed the output would be the composition of the input function and some "system function":

$$y(t) = f(x(t)) = (f\circ x)(t)$$

where $x(t)$ is the input, $y(t)$ the output, and $f(x)$ a "system function". Why is it not that way? Could such a "system function" be found for, say, an R-C circuit?

The actual output function $y(t)$ is defined as

$$y(t) = (h * x)(t)$$

where $h(t)$ is the response to a Dirac impulse. This is hard to grasp for me. Why is it so? I have looked at various explanations – drawings of rectangles becoming infinitely narrow – which I sort of understood, but it is still "hard to grasp"! I am looking for a simple explanation in one or two sentences here.

http://en.wikipedia.org/wiki/LTI_system_theory

-

1 I think that the key difference between the transformations $x \mapsto f \circ x$ and $x \mapsto h * x$ is that the output from the former depends only on the instantaneous behaviour of $x$ – one can determine the response right now by knowing only the input *right now* – whereas the latter allows the behaviour of $x$ at all (past) times to have an effect on the present response. The latter behaviour is what one expects out of a real-world circuit. Of course, this doesn't say why convolution (as opposed to any other integral transform – they all exhibit this sort of behaviour) is the 'right' – L Spice Jun 21 2011 at 20:47

thing to do. For that, I rather like Terry Tao's explanation (mathoverflow.net/questions/5892/…) of convolution as a 'blurring' process; but my physics is too weak to know whether one can draw any sort of sensible connection between the optics he describes there and the circuit behaviour in which you're interested. – L Spice Jun 21 2011 at 20:48

## 2 Answers

Well, it's just as simple as this: the output at any moment reflects the effect of the input at just that moment, plus the lingering effect after one second of the input from one second before, plus the lingering effect after two seconds of the input from two seconds before, plus the lingering effect after three seconds of the input from three seconds ago, etc. [And half a second and 2.8 seconds and so on as well...] That is, the sum (or, rather, integral) over all $t$ of "the input from $t$ seconds ago" times "the amount of lingering effect a unit of input contributes after $t$ seconds". That is precisely a convolution of the input signal and the impulse response.

-

This (together with khanacademy.org/video/… and a variety of other documents) helped me actually understand what convolution is. Thanks a lot! Because an R-C circuit can only be described by a Dirac- or step-function response or an ODE in the time domain, my quest for a "system function" f(x) must be futile, it seems. Correct? – NW Patrick Jun 22 2011 at 1:05

1 @NW Patrick, as indicated in my comment above, any response described by a 'system function' yields an output $y$ for each input $x$ such that $y(t)$ depends only on $x(t)$. For example, the constant signal $x : t \mapsto 1$ will produce the same response at time $t = 1$ as the 'ramp' signal $y : t \mapsto t$. Unless you've got a very special system, this just won't happen. – L Spice Jun 22 2011 at 1:17
One way to approach this is through the frequency domain. (Assume all the conditions about continuity of the system needed to make the following work.) Let $L$ denote the input-output operator. For each $f\in\mathbb R$, let $e_f(t)=\exp(2\pi ift)$. Then it is not hard to see that $Le_f=H(f)e_f$ for some constant $H(f)$, the frequency response at $f$. (The $e_f$'s are eigenfunctions, if you like.) Then using the inverse Fourier transform, $$x(t)=\int \hat x(f) e^{2\pi ift} df$$ and the linearity of $L$, we obtain $$(Lx)(t)=\int \hat x(f) H(f) e^{2\pi ift} df,$$ which is the inverse transform of the product $\hat x H$. So $Lx$ must be the convolution of $x$ and $h$, the inverse Fourier transform of $H$.

This is one reason why the Fourier transform and its relatives work so well with linear time-invariant systems, tying the impulse response to the frequency response. Many systems, especially in signal processing, are specified by the frequency response.

Of course, one can also approach this from the time domain using approximation by step functions, etc. The system function for an RC circuit is easy to find either in the time domain (solving an ODE) or in the frequency domain (using the impedances of the components and simple algebra).

-

Thanks a lot. The answer is not that useful to me because I am not familiar with the term "input-output operator". I find $f$ a confusing choice for the name of a constant here. Furthermore, I'd prefer to look at the problem without changing to the frequency domain. As to the "system function": what would it look like? – NW Patrick Jun 22 2011 at 0:17

The input-output operator is just a fancy name for the function that maps the input to the output. The use of $f$, which stands for frequency, is fairly standard in this area. One very important reason the frequency domain is important is that it is the analogue of diagonalizing an operator. The convolution is fairly messy to compute in time, but in the frequency domain it is just a multiplication operator. Also, in many applications, it is the frequency domain that is important. For example, an ideal lowpass filter would be specified by $H(f)=1$ for $|f|\leq f_0$ and $0$ elsewhere. – Steve Jun 22 2011 at 1:29

All right. I was too locked in to use $f$ as the $f$unction letter. Also, our prof has mostly been using $\omega$, the circular frequency, which equals $2\pi f$. I am aware of the advantages of doing things in the frequency domain, but I'd nevertheless like to see the mess I'm avoiding, just once, to have a better understanding of what I'm doing. – NW Patrick Jun 22 2011 at 13:14
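To actually "see the mess", here is a small numerical sketch (invented component values) for an RC low-pass filter, whose impulse response is $h(t) = \frac{1}{RC}e^{-t/RC}$ for $t \ge 0$: convolving the input with $h$ agrees with integrating the ODE $RC\,y' + y = x$ directly.

```python
import numpy as np

RC = 0.5                               # time constant, seconds (arbitrary)
dt = 1e-3
t = np.arange(0, 5, dt)

x = (t > 1.0).astype(float)            # input: unit step at t = 1 (arbitrary choice)
h = (1.0 / RC) * np.exp(-t / RC)       # impulse response of the RC low-pass filter

# Output via the convolution y = (h * x)(t), discretised with step dt
y_conv = np.convolve(h, x)[: len(t)] * dt

# Output via direct forward-Euler integration of RC y' + y = x
y_ode = np.zeros_like(t)
for k in range(1, len(t)):
    y_ode[k] = y_ode[k - 1] + dt * (x[k - 1] - y_ode[k - 1]) / RC

print(np.max(np.abs(y_conv - y_ode)))  # small: only discretisation error remains
```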
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9249280095100403, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Prime_Numbers_in_Linear_Patterns&oldid=35243
# Prime Numbers in Linear Patterns

From Math Images. Field: Number Theory. Created by: Iris Yoon.

Prime numbers in a table with 180 columns: create a table with 180 columns and write down the positive integers from 1 in increasing order, left to right, top to bottom. When we mark the prime numbers on this table, we obtain the linear pattern shown in the figure.

# Basic Description

Arranging natural numbers in a particular way and marking the prime numbers can lead to interesting patterns. For example, consider a table with 180 columns and infinitely many rows. Write positive integers in increasing order from left to right, and top to bottom. If we mark all the prime numbers, we get the pattern shown in the figure. We can see that prime numbers show patterns of vertical line segments, which implies that the prime numbers only appear in certain columns.

# A More Mathematical Explanation

Instead of studying a table with 180 columns, we will study a table with 30 columns, as shown in Image 1.

Image 1

## Construction

First, create a table with 30 columns and sufficiently many rows. Write all positive integers starting from 1 as one moves from left to right, and top to bottom. Then each row will start with a multiple of 30 plus 1, such as 1, 31, 61, 91, 121, ... . If we mark the prime numbers in this table, we get Image 2.

Image 2

## Properties

Theorem 1. All prime numbers appear in columns that have a $1$ or a prime number in the top row. In other words, for every prime number $p$, either $p \equiv 1\pmod {30}$, or there exists a prime number $q$ less than $30$ such that $p \equiv q \pmod {30}$.

Proof. Given any prime number $p$, assume that $p$ is neither congruent to $1 \pmod {30}$ nor to $q \pmod {30}$ for any prime $q$ less than $30$. Then $p \equiv x \pmod {30}$, where $x$ is some integer less than $30$ that is not $1$ and not a prime. The prime factorization of $x$ must contain one of $2, 3,$ and $5$. (If the prime factorization of $x$ contained none of $2, 3,$ or $5$, then the smallest possible value of $x$ would be $7 \cdot 7 =49$, which is greater than $30$.) Thus $x=2^a3^b5^c$, where $a,b,c \ge 0$ and at least one of $a,b,c$ is greater than $0$. Since $p$ is congruent to $x$, we can write $p$ as $p=30n+2^a3^b5^c$, where $n$ is an integer greater than or equal to 1. Then $p=30n+2^a3^b5^c=(2 \cdot 3 \cdot 5)n+2^a3^b5^c$, so $p$ is equal to one of $2(3 \cdot 5 \cdot n+2^{a-1}3^b5^c)$, $3(2 \cdot 5 \cdot n+2^a3^{b-1}5^c)$, or $5(2 \cdot 3 \cdot n+2^a3^b5^{c-1})$ (according as $a$, $b$, or $c$ is positive), which contradicts $p$ being a prime number. Thus $p \equiv 1 \pmod {30}$ or $p \equiv q \pmod {30}$ for some prime number $q$ less than $30$. $\Box$

However, the statement does not generalize to other integer modulo groups. For instance, consider a table with $60$ columns. The number $49$ appears on the first row, and $49$ is not a prime number. However, the column containing $49$ will contain other prime numbers, such as $109$. Moreover, not all integers that are congruent to $1 \pmod {30}$ or to $q \pmod {30}$, where $q$ is a prime number less than $30$, are prime numbers.
For instance, $49$, which is congruent to $19 \pmod {30}$, is not a prime number, but $49$ still appears in the same column as $19$.

Let's call the columns that have a $1$ or a prime number greater than $5$ in the top row the prime-concentrated columns. One can observe that all composite numbers that appear in these prime-concentrated columns, say $49,77,91,119,121,133,143,161,169,\ldots,$ have prime factors that are greater than or equal to $7$. In other words, these composite numbers do not have $2, 3,$ or $5$ as a prime factor.

Theorem 2. Composite numbers that appear in prime-concentrated columns do not have $2,3,$ or $5$ as a prime factor.

Proof. Let $x$ be a composite number that appears in a prime-concentrated column, and assume that $x$ has at least one of $2, 3,$ or $5$ as a prime factor. Since $x$ appears in a prime-concentrated column, $x$ can be written as $x=30n+k$, where $n$ is a positive integer, and $k=1$ or $k$ is a prime number greater than $5$ and smaller than $30$. If $x$ had $2$ as a prime factor, $k$ must also have $2$ as a factor, because $30$ has $2$ as a factor. This contradicts the fact that $k$ is equal to $1$ or is a prime number between $7$ and $30$. The same argument works for the case when $x$ has $3$ or $5$ as a prime factor. $\Box$

Another pattern to notice is that the prime-concentrated columns seem symmetric about the column that contains $15$, which leads to the following observation.

Theorem 3. If $p$ is a prime number less than $30$ and $p$ is not equal to $2,3,$ or $5,$ then $30-p$ is either $1$ or a prime number. (The first case occurs only for $p = 29$.)

Proof. Let $p$ be a prime number less than $30$ that is not equal to $2, 3,$ or $5$, and let $q=30-p$; suppose $q \neq 1$. If $q$ were not a prime, then, being a composite number less than $30$, $q$ would have $2, 3,$ or $5$ as a prime factor. Since $p=30-q$, $p$ would also be divisible by $2, 3,$ or $5$, contradicting our condition that $p$ is a prime number. $\Box$

One can also observe that each prime-concentrated column seems to contain infinitely many prime numbers. In fact, this observation is consistent with Dirichlet's theorem on primes in arithmetic progressions.

Dirichlet's Theorem on Primes in Arithmetic Progressions. Let $a,N$ be relatively prime integers. Then there are infinitely many prime numbers $p$ such that $p \equiv a \pmod {N}$.

The proof of Dirichlet's Theorem is not given on this page. One can easily note that Dirichlet's Theorem implies that each prime-concentrated column contains infinitely many prime numbers.

# Future Directions for this Page

Q: Would it be possible to generalize the above statements to the integers modulo the product of the first $n$ primes? That is, can we generalize the above statements to the case where we create a table with a larger number of columns?
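Theorem 1 (and the location of the prime-concentrated columns) is easy to test empirically. Here is an illustrative Python sketch with a naive trial-division primality test; the bound 100000 is arbitrary:

```python
def isprime(n):
    """Naive trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

allowed = {1} | {q for q in range(2, 30) if isprime(q)}

# Theorem 1: every prime p satisfies p % 30 == 1 or p % 30 == q for a prime q < 30.
assert all(p % 30 in allowed for p in range(2, 100000) if isprime(p))

# The prime-concentrated columns are headed by 1 and the primes 7..29;
# the columns of 2, 3 and 5 contain no further primes (every later entry
# is divisible by 2, 3 or 5 respectively).
print(sorted(r for r in allowed if r == 1 or r > 5))   # [1, 7, 11, 13, 17, 19, 23, 29]
```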
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 117, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9311041235923767, "perplexity_flag": "head"}
http://mathoverflow.net/questions/100082/covering-a-hypercube-with-lines/100216
## Covering a (hyper)cube with lines

Let $K_n$ be the set of vectors $x \in \mathbb{Z}^d$ with each coordinate $x_i$ between $1$ and $n$. For any subset $A$ of $K_n$, let $S(A)$ be the set of points $x \in K_n$ which are on some line containing at least two points of $A$ (in other words, $S(A)$ is the union of the lines passing through - at least - two points of $A$). Such a set $A$ is said to generate $K_n$ if $S(A) = K_n$. Now let $r_d(n)$ be the smallest size of a generating subset of $K_n$.

Question: What are the best known bounds on $r_d(n)$? (The first non-trivial case is $d=2$.)

This problem may be "well-known"; I'm almost sure this question has already been studied, but I didn't find any reference, and Google gives nothing. The trivial bound is $r_d(n) \gg_d n^{\frac{d}{2}-\frac{1}{2}}$: taking a generating subset of size $r_d(n)$, there are at most $O(r_d(n)^2)$ lines to consider, each one intersecting $K_n$ in at most $n$ points, so that $|K_n| \ll r_d(n)^2 \times n$. A refinement of this argument (a typical line contains much less than $n$ points of $K_n$) gives a lower bound $r_d(n) \gg_d n^{\frac{d}{2}-\frac{1}{4}-\frac{1}{4(2d-1)}}$.

## 3 Answers

I think the probabilistic method gives an $A$ of size $O_d(n^{d/2}\sqrt{\log n})$. (Formula updated according to js's comment.) Put every point into $A$ independently with probability $p$. What is the probability that a point $x$ will be in $S(A)$? For any $x$, we can find $\Omega(n^d)$ pairs of points that are all different such that $x$ lies on the line of any pair. (This is true because e.g. for $d=2$ if $x$ is in the bottom-left part of $K_n$, then we can take the $n/4\times n/4$ grid upper-right from it, contract the $n/8 \times n/8$ bottom-left of this grid, and double each point from $x$ to get its pair.) The probability that both points of a fixed pair are in $A$ is $p^2$, the probability that no such pair exists is $(1-p^2)^{n^d}$. So if $n^d(1-p^2)^{n^d}<1$, then we are done using the union bound. Unless I am mistaken this is true if $p>\Omega_d(n^{-d/2}\sqrt{\log n})$. Now of course we cannot be sure about how big $A$ is. But if we replace the above $<1$ with a $<1/2$, then we can even add the condition that $A$ should be at most $pn^d$, for which the probability is $\ge 1/2$. So we get $O_d(n^{d/2}\sqrt{\log n})$ points. Maybe this can be further improved with some more advanced probabilistic methods.

- You can also choose a random subset of a fixed size, say $p n^d$. – Douglas Zare Jun 21 at 18:28
- A nice application of the probabilistic method! Remark: the constant in the $\Omega(n^d)$ depends on $d$, so the final bound should be $O_d(n^{d/2} \log n)$, or $O_d(n^{d/2} \sqrt{\log n})$ if I'm not mistaken (also, $r_d(2) = 2^d$ gives a lower bound for the implicit constant). Another remark: for $d=2$, this is essentially the same as the bound $r_2(n) = O(n)$ (as pointed out below by Gerry Myerson). – js Jun 22 at 15:20

Not an answer, but a comment too long to fit the space. You may be interested to know that for finite projective geometries, the property in question has a dedicated name: namely, a set `$A\subset PG(r,q)$` is called $\rho$-saturating, if every point of $PG(r,q)$ is contained in a subspace generated by $\rho+1$ points from $A$.
However, to my understanding, not much is known about such sets, with the exception of the case where $\rho=1$ and $q=2$. It would be equally natural, of course, to consider the problem for the finite affine geometries: how small can a set `$A\subset{\mathbb F}_q^r$` be, given that every point of `${\mathbb F}_q^r$` is on a line through two points of $A$?

Are you interested in upper bounds? For $d=2$, if $n=2m$ is even, then a centrally placed $2\times m$ bar seems to generate $K_n$. This gives $r_2(n)\le n$, quite far from your lower bound.

- In fact, for any $d\ge 1$ one can get $r_d(n)\ll_d n^{d-1}$ just by considering the points on the boundary. However, this is likely to be very far from the truth. – Seva Jun 21 at 7:57
- More generally, $r_1(n) = 2$ and $r_d(n) \leq n \times r_{d-1}(n)$ (by taking $n$ copies of a $(d-1)$-dimensional generating set). I expect $n^{d/2}$ to be closer to the truth than $n^{d-1}$ (for large $d$), but I suspect a good upper bound might be harder to obtain than lower bounds. – js Jun 21 at 15:59
- For n=2, js gives an exact bound. It may be fruitful to consider $r_d(3)$ and $r_d(4)$ for small $d$. Gerhard "Ask Me About Small Cases" Paseman, 2012.06.21 – Gerhard Paseman Jun 21 at 16:42
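The bar construction is easy to sanity-check by brute force for small $n$: a point lies in $S(A)$ exactly when it is collinear with two distinct points of $A$, which is a cross-product test. The Python sketch below is an added illustration, not part of the thread, and it reads "centrally placed" as the two middle rows:

```python
# Brute-force check of the "centrally placed 2 x m bar" claim for d = 2, n = 2m.
# x is in S(A) iff x is collinear with two distinct points of A (cross-product test).
from itertools import combinations, product

def generates(A, n):
    pairs = list(combinations(A, 2))
    return all(
        any((b[0] - a[0]) * (x[1] - a[1]) == (b[1] - a[1]) * (x[0] - a[0])
            for a, b in pairs)
        for x in product(range(1, n + 1), repeat=2)
    )

for m in range(2, 6):
    n = 2 * m
    c = (n - m) // 2 + 1                      # left column of the centred bar
    A = [(r, c + j) for r in (m, m + 1) for j in range(m)]
    print(n, len(A), generates(A, n))         # |A| = 2m = n, matching r_2(n) <= n
```

The loop starts at $m = 2$ because $m = 1$ (i.e. $n = 2$) is the degenerate case: there the two-point bar fails, consistent with js's remark that $r_d(2) = 2^d$.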
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 74, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9429336786270142, "perplexity_flag": "head"}
http://gmc.yoyogames.com/index.php?showtopic=502086&page=2
# Imaginary Numbers

Started by , Mar 10 2011 06:02 PM. 120 replies to this topic.

chance GMC Member • Reviewer • 5754 posts • Version:GM:Studio Posted 22 March 2011 - 06:59 PM

...as most of people that use GM are kids who have never even heard of them; is it reasonable for YYG to include them?

No, it's not reasonable. My post made that clear. My comments were aimed at Sabriath's uninformed comments about the nature of complex numbers, not his view on whether GM needed them.

To a computer, trying to actual square root a negative number is impossible. We know otherwise.

That's false. Square root finders are software algorithms. And algorithms exist for complex numbers just as they exist for reals.

And, um.. no one knows the final digit of pi (or e), so how do you expect someone to pre-program it into a micro-processor. God that's dumb. Sorry, it's just... dumb.

Do you think mathematical constants are "hard wired" into the silicon? lol.... Edited by chance, 22 March 2011 - 07:01 PM. • 1

LSnK NaN • GMC Member • 1188 posts Posted 22 March 2011 - 07:01 PM

And, um.. no one knows the final digit of pi (or e), so how do you expect someone to pre-program it into a micro-processor. God that's dumb. Sorry, it's just... dumb.

You might even say it's irrational. • 7

Yourself The Ultimate Pronoun • Retired Staff • 7341 posts • Version:Unknown Posted 22 March 2011 - 07:51 PM

but you would still have to convert it into and out of the imaginary space

Such a conversion is trivial since you can easily access the real and imaginary parts of a complex number as easily as you can access the x and y coordinates of an object. In fact, representing rotations using complex numbers was so useful that someone decided to extend the complex numbers to quaternions which have really become the method for representing orientation in 3D...in 1843. And in that case you not only get i, but a j and k as well. While (as I've mentioned) I don't see much utility in adding complex numbers themselves (especially since they can be very easily implemented as two real variables), the addition of quaternions would be very useful, especially if they came with methods for their conversion into rotation matrices for use in 3D.

And algorithms exist for complex numbers just as they exist for reals.

In fact, the algorithms are mostly the same for complex numbers as they are for real numbers.

To a computer, trying to actual square root a negative number is impossible. We know otherwise.

To a computer, anything but a finite subset of the integers is impossible. And yet we somehow managed to make them operate on things resembling real numbers (technically a finite subset of rational numbers). To a computer everything is merely a logical manipulation of a set of bits. • 1

sabriath 12013 • GMC Member • 3147 posts Posted 22 March 2011 - 08:44 PM

In other words, in the minds of most educated people. Complex numbers are fundamental to every field of science and mathematics.

I am educated, I don't think about 'i' every second of my life. As for it being fundamental, ONLY to those fields, and those fields are worthless to humanity (except as hobby)...it's a fun escape from the real to find an abstract way to arrive at the same answer, but it is hardly a necessity of every day life.
When a carpenter puts on a roof, he doesn't calculate square roots of negative numbers to arrive at a pitch (laymen)....up to the electronics designer who works with V=IR and other basic maths to determine circuit paths (and even then he uses SPICE which doesn't use 'i'). Although there are careers and fields of science out there that do make use of 'i', the question is are they getting anywhere? Wasting money to build a giant loop of coiled wires to watch atoms collide hasn't brought us flying cars or immortality (or anything useful except a bunch of papers written in the small community of scientists and hobbyists who care to read it).

Nor is pi, e, or the square root of 2.

So what does that have to do with it?

God that's dumb. Sorry, it's just... dumb. Do you think mathematical constants are "hard wired" into the silicon? lol....

An FPU (now integrated with the CPU) as far back as the Pentium's first days (roughly) contains a command to load pi in the register; as for e and sqrt of 2, I haven't checked any of the new datasheets. But regardless, those are tangible values (although irrational, they are contained in the 'real' side of math). In order to use 'i', you would have to alter how the processor performs all its math operations by checking if the NAN is an 'i' and resorting to complex arithmetic instead. Processors don't do this, so it's not native, which means it has to be built up in software instead (and again, forcing YYG to go that route is absurd).

No, it's not reasonable. My post made that clear. My comments were aimed at Sabriath's uninformed comments about the nature of complex numbers, not his view on whether GM needed them.

My position is that complex numbers are a thought and hobby venture more than practical. In that fact, it makes it useless in game making as well. Instead of voicing my opinion that GM doesn't need complex arithmetic and being shot down by those who have math backgrounds, I figured I would up the ante and show that it is practically useless altogether in life (kill the root and the veins starve).

That's false. Square root finders are software algorithms. And algorithms exist for complex numbers just as they exist for reals.

What? Square root is a processor command (on FPU), and when you do a square root of a negative number, you get a NAN error.

Such a conversion is trivial since you can easily access the real and imaginary parts of a complex number as easily as you can access the x and y coordinates of an object. In fact, representing rotations using complex numbers was so useful that someone decided to extend the complex numbers to quaternions which have really become the method for representing orientation in 3D...in 1843. And in that case you not only get i, but a j and k as well. While (as I've mentioned) I don't see much utility in adding complex numbers themselves (especially since they can be very easily implemented as two real variables), the addition of quaternions would be very useful, especially if they came with methods for their conversion into rotation matrices for use in 3D.

Show me the math that would be required in rotating a point (x,y,z) around an origin (ox,oy,oz) by a set of degrees (dx, dy, dz) in both the real and complex way....then look and see which one the computer will do faster (has fewer operations).

In fact, the algorithms are mostly the same for complex numbers as they are for real numbers.
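That quoted claim, and the rotation challenge a few lines up, can both be made concrete in the 2-D case. The sketch below is an added illustration, not a post from the thread, with Python standing in for GML; it shows that the complex-multiplication rotation is exactly the usual real arithmetic underneath:

```python
# Rotating the 2-D point (x, y) about the origin: the complex one-liner and
# the same operation written out in plain real arithmetic.
import cmath, math

def rotate_complex(x, y, degrees):
    z = complex(x, y) * cmath.exp(1j * math.radians(degrees))
    return z.real, z.imag

def rotate_real(x, y, degrees):
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return x * c - y * s, x * s + y * c   # the four multiplies and two adds

print(rotate_complex(1.0, 0.0, 90))   # ~(0.0, 1.0)
print(rotate_real(1.0, 0.0, 90))      # same numbers
```

Expanding the complex multiply gives exactly the four multiplies and two adds of the real version, so neither formulation is cheaper; the 3-D analogue of this packaging is the quaternion approach mentioned above, sketched at the end of the thread.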
But for a human to write 'i' next to a calculation isn't as hard as it is for a computer to constantly check every single number that goes into a calculation on whether it is real or imaginary ("hard" in this sense is cycles wasted and a loss of a bit of precision for storage).

To a computer, anything but a finite subset of the integers is impossible. And yet we somehow managed to make them operate on things resembling real numbers (technically a finite subset of rational numbers). To a computer everything is merely a logical manipulation of a set of bits.

Now you're just being silly. • 0

HaRRiKiRi GMC Member • GMC Member • 1364 posts Posted 22 March 2011 - 09:19 PM

sabriath: I'm sorry, but you are so wrong in so many aspects that it just makes me sad understanding you are not a troll. If imaginary numbers are just for hobbyists, then sorry, but Electronics use imaginary numbers A LOT. That's the basics of many transformations needed to calculate many aspects of the circuit. For example, no spectral analysis would be complete (or even useful) without taking complex parameters into account. The same goes for almost every other aspect in electronics or any other field of science or technology. You seem like the kind of guy who thinks Pi is invented by man. So I doubt you can call yourself educated (at least in mathematics, physics, electronics or any other field that has imaginary numbers).

What? Square root is a processor command (on FPU), and when you do a square root of a negative number, you get a NAN error.

You think all processor commands are magic? Two registers go in and one comes out? Everything is done by algorithms. There isn't a logical element for sqrt(), so it's usually expanded to a different equation. For example, sqrt can be written as $\sqrt{x} = 2^{\log_2(x)/2}$, and now you can calculate it with a logarithm (which also has a different algorithm which involves bit shifting) and a division by two which is also a bit shift. Basically, logical elements can only do addition, subtraction, multiplication by 2, division by 2 and inversion. Everything else is based on these elements. I once had a book which had algorithms for almost everything (sin, cos, log, ln, powers etc) done with these basic operations.

I don't even want to reply to the rest of your post.

Now you're just being silly.

I doubt you understood what he meant. Edited by HaRRiKiRi, 22 March 2011 - 09:31 PM. • 1

sabriath 12013 • GMC Member • 3147 posts Posted 22 March 2011 - 09:59 PM

but Electronics use imaginary numbers A LOT

Where? I have never used them and I've designed and built a CPU from scratch (complete with rotary assignment, feedforward and back of registers, and pipelines with stalls).

For example, no spectral analysis would be complete (or even useful) without taking complex parameters into account

Oh, you mean waveform electronics, like SETI? Yeah, because that's important.

You seem like the kind of guy who thinks Pi is invented by man

Pi was invented by man. The fact that the circumference of a circle relates to its radius may not have been invented by man...but the number that was derived from those observations was.

So I doubt you can call yourself educated (at least in mathematics, physics, electronics or any other field that has imaginary numbers).

You're right, I'm not educated....merely passing with the highest scores in both AP calc and physics in the district for at least 10 years prior and after and having the, pretty much, born knowledge of circuitry and programming is hardly educated.
That's borderline mentally challenged right?

You think all processor commands are magic? Two registers go in and one comes out? Everything is done by algorithms. There isn't a logical element for sqrt(), so it's usually expanded to a different equation. For example, sqrt can be written as $\sqrt{x} = 2^{\log_2(x)/2}$, and now you can calculate it with a logarithm (which also has a different algorithm which involves bit shifting) and a division by two which is also a bit shift. Basically, logical elements can only do addition, subtraction, multiplication by 2, division by 2 and inversion. Everything else is based on these elements. I once had a book which had algorithms for almost everything (sin, cos, log, ln, powers etc) done with these basic operations.

And? I'm sorry, but I don't see how that relates to me stating that "it is not native to the processor, so it would have to be built up in software" and "you lose 1 bit of precision and wasting time checking every number to make sure it's not imaginary before doing a calculation"? If complex numbers were THAT important, why don't Intel, AMD and others build the next CPU with them native?

I'm just not seeing the point that 'i' brings to the table where _other maths can be used to do the same thing_. As much as I love the beauty of Mandelbrot and Julia sets (and other fractals), to me, that's all they are...beauty, not practicality. They may have some coincidences (like logistic convergences), but again, just shows another way of doing math.

I doubt you understood what he meant.

I understood quite clearly...and I could have retorted with the fact that there is no actual "bit" in the machine either, it's actually just a build up of electrical energy on a small bit of metal which is continually recycled while releasing into other parts to allow even more electrical energy to either be blocked or pass. I didn't because I thought going that deep to create a strawman was silly, and I'm sure he knows it. • 0

HaRRiKiRi GMC Member • GMC Member • 1364 posts Posted 22 March 2011 - 11:18 PM

Where? I have never used them and I've designed and built a CPU from scratch (complete with rotary assignment, feedforward and back of registers, and pipelines with stalls).

You think the CPU is the basics of electronics? The CPU is just a very small part of a very large field. What you did was just a combination of logical elements which you later simulated in a program (because I am sure you didn't physically build it). Did you consider how these elements worked or how the simulation program worked?

Oh, you mean waveform electronics, like SETI? Yeah, because that's important.

Spectrums are the basics of analog electronics, signal transmission, filters and so on. There wouldn't be any mobile phones, satellites, wi-fi or anything else if people didn't understand spectrums. You are clearly not educated in much of electronics. You know only digital 1's and 0's and that is the reason why you don't know the significance of imaginary numbers. You are just as limited as the computers themselves. And if you have this cynical point of view on every human technological advancement then you don't also understand the basics of science as a whole.

Pi was invented by man. The fact that the circumference of a circle relates to its radius may not have been invented by man...but the number that was derived from those observations was.

May not have? Pi is an irrational number (thus infinite) and humans can't come up with something infinite.
If a man invented pi then he would just make it 5 or something (or just 1) to make calculations easier. Pi is the constant 3.1415..., which is true in every part of the galaxy. Even aliens know pi and they have the same value for it. They of course will write it differently, but the value stays.

You're right, I'm not educated....merely passing with the highest scores in both AP calc and physics in the district for at least 10 years prior and after and having the, pretty much, born knowledge of circuitry and programming is hardly educated. That's borderline mentally challenged right?

And then why does the CPU have an instruction set for sqrt? Why can't we program it manually with "other maths"? As previously stated, things like quaternions are very useful in 3D graphics. • 0

chance GMC Member • Reviewer • 5754 posts • Version:GM:Studio Posted 22 March 2011 - 11:40 PM

sabriath: I'm sorry, but you are so wrong in so many aspects that it just makes me sad understanding you are not a troll.

Me too. It's embarrassing to see an adult make such a fool of himself. Reminds me of when I was a boy setting up my first circuit (boy scouts). I concluded that electrical engineering was nothing more than connecting the positive and negative terminals. I guess it's human nature to underestimate the importance of things we don't understand. • 1

xshortguy GMC Member • Global Moderators • 4185 posts • Version:GM:Studio Posted 23 March 2011 - 01:23 AM

Hi guys, let's avoid using personal attacks on people and only discuss imaginary numbers. The scope of the use of imaginary numbers in electric circuits is beyond the scope of the discussion on this forum. Head to another forum to continue that discussion.

With that said, the name imaginary is sort of a poor choice, since the construction of the complex numbers from the real numbers is quite a natural procedure. Without going into too many details, here are the highlights of the construction:

1. Start with the ring of polynomials with real coefficients R[x], i.e. things of the form $\sum a_i x^i$, for some indeterminate x.

2. Consider the ideal I, a subset of R[x], generated by x^2 + 1. One can show that this clearly isn't R[x], since one cannot form the polynomial x from it. Furthermore, one can show that I is a maximal ideal.

3. Since I is an ideal, we can take the quotient ring R[x]/I. Since I is a maximal ideal, R[x]/I is a field. Moreover, since x^2 + 1 has degree two, the field extension [R[x]/I : R] has degree two, so R[x]/I has two basis elements: the coset (1 + I) and the coset (x + I). The first behaves exactly like the real number 1. The latter is not a real number: (x + I)(x + I) = x^2 + I = -1 + I, since x^2 differs from -1 by the element x^2 + 1 of I; in other words, its square is -1. So numbers in this field can be written in the form a (1 + I) + b (x + I), where a, b are real numbers. With a bit of cleanup, we can relabel things in the form a + b i, where (1 + I) is our 1, and (x + I) is our i.

The construction of forming new structures by quotient rings is typical in the subject of abstract algebra. Once you get a feel for this type of construction, you'll see that complex numbers are really just an extension of the real numbers that allows for solutions to given polynomials. • 1

MasterOfKings The True Master • GMC Member • 4888 posts • Version:GM8 Posted 23 March 2011 - 05:21 AM

A processor works just like your brain. It gets a problem, it determines the best way around it, and it solves it.
The square root of -1 is NOT a real; so it doesn't properly exist. Hence, the term imaginary. We can't define it as a number, hence 'i'.

May not have? Pi is an irrational number (thus infinite) and humans can't come up with something infinite. If a man invented pi then he would just make it 5 or something (or just 1) to make calculations easier. Pi is the constant 3.1415..., which is true in every part of the galaxy. Even aliens know pi and they have the same value for it. They of course will write it differently, but the value stays.

The existence of pi was not developed by man; however, pi isn't something that nature developed. The 'concept' (probably the wrong word) was developed by nature. Nature didn't call it pi; we did. And you have no right to speak for aliens, they might not even have discovered (or even use) the value.

Regardless, the value of complex numbers, here, is in its use for GM and (basic) game making. In saying that, it's next to useless (mainly due to its audience). Whether the processor can or cannot use them is neither here nor there. The only point I made was that imaginary numbers don't actually exist. I never said they're useless to the world. -MoK • 0

xshortguy GMC Member • Global Moderators • 4185 posts • Version:GM:Studio Posted 23 March 2011 - 05:30 AM

The square root of -1 is NOT a real; so it doesn't properly exist. Hence, the term imaginary. We can't define it as a number, hence 'i'.

Stop using words that you clearly aren't understanding the meaning of, in this case "exists". The imaginary unit i exists in the same sense that any real number exists. Namely that each number is created in order to solve a particular type of problem from a well-defined set theoretic construction. For example, we invent natural numbers to solve the problem of associating quantities with objects. In this construction we can answer things such as 4 + x = 7. However one thing that we can't do with just natural numbers is that we can't answer a question such as 4 + x = 3. So in order to get around that, one can construct the integers using equivalence relations--a well-defined method of doing so involving Cartesian products of natural numbers with clever equivalence relationships.

In the same manner, once we have real numbers we then have the question of for which x will x^2 + 1 = 0. It isn't hard to show that no such real number exists. However by doing a well-defined algebraic construction (cf Abstract Algebra) or by simply asserting that there is a quantity i such that i^2 = -1, we have created a new system capable of solving such a problem. The terms "real" and "imaginary" are unfortunate terms that often confuse people; they are simply names kept for historical reasons that refer to specific collections of numbers. • 0

sabriath 12013 • GMC Member • 3147 posts Posted 23 March 2011 - 07:55 AM

You think the CPU is the basics of electronics? The CPU is just a very small part of a very large field. What you did was just a combination of logical elements which you later simulated in a program (because I am sure you didn't physically build it). Did you consider how these elements worked or how the simulation program worked?

No, I think the CPU is a big part of everyday electronics, and the knowledge in it is tremendous in comparison to other areas. Although it boils down to 1's and 0's, there is still math involved that is not logical at all (as much as you think a gate passes the information clearly, there is drag on some parts, and the clock speed has to be just right to be able to catch them).
I could go on, but why should I backpedal? And yes, I did _build_ it myself...using the same litho methods that the big boys use, it was a hobby of mine when I was real young.

Spectrums are the basics of analog electronics, signal transmission, filters and so on. There wouldn't be any mobile phones, satellites, wi-fi or anything else if people didn't understand spectrums. You are clearly not educated in much of electronics. You know only digital 1's and 0's and that is the reason why you don't know the significance of imaginary numbers. You are just as limited as the computers themselves. And if you have this cynical point of view on every human technological advancement then you don't also understand the basics of science as a whole.

So you are saying that these "spectrums" cannot be shown in real world math? Because y=2.1844*sin(x) seems like a real wave to me, and subtracting it can filter out that "wavelength"....where exactly does the 'i' come in? Can I get someone who is actually in a field that is useful to the human race and society to come forward and vouch that 'i' is used in their every day life? And actually show me where it applies that absolutely no other math can possibly be used other than imaginary math?

May not have? Pi is an irrational number (thus infinite) and humans can't come up with something infinite. If a man invented pi then he would just make it 5 or something (or just 1) to make calculations easier. Pi is the constant 3.1415..., which is true in every part of the galaxy. Even aliens know pi and they have the same value for it. They of course will write it differently, but the value stays.

The bold text is my point exactly. "Pi" <-- the word and its constant in our lives is our invention...to an alien or any other creature, "pi" and "3.1415" may not have any meaning whatsoever to them.

And then why does the CPU have an instruction set for sqrt? Why can't we program it manually with "other maths"? As previously stated, things like quaternions are very useful in 3D graphics.

Because 'sqrt' is commonly used in everyday life, imaginary numbers are not as common. It's self-defined logic here, I thought you would see that.

@xshortguy: Thanks for the explanation of the imaginary space....buuuuut...where does that fall in line with making 3D math "easier" or "more efficient"?

The only point I made was that imaginary numbers don't actually exist

+1 to that! I'm not being ignorant here, but I feel that you _cannot_ take the square root of a negative number. I understand that when used properly, you can come up with an answer that is in the real, like: sqrt(-1) * sqrt(-1) = -1. Although you cannot take the sqrt of -1, the above formula is derived because you reduced the actual functioning (the square of a sqrt is itself). To me, that doesn't make the imaginary space "exist", it just means that there is an identity shown and proven: sqrt(A) * sqrt(A) = A. 'i' is just used to show that this identity can be used at _some_ point later down the line in order to come up with an answer. If I have '2i * 4' apples, how many do I have? ... exactly. Until you reduce any formula to get rid of any and all 'i' references (and its brethren j, k and others), then you do not have a 'real' number, only a partial answer with a function attachment.

Do not misread me though. I know that humans have invented all these forms of math and complex number arithmetic for our own purposes (and other species may have as well), but I am talking about the physical nature of the number and not the concept.
Just like you cannot touch infinite (which spawns a whole other part of math I won't get into), you cannot touch imaginary numbers either....but when you bring certain parts together, the imaginary parts get reduced out (until then, I consider that 'not existing'). I have still not seen anyone produce to me the 3D mathematics that would show any plausibility for the need of these things in GM? Edited by sabriath, 23 March 2011 - 07:57 AM. • 0

MasterOfKings The True Master • GMC Member • 4888 posts • Version:GM8 Posted 23 March 2011 - 10:07 AM

I admit I'm using the term 'exist' rather loosely. I was referring to them as an object (or whatever you want to call them) in the 'real' world. Sit down with a piece of paper and try to determine the square root of -1. You can go through the world's supply of paper and you'll never reach the answer. Because it simply isn't possible. Now, this is where the whole 'imaginary' thing comes in. It replaces the square root of -1 with 'i'. This opens up a whole world of possibilities; most of which, a large number of people on this planet don't even understand. They serve a purpose; but not one that exists in the 'real' world. Please don't bite my head off and say that they do. I'll simply refer you to the 'sit down with the paper' bit. YOU can't work out the square root of negative 1; so what makes you think that a computer can? Obviously, there's workarounds; it's those workarounds that built up this world as we know it.

Lastly, regardless of the uses of complex numbers; see them, not as they are, but for what purpose they will serve in GM. You may claim they will be useful in 3D mathematics.. but GM is meant as a 2D game engine; what's the point in simplifying 3D programming when, most of the time, we won't use it. -MoK

PS: If you disagree, please don't combat it. Just ignore it. This fight has gone on long enough. Edited by MasterOfKings, 23 March 2011 - 10:09 AM. • 0

chance GMC Member • Reviewer • 5754 posts • Version:GM:Studio Posted 23 March 2011 - 10:49 AM

The scope of the use of imaginary numbers in electric circuits is beyond the scope of the discussion on this forum. Head to another forum to continue that discussion.

My comment was obviously NOT about the use of imaginary numbers in circuits. It was an example of how ignorance leads us to dismiss things we don't understand.

No, I think CPU is a big part of everyday electronics, and the knowledge in it is tremendous in comparison to other areas. ... And yes, I did _build_ it myself...using the same litho methods that the big boys use

Another example of blissful ignorance.
Manufacturing the semiconductor material that comprises the chips themselves, requires detailed understanding of quantum mechanics -- a field heavily dependent on mathematics in the complex plane. Yet you continue to pretend complex numbers are just an unnecessary curiosity. I didn't use quantum mechanics to do it....how do you come off telling me how I came up with my design and build? I'm sure I simply used a positive doped alloy and negative doped alloy, then coated continuously while shaving it flat, applying the next stain level and repeating the process. Where exactly in that process did you get quantum mechanics? Did you mean the actual design phase, where I used simple gate logic pre-forms and reduced the entire process to a simple programming language that used nothing but those gates? That would look something like this: ```//simple sr latch q1265 = r nandp qn1265 qn1265 = s nandn q1265``` I can tell you to 100% certainty that I did not use quantum mechanics at all, nor use the square root of a negative number. This seems to be the heart of your argument, and where you're the most confused. ALL mathematics concepts are conceptual. Can you touch the square root of 2? What about logarithms? Negative numbers? Yes, I can. Irrational numbers might have some precision issues, but they are tangable enough to make an effort at hitting the mark...for example, asking me to present the square-root of 2 from a piece of paper, I can cut out an area roughly equal to it (you cannot do that with 'i'). As for negative numbers, that's simply borrowing, if you say "draw me negative 2 circles," then I would turn to you and say "draw me 2 circles"....now they are negative by reverse....how about that 'i'...nope, still can't do that. Just because you're more familiar with natural number concepts, you think they're different somehow. More "real". They aren't. And btw, stop asking people to prove the GM needs complex numbers. Nobody is saying that, so give up the straw dogs. I know the concepts of 'i', along with a lot of other areas in math. Just because I know it, doesn't mean I like it, nor think it's practical. As for wanting proof, I seriously just want to know because ever since highschool, I have not touched that crap for even 1 second...I want to know where it's useful in society, but more to the fact, I want to know how it relates to 3D math (because when I did 3D, I used basic dot-matrix transformations, I never heard of using imaginary numbers for it and want to see it in action...I'm curious now). If by that "proof" that it shows to be more efficient (which I personally don't believe it will be), then I will admit ignorance to the whole thing and bow into humbleness. • 0 chance GMC Member • Reviewer • 5754 posts • Version:GM:Studio Posted 23 March 2011 - 12:20 PM I can tell you to 100% certainty that I did not use quantum mechanics at all, nor use the square root of a negative number. That comment is like a draftsman saying "I didn't use the value of Pi to draw a circle. I just used a compass." lol... You don't have to understand the principles, to use a recipe. You just have to follow directions. Your "CPU design" followed a recipe for semiconductors developed over decades of complex research, and whose properties can only be explained by quantum mechanics. This seems to be the heart of your argument, and where you're the most confused. ALL mathematics concepts are conceptual. Can you touch the square root of 2? What about logarithms? Negative numbers? Yes, I can. 
Irrational numbers might have some precision issues, but they are tangible enough to make an effort at hitting the mark...for example, asking me to present the square-root of 2 from a piece of paper, I can cut out an area roughly equal to it (you cannot do that with 'i'). As for negative numbers, that's simply borrowing, if you say "draw me negative 2 circles," then I would turn to you and say "draw me 2 circles"....now they are negative by reverse....how about that 'i'...nope, still can't do that.

Just because you're more familiar with natural number concepts, you think they're different somehow. More "real". They aren't. And btw, stop asking people to prove the GM needs complex numbers. Nobody is saying that, so give up the straw dogs.

I know the concepts of 'i', along with a lot of other areas in math. Just because I know it, doesn't mean I like it, nor think it's practical. As for wanting proof, I seriously just want to know because ever since highschool, I have not touched that crap for even 1 second...I want to know where it's useful in society, but more to the fact, I want to know how it relates to 3D math (because when I did 3D, I used basic dot-matrix transformations, I never heard of using imaginary numbers for it and want to see it in action...I'm curious now). If by that "proof" it shows to be more efficient (which I personally don't believe it will be), then I will admit ignorance to the whole thing and bow into humbleness. • 0

chance GMC Member • Reviewer • 5754 posts • Version:GM:Studio Posted 23 March 2011 - 12:20 PM

I can tell you to 100% certainty that I did not use quantum mechanics at all, nor use the square root of a negative number.

That comment is like a draftsman saying "I didn't use the value of Pi to draw a circle. I just used a compass." lol... You don't have to understand the principles, to use a recipe. You just have to follow directions. Your "CPU design" followed a recipe for semiconductors developed over decades of complex research, whose properties can only be explained by quantum mechanics.

This seems to be the heart of your argument, and where you're the most confused. ALL mathematics concepts are conceptual. Can you touch the square root of 2? What about logarithms? Negative numbers? Yes, I can. Irrational numbers might have some precision issues, but they are tangible enough to make an effort at hitting the mark...

You're confusing "tangible" with "familiar". For example, are you comfortable with the concept of a line (length, but no width)? What about a plane (length and width, but no depth)? They may seem familiar, but neither one truly occurs in nature. They are just mathematical constructs. They are no more, and no less, "real" than complex numbers. • 1

paul23 GMC Member • Global Moderators • 3355 posts • Version:GM8 Posted 23 March 2011 - 12:21 PM

Another example of blissful ignorance. Manufacturing the semiconductor material that comprises the chips themselves, requires detailed understanding of quantum mechanics -- a field heavily dependent on mathematics in the complex plane. Yet you continue to pretend complex numbers are just an unnecessary curiosity.

I didn't use quantum mechanics to do it....how do you come off telling me how I came up with my design and build? I'm sure I simply used a positive doped alloy and negative doped alloy, then coated continuously while shaving it flat, applying the next stain level and repeating the process. Where exactly in that process did you get quantum mechanics? Did you mean the actual design phase, where I used simple gate logic pre-forms and reduced the entire process to a simple programming language that used nothing but those gates? That would look something like this:

```//simple sr latch q1265 = r nandp qn1265 qn1265 = s nandn q1265```

I can tell you to 100% certainty that I did not use quantum mechanics at all, nor use the square root of a negative number.

You know, the whole fact you brought electronic engineering to this discussion weakens your points of imaginary numbers being useless? - One of the 'school book' examples of complex numbers is electrical engineering. Transistors, EM-fields, signal analysis...

Just because you're more familiar with natural number concepts, you think they're different somehow. More "real". They aren't. And btw, stop asking people to prove the GM needs complex numbers. Nobody is saying that, so give up the straw dogs. I know the concepts of 'i', along with a lot of other areas in math. Just because I know it, doesn't mean I like it, nor think it's practical. As for wanting proof, I seriously just want to know because ever since highschool, I have not touched that crap for even 1 second...

You're working in the field of EE? Anywhere I see a problem involving sine/cosine especially combined with integration/differentiation I'd stop and ask myself the question: "shouldn't I be doing this with complex numbers instead". Edited by paul23, 23 March 2011 - 03:57 PM. • 0

HaRRiKiRi GMC Member • GMC Member • 1364 posts Posted 23 March 2011 - 03:26 PM

I think discussion with sabriath is not very productive so I won't go any further. I have seen persons like that before (like carpenters who say why the heck someone needs a sqrt to calculate some diagonal.. just use a tape measure). All I can say is that he overrates himself as he clearly isn't as bright as he thinks he is.

Anyway, complex numbers in GM would be just like in C++. A structure with both the real and imaginary parts which can be used in calculations via special functions. The implementation itself wouldn't actually be that hard, but I believe they have bigger ideas.
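That structure idea is small enough to sketch in full. The snippet below is an illustration added outside the thread, written in Python rather than C++ or GML; the point is that a complex value is just two plain reals plus a couple of "special functions", with no processor support for 'i' needed anywhere:

```python
# A complex value as a plain structure of two reals, plus "special functions".
from dataclasses import dataclass

@dataclass
class Cx:
    re: float
    im: float

def cx_add(a, b):
    return Cx(a.re + b.re, a.im + b.im)

def cx_mul(a, b):
    # (a.re + a.im i)(b.re + b.im i), using i*i = -1
    return Cx(a.re * b.re - a.im * b.im, a.re * b.im + a.im * b.re)

i = Cx(0.0, 1.0)
print(cx_mul(i, i))   # Cx(re=-1.0, im=0.0): i squared is -1, via real arithmetic only
```

Everything reduces to ordinary additions and multiplications of the two fields, which is why no special hardware or NaN-checking is required.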
• 0

FakeKraid Total Fraud • GMC Member • 53 posts • Version:GM8 Posted 23 March 2011 - 04:48 PM

Irrelevant discussions of the 'reality' of imaginary or complex numbers aside, there have been some pretty fascinating answers to my question here. But, they raise another question. Why are programmers who are experienced and sophisticated enough to be working with planar rotations and complex data structures bothering with GM, with all its limitations, when they could be making their own engines in a more modern and efficient language? Or, to put that question in a positive way, so as to seem less confrontational, what exactly does GM have to offer a programmer of that level?

I mean, I'm using GM because I have NO programming experience whatsoever. I find that, even using GML exclusively, the things that the GM engine does for me bring otherwise unreachable levels of programming complexity within my admittedly limited reach. Someone with enough understanding of coding to be working with those rather esoteric things wouldn't see that as a benefit, though, would they?

I guess what I'm trying to say is, including imaginary/complex number functionality into GM seems like it would be a lot of trouble on the developers' part to add something that would only be useful to people who, if they really MUST have it, would be perfectly capable of going elsewhere to get it, or just making it themselves. Does that make sense? Edited by FakeKraid, 23 March 2011 - 04:51 PM. • 0

paul23 GMC Member • Global Moderators • 3355 posts • Version:GM8 Posted 23 March 2011 - 05:51 PM

Irrelevant discussions of the 'reality' of imaginary or complex numbers aside, there have been some pretty fascinating answers to my question here. But, they raise another question. Why are programmers who are experienced and sophisticated enough to be working with planar rotations and complex data structures bothering with GM, with all its limitations, when they could be making their own engines in a more modern and efficient language? Or, to put that question in a positive way, so as to seem less confrontational, what exactly does GM have to offer a programmer of that level? I mean, I'm using GM because I have NO programming experience whatsoever. I find that, even using GML exclusively, the things that the GM engine does for me bring otherwise unreachable levels of programming complexity within my admittedly limited reach. Someone with enough understanding of coding to be working with those rather esoteric things wouldn't see that as a benefit, though, would they? I guess what I'm trying to say is, including imaginary/complex number functionality into GM seems like it would be a lot of trouble on the developers' part to add something that would only be useful to people who, if they really MUST have it, would be perfectly capable of going elsewhere to get it, or just making it themselves. Does that make sense?

RAD • 1
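The thread repeatedly asked for, and never showed, the concrete 3D mathematics. As a closing illustration (added here, not a forum post; Python rather than GML), this is the quaternion representation Yourself mentioned: rotating a point p about a unit axis u by an angle is the product q p q^-1, built from nothing but real multiplies and adds:

```python
# Quaternion rotation of a 3-D point p about a unit axis u by `degrees`.
import math

def qmul(a, b):
    # Hamilton product of quaternions given as (w, x, y, z) tuples.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(p, u, degrees):
    t = math.radians(degrees) / 2.0
    q = (math.cos(t), u[0]*math.sin(t), u[1]*math.sin(t), u[2]*math.sin(t))
    qc = (q[0], -q[1], -q[2], -q[3])            # conjugate = inverse for unit q
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(p)), qc)
    return (x, y, z)

print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 90.0))   # ~(0.0, 1.0, 0.0)
```

Operation for operation this is comparable to applying a rotation matrix; the usual argument for quaternions is not raw speed on one rotation but cheap composition and smooth interpolation of orientations.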
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9610522389411926, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/38761/when-is-lebesque-integration-useful-over-riemann-integration-in-physics
# When is Lebesgue integration useful over Riemann integration in physics?

Riemann integration is fine for physics in general because the functions dealt with tend to be differentiable and well behaved. Despite this, it's possible that Lebesgue integration can be more powerfully used even in physical situations that can be solved by Riemann integration. So my question is: In solving physics problems, when is Lebesgue integration useful over Riemann integration?

## 3 Answers

An important example in quantum mechanics is e.g. the Hilbert space $$H~=~L^2(\mathbb{R}^3)$$ of Lebesgue square integrable wave functions $\psi$ in the position space $\mathbb{R}^3$. The Lebesgue square integrable functions (as opposed to just the Riemann square integrable functions) are needed to complete the Hilbert space with respect to the square norm $$||\psi||_2~:=~\sqrt{\int d^3x ~ |\psi(x)|^2}.$$ Concerning completeness, see also this Phys.SE post.

- For physicists a little rusty on their analysis and who find that wiki link dense, I'll add: "completeness" means that given a sequence of vectors (square-integrable functions from $\mathbb{R}^3$ to $\mathbb{C}$ in our case) getting arbitrarily "close" to one another, there exists a limit they are approaching, and that limit is itself in our space. Thus we can write infinite sums and know we're talking about something well-defined. – Chris White Oct 1 '12 at 4:44

In theory: Lebesgue integrable functions form a Banach space, whereas Riemann integrable functions do not. This causes problems in, e.g., quantum mechanics if we try to work with Riemann integrable functions instead of Lebesgue integrable functions... (We want the functions to form a Banach space so we can use good old fashioned linear algebra to solve problems!) When does it matter? Take $$\chi_{\mathbb{Q}}(x)=\begin{cases}1 & x\in\mathbb{Q}\\ 0&\mathrm{otherwise}\end{cases}$$ It is Lebesgue integrable but not Riemann integrable. Riemann integration cannot happen over infinite intervals, e.g., $\int^{\infty}_{0}f(x)\,\mathrm{d}x$ is illegal for Riemann integration. On the other hand, $\mathrm{sinc}(x)$ is Riemann integrable but not Lebesgue integrable. These are all measure-theory statements that are irrelevant to physical explanations, but that mathematicians are conscious of. Consequently...

In practice: For everyday physics, the "symbolic integration" you learn in calculus is perfectly fine. Even when physicists say "We work with $L^{2}(X)$..." we think about integration in the "symbolic manner". It's only if you work on making the path integral rigorous that you need to be careful about "What you mean with integration...".

- Wouldn't $\int_0^\infty f(x)\mathrm{d}x$ be definable as an improper Riemann integral, though, i.e. $\lim_{a\to \infty}\int_0^a f(x)\mathrm{d}x$, for a reasonable function? – David Zaslavsky♦ Oct 1 '12 at 0:29
- For "most reasonable functions" (e.g., impose continuity or something), that's true. So to answer your question, (a) for measure theorists, no; (b) for physicists, yes. It's just...I answered as a measure theorist since it's a measure theoretic question :S – Alex Nelson Oct 1 '12 at 0:33

I would like to point out a completely practical example here, too. You can also need to change the order of integration and summation, or integration and differentiation, in some calculations, i.e., $\int dx\sum_n \to \sum_n \int dx$, or $\int dx\, \partial/\partial t \to \partial/\partial t \int dx$.
While this is a problem with Riemann integration, it works for the Lebesgue integral, under certain assumptions, which are, in physical systems, usually fulfilled. -
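The sinc example above is easy to see numerically: the signed area over $[0, A]$ settles down as $A$ grows (the improper Riemann integral converges to $\pi/2$), while the area of $|\mathrm{sinc}|$ keeps growing without bound, which is exactly the failure of Lebesgue integrability. A small added sketch, assuming numpy is available (the crude Riemann sums are only meant to show the trend):

```python
# Numerical illustration: the signed integral of sinc settles near pi/2,
# while the integral of |sinc| keeps growing (roughly logarithmically in A).
import numpy as np

for A in (1e2, 1e3, 1e4):
    x = np.linspace(1e-9, A, 2_000_000)   # start just above the removable singularity
    dx = x[1] - x[0]
    y = np.sin(x) / x
    print(f"A = {A:8.0f}  signed = {np.sum(y) * dx:.5f}  absolute = {np.sum(np.abs(y)) * dx:.2f}")
```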
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9118548631668091, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/75471/infinitely-many-minimal-models/75562
## Infinitely many minimal models

There are examples of elliptic fiber spaces over a two-dimensional base which have infinitely many relative minimal models (where two abstractly isomorphic models connected by flops are counted separately). The one I know is given by Reid and Kawamata and works by repeatedly flopping two rational curves in a singular fiber. In Matsuki's "Introduction to the Mori Program" he indicates (pg. 366) that this construction can be extended to a non-relative setting to yield a variety $X$ with infinitely many minimal models over Spec k. I haven't managed to make this extension or find it written down, so a couple questions: where can I find an explicit example of a variety with infinitely many minimal models (over Spec k)? What is the Kodaira dimension in this case? Is it possible to find a Calabi-Yau threefold with infinitely many minimal models?

- Calabi-Yau varieties have, by definition, (numerically) trivial canonical bundles so are their own unique minimal models. Perhaps you meant to ask something different? – ulrich Sep 15 2011 at 5:57
- Dear ulrich, I'm not sure I understand your comment. (Terminal) Calabi--Yaus are certainly their own minimal models, but there's no reason they should be unique: if $f:X \dashrightarrow X'$ is a birational map of Calabi--Yaus, then each one is a minimal model of the other, but they need not be isomorphic. – Artie Prendergast-Smith Sep 15 2011 at 13:55
- Also, note that the OP is asking about marked minimal models, which introduces even more non-uniqueness into the picture! – Artie Prendergast-Smith Sep 15 2011 at 13:56
- @Artie: By minimal model of $X$ I assumed one means a variety $Y$ that is the end result of running the MMP on $X$. If $X$ is Calabi-Yau then the canonical bundle is nef so the MMP ends at $X$ itself. But having read the question again I agree that this is probably not the definition the OP has in mind. – ulrich Sep 15 2011 at 14:24
- Dear ulrich: you're right, people aren't always very careful to say exactly what they mean by "minimal model". (Here by "people" I mean "authors", not the OP.) – Artie Prendergast-Smith Sep 15 2011 at 14:33

## 2 Answers

Dear John, One example can be found in this paper by Fryers. This example is a Horrocks--Mumford quintic, in particular a Calabi--Yau threefold. He shows there are infinitely many marked minimal models, but they fall into only 8 (unmarked) isomorphism classes. I think there is another class of examples in higher dimensions in this recent paper of Oguiso. I must admit I haven't looked at the paper too closely, but if I remember correctly a talk I heard him give, there are indeed infinitely many marked models in this case. There is also a nice picture involving the geometry of Coxeter groups, but sadly that didn't seem to make it into the paper.

The references are exactly what I was looking for. Thanks, Artie! I asked the question without logging in and I'm not sure how to accept this answer (or post this as a comment on it instead of an answer in its own right). Is there anything to be done?

- Hi John, you could ask the moderators to merge your two accounts. Or you could simply flag this answer for moderator attention.
– Artie Prendergast-Smith Sep 15 2011 at 23:57 2 Dear John, I made a request on meta for your accounts to be merged. Regards, Matthew – Emerton Sep 16 2011 at 1:54 1
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9494990110397339, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/semigroups+ring-theory
# Tagged Questions

### Rings and idempotent semirings
If $\mathbb{R}$ is the real numbers and x#y=max{x,y} then ($\mathbb{R}$,#,+) is a semiring where ($\mathbb{R}$,#) is a semigroup and + distributes over #. If you have a set R with three distinct ...

### Is the additive semigroup of natural numbers the multiplicative semigroup of a ring?
$\mathbb N$ will denote the set $\{0,1,2,\ldots\}.$ The semigroup $(\mathbb N,+)$ doesn't have a zero element. $\mathbb N^0$ will denote the semigroup $\mathbb N$ with zero adjoined, that is the set ...

### Is there an upper bound to the number of rings that can be obtained from a semigroup with zero by defining an additive operation?
Let $\mathscr S$ be the class of all semigroups with zero. For $(S,\times,0)\in\mathscr S,$ I want to count additive operations $+$ on $S$ such that $(S,+,\times,0)$ is a ring (possibly without ...

### Must a pseudoinverse of a von Neumann regular element be regular?
Background Let $R$ be a ring or a semigroup. We say that $x\in R$ is a von Neumann regular element of $R$ if there exists $y\in R$ such that $$xyx=x.$$ Any $y\in R$ satisfying the above equation is ...

### If the product of two idempotents is idempotent, must the two idempotents commute?
It is a basic fact that when two idempotents $e,f$ in a semigroup $S$ commute, then $ef$ is an idempotent. Is the converse true? Is it true for idempotents in rings?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8735389709472656, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/127832-solve-radical-equation.html
# Thread:

1. ## Solve the radical equation

Hi all, I need help/directions on how to solve this problem: $x-\sqrt3x-2=4$. Thank you much.

2. Just to clarify, do you mean $\sqrt3x-2$ or $\sqrt{3x}-2$ or $\sqrt{3x-2}$?

3. Oh, sorry, the third one. The square root of 3x-2.

4. Ok. Here's my attempt:

$x-\sqrt{3x-2}=4$

$x-4=\sqrt{3x-2}$

$(x-4)^2=\left(\sqrt{3x-2}\right)^2$

$x^2-8x+16=3x-2$

$x^2-11x+18=0$

$(x-9)(x-2)=0$

$x=2$ or $x=9$

x=2 is a solution for the negative square root, because when I take the square root of a number, the result could be positive or negative: $(-2)^2=2^2$, so square rooting gives two possible solutions. The reason I bring this up is that otherwise, when checking, it does not seem to work.

5. Sorry about the latex errors, I've corrected them now.
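As a check (worked out here rather than in the thread), plugging both candidates back into the original equation makes the extraneous root explicit:

```latex
% Checking both candidates in the original equation x - \sqrt{3x-2} = 4:
\begin{align*}
x = 9 &: \quad 9 - \sqrt{3 \cdot 9 - 2} = 9 - \sqrt{25} = 9 - 5 = 4,\\
x = 2 &: \quad 2 - \sqrt{3 \cdot 2 - 2} = 2 - \sqrt{4} = 2 - 2 = 0 \neq 4.
\end{align*}
```

So only $x=9$ solves $x-\sqrt{3x-2}=4$; the candidate $x=2$ instead satisfies $x-4=-\sqrt{3x-2}$, the sign that was discarded at the squaring step.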
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.883447527885437, "perplexity_flag": "middle"}
http://bpchesney.org/?tag=linear-program
# bpchesney.org

A blog by Brian Chesney

## Diversity in Investing

July 7th, 2012

Minimize risk by maximizing diversity. Whether you're a trader deciding which stocks to invest in, an investor deciding which companies to invest in, or a manager trying to decide which projects to support, you can minimize your risk by investing in a portfolio of products. This approach fits very nicely with the notion of diversity developed earlier.

Suppose you have 5 companies seeking investment. An investor can purchase up to 30% of a company. Each company must project its value at the end of the investment period, and the investor has this information, along with the price per share for each company. Different companies have different prices per share and different projected valuations at the end of the investment period. The investor cashes out and earns a share of the value of the company when the investment period is complete, proportional to the amount of the company the investor owns. For example, if the investor owns 3000 of the company's 10000 shares, 30% of the valuation of the company at the end of the investment period goes to the investor.

The investor can only purchase whole shares of the company (no fractional shares; this is like a minimum investment). The investor may not 'short' a company, that is, sell shares of a company that the investor does not own. So the number of shares an investor chooses to purchase must either be 0 or some positive integer less than or equal to the number of shares for sale.

Companies may or may not meet their valuation projections. Companies may go under, resulting in a valuation of 0 at the end of the investment period. The investor does not know how likely a company is to meet its goal or fold altogether. The goal is to avoid modeling this unknown and to develop a robust strategy based on the only two things the investor knows for sure: price per share and the projected valuation. The investor seeks a 10% return and has \$100,000 to invest.

Let's write some equations to figure out what the investor should do. We'll use x to represent the amount of money invested in each company. The restriction on how much of each company the investor can own amounts to a restriction on the amount of money the investor can invest in each company. Let's say that for company 1 the maximum investment is 50000, for company 2 it's 40000, and for company 3 it's 60000.

Each company expects to turn the investment it receives into a return to the investor. We'll use p to represent the percentage return on investment each company claims it will return to the investor at the end of the investment period. If a company claims a 10% return on its investment, then its value for p is 1.1, since it will return the initial investment plus another 10% on top of that. The investor has a goal of a 10% return on the total investment, so the sum of the p values, weighted by the amounts invested, must be at least 110000:

$\sum_i p_i x_i \ge 110000.$

That last equation can be rewritten as $-\sum_i p_i x_i \le -110000$. So now we have our equations in the form $Ax \le b$, where the rows of $A$ collect the per-company investment caps, the total budget, and the (negated) minimum-return constraint.

To find the most diverse solution, the optimization problem is to minimize the peak-to-sum ratio (PSR, defined in the next post below) of $x$ subject to $Ax \le b$.

This is in contrast to the most common way of solving this problem, which is to focus on maximizing return. One way of maximizing return is to ignore the constraint of getting at least 10% on the investment and just get as much as possible; that linear program maximizes $\sum_i p_i x_i$ subject only to the investment caps and the total budget. Another way of maximizing the return is to leave the constraint as a minimum bound on the return.
That just tells the solver of the linear program to quit once a 10% return is reached. That kind of violates the spirit of maximizing your return and is not very illustrative when comparing it to maximizing diversity. We'll treat max return as truly getting as much return as possible, without regard for how difficult the problem is for the solver, and drop the minimum return requirement for max return.

Let's look at the two strategies: max diversity and max return. Let's say that as a company offers more return, its price per share is higher and, even accounting for differences in the number of shares available, that this translates into a higher maximum investment. So the companies with a higher maximum investment are promising a bigger return. Let's say p is:

The max return solution is:

This solution has a projected return of 14.1%. This solution is 3-sparse for N=5, or 60% sparse. It concentrates as much investment as possible into the highest-returning company. Once that investment is maxed out, the rest goes into the second-highest-returning company until the total investment limit is reached.

The most diverse solution is:

It has a PSR of 0.20834 and a return of 10.00016%.

What happens if one of the companies fails? If it's the high-return company, 4, the max return solution, since it is sparse, only returns 33.6% of the investor's money (\$33,600). However, the diverse solution is much more robust and still returns over 86% of the investor's money (\$86,041) if company 4 fails and the others meet their targets. For a 4% gain on return, the risk that you would lose over 50% of your money is clearly not worth it.

Maximizing return could give a sparse solution, even when you don't specifically seek it. The sparse solution is not robust to the risk of total failure, in this case, zero return. The minimum PSR solution sacrifices return to deliver a diverse solution which is more robust to failure; that is, it is much more likely to have an acceptable return.

Notice that I am comparing max return vs. the diverse solution as strategies. Allowing the sparse solution is what makes max return less robust. When considering how to invest, the sparse solution is focusing your efforts, which could promise more return, but is less robust. The diverse solution is hedging your bets. Once the minimum return goal is met, the diverse solution mitigates risk by spreading it out across a portfolio of investments.

Tags: applied math, diversity, integer program, investing, linear program, portfolio, risk, strategy Posted in Uncategorized | Comments Off

## Diversity, mathematically

June 6th, 2012

We know about sparsity for vectors, which is the property of having a lot of 0's (or elements close enough to 0). The 0's in a sparse matrix or vector generally make computation easier. This has been a boon to sampling systems in the form of compressive sampling, which allows recovery of sampled data by using computationally efficient algorithms that exploit sparsity.

The opposite of sparsity is density. A vector whose entries are mostly nonzero is said to be "dense," both in the sense that it is not sparse, as well as being slow and difficult to muddle through calculations on this vector if it is really large. In general, dense vectors are not very useful. Or are they? In this post, I introduce the notion of diverse vectors, a subset of dense vectors, as a more interesting foil to sparsity than simply dense vectors. This post explores a measure of diversity called the peak-to-sum ratio (PSR).
We can use PSR to find the most diverse solution in a linear program, but what can it be used for? We'll find that this linear program optimally distributes quantized, positive units, such as currency or genes. Maximizing diversity, by minimizing PSR, is desirable in several applications:

• Investing
• Workload Distribution
• Product Distribution
• Sensor Data Fusion and Machine Learning

In contrast to some systems which merely allow dense vectors, these systems actually desire the most diverse solution.

Consider the following system of equations:

x1 + x2 + x3 + x4 = 12
x2 - x3 = 0
x4 = 0

The following table gives a partial list of possible solutions (for x1, x2, and x3, since x4 is already given).

| x3 | x2 | x1 |
|----|----|----|
| 0 | 0 | 12 |
| 1 | 1 | 10 |
| 2 | 2 | 8 |
| 3 | 3 | 6 |
| 4 | 4 | 4 |
| 5 | 5 | 2 |
| 6 | 6 | 0 |
| 7 | 7 | -2 |

The sparsest solution is (12, 0, 0, 0). It has the most zeroes in it. It has everything concentrated in one element of the vector, the first element. All other elements are zero, making several computations on that vector faster and easier.

Notice the solution (4,4,4,0). It has very few zeros in it; in fact, none except the one variable that was explicitly set to 0. But there are other solutions that have only one zero, or are, as they say, 1-sparse. For instance, (10,1,1,0) is also 1-sparse. I single out (4,4,4,0) rather than (10,1,1,0) because (4,4,4,0) is more diverse. What makes it more diverse? The (4,4,4,0) vector has the most equitable distribution among its elements, which is, conceptually, the opposite of being sparse. Whereas the most sparse solution has its energy concentrated in the fewest elements, the most diverse solution has its energy distributed to the most elements. In fact, (10,1,1,0) could be considered to be more sparse than (4,4,4,0), as it is closer to having its energy concentrated in one element (first posited in [1]).

So if we're talking about more or less diverse/sparse, this implies that we can quantify it. One way to quantify it is to look at the peak-to-sum ratio (PSR). The peak-to-sum ratio is defined as the ratio of the largest element of x to the sum of all elements of x. PSR can also be expressed as the ratio of two norms, the infinity norm and the first norm:

$\mathrm{PSR}(x) = \frac{\|x\|_\infty}{\|x\|_1}.$

Let's see what this ratio looks like for the solutions we considered above, plus a few more.

| x3 | x2 | x1 | PSR |
|----|----|----|-------|
| 0 | 0 | 12 | 1.000 |
| 1 | 1 | 10 | 0.833 |
| 2 | 2 | 8 | 0.667 |
| 3 | 3 | 6 | 0.500 |
| 4 | 4 | 4 | 0.333 |
| 5 | 5 | 2 | 0.417 |
| 6 | 6 | 0 | 0.500 |
| 7 | 7 | -2 | 0.438 |
| 8 | 8 | -4 | 0.400 |
| 9 | 9 | -6 | 0.375 |
| 10 | 10 | -8 | 0.357 |
| 11 | 11 | -10 | 0.344 |
| 12 | 12 | -12 | 0.333 |
| 13 | 13 | -14 | 0.325 |

The (4,4,4,0) solution appears to have the lowest PSR, until we proceed to solution (13,13,-14,0). As we proceed past (13,13,-14,0), PSR keeps shrinking. In fact, we see that (4,4,4,0) was just a local minimum for PSR.

This is unsatisfying for a couple of reasons. The first is linear programming. We can't use a linear program to minimize PSR and arrive at the most diverse solution, because PSR is not convex; that is, we can't be sure we're on a path to smaller and smaller PSR, because we could encounter a local minimum, like we did at (4,4,4,0).

The second way this is unsatisfying is that (13,13,-14,0) does not seem more diverse than (4,4,4,0). If we're considering the elements of x to be bins of energy, or some other thing that is to be distributed, then having an element that is -14 does not make any sense. Instead of distributing, or deciding how things are to be allocated, we are actually taking them away, which defeats the whole purpose of what we are trying to do.
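As a quick check of the PSR column above, here is a short Python sketch of my own, not part of the original post. To match the tabulated values, the "peak" is taken as the largest signed entry while the denominator is the 1-norm; for nonnegative x this coincides with $\|x\|_\infty/\|x\|_1$:

```python
import numpy as np

def psr(x):
    # Peak-to-sum ratio as tabulated: largest entry over the 1-norm.
    x = np.asarray(x, dtype=float)
    return x.max() / np.abs(x).sum()

# Walk the solution family (x1, x2, x3, x4) = (12 - 2k, k, k, 0).
for k in range(14):
    x = (12 - 2 * k, k, k, 0)
    print(x, round(psr(x), 3))
# PSR dips to 0.333 at (4, 4, 4, 0), rises, then falls below 0.333 again once
# entries go negative -- the local-minimum problem discussed above.
```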
So let’s add in a constraint that x has to be greater than 0. For this particular problem, we can proceed in a linear fashion from the first candidate solution (12,0,0,0) to the most diverse one (4,4,4,0). However, in general, is this problem convex? That is, could we choose another set of equations and be able to march towards the most diverse answer? Consider the definition of convexity for a feasible set of solutions.  A set is convex if, when proceeding from one point to the next, you stay in that set [2].  It helps to rewrite our problem in terms of linear algebra.  We seek the most diverse solution to a set of linear equations, where Ax=b represents our equations we’re trying to solve.  Since x cannot be negative, we know that, Is this convex?  The shaded region in the following graph shows the x’s that satisfy the quantity (PSR) to be minimized when it’s less than or equal to 1. In the example in the graph above, there are only two elements in x.  The possible values of x outside of the shaded region, when either x1 or x2 or both are negative, are not in the set of feasible solutions.  So do the answers in this shaded region form a convex set? Yes, because we can pick two points in the shaded region, and draw a straight line between them that has to stay in the shaded region. Notice if we restrict values of the elements of x to the integers, we still have a convex set!  This result is satisfying because we’ll want to distribute units of energy, currency, etc…, anyway. In the language of linear algebra, where Z represents the set of integers.  So the two constraints we’ve added to our convex optimization problem to find the most diverse solution: 1. Must not contain a negative distribution (x greater than or equal to 0) 2. Must have a base unit of distribution (x must contain integers) We can now use an integer program to perform the optimization.  Restricted to positive, integer values, when we minimize PSR, our linear program will march algorithmically, inexorably to the most diverse solution to a system of linear equations. References [1]  Zonoobi et al. “Gini Index as Sparsity Measure for Signal Reconstruction from Compressive Samples,”  IEEE Journal of Selected Topics in Signal Processing.  vol 5, no. 5, Sept. 2011. [2]  Strang, Gilbert.  Computational Science and Engineering.  Wellesley, MA: Wellesley-Cambridge Press, 2007. Tags: applied math, convexity, diversity, integer program, linear program, sparsity Posted in Uncategorized | Comments Off
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9213479161262512, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=3882480
Physics Forums

## How to Locate the Center of Gravity

Hi, I have some questions regarding locating the center of gravity. I have read some textbooks about finding the COG for a structure/element that is homogeneous; in that case, the COG coincides with the centroid of the volume. Am I right to say that a homogeneous structure/element means that it is made up of the same material? How am I supposed to find the COG if the element is made up of different materials? Lastly, if I know the COG of several components (of different materials) which are subsequently fixed together, am I right to say that it has to be treated as a problem in relation to a non-homogeneous element? My boss has tasked me to find out how this is done but I have difficulties finding relevant reading materials for the above problems. Your help is greatly appreciated. Thanks!! Siukwok

Yes, saying that an object is "homogeneous" means it is made of the same material throughout, so, in particular, the density is constant. If the object is not homogeneous but the density varies continuously, and the geometry of the figure is reasonably simple, you can try integrating the density to find the mass: $M= \iiint_V \rho\, dV$ where $\rho$ is the density function. Then the x, y, and z coordinates of the center of mass are given by
$$\overline{x}= \iiint_V x\rho\, dV/M$$
$$\overline{y}= \iiint_V y\rho\, dV/M$$
$$\overline{z}= \iiint_V z\rho\, dV/M$$
If the object can be divided into several pieces, each being uniform, you can find the center of mass of each piece and then form a "weighted" average of the coordinates, weighted by the mass of each piece.

Hi HallsofIvy, Thanks for your swift reply. However, I probably need some examples to get a better idea of how it is done. I have sketched an object made up of 2 cuboids as attached. The one on top is 50x70x55 mm^3 and the one below is 100x150x40 mm^3. Say the density for the cuboid on top is 7800 kg/m^3 and the density for the cuboid at the bottom is 7000 kg/m^3. The COGs of both cuboids are apparently at the centers of the respective cuboids. How am I supposed to locate the COG of the object as a whole? Thanks! Siukwok

Wish you were able to get it worked out!

First find the center of mass of each block. For a symmetric object made of the same material, it's just the geometric center. Now fix the blocks together and write their center locations in terms of some overall coordinate system. (For instance, if two blocks are combined and then placed flat on a table, you would say the upper block's center is 3 inches from the tabletop, y2 = 3, and the bottom block's center is 0.5 inch from the tabletop, y1 = 0.5; the upper block's center is 5 inches from the table edge, x2 = 5, etc.) Also measure the mass of each block. The center of mass of the two blocks combined is then:
x = (m1 x1 + m2 x2)/(m1 + m2)
y = (m1 y1 + m2 y2)/(m1 + m2)
z = (m1 z1 + m2 z2)/(m1 + m2)
This is what HallsofIvy meant by a weighted average, but this is really just a simplified version of the integrals he gave.

Thank you so much! =)
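For the two-cuboid example in this thread, the weighted average is a short computation. This is my own sketch, and it assumes the top cuboid sits centered on the bottom one (as the attachment presumably showed), so by symmetry only the height of the COG is interesting:

```python
# Composite COG for two stacked cuboids (dimensions in mm, densities in kg/m^3).
blocks = [
    # (width, depth, height, density, z-coordinate of the block's own center)
    (100, 150, 40, 7000, 40 / 2),       # bottom cuboid
    (50,  70,  55, 7800, 40 + 55 / 2),  # top cuboid, resting on the bottom one
]

m_total = moment = 0.0
for w, d, h, rho, zc in blocks:
    m = rho * (w * d * h) * 1e-9        # convert mm^3 to m^3 for the mass
    m_total += m
    moment += m * zc

print(f"total mass = {m_total:.2f} kg, COG height = {moment / m_total:.1f} mm")
# total mass = 5.70 kg, COG height = 32.5 mm
```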
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9418279528617859, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/19584/what-is-the-insight-of-quillens-proof-that-all-projective-modules-over-a-polynom/19598
## What is the insight of Quillen's proof that all projective modules over a polynomial ring are free?

One of the more misleadingly difficult theorems in mathematics is that all finitely generated projective modules over a polynomial ring are free. It involves some of the most basic notions in commutative algebra, and really sounds as though it should be easy (the graded case, for example, is easy), but it's not. The question at least goes as far back as Serre's FAC, but it wasn't proved until 1976, by Quillen EDIT: and also independently by Suslin.

I decided that this is the sort of fact that I should know a rough outline of how to prove, but the paper was not very helpful. Usually when someone kills off a famous conjecture in 5 pages, it's because they've developed some fantastic new piece of machinery people didn't have before. And, indeed, Quillen is famous for inventing some fancy and wonderful machinery, and the paper is only 5 pages long, but as far as I can tell, none of that fancy machinery actually appears in the proof. So, what was it that Quillen saw, that Serre missed?

- It is said that there is an elementary proof from years later. – 7-adic Mar 28 2010 at 5:14
- Bit surprised not to see Suslin's name get appended. Does the en.wikipedia.org/wiki/… give pointers to helpful literature, or are those sources just repeating what you already outlined in your post? – Yemon Choi Mar 28 2010 at 5:16
- Vaserstein gave a more elementary proof of the Quillen-Suslin theorem. This was included in the second and third editions of Lang's Algebra. – Robin Chapman Mar 28 2010 at 7:03
- I didn't think to look at the Wikipedia page, which obviously I should have, but it basically just links to the original papers. Leaving out Suslin was really just a matter of me not knowing. My recollection had been that Serre had proved the theorem, not that he had asked the question, until I did a little Googling and found Quillen's paper. – Ben Webster♦ Mar 28 2010 at 16:51
- T. Y. Lam has a book on the subject. Two, actually. He's a first-class expositor. – David Feldman Dec 14 2010 at 5:58

## 2 Answers

Here is a summary of what I learned from a nice expository account by Eisenbud (written in French), which can be found as number 27 here.

First, one studies a more general problem: Let $A$ be a Noetherian ring, $M$ a finitely presented projective $A[T]$-module. When is $M$ extended from $A$, meaning there is an $A$-module $N$ such that $M = A[T]\otimes_A N$?

The proof can be broken down into 2 punches:

Theorem 1 (Horrocks): If $A$ is local and there is a monic $f \in A[T]$ such that $M_f$ is free over $A[T]_f$, then $M$ is free over $A[T]$ (this statement is much more elementary than what was stated in Quillen's paper).

Theorem 2 (Quillen): If for each maximal ideal $m \subset A$, $M_m$ is extended from $A_m$ as an $A_m[T]$-module, then $M$ is extended from $A$ (on $A$, locally extended implies globally extended).

So the proof of Serre's conjecture goes as follows: Let $A=k[x_1,\cdots,x_{n-1}]$, $T=x_n$, $M$ projective over $A[T]$. Induction (invert all monic polynomials in $k[T]$ to reduce the dimension) + further localizing at maximal ideals of $A$ + Theorem 1 show that $M$ is locally extended. Theorem 2 shows that $M$ is actually extended from $A$, so by induction it must be free.
Eisenbud's note also provides a very elementary proof of Horrocks's Theorem, basically using linear algebra, due to Swan and Lindel (Horrocks's original proof was quite a bit more geometric). As Lieven wrote, the key contribution by Quillen was Theorem 2: patching. Actually the proof is fairly natural: there is only one candidate for $N$, namely $N=M/TM$, so let $M'=A[T]\otimes_A N$ and build an isomorphism $M \to M'$ from the known isomorphisms locally.

It is hard to answer your question: what did Serre miss (-: ? I don't know what he tried. Does anyone know?

Actually, Quillen's proof did contain a fantastic new idea: Quillen patching. It states that if $P$ is a finitely generated projective $R[t]$-module, then the set of all $f$ in $R$ such that (for localizations at the multiplicative systems $\{1,f,f^2,\ldots\}$) $P_f$ is extended from $R_f$ (that is, $P_f = Q \otimes_{R_f} R_f[t]$ for a projective $R_f$-module $Q$) is an ideal of $R$.

Apply this to $R=k[x_1,\ldots,x_d]$. Then the localization of $P$ at the set of all monic polynomials in $R[t]$ is a projective $k(t)[x_1,\ldots,x_d]$-module, whence free by induction. But then $P_g$ is free for some monic polynomial $g$ in $t$. Now take a maximal ideal $m$ of $R$ and consider the extension $P_m$ of $P$ to $R_m[t]$ (the localization at $R-m$). Because $(P_m)_g$ is a free $(R_m[t])_g$-module, it follows from Horrocks's result that $P_m$ is a free $R_m[t]$-module and extended from $R_m$. Whence the Quillen ideal for $P$ equals $R$. Serre's conjecture follows by considering $f=1$ and induction.

- That's right. And when you look at how he proves that, you see that he observes it suffices to consider covers of the spectrum of a ring with just two standard open sets. This elementary observation came as a shock. – Wilberd van der Kallen Mar 28 2010 at 8:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9586499333381653, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/27455/any-use-for-f-4-in-hep-th
# Any use for $F_4$ in hep-th?

In high energy physics, the use of the classical Lie groups is commonplace, and in Grand Unification the use of $E_{6,7,8}$ is also commonplace. In string theory $G_2$ is sometimes utilized, e.g. $G_2$-holonomy manifolds are used to get 4d $\mathcal{N}=1$ susy from M-theory.

That leaves $F_4$ from the list of exceptional simple Lie groups. Is there any place $F_4$ is used in any essential way? Of course there are papers where the dynamics of $d=4$ $\mathcal{N}=1$ susy gauge theory with gauge group $F_4$ are studied as part of the study of all possible gauge groups, but I'm not asking about those.

- If you search for `F(4)` in INSPIRE, you will see a number of papers. There is an old paper by Larry Romans with the construction of an $F_4$ gauged six-dimensional supergravity which was popular in its day. Also I seem to recall a paper with an $F_4$ string theory, probably prompted by the fact that the dimension of the fundamental representation of $F_4$ is 26. – José Figueroa-O'Farrill Oct 7 '11 at 17:15
- Wasn't Romans' F(4) the superalgebra F(4)? – Yuji Oct 7 '11 at 17:33
- Hmm, you're probably right. The stringy $F_4$ was the exceptional Lie algebra, though. I can't seem to locate the paper, though. – José Figueroa-O'Farrill Oct 7 '11 at 17:52

## 1 Answer

$F_4$ is the centralizer of $G_2$ inside $E_8$. In other words, $E_8$ contains an $F_4\times G_2$ maximal subgroup. That's why, by embedding the spin connection into the $E_8\times E_8$ heterotic gauge connection on $G_2$ holonomy manifolds, one obtains an $F_4$ gauge symmetry. See, for example, http://arxiv.org/abs/hep-th/0108219

Gauge theories and string theory with $F_4$ gauge groups, e.g. in this paper http://arxiv.org/abs/hep-th/9902186 depend on the fact that $F_4$ may be obtained from $E_6$ by a projection related to the nontrivial ${\mathbb Z}_2$ automorphism of $E_6$, which you may see as the left-right symmetry of the $E_6$ Dynkin diagram. This automorphism may be realized as a nontrivial monodromy which may break the initial $E_6$ gauge group to an $F_4$, as in http://arxiv.org/abs/hep-th/9611119

Because of similar constructions, gauge groups including $F_4$ factors (sometimes many of them) are common in F-theory: http://arxiv.org/abs/hep-th/9701129

More speculatively (and outside established string theory), a decade ago Pierre Ramond had a dream (http://arxiv.org/abs/hep-th/0112261 and http://arxiv.org/abs/hep-th/0301050) that the 16-dimensional Cayley plane, the $F_4/SO(9)$ coset (note that $F_4$ may be built from $SO(9)$ by adding a 16-spinor of generators), may be used to define all of M-theory. As far as I can say, it hasn't quite worked, but it is interesting. Sati and others recently conjectured that M-theory may be formulated as having a secret $F_4/SO(9)$ fiber at each point: http://motls.blogspot.com/2009/10/is-m-theory-hiding-cayley-plane-fibers.html

Less speculatively, the noncompact version $F_{4(4)}$ of the $F_4$ exceptional group is also the isometry group of a quaternionic manifold relevant for the maximal $N=2$ matter-Einstein supergravity; see http://arxiv.org/abs/hep-th/9708025 In that paper, you may also find cosets of the $E_6/F_4$ type, and some role is also played by the fact that $F_4$ is the symmetry group of the $3\times 3$ matrix Jordan algebra of octonions.

A very slight extension of this answer is here: http://motls.blogspot.com/2011/10/any-use-for-f4-in-hep-th.html
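One quick sanity check on the $F_4\times G_2\subset E_8$ statement above is dimension counting: under this subgroup, the 248-dimensional adjoint representation of $E_8$ branches as $(52,1)\oplus(1,14)\oplus(26,7)$. A trivial script of mine, included just to make the counting explicit:

```python
# Dimension count for the branching of the E8 adjoint under F4 x G2:
#   248 = (52,1) + (1,14) + (26,7)
pieces = [(52, 1), (1, 14), (26, 7)]   # (dim of F4 rep, dim of G2 rep)
total = sum(a * b for a, b in pieces)
print(total)          # 248
assert total == 248   # dim E8
```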
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9180243015289307, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/1545/cpt-and-heat-equation/1548
# CPT and heat equation

I haven't understood this thing: physics is invariant under the CPT transform, but the heat (diffusion) equation $\nabla^2 T=\partial_t T$ is not invariant under time reversal, while it is P invariant. So CPT symmetry would be violated... What have I misunderstood? Thank you

## 2 Answers

The heat equation is a macroscopic equation. It describes the flow of heat from hot objects to cold ones. Of course it cannot be time-reversible, since the opposite movement never happens. Well, I say 'of course', but you actually have stumbled on something important. As you say, the fundamental laws of nature should be CPT invariant, or at least we expect them to be. The reason the heat equation is not CPT invariant is that it is not a fundamental law, but a macroscopic law emerging from the microscopic laws governing the motions of elementary particles. There is, however, a problem here: how does this time asymmetry arise from microscopic laws that are themselves time-reversal invariant? The answer to that is given by statistical mechanics. While the microscopic laws are time-reversible (I'll focus on T, and leave CP aside), not all states are equally likely with respect to certain choices of the macroscopic variables. There are more configurations of particles corresponding to a room filled with air than to a room where all the air is concentrated in one corner. It is this asymmetry that forms the basis of all explanations in statistical mechanics. I hope that clears things up a bit.

- OK! A last question now: the Schrödinger equation isn't invariant under time reversal, because of the first derivative in t. But isn't that a microscopic law? – Boy Simone Dec 2 '10 at 12:54
- Actually, the Schrödinger equation is invariant. But you have to take the complex conjugate of $\psi$. Since $\psi^*$ and $\psi$ have the same probability distributions $|\psi|^2$, the physics remains the same. – Raskolnikov Dec 2 '10 at 12:58
- Great! Thank you :-) – Boy Simone Dec 2 '10 at 13:14
- Nice summary. It's worth noting, however, that this is a deep enough topic that multi-hundred-page books have been written on the matter. – dmckee♦ Dec 2 '10 at 19:33
- @dmckee: Of course, I didn't mean to give an exhaustive explanation. In fact, I left my explanation open to many attacks on purpose. I hope that Boy will think further and come to these questions by himself. But a thorough answer would indeed need a thorough course in statistical mechanics. – Raskolnikov Dec 2 '10 at 22:43

The CPT theorem is not a theorem for all of physics but only for quantum field theory (QFT). Also, CPT invariance doesn't mean that a QFT is necessarily invariant with respect to any of the C, P and T (or PT, TC and CP, which is the same thing by the CPT theorem) transforms individually. Indeed, all of these symmetries are violated by the weak interaction.

Second, even if the macroscopic laws were completely correct, it wouldn't mean that they need to preserve the symmetries of the microscopic laws. E.g. most of microscopic physics is time-symmetric (except for a small violation by the weak interaction), but the second law of thermodynamics (which is universally true for any macroscopic system just by means of logic and statistics) tells you that entropy has to increase with time. We can say that the huge number of particles breaks the microscopic time-symmetry.

Now, the heat equation essentially captures the dynamics of this time asymmetry of the second law. It tells you that temperatures eventually even out, and that is an irreversible process that increases entropy.
- Thank you for the answer! And why, in your example, does the huge number of particles break the microscopic time-symmetry? Why don't macroscopic effects preserve the microscopic CPT invariance of quantum field theory? – Boy Simone Dec 2 '10 at 12:39
- @Boy: that has to do with statistical mechanics. You should really ask this as a separate question because the answer is not completely simple. But in short: any given macroscopic state (given e.g. by energy and pressure) of the system can be realized by many microscopic states. Now your answer boils down to basic questions in probability theory: the more microscopic states there are, the more likely the resulting macroscopic state is. So the system is more likely to move from a less probable state to a more probable state and not the other way. – Marek Dec 2 '10 at 12:46
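As a small illustration of the time asymmetry discussed above, one can check a particular solution of the heat equation under $t \to -t$. This is my own minimal sympy sketch, not part of the thread:

```python
import sympy as sp

x, t = sp.symbols('x t')
T = sp.exp(x + t)   # a simple exact solution of the heat equation

# T solves dT/dt = d^2T/dx^2:
print(sp.simplify(sp.diff(T, t) - sp.diff(T, x, 2)))  # 0

# Its time reversal U(x, t) = T(x, -t) does not; it instead solves the
# equation with the sign of the time derivative flipped (anti-diffusion):
U = T.subs(t, -t)
print(sp.simplify(sp.diff(U, t) - sp.diff(U, x, 2)))  # -2*exp(x - t), not 0
print(sp.simplify(sp.diff(U, t) + sp.diff(U, x, 2)))  # 0
```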
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940663754940033, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/115477/on-well-separated-point-sets-in-the-plane/115664
## On well separated point sets in the plane

Let us say that a finite set $A$ in the plane is $1$-separated if: 1) it has an even number of points; 2) no open ball of diameter $1$ contains more than $|A|/2$ points.

For a $1$-separated set $A$ define $G(A)$ to be the graph where two points $x,y$ in $A$ are joined by an edge iff the distance between them is at least $1$.

Question: can one find a finite set of graphs $G_1,\dots,G_n$ such that any $1$-separated set $A$ can be partitioned into non-empty $1$-separated sets $A_1,\dots,A_k$ such that each $G(A_i)$ is isomorphic to one of the $G_j$'s?

Comment: The definition makes sense on the real line (the ball of diameter $1$ is replaced by an interval of length $1$). In that case we can take $n=1$ and $G_1$ to be a graph on two vertices joined by an edge (that is, $G(A)$ contains a matching).

## 2 Answers

I think Domotorp is correct. Take a regular $(2n-1)$-gon such that its longest diagonal is 1, along with its center. Then $A$ cannot be partitioned, and $G(A)=C_{2n-1}$.

- Thx Alfred! ps. I suppose you answered instead of commenting as you did not have enough rep points. But I wonder how come both of our answers have one upvote right now. I have not voted on yours, meaning either you have not voted on mine although you think I am correct, or someone else thinks you're correct but I'm not... – domotorp Dec 7 at 20:01
- I have not voted on either of your or Alfred's answers. My favorite is Alfred's because his description is harder for me to mess up or misconstrue. Gerhard "It Is Just My Opinion" Paseman, 2012.12.07 – Gerhard Paseman Dec 7 at 20:04

No, there is a counterexample. Take a circle whose diameter is slightly larger than 1, put $|A|-1$ points evenly around its boundary, and put the last point at its center. This set will be 1-separated, but no matter how you partition it, the part containing the center will not be 1-separated.

- Uh, this example is not 1-separated. (E.g. take 2 or 4 points to start.) Gerhard "Ask Me About System Design" Paseman, 2012.12.06 – Gerhard Paseman Dec 7 at 3:55
- If you choose the diameter of the circle appropriately, then it is, imo, but of course I might be wrong. – domotorp Dec 7 at 11:01
- It is possible I misunderstand. My assumption was that the longest diagonal was less than 1 because the diameter was very close to 1. As Alfred mentions, there is a choice of A following your suggestion in which G(A) is an odd cycle. Gerhard "Having A Problem With Focus" Paseman, 2012.12.07 – Gerhard Paseman Dec 7 at 17:02
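For anyone who wants to check the graph structure in Alfred's construction numerically, here is a small Python sketch of my own. It only verifies that $G(A)$ is an odd cycle plus an isolated center; the 1-separatedness itself still needs a geometric argument:

```python
import itertools
import math

n = 4                 # gives a regular 7-gon plus its center, |A| = 8
m = 2 * n - 1
# Circumradius chosen so the longest diagonal (step (m-1)/2) has length 1.
R = 1 / (2 * math.sin((m // 2) * math.pi / m))
pts = [(R * math.cos(2 * math.pi * j / m), R * math.sin(2 * math.pi * j / m))
       for j in range(m)]
pts.append((0.0, 0.0))                    # the center, index m

# Edges of G(A): pairs of points at distance >= 1 (with a tiny tolerance).
edges = [(i, j) for i, j in itertools.combinations(range(len(pts)), 2)
         if math.dist(pts[i], pts[j]) >= 1 - 1e-9]

print("center degree:", sum(m in e for e in edges))        # 0: center isolated
print("vertex degrees:", [sum(v in e for e in edges) for v in range(m)])
# all 2: the longest diagonals form the odd cycle C_7
```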
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9591383337974548, "perplexity_flag": "head"}
http://theoryofcomputing.org/articles/v003a011/
http://theoryofcomputing.org     ISSN 1557-2862
Endorsed by ACM SIGACT

Volume 3 (2007) Article 11 pp. 211-219

The Randomized Communication Complexity of Set Disjointness

by Johan Håstad and Avi Wigderson

Published: October 15, 2007

Keywords: communication complexity, randomized protocol, set disjointness
Categories: short, complexity theory, algorithms, communication complexity
ACM Classification: F.2.2
AMS Classification: 68Q25

Abstract: We study the communication complexity of the disjointness function, in which each of two players holds a $k$-subset of a universe of size $n$ and the goal is to determine whether the sets are disjoint. In the model of a common random string we prove that $O(k)$ communication bits are sufficient, regardless of $n$. In the model of private random coins $O(k + \log {\log n})$ bits suffice. Both results are asymptotically tight.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8834006190299988, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/27929/list
## Return to Question 3 typo Motivation: In Razborov and Rudichs article "Natural proofs" they define a class of proofs they call "natural proofs" and show that under certain assumptions you can't prove that $P\neq NP$ using a "natural proof". I know that this kind of results is common in complexity theory, but I don't know any good examples from other fields. This is why I ask: Question: Can you give an example of a statement S that isn't know known to be unprovable (it could be an unsolved problem or but it could also be a theorem), a promising-looking class of proofs and a proof that a proof from this class can't prove S. I'm interested in both famous unsolved problems and in elementary examples, that can be used to explain this kind of thinking to, say, freshmen. 2 "the" -> "a" and checked "community wiki"; [made Community Wiki] 1 # Examples of statements that provably can't be proved using the promising looking method Motivation: In Razborov and Rudichs article "Natural proofs" they define a class of proofs they call "natural proofs" and show that under certain assumptions you can't prove that $P\neq NP$ using a "natural proof". I know that this kind of results is common in complexity theory, but I don't know any good examples from other fields. This is why I ask: Question: Can you give an example of a statement S that isn't know to be unprovable (it could be an unsolved problem or but it could also be a theorem), a promising-looking class of proofs and a proof that a proof from this class can't prove S. I'm interested in both famous unsolved problems and in elementary examples, that can be used to explain this kind of thinking to, say, freshmen.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9663417339324951, "perplexity_flag": "middle"}
http://cs.stackexchange.com/questions/9769/what-is-the-relation-between-functors-in-sml-and-category-theory
# What is the relation between functors in SML and category theory?

Along the same lines as this statement by Andrej Bauer in this answer:

The Haskell community has developed a number of techniques inspired by category theory, of which monads are best known but should not be confused with monads [in category theory].

What is the relation between functors in SML and functors in category theory? Since I don't know about the details of functors in other languages such as Haskell or OCaml, if there is info of value then please also add sections for other languages.

## 3 Answers

Categories form a (large) category whose objects are the (small) categories and whose morphisms are functors between small categories. In this sense functors in category theory are "higher size morphisms". ML functors are not functors in the categorical sense of the word. But they are "higher size functions" in a type-theoretic sense.

Think of concrete datatypes in a typical programming language as "small". Thus `int`, `bool`, `int -> int`, etc. are small, classes in Java are small, as well as structs in C. We may collect all the datatypes into a large collection called `Type`. A type constructor, such as `list` or `array`, is a function from `Type` to `Type`. So it is a "large" function. An ML functor is just a slightly more complicated large function: it accepts as an argument several small things and it returns several small things. "Several small things put together" is known as a structure in ML.

In terms of Martin-Löf type theory we have a universe `Type` of small types. The large types are usually called kinds. So we have:

1. values are elements of types (example: `42 : int`)
2. types are elements of `Type` (example: `int : Type`)
3. ML signatures are kinds (example: `OrderedType`)
4. type constructors are elements of kinds (example: `list : Type -> Type`)
5. ML structures are elements of kinds (example: `String : OrderedType`)
6. ML functors are functions between kinds (example: `Map.Make : Map.OrderedType -> Map.S`)

Now we can draw an analogy between ML and categories, under which functors correspond to functors. But we also notice that datatypes in ML are like "small categories without morphisms"; in other words, they are like sets more than they are like categories. We could use an analogy between ML and set theory then:

1. datatypes are like sets
2. kinds are like set-theoretic classes
3. functors are like class-sized functions

- +1 "But they are "higher size functions" in a type-theoretic sense." – Guy Coder Feb 15 at 0:49

A Standard ML structure is akin to an algebra. Its signature describes an entire class of algebras of similar shape. A Standard ML functor is a map from a class of algebras to another class of algebras. An analogy is, for instance, with functors such as $F : {\bf Mon} \to {\bf Grp}$, which adds an inverse operation to monoids, or $F : {\bf Ab} \to {\bf Rng}$, which adds a multiplicative monoid to abelian groups to make rings.

Most of these ideas were worked out in a series of papers by Burstall and Goguen in designing a specification language called CLEAR (references c5 and c6 on the DBLP page). David MacQueen was working jointly with Burstall and Sannella at that time, and was intimately familiar with the issues. The Standard ML module system is based on these ideas.

What most people would wonder is: what about morphisms? Category-theoretic functors have an object part and a morphism part. Do Standard ML functors have the same? The answer is YES and NO.
• The YES part of the answer applies if the structures are first-order. Then there are homomorphisms between different structures of the same signature, and Standard ML functors automatically map them to homomorphisms of the result signature.
• The NO part of the answer applies when the structures have higher-order operations.

Does this mean that Standard ML is deviating from category theory? I don't think so. I rather think that Standard ML is doing the right thing, and category theory is yet to catch up. Category theory doesn't yet know how to deal with higher-order functions. Some day, it will.

- "Category theory doesn't yet know how to deal with higher-order functions." That sounds like another question, because I thought category theory could do it all as a foundation. – Guy Coder Feb 15 at 18:38
- The issue with higher-order functions is simple enough to state. A type constructor like $T(X) = [X \to X]$ is not a functor. It should have been. A polymorphic function like ${\it twice}_X : T(X) \to T(X)$ is not a natural transformation. It should have been. If you read Eilenberg and MacLane, the intuitions they present cover those cases. But their theory doesn't. Theirs was a great paper for 1945. But, today, we need more. – Uday Reddy Feb 15 at 19:08
- "A Standard ML structure is akin to an algebra". Aren't functors slightly more general than that? Nothing prevents a structure from containing unrelated objects (types, values & functions), i.e. not forming an algebra. – didierc Feb 15 at 21:33
- @didierc A signature for algebras consists of one or more sorts (like our types), one or more operations (like our functions), and optionally some axioms (like our specifications). An algebra for the signature picks particular sets for those sorts, and particular functions for those operations, such that the axioms are satisfied. SML signatures and structures are precisely such things, except that SML allows higher-order operations whereas algebra doesn't. – Uday Reddy Feb 15 at 22:30

There is, to the best of my knowledge, no formal relation between functors in category theory and functors in ML (SML or OCaml; they're close enough for our purpose here). In category theory, functors are functions that operate on objects. They are one level above morphisms, which are often functions that operate on elements (many categories have objects that are sets with some algebraic structure, and arrows that are homomorphisms between these structures). An ML functor is a function that operates on modules, one level above the functions that operate on core-language values. I think the resemblance stops here.

ML functors were baptized by David MacQueen in his 1985 revision of Modules for Standard ML (citeseerx), which appeared in the Polymorphism Newsletter (the original paper used the expression "parametric module"; later publications tend to use the adjective "parametrized"). Unfortunately, I can't locate a copy of that paper. In his 1986 paper Using Dependent Types to Express Modular Structure (citeseerx) he gives the name as established.

- Functors are not just functions on objects, they map morphisms as well. Functors are "morphisms between categories". – Andrej Bauer Feb 14 at 22:45
- @AndrejBauer Yes, functors are functions on objects. Not every function on objects is a functor, but that's a secondary consideration here. – Gilles♦ Feb 14 at 22:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417163133621216, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/118679/multiple-vs-single-linear-regression
# Multiple vs Single Linear Regression

I'm having trouble understanding the relationship between multiple and single linear regression. I have six variables $(x_1, \dots, x_6)$ I'm using in my model. If I check each one individually against $y$, all but $x_2$ come up significant (and four of those are $p < 0.001$). However, when all six are used in a multiple linear regression model, only two come up as significant. This seems somewhat counterintuitive to me. I understand that multiple linear regression will look at how much each factor contributes to the overall model, but in the single regressions, it seemed like several variables were very important. Does it possibly have to do with scaling? In the single regressions, each variable produced a very different slope.

## 1 Answer

This makes total sense, since your regressors aren't orthogonal to each other. Refer to this Wikipedia article for a reference on collinearity: http://en.wikipedia.org/wiki/Multicollinearity

- OK, makes sense. I'm reading the "remedies" section, and I'm curious what the preferred solution is. It seems silly to not include certain values in my model, and the suggestion of "collect more data" is impossible in my case. – allie Mar 10 '12 at 22:16
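A small synthetic illustration of this effect (my own sketch with statsmodels, not the asker's data): two nearly collinear predictors are each highly significant alone, yet neither need be significant jointly.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# A common driver z, and two predictors that are nearly the same signal.
z = rng.normal(size=n)
x1 = z + 0.05 * rng.normal(size=n)
x2 = z + 0.05 * rng.normal(size=n)
y = z + rng.normal(size=n)

# Individually, each predictor is highly significant...
for x in (x1, x2):
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print("single-regressor p-value:", fit.pvalues[1])

# ...but jointly neither may be: they carry nearly the same information, so
# the model cannot attribute the effect to either one alone.
X = sm.add_constant(np.column_stack([x1, x2]))
print("joint p-values:", sm.OLS(y, X).fit().pvalues[1:])
```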
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9494125247001648, "perplexity_flag": "middle"}
http://physics.stackexchange.com/tags/astronomy/new
# Tag Info

## New answers tagged astronomy

### Parallax, obliquity, precession, and Orion?
If I understand the original question correctly, it can be formulated as follows: Does the ecliptic latitude of a star (its angular distance from the ecliptic) change with time? The short answer (known already to Tycho Brahe) is yes, it does. Contrary to what previous comments and answers suggest, the ecliptic plane does (significantly) change its position in space ...

### Parallax, obliquity, precession, and Orion?
The effect that you're describing is extremely small. Have a look at the following figure: Here you see the position of the Sun from a location $L$ on Earth. Let's call $R_\oplus$ the radius of the Earth, $\Delta$ the distance between the Earth and the Sun, and $\varepsilon$ the obliquity of the Earth. The angle $\delta$ is the declination of the Sun at a ...

### Do all known planets and moons have magnetic field?
Our two closest planetary neighbors -- Venus and Mars -- have no significant magnetic fields. In fact, the most recent numbers I know of for an Earth-like (dipole) field on Mars put its strength at no more than 1/10000th the strength of Earth's. On the other hand, Jupiter's magnetic field is about 20000 times stronger than ours. There is no reason ...

### Parallax, obliquity, precession, and Orion?
There would be no effect on the image of Orion due to the obliquity of Earth. Keep in mind that the obliquity you've mentioned here is the angle of Earth's axial tilt with respect to the ecliptic. The angle of the ecliptic is relatively unchanging. While it may be true that at certain locations on Earth we may see Orion shifted above or below our equatorial ...

### When is the right ascension of the mean sun 0?
No, the right ascension of the mean Sun is NOT zero at the vernal equinox. It is in fact nearly identical to the ecliptic longitude of the mean Sun (the difference is due to UT vs ephemeris time), and this is defined such that it coincides with the ecliptic longitude of the apparent Sun when the Earth is at perihelion. So that should be the starting time to ...

### Curiosity Rover (MSL): current coordinates
For the layman interested in coordinates, go here and then click on the map link for any given day: http://curiosityrover.com/tracking/drivelog.html Then you're in the "Google Mars" view, with the rover's path displayed. Of course, you could figure out something to get a list of coordinates in a standard map format. But if you're looking for something ...

### Anti-Matter Black Holes
According to Thorne (Black Holes and Time Warps), all matter that approaches the black hole singularity is reduced to a common degenerate form - matter and anti-matter alike. The way I interpret it, matter ceases to retain any resemblance to what existed outside the black hole. The attributes that distinguish matter and anti-matter are stripped away. ...

### Distance away from earth to see it as a full disk [duplicate]
How far away from earth's surface do we need to go to see its full hemispherical area? The strict mathematical answer is: no matter how far you go away from earth, you will never see its full hemisphere. So a better question is the one that generalizes your remark "I am interested to know how much of the earth you can see from the International Space ...

### Distance away from earth to see it as a full disk [duplicate]
Someone outside a sphere, looking at the sphere, will always see a "full disk"!
That's one of the properties of a sphere; it looks like a circle from every direction and distance. The two questions seem to be: 1) How large will that disk appear in the view of the person outside that sphere? Right now, I just looked straight down and noted that the ...

3 ### Distance away from earth to see it as a full disk [duplicate] It seems to me that you have calculated the distance to the horizon rather than the altitude. The altitude would be $\frac{r}{\sin(60^\circ)} - r = r \times 0.1547 = 986\text{ km}$. Update: From within a spacecraft the windows would restrict the field of view, but in 1966 on the Gemini 11 mission Richard Gordon did a spacewalk at an altitude of 1369 km where the ...

1 ### Is it possible that the universe might not be speeding up expansion? First of all, galaxies don't shrink. If our own galaxy were shrinking, then we would be moving towards our galactic centre, and we would observe a blueshift in that direction. Second, the accelerated expansion can be determined from the relation between redshift and brightness of distant supernovae. Neither redshift nor brightness would be affected by ...

4 ### Is it possible that the universe might not be speeding up expansion? I don't see any logical connection to accelerating expansion. If shrinking of galaxies could explain away the acceleration of the expansion, then it could also explain away the expansion itself. Regardless of whether we're talking about expansion or acceleration of expansion, the effect isn't measured by watching the apparent sizes of galaxies get smaller ...

1 ### What planets are visible to the naked eye from Mars? Aside from having Earth visible in the night sky instead of Mars, you would expect the same planets to be visible. Venus will appear as a bright star close to the sun - smaller than we see it, but still very bright. Jupiter and Saturn will be easier to see in the night sky, and it should be possible to pick out Jupiter's four major moons with the naked ...

1 ### Curiosity Rover (MSL): current coordinates From Twitter: SPICE kernels for my location are available here http://naif.jpl.nasa.gov/naif/. For less technical locations, see https://foursquare.com/marscuriosity NASA is very open. For example, you can get more than 50,000 raw images from the Curiosity mission at the MSL website, and it's updated as they come in.

8 ### What happens to the electron companions of cosmic ray protons? Great question. The electric field creates such a strong force that it would be very hard to move large amounts of just one type of charge. So astrophysical systems do generally eject equal numbers of protons and electrons. In particular, the solar wind is electrically neutral. So these cosmic rays are created in very nearly equal numbers, but by the ...

1 ### How do we measure the range of distant objects despite relativistic effects? Astronomers know about this trouble, and stick with what they can measure. For distant galaxies, quasars, cosmic background radiation, etc. they use only the redshift, the "z" value. This is defined by the measured wavelength and the known laboratory value - assuming any spectral emission and absorption lines are correctly identified. To say anything about ...

0 ### How do we measure the range of distant objects despite relativistic effects? The difficulties are in some ways worse than you imagined and in other ways not as bad. The difficulties are worse in the sense that when we're dealing with distant galaxies, we need general relativity, not just special relativity.
General relativity does not even have a well-defined notion of a global frame of reference, so it doesn't offer a uniquely ...

-1 ### How do we measure the range of distant objects despite relativistic effects? The first thing to note is that the most accurate distance measurements are at about the 5% level (with a few exceptions), but 50% is more common - and often satisfactory. Length contraction is not an issue for two reasons. First, relative velocities between us and entire galaxies are generally small (especially due to 'peculiar velocities'). Second, ...

-1 ### Gravity on the International Space Station Every object in a stable near-circular orbit around the Earth is actually falling towards the Earth with ever-increasing speed, but because of its horizontal speed around the Earth (ten times faster than a rifle bullet, for the ISS), the surface of the Earth curves away below it at the same rate, and it therefore ...

12 ### Why don't we see solar and lunar eclipses often? Just to put you at ease first, this is not an infantile question. The reason we do not have a solar eclipse at every new moon is mostly due to the angle of Earth's axis (and by extension, the Moon's orbital plane) to the Earth-Sun line. See the picture below for a visual explanation. In the picture, the Sun is to the left. The upper image shows the orbit of ...

4 ### Why wasn't the moon visible during the day a few decades ago? Believe it or not, the Moon was visible during the day in 1949. In fact, the Moon has always been visible during the day at certain parts of the lunar cycle. We know this is true not only because of models of the Earth-Moon system, but there is historical evidence of it! There are records dating back to ancient China in 2800 BCE of solar eclipses, which are ...

0 ### Why wasn't the moon visible during the day a few decades ago? 50 years is nothing on an astronomical scale, so my guess is: you just didn't notice it/pay attention to it/don't remember correctly.

3 ### Why wasn't the moon visible during the day a few decades ago? During daylight, you can only see the moon when the sun is fairly low in the sky and the moon is in the correct phase. For example, about 1 week after full moon you should be able to see it in the morning when the sun is not very high in the eastern sky. At that time, the half moon should be quite visible somewhere between overhead and the western sky. Similarly, about 1 ...

1 ### Alpha Centauri Bb: Comparing astrometric precision vs doppler precision Ok, I know this question was posted some time ago, but I just found it using Google because I asked myself a similar thing. First of all: No, Gaia won't be able to detect Centauri Bb. But the astrometric method has one strange property: It gets better if the planet is farther away from its host star (as long as the observational time needed doesn't get too ...
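As a quick sanity check on the altitude computation in one of the "full disk" answers above, here is a minimal sketch of my own (not part of the original page). I am assuming the intended geometry is the tangent-line relation $\sin\alpha = R/(R+h)$, where $\alpha$ is the half-angle the Earth's disk subtends at the observer, with $\alpha = 60^\circ$ and an assumed mean Earth radius:

```python
import math

R = 6371.0  # assumed mean Earth radius in km

# Tangent-line geometry: the Earth's disk subtends a half-angle alpha
# at an observer at altitude h, with sin(alpha) = R / (R + h).
# Solving for h when alpha = 60 degrees:
alpha = math.radians(60)
h = R / math.sin(alpha) - R
print(f"altitude h = {h:.0f} km")  # ~986 km, matching the answer's figure
```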
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371675848960876, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/infinitesimals+calculus
Tagged Questions

3answers 119 views How $\frac{dx}{dy}=f(x)g(y) \Leftrightarrow \int \frac{dx}{f(x)} = \int g(y)dy$? In my intro differential equations class we have often used the "equivalence" stated in the title. It seems to me that somehow, the intermediate step $$\frac{dx}{f(x)} = g(y)dy$$ is being used, in which ...

0answers 104 views Proving $f$ is constant. Let $f$ and $g$ be continuous functions where $f,g:[a,b] \rightarrow \Bbb R$ and $\int_a^b g(x)\,dx=0$ and $\int_a^b f(x)g(x)\,dx=0$; show that $f$ is a constant function. I tried a bunch of things ...

1answer 52 views Determining hyperreal class for $\frac{\epsilon + \delta}{\sqrt{\epsilon^2 + \delta^2}}$ I'm solidifying my calculus by going through Keisler's book that uses a hyperreal/infinitesimal approach. I'm stuck on this problem. Given infinitesimals $\epsilon,\delta > 0$, determine whether ...

1answer 136 views little-o and its properties I know that $f(x) = o(g(x))$ for $x \to \infty$ if (and only if) $\lim_{x \to \infty}\frac{f(x)}{g(x)}=0$ Which means that $f(x)$ has an order of growth less than that of $g(x)$. 1) I'm still ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9321189522743225, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/9422/intuition-explanation-of-taylor-expansion/9444
# Intuition explanation of Taylor expansion?

Could you provide a geometric explanation of Taylor expansion? -

1 It would be good if you could be more precise. What kind of geometric explanation? What is not intuitive that you would like cleared up? – Qiaochu Yuan Nov 8 '10 at 13:32

2 See the graph @ en.wikipedia.org/wiki/Taylor_series. Also see en.wikipedia.org/wiki/Taylor_polynomial. Taylor's theorem is derived from the Mean Value Theorem – SandeepJ Nov 8 '10 at 14:21

## 5 Answers

We know that the higher the degree of an equation, the more "turning points" it may have. For example, a parabola has one "turning point." (A parabola has an equation of the form $y=ax^2 + bx +c$.) A cubic of the form $y=ax^3 + bx^2 +cx +d$ can have up to two "turning points," though it may have fewer. In general, an equation of degree $n$ may have up to $n-1$ turning points. (Here is the polynomial $f(x) = 2x^4 - x^3 -3x^2 + 7x - 13$: it is degree 4, so it can have up to 4-1=3 turning points. But, keep in mind, some degree-4 polynomials have only one or two turning points. The degree gives us the MAXIMUM number, $n-1$.) This is important because, if you want to use a polynomial to approximate a function, you will want to use a polynomial of high enough degree to match the "features" of the function. The Taylor series will let you do this with functions that are "infinitely differentiable" since it uses the derivatives of the function to approximate the function's behavior. Here are Taylor polynomials of increasing degree and the sine curve. Notice how they are "wrapping around" the sine curve, giving an approximation that fits better and better over more of the curve as the degree of the Taylor polynomial increases. (Source for this image: http://202.38.126.65/navigate/math/history/Mathematicians/Taylor.html) Since the sine curve has so many turning points it is easy to see that to match all of the features of the sine curve we will need to take the limit of the $n^{th}$ degree Taylor polynomial as $n \rightarrow \infty$.* That's the intuition behind the Taylor series. The higher the degree, the better the "fit." Why? Because higher degree curves have more "turning points" so they can better match the shape of things like the sine function. (As long as the function we are approximating is differentiable.) *Side note: A function may have only a few turning points and still need infinitely many terms of the Taylor polynomial. Take the catenary, for example, which only has one turning point since it looks like a parabola. The Taylor series for the catenary still has infinitely many nonzero terms, since the derivatives of the catenary are hyperbolic sinusoidal functions. But, even with the catenary, higher degree polynomials give a better approximation. -

1 The sine example looks really nice! – Jan Nov 8 '10 at 21:37

Think of a Taylor series not as one entity but as a sequence of approximations. The first term gives a constant approximation: $f(x+h)$ is approximately $f(x)$. The first two terms give a linear approximation: $f(x+h)$ is approximately $f(x)$ plus a trend term, $hf'(x)$. The first three terms include a constant approximation, a linear trend, and a curvature term to account for the change in the linear trend: $f(x+h)$ is approximately $f(x) + hf'(x) + \frac{h^2}{2}f''(x)$. Next you add a term to account for the change in the curvature, etc.
- I'll give it a try: if you want to know where you will be after driving a car for time $x$, you can find out by separating the different components: position at the moment, speed, acceleration, jolt and so on, and adding them all together. -

3 not a bad simplification :) – BBischof Nov 8 '10 at 14:40

We are approximating a function by polynomials at a point. As a first approximation, we give a polynomial whose value at that point is the same as the function's. In the second step, we make the first derivative equal too. In the third step, the second derivative is made equal, and so on... -

Predict global while computing local! A Taylor expansion of a function $f$ around some value $x_0$ is similar to a prediction of the function at a neighboring value $x$, knowing progressively more about the variation of $f$ at the point $x_0$. First step: easiest prediction: nothing changed, that is, $f(x) = f(x_0)$. Second step: we know the first derivative, so we predict the function was linear between $x_0$ and $x$: $f(x) = f(x_0) + (x-x_0)f'(x_0)$. See, everything is still local, as the derivative is given at $x_0$. The next steps give a generalization of these predictions for higher derivatives. The different forms of the remainder give bounds on the error, or more knowledge of the residual. -
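To make the "sequence of approximations" picture above concrete, here is a small numerical sketch of my own (not part of the original thread). It evaluates the odd-degree Taylor partial sums of $\sin$ at $x=2$ and watches the error shrink as the degree grows:

```python
import math

def sin_taylor(x, degree):
    """Partial sum of the Taylor series of sin about 0, up to the given degree."""
    total, term, k = 0.0, x, 0
    while 2 * k + 1 <= degree:
        total += term
        # next odd-degree term: multiply by -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
        k += 1
    return total

x = 2.0
for degree in (1, 3, 5, 7, 9):
    approx = sin_taylor(x, degree)
    print(f"degree {degree}: {approx:+.6f}  error {abs(approx - math.sin(x)):.2e}")
```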
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9265822172164917, "perplexity_flag": "head"}
http://mathoverflow.net/questions/82505?sort=votes
## Elliptic genus for manifolds with boundary

Let M be a closed spin manifold of dimension $d$. One form of the elliptic genus of $M$ is $$F(q)=q^{-d/8} \hat A(M) {\rm ch} \otimes_{k=1/2,3/2,\cdots} \Lambda_{q^k}T \otimes_{\ell=1}^\infty S_{q^\ell}T [M]$$ where the notation follows that of E. Witten, "The Index of the Dirac Operator in Loop Space." The coefficient of $q^{n/2-d/8}$ is the index of a Dirac operator $D_n$ which acts on sections of $S \otimes T_{R_n}$ where $S$ is the spinor bundle and $T_{R_n}$ is the bundle associated to a representation $R_n$ of $Spin(d)$ with the first few representations being $$R_0=1, \qquad R_1=T, \qquad R_2=\Lambda^2 T \oplus T$$ where $T$ is the fundamental (vector) representation. I'm interested in the generalization of the elliptic genus to manifolds with boundary. In the actual application I'm interested in, one eventually takes the boundary to infinity to obtain a noncompact manifold, but I'd be happy to understand the situation for a compact manifold with boundary first. The index of the Dirac operator in such a situation acquires boundary corrections of the form $$CS[ \partial M] - \frac{1}{2}(\eta(0)+h)$$ where $h$ is the number of zero modes of the Dirac operator on $\partial M$ and $\eta(0)$ is the $\eta$ invariant. In the examples I'm interested in I believe the Chern-Simons contributions $CS[\partial M]$ vanish. Summing up these boundary contributions to the index of $D_n$ weighted by $q^{n/2-d/8}$ leads to a "boundary" contribution to the elliptic genus on manifolds with boundary, with the "bulk" contribution given by $F(q)$. My questions are whether this variant of the elliptic genus has been studied and if so where, whether this leads to interesting invariants of manifolds with boundary, and whether the modular properties of the bulk and boundary contributions are known. -

## 2 Answers

This, or a similar variant, has been studied in Secondary Invariants for String Bordism and tmf and The f-invariant and index theory. The deviation from modularity caused by the boundary gives the interesting invariant of the boundary. -

I studied such modular invariants in my thesis, Modular invariants for manifolds with boundary, http://www.tdx.cat/handle/10803/3071 , http://www.tdx.cat/bitstream/handle/10803/3071/migc1de2.pdf , http://www.tdx.cat/bitstream/handle/10803/3071/migc2de2.pdf , and never found the time to get back to them; they are very interesting. -

Thank you for the reference. – Jeff Harvey Jan 19 2012 at 13:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 3, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9163349866867065, "perplexity_flag": "head"}
http://lucatrevisan.wordpress.com/2007/10/
in theory "Marge, I agree with you - in theory. In theory, communism works. In theory." -- Homer Simpson Monthly Archive You are currently browsing the monthly archive for October 2007. Dense Subsets of Pseudorandom Sets October 31, 2007 in math, theory | Tags: Additive Combinatorics, Ben Green, Pseudorandomness, Szemeredi Theorem, Tamar Ziegler, Terence Tao | 5 comments The Green-Tao theorem states that the primes contain arbitrarily long arithmetic progressions; its proof can be, somewhat inaccurately, broken up into the following two steps: Thm1: Every constant-density subset of a pseudorandom set of integers contains arbitrarily long arithmetic progressions. Thm2: The primes have constant density inside a pseudorandom set. Of those, the main contribution of the paper is the first theorem, a “relative” version of Szemeredi’s theorem. In turn, its proof can be (even more inaccurately) broken up as Thm 1.1: For every constant density subset D of a pseudorandom set there is a “model” set M that has constant density among the integers and is indistinguishable from D. Thm 1.2 (Szemeredi) Every constant density subset of the integers contains arbitrarily long arithmetic progressions, and many of them. Thm 1.3 A set with many long arithmetic progressions cannot be indistinguishable from a set with none. Following this scheme is, of course, easier said than done. One wants to work with a definition of pseudorandomness that is weak enough that (2) is provable, but strong enough that the notion of indistinguishability implied by (1.1) is in turn strong enough that (1.3) holds. From now on I will focus on (1.1), which is a key step in the proof, though not the hardest. Recently, Tao and Ziegler proved that the primes contain arbitrarily long “polynomial progressions” (progressions where the increments are given by polynomials rather than linear functions, as in the case of arithmetic progressions). Their paper contains a very clean formulation of (1.1), which I will now (accurately, this time) describe. (It is Theorem 7.1 in the paper. The language I use below is very different but equivalent.) We fix a finite universe $\Sigma$; this could be $\{ 0,1\}^n$ in complexity-theoretic applications or ${\mathbb Z}/N{\mathbb Z}$ in number-theoretic applications. Instead of working with subsets of $\Sigma$, it will be more convenient to refer to probability distributions over $\Sigma$; if $S$ is a set, then $U_S$ is the uniform distribution over $S$. We also fix a family $F$ of “easy” function $f: \Sigma \rightarrow [0,1]$. In a complexity-theoretic applications, this could be the set of boolean functions computed by circuits of bounded size. We think of two distributions $X,Y$ as being $\epsilon$-indistinguishable according to $F$ if for every function $f\in F$ we have $| E [f(X)] - E[f(Y)] | \leq \epsilon$ and we think of a distribution as pseudorandom if it is indistinguishable from the uniform distribution $U_\Sigma$. (This is all standard in cryptography and complexity theory.) Now let’s define the natural analog of “dense subset” for distributions. We say that a distribution $A$ is $\delta$-dense in $B$ if for every $x\in \Sigma$ we have $Pr [ B=x] \geq \delta Pr [A=x]$ Note that if $B=U_T$ and $A=U_S$ for some sets $S,T$, then $A$ is $\delta$-dense in $B$ if and only if $S\subseteq T$ and $|S| \geq \delta |T|$. 
So we want to prove the following:

Theorem (Green, Tao, Ziegler) Fix a family $F$ of tests and an $\epsilon>0$; then there is a "slightly larger" family $F'$ and an $\epsilon'>0$ such that if $R$ is an $\epsilon'$-pseudorandom distribution according to $F'$ and $D$ is $\delta$-dense in $R$, then there is a distribution $M$ that is $\delta$-dense in $U_\Sigma$ and that is $\epsilon$-indistinguishable from $D$ according to $F$.

[The reader may want to go back to (1.1) and check that this is a meaningful formalization of it, up to working with arbitrary distributions rather than sets. This is in fact the "inaccuracy" that I referred to above.]

In a complexity-theoretic setting, we would like to say that if $F$ is defined as all functions computable by circuits of size at most $s$, then $\epsilon'$ should be $poly (\epsilon,\delta)$ and $F'$ should contain only functions computable by circuits of size $s\cdot poly(1/\epsilon,1/\delta)$. Unfortunately, if one follows the proof and makes some simplifications assuming $F$ contains only boolean functions, one sees that $F'$ contains functions of the form $g(x) = h(f_1(x),\ldots,f_k(x))$, where $f_i \in F$, $k = poly(1/\epsilon,1/\delta)$, and $h$ could be arbitrary and, in general, have circuit complexity exponential in $1/\epsilon$ and $1/\delta$. Alternatively one may approximate $h()$ as a low-degree polynomial and take the "most distinguishing monomial." This will give a version of the Theorem (which leads to the actual statement of Thm 7.1 in the Tao-Ziegler paper) where $F'$ contains only functions of the form $\Pi_{i=1}^k f_i(x)$, but then $\epsilon'$ will be exponentially small in $1/\epsilon$ and $1/\delta$. This means that one cannot apply the theorem to "cryptographically strong" notions of pseudorandomness and indistinguishability, and in general to any setting where $1/\epsilon$ and $1/\delta$ are super-logarithmic (not to mention super-linear). This seems like an unavoidable consequence of the "finitary ergodic theoretic" technique of iterative partitioning and energy increment used in the proof, which always yields at least a singly exponential complexity.

Omer Reingold, Madhur Tulsiani, Salil Vadhan and I have recently come up with a different proof where both $\epsilon'$ and the complexity of $F'$ are polynomial. This gives, for example, a new characterization of the notion of pseudoentropy. Our proof is quite in the spirit of Nisan's proof of Impagliazzo's hard-core set theorem, and it is relatively simple. We can also deduce a version of the theorem where, as in Green-Tao-Ziegler, $F'$ contains only bounded products of functions in $F$. In doing so, however, we too incur an exponential loss, but the proof is somewhat simpler and demonstrates the applicability of complexity-theoretic techniques in arithmetic combinatorics.

Since we can use (ideas from) a proof of the hard core set theorem to prove the Green-Tao-Ziegler result, one may wonder whether one can use the "finitary ergodic theory" techniques of iterative partitioning and energy increment to prove the hard-core set theorem. Indeed, we do this too. In our proof, the reduction loses a factor that is exponential in certain parameters (while other proofs are polynomial), but one also gets a more "constructive" result. If readers can stomach it, a forthcoming post will describe the complexity-theory-style proof of the Green-Tao-Ziegler result as well as the ergodic-theory-style proof of the Impagliazzo hard core set theorem.
Discovering the Cyber-Transformations
October 25, 2007 in theory | Tags: NSF | 1 comment

If memory serves me well, I have attended all STOC and FOCS conferences since STOC 1997 in El Paso, except STOC 2002 in Montreal (for visa problems), which should add up to 21 conferences. In most of those conferences I have also attended the "business meeting." This is a time when attendees convene after dinner over beer: the local organizers talk about their local organization, the program committee chair talks about how they put the program together ("papers were submitted, then we reviewed them, finally we accepted some of those. Let me show you twenty slides of meaningless statistics about said papers"), organizers of future conferences talk about their ongoing organizing, David Johnson raises issues to be discussed, and so on. The SODA drinking game gives a good idea of what goes on.

A fixture of business meetings is also a presentation of the state of National Science Foundation (NSF) funding for theory in the US. In the first several conferences I attended, the NSF program director for theory would take the podium, show a series of incomprehensible slides, and go something like "there is no money; you should submit a lot of grant applications; I will reject all applications because there is no money, but low acceptance rates could bring us more money in future years; you should apply to non-theory programs, because there is no money in theory, but don't make it clear you are doing theory, otherwise they'll send your proposal to me, and I have no money. In conclusion, I have no money and we are all doomed."

Things hit rock bottom around 2004, when several issues (DARPA abandoning basic research, the end of the NSF ITR program, a general tightening of the NSF budget at a time of increased student tuition, a change in the NSF accounting system requiring multi-year grants to be funded entirely from the budget of the year of the award, ...) conspired to create a disastrous funding season. At that point several people in the community, with Sanjeev Arora playing a leading role, realized that something had to be done to turn things around. A SIGACT committee was formed to understand what had gone wrong and how to repair it. I don't know if it is an accurate way of putting it, but my understanding is that our community had done a very bad job in publicizing its results to a broader audience. Indeed I remember, in my job interviews, a conversation that went like "What do you do?" "Complexity theory" "Structural complexity or descriptive complexity?" "??". (I also got a "What complexity classes do you study?") And I understand that whenever people from the SIGACT committee went to talk to NSF higher-ups about theory, everybody was interested and the attitude was almost "why haven't you told us about this stuff before?"

For various reasons, it is easier at NSF to put funding into a new initiative than to increase funding of an existing one, and an idea that came up early on was to fund an initiative on "theory as a lens for the sciences," to explore work in economics, quantum mechanics, biology, statistical physics, etc., where the conceptual tools of theoretical computer science are useful to even phrase the right questions, as well as work towards their solution.
This idea took on a life of its own, grew much broader than initially envisioned (so that the lens thing is now a small part of it), received an appropriately cringe-inducing name, and is now the Cyber-Enabled Discovery and Innovation (CDI) program, which is soon accepting its first round of submissions.

Thanks to the work that Bill Steiger put in as program director in the last year and a half, and to the efforts of the SIGACT committee, the outlook for theory funding is now much more optimistic. At the FOCS 2007 business meeting last Monday, Bill talked about the increase in funding that happened under his watch, and Sanjeev Arora talked about the work of the committee and the new funding opportunities (of which CDI is only one). In addition, as has happened a few times in the last couple of years, Mike Foster from NSF gave his own, generally theory-friendly, presentation. Mike is a mid-level director at NSF (one or two levels above the theory program), and the regular presence of people in his position at STOC and FOCS is, I think, without precedent before 2005. (Or at least between 1997 and 2004.)

The NSF is relatively lean, efficient and competent for being a federal bureaucracy, but it is still a federal bureaucracy, with its quirks. A few years ago, it started a much loathed requirement to explicitly state the "broader impact" of any proposed grant. I actually don't mind this requirement: it does not ask one to talk about "applications," but rather about all the important research work that goes beyond just establishing technical results: disseminating results (for example, writing notes, expository work, and surveys, and making them available), bringing research-level material to freshmen in a new format, doing outreach, doing something to increase the representation of women and minorities, and so on. As reported by Sanjeev Arora in his presentation, however, NSF is now requiring applicants to state how the research in a given proposal is "transformative." (I just got a spelling warning after typing it.) I am not sure this makes any sense. The person sitting next to me commented, "Oh no, the goal of my research is always to maintain the status quo."

The Next Viral Videos
October 25, 2007 in math, teaching, theory | Tags: Additive Combinatorics, Avi Wigderson, Boaz Barak, Moses Charikar, Princeton | 5 comments

Back in August, Boaz Barak and Moses Charikar organized a two-day course on additive combinatorics for computer scientists in Princeton. Boaz and Avi Wigderson spoke on sum-product theorems and their applications, and I spoke on techniques in the proofs of Szemeredi's theorem and their applications. As an Australian model might say, that's interesting! Videos of the talks are now online. The quality of the audio and video is quite good; you'll have to decide for yourself on the quality of the lectures. The schedule of the event was grueling, and in my last two lectures (on Gowers uniformity and applications) I am not very lucid. In earlier lectures, however, I am merely sleep deprived; I can be seen falling asleep in front of the board a few times. Boaz's and Avi's lectures, however, are flawless.

Best Tutorials Ever
October 21, 2007 in food, math, theory | Tags: Additive Combinatorics, Dan Boneh, Dan Spielman, FOCS 2007, Providence, Regularity Lemma, Terence Tao | 2 comments

FOCS 2007 started yesterday in Providence with a series of tutorials.
Terry Tao gave a talk similar to the one he gave in Madrid, discussing the duality between pseudorandomness and efficiency, which gives a unified view of techniques coming from analysis, combinatorics and ergodic theory. In typical such results, one has a set $F$ of "simple" functions (for example linear functions, or low-degree polynomials, or, in conceivable complexity-theoretic applications, functions of low circuit complexity) and one wants to write an arbitrary function $g$ as $g(x) = g_{pr}(x) + g_{str}(x) + g_{err}(x)$ where $g_{pr}$ is pseudorandom with respect to the "distinguishers" in $F$, $g_{str}$ is a "simple combination" of functions from $F$, and $g_{err}$ accounts for a possible small approximation error. There are a number of ways to instantiate this general template, as can be seen in the accompanying notes, and it is nice to see how even the Szemeredi regularity lemma can be fit into this template. (The "functions" are adjacency matrices of graphs, and the "efficient" functions are complete bipartite subgraphs.)

Dan Boneh spoke on pairing-based cryptography, an idea that has grown into a whole, rich area, with specialized conferences and, according to Google Scholar, 1,200+ papers published so far. In this setting one has a group $G$ (for example points on an elliptic curve) such that there is a mapping $e: G \times G \rightarrow G_T$ that takes pairs of elements of $G$ into an element of another group $G_T$ satisfying a bilinearity condition. (Such a mapping is a "pairing," hence the name of the area.) Although such mappings can lead to attacks on the discrete log problem in $G$, if the mapping is chosen carefully one may still assume intractability of discrete log in $G$, and the pairing can be very useful in constructing cryptographic protocols and proving their security. In particular, one can get "identity-based encryption," a type of public key cryptography where a user's public key can be her own name (or email address, or any deterministically chosen name), which in turn can be used as a primitive in other applications.

Dan Spielman spoke on spectral graph theory, focusing on results and problems that aren't quite studied enough by theoreticians. He showed some remarkable examples of graph drawings obtained by simply plotting each vertex $i$ at the point $(v(i),w(i))$, where $v$ and $w$ are eigenvectors for the second and third smallest eigenvalues of the Laplacian of the graph. The sparse cut promised by Cheeger's inequality is, in such a drawing, just the cut given by a vertical line across the drawing, and there are nice algebraic explanations for why the drawing looks intuitively "nice" for many graphs but not for all. Spectral partitioning has been very successful for image segmentation problems, but it has some drawbacks and it would be nice to find theoretically justified algorithms that would do better.

Typically, I don't go to an Italian restaurant in the US unless I have been there before and liked it, a rule that runs into certain circularity problems. I was happy that yesterday I made an exception to go to Al Forno, which proved to be truly exceptional.

Hillary Clinton voted in support of burning the flags of veterans' children
October 8, 2007 in diversions, politics | 3 comments

Or at least she cannot sue a rival campaign if it makes such a claim. The Washington State Supreme Court has found that politicians have a constitutional right to lie.
(It is still illegal to libel, but libel is defined extremely narrowly in the US. It is not enough to make a false statement of fact; it is not even enough if this is done with malice. It must, in addition, cause harm to the reputation of the victim.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 71, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451440572738647, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/21259/line-bundles-on-smooth-affine-variety
## line bundles on smooth affine variety

Let L be a line bundle on a smooth affine variety X (say, over the complex numbers). Is it true that L always admits a FLAT algebraic connection? -

Is this motivated by the Atiyah class, which lives in coherent cohomology and thus vanishes for affines? The vanishing of this class implies the existence of an algebraic connection. Moreover, one can compute the Chern classes in Dolbeault cohomology from the Atiyah class. If the variety is complete, so that Dolbeault cohomology is pretty close to de Rham cohomology, does the vanishing of the Atiyah class imply the vanishing of the rational Chern classes? Or is something lost in the extension data? – Ben Wieland Apr 13 2010 at 22:52

1 Yes, a line bundle on a compact Kähler variety has a connection iff it has a flat connection. This follows from the Hodge decomposition of de Rham cohomology: the obstruction for finding a flat connection lies in $H^1(X,\Omega^{\ge1})$ and the obstruction for finding a connection is its image in $H^1(X,\Omega^1)$. By the Hodge decomposition (and the fact that the obstruction is of type $(1,1)$), if the latter vanishes so does the former. – Torsten Ekedahl Apr 14 2010 at 5:31

## 3 Answers

No, any line bundle with a flat connection has a trivial rational Chern class. Now, take any smooth connected projective variety $X$ for which the Chern classes of line bundles form a group of rank $r$ larger than $1$. Removing an irreducible ample divisor $D$ from $X$ gives a smooth affine variety for which the Chern classes form a group of rank $r-1$. A specific example is $\mathbb P^1\times\mathbb P^1$ but there are lots of others of any dimension $>1$. -

Dear Torsten Ekedahl, could you please explain why the Chern class group of the affine variety $X \setminus D$ is of rank $r-1$, because I don't really get it... – Henri May 8 2011 at 21:33

1 This is a standard argument (found in Hartshorne for instance): there is a section of $\mathcal O_X(D)$ which vanishes exactly at $D$ and hence gives a trivialisation of it in the complement. This (and its tensor powers) are the only line bundles killed, and every line bundle on the complement extends to $X$. – Torsten Ekedahl May 9 2011 at 4:17

Ok, I get it now. Thanks a lot! – Henri May 9 2011 at 7:48

No, there is no reason for an $O$-module, even locally free of rank 1, to be a $D$-module. -

1 Um, care to provide a counterexample? My intuition is the same as yours, but I couldn't come up with a counter-example on an affine variety (of course, there are lots on quasi-projective ones). – Ben Webster♦ Apr 13 2010 at 21:34

Why should I? It is obviously wrong and the guy is trying to prove something using it and he really should not:-)) And, anyway, Torsten has done it already and got my point for it... – Bugs Bunny Apr 13 2010 at 21:44

7 For the edification of humanity? I mean, what's the point of writing the answer otherwise? – Ben Webster♦ Apr 13 2010 at 22:08

I knew the argument with the truncated De Rham complex, but couldn't cook up an example - thank you, Torsten. -
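To make the accepted example fully explicit, here is a worked-out sketch of my own; these details are not spelled out in the thread and should be read as a plausibility check rather than as part of the answers. On $X=\mathbb P^1\times\mathbb P^1$ we have $\mathrm{Pic}(X)\cong\mathbb Z^2$, generated by $\mathcal O(1,0)$ and $\mathcal O(0,1)$, so $r=2$. Take $D$ to be the diagonal $\Delta$, an irreducible ample divisor of class $(1,1)$. By the standard argument quoted above, $$\mathrm{Pic}(X\setminus\Delta)\cong\mathbb Z^2/\mathbb Z\,(1,1)\cong\mathbb Z.$$ Moreover, the first projection exhibits $X\setminus\Delta$ as an $\mathbb A^1$-bundle over $\mathbb P^1$, so $H^2(X\setminus\Delta;\mathbb Q)\cong\mathbb Q$ and the restriction of $\mathcal O(1,0)$ has nonvanishing first rational Chern class there. By the first line of the accepted answer, this line bundle on the smooth affine variety $X\setminus\Delta$ admits no flat algebraic connection.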
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9150891304016113, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/5340-question-about-limits.html
# Thread:

1. ## A beginner's question about limits

Don't limits describe when $f(x)$ becomes undefined (changes its behavior and becomes discontinuous) as $x$ approaches a real number $a$? I could not understand the following example: $\frac{x^2+x-2}{x-1} = \frac{(x-1)(x+2)}{x-1} = x+2$, provided $x$ does not equal $1$. It follows that the graphs of the first and second equations are the same except at $x = 1$. 1- $\lim f(x) = 3$ as $x$ approaches $1$, where $f(x) = x+2$; 2- $\lim g(x) = 3$ as $x$ approaches $1$, where $g(x) = \frac{x^2+x-2}{x-1}$; 3- $\lim h(x) = 3$ as $x$ approaches $1$, where the function is $h(x) = g(x)$ if $x \neq 1$, and $2$ if $x = 1$. I could not understand the piecewise-defined function: isn't $g(x)$ supposed to be undefined at $x = 1$? Then how is the limit of $h(x)$ equal to $3$? And what does the $2$ mean in this piecewise-defined function $h(x)$? As for $f(x)$, it is a continuous function, correct? And as for $g(x)$, it is a function discontinuous at $x = 1$, correct? And here is another question; it says: prove that $\lim f(x)$, with $f(x) = 1/x$, as $x$ approaches $0$ does not exist. Doesn't $f(x)$ become undefined at $x = 0$? Then how does the limit of $f(x)$ as $x$ approaches $0$ not exist?

2. When $f(x)$ is continuous at $x = a$, the limit and the function value at $a$ are the same. In your example, where $f(x)$ has a "perforation", the limit represents the value which would make $f(x)$ continuous there; it would "fill the hole". Have you gotten a rigorous definition of a limit (epsilon-delta perhaps?) and if so, do you fully understand it? As for your last question, if $f(x)$ is defined on a neighborhood of $x = a$, then the limit of $f(x)$ for $x$ approaching $a$ exists if and only if the upper and lower limits (or "right" and "left") exist and are equal. Use this on $f(x) = 1/x$ to show that the limit doesn't exist.

3. Thanks for the reply. The following is my understanding of a limit: a limit describes when a function changes its state; $\lim f(x)$ exists if $f(x)$ takes real number values as $x \to a$. To say that a limit for a function exists, its right and left limits must exist and equal each other. That is what I could understand. Anyway, I'm having another problem: A mail-order company adds a shipping and handling fee of $4 for any order that weighs up to 10 lb, with an additional 40 cents for each pound over 10 lb. (a) Find a piecewise-defined function S for the shipping and handling fee on an order of x pounds. (b) If a is an integer greater than 10, find: lim S(x) as x approaches a from the left; lim S(x) as x approaches a from the right. My answer was: (a) S(x) = { 4 if x <= 10; 4 + 0.40(x - 10) if x > 10 } (b) 4, 4. The correct answer is: (a) S(x) = { 4 if 0 < x <= 10; 4 + 0.4[[x - 9]] if x > [[x]] and x > 10; 4 + 0.4(x - 10) if x = [[x]] and x > 10 } (b) 0.4a; 0.4(a + 1). I couldn't understand what [[ ]] means; I mean, what is "if x > [[x]]" (I can't understand the second and the third pieces of S(x), and what [[x - 9]] is for)?

4. Looks to me to be the Greatest Integer Function (correct me if I'm wrong), although I thought it was written more like $[x]$. Are you aware of it? $f(x)=x-\{x\}$, where $\{x\}$ is the fractional part of $x$.

5. I don't know what the greatest integer function is; does it mean rounding a decimal number to an integer? I tried to search Wikipedia but it didn't work. Sorry, I'm not a native English speaker and it's been a while since I opened a math book.
The sign in the book is [[ ]]; maybe a mistake, but I still don't know what this piecewise-defined function means.

6. Originally Posted by mHadad
I don't know what the greatest integer function is; does it mean rounding a decimal number to an integer? I tried to search Wikipedia but it didn't work. Sorry, I'm not a native English speaker and it's been a while since I opened a math book.

Yeah, it rounds a number down to the greatest integer that is less than or equal to it. It's like this. $[2.1]=2$ $[0.5]=0$ $[5.9]=5$ $[0.9]=0$ $[100.8]=100$ $[1]=1$ $[2]=2$ $[10]=10$ $[100]=100$ $[-0.7]=-1$ $[-0.2]=-1$ $[-2.1]=-3$ $[-2.9]=-3$ $[-3]=-3$ $[-7]=-7$

7. Thanks shubh, I got it.

8. I was not following this from the very beginning, thus I do not know what is going on. But it is not necessary to write $[[x]]$. The greatest integer function is idempotent, meaning $f(f(x))=f(x)$.

9. I have a question about limits; it says: According to the theory of relativity, the length of an object depends on its velocity $v$ (look at the equation at the bottom of this post). Einstein also proved that the mass $m$ of an object is related to $v$ by the formula $m = m_0/\sqrt{1 - v^2/c^2}$, where $m_0$ is the mass of the object at rest. Investigate lim m as v approaches m from the left. $L = L_0\sqrt{1 - v^2/c^2}$, where $L$ denotes the length of a moving object, $L_0$ the length of this object at rest, $v$ the velocity of the object, and $c$ the speed of light. I don't know; I have found that there is a division by zero as the velocity of this object approaches the speed of light, so does it mean that the mass grows more and more as the object approaches the speed of light, but it can't reach the speed of light or else its mass would be undefined?

10. Originally Posted by mHadad
I have a question about limits; it says: According to the theory of relativity, the length of an object depends on its velocity $v$. Einstein also proved that the mass $m$ of an object is related to $v$ by the formula $m = m_0/\sqrt{1 - v^2/c^2}$, where $m_0$ is the mass of the object at rest. Investigate lim m as v approaches m from the left.

This is supposed to be the limit of $m$ as $v$ approaches $c$ from the left? $\lim_{v \to c^-} \frac{m_0}{\sqrt{1 - \frac{v^2}{c^2}}} = \infty$ The meaning behind this is that as an object approaches the speed of light its mass increases without limit; i.e., it takes an infinite amount of energy to accelerate an object to the speed of light, so it can't be done. (Typically this argument is given in terms of the object's momentum, not mass, but the mass argument does the job just as well.) A similar argument shows that the object's length goes to 0 as $v$ goes to $c$. This is also a ridiculous statement, so again we see that it can't happen. NOTE: In both of these cases we are speaking of an observer watching the object move with a speed $v$ according to his/her reference frame.
The mass and length of the object in its reference frame are still $m_0$ and $L_0$ respectively.

-Dan

11. What is the best, easiest and free computer graphing calculator that will draw my function?

12. I would definitely say the graphing program Hacker showed me.

13. Originally Posted by mHadad
What is the best, easiest and free computer graphing calculator that will draw my function?

The best and the easiest are not necessarily the same thing. If you look at the graphs attached to my posts you will see that I am not using the same tool as ImPerfectHacker and Quick et al. This is because the tool I use is what I use for other things, and it is very much more powerful than the other. I use it because it is on all my machines, I am fluent in it, and it does the job of plotting adequately. If I had no such tool I would use ImPerfectHacker's recommendation. But for me the best and easiest is Euler.

RonL
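As a numerical illustration of the relativity discussion in posts 9-10 above, here is a small sketch of my own (not part of the thread); the chosen speeds are arbitrary, but they show $m/m_0 = 1/\sqrt{1 - v^2/c^2}$ blowing up as $v \to c^-$:

```python
import math

# Relativistic mass factor m/m0 = 1 / sqrt(1 - v^2/c^2) as v -> c from the left.
for frac in (0.9, 0.99, 0.999, 0.999999):
    gamma = 1.0 / math.sqrt(1.0 - frac**2)   # m/m0 at v = frac * c
    print(f"v = {frac}c  ->  m/m0 = {gamma:10.1f}")
```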
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9496164917945862, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/66733/low-pass-filter
# Low pass filter

A wave defined by $f(t)=a$ for $t\in (0,T)$ and $f(t)=-a$ for $t\in (-T,0)$ (the wave is $2T$ periodic) is input into a system that transmits angular frequencies $<\omega$ and absorbs those $>\omega$. How might I find the form of the output? Firstly, I am not quite sure what is going on here. Am I supposed to find the Fourier series for $f(t)$ and then eliminate the terms where the frequency appearing in the argument of the cosine or sine is $>\omega$? But how do I find/express the "form" of the output? Thanks in advance! -

Your system is an ideal lowpass filter, so yes, you can just remove the frequency components which are greater than the cutoff frequency $\omega$. – Unreasonable Sin Sep 22 '11 at 18:16

1 Is the wave periodic? If so, what is the period? My guess would be $2T$, but it is unclear from your question if the wave is even periodic, or simply a pulse. Also, you probably mean $t\in(0,T)$ and $t\in(-T,0)$, not $x$. – robjohn♦ Sep 22 '11 at 18:38

@robjohn: You are right. Sorry about that. – I. S. Sep 22 '11 at 19:27

@UnreasonableSin: But what is the resulting output solution? – I. S. Sep 22 '11 at 19:29

@I. S.: thanks for correcting the typo. Is the wave $2T$-periodic? – robjohn♦ Sep 22 '11 at 19:40

## 2 Answers

In order to get the Fourier series you'll need to impose the condition that the waveform is periodic, which means it's a square wave with period $2T$. The Fourier series of a square wave is well known: http://en.wikipedia.org/wiki/Square_wave http://mathworld.wolfram.com/FourierSeriesSquareWave.html From that you just remove the terms of the series which have frequency components greater than the cutoff frequency $\omega$, which results in a truncated series. The resulting filtered waveform will not have the sharp transition at $t = 0$ that the original waveform had, but will instead have rounded edges and a slower transition from $-a$ to $a$. -

Thanks, Unreasonable Sin. :-) – I. S. Sep 22 '11 at 20:08

"The resulting filtered waveform...will instead have rounded edges..." It will? Seems to me it will suffer from Gibbs phenomenon, hence have quite bad behavior around $t=0$. Maybe you meant something else with this remark? – cardinal Sep 22 '11 at 20:16

Maybe squiggly edges is a better way to put it. =) – Unreasonable Sin Sep 22 '11 at 20:40

Assuming the wave is $2T$-periodic, it is an odd function, so all of the cosine terms in the Fourier series are $0$. Thus $$f(t)=\sum_{n=1}^\infty\;b_n \sin\left(\frac{n\pi t}{T}\right)$$ where $$b_n=\frac{1}{T}\int_{-T}^T f(t)\sin\left(\frac{n\pi t}{T}\right)\;\mathrm{d}t$$ The frequency of each term is $\frac{n}{2T}$, so remove all terms of the sum for which $\frac{n}{2T}>\omega$. Your answer should be a finite sum of sine waves. Spoiler: If $n$ is even, $b_n=0$. If $n$ is odd, then $b_n=\frac{4a}{n\pi}$. Therefore, $$\tilde{f}(t)=\sum_{k=0}^{\lfloor T\omega-1/2\rfloor}\frac{4a}{(2k+1)\pi}\sin\left(\frac{(2k+1)\pi t}{T}\right)$$ -

Thanks, robjohn. :-) – I. S. Sep 22 '11 at 20:08
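To see the filtered output concretely, here is a small numerical sketch of my own (not part of the thread) that evaluates robjohn's truncated series; the parameter values $a=1$, $T=1$, $\omega=5$ are arbitrary choices for illustration:

```python
import math

def filtered_square_wave(t, a, T, omega):
    """Truncated Fourier series of the 2T-periodic square wave after an
    ideal low-pass filter, following the closed form in the answer above."""
    k_max = math.floor(T * omega - 0.5)   # keep terms with n/(2T) <= omega
    total = 0.0
    for k in range(k_max + 1):
        n = 2 * k + 1
        total += (4 * a / (n * math.pi)) * math.sin(n * math.pi * t / T)
    return total

# Sample the output over half a period (a = 1, T = 1, cutoff omega = 5).
for t in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"t = {t:4.2f}  ->  {filtered_square_wave(t, 1.0, 1.0, 5.0):+.4f}")
```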
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9116316437721252, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/101473/list
# Combinatorial Interpretation of an Extension of Gaussian Polynomials

It is well-known that the Gaussian polynomial (or Gaussian coefficient, q-binomial coefficient) $\binom{n}{k}_q$ counts the number of $k$-dimensional subspaces of an $n$-dimensional vector space over $GF(q)$. A generalization of $\binom{n}{k}_q$ is given by the so-called $p,q$-binomial coefficients, $\binom{n}{k}_{p,q}=\frac{{[n]}!}{[k]![n-k]!}$, where $[n]=\frac{p^n-q^n}{p-q}$ and $[n]!=[n][n-1]\cdots[2][1]$. The $p,q$-binomials equal the $q$-binomials when $q=1$.

Question 1: Is there a vector space combinatorial interpretation for the $p,q$-binomials? If there is, what does the underlying two-parameter field look like? (There is a combinatorial interpretation in terms of tableaux and lattice paths but I'm more interested in the subspace interpretation.)

Question 2 (somewhat related): A number of mathematicians have talked about the so-called $q$-disease (the widespread interest, at least among those working in $q$-series, in extending classical results to $q$-analogues). Is there a $p,q$-disease?

PS I'm supposed to write $[n]_{p,q}$ but it doesn't seem to work with \frac.
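For what it's worth, the definition is easy to experiment with. The following sketch is mine, not part of the question; it computes $\binom{4}{2}_{p,q}$ symbolically with sympy and checks the stated specialization at $q=1$:

```python
from sympy import symbols, cancel

p, q = symbols('p q')

def pq_int(n):
    """The p,q-integer [n] = (p^n - q^n) / (p - q)."""
    return (p**n - q**n) / (p - q)

def pq_binomial(n, k):
    """The p,q-binomial coefficient [n]! / ([k]! [n-k]!)."""
    expr = 1
    for j in range(1, n + 1):
        expr *= pq_int(j)
    for j in range(1, k + 1):
        expr /= pq_int(j)
    for j in range(1, n - k + 1):
        expr /= pq_int(j)
    return cancel(expr)

b = pq_binomial(4, 2)
print(b)              # p**4 + p**3*q + 2*p**2*q**2 + p*q**3 + q**4
print(b.subs(q, 1))   # p**4 + p**3 + 2*p**2 + p + 1, the Gaussian binomial in p
```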
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.895544171333313, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/05/25/basic-properties-of-integrable-simple-functions/?like=1&source=post_flair&_wpnonce=9916d21231
# The Unapologetic Mathematician

## Basic Properties of Integrable Simple Functions

We want to nail down a few basic properties of integrable simple functions. We define two simple functions $f=\sum\alpha_i\chi_{E_i}$ and $g=\sum\beta_j\chi_{F_j}$ to work with.

First of all, if $f$ is simple then the scalar multiple $\alpha f$ is simple for any real number $\alpha$. Indeed, $\alpha f=\sum\alpha\alpha_i\chi_{E_i}$, and so the exact same sets $E_i$ must have finite measure for both $f$ and $\alpha f$ to be integrable. It's similarly easy to see that if $f$ and $g$ are both integrable, then $f+g$ is integrable. Thus the integrable simple functions form a linear subspace of the vector space of all simple functions.

Now if $f$ is integrable then the product $fg$ is integrable whether or not $g$ is. We can write

$\displaystyle fg=\sum\limits_{i,j}\alpha_i\beta_j\chi_{E_i}\chi_{F_j}=\sum\limits_{i,j}\alpha_i\beta_j\chi_{E_i\cap F_j}$

If each $E_i$ has finite measure, then so does each $E_i\cap F_j$, whether each $F_j$ does or not. Thus we see that the integrable simple functions form an ideal of the algebra of all simple functions.

We can use this to define the integral of a function over some range other than all of $X$. If $f$ is an integrable simple function and $E$ is a measurable set, then $f\chi_E$ is again an integrable simple function. We define the integral of $f$ over $E$ as

$\displaystyle\int\limits_Ef\,d\mu=\int f\chi_E\,d\mu$

This has the effect of leaving $f$ the same on $E$ and zeroing it out away from $E$. Thus the integral over the rest of the space $E^c$ contributes nothing.

The next two properties are easy to prove for integrable simple functions, but they're powerful. Other properties of integration will be proven in terms of these properties, and so when we widen the class of functions under consideration we'll just have to reprove these two. The ones we will soon consider will immediately have proofs parallel to those for simple functions.

Not only is the function $\alpha f+\beta g$ integrable, but we know its integral:

$\displaystyle\int\alpha f+\beta g\,d\mu=\alpha\int f\,d\mu+\beta\int g\,d\mu$

Indeed, if you were paying attention yesterday you'd have noticed that we said we wanted integration to be linear, but we never really showed that it was. But it's not really complicated: the expression $\sum(\alpha\alpha_i)\chi_{E_i}+\sum(\beta\beta_j)\chi_{F_j}$ represents $\alpha f+\beta g$ as a simple function, and it's clear that the formula holds as stated.

Almost as obvious is the fact that if $f$ is nonnegative a.e., then $\int f\,d\mu\geq0$. Indeed in any representation of $f$ as a simple function, any term $E_i$ corresponding to a negative $\alpha_i$ must have $\mu(E_i)=0$ or else $f$ wouldn't be nonnegative almost everywhere. But then the term $\alpha_i\mu(E_i)$ contributes nothing to the integral! Every other term has a nonnegative $\alpha_i$ and a nonnegative measure $\mu(E_i)$, and thus every term in the integral is nonnegative. This is the basis for all the nice order properties we will find for the integral.

Posted by John Armstrong | Analysis, Measure Theory

## 4 Comments »

1. [...] of simple functions. But the neat thing is that they will follow from the last two properties we showed yesterday. And so their proofs really have nothing to do with simple functions. We will be able to point back [...] Pingback | May 26, 2010 | Reply

2. A 'g' in the final integral? Comment by Hunt | May 30, 2010 | Reply

3. Thanks, copy/paste error.
http://math.stackexchange.com/questions/150141/linear-independence-of-roots-over-q/150149
# Linear independence of roots over Q

Let $p_1,\ldots,p_k$ be $k$ distinct primes (in $\mathbb{N}$) and $n>1$. Is it true that $[\mathbb{Q}(\sqrt[n]{p_1},\ldots,\sqrt[n]{p_k}):\mathbb{Q}]=n^k$? (All the roots are in $\mathbb{R}^+$.) Iurie Boreico proved here that a linear combination $\sum q_i\sqrt[n]{a_i}$ with positive rational coefficients $q_i$ (and no $\sqrt[n]{a_i}\in\mathbb{Q}$) can't be rational, but this question seems to be more difficult.

## 2 Answers

Below are links to classical proofs. Nowadays such results are usually derived as special cases of results in Kummer Galois theory. See my post here for a very simple proof of the quadratic case.

Besicovitch, A. S. On the linear independence of fractional powers of integers. J. London Math. Soc. 15 (1940), 3-6. MR 2,33f 10.0X

Let $a_i = b_i p_i$, $i=1,\ldots,s$, where the $p_i$ are $s$ different primes and the $b_i$ positive integers not divisible by any of them. The author proves by an inductive argument that, if $x_j$ are positive real roots of $x^{n_j} - a_j = 0$, $j=1,\ldots,s$, and $P(x_1,\ldots,x_s)$ is a polynomial with rational coefficients and of degree not greater than $n_j - 1$ with respect to $x_j$, then $P(x_1,\ldots,x_s)$ can vanish only if all its coefficients vanish. $\quad$ Reviewed by W. Feller.

Mordell, L. J. On the linear independence of algebraic numbers. Pacific J. Math. 3 (1953), 625-630. MR 15,404e 10.0X

Let $K$ be an algebraic number field and $x_1,\ldots,x_s$ roots of the equations $x_i^{n_i} = a_i$ $(i=1,2,\ldots,s)$, and suppose that (1) $K$ and all $x_i$ are real, or (2) $K$ includes all the $n_i$th roots of unity, i.e. $K(x_i)$ is a Kummer field. The following theorem is proved. A polynomial $P(x_1,\ldots,x_s)$ with coefficients in $K$ and of degrees in $x_i$ less than $n_i$ for $i=1,2,\ldots,s$, can vanish only if all its coefficients vanish, provided that the algebraic number field $K$ is such that there exists no relation of the form $x_1^{m_1}x_2^{m_2}\cdots x_s^{m_s} = a$, where $a$ is a number in $K$, unless $m_i \equiv 0 \pmod{n_i}$ $(i=1,2,\ldots,s)$. When $K$ is of the second type, the theorem was proved earlier by Hasse [Klassenkörpertheorie, Marburg, 1933, pp. 187--195] by help of Galois groups. When $K$ is of the first type and is also the rational number field and the $a_i$ integers, the theorem was proved by Besicovitch in an elementary way. The author here uses a proof analogous to that used by Besicovitch [J. London Math. Soc. 15, 3--6 (1940); these Rev. 2, 33]. $\quad$ Reviewed by H. Bergström.

Siegel, Carl Ludwig. Algebraische Abhängigkeit von Wurzeln. Acta Arith. 21 (1972), 59-64. MR 46 #1760 12A99

Two nonzero real numbers are said to be equivalent with respect to a real field $R$ if their ratio belongs to $R$. Each real number $r \ne 0$ determines a class $[r]$ under this equivalence relation, and these classes form a multiplicative abelian group $G$ with identity element $[1]$. If $r_1,\ldots,r_h$ are nonzero real numbers such that $r_i^{n_i}\in R$ for some positive integers $n_i$ $(i=1,\ldots,h)$, denote by $G(r_1,\ldots,r_h) = G_h$ the subgroup of $G$ generated by $[r_1],\ldots,[r_h]$, and by $R(r_1,\ldots,r_h) = R_h$ the algebraic extension field of $R = R_0$ obtained by the adjunction of $r_1,\ldots,r_h$. The central problem considered in this paper is to determine the degree and find a basis of $R_h$ over $R$. Special cases of this problem have been considered earlier by A. S. Besicovitch [J. London Math. Soc. 15 (1940), 3-6; MR 2, 33] and by L. J. Mordell [Pacific J. Math.
3 (1953), 625-630; MR 15, 404]. The principal result of this paper is the following theorem: the degree of $R_h$ with respect to $R_{h-1}$ is equal to the index $j$ of $G_{h-1}$ in $G_h$, and the powers $r_h^t$ $(t=0,1,\ldots,j-1)$ form a basis of $R_h$ over $R_{h-1}$. Several interesting applications and examples of this result are discussed. $\quad$ Reviewed by H. S. Butts

- Wow, big guns to attack this question! – Lubin May 26 '12 at 19:34

Wow, they asked Feller to review Besicovitch's paper! Well, it's not as if Feller really did much in his review, but it's funny that a probabilist would be asked to write the review. – KCd May 26 '12 at 21:07

Nice bibliography, Bill: +1 – Georges Elencwajg May 27 '12 at 6:46

I accepted this answer because of the rich bibliography. Thanks to all anyway! – carizio May 27 '12 at 10:11

Yes, it is true that $[\mathbb{Q}(\sqrt[n]{p_1},\ldots,\sqrt[n]{p_k}):\mathbb{Q}]=n^k$, and it was proved by Besicovitch (a student of A. A. Markov) in 1940. Although there are (almost) infinitely many books on field and Galois theory, the only book I know which proves Besicovitch's theorem (but only for odd $n$) is Roman's Field Theory (Theorem 14.3.2, page 305 of the second edition).

- Yes, but the proof there is only for odd $n\geq 3$, and the proof isn't easy at all. He mentions that the case where $n$ is even adds no new insights and is "intricate" (!!) – DonAntonio May 26 '12 at 19:44

Dear @DonAntonio, you are absolutely right: thanks a lot. I suppose brave users will have to go to Richards's article which Roman follows (and mentions in his bibliography) if they want to see the intricate case of even $n$. I have made an edit about that caveat. – Georges Elencwajg May 26 '12 at 19:58

Another book that has a proof is Lisl Gaal's "Classical Galois Theory: With Examples", and she treats the case of all $n$, not just odd $n$. (In fact, one of her first steps is to reduce to the case when $n$ is even.) See Section 4.12, which essentially starts on p. 234. This is the place where I first saw the result when I was a student. – KCd May 26 '12 at 21:18

Thanks for the comment and the link, @KCd. Was that the book required for your course? It doesn't seem very widespread now in American universities (but I might be mistaken: I only know about these universities through online documents [yours in particular!]) – Georges Elencwajg May 26 '12 at 21:42
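The theorem can be sanity-checked numerically in tiny cases via a primitive element. The sketch below is my own and assumes standard sympy functions (`minimal_polynomial`, which may take a few seconds for the cube-root case); the degree of the composite field appears as the degree of the minimal polynomial of a sum of the radicals.

```python
from sympy import Rational, minimal_polynomial, degree, symbols

x = symbols('x')

# [Q(2^(1/2), 3^(1/2)) : Q] should be 2^2 = 4; sqrt(2)+sqrt(3) is a
# primitive element with minimal polynomial x^4 - 10 x^2 + 1.
beta = Rational(2) ** Rational(1, 2) + Rational(3) ** Rational(1, 2)
q = minimal_polynomial(beta, x)
print(q, degree(q, x))   # expect degree 4

# [Q(2^(1/3), 3^(1/3)) : Q] should be 3^2 = 9, again via a primitive element.
alpha = Rational(2) ** Rational(1, 3) + Rational(3) ** Rational(1, 3)
p = minimal_polynomial(alpha, x)
print(p, degree(p, x))   # expect degree 9
```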
http://mathoverflow.net/questions/51290/calculating-normal-bundle
## Calculating normal bundle

I just realized that even though I know what normal bundles are, I don't know how to compute them. The main objective is to show that a rational curve $C$ on a quintic threefold doesn't move. If $C$ is a line, then its normal bundle in the ambient space is $\mathcal O^{\oplus 3}(1)$. If we know that $C$ is rigid on the quintic threefold, then its normal bundle, being a subbundle of $\mathcal O^{\oplus 3}(1)$, must be $\mathcal O^{\oplus 2}(-1)$. But how do we prove this? What about higher degree rational curves?

- There must be a misprint when you write $\mathcal{O}^{\oplus 3}(1)$: how can the normal bundle of a curve inside a threefold have rank 3? – Qfwfq Jan 6 2011 at 16:05

There it is meant: the normal bundle $N_{C/X}$ of $C$ in the quintic threefold $X$ is a subbundle of the normal bundle $N_{C/\mathbb{P}^4}$ of $C$ in $\mathbb{P}^4$, and $N_{C/\mathbb{P}^4}=\mathcal{O}(1)^{\oplus3}$ – domenico fiorenza Jan 6 2011 at 16:21

## 2 Answers

"Rigid" can mean either that $C$ does not move (i.e., defines an isolated, but maybe non-reduced, point on the Hilbert scheme of lines on the smooth quintic $Q$) or that it defines an isolated reduced point. Moreover, it is possible for a line $C$ to move on $Q$; e.g., if there is a hyperplane section of $Q$ that is a cone, then the generators of the cone move on $Q$. Even if $C$ can be contracted to an isolated singularity (so certainly does not move), then its normal bundle can be $\mathcal O\oplus\mathcal O(-2)$ or $\mathcal O(1)\oplus\mathcal O(-3)$.

A detailed treatment can be found in Sheldon Katz, On the finiteness of rational curves on quintic threefolds, Compositio Math. 60, 151-162 (1986), available as archive.numdam.org/article/CM_1986__60_2_151_0.pdf
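To see where the short list of possible normal bundles in the answer comes from, here is the standard degree bookkeeping (my own write-up, consistent with the comments above) for a line $C$ on a smooth quintic $X\subset\mathbb{P}^4$:

```latex
% From the inclusions C ⊂ X ⊂ P^4 there is an exact sequence of bundles on C:
\[
0 \longrightarrow N_{C/X} \longrightarrow N_{C/\mathbb{P}^4}
  \longrightarrow N_{X/\mathbb{P}^4}\big|_C \longrightarrow 0 .
\]
% For a line, N_{C/P^4} = O(1)^{\oplus 3} has degree 3, while
% N_{X/P^4} = O_X(5) restricts to O_C(5), of degree 5.  Hence
\[
\deg N_{C/X} \;=\; 3 - 5 \;=\; -2 .
\]
% A rank-2 bundle on C \cong P^1 splits, and a degree -2 subbundle of
% O(1)^{\oplus 3} must be one of
\[
\mathcal{O}(-1)\oplus\mathcal{O}(-1), \qquad
\mathcal{O}\oplus\mathcal{O}(-2), \qquad
\mathcal{O}(1)\oplus\mathcal{O}(-3),
\]
% which are exactly the cases distinguished in the answer.
```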
http://math.stackexchange.com/questions/252809/prove-a-functions-integral-is-greater-than-0
# Prove a function's integral is greater than $0$

Let $f$ be continuous on $[a,b]$ and suppose that $f(x)\ge0$ for all $x\in[a,b]$. Prove that if there exists a point $c\in[a,b]$ such that $f(c)>0$, then $$\int_{a}^b f(x)\,dx > 0 .$$ I feel like by proving this function is uniformly continuous, which is trivial, that it somehow shows me that the integral is greater than $0$, but I don't quite know how to get there. Can someone help? Thanks! On a side note, I don't think the mathtex stuff came out right; let me know how to fix that!

- The function is continuous - what does this imply about the set $\{x: f(x)>0\}$? – Brad Dec 7 '12 at 4:15

## 2 Answers

Hint: Let $f$ be positive at the point $p$. Then by the definition of continuity, there is an interval about $p$ (one-sided if $p$ is an endpoint) such that $f(x)\gt f(p)/2$ in that interval. Thus the integral over that interval is positive. The integral(s) over the rest of $[a,b]$ are $\ge 0$. So the full integral over $[a,b]$ is positive. The above sketch assumes that you have already proved that if $f\ge g$ on some interval $[c,d]$, then $\int_c^d f(x)\,dx \ge \int_c^d g(x)\,dx$. It also assumes the result that if $a\le c\le b$, then $\int_a^b f(x)\,dx=\int_a^c f(x)\,dx+\int_c^b f(x)\,dx$.

Let $c\in[a,b]$ with $f(c)>0$. By continuity there exists $\delta>0$ such that $f(x)>f(c)/2$ for all $x\in(c-\delta,c+\delta)$. Then $$\int_a^b f(x)\,dx=\int_a^{c-\delta} f(x)\,dx+\int_{c-\delta}^{c+\delta} f(x)\,dx+\int_{c+\delta}^b f(x)\,dx\geq \int_{c-\delta}^{c+\delta} f(x)\,dx >0,$$ where the first inequality uses $f\ge 0$ on $[a,b]$ and the last uses the bound $f>f(c)/2$ on the middle interval.
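For completeness, the fully quantitative version of the two answers, assuming $(c-\delta,c+\delta)\subseteq[a,b]$ (otherwise replace the middle interval by its one-sided half, which only changes the constant):

```latex
\[
\int_a^b f(x)\,dx
  \;\ge\; \int_{c-\delta}^{c+\delta} f(x)\,dx
  \;\ge\; \int_{c-\delta}^{c+\delta} \frac{f(c)}{2}\,dx
  \;=\; 2\delta\cdot\frac{f(c)}{2}
  \;=\; \delta\, f(c) \;>\; 0,
\]
% where \delta > 0 is chosen by continuity so that f(x) > f(c)/2 on
% (c - \delta, c + \delta), and the first inequality uses f >= 0 on [a,b].
```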
http://mathoverflow.net/questions/74164/other-ring-structures-on-mathbbq/74188
## Other Ring Structures on $\mathbb{Q}$

This is a question I've had for a while and really don't know how to go about finding an answer: Does there exist a pair of binary operations, $\boxplus$ and $\boxtimes$, other than the usual $+$ and $\times$, such that $(\mathbb{Q}, \boxplus, \boxtimes)$ forms a ring? I realize that there's probably some "axiom of choice" proof that constructs unintelligible binary operations or even a construction using some other countably infinite ring and a bijection to the rationals. So more importantly I ask: Does there exist such a $\boxplus$ and $\boxtimes$ such that $a \boxplus b$ and $a \boxtimes b$ can be computed from (closed?) formulas that only involve $+$ and $\times$ (or other related properties of $a$ and $b$ such as prime factors, divisors, gcd, partitions, etc.)? My apologies if there's some sort of easy example out there that I'm missing. (Though in that case I'll push further and ask if it can be generalized to a larger class of examples.) Thanks.

- 3 What about taking some permutation of $\mathbb Q$ which fixes $0$ or $1$ (if you want this new ring to have the same $0$ and $1$ as the original ring $\mathbb Q$; if not, then you don't have to require even this) and conjugating the operations $+$ and $\times$ by this permutation? – darij grinberg Aug 31 2011 at 14:50

Read "and" for "or" in my comment above. – darij grinberg Aug 31 2011 at 14:50

1 Maybe you want "closed formulas" to mean something like "the operations are rational functions"? – darij grinberg Aug 31 2011 at 14:51

I wasn't thinking rational functions because I feel like other properties arising from multiplication and addition should be allowed, such as prime factors, gcd, partitions, etc. I'll edit the original post to reflect this. – Aeryk Aug 31 2011 at 14:55

7 If you are only slightly generous with the notion of formula, you will pick up all countably infinite computable rings, since the formulas will allow you to specify any computable function. For example, Julia Robinson proved that the integers are definable inside the rational field in a primitive manner, and then any computable function y=f(x) is specified by there being an integer solution to a certain diophantine equation involving x and y and other integer variables. So the answer to the question will depend greatly on the details of what kind of definition you allow. – Joel David Hamkins Aug 31 2011 at 15:50

## 6 Answers

Given any bijection $f : \mathbb{Q} \to R$ where $(R,\oplus,\otimes)$ is some (necessarily countable) ring, you'll be able to get a new ring structure $(\mathbb{Q},\boxplus,\boxtimes)$ isomorphic to $(R,\oplus,\otimes)$, by setting: $a \boxplus b = f^{-1}(f(a)\oplus f(b))$ $a \boxtimes b = f^{-1}(f(a)\otimes f(b))$ The nicer $f$ is, the nicer the expressions for $\boxplus$ and $\boxtimes$ will be. Perhaps the simplest examples are if $p \in \mathbb{Q}^\times,\ q\in \mathbb{Q}$, $R = \mathbb{Q}$, then $f(x) = px+q$ will work. This generalizes Neil's and Pace's answers. The "converse" is trivially true, in that if $f : (\mathbb{Q},\boxplus,\boxtimes) \to (R,\oplus,\otimes)$ is an isomorphism from some ring structure on $\mathbb{Q}$ to a ring $R$, then $f$ is a bijection $\mathbb{Q} \to R$ and $a \boxplus b = f^{-1}(f(a)\oplus f(b))$ $a \boxtimes b = f^{-1}(f(a)\otimes f(b))$ So in some sense, the above method for getting a ring structure on $\mathbb{Q}$ is the only way to do it.
The question (more or less) boils down to, "for which rings $(R,\oplus,\otimes)$ is there a 'nice' bijection $\mathbb{Q} \to R$?" It depends, of course, on what you think "nice" means.

- Thanks for the answer. I will now have to think more carefully about what "nice" means. – Aeryk Aug 31 2011 at 21:56

The simplest example is just $x\boxplus y=x+y$ and $x\boxtimes y=-xy$. This gives a ring structure with $-1$ as the multiplicative identity. The map $x\mapsto -x$ gives an isomorphism $(\mathbb{Q},\boxplus,\boxtimes)\simeq(\mathbb{Q},+,\times)$.

- Does this generalize? If this is the simplest example, is there a next simplest example? – Aeryk Aug 31 2011 at 18:48

$x\oplus y=\mathrm{min}(x,y)$, $x \otimes y=x+y$ gives a semiring. The direct sum of a countable collection of finite rings gives a ring. Or lots of other things. There is certainly no classification of the countable rings.

The answer is yes (unless I made a mistake somewhere). For example, you can replace addition with the operation $a\oplus b=a+b-1$. This is a commutative, associative binary operation with identity $1$ and the inverse of $a$ is given by $2-a$. You replace multiplication by $a\odot b= a+b-ab$. This is a commutative, associative binary operation with identity $0$. All that remains is to show that the distributive laws hold.

- 1 You don't need to show any laws, just observe that $x\mapsto 1-x$ is an isomorphism of your structure with the usual $(\mathbb Q,+,\cdot)$. – Emil Jeřábek Aug 31 2011 at 18:48

True. But that seems to me to be just about as difficult (although it does give an idea of where it comes from). – Pace Nielsen Aug 31 2011 at 18:54

1 @Emil: I think this gives me some good insight. True or False: Any linear map $x \to ax+b$ will induce corresponding operations under which 1) $\mathbb{Q}$ is a ring and 2) the linear map gives the isomorphism with the standard operations? I would conjecture now that this is true. – Aeryk Aug 31 2011 at 18:55

Yes, any bijection will do, it does not have to be linear, but I gather you learned this meanwhile from AKG's answer. – Emil Jeřábek Sep 1 2011 at 10:42

Since the question is somewhat fuzzy, I am not 100% sure what would satisfy you as an answer. Based on nothing, I am guessing you want something a little stranger than the examples Neil and Pace have provided, but a little simpler or more concrete than Joel's family of examples. There is a somewhat cute, if useless, ring structure for $\mathbb{Q}_+$ (the positive rationals) which might still be of interest to you. I haven't thought of whether it can be extended to $\mathbb{Q}$ in a nice way. The idea is that the fundamental theorem of arithmetic (existence and uniqueness of factorization in $\mathbb{N}$) gives a bijection between $\mathbb{Q}_+$ and sequences of integers with finitely many nonzero elements. Namely each number is mapped to the sequence of powers of primes in its factorization, i.e., $\frac{3}{4} = 2^{-2}\cdot 3^1 \mapsto (-2,1,0,0,\ldots)$. The set of sequences of integers with finitely many nonzero elements already has a familiar ring structure, namely that of $\mathbb{Z}[x]$.
We can use the map above to transfer this ring structure to $\mathbb{Q}_+$, in which case $\boxplus$ is what we usually call multiplication (the cute part) and $\boxtimes$ is defined by the ring axioms and the condition that $p_i \boxtimes p_j = p_{i+j}$, where $p_0=2, p_1=3, p_2 = 5,\ldots$ is the sequence of primes. For example, you can check that $\frac{3}{4}\boxtimes 6 = \frac{5}{12}$. Computationally the downside is that you need to be able to factor into primes to be able to do $\boxtimes$ (as far as I know).

- Nice example. I suppose the 'nicest' example of a different ring structure on $\mathbb{Q}$ would be to take Cantor's bijection $f:\mathbb{Q}\rightarrow \mathbb{Z}$ and then apply Amit's answer, using the usual ring structure on $\mathbb{Z}$. – Pace Nielsen Aug 31 2011 at 19:30

A more general formulation may be as follows, and close to what you desire. Given the set $\mathbb{Q}$ of rational numbers and a (necessarily finite) set of basic operations of finite arity, consider the set $T$ of all term operations formed through composition from the basic operations. (Some of the operations can have arity 0, so they look like constants or constant functions.) How many pairs $(a,m)$ of terms from $T$ can one form so that the structure $\langle \mathbb{Q}, a, m \rangle$ is a ring? I do not know where in the general algebra literature this is covered. Search terms that come to mind are cryptomorphism, term equivalent or polynomially equivalent algebras, and interpreting one structure inside another. I hope this helps. If I had to guess, I would guess that there are infinitely many pairs of terms making $\mathbb{Q}$ into infinitely many distinct rings, given $+$, $*$, $0$ and $1$.

- I haven't checked the details, but I nominate (a term corresponding to) a+a+b+b as a new form of addition. If that works, then there are countably many distinct examples indeed. Gerhard "Ask Me About Quick Guessing" Paseman, 2011.08.31 – Gerhard Paseman Aug 31 2011 at 15:32

Nope, associativity fails. I guess I had best review my notes on hyperassociativity. Gerhard "Back To The Scratch Pad" Paseman, 2011.08.31 – Gerhard Paseman Aug 31 2011 at 15:37
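Both constructions above are easy to machine-check. The sketch below is my own (names and the choice $f(x)=3x-7$ are illustrative, not from the thread): first the transport of structure along a linear bijection, then the $\mathbb{Q}_+$ ring where $\boxtimes$ multiplies prime-exponent vectors as polynomials in $\mathbb{Z}[x]$, reproducing $\frac{3}{4}\boxtimes 6 = \frac{5}{12}$.

```python
from fractions import Fraction

# (1) Transport (Q, +, *) along f(x) = p*x + q, as in AKG's answer.
p, q = Fraction(3), Fraction(-7)          # any p != 0, q work
f     = lambda x: p * x + q
f_inv = lambda y: (y - q) / p

boxplus  = lambda a, b: f_inv(f(a) + f(b))
boxtimes = lambda a, b: f_inv(f(a) * f(b))

a, b, c = Fraction(1, 2), Fraction(-4), Fraction(5, 3)
# distributivity is inherited from (Q, +, *):
assert boxtimes(a, boxplus(b, c)) == boxplus(boxtimes(a, b), boxtimes(a, c))

# (2) The ring on Q_+ via prime exponent vectors.
def exponents(r):
    """Prime exponent dict of a positive rational, by trial division (toy)."""
    e = {}
    for sign, n in ((+1, r.numerator), (-1, r.denominator)):
        d = 2
        while n > 1:
            while n % d == 0:
                e[d] = e.get(d, 0) + sign
                n //= d
            d += 1
    return e

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # enough for small inputs

def qtimes(r, s):
    """'boxtimes': multiply exponent vectors as polynomials in Z[x]."""
    er, es = exponents(r), exponents(s)
    out = Fraction(1)
    for i, pi in enumerate(PRIMES):
        for j, pj in enumerate(PRIMES):
            k = er.get(pi, 0) * es.get(pj, 0)
            if k:
                out *= Fraction(PRIMES[i + j]) ** k
    return out

print(qtimes(Fraction(3, 4), Fraction(6)))   # 5/12, as claimed in the answer
```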
http://mathoverflow.net/questions/42582?sort=oldest
## Penrose tilings and noncommutative geometry

Are there "elementary" resources on Penrose tilings in relation to noncommutative geometry? It's all a big blur to me. There are two transformations S and T that can grow the tilings and every tiling corresponds to a sequence of S's and T's. Somehow, there is a C* algebra related to this. Can the elements of this algebra be interpreted as observables in a quantum mechanical system? What is the "geometry" of this noncommutative space?

- books.google.com/… – Steve Huntsman Oct 18 2010 at 6:39

1 What would count as elementary? I don't know of any account of NCG that is both honest and elementary. But then again I find some of the advertising surrounding NCG rather post hoc propter hoc, so I am not the best person to venture opinions here... – Yemon Choi Oct 18 2010 at 7:34

2 Another reference is Alain Connes's monograph on Noncommutative Geometry, downloadable from alainconnes.org/en/downloads.php . – Robin Chapman Oct 18 2010 at 7:55

1 LOL, the word "elementary" can be used for anything from high school to beginning graduate level depending on the audience. So I wouldn't be too bogged down by it. I think Prof Shor hits it on the head. Connes' own discussion is a little terse and requires some C* algebras and K-theory. – John Mangual Oct 19 2010 at 0:31

1 I find myself liking the 2nd para of this question more than the 1st – Yemon Choi Oct 21 2010 at 8:08

## 6 Answers

These (1, 2, 3) look to me elementary, but I only browsed them very superficially. Edit: A recent article: "C*-algebras of Penrose hyperbolic tilings".

I don't know about the relation to non-commutative geometry, but for the C*-algebra you should take a look at this paper: J. E. Anderson and I. F. Putnam, Topological invariants for substitution tilings and their associated C*-algebras, Ergodic Theory & Dynamical Systems 18 (1998), 509–537.

Much of the work of Johannes Kellendonk deals with physical applications of Penrose tilings. Not sure I'd call this "elementary", though -- but your mileage might vary.

Chapter 9 in "A Walk in the Noncommutative Garden" by Alain Connes and Matilde Marcolli, and you can find the references there.

I don't have any magical references for you, nor do I understand the NCG point of view on the Penrose tiling all that well. I learned just enough about this to convince myself that I didn't need to learn more, and I'll try to convey the solace that I achieved. First, let me say a few words about the philosophy of Connes' NCG. The basic premise is that sometimes when you have a space equipped with an equivalence relation it is not necessarily a good idea to pass to the space of equivalence classes. The best-advertised justifications for this premise are examples where the space is nice and the equivalence relation is nice but the quotient is miserable, but I want to remark before going on that the tools of NCG are still extremely useful when the quotient is nice (such as when a manifold is viewed as the quotient of its universal cover by its fundamental group). Here is how the philosophy plays out for Penrose tilings.
One regards a Penrose tiling as a tiling of the plane by isometric copies of two specific triangles that are only allowed to connect in a few very specific ways. See page 181 of Connes' book for pictures (and a more detailed, but elementary, description of what I am about to say). We declare two Penrose tilings to be equivalent if there is an isometry of the plane which carries one to the other. There is a more down to earth way to express the space of Penrose tilings modulo this equivalence relation by interpreting it combinatorially. It is the space of sequences $(a_n)$ of $0$'s and $1$'s with the property that $a_{n+1} = 0$ whenever $a_n = 1$, modulo the equivalence relation of eventual equality: $a_n = b_n$ for $n$ sufficiently large. (A small enumeration of these admissible sequences appears after this thread.) What isn't necessarily obvious at the outset is that this space has absolutely no sensible local structure. In terms of the Penrose tilings, this can be expressed by observing that any finite patch in one tiling appears infinitely often in any other tiling by the same tiles.

There is by now a standard way of fitting such a setup into the machinery of NCG. Any equivalence relation on a space gives rise to a certain groupoid whose objects are points in the space and whose morphisms are determined by the equivalence relation. The idea is supposed to be that the groupoid keeps track of which points in the space are equivalent as well as the reason why they are equivalent, rather than violently collapsing each equivalence class down to a single point. In most cases it is possible to equip the groupoid with a compatible system of measures and then use integration with respect to that system to define a convolution product on a suitable space of functions on the groupoid (generally the original space and the groupoid have a topology and the space of functions is the space of continuous functions). The C* algebra of the groupoid is defined as a certain completion of this convolution algebra.

Notice that none of that last paragraph involves the details of the Penrose tiling construction in any essential way: it is a purely mechanical procedure which starts with an equivalence relation and spits out a C* algebra. One can think of this as analogous to the procedure of replacing a function with an operator (in this case convolution against the function) which flies under the moniker "quantization" in physics. Indeed, various sorts of quantization in physics can be realized via convolution algebras on groupoids - though the groupoids are generally more complicated than those which arise from an equivalence relation. As far as I know, the relationship between Penrose tilings and physics ends with this analogy. I could be tragically wrong.

So when you ask about the "geometry" of the noncommutative space of Penrose tilings, you are really asking about the structure of the C* algebra spat out by the machine described above. What is the structure of this C* algebra? I don't really know. Connes claims that it has trivial center and a unique trace, which has consequences on its "measure theoretic" structure. It was at this point in the story that I noticed that I'm more interested in the geometry of manifolds and decided to move on. Still, it wouldn't surprise me if there turns out to be way more to this story than meets the eye.

- 1 Awesomely informative answer! – Steven Gubkin Oct 21 2010 at 12:46

Just to add that more detail can (I'm wagering) be found in the works of Kellendonk and Putnam that are referred to below, and other authors no doubt.
– Yemon Choi Oct 21 2010 at 18:22 There is a concise description of this in section 1.1 of this nice paper Trees, Ultrametrics, and Noncommutative Geometry by Bruce Hughes: http://arxiv.org/abs/arXiv:math/0605131. This isn't particularly elementary, I guess, but it is a line of investigation that embarks from Connes's example.
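Following up the forward reference above, here is a quick enumeration (my own sketch, not from the thread) of the finite admissible $0/1$ words, i.e. those in which every $1$ is followed by a $0$. Their counts grow like Fibonacci numbers, which is one standard way to see that the space of all infinite admissible sequences, before quotienting by eventual equality, is a Cantor set.

```python
from itertools import product

def admissible(word):
    # the rule from Connes' coding: a 1 must always be followed by a 0
    return all(not (word[i] == 1 and word[i + 1] == 1)
               for i in range(len(word) - 1))

for n in range(1, 10):
    count = sum(1 for w in product((0, 1), repeat=n) if admissible(w))
    print(n, count)   # 2, 3, 5, 8, 13, ... : Fibonacci growth
```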
http://mathoverflow.net/questions/66176?sort=votes
## Which Turing machines accept the language of trivial words in a finitely presented group?

Let $G$ be a finitely presented group with generators $g_1, g_1^{-1},\ldots, g_n, g_n^{-1}$. Let $L(G)$ be the language of all those words in $g_1, \ldots, g_n$ which represent the trivial element of $G$. It's well known that there exists a Turing machine $T$ which accepts $L(G)$ (it doesn't necessarily always stop). Conversely, given an alphabet $A$ consisting of symbols $g_1, g_1^{-1}, \ldots, g_n, g_n^{-1}$, and a language $L$ on $A$ accepted by a Turing machine $T$, it's easy to give necessary and sufficient conditions on $L$ so that for some group $G$ we have $L=L(G)$. Namely, $L$ should be closed under (1) concatenation, (2) reductions and additions of the terms $g_ig_i^{-1}$ and $g_i^{-1}g_i$, (3) "conjugation", i.e. given $w\in L$ the words $gwg^{-1}$ and $g^{-1}wg$ are also in $L$.

Question 1. Is there a set of conditions on a Turing machine $T$ which assures that the language $L(T)$ accepted by $T$ fulfills the conditions (1)-(3) above? For the purpose of this question "a set of conditions" means an algorithm which always stops, which takes as the input a Turing machine $T$, and if $L(T)$ fulfills (1)-(3) then the algorithm returns YES (if it returns NO then it can be either way). Of course I'm interested in algorithms which output YES on a possibly big set of Turing machines.

Question 2. Is there an algorithm as above which returns YES exactly on the set of those machines $T$ such that $L(T)$ fulfills conditions (1)-(3).

- 2 In your conditions (1)–(3), don't you also need (4) closed under inverse (defined in the natural way) and (5) L is nonempty? For example, I do not think that your conditions (1)–(3) rule out the language L which corresponds to the semigroup generated by the elements conjugate to g_1. – Tsuyoshi Ito Jul 20 2011 at 20:09

## 5 Answers

The answer to Question 2 is no, by Rice's theorem. The intuitive content of that theorem is that any non-trivial property of the (partial) function computed by a Turing machine will be undecidable as a property of the machine's program.

There are the so-called Miller machines (see my paper). These are not quite Turing machines, but can be easily converted into Turing machines. A Miller machine is defined by a collection of words $r_1,\ldots,r_n$. The commands are $q\to r_i^{\pm 1} q$ and $q\to aqa^{-1}$, where $a$ is any letter of the alphabet and $q$ is the (only one!) state letter. Unlike a Turing machine, a Miller machine works with words which may contain inverses of the letters, and reduces a word after every step (this is a particular case of the so-called S-machines). The word $w$ is accepted if the machine takes $wq$ to $q$. Clearly the set of words accepted by the Miller machine is exactly the set of words equal to 1 modulo the relations $r_i=1$, $i=1,\ldots,n$.

Given a Turing machine $T$, one can close it under the conditions (1), (2), (3) as follows. Given the word $w$, run machine $T$ on $w$ as well as all the words $w'$ obtained by concatenating, reducing, and conjugating. The set of such words $w'$ is recursively enumerable. Then, simply dovetail the computations which correspond to each word $w'$. If any $w'$ is accepted, output YES.
Otherwise, the machine runs indefinitely. This process produces a Turing machine $T'$ which is closed under the conditions (1)--(3). Furthermore, it is easy to check if a given machine $T'$ is produced in this way (i.e. $T'$ is formed by closing some other machine $T$). So this gives a partial positive answer to Question 1.

As mentioned in an earlier comment, your conditions for a language to be the word problem of some group do not seem to be complete. In Proposition 3.3 of

D. W. Parkes and R. M. Thomas, Groups with context-free reduced word problem, Communications in Algebra 30 (2002), 3143–3156,

necessary and sufficient conditions for an arbitrary subset $W$ of the set of words $\Sigma^*$ over a nonempty alphabet $\Sigma$ to be the word problem of a group having $\Sigma$ as monoid generating set are established. These are: (1) For any $\alpha \in \Sigma^*$ there exists $\beta \in \Sigma^*$ with $\alpha\beta \in W$. (2) If $\alpha,\beta,\gamma \in \Sigma^*$ with $\alpha\beta\gamma \in W$ and $\beta \in W$, then $\alpha\gamma \in W$. Even if you add closure under inversion to your conditions, then it is not clear that they imply (1) and (2). I would guess that Andreas Blass' negative answer still applies to the conditions (1) and (2).

Further to Derek Holt's answer, the negative result appears to apply to those conditions as well. In fact, you can't even decide if a PDA (let alone a TM) accepts the word problem of a group, as stated in the introduction to:

Stephen R. Laken and Richard M. Thomas, Space Complexity and Word Problems of Groups, Groups-Complexity-Cryptology Volume 1 (2009), No. 2, 261-273.

The proof is simple and relies on the well-known fact that one can't decide whether the language a PDA accepts over an alphabet $\Sigma$ is equal to $\Sigma^{*}$.
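To make the opening claim concrete, that $L(G)$ is always accepted by some machine, here is a small breadth-first search of my own (a budgeted sketch: the step and length caps stand in for "may run forever", and the presentation $\langle a,b \mid aba^{-1}b^{-1}\rangle$ is just a demo choice). It semi-decides triviality by freely reducing and inserting relators.

```python
from collections import deque

# Words are strings over a, A, b, B, where A = a^{-1} and B = b^{-1}.
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduce_word(w):
    out = []
    for ch in w:
        if out and out[-1] == INV[ch]:
            out.pop()            # cancel x x^{-1}
        else:
            out.append(ch)
    return "".join(out)

def neighbors(w, relators):
    # insert a relator or its inverse at every position, then freely reduce
    rels = relators + ["".join(INV[c] for c in reversed(r)) for r in relators]
    for i in range(len(w) + 1):
        for r in rels:
            yield reduce_word(w[:i] + r + w[i:])

def is_trivial(word, relators, max_states=100000):
    """Semi-decide word == 1 in <a,b | relators>; None means 'gave up'."""
    start = reduce_word(word)
    seen, queue = {start}, deque([start])
    while queue and len(seen) < max_states:
        w = queue.popleft()
        if w == "":
            return True
        for v in neighbors(w, relators):
            if v not in seen and len(v) <= len(start) + 8:  # crude length cap
                seen.add(v)
                queue.append(v)
    return None  # undetermined within the budget (the search is incomplete)

# Z^2 = <a, b | aba^{-1}b^{-1}>: the commutator of a and b is trivial
print(is_trivial("abAB", ["abAB"]))    # True
print(is_trivial("aabAB", ["abAB"]))   # None (this word equals a, not 1)
```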
http://math.stackexchange.com/questions/57404/meaning-of-a-mapping-factors-over-another
# Meaning of "a mapping factors over another"?

I was wondering what "a mapping factors over another mapping" generally means. Does it have something to do with commutative diagrams in category theory? I have seen this usage in different situations, and would like to give examples, but cannot recall them one by one except the most recent one from Fiber bundle: Mappings which factor over the projection map are known as bundle maps. Thanks and regards!

- 1 I would assume that $f$ factors over $g$ if there is some $h$ such that $f=h\circ g$ (or maybe $f=g\circ h$). I'm not certain, though. – Asaf Karagila Aug 14 '11 at 13:11

The meaning is equivalent to the standard meaning with numbers: you have two maps $f,g$, and you want to find -- if possible -- a "factorization" through a map $h$ so that $h\circ g=f$. Depending on the context, there are many theorems telling you when this factoring is possible; if you are working with groups, then you need some condition on the kernels of certain maps; in other cases you may be working on a quotient space and you want a map to be constant on equivalence classes. – gary Aug 14 '11 at 13:15

In the specific case you link to, bundle maps are maps from the base space $B$ to the fiber, i.e., you choose $b$ in $B$ so that $b$ is sent to its fiber $\pi^{-1}(b)$, and then you compose with the projection map $\pi$ to get $b$ back. As an example, if your fiber is a vector space (as in a line bundle, aka vector bundle with dimension 1), a section maps $b$ to the vector space associated with $b$ under the projection; composing with the projection then gives you $b$ back. – gary Aug 14 '11 at 13:24

To be more precise, you have two maps $f,g$ both into a common space $X$, and you want to find a map $h$ that "makes the diagram commute", i.e., you have maps, say, $f$ between $A$ and $X$, $g$ between $B$ and $X$, and you want to find a map $h$ between $A$ and $B$ so that $g\circ h=f$. This happens in many "categories", so I can only give a broad description -- but, hey, don't get me wrong, I love broads. – gary Aug 14 '11 at 13:29

@gary: about the example in your broad description, for given f and g, can there be more than one mapping that makes f factor over g? – Tim Aug 14 '11 at 13:31

show 3 more comments

## 2 Answers

As was pointed out in the comments: Given a morphism $f:A \to C$ and a morphism $g: A \to B$ then $f$ is said to factor over $g$ if there exists $h: B \to C$ such that $hg = f$. Note that $h$ is unique as soon as $g$ is an epimorphism. In the case of bundles $\pi_{E}: E \to M$ and $\pi_F: F \to N$ then a map $\varphi: E \to F$ is a bundle map if there exists $f:M \to N$ such that $\pi_{F} \varphi = \varphi \pi_{E}$, as is noted further down on the wikipedia page you linked to. So $\varphi : E \to F$ is a bundle map if and only if $\pi_{F} \varphi$ factors over $\pi_{E}$ via a (necessarily unique) map $f: M \to N$, as the bundle projections are assumed to be surjective. Note that this means in particular that $\varphi$ maps fibers to fibers. Similarly, there is the notion of factoring through (you need to scroll down a little). I'm mostly using this for the situation $f: A \to C$ and $h: B \to C$; then $f$ factors through $h$ if there is $g: A \to B$ such that $f = hg$. If $h$ happens to be a monomorphism then $g$ is unique (if it exists). However, the distinctions I'm making in this post are far from universally accepted.
You'll also find $f:A \to C$ factors through $B$ if there are maps $g: A \to B$ and $h: B \to C$ such that $f = hg$ and so on, $f$ factors through $g$, $f$ factors through $h$ in this same situation. Basically, all it means is that $f$ can be decomposed as $f = hg$ in some way that should be obvious from the context. To repeat, if either $h$ is a monomorphism then $g$ is unique and if $g$ is an epimorphism then $h$ is unique. - There is also the abuse of terminology that I sometimes see (and use), where $f:A\to A$ and $h:A\to B$, and factor through is taken to mean $f$ "commutes" with $h$: $\exists g:B\to B$ so that $hf = gh$. Also, completely agree that usually the distinction between Theo's "through" and "over" is completely clear from context. – Willie Wong♦ Aug 14 '11 at 14:06 @Willie: right... That would be the special case of a bundle endomorphism, for example. What would you use for the situation $f: A \to A$ and $h: B \to A$ "$f$ lifts over $h$"? – t.b. Aug 14 '11 at 14:09 I tend to think of $h$ as the map doing the lifting. So I would write "$h$ lifts $f$ to a map $g$" or "$f$ can be lifted by $h$ to a map $B\to B$". But that may be a personal quirk. – Willie Wong♦ Aug 14 '11 at 14:18 The most general, broader description of your question is that you have two sets $A,B$ and maps $f,g$ into a common third space $C$, i.e., you have ; $f:A\rightarrow C$, and you have $g\colon B\rightarrow C$, and then you want to see if there exists a map $h$ between $A$ and $C$, so that "the diagram commutes" (if you ever try to do algebraic topology you will see this everywhere; in analysis, almost everywhere ; ) , i.e., if you follow the arrows and do the composition from $A$ to $B$ to $C$ , composing $g\circ h$, you will get the same result as if you go along the arrow from $A$ to $C$ alone. It is difficult to generalize because the results will depend on the structure of $A,B$ and $C$ given. In the case of bundles you mention, a bundle map associated with a bunlde $\pi\colon E\rightarrow B$ is a map that sends an element $b$ in the base to its fiber under $\pi$, so that, if you compose back the fiber element, you get $b$ back, i.e., $\pi \circ f(b)=b$ e.g., if your fiber is a vector space, your bundle map will send $b$ to an element in the vector space, so that the composition with $\pi$ will project back down to $b$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 89, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9551683664321899, "perplexity_flag": "head"}
http://mathoverflow.net/questions/73603/how-to-calculate-a-fredholm-index-numerically
## How to calculate a Fredholm index numerically

How can one calculate the index of a Fredholm operator numerically? In numerical calculations one always uses finite-dimensional spaces, but linear operators on finite-dimensional spaces always have index zero.

- 3 If you happen to be lucky enough for the operator to be an (ordinary) differential operator, e.g. $Ly = y' - A(x)y$, and you are even more lucky so that the limits $A(\pm \infty)$ exist and have spectrum away from the imaginary axis, then the Fredholm index of $L$ is the difference between the Morse index of $A(\infty)$ and the Morse index of $A(-\infty)$. – Aaron Hoffman Aug 24 2011 at 20:03

Are you thinking of operators on particular (function) spaces, or a Fredholm operator in full generality? – Yemon Choi Aug 24 2011 at 20:50

1 A linear operator between two DISTINCT finite-dimensional vector spaces does not have index zero, and that may help. – Alain Valette Aug 25 2011 at 7:27

@Aaron: of course, assuming $A(x)$ is a path of operators on a finite dimensional space. – Pietro Majer Aug 25 2011 at 7:29

Alain ... but it has non-zero index for a TRIVIAL reason! – Helge Aug 25 2011 at 11:06

## 1 Answer

The two key properties of the Fredholm index are

• It is a (norm-)continuous function from the Fredholm operators to the integers. In particular, if $A$ is a Fredholm operator, then there exists $\delta > 0$ such that for $\|A - B\| < \delta$, we have $index(A) = index(B)$. This tells you that you can approximate your problem.

• The Fredholm index doesn't see compact perturbations. So if $A$ is Fredholm and $K$ is compact, then $index(A +K ) = index(A)$. This tells you that you cannot do naive computations like picking some finite orthonormal set $\psi_{j}$ with $j=1,\dots,N$ and hoping that the $N \times N$ matrix $$A_{j,k} = \langle \psi_j, A \psi_k\rangle$$ tells you anything about the Fredholm index of $A$.

So you will now need to do something smarter. This is possible in many particular cases, for example for Toeplitz operators. The first property allows one to reduce the computation of the index to the computation of the winding number of a polynomial. Or the Atiyah--Singer index theorem reduces computing the index to some topological information ... So to get a more meaningful answer, you will need to be more specific about the problem.

- For more refined invariance theorems on the Fredholm index check Hörmander's The Analysis of Linear Partial Differential Operators. – Pietro Majer Aug 25 2011 at 7:32

Which volume? There are 4 if I remember correctly... – Helge Aug 25 2011 at 11:06

It's Vol. III, Chapter 19 (Elliptic Operators on a Compact Manifold Without Boundary), Sec. 19.1 Abstract Fredholm Theory. – Pietro Majer Aug 25 2011 at 12:07
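The Toeplitz case mentioned in the answer can be made fully numerical. For a continuous nonvanishing symbol $\phi$ on the unit circle, the Toeplitz operator $T_\phi$ is Fredholm with $\operatorname{ind} T_\phi = -\operatorname{wind}(\phi)$, and the winding number is exactly the kind of stable, discrete quantity one can compute; the sketch below (my own) sums phase increments around the circle.

```python
import numpy as np

def winding_number(phi, n=4096):
    """Winding number of t -> phi(e^{it}) about 0, via phase increments."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = phi(np.exp(1j * t))
    dphase = np.angle(z / np.roll(z, 1))   # increments in (-pi, pi]
    return int(round(dphase.sum() / (2 * np.pi)))

# symbol z^3 + 0.2/z = z^{-1}(z^4 + 0.2): nonvanishing on |z| = 1, winds 3 times
phi = lambda z: z**3 + 0.2 / z
print(-winding_number(phi))   # Fredholm index of T_phi: -3
```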
http://nrich.maths.org/1013/note?nomenu=1
## Make 100

Find at least one way to put in some operations signs ($+$, $-$, $\times$, $\div$) to make these digits come to $100$.

# 1    2    3    4    5    6    7    8    9  =  100

### Why do this problem?

This problem offers great opportunities for mental arithmetic and estimation. It can also be used as an opening to discussing the order of operations.

### Possible approach

Display the numbers $1$ - $9$ on the board and ask the children to add them up. (They might do this in any order, perhaps noticing that pairs from either end add to $10$.) As they explain their working, record it in order on the board, for example: $1+ 9 +2 + 8 + 3 + 7 + 4 + 6 + 5 = 45$ Ask if they can suggest a way to make the answer bigger, but still only using the numbers $1$ - $9$. Again, record the calculations on the board in the order that the children say them. This is likely to involve a multiplication sign. Ask if they can make it even bigger. Again, record the calculations. Then offer the problem. Allow some time for children to work, possibly in pairs, and provide calculators for them to use to check their arithmetic if necessary. Provide a central wall space for children to record their solutions. This would make an ideal 'simmering' activity that could go on for a week or more. (See the extension questions below.)

### Key questions

How close can you get to 100 with just adding? What operation might you use to make the result bigger? Which sorts of calculations make the most difference to the total? Which numbers less than $100$ is it possible to make? What other questions can you suggest? Do you get the same answer every time from your string of calculations? If not, why don't you?

### Possible extension

An additional challenge would be for the children to decide on their own target number and see if they can make it using $1$ - $9$. If your children know and use the convention of the order of operations this can be an opportunity to ask whether the order that they have written the calculations in is the same as the order in which they would do them. Is there a better way they could write the same calculation, using the correct order of operations?

### Possible support

The numbers could be written on separate pieces of paper, together with several $+$, $-$, $\times$ and $\div$ signs. Being able to rearrange the numbers can sometimes help to see patterns or number bonds that help with calculations. These printed digit and operation cards could also be used. And whilst the problem offers great opportunities for mental arithmetic and estimation, pupils who are less confident at these could use a calculator. This would help to support their estimation skills and include them in a whole class activity.
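For teachers who want the full solution set, a short brute-force search is easy to write. This sketch (my own) follows the common convention that adjacent digits may also be concatenated, writes $\times$ and $\div$ as `*` and `/`, and relies on Python's `eval`, which applies the standard order of operations discussed above.

```python
from itertools import product

digits = "123456789"
ops = ["", "+", "-", "*", "/"]   # "" means: concatenate the adjacent digits

solutions = []
for combo in product(ops, repeat=8):
    expr = digits[0] + "".join(op + d for op, d in zip(combo, digits[1:]))
    if abs(eval(expr) - 100) < 1e-9:   # eval uses the usual order of operations
        solutions.append(expr)
# note: every operand is a positive integer, so division by zero cannot occur

print(len(solutions), "expressions equal 100")
print("123-45-67+89" in solutions)     # one classical answer: True
```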
http://www.physicsforums.com/showthread.php?t=420232
Physics Forums

## Volume of electron

Has the volume of an electron ever been calculated? If yes, then what is its value?

At the length scales of the electron, I don't think volume is a meaningful quantity any longer. Perhaps a professional physicist could give you a more satisfactory answer, but I doubt it.

A volume can be defined for the space where it's probably located. But this depends on its state, i.e. you have to specify which electron you are talking about: one that is free, one that is in a field, one that is bound to an atom, one that is inside a solid, etc.

The electron is a point particle. It has zero volume.

The bare electron is believed to be a point particle, but the bare electron is "shielded" by a cloud of virtual electron-positron pairs (vacuum polarization) that reduces the observed charge at large distances (low momentum transfers). This correction was first calculated by Uehling in 1935 (Phys Rev 48 55). The Uehling correction (around an atomic nucleus) is an important contribution to atomic energy levels in pionic and muonic atoms, which penetrate the virtual electron-positron cloud. Bob S

Quote by Bob S: The bare electron is believed to be a point particle, but the bare electron is "shielded" by a cloud of virtual electron-positron pairs (vacuum polarization) [...]

So, mathematically, the electron can be seen as a type of singularity, or rupture, that generates a smoothing effect around it called the Uehling correction?

Classically there is http://en.wikipedia.org/wiki/Electron_radius where we picture the electron occupying a volume of a sphere with that radius. And from the main wikipedia page on "electron": "it is defined or assumed to be a point particle with a point charge and no spatial extent."

Quote by jnorman: the electron is a point particle. it has zero volume.

Correct me if I'm wrong, but mathematically having zero volume would mean that the electron occupies no space even though it has mass. Can anything have zero volume without being massless?

Doesn't the HUP and the de Broglie wavelength come in here? 'Having a volume' relates to 'where you are likely to be found or detected', so wouldn't the effective volume of a particle relate to how well specified its momentum was?

There are several different levels of confusion here. A ball has a volume AND a position. Typically, the position is the location of the center of mass or center of volume. Several people here have confused the spread in location as being the volume of the object. This is incorrect. The HUP, for example, deals with the LOCATION of the object, and not just for an electron (point particle), but also something with a "volume", such as a proton, neutron, Buckyball, etc. To answer the OP, to the best of our knowledge now, an electron has no volume.
Theories that have been shown to have a high degree of validity, such as QED, treat electrons as point particles, as has been mentioned. Now, whether later on we will discover that the electron has some sort of a volume, that's a matter of speculation. But if you want to know what we do know now as far as our state of knowledge goes, that is it. The issue of whether something with no volume can have mass is a different bag of worms. If you buy into the Higgs mechanism, then this will no longer be an issue. This is because essentially all elementary particles are massless (yes, even the ones with "volumes"). It is how they interact with the Higgs that endows them with masses (ignoring the fact that there are indications that the quark masses may not be entirely due to the Higgs). One can easily see how this could occur because there are many systems in which the electron acquires a large mass, sometimes even up to 200 to 400 times its bare mass (see the heavy fermionic compounds). So having a "volume" is not a requisite to having a mass. Zz.

Thank you, ZapperZ, for providing a clarification of the different issues clumped together here! Should we then, for now, rather than worrying about the electron's "volume", be more interested in what particular, observable effects predominate at different distances from the point particle? Charting the local topography, so to speak, around the electron?

How do these different situations you describe change the answer to my question???

Noob question: If all the elementary particles are point particles with no volume, then how do macroscopic objects have volume?

By being empty space brim full of physical interactions between the particles. That will distinguish the region from other regions of empty space, labelling it as the volume of the macroscopic object. Now, don't take my word for this, I merely suggest a feasible way of thinking around this. It's most likely wrong, in many details. For example, SOME fundamental particles might have volume, whereas others don't.

Many experimental observations are explained to a good level of accuracy by theories that assume the electron (and the nucleus too) to be point particles. For electrons, I am not really sure which experiments point towards the finite size of electrons, but I've heard of particle physicists estimating orders of magnitude $\sim 10^{-18}\ \mathrm{m}$ for the 'diameter' of the electron. Atomic physicists are working towards finding the electric dipole moment (i.e. charge distribution) of the electron, which sounds very interesting to me. :)

Quote by Delta²: If all the elementary particles are point particles with no volume, then how do macroscopic objects have volume?

If points have no volume, how can two of them define a one-dimensional line, or three of them define a two-dimensional plane or three-dimensional space? Answer: by not all being in the same location.
Translated to electrons/particles: energy creates momentum and motion, and thereby can generate volume even if the constituent particles do not themselves have volume - as long as they have energy/momentum and the capacity to interact with others (which can be assumed by the fact that they have momentum/energy, I believe).
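A quick numerical aside (editor's sketch, not part of the thread): the "classical electron radius" linked above, and the naive spherical "volume" that it, or the ~$10^{-18}$ m experimental bound quoted earlier, would imply. The constants are standard CODATA values; per the discussion, none of these numbers is a physical volume.

```python
# Editor's sketch (not from the thread): the classical electron radius
# r_e = e^2 / (4*pi*eps0*m_e*c^2) and the naive spherical volumes implied
# by it and by the ~1e-18 m experimental size bound mentioned above.
import math

e, eps0 = 1.602176634e-19, 8.8541878128e-12   # C, F/m
m_e, c = 9.1093837015e-31, 2.99792458e8       # kg, m/s

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"classical electron radius: {r_e:.3e} m")            # ~2.818e-15 m
print(f"naive classical 'volume':  {4/3*math.pi*r_e**3:.3e} m^3")

r_bound = 1e-18   # order-of-magnitude experimental bound quoted in the thread
print(f"volume bound from r < 1e-18 m: {4/3*math.pi*r_bound**3:.1e} m^3")
```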
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9298208355903625, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/36025/explicit-computations-using-the-haar-measure/36090
## Explicit computations using the Haar measure

This question is somewhat related to my previous one on Grassmannians. The few times I've encountered the Haar measure in the course of my mathematical education, it's always been used in a very theoretical setting: in the right setting, it exists, it is unique (if the setting is really nice), and you can integrate against it to define new objects that will have nice properties because the measure itself does. So my question is: how practical is it to compute with? I'm talking about very concrete examples here, e.g. "G=O(4), I integrate f(M)=[some explicit function with a matrix input] and the answer I get is 42". From conversations, I got the feeling that the construction of the Haar measure allows you in principle to write such a computation explicitly. I'm concerned with the tractability of the computation itself. Examples would be great. -

It is unique only up to a positive scaling factor. If you make a construction using a choice of Haar measure, you need to make sure your construction is independent of scaling your measure (trivial example: L^2-spaces w.r.t. Haar measure) to know the construction is intrinsic. – KConrad Aug 18 2010 at 21:13

It is very easy to come up with examples of functions defined on $\mathbb R$ whose integrals are impossibly complicated to carry out... Integrating is difficult to do explicitly, independently of Haar! – Mariano Suárez-Alvarez Aug 18 2010 at 21:29

I think Thierry is aware of both the facts mentioned in the comments above. How about the following line of thought, related to something I've been playing with: we know that in some cases we can exploit the Plancherel formula (just as when doing integrals on the circle we could use Parseval to try and express the integral as a suitable inner product in $\ell^2$). Are there more sophisticated versions to handle, say, a product of three coefficient functions? – Yemon Choi Aug 18 2010 at 21:36

Couldn't refrain from saying this - isn't the answer to all our problems supposedly 42? – Somnath Basu Aug 19 2010 at 0:05

Well... some of them, especially if we use Davidac897's example with p = 43 – stankewicz Aug 24 2010 at 14:01

## 12 Answers

You might be interested in Weyl's Integration Formula. There are versions of this for all compact Lie groups; I'll state it for the unitary group. Let $f$ be a conjugacy invariant function on $U(n)$. A unitary matrix always has eigenvalues of the form $(e^{i \theta_1}, e^{i \theta_2}, \ldots, e^{i \theta_n})$, and $f$ is a symmetric function of $(\theta_1, \theta_2, \ldots, \theta_n)$. Then `$$\int_{U(n)} f(A) \ dA = \frac{1}{(2 \pi)^n n!} \int_{\theta_1=0}^{2 \pi} \int_{\theta_2=0}^{2 \pi} \ldots \int_{\theta_n=0}^{2 \pi} \prod_{j<k} |e^{i \theta_j} - e^{i \theta_k}|^2 \cdot f(\theta_1, \ldots, \theta_n) \ d \theta_1 \ \cdots \ d \theta_n$$` Here Haar measure is normalized so that the unitary group has volume $1$. There are plenty of proofs in books and online; I like the write-up in Fulton and Harris's Representation Theory. -

To make computations you need to find an example of a Haar measure on your group.
The first few exercises in Section 5, Chapter XII of Lang's Real and Functional Analysis give formulas for Haar measure on some groups (exercise 9 is a nonabelian group). Chapter 14 of Royden's Real Analysis gives a method of Hurwitz for computing Haar measure on Lie groups. Hurwitz himself worked this out for orthogonal groups in the late 19th century, before general invariant measures on locally compact groups were known to exist. -

Excellent! I'll be sure to check out these references--if the new semester will give me a fighting chance! Of course, 19th century mathematicians were so much more focused on explicit computations too. So primary sources might be very helpful to get a handle on this. – Thierry Zell Aug 18 2010 at 21:43

Thierry, ever notice that, despite apparently "analytic" nature of construction, for all matrix gps you've seen, left Haar is integration against top-degree difftl form that's "algebraic" (wrt matrix entries)? (NB: variety product does not have product topology!) Reason is left-invariant top-degree diff'tl forms (used to make the volume form on a Lie gp) works algebraically. That is, on any smooth group variety over any field, vector space of left-inv't top-deg difftl forms is 1-dim'l. See sec. 4.2 of "Neron models", a functorial/scheme version of Lie gp proof, constructive in affine case. – BCnrd Aug 18 2010 at 22:04

Thierry, the same kind of algebraic methods also give a conceptual proof that the modulus character is likewise "algebraic" with respect to the matrix entries (i.e., an algebraic homomorphism $G \rightarrow {\rm{GL}}_1$). The content is that the 1-dimensional vector group of left-invariant top-degree difftl forms is an algebraic repn space for the group (best seen by thinking functorially via Yoneda's Lemma, hence allowing the ring to be general and not just a field); see Prop. 4 in sec. 4.2 of the book "Neron Models". – BCnrd Aug 18 2010 at 22:07

Last point (from me) on this theme. One consequence of algebraicity of the modulus character is that as long as $G(\overline{k})$ is its own derived group, the modulus character must be trivial (since can compute that on $k$-points). This applies even when the group $G(k)$ of rational points over the ground field (like reals, complex, p-adic numbers, etc.) is not its own derived group. Good example for that is ${\rm{PGL}}_n$ (for $n$ such that $k^{\times}$ is not its own $n$th-power subgroup), and various other connected semisimple groups that aren't simply connected. – BCnrd Aug 19 2010 at 2:05

Dear Florian: Oops, indeed I forgot to put in the absolute value (or idelic norm in the case of group of adelic points). I was thinking of the algebraic character as the more basic object (since it interacts well with ground field extension, and passage to adelic points). I should have also relaxed the condition ``$G(\overline{k})$ is its own derived group'' to the slightly more ubiquitous condition ``$G(\overline{k})$ is generated by its derived group and center'' (to explain unimodularity in connected reductive cases). – BCnrd Aug 19 2010 at 14:06

The right or left Haar measures for a matrix group can be obtained in a completely straightforward manner with the aid of the right or left Maurer-Cartan form, respectively. I will show the procedure for the stochastic group of invertible stochastic matrices (i.e., invertible matrices in $GL(B)$ whose rows sum to unity), though much of it generalizes in an obvious way.
(The motivation I had for figuring this out is a gauge theory of random walks on the root lattice $A_n$ which I'll finish up one of these days.)

Let $R, R' \in STO(B)$, and let $R$ be parametrized by (say) $\{R_{jk}\} \equiv \{R_{(j,k)}\}$ for $1 \le j \le B, k \ne j$. Now if `$\left(\mathcal{R}^{-1}\right)_{(j,k)}^{(l,m)} := \frac{\partial(RR')_{(j,k)}}{\partial R'_{(l,m)}} \Bigg|_{R'=I}$` then the right Maurer-Cartan form on $STO(B)$ is `$\omega_{(j,k)}^{(\mathcal{R})} = \mathcal{R}_{(j,k)}^{(l,m)}dR_{(l,m)}$`. Since the right Maurer-Cartan form is right-invariant, the right Haar measure is given (up to an irrelevant constant multiple) by `$d\mu^{(\mathcal{R})} = \underset{(j,k)}{\bigwedge} \omega_{(j,k)}^{(\mathcal{R})}.$` A similar construction yields the left Haar measure.

For a concrete example, let $B=2$. A straightforward calculation yields `$\omega_{(1,2)}^{(\mathcal{R})} = \frac{(1-R_{21}) \cdot dR_{12} + R_{12} \cdot dR_{21}}{1-R_{12}-R_{21}}$` and `$\omega_{(2,1)}^{(\mathcal{R})} = \frac{R_{21} \cdot dR_{12} + (1-R_{12}) \cdot dR_{21}}{1-R_{12}-R_{21}}$`. It follows that `$d\mu^{(\mathcal{R})} = \omega_{(1,2)}^{(\mathcal{R})} \land \omega_{(2,1)}^{(\mathcal{R})} = \frac{dR_{12} \land dR_{21}}{\lvert 1-R_{12}-R_{21} \rvert}$`. (The modulus is taken in the denominator to ensure a positive rather than a signed measure.) Similarly, the left Haar measure is `$d\mu^{(\mathcal{L})} = \frac{dR_{12} \land dR_{21}}{\lvert 1-R_{12}-R_{21}\rvert^2}$`.

Notice that both the right and left Haar measures assign infinite volume to the set of nonnegative stochastic matrices (i.e., the unit square in the $R_{12}$-$R_{21}$ plane). However, the singular behavior of the measures occurs precisely on the set of singular stochastic matrices. Indeed, for $0 \le \epsilon < 1$ consider the sets `$X_I(\epsilon) := \{(R_{12}, R_{21}) : 0 \le R_{12} \le 1-\epsilon, \ 0 \le R_{21} \le 1 - \epsilon - R_{12} \}$` `$X_{II}(\epsilon) := \{(R_{12}, R_{21}) : \epsilon \le R_{12} \le 1, \ 1 + \epsilon - R_{12} \le R_{21} \le 1 \}$` and `$X(\epsilon) := X_I(\epsilon) \cup X_{II}(\epsilon)$`, i.e., $X(\epsilon)$ is the unit square minus a strip of width $\epsilon \sqrt{2}$ centered on the line $1 - R_{12} - R_{21} \equiv \det R = 0$. Then `$\int_{X(\epsilon)} d\mu^{(\mathcal{R})} = 2(\log \epsilon^{-1} - 1 + \epsilon)$` and `$\int_{X(\epsilon)} d\mu^{(\mathcal{L})} = 2(\epsilon^{-1} - 1)$`.

It is not hard to show that for $B$ arbitrary `$d\mu^{(\mathcal{R})} = \lvert \det \mathcal{R} \rvert \underset{(j,k)}{\bigwedge} dR_{jk}$`, and similarly for the left Haar measure. The general end result is `$d\mu^{(\mathcal{R})} = \lvert \det R \rvert^{1-B} \underset{(j,k)}{\bigwedge} dR_{jk}, \quad d\mu^{(\mathcal{L})} = \lvert \det R \rvert^{-B} \underset{(j,k)}{\bigwedge} dR_{jk}$`. To see this, consider the isomorphism between the stochastic and affine groups and see, e.g. (N. Bourbaki. Elements of Mathematics: Integration II. Chapters 7-9. Springer (2004)). Finally, a Fubini-type theorem (see, e.g., L. Loomis. An Introduction to Abstract Harmonic Analysis. Van Nostrand (1953)) applies to the special stochastic group $SSTO(B)$ (i.e., the subgroup of unit-determinant stochastic matrices). If for example elements of $SSTO(2)$ are parametrized by $R_{12}$, then $d\mu = \omega_{(1,2)} = dR_{12}$ is the (right and left) Haar measure.
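(Editor's aside, not part of the original answer: a quick numerical sanity check of the $B=2$ identities above. It verifies $\det R = 1 - R_{12} - R_{21}$, and that in the $(R_{12}, R_{21})$ coordinates the translation $R \mapsto RS$ has Jacobian $(\det S)^2$ while $R \mapsto SR$ has Jacobian $\det S$; these are the factors against which the two densities above are weighed.)

```python
# Editor's sketch: numerics for the B = 2 case above. Verify
# det R = 1 - R12 - R21, and the Jacobians of the two translation actions
# in the (R12, R21) coordinates: R -> R.S gives (det S)^2, R -> S.R gives det S.
import numpy as np

rng = np.random.default_rng(0)

def sto2(r12, r21):
    """2x2 stochastic matrix with the given off-diagonal entries."""
    return np.array([[1 - r12, r12], [r21, 1 - r21]])

def offdiag(M):
    return np.array([M[0, 1], M[1, 0]])

r12, r21, s12, s21 = rng.uniform(0, 1, 4)
R, S = sto2(r12, r21), sto2(s12, s21)
assert np.isclose(np.linalg.det(R), 1 - r12 - r21)

h = 1e-6   # finite-difference step; the maps are affine, so this is exact
for mult, expected in [(lambda A: A @ S, np.linalg.det(S) ** 2),
                       (lambda A: S @ A, np.linalg.det(S))]:
    J = np.column_stack([
        (offdiag(mult(sto2(r12 + h, r21))) - offdiag(mult(R))) / h,
        (offdiag(mult(sto2(r12, r21 + h))) - offdiag(mult(R))) / h,
    ])
    assert np.isclose(np.linalg.det(J), expected, atol=1e-4)
print("det R = 1 - R12 - R21 and both translation Jacobians check out")
```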
More generally, taking $\{R_{jk}\}$ for an appropriate choice of pairs $(j,k)$ as parameters for $SSTO(B)$, we have that $\mathcal{R} = I = \mathcal{L}$ and the Haar measure for $SSTO(B)$ is (up to normalization) `$d\mu = \underset{(j,k)}{\bigwedge} dR_{jk}$`. This can easily be verified explicitly for small values of $B$ with a computer algebra package. One simplifying feature of the special stochastic group is that it is unimodular, so the left and right Haar measures coincide. Moreover, the Haar measure of the set of nonnegative special stochastic matrices is (finite, and w/l/o/g equals) unity. (For $SSTO(B)$, the constant multiplying the RHS of the equation above and that provides this normalization can be shown to be $((B-1)!)^{B-1}(B-2)!$.) Although this set is not invariant, it is a semigroup and it is obviously privileged in probabilistic contexts. -

Very interesting question. As a prominent harmonic analyst told me recently, when I asked him where I could learn to make explicit computations on hyperbolic spaces: "not easy to find references, and it's all Sigurdur Helgason's fault". He was joking, of course, but basically he meant: there is now an implicit understanding that for each one of your questions there's a formula somewhere in some book of SH, so why are you asking? read the books. But on the contrary, those elegant and general formulas are of no help if you really want to compute something: basically you still need a lot of work, choose proper coordinates, write down explicit formulas for every Harish-Chandra thing and so on. A slower development of the subject would have been more helpful; by now we'd have available books on special cases with explicit formulas and so on. More to the point: a beautiful example of an explicit computation using the Haar measure on $SO(3)$ is this paper on endpoint Strichartz estimates for the cubic Dirac equation. The computation is quite elementary, so you will not have trouble reading it in case you're interested. I find it a compelling example of how useful it would be to develop some more machinery to work with Haar measures. -

Couldn't resist pointing this out: actually, $$\lim_{N \rightarrow \infty} \frac{1}{N^{k^2}} \mathbb{E}_{U\in U(N)} |Z_U|^{2k} = \frac{G(k+1)^2}{G(2k+1)},$$ with $G$ the Barnes $G$ function, $Z_U$ the characteristic polynomial of the matrix $U$ in $U(N)$, taken according to Haar measure and evaluated somewhere along the unit circle in $\mathbb{C}$, say 1 (where on the circle is irrelevant as Haar measure is rotationally invariant). When $k=3$, this actually evaluates to $$\frac{\mathbf{42}}{9!},$$ and accounts for the 3rd (and first new) case in the Keating-Snaith discovery of the interest of random matrices for quantitative formulations of analytic number theory conjectures, as explained in their paper "Random Matrix Theory and $\zeta (1/2 + i t )$" or less formally in http://seedmagazine.com/content/article/prime_numbers_get_hitched/ . Incidentally, the connections between random matrix theory and number theory indeed lead to many practical computations for Haar-random matrices in classical compact groups. Some are easier to understand (for instance via the Weyl integration formula or reformulations in terms of Selberg integrals), while some are much less clear (for instance, substitute in the above statement $|Z_U'|$ instead of $|Z_U|$ and study again the behaviour for large k, or analytic continuation in k of the RHS. The renormalization in that case, however, is known, and would be $\frac{1}{N^{k^2+2k}}$).
Since this is a reference request, look also at papers by Conrey or Hughes for examples of such explicit computations. -

The following method can be used to integrate (by hand) polynomials in the matrix elements of the group element over the special orthogonal, special unitary and symplectic groups (with respect to the Haar measure). The description will be given for special orthogonal groups, but this method is valid for the special unitary and symplectic groups by passing to the complex numbers and the quaternions. This method reduces the integration to a series of integrations on spheres. The method relies on the following facts:

• If $V_{n,k}$ is the Stiefel manifold of orthogonal $k$-frames in $\mathbb{R}^n$, then $SO(n) \cong V_{n,n-1}$ (a torsor). This is because one can view a (special) orthogonal matrix as a collection of $n-1$ orthonormal unit column vectors $v^{(1)}, \ldots, v^{(n-1)}$. The method is valid for integrands which are polynomials in the group element, which can always be written in the form $tr(c^t v^{(i)} + v^{(i)^t} c)$. Let us denote the one dimensional orthogonal projectors on $v^{(i)}$ by $\theta^{(i)} = v^{(i)} v^{(i)^t}$.

• There exists a series of fibrations: $S^{n-k} \cong V_{n-k+1,1} \rightarrow V_{n,k} \rightarrow V_{n,k-1}$. The integration method is based on sequential integrations on the spherical fibers starting from $SO(n) \cong V_{n,n-1}$ and ending with $S^{n-1} \cong V_{n,1}$, such that each time the integration is performed on one of the unit vectors.

• The most important part is that one must remember that the unit vectors are not independent because they are orthonormal. Thus each integration over a unit vector must be performed on the intersection of the sphere defined by it and the hyperplane orthogonal to the other unit vectors. For example, if we start the integration from $v^{(1)}$, this amounts to replacing $v^{(1)}$ before the integration on it by $(1 - \theta^{(2)} - \cdots - \theta^{(n-1)}) v^{(1)}$. This replacement unties the orthogonality constraints.

• Integration of homogeneous polynomials over spheres can be performed by the replacement of the integration measure by a Gaussian measure and scaling the result (according to the ratio of a spherically invariant integrand of the same degree). -

it might be useful to add that these integrals of products of matrix coefficients over classical groups, with respect to the Haar measure, are given by the so-called Weingarten function en.wikipedia.org/wiki/Weingarten_function – Carlo Beenakker May 14 2011 at 14:36

If we consider the independent bond percolation model in two dimensions, the Haar measure is obtained when you choose the parameter $p=\frac{1}{2}$. We can look at this model as a collection of measures, and the unique Haar measure for this model is also characterized by a critical behavior with respect to some geometric properties of the model. The group structure of this model seems not to have been explored much yet. But there are many explicit calculations using the Haar measure. The book Percolation, by Grimmett, has some of these explicit calculations. -

A fairly simple example is the Haar measure on $\mathbb{Q}_p$. If we scale the measure so that $\mathbb{Z}_p$ has measure $1$, and the measure is translation invariant, it follows that $a+p\mathbb{Z}_p$ has measure $\frac{1}{p}$. We can do similarly for cosets of $p^n\mathbb{Z}_p$. See Chapter 2 of Cassels-Frohlich for details on this. In this vein, one defines Haar measures on other number-theoretic objects, like adeles and ideles.
Integration over these spaces can then be used to prove basic facts about more concrete objects, like zeta functions. For details, consult Koch's Number Theory, which gives many explicit examples of integration over $p$-adics and spaces of adeles and then uses them to prove Hecke's functional equation for the zeta function. (You can also find similar material in Cassels-Frohlich, though I find Koch to be much more readable.) -

The word you want is "vein," as in Definition III.9.b in the OED: dictionary.oed.com/cgi/entry/… – Qiaochu Yuan Aug 24 2010 at 1:04

Davidac, we don't "know" that Z_p has measure 1. We only know that for a Haar measure on Q_p the measure of Z_p is finite since it's compact and positive since it's open (or, more broadly, since it contains an open set). Therefore we can scale Haar measure on Q_p in a unique way to give Z_p measure 1. – KConrad Aug 24 2010 at 2:44

Right...I more meant that once we scale the Haar measure for $\mathbb{Z}_p$ to have measure $1$, we can derive using that information and the definition of a Haar measure what the measure of certain other sets is. – David Corwin Aug 24 2010 at 3:18

I just stumbled upon this old thread while searching for something else, and couldn't resist saying two things:

1. if you like p-adics, the expository article http://arxiv.org/pdf/math/0205207v2 by T. C. Hales asks pretty much the same question, and gives some very interesting examples, an explanation why in general this question is very hard, and a general approach via motivic integration (a lot of progress happened in motivic integration since this article was written, but this is still a great introduction to the main ideas).

2. For a split connected reductive algebraic group over a local field, one can write down an explicit formula for the Haar measure in convenient coordinates (more precisely, one can just write down the invariant differential form that Brian Conrad mentioned): for an explicit formula, see e.g. section 2.4 in http://arxiv.org/pdf/math/0203106 (I am sure this is a classical formula, but I have never seen a reference for it -- would be grateful if someone pointed it out). -

Thanks for the links! It's always nice to see that this old question of mine still has some life in it... – Thierry Zell May 14 2011 at 4:53

Thank you, Thierry! I thought it was a very nice question... – JGordon May 14 2011 at 5:46

There are some explicit calculations in Hewitt & Ross, Abstract Harmonic Analysis -

There are cases in which computation can be made easier. Consider for example a locally compact abelian group $G$ and denote by $\mu$ a Haar measure. One hypothesis on $G$ that can make life easier is to suppose $G$ totally disconnected. In such case, $G$ has a base of neighborhoods of $0$ consisting of clopen compact subgroups. Notice also that, if we have two such neighborhoods $V_1\subseteq V_2$, then $V_2/V_1$ is finite and $\mu(V_2)=|V_2/V_1|$, provided $\mu(V_1)=1$ (we can always suppose this up to a renormalization of the measure). This seems quite abstract but there are examples in which this could be very useful in concrete computation (see the answer of Davidac897 for a very explicit occurrence of what I'm saying). -

Integration with respect to Haar measure is used a lot in multivariate statistical analysis, see for example Muirhead: "Aspects of Multivariate Statistical Theory". There are also lots of examples in the developing theory of special functions with matrix argument; search for books/papers by Mathai ... -
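To make the Weyl integration formula from the first answer feel concrete, here is a small Monte Carlo check (editor's sketch, not from the thread) for $U(2)$: with the stated normalization the total Haar volume comes out to $1$, and the class function $f(A)=|\mathrm{tr}\,A|^2$ has Haar average $1$ (a standard moment of the unitary group).

```python
# Editor's sketch: Monte Carlo check of the Weyl integration formula for U(2).
# The Weyl density on [0, 2pi)^2 is |e^{i t1} - e^{i t2}|^2 / ((2 pi)^2 * 2!).
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
t1, t2 = rng.uniform(0, 2 * np.pi, (2, N))

dens = np.abs(np.exp(1j * t1) - np.exp(1j * t2))**2 / ((2 * np.pi)**2 * 2)
f = np.abs(np.exp(1j * t1) + np.exp(1j * t2))**2          # |tr A|^2

square = (2 * np.pi)**2                                   # area of the t-square
print("total Haar volume ~", square * dens.mean())        # ~ 1.0
print("E |tr A|^2        ~", square * (dens * f).mean())  # ~ 1.0
```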
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 89, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9279727339744568, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/169435/homotopy-type-of-specific-space-of-matrices/169503
# Homotopy type of specific space of matrices

I would like to determine topological properties of $\mathbb R^8$ minus the set determined by the equation $$\mathrm{det}\begin{pmatrix} a-a' & b-b'\\ c-c' & d-d' \end{pmatrix}=0$$ where $a,a',b,b',c,c',d,d'\in\mathbb R$. How do I determine the homotopy type and how many connected components this space has? If this does not turn out to be a standard space, I would also like to determine (co)homology groups. -

$A$ such that $\mathrm{det}A\dots$? – Olivier Bégassat Jul 11 '12 at 12:10

Isn't this space $\operatorname{GL}(2, \mathbb{R})$? Why is it described as above? – Serkan Jul 11 '12 at 12:15

It should be $\mathbb R^8$ minus a closed subvariety determined by the equation in the 8 variables given by the determinant of the above matrix... So, for $a,b,c,d$ fixed, it is $\mathrm{GL}_2\mathbb R$. Maybe that makes it into a $\mathrm{GL}_2\mathbb R$ bundle over $\mathrm{GL}_2\mathbb R$, but I am not sure. – user1205935 Jul 11 '12 at 12:20

Although there are $8$ parameters, the space of matrices has only 4 degrees of freedom, and the way you've described it, it is homeomorphic to $GL(2,R)$. Perhaps you want to ask what is the subset of $\mathbb R^8$ described by the condition that the above determinant is nonzero. – Grumpy Parsnip Jul 11 '12 at 12:27

Assuming that's what you want, your space has a continuous map to GL(2,R), which you could try showing is a homotopy equivalence. (Just speculating, I haven't thought about it carefully.) – Grumpy Parsnip Jul 11 '12 at 12:31

## 2 Answers

Let's denote your subset of $\mathbb R^8$ by $X$. Then there is a surjective continuous map $X\to GL(2,\mathbb R)$. The homotopy type of $GL(2,R)$ is two copies of $SL(2,R)$. Anyway, from this you can already tell that $X$ has at least two connected components! Now, I claim that $X$ is actually homotopy equivalent to $GL(2,R)$. Given an $8$-tuple in $X$, perform a homotopy where $(a,a')\mapsto (a-t,a'-t)$ for $t\in[0,a]$. Similarly do this for the other coordinates. This deformation retracts $X$ onto the space where $a=b=c=d=0$. Which is exactly $GL(2,R)$. As mentioned by user8268, $SL(2,R)\simeq S^1$, so $X\simeq S^1\cup S^1$. -

Thank you for your answer – user1205935 Jul 11 '12 at 23:34

Change the coordinates: replace $a',\dots d'$ with $A=a-a',\dots,D=d-d'$. Then you see that the space is $GL_2(\mathbb{R})\times\mathbb{R}^4$, which is homotopy equivalent to $GL_2(\mathbb{R})$, and hence to $O_2(\mathbb{R})$, which is a disjoint union of two circles. -

He said the complement of the set $\det=0$. – Grumpy Parsnip Jul 11 '12 at 15:19

@JimConant: oh I see; I edited out that discussion. – user8268 Jul 11 '12 at 16:00
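A small numerical illustration of the answers (editor's sketch): $\operatorname{sign}\det$ of the difference matrix is a continuous invariant separating (at least) two components of $X$, and the retraction $(a,a')\mapsto(a-t,a'-t)$ leaves the determinant unchanged, since the determinant depends only on the differences.

```python
# Editor's sketch: sign(det) separates two components of X, and the
# retraction (a, a') -> (a - t, a' - t) leaves det unchanged.
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(size=(100000, 8))        # coordinates (a, a', b, b', c, c', d, d')
det = (p[:, 0] - p[:, 1]) * (p[:, 6] - p[:, 7]) - \
      (p[:, 2] - p[:, 3]) * (p[:, 4] - p[:, 5])
p, det = p[det != 0], det[det != 0]     # restrict to X (almost surely everything)
print(np.unique(np.sign(det)))          # [-1.  1.]: at least two components

t = rng.normal(size=len(p))             # shift a and a' by the same amount
q = p.copy(); q[:, 0] -= t; q[:, 1] -= t
det_q = (q[:, 0] - q[:, 1]) * (q[:, 6] - q[:, 7]) - \
        (q[:, 2] - q[:, 3]) * (q[:, 4] - q[:, 5])
assert np.allclose(det, det_q)          # det only sees the differences
```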
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940820574760437, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/11314/how-is-an-arbitrary-operator-usually-denoted-in-quantum-mechanics
# How is an arbitrary operator usually denoted in quantum mechanics? Which symbols are usually used to denote an arbitrary operator in quantum mechanics, such as O in the following example? $O \mbox{ is Hermitian} \Leftrightarrow \Im{\left< O \right>} = 0$ - ## 2 Answers It's common to put a hat over anything that's an operator instead of a c-number, so that $\hat A$ is an operator, $A$ is a c-number. Then we can use any letter as either an operator or as a c-number. Your $\hat O$ or $O$ to some extent suggests something that is specifically an observable quantity, just as $\hat H$ suggests a Hamiltonian operator, although one would usually expect such uses to be explicitly stated. At the end of the day, however, it's a matter of art to get a paper to look recognizably like the other papers in a field. The use of a particular symbol for a particular purpose can come almost to define a particular area of Physics, and people in that field may stop reading a paper for flagrantly breaking such a rule unless a good enough reason is given for doing something else. To have those nuances down pat requires that you read enough papers in the field you want to write in carefully enough to notice what is used consistently for what. - The hat convention seems useful, thanks. Do you know of a letter I could use instead of $\hat{O}$, that carries no strong connotations? – user1778 Jun 20 '11 at 11:57 As you see, I used $\hat A$. $\hat B$ and $\hat C$ do not have strong connotations unless you're writing in a particular field, provided they're said to be arbitrary operators, $\hat D$ might imply something to do with differentiation, but not if it's declared to be arbitrary. There's a writing style that makes these things work out OK that you should look for. – Peter Morgan Jun 20 '11 at 12:07 For me $\hat D$ implies some dipole. As you say, it's field dependent... – Frédéric Grosshans Jun 21 '11 at 14:43 You can also use a different font or style, such as \mathcal (i.e. $O$ becomes $\mathcal{O}$, provided the amsmath package is called). – Olaf Jun 21 '11 at 16:16 In my experience, which covers basic QM and quantum field theory to some extent, people tend to use $A$ or $O$ for a generic operator. The convention that $O$ indicates an observable is not universal - at least, not universally used, although it's probably at least familiar to most physicists. (Same with the hat convention that Peter mentioned; everyone understands it but not everyone uses it all the time.) If a paper refers to multiple generic operators, then it's typical (again, in my experience) to denote them using sequential letters starting from $A$. So for example, one might talk about a commutator $[A,B]$ or $[[A,B],C]$ or something like that. Other than that, though, I would consider it unusual to see letters other than $A$ or $O$ used to denote generic operators. -
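Since the thread is ultimately about typesetting conventions, here is a minimal LaTeX sketch of the conventions mentioned in the answers; the `\op` macro name is a hypothetical choice for illustration, not a standard command.

```latex
% Editor's sketch: the notational conventions discussed above.
\documentclass{article}
\usepackage{amsmath}  % loaded following Olaf's comment; \mathcal also exists in base LaTeX
\newcommand{\op}[1]{\hat{#1}}   % hypothetical macro for hatted operators
\begin{document}
The operator $\op{A}$ versus the c-number $A$; a generic operator
$\mathcal{O}$; commutators such as $[\op{A},\op{B}]$ and $[[\op{A},\op{B}],\op{C}]$.
\end{document}
```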
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.956232488155365, "perplexity_flag": "middle"}
http://motls.blogspot.com/2012/11/obama-and-earth-moon-l2-lagrange-point.html
# The Reference Frame

## Saturday, November 10, 2012 ... /////

### Obama and Earth-Moon L2 Lagrange point base

Some media including Dvice.com spread rumors about a possible imminent Branco Bamma plan to announce a new space station.

Yellow is the Earth, blue is the Moon. And 1,2,3,4,5 are the Lx Lagrange points.

Instead of resembling the International Space Station that orbits the Earth just 400+ km above the surface (see current location to check whether you may see the dot above you), it would be placed in a more exotic place – the Earth-Moon L2 Lagrange point. The point is located about 60,000 km behind the Moon so you can't see it from the Earth.

I said "it is located". When is it located? The funny feature of the Lagrange points is that if you place something at a Lagrange point – defined by its position relatively to two celestial bodies that orbit each other – it stays there. It stays there from the viewpoint of the rotating reference frame which keeps the orbiting plane as well as the locations of the two celestial bodies fixed. You see that there are five Lagrange points for each pair; L4 and L5 form a mirror pair.

L2 is behind the Moon and has the same angular frequency as the Moon – so it has a higher velocity. Consequently, the centrifugal acceleration $v\omega$ is greater than the Moon's, but that's OK because the attractive acceleration is also greater than the acceleration that attaches the Moon to the Earth: the Moon itself adds its own force. Note that the L2's distance from the Earth is about $7/6$ times the distance of the Moon, so the centrifugal acceleration $v\omega$ is about $7/6$ times greater than it is for the Moon. The Earth's attractive acceleration is $(7/6)^2$ times smaller than for the Moon, because of the inverse square law, but\[ \left(\frac 67\right)^2 + \frac{1}{81}\cdot 6^2\sim 1.18\sim \frac 76 \] which gives approximately the right enhancement, with the help of the term from the Moon which is suppressed by Moon's $81$ times smaller mass but enhanced by the $6$ times shorter distance from the Moon.

An Orion capsule (picture above) could be sent over there by the Space Launch System between 2017 and the 2020s so don't change your dinner plans yet. A particular plan suggests the first people at L2 around 2021. They would be the first people who would have the privilege of seeing no Earth around them for extended periods of time. :-) Instead, they would be watching the other side of the (six times larger than usual) Moon than almost all other mortals. Note that the Earth's radius is just $3.66$ times larger than Moon's but the Earth is six times further from L2 than the Moon, so it's too small to be seen behind the Moon.

Solar activity from planetary tidal forces

Incidentally, I am kind of fascinated by the new paper (via WUWT) showing a rather remarkable agreement in the Fourier transformation of two charts. One of them is the solar activity, as reconstructed from some cosmogenic radionuclides (geology), and the other one is a theoretical model calculating the tidal forces of all the planets on the Sun's tachocline. The agreement in the precise frequencies of the peaks looks totally remarkable. A priori, I would think that the tides are too weak.
On the other hand, many things may be strengthened by various effects and because it's a real measurable effect now, tides scaling as the justifiable, measurable yet modest $1/r^3$, and not a hypothetical effect of the relative position of the barycenter and the Sun which would contradict the equivalence principle, none of my criticisms against the "barycenter theories" apply. Even if this planetary influence on the solar activity is genuine, it doesn't imply that either of these curves is a good proxy to the Earth's climate. But the picture looks rather intriguing... Needless to say, it's rather compatible with the 30 years of warming in the recent era and it predicts some cooling for the following decades (see e.g. Figure 3 in the paper).

Posted by Luboš Motl | Other texts on similar topics: astronomy, politics, science and society

#### snail feedback (6) :

reader Shannon said... Three years ago I found this video from Bad Astronomy, Phil Plait, on the Lagrangian points which I knew nothing about (of course). I like it. If they miss seeing the Earth from their capsule why don't we put a huge mirror on L4 or L5? ;-)

reader Luboš Motl said... You're so creative, Shannon! Maybe not the cheapest idea for replacing a photograph but it could be great for signal transmission in general.

reader Shannon said... Thanks for the compliment, Humble servant ;-)

reader Gene said... Lubos, I am doubtful of planetary influence on solar activity; the tidal forces are so very tiny. One has to be very cautious about such correlations, remarkable as they may be. Feynman warned about this; it is so very easy to fool one's self. Small effects can have cumulative results but they seem to always involve resonances. For instance, two pendulum clocks hung on the same wall have a strong tendency to (slowly) become synchronized but in the turbulence of the inner sun I don't think anything can be so finely tuned. Call me skeptical.

reader Luboš Motl said... Dear Gene, I am also skeptical, of course, 95+ percent against it. Remarkable claims need remarkable evidence. But this claim seems plausible times remarkable enough so that it's pretty interesting to me... Right, the Fourier modes are results of resonances. They're not the most elementary frequencies you get from the planets but some more complicated ones and the hypothetical effect is accumulated over 1 century or centuries.

reader Sören F said... Somebody must have misunderstood something, as this source http://www.3news.co.nz/Obama-win-puts-NASA-over-the-moon---and-beyond/tabid/1160/articleID/276105/Default.aspx puts it "The 'L2' point NASA wants to investigate lies about 1,500,000km further away from the Sun than the Earth." as if it'd be about the Sun-Earth L2 point. The graphic also has the Earth sun-yellow.
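A quick check of the post's $7/6$ arithmetic (editor's sketch; the mass ratio $q \approx 1/81.3$ and the crude force balance, which rotates about the Earth's centre rather than the barycentre, are the stated approximations):

```python
# Editor's sketch: solve the crude Earth-Moon L2 balance used in the post.
# Units: Moon's orbital radius = 1; q = M_moon / M_earth.
from scipy.optimize import brentq

q = 1 / 81.3

def balance(x):
    # Earth pull + Moon pull - centripetal requirement at radius x
    return 1 / x**2 + q / (x - 1)**2 - (1 + q) * x

x_L2 = brentq(balance, 1.05, 1.5)
print(x_L2)                        # ~1.168, close to the post's 7/6 ~ 1.167
print((6/7)**2 + 36/81, 7/6)       # the post's own check: ~1.179 vs ~1.167
```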
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9423534870147705, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/dice+combinatorics
# Tagged Questions

1answer 114 views ### Dice question: Probability of rolling at least 2 of 3 dice with score 3 or less My first post here. I've recently gotten back to boardgaming (Labyrinth: The War on Terror in this case) and would like to get clear on the probability of various actions. I've learnt about 'basic' ...

4answers 256 views ### Probability of rolling three dice without getting a 6 I am having trouble understanding how you get $91/216$ as the answer to this question. Say a die is rolled three times; what is the probability that at least one roll is 6?

2answers 57 views ### Yahtzee large straight strategy. I've recently got into playing Yahtzee with my brother. During a game we had, on my first roll for my last round I got (5 4 4 2 1). I only had my large straight left, so naturally I was hopeful with ...

3answers 29 views ### probability proof requiring conditional probability Consider the experiment where two dice are thrown. Let $A$ be the event that the sum of the two dice is 7. For each $i\in\{1,2,3,4,5,6\}$, let $B_i$ be the event that at least one $i$ is thrown. ...

2answers 60 views ### What are the probabilities in this particular spin-off of BlackJack A few of my friends are playing on a gaming server where this game exists. I was curious about the probabilities in it but I was unable to derive them. Any help would be MUCH appreciated. Note: For ...

1answer 122 views ### Probability of a certain dice roll sum disregarding lowest rolls The number of ways to obtain a total of $p$ in $n$ rolls of $s$-sided dice is: $$c=\sum_{k=0}^{\lfloor(p-n)/s\rfloor}(-1)^k\binom{n}k\binom{p-sk-1}{n-1}\;.$$ What I'm interested in is making the $n$ ...

1answer 105 views ### Probability of a fair die appearing to be biased I'm interested in the probability of a die appearing to be biased when it is, in fact, fair. I'm trying to derive a result given, without proof, on YouTube: http://youtu.be/6guXMfg88Z8?t=1m29s The ...

2answers 83 views ### Question about an 8 sided die I have a question about an 8 sided die problem. I will put up the work I have; if someone can tell me how to proceed I will appreciate it. We roll an 8 sided die numbered 1 to 8 six times and ...

1answer 114 views ### Three dice having sides labelled 0,1,e,π,i,√2 are rolled. Find the probability of getting the product of the three results a real number. Three fair 6-sided dice each have their sides labeled $0\,,\,1\,,\,e\,,\,\pi\,,\,i\,,\,\sqrt 2\,$. If these dice are rolled, the probability that the product of all the numbers is real can be ...

1answer 198 views ### "8 Dice arranged as a Cube" Face-Sum Problem I found this here: Sum Problem Given eight dice. Build a $2\times 2\times2$ cube, so that the sum of the points on each side is the same. Here is one of 20 736 ...

1answer 81 views ### Probability - Average value of the sum of 3 bigger dice in 4 throws (D&D) There is a certain game based on D&D mechanics in which the player throws 4 perfect dice and sums the three bigger values to obtain his score. What is the average value to be obtained after ...

1answer 89 views ### Probability - Tales Game A dice game played by two players is like this: each player throws two dice and sums their results; that is the number of points the player scored. Whoever scores more, wins. One additional detail is ...

1answer 187 views ### Probability of predicting, then throwing, a particular multiset for 5 dice.
My friend shared with me a story that after losing to his SO at Yahtzee, before they put the game away he just randomly predicted he would roll four 5's and a 1. He then got that roll and freaked out. ...

1answer 93 views ### Throwing the dice, sum of the points We are throwing a die (the usual cube for board games). How many ways are there to get the sum of the points equal to $n$? I've heard this problem today in the morning and still can't deal with ...

3answers 194 views ### Probability that no side of a dice is rolled more than $k$ times Suppose we roll a fair die $n$ times. The expected value of how many times each side of the die will appear is $\frac{n}{6}$. The question is, what is the probability that any side of the die will ...

3answers 782 views ### "Go-first" dice for $N$ players I'm interested in sets of dice that can be used to determine who "goes first" (hence the name) in an $N$-player game; more generally, I want to determine a complete ordering of the players with a ...

2answers 342 views ### "8 Dice arranged as a Cube" Face-Sum Equals 14 Problem I found this here: Sum Problem Given eight dice. Build a $2\times 2\times2$ cube, so that the sum of the points on each side is the same. Here is one of 20 736 ...

1answer 144 views ### Simple Dice Rolling Problem If you play poker dice by simultaneously rolling 5 dice, why is $P\{\text{five alike}\} =.0008$? I guess I understand the fact that each die has the probability to land on the same number $1/6$ of the ...

2answers 141 views ### Probability of throwing the same multiset twice in a row with six dice Six dice are thrown. The six dice are thrown a second time. What is the probability of getting the same numbers as in the first throw? If the order of the six numbers matters, the problem is easy, but ...

2answers 1k views ### Probability of either dice showing a specific number If I throw 4 dice together, what is the probability that either one of them will show the number 3? I tried to calculate it and got to $\frac{4}{6}$ (which is highly unlikely to be correct)... any ...

3answers 315 views ### Probability of rolling a die I roll a die until it comes up $6$ and add up the numbers of spots I see. For example, I roll $4,1,3,5,6$ and record the number $4+1+3+5+6=19$. Call this sum $S$. Find the standard deviation of ...

2answers 200 views ### Two combinatorics problems. I'm not 100% confident in my answers These are two problems from my combinatorics assignment where I'm not quite confident in my answers. Am I thinking of these the right way? Problem 1: On rolling 16 dice. How many of the $6^{16}$ ...

4answers 565 views ### Confusion over probability problems I am stuck with these probability problems. A pair of unbiased dice is rolled together till a sum of either $5$ or $7$ is obtained. Find the probability that $5$ comes before $7$. A letter is ...

2answers 798 views ### Expected number of rolling a pair of dice to generate all possible sums A pair of dice is rolled repeatedly until each outcome (2 through 12) has occurred at least once. What is the expected number of rolls necessary for this to occur? Notes: This is not very deep ...

2answers 696 views ### Probability of 3 of a kind with 7 dice Similar questions: Chance of 7 of a kind with 10 dice Probability of getting exactly $k$ of a kind in $n$ rolls of $m$-sided dice, where $k\leq n/2$. Probability was never my thing, so please bear ...
3answers 343 views ### Probability of Sum of Different Sized Dice I am working on a project that needs to be able to calculate the probability of rolling a given value $k$ given a set of dice, not necessarily all the same size. So for instance, what is the ... 3answers 426 views ### Best Strategy for a die game You are allowed to roll a die up to six times. Anytime you stop, you get the dollar amount of the face value of your last roll. Question: What is the best strategy? According to my calculation, for ... 2answers 724 views ### How can I (algorithmically) count the number of ways n m-sided dice can add up to a given number? I am trying to identify the general case algorithm for counting the different ways dice can add to a given number. For instance, there are six ways to roll a seven with two 6-dice. I've spent quite ...
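Two of the listed questions, the inclusion-exclusion formula quoted above and the final algorithmic one, can be illustrated in a few lines (editor's sketch, not from any of the threads):

```python
# Editor's sketch: count the ways n m-sided dice sum to p, by dynamic
# programming (iterated convolution) and by the inclusion-exclusion
# formula quoted in one of the questions above.
from math import comb

def dice_counts(n, m):
    counts = {0: 1}
    for _ in range(n):                     # convolve with one die at a time
        nxt = {}
        for total, ways in counts.items():
            for face in range(1, m + 1):
                nxt[total + face] = nxt.get(total + face, 0) + ways
        counts = nxt
    return counts

def closed_form(p, n, s):
    return sum((-1)**k * comb(n, k) * comb(p - s * k - 1, n - 1)
               for k in range((p - n) // s + 1))

print(dice_counts(2, 6)[7], closed_form(7, 2, 6))   # 6 6: six ways to roll a 7
assert all(dice_counts(3, 6)[p] == closed_form(p, 3, 6) for p in range(3, 19))
```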
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9435157179832458, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/40350?sort=newest
## Automorphisms of the rooted tree operad

This follows Ryan Budney's comment to the question asked here. What is the automorphism group of the rooted tree operad? (By the rooted tree operad, I just mean the operad whose objects are rooted trees and whose morphisms are given by grafting a root to a leaf.) -

Are you interested in the linear operad, or the setwise operad? Also, what is the arity of a tree: is it the number of vertices or the number of leaves? Also I wouldn't call it the rooted tree operad; I would guess that most people would take this to be a form of the PreLie operad as described by Chapoton and Livernet. – James Griffin Sep 29 2010 at 13:38

arity = number of leaves. – Dr Shello Sep 29 2010 at 14:27

## 1 Answer
Here is a different but related inquiry which I think is rather more interesting: regarding trees $T$ as posets, describe the category of order-preserving bijections between trees. - Very interesting. Can you say more about this groupoid of rooted trees? Also, what are examples of: - a "simple" operad that has a non-trivial automorphism group? - an operad that has a relatively simple automorphism group? – Dr Shello Sep 29 2010 at 14:34 1 1. The groupoid is equivalent to a sum over isomorphism classes of trees of the automorphism groups of class representatives, and each such automorphism group is an iterated wreath products of symmetric groups. A useful picture might be to think of a tree as a hereditarily finite multiset. Then an automorphism of a multiset consists of a permutation of multiple copies of an element together with an automorphism of each element (as a multiset). 2. E.g., a monoid or group can be viewed as an operad where each operation has arity one. Pick a monoid with an interesting automorphism group. (Cont.) – Todd Trimble Sep 30 2010 at 7:38 For example, free groups have interesting automorphism groups; they contain braid groups for instance (cf. Artin representation). 3. I am not absolutely sure, but I think the automorphism group of the operad whose algebras are monoids might be Z mod 2. The nontrivial automorphism would send an operation of arity n, namely a total ordering of the elements 1, 2, ..., n, to the reverse ordering. – Todd Trimble Sep 30 2010 at 7:50 1 That's right the automorphism group of the (set-wise) associative operad is Z mod 2. A really important point is that automorphisms of operads induce automorphisms of categories of algebras. In the case of the associative operad the non-trivial automorphism takes an algebra A to its oppositive algebra. – James Griffin Sep 30 2010 at 10:54 1 Oh and I think that we should point out that the automorphism group of the operad in the original question is only trivial if we take it to be the set-wise operad. Working instead with the linear version I think we get a group resembling the upper-triangular matrices, but I haven't checked the details. – James Griffin Sep 30 2010 at 10:58 show 3 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9340670704841614, "perplexity_flag": "head"}
http://mathoverflow.net/questions/78422/insolvable-number-fields-ramified-only-at-one-small-prime/78429
## Insolvable number fields ramified only at one (small) prime

In his first Eilenberg Lecture at Columbia, Benedict Gross says that only recently have we been able to give examples of finite galoisian extensions $K$ of ${\bf Q}$ which are ramified only at $2$ (respectively $3$) and for which the group Gal$(K|{\bf Q})$ is not solvable. He seemed to suggest that this was made possible by recent progress in the Langlands Programme.

Question. Which such number fields have been discovered recently, and which bits of the Langlands Programme are needed to construct them? -

These must be things roughly like 2-adic or 3-adic reps associated to the $\Delta$ function. I guess these are not too recent. But it's curious that there aren't supposed to be elementary constructions of finite extensions of the right sort. Is this really true? – Minhyong Kim Oct 18 2011 at 6:22

By the way, one really striking recent application of a similar flavor is this paper of Clozel and Chenevier: ams.org/journals/jams/2009-22-02/… – Minhyong Kim Oct 18 2011 at 6:27

OK, forget all my silly comments. $GL_2(F_3)$ is still solvable, of course. But anyways, I guess we can figure out what needs to be done and that we will need automorphic forms on bigger groups. That's where recent work comes in, I suppose. As usual, I will leave my comments up, so others can benefit from my stupidity. – Minhyong Kim Oct 18 2011 at 6:39

One could try to adapt the argument by looking at Artin reps with image $\mathrm{GL}_2(F)$ where $F$ is a finite field of char. $p \in \{2,3\}$. But Serre and Tate showed that every such representation has to be ramified outside $p$ (this is one of the first steps of the proof of Serre's conjecture). So I guess one has to look at other kinds of automorphic forms. – François Brunault Oct 18 2011 at 6:48

@Francois/Minhyong: exactly! One needs bigger groups, so one has to wait until we can compute Hilbert modular forms better, and that's what happened, thanks mostly to Dembele. – Kevin Buzzard Oct 18 2011 at 6:55

## 2 Answers

Minhyong's comments indicate the issue here. If I want to come up with an extension unramified outside $p$ then why not look at the 2-dimensional mod $p$ representation attached to the $\Delta$ function? This works for all but a very small set of $p$, where either the mod $p$ representation is degenerate, or $p$ is so small that $GL(2,p)$ is solvable anyway. For example the semisimple mod 2 representation attached to the $\Delta$ function is trivial. Gross' question was how to deal with this small set of primes.

For these small primes one can try other level 1 forms, of course. For example $p=691$ is a funny case where the representation attached to Delta is reducible, but for $p=691$ you can just use the level 1 weight 16 form instead. The problem with the smaller primes is harder to deal with, because e.g. a modular mod 2 representation unramified outside 2 must be reducible by an old theorem of Tate (look at bounds on discriminants -- the argument is delicate). So one has to try and look elsewhere. The trick with $p=2$ is due to Lassina Dembélé and you can get the paper at his website http://www.warwick.ac.uk/staff/L.Dembele/ Classical modular forms don't cut the mustard, so one seeks to try the same trick with Hilbert modular forms defined over a totally real field ramified only at 2.
Such totally real fields are not hard to find, so the issue is now the following computational one -- how to compute the level 1 forms? Dembélé did this, and found an explicit example which gave a Galois representation into $GL(2,k)$ with $k$ of size 8 if I remember correctly (I have to get the kids out of bed -1 minutes ago so can't check any more details). -

So it is "recent progress in the Langlands programme" in the following sense. The key theoretical result is Carayol's theorem from 1986 beefed up by Blasius-Rogawski and Taylor in 1991 or so, attaching Galois representations to Hilbert modular forms. The recent progress is in our ability to compute examples of such things on a computer. – Kevin Buzzard Oct 18 2011 at 6:54

Many thanks. The paper in question is warwick.ac.uk/staff/L.Dembele/papers/…, with an appendix warwick.ac.uk/staff/L.Dembele/papers/… by Serre. – Chandan Singh Dalawat Oct 18 2011 at 6:56

The case $p=3$ seems also to be done in warwick.ac.uk/staff/L.Dembele/papers/… – François Brunault Oct 18 2011 at 6:58

For the record, the case $p=5$ is treated in arxiv.org/abs/0906.4374, and the case $p=7$ in arxiv.org/abs/1005.4209 (as I learnt from this answer by Kevin Ventullo: mathoverflow.net/questions/109298/…). – Chandan Singh Dalawat Oct 11 at 14:32

The construction of such number fields is not limited to the Langlands program. For instance, to what extent are there number fields with few ramified primes and Galois group S_n? You won't be able to cook these out of automorphic forms. In this connection there is a remarkable recent example of David Roberts: a number field whose Galois group is the symmetric group on 15875 letters, and whose discriminant is $-2^{130729}5^{63437}$! -

See also worldscinet.com/ijnt/07/0702/…, where David Roberts writes down an explicit polynomial of degree $25$ whose splitting field $K$ is ramified only at $5$ and the group $\mathrm{Gal}(K|\mathbf{Q})$ is not solvable. – Chandan Singh Dalawat May 23 at 10:10
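A numerical footnote to the comments (editor's sketch, not part of either answer): the group orders behind the "$p$ so small that $GL(2,p)$ is solvable" remark, and the size of the image in Dembélé's example.

```python
# Editor's sketch: why p = 2, 3 are the awkward cases. GL(2,2) is S_3 and
# GL(2,3) (order 48) is solvable, while PSL(2,q) is simple for q >= 4;
# in characteristic 2 one even has SL_2 = PSL_2, e.g. for q = 8, the
# residue field size mentioned for Dembele's example.
def gl2_order(q):
    return (q**2 - 1) * (q**2 - q)

for q in (2, 3, 4, 5, 8):
    print(q, gl2_order(q), gl2_order(q) // (q - 1))   # |GL_2(F_q)|, |SL_2(F_q)|
# q=8: |SL_2(F_8)| = 504, a simple group.
```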
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344637393951416, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/108710-rates-change-word-problems-print.html
# Rates of Change word problems

• October 18th 2009, 01:33 AM funnytim

Rates of Change word problems

Hey everyone! I'm having some problems getting started on some Calculus problems here. (I know, complete answers aren't supposed to be given out, but I don't even know how to approach these problems. Yes, I've already tried to re-read the notes + textbook, but to no avail. I missed a week of classes due to being sick, so am trying to catch up.)

1) "A sector S of a circle with radius R whose angle at the centre of the circle is φ radians, is rolled up to form the curved surface of a right cone standing on a circular base. The semi-vertical angle of this cone is θ radians. Express φ in terms of sin θ and show that the volume V of the cone is given by 3V = πR^3 sin^2 θ cos θ. If R is constant and θ varies, find the positive value of tan θ for which dV/dθ = 0. Also, show further that when this value of tan θ is taken, the maximum value of V is obtained. Therefore show that the maximum value of V is (2πR^3√3)/27."

2) "A water trough with vertical cross-section in the form of an equilateral triangle is being filled at a rate of 4 cubic metres per minute. Given that the trough is 12 metres long, how fast is the level of the water rising when the water reaches a depth of 1.5 metres?"

With no formula to work with, am I supposed to graph it first or something? Thanks guys!

• October 18th 2009, 03:15 AM HallsofIvy

Quote: Originally Posted by funnytim (introduction above)

Okay, so you don't want "complete answers", you want suggestions on how to approach the problems.

Quote: (problem 1 above)

Differentiate 3V = πR^3 sin^2 θ cos θ with respect to θ and set it equal to 0. Solve for θ (the question implies that it might be easier to solve for tan θ first). Since the derivative of V is 0 for this θ, there may be a max or min here. Show it is a max by looking at the second derivative and using the "second derivative test".

Quote: (problem 2 above)

No, you are supposed to come up with your own formula! The volume of a cylindrical solid is the length of the cylinder times the area of the base -- and here the base is a triangle, so that should be easy. But isn't there something missing here? Is nothing said about how large the equilateral triangle is?
For the moment, let the length of the sides of the equilateral triangle cross section be "L". Start by drawing a picture. The base of the triangle (here, the top) forming the trough has length L and its height is $L\sqrt{3}/2$ (Pythagorean theorem). The water will form a triangle sharing the bottom vertex and so is "similar" to the entire side of the trough. If the depth of the water is "h", then the ratio of height to base, h/b, for the water is the same as that ratio for the entire side of the trough: $h/b= (L\sqrt{3}/2)/L= \sqrt{3}/2$, so $h= b\sqrt{3}/2$ and $b= 2h/\sqrt{3}$. Since the area of a triangle is (1/2)bh, you can write the area of a "side" of the water in terms of h only and then multiply by the length of the trough to get the volume.
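Both parts can be checked symbolically. The sympy sketch below (my own illustration, not from the forum thread) verifies tan θ = √2 and the maximum volume 2πR³√3/27 for problem 1, and computes dh/dt for problem 2 using the area formula A(h) = h²/√3 derived above; all variable names are mine.

```python
import sympy as sp

theta, R, t = sp.symbols('theta R t', positive=True)

# Problem 1: 3V = pi R^3 sin^2(theta) cos(theta), with R fixed.
V = sp.pi * R**3 * sp.sin(theta)**2 * sp.cos(theta) / 3
dV = sp.diff(V, theta)
# dV/dtheta = (pi R^3/3) sin(theta) (2 cos^2(theta) - sin^2(theta)),
# so the critical point in (0, pi/2) has tan^2(theta) = 2.
theta0 = sp.atan(sp.sqrt(2))
assert sp.simplify(dV.subs(theta, theta0)) == 0
print(sp.simplify(V.subs(theta, theta0)))   # 2*sqrt(3)*pi*R**3/27

# Problem 2: cross-section area = h^2/sqrt(3) (from b = 2h/sqrt(3)),
# trough length 12 m, so V(h) = 12 h^2/sqrt(3); solve dV/dt = 4 for h'(t).
h = sp.Function('h')
dhdt = sp.solve(sp.Eq(sp.diff(12 * h(t)**2 / sp.sqrt(3), t), 4),
                sp.Derivative(h(t), t))[0]
print(sp.simplify(dhdt.subs(h(t), sp.Rational(3, 2))))  # sqrt(3)/9 m/min
```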
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9452947974205017, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/225973/adding-sine-waves-of-different-phase-sin-2-sin3-2/226150
# Adding sine waves of different phase, sin(π/2) + sin(3π/2)?

Adding sine waves of different phase, what is $\sin(\pi/2) + \sin(3\pi/2)$? Thanks.

- Whatever else you are doing, $\sin(\pi/2)+\sin(3\pi/2)$ is not the result of adding two sine waves of different phase; it is the result of adding two constants, and the sum happens to be $0$, as lab bhattacharjee shows you in the answer you have accepted. The sum of two sine waves (of the same frequency) but different phases would be $\sin(\omega t + \theta_1) + \sin(\omega t + \theta_2)$, which would also be $0$ if you chose $\theta_1$ and $\theta_2$ to be $\pi/2$ and $3\pi/2$, but in general is a sinusoid of the same frequency but different amplitude and phase. – Dilip Sarwate Oct 31 '12 at 16:16

## 2 Answers

$\sin(\pi+x)=\sin \pi\cos x+\cos\pi\sin x=-\sin x$, since $\sin \pi=0$ and $\cos\pi=-1$.

This can also be obtained directly using the "All-Sin-Tan-Cos" rule, or using $\sin 2A+\sin 2B=2\sin(A+B)\cos(A-B)$:

$\sin(\pi+x)+\sin x=2\sin\left(x+\frac \pi 2\right)\cos\left(\frac \pi 2\right)=0$

So, $\sin(x)+\sin(\pi+x)=0$.

More generally, $\sin(x+c)+\sin(x+d)=2\sin\left(x+\frac{c+d}2\right)\cos\left(\frac{c-d}2\right)$

So, the resultant phase is the arithmetic mean of the phases of the original waves, provided $\cos\left(\frac{c-d}2\right)\ne 0$, i.e., $\frac{c-d}2\ne \frac{(2n+1)\pi}2$, or $c-d\ne(2n+1)\pi$ where $n$ is any integer. Here, $c-d=0-\pi=-\pi\implies \cos\left(\frac{c-d}2\right)=0$.

Here's the plot for $\sin(L)$ where $L$ goes over $(0, \pi/2)$. Here's the plot for $\sin(L) + \sin(3L)$ where $L$ goes over $(0, \pi/2)$. I hope this distinction is useful to you. This was done in Mathematica.
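To see Dilip Sarwate's point numerically, here is a short numpy sketch (an illustration added here, not from the thread): it checks that the two waves with phases π/2 and 3π/2 cancel identically, and verifies the sum-to-product formula quoted above for a generic pair of phases.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)

# Phases pi/2 and 3*pi/2 differ by pi, so the two waves cancel pointwise
assert np.allclose(np.sin(x + np.pi / 2) + np.sin(x + 3 * np.pi / 2), 0)

# Generic phases c, d: sin(x+c) + sin(x+d) = 2 sin(x + (c+d)/2) cos((c-d)/2)
c, d = 0.4, 1.3
lhs = np.sin(x + c) + np.sin(x + d)
rhs = 2 * np.sin(x + (c + d) / 2) * np.cos((c - d) / 2)
assert np.allclose(lhs, rhs)
print("resultant amplitude:", 2 * np.cos((c - d) / 2))  # same frequency, phase (c+d)/2
```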
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9289373755455017, "perplexity_flag": "head"}
http://mathoverflow.net/questions/18860/a-ring-of-invariants-in-characteristic-2/71161
## A ring of invariants in characteristic 2

Let $K$ be an algebraic closure of $\mathbb{F}_2$. The cyclic group $C_{2^n}$ acts on $K[x_0, \dots, x_{2^n-1}]$ by cyclically permuting the $x_i$: $a : x_i \rightarrow x_{i + a \bmod 2^n}$. Is there a nice description of the ring of invariants of $C_{2^n}$ acting this way on $K[x]$? Things are quite easy when the characteristic is $\ne 2$, but look quite a bit more intricate here since, e.g., the group ring $K[C_{2^n}]$ is not semi-simple.

- What would be really nice is if there were a recursive description. If $R_n$ is the ring of invariants (given explicitly with generators and syzygies) for $C_{2^n}$, then looking at $R_n \times R_n$ in $R_{n+1}$ there is an action (involving the cocycle giving the group extension) of $C_{2^{n+1}}$, which could be used to "clean up" the invariants. – Victor Miller Mar 20 2010 at 17:19

1 A reasonable amount of work seems to have been done (for any indecomposable representation, not just a permutation representation). A starting point is (as well as the references to it in SciMath): MR0499459 (81b:14024) Almkvist, Gert; Fossum, Robert. Decomposition of exterior and symmetric powers of indecomposable $Z/pZ$-modules in characteristic $p$ and relations to invariants. Séminaire d'Algèbre Paul Dubreil, 30ème année (Paris, 1976--1977), pp. 1--111, Lecture Notes in Math., 641, Springer, Berlin, 1978. – Torsten Ekedahl Mar 21 2010 at 10:58

@Torsten: Thanks! I just started looking at the Almkvist-Fossum paper, and it looks like it has just what I need. – Victor Miller Mar 22 2010 at 4:48

## 4 Answers

It seems to me that the characteristic of the field does not play a big role in this question; here is a sketch of an argument. Note, first of all, that all the invariants are linear combinations of "symmetrized monomials": if $m$ is a monomial in the polynomial ring, then form the sum of all the translates of $m$ by the elements of your group. This means that every invariant in the polynomial ring comes from an invariant polynomial with coefficients in the prime field $\mathbb{F}_2$ of $K$, and that invariants with coefficients in $\mathbb{F}_2$ are the reduction of invariants with integer coefficients. Thus we have translated the question over characteristic two to a question over the integers: it suffices to find generators and relations for the ring of invariants of your group over the integers to find generators and relations over any ring. Over the integers I do not know what the answer is, but if you do know what the answer is over any field of characteristic different from two, maybe you can now fill in the argument. Thinking briefly about the set of generators, it seems like you might simply need the "symmetrized square-free monomials", with relations that are a bit tedious to write down, but that maybe can be nicely interpreted.

EDIT: The square-free monomials are not enough, but it seems that you do not have to look much further to describe explicitly a finite set of generators for the group algebra of a finite cyclic group over the integers. Indeed, let $S$ be the set of monomials $m$ for which there exists an integer $r$ such that the exponents of $m$ are the integers $\{0,1,\ldots,r\}$. Then the product of all the variables of the polynomial ring together with the symmetrizations of all the monomials in $S$ seems to generate the ring of invariants.
"Symmetrize a monomial m" means sum over the cosets of the stabilizer in the cyclic group of m. I have not thought about relations, but there will be plenty! If you really need to make explicit the fact that the ring of invariants is not Cohen-Macaulay, maybe you can, but maybe you do not need to do that... - The characteristic certainly plays a role in the answer. For all characteristics but two the invariant ring is Cohen-Macaulay but in characteristic two it isn't. It is of course true that you might find a (more or less) characteristic free presentation of the invariant ring where this fact is not apparent. – Torsten Ekedahl Mar 22 2010 at 5:17 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Indeed the "symmetrized square-free monomials" seem to generate. (Order lexicographically and look what the highest term in a product looks like. Now use that to concoct rewriting rules.) [Oops! This is less obvious than it seemed. The symmetrizations are not with respect to the full symmetric group. In fact it fails for the cyclic group of order four, where the square-free case is not enough to generate all invariants of degree three.] They also seem to be independent, as the transcendence degree matches. [Oops! Also wrong. It would contradict the Chevalley–Shephard–Todd theorem. There may be many orbits of our cyclic group in the set of square free monomials of a given degree.] One may wish to check the degree of the full ring as a module over the predicted subring. For example, K[x,y] as a module over K[x+y,xy] has basis 1, x, but why? [Because of the minimal polynomial (T-x)(T-y) over that subring. But this reasoning is less helpful for larger degree. Nevertheless one may wish to look at our full ring as a (free) module over the polynomial ring in the elementary symmetric functions. Is there a basis of that module that is permuted by our cyclic group? And one really wants the ring structure, not just the vector space.] Wilberd - Note that the invariant ring is in general not Cohen-Macaulay so it is not free as a module over the symmetric polynomials. I think that excludes the possibility of a basis permuted by the group. – Torsten Ekedahl Mar 21 2010 at 19:37 I was also thinking that the symmetrized square free monomials might generate. Here's another approach that may be fruitful. Over $\mathbb{Q}$, let $K_n$ the field obtained by adjoining by a primitive $2^n$th root of unity $\zeta_n$. The prime 2 is totally ramified in $K_n$. Set $y_j = \sum_{k=0}^{2^n-1} \zeta_n^{jk} x_k$. Then the action of $C_{2^n}$ on the $y_j$ is to multiply them by a suitable root of unity. Thus, one wants the set of non-negative integers $a_j$ such $\prod y_j^{a_j}$ is invariant, which amounts to them being an additive submonoid of an integral lattice ..continued.. – Victor Miller Mar 22 2010 at 4:10 Continuation: and thus finitely generated. For $n=2$ we get as generators $(1,0,0,0),(0,1,0,1),(0,0,2,0),(0,2,1,0),(0,4,0,0),(0,0,0,4)$. For each such $\alpha=(a,b,c,d)$, set $z_{\alpha} = \prod_j y_j^{\alpha_j}$. Note that some of the the $z_{\alpha}$ are congruent modulo $\pi$, the unique prime above 2. So take sums and differences of those, dividing by $\pi$ and reduce mod $\pi$, plus the reduction of all of the others. Do those reductions generated the invariant ring in characteristic 2 ? – Victor Miller Mar 22 2010 at 4:18 Ooops,forgot to throw in $(0,0,1,2)$ as a generator. 
– Victor Miller Mar 22 2010 at 4:20

The paper [H.E.A. Campbell, J. Harris and D.L. Wehlau, Internal duality for resolutions of rings, J. of Algebra, 215 (1999) 1--33] considers this question (or a closely related one) when $n=3$. Also note that since the group acts via permutations, the answer is (essentially) the same for all fields of characteristic 2, so it suffices to work over ${\mathbb F}_2$.

I found an answer to my question (well, almost -- I'd still like a more explicit description) in the following paper: MR894515 (88k:13004) 13B05 (11T99 20J06) Landweber, Peter S. (1-RTG); Stong, Robert E. (1-VA). The depth of rings of invariants over finite fields. Number theory (New York, 1984–1985), 259–274, Lecture Notes in Math., 1240, Springer, Berlin, 1987. In it they prove (their Theorem 5) that if $V$ is a finite dimensional vector space over a field $k$ of characteristic $p$, and $G$ is a finite group acting linearly on $V$ such that the subspace of covariants $V_G = V/\{ gv - v : v \in V, g \in G\}$ has codimension 1 in $V$ (which is the case for my question), then the ring of invariants $k[V]^{G}$ is a polynomial ring. There's another paper, "Invariants of some Abelian $p$-groups in characteristic $p$" by Mara D. Neusel in Proc. AMS v. 125, no. 7, pp. 1921-1931, showing that a similar result holds when codim$(V^G) = 2$ or codim$(V_G) = 2$, where $V^G = \{ v : v = gv \text{ for all } g \in G\}$. Unfortunately neither applies in my case.

- This looks to be in contradiction with the result that says that the invariant ring is not Cohen-Macaulay, as a polynomial ring is certainly Cohen-Macaulay. Also, according to the Math Review, the condition is that the subspace $\{gv-v\}$ be $1$-dimensional, not of codimension $1$. (The space of covariants is the quotient of $V$ by that space.) – Torsten Ekedahl Apr 1 2010 at 18:21
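Small cases of the question are easy to experiment with by machine. The Python sketch below (my own illustration, not from the thread) enumerates the orbit sums ("symmetrized monomials") for $C_4$ acting on $\mathbb{F}_2[x_0,\ldots,x_3]$; these span the invariants in each degree, since the coefficients of an invariant are constant on orbits and live in $\mathbb{F}_2$.

```python
from itertools import product

n = 4        # variables x0,...,x3; C_4 acts by cyclically shifting indices
max_deg = 3  # list orbit sums up to this total degree

def orbit(e):
    """All cyclic shifts of an exponent vector e."""
    return {tuple(e[(i + s) % n] for i in range(n)) for s in range(n)}

def monomial(e):
    return '*'.join(f'x{i}^{a}' for i, a in enumerate(e) if a) or '1'

seen = set()
for e in product(range(max_deg + 1), repeat=n):
    if not 0 < sum(e) <= max_deg or e in seen:
        continue
    orb = orbit(e)
    seen |= orb
    # over F_2 every invariant is a sum of such orbit sums, since the
    # coefficients of an invariant are constant on each orbit
    print(f'deg {sum(e)}:', ' + '.join(monomial(f) for f in sorted(orb)))
```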
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9218751788139343, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/94018/list
## Return to Answer

I am not sure if this is the type of direct proof that you are looking for, but here it goes. I will start with a more general theorem:

Let $M$ be a cancellative monoid, and let $K$ be the left adjoint to the forgetful functor $U:GROUP\rightarrow MONOID$. Then $BM$ is homotopy equivalent to $BK(M)$.

The way I like to see this is to think of both monoids and (therefore) groups as categories. By $B$, I mean the nerve of the category, which turns the category into a simplicial set. Now I find these sorts of nerve theorems are much easier to see in the world of simplicial sets. To see this particular theorem, it suffices to try to build a minimal fibration (that is, a fibrant replacement). In the case of a monoid, all of the inner horns are filled, and we must only find out how to fill the outer horns. But these will just be adding in the inverses that are not yet in the monoid. Further, the minimal fibration condition will ensure that horn fillers are unique. Essentially, what you are doing is performing a geometric version of the $K$ functor in the category of simplicial sets. As an interesting exercise, it would be good to take the minimal fibration associated to $B\mathbb{N}$ and see that you get the simplicial set $B\mathbb{Z}$.

Now for the James construction that you mention (even though this is not part of your question, it is worth mentioning): there is a simplicial set version of this called the Milnor FK construction. What you do is start with a reduced simplicial set $X$ (a simplicial set with one vertex). We then define a simplicial group $FK(X)$ whose $n$-th group is the free group on the elements of $X_n$ modulo the image of the iterated degeneracy $s_0^n(pnt)$, where $pnt$ is the vertex.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945188581943512, "perplexity_flag": "head"}
http://planetmath.org/APropertyOfTruthValueSemanticsForIntuitionisticPropositionalLogic
# a property of truth-value semantics for intuitionistic propositional logic

In this entry, we show the following: if $\neg A$ is a tautology of $V_{n}$, then $\neg A$ is a theorem.

First, we need the following lemma, which is the intuitionistic version of one for classical propositional logic, found here. Given an interpretation $v$, define

$v[A]\textrm{ is }\left\{\begin{array}{ll}\neg A&\textrm{if }v(A)=0,\\ \neg\neg A&\textrm{otherwise.}\end{array}\right.$

It is easy to see that $v(v[A])=n$ for any $A$ and any $v$, so that $v[A]$ is always true. In addition, we have the following table:

| $v(A)$ | $v[A]$ | $v(B)$ | $v[B]$ | $v(A\land B)$ | $v[A\land B]$ | $v(A\lor B)$ | $v[A\lor B]$ | $v(A\to B)$ | $v[A\to B]$ |
|---|---|---|---|---|---|---|---|---|---|
| $0$ | $\neg A$ | $0$ | $\neg B$ | $0$ | $\neg(A\land B)$ | $0$ | $\neg(A\lor B)$ | $n$ | $\neg\neg(A\to B)$ |
| $0$ | $\neg A$ | $\neq 0$ | $\neg\neg B$ | $0$ | $\neg(A\land B)$ | $\neq 0$ | $\neg\neg(A\lor B)$ | $n$ | $\neg\neg(A\to B)$ |
| $\neq 0$ | $\neg\neg A$ | $0$ | $\neg B$ | $0$ | $\neg(A\land B)$ | $\neq 0$ | $\neg\neg(A\lor B)$ | $0$ | $\neg(A\to B)$ |
| $\neq 0$ | $\neg\neg A$ | $\neq 0$ | $\neg\neg B$ | $\neq 0$ | $\neg\neg(A\land B)$ | $\neq 0$ | $\neg\neg(A\lor B)$ | $\neq 0$ | $\neg\neg(A\to B)$ |

The proofs of the following lemmas use instances of the theorem schemas below (proofs here):

1. $(C\to D)\to(\neg D\to\neg C)$
2. $\neg\neg\neg C\to\neg C$
3. $C\to\neg\neg C$

###### Lemma 1.

$v[A],v[B]\vdash v[A\land B]$.

###### Proof.

Since $\vdash A\land B\to A$ and $\vdash A\land B\to B$, by modus ponens and instances of theorem schema 1 above, we have $\vdash\neg A\to\neg(A\land B)$ and $\vdash\neg B\to\neg(A\land B)$. This proves the first three cases. For the last case, we start with the axiom $A\to(B\to A\land B)$, or $A\vdash B\to A\land B$ by the deduction theorem. Applying modus ponens twice to instances of schema 1, we get $A\vdash\neg\neg B\to\neg\neg(A\land B)$, or $\neg\neg B\vdash A\to\neg\neg(A\land B)$ by the deduction theorem twice. Again applying modus ponens twice to instances of 1, we have $\neg\neg B\vdash\neg\neg A\to\neg\neg\neg\neg(A\land B)$, or $\neg\neg B,\neg\neg A\vdash\neg\neg\neg\neg(A\land B)$ by the deduction theorem. With one application of modus ponens to an instance of schema 2, we have $\neg\neg B,\neg\neg A\vdash\neg\neg(A\land B)$, as desired. ∎

###### Lemma 2.

$v[A],v[B]\vdash v[A\lor B]$.

###### Proof.

Since $\vdash A\to A\lor B$ and $\vdash B\to A\lor B$, by applying modus ponens twice to instances of schema 1, we have $\vdash\neg\neg A\to\neg\neg(A\lor B)$ and $\vdash\neg\neg B\to\neg\neg(A\lor B)$. This settles the last three cases. For the first case, we use the axiom $(A\to\perp)\to((B\to\perp)\to((A\lor B)\to\perp))$, which is just $\neg A\to(\neg B\to\neg(A\lor B))$, or $\neg A,\neg B\vdash\neg(A\lor B)$ by the deduction theorem twice. ∎

###### Lemma 3.

$v[A],v[B]\vdash v[A\to B]$.

###### Proof.

For the first two cases, all we need is $\neg A\vdash\neg\neg(A\to B)$. To see this, note that $A\to\perp,A,\perp,\perp\to B,B$ is a deduction, so $\neg A,A\vdash B$, or $\neg A\vdash A\to B$ by the deduction theorem. Since $(A\to B)\to\neg\neg(A\to B)$ is an instance of schema 3, by modus ponens $\neg A\vdash\neg\neg(A\to B)$, as desired.

For the third case, by the deduction theorem it is enough to show $\neg\neg A,\neg B,A\to B\vdash\perp$. Now, $\neg A\to\perp,\neg B,A\to B,(A\to B)\to(\neg B\to\neg A),\neg B\to\neg A,\neg A,\perp$ is a deduction of $\perp$ from $\neg\neg A,\neg B$, and $A\to B$, where $(A\to B)\to(\neg B\to\neg A)$ is a theorem.

For the last case, all we need to show is $\neg\neg B\vdash\neg\neg(A\to B)$.
We start with $B\to(A\to B)$, which is an axiom. Applying modus ponens twice to instances of 1, we have $\vdash\neg\neg B\to\neg\neg(A\to B)$, or $\neg\neg B\vdash\neg\neg(A\to B)$. ∎

###### Lemma 4.

Suppose $p_{1},\ldots,p_{m}$ are all the propositional variables in a wff $A$. Then $v[p_{1}],\ldots,v[p_{m}]\vdash v[A].$

###### Proof.

We use induction on the number $n$ of primitive logical connectives ($\land,\lor$, and $\to$) in $A$. If $n=0$, then $A$ is either $\perp$ or a propositional variable $p$. If $A$ is $\perp$, then $\perp\vdash\perp$, or $\vdash\neg\perp$, that is, $\vdash v[\perp]$. If $A$ is $p$, then clearly $v[p]\vdash v[p]$.

Now, if $A$ has $n+1$ connectives and is either $B\land C$, $B\lor C$, or $B\to C$, then $B$ and $C$ each have no more than $n$ connectives. By induction,

$v[p_{i(1)}],\ldots,v[p_{i(s)}]\vdash v[B]\qquad\mbox{and}\qquad v[p_{j(1)}],\ldots,v[p_{j(t)}]\vdash v[C],$

and hence

$v[p_{1}],\ldots,v[p_{m}]\vdash v[B]\qquad\mbox{and}\qquad v[p_{1}],\ldots,v[p_{m}]\vdash v[C].$

By the first three lemmas above, $v[B],v[C]\vdash v[A]$, so by modus ponens twice, $v[p_{1}],\ldots,v[p_{m}]\vdash v[A].$ ∎

We are now ready for the main result:

###### Theorem 1.

If $A$ is a tautology of $V_{n}$, then $\vdash\neg\neg A$.

###### Proof.

Let $v$ be any interpretation; then $v[p_{1}],\ldots,v[p_{m}]\vdash v[A]$ by the last lemma, where $p_{1},\ldots,p_{m}$ are all the propositional variables in $A$. Since $A$ is a tautology, $v(A)=n\ne 0$, so $v[A]$ is $\neg\neg A$ and

$v[p_{1}],\ldots,v[p_{m}]\vdash\neg\neg A.$

If $m=0$, then we are done. Otherwise, let $v_{1}$ and $v_{2}$ be two interpretations such that $v_{1}[p_{i}]=v_{2}[p_{i}]$ for $i=1,\ldots,m-1$, while $v_{1}[p_{m}]=\neg p_{m}$ and $v_{2}[p_{m}]=\neg\neg p_{m}$, so that

$v[p_{1}],\ldots,v[p_{m-1}],\neg p_{m}\vdash\neg\neg A\qquad\mbox{and}\qquad v[p_{1}],\ldots,v[p_{m-1}],\neg\neg p_{m}\vdash\neg\neg A.$

By applying the deduction theorem twice to each of the above deductive relations, we get

$v[p_{1}],\ldots,v[p_{m-1}],\neg A\vdash\neg\neg p_{m}\qquad\mbox{and}\qquad v[p_{1}],\ldots,v[p_{m-1}],\neg A\vdash\neg\neg\neg p_{m}.$

Applying schema 2 to the second deductive relation above, we get

$v[p_{1}],\ldots,v[p_{m-1}],\neg A\vdash\neg p_{m}.$

By the deduction theorem once more, we have

$v[p_{1}],\ldots,v[p_{m-1}]\vdash\neg A\to\neg\neg p_{m}\qquad\mbox{and}\qquad v[p_{1}],\ldots,v[p_{m-1}]\vdash\neg A\to\neg p_{m}.$

With the axiom instance $(\neg A\to\neg p_{m})\to((\neg A\to\neg\neg p_{m})\to\neg\neg A)$, apply modus ponens to each of the last two deductive relations to get

$v[p_{1}],\ldots,v[p_{m-1}]\vdash\neg\neg A,$

so that $v[p_{m}]$ is removed from the original deductive relation. Continue this process until all of the $v[p_{i}]$ are removed on the left, and we get $\vdash\neg\neg A.$ ∎

We record two immediate corollaries:

###### Corollary 1.

If $\neg A$ is a tautology of $V_{n}$, then $\vdash\neg A$.

###### Proof.

By the theorem, $\vdash\neg\neg\neg A$. But $\vdash\neg\neg\neg A\to\neg A$, so $\vdash\neg A$ by modus ponens. ∎

In the next corollary, we write $\vdash_{c}A$ and $\vdash_{i}A$ to indicate that $A$ is a theorem of classical and of intuitionistic propositional logic respectively.

###### Corollary 2.

(Glivenko’s Theorem) $\vdash_{c}A$ iff $\vdash_{i}\neg\neg A$.

###### Proof.

If $\vdash_{c}A$, then by the soundness theorem of classical propositional logic, $A$ is a tautology of truth-value semantics, which is just $V_{2}$, and therefore by the theorem above, $\vdash_{i}\neg\neg A$.
Conversely, if $\vdash_{i}\neg\neg A$, then certainly $\vdash_{c}\neg\neg A$, as PL${}_{i}$ is a subsystem of PL${}_{c}$. Since $\neg\neg A\to A$ is a theorem of PL${}_{c}$, we get $\vdash_{c}A$ by modus ponens. ∎

In particular, $\vdash_{c}\perp$ iff $\vdash_{i}\perp$, since $\vdash_{i}\neg\neg\perp\leftrightarrow\perp$. In other words, PL${}_{c}$ is consistent iff PL${}_{i}$ is.
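The table of $v[\cdot]$ values used in the lemmas above can be checked mechanically. The sketch below assumes the chain semantics for $V_n$ that the table reflects (truth values $0,\ldots,n$ with designated value $n$; $\land=\min$, $\lor=\max$, $a\to b=n$ if $a\le b$ and $b$ otherwise, $\neg a=a\to 0$) and re-derives all four rows by brute force.

```python
# Brute-force check of the v[.] table under the chain semantics for V_n.
n = 4  # any n >= 1 produces the same four rows

AND = lambda a, b: min(a, b)
OR  = lambda a, b: max(a, b)
IMP = lambda a, b: n if a <= b else b
NOT = lambda a: IMP(a, 0)

def bracket(v):
    # the entry's v[.]: a negation if the value is 0, a double negation otherwise
    return 'neg' if v == 0 else 'negneg'

rows = set()
for a in range(n + 1):
    truth = NOT(a) if a == 0 else NOT(NOT(a))
    assert truth == n                       # v(v[A]) = n: v[A] is always true
    for b in range(n + 1):
        rows.add((bracket(a), bracket(b), bracket(AND(a, b)),
                  bracket(OR(a, b)), bracket(IMP(a, b))))

for row in sorted(rows):
    print(row)   # exactly the four rows of the table in the entry
```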
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 185, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8800755739212036, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/15294/why-is-the-area-under-a-curve-the-integral?answertab=oldest
# Why is the area under a curve the integral?

I understand how derivatives work based on the definition, and the fact that my professor explained it step by step to the point where I can derive it myself. However, when it comes to the area under a curve, for some reason when you break it up into an infinite number of rectangles, it magically turns into the anti-derivative. Can someone explain why that is the definition of the integral, and how Newton figured this out?

- 5 There are 2 potential questions here. One is the question of why the definite Riemann integral gives the correct notion of "area under a curve" for a (nonnegative, Riemann integrable) function. The other, which seems to be what you're really asking, is the question of why an antiderivative evaluated at the endpoints of an interval and subtracted yields that definite integral. The latter question is answered by an understanding of the fundamental theorem of calculus. The purpose of this comment is just to help clarify your question. – Jonas Meyer Dec 23 '10 at 4:58

It's also possible that he is seeking the intuition for Calculus, specifically integration – picakhu Dec 23 '10 at 5:19

1 @picakhu: I think that he or she is certainly seeking intuition, but the phrase "magically it turns into the anti-derivative" indicates that this is really about the fundamental theorem of calculus, and not just about the definition of the integral. – Jonas Meyer Dec 23 '10 at 5:25

I don't understand the difference between the two – qwertymk Dec 23 '10 at 5:27

1 @qwertymk: Well, now you have a fabulous explanation from Arturo Magidin, but I'll also mention that en.wikipedia.org/wiki/Riemann_integral is a good starting point for learning about how the integral is actually defined, while en.wikipedia.org/wiki/Fundamental_theorem_of_calculus is a good starting point for learning about the relationship with antiderivatives. Both include excellent graphics! – Jonas Meyer Dec 23 '10 at 5:32

## 3 Answers

First: the integral is defined to be the (net signed) area under the curve. The definition in terms of Riemann sums is precisely designed to accomplish this. The integral is a limit, a number. There is, a priori, no connection whatsoever with derivatives. (That is one of the things that makes the Fundamental Theorems of Calculus such potentially surprising things.)

Why does the limit of the Riemann sums actually give the area under the graph? The idea of approximating a shape whose area we don't know both from "above" and from "below" with areas we do know goes all the way back to the Greeks. Archimedes gave bounds for the value of $\pi$ by figuring out areas of inscribed and circumscribed polygons of a circle, knowing that the area of the circle would be somewhere between the two; the more sides to the polygons, the closer the inner and outer polygons are to the circle, and the closer the areas are to the area of the circle.

The way Riemann tried to formalize this was with the "upper" and "lower" Riemann sums: assuming the function is relatively "nice", so that on each subinterval it has a maximum and a minimum, the "lower Riemann sum" is obtained by taking the largest rectangle that will lie completely under the graph, using the minimum value of the function on the interval as its height; and the "upper Riemann sum" is obtained by taking the smallest rectangle for which the graph will lie completely under it (by taking the maximum value of the function as the height).
Certainly, the exact area under the graph on that interval will be somewhere between the two. If we let $\underline{S}(f,P)$ be the lower sum corresponding to some fixed partition $P$ of the interval, and $\overline{S}(f,P)$ be the upper sum, we will have that $$\underline{S}(f,P) \leq \int_a^b f(x)\,dx \leq \overline{S}(f,P).$$ (Remember that $\int_a^bf(x)\,dx$ is just the symbol we use to denote the exact (net signed) area under the graph of $f(x)$ between $a$ and $b$, whatever that quantity may be.) Also, intuitively, the more intervals we take, the closer these two approximations (one from below and one from above) will be. This does not always work out if all we do is take "more" intervals. But one thing we can show is that if $P'$ is a refinement of $P$ (it includes all the dividing points that $P$ had, and possibly more points) then $$\underline{S}(f,P)\leq \underline{S}(f,P')\text{ and } \overline{S}(f,P')\leq \overline{S}(f,P)$$ so at least the approximations are heading in the right direction. To see why this happens, suppose you split one of the subintervals $[t_i,t_{i+1}]$ in two, $[t_i,t']$ and $[t',t_{i+1}]$. The minima of $f$ on $[t_i,t']$ and on $[t',t_{i+1}]$ are each greater than or equal to the minimum over the whole of $[t_i,t_{i+1}]$, but it may be that the minimum in one of the two bits is actually strictly larger than the minimum over $[t_i,t_{i+1}]$. The areas we get after the split can be no smaller, but they can be larger than the ones we had before the split. Similarly for the upper sums.

So, let's consider one particular sequence of partitions: divide the interval into 2 equal parts; then into 4; then into 8; then into 16; then into 32; and so on; then into $2^n$, etc. If $P_n$ is the partition that divides $[a,b]$ into $2^n$ equal parts, then $P_{n+1}$ is a refinement of $P_n$, and so we have: $$\underline{S}(f,P_1) \leq\cdots \leq \underline{S}(f,P_n)\leq\cdots \leq\int_a^b f(x)\,dx \leq\cdots \leq\overline{S}(f,P_n)\leq\cdots \leq \overline{S}(f,P_2)\leq\overline{S}(f,P_1).$$

Now, the sequence of numbers $\underline{S}(f,P_1) \leq \underline{S}(f,P_2)\leq\cdots \leq \underline{S}(f,P_n)\leq\cdots$ is increasing and bounded above (by the area). So the numbers have a supremum; call it $\underline{S}$. This number is no more than $\int_a^b f(x)\,dx$. And the numbers $\overline{S}(f,P_1) \geq \overline{S}(f,P_2)\geq\cdots \geq \overline{S}(f,P_n)\geq\cdots$ are decreasing and bounded below, so they have an infimum; call this $\overline{S}$; again, it is no less than $\int_a^bf(x)\,dx$. So we have: $$\lim_{n\to\infty}\underline{S}(f,P_n) = \underline{S} \leq \int_a^b f(x)\,dx \leq \overline{S} = \lim_{n\to\infty}\overline{S}(f,P_n).$$

What if we are lucky? What if actually we have $\underline{S}=\overline{S}$? Then it must be the case that this common value is the value of $\int_a^b f(x)\,dx$. It just doesn't have a choice! It's definitely trapped between the two, and if there is no space between them, then it's equal to them. What Riemann proved was several things:

1. If $f$ is "nice enough", then you will necessarily get that $\underline{S}=\overline{S}$. In particular, continuous functions happen to be "nice enough", so it will definitely work for them (in fact, continuous functions turn out to be "very nice", not just "nice enough").

2. If $f$ is "nice enough", then you don't have to use the partitions we used above.
You can use any sequence of partitions, so long as the "mesh size" (the size of the largest subinterval in the partition) gets smaller and smaller, with limit $0$ as $n\to\infty$; if it works for the partitions "divide-into-$2^n$-equal-intervals", then it works for any sequence of partitions whose mesh size goes to zero. So, for example, we can take $P_n$ to be the partition that divides $[a,b]$ into $n$ equal parts, even though $P_{n+1}$ is not a refinement of $P_n$ in this case.

3. In fact, you don't have to use $\underline{S}(f,P)$ and $\overline{S}(f,P)$. For the partition $P$, just pick any rectangle that has as its height any value of the function in the subinterval (that is, pick an arbitrary $x_i^*$ in the subinterval $[t_i,t_{i+1}]$, and use $f(x_i^*)$ as the height). Call the resulting sum $S(f,P,x_1^*,\ldots,x_n^*)$. Then you have $$\underline{S}(f,P) \leq S(f,P,x_1^*,\ldots,x_n^*)\leq \overline{S}(f,P)$$ because $\underline{S}(f,P)$ is computed using the smallest possible values of $f$ throughout, and $\overline{S}(f,P)$ is computed using the largest possible values of $f$ throughout. But since we already know, from 1 and 2 above, that $\underline{S}(f,P)$ and $\overline{S}(f,P)$ have the same limit, the sums $S(f,P,x_1^*,\ldots,x_n^*)$ also get squeezed and must have that same limit, which equals the integral. In particular, we can always take the left endpoint (and get a "Left Hand Sum") or we can always take the right endpoint (and get a "Right Hand Sum"), and you will nevertheless get the same limit.

So in summary, you can pick any sequence of partitions, whichever happens to be convenient, so long as the mesh size goes to $0$, and you can pick any points on the subintervals (say, ones which make the calculations simpler) at each stage, and so long as the function is "nice enough" (for example, if it is continuous), everything will work out and the limit will be the number which must be the value of the area (because it was trapped between the lower and upper sums, and they both got squeezed together, trapping the limit and the integral both between them).

Now, (1) and (2) above are the hardest part of what Riemann did. Don't be surprised if it sounds a bit magical at this point. But I hope that you agree that if the lower and upper sums for the special partitions have the same limits then that limit must be the area that lies under the graph. Thanks to that work of Riemann, then (at least for continuous functions) we can define $\int_a^b f(x)\,dx$ to be the limit of, say, the left hand sums of the partitions we get by dividing $[a,b]$ into $n$ equal parts, because these partitions have mesh size going to $0$, we can pick any points we like (say, the left end points), and we know the limit is going to be that common value of $\underline{S}$ and $\overline{S}$, which has to be the area. So that, under this definition, $\int_a^b f(x)\,dx$ really is the net signed area under the graph of $f(x)$. It just doesn't have a choice but to be that, when $f$ is "nice enough".

Second, the area does not turn into "the" antiderivative. What happens is that it turns out (perhaps somewhat magically) that the area can be computed using an antiderivative. I'll go into some more details below. As to how Newton figured this out, his teacher, Isaac Barrow, was the one who discovered there was a connection between tangents and areas; some of the basic ideas were his. They came from studying some simple functions and some simple formulas for tangents he had discovered.
For example, the tangents to the parabola $y=x^2$ were interesting (there was generally geometric interest in tangents and in "squaring" regions, also known as finding the "quadrature" of a region, that is, finding a way to construct a square or rectangle that had the same area as the region you were considering), and led to associating the parabola $y=x^2$ with lines of the form $y=2x$. It does not take too much experimentation to realize that if you look at the area under $y=2x$ from $0$ to $a$, you end up with $a^2$, establishing a connection. Barrow did this with arguments with infinitesimals (which were a bit fuzzy and not set on an entirely correct and solid logical foundation until well into the 20th century), which were generally laborious, and only for some curves. When Newton extended Barrow's methods to more general curves and tangents, he also extended the discovery of the connection with areas, and was able to prove what is essentially the Fundamental Theorem of Calculus.

Now, here is one way to approach the connection. We want to figure out the value of, say, $$\int_0^a f(x)\,dx$$ for some $a$. This can be done using limits and Riemann sums (Newton and Leibniz had similar methods, though not set up quite as precisely as Riemann sums are). But here is an absolutely crazy suggestion: suppose you can find a "master function" $\mathcal{M}$, which, when given any point $b$ between $0$ and $a$, will give you the value of $\int_0^b f(x)\,dx$. If you have such a master function, then you can use it to find the value of the integral you want just by taking $\mathcal{M}(a)$!

In fact, this is the approach Barrow had taken: his great insight was that instead of trying to find the quadrature of a particular area, he was trying to solve the problem of squaring several different (but related) areas at the same time. So he was looking for, for instance, a "master function" for the region that was like a triangle except that the top was a parabola instead of a line (like the area from $0$ to $a$ under $y=x^2$), and so on.

On its face, this is a completely ludicrous suggestion. It's like telling someone who is trying to know how to get from building A to building B that if he only memorizes the map for the entire city first, then he can use that knowledge to figure out how to get from A to B. If we are having trouble finding the integral $\int_0^a f(x)\,dx$, then the "master function" seems to require us to find not just that area, but also all areas in between! It's like telling someone who is having trouble walking that he should just run very slowly when he wants to walk.

But, again, the interesting thing is that even though we may not be able to say what the "master function" is, we can say how it changes as $b$ changes (remember, $\mathcal{M}(b) = \int_0^b f(x)\,dx$ is a number that depends on $b$, so $\mathcal{M}$ is a function of $b$). Because figuring out how functions change is easier than computing their values (just think about derivatives, and how we can easily figure out the rate of change of $\sin(x)$, but have a hard time actually computing specific values of $\sin(x)$ that are not among some very simple ones). (This is also something Barrow already knew, as did Newton.)
For "nice functions" (if $f$ is continuous on an interval that contains $0$ and $a$), we can do it using limits and some theorems about "nice" functions: Using limits, we have: \begin{align} \lim_{h\to 0}\frac{\mathcal{M}(b+h)-\mathcal{M}}{h} &= \frac{1}{h}\left(\int_0^{b+h}f(x)\,dx - \int_0^bf(x)\,dx\right)\\ &= \frac{1}{h}\int_b^{b+h}f(x)\,dx. \end{align} Since we are assuming that $f$ is continuous on $[0,a]$, it is continuous on the interval with endpoints $b$ and $b+h$ (I say it this way because $h$ could be negative). So it has a maximum and a minimum (continuous function on a finite closed interval). Same the maximum is $M(h)$ and the minimum is $m(h)$. Then $m(h) \leq f(x) \leq M(h)$ for all $x$ in the interval, so we know, since the integral is the area, that $$hm(h) \leq \int_b^{b+h}f(x)\,dx \leq hM(h).$$ That means that $$m(h) \leq \frac{1}{h}\int_b^{b+h}f(x)\,dx \leq M(h)\text{ if $h\gt 0$}$$ and $$M(h) \leq \frac{1}{h}\int_b^{b+h}f(x)\,dx \leq m(h)\text{ if $h\lt 0$.}$$ As $h\to 0$, the interval gets smaller, the difference between the minimum and maximum value gets smaller. One can prove that both $M$ and $m$ are continuous functions, and that $m(h)\to f(b)$ as $h\to 0$, and likewise that $M(h)\to f(b)$ as $h\to 0$. So we can use the Squeeze Theorem to conclude that since the limit of $\frac{1}{h}\int_b^{b+h}f(x)\,dx$ is squeezed between two functions that both have the same limit as $h\to 0$, then $\frac{1}{h}\int_b^{b+h}f(x)\,dx$ also has a limit as $h\to 0$ and is in fact that same quantity, namely $f(b)$. That is $$\frac{d}{db}\mathcal{M}(b) = \lim_{h\to 0}\frac{\mathcal{M}(b+h)-\mathcal{M}(b)}{h} = \lim_{h\to 0}\frac{1}{h}\int_{b}^{b+h}f(x)\,dx = f(b).$$ That is: when $f$ is continuous, the "Master function" for areas turns out to have a rate of change equal to $f$. This is not that crazy, if you think about it: how is the area under $y=f(x)$ from $x=0$ to $x=b$ changing? Well, it's changing by whatever $f$ is. This means that, whatever the "Master function" turns out to be, it will be an antiderivative of $f(x)$. We also know, because we are very good with derivatives, that if $\mathcal{F}(x)$ and $\mathcal{G}(x)$ are two functions, and $\mathcal{F}'(x) = \mathcal{G}'(x)$ for all $x$, then $\mathcal{F}$ and $\mathcal{G}$ differ by a constant: there exists a constant $k$ such that $\mathcal{F}(x) = \mathcal{G}(x)+k$ for all $x$. So, we know that the "Master function" is an antiderivative. If, by some sheer stroke of luck, we happen to find any antiderivative $F(x)$ for $f(x)$, then we know that the only possible difference between $\mathcal{M}(b)$ and $F(b)$ is a constant. What constant? Well, luckily we know one value of $\mathcal{M}(b)$: we know that $\mathcal{M}(0) = \int_0^0f(x)\,dx$ should be $0$. So, $M(0) = 0 = F(0)-F(0)$, which means the constant has to be $-F(0)$. That is, we must have $M(b) = F(b)-F(0)$ for all $b$. So, if we find any antiderivative $F$ of $f$, then $\mathcal{M}(b) = F(b)-F(0)$ is in fact the "Master function" we were looking for, the one that gives all the integrals between $0$ and $a$, including $0$ and including $a$. So that we have that two very different processes (computing areas using limits of Riemann sums, and derivatives) are connected: if $f(x)$ is continuous, and $F(x)$ is any antiderivative for $f(x)$ on $[0,a]$, then $$\int_0^a f(x)\,dx = \mathcal{M}(a) = F(a)-F(0).$$ But the integral did not "magically turn" into an antiderivative. 
It's that the "Master function" which can be used to keep track of all integrals of $f(x)$ has rate of change equal to $f$, which gives us a "back door" to computing integrals. Newton was able to prove this because he had the guide of Barrow's insight that this was happening for the functions he worked with. Barrow's insight was achieved because he had the brilliant idea of trying to come up with a "Master function" instead of trying to rectify lots of different areas one at a time, and he noticed the connection because he had already worked with tangents/derivatives for those functions. Leibniz likewise had access to Barrow's ideas, so the connection between the two was also known to him. - 3 Just a tiny note: "finding a way to construct a rectangle that had the same area as the region you were considering" would be what they called in those days "quadrature". Otherwise, this answer is a work of Art, as expected. :) – J. M. Dec 23 '10 at 5:49 @J.M. That's what I get for not checking the proper term in English and trying to go by with dim recollections of what it was in Spanish... Thanks. (And don't call me "Art"; I don't care for it (-; ). – Arturo Magidin Dec 23 '10 at 6:07 3 It was intended to be a (horrible) pun, but sure, I won't do it again. ;) – J. M. Dec 23 '10 at 6:11 3 @J.M.: Oh, I got the pun all right (hence the winking smiley, and I did see your smiley). Just wanted to nip any possibility of the nickname getting picked up in the bud. – Arturo Magidin Dec 23 '10 at 6:12 That was awesome! – MyUserIsThis Feb 24 at 20:07 show 1 more comment One way you can perhaps "justify"/give an intuitive reason is to consider the following figure: $A(x)$ is the area under the curve from $0$ to $x$, the brown region. $A(x+dx)$ is the area under the curve from $0$ to $x + dx$, the brown + gray. Now for for really small $dx$, we can consider the gray region to be a rectangle of side width $dx$ and height $f(x)$. Thus $\dfrac{A(x+dx) - A(x)}{dx} = f(x)$. Thus as $dx \to 0$, we see that $A'(x) = f(x)$. It is kind of intuitive to define area by approximating by very thin rectangles. The above gives an intuition as to why the derivative of the area gives the curve. - 2 +1.. For graphics and explanation. – night owl Apr 17 '11 at 5:29 I strongly recommend that you take a look at the first chapter of Gilbert Strang's Calculus textbook: http://ocw.mit.edu/resources/res-18-001-calculus-online-textbook-spring-2005/textbook/MITRES_18_001_strang_1.pdf. This chapter provides an insightful introduction to integration that likely takes an approach that is very different from your professor's. A typical explanation of integration is as follows: We want to know the area under a curve. We can approximate the area under a curve by summing the area of lots of rectangles, as shown above. It is clear that with hundereds or thousands of rectangles, the sum of the area of each rectangle is very nearly the area under the curve. In the limit, we get that the sum is exactly equal to the area. This animation may help with the intuition, We define the integral to be the limit described and depicted above: $\int _{ a }^{ b }{ f(x)dx=\lim _{ n\rightarrow \infty }{ \sum _{ i=1 }^{ n }{ f({ x }_{ i })\Delta x } } }$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 167, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9616135358810425, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/38399/what-is-a-good-non-technical-introduction-to-theories-of-everything/38414
# What is a good non-technical introduction to theories of everything?

I'm not a physicist, but I'm interested in unified theories, and I do not know how to start learning about them. What would be a good book to read to start learning about this topic?

- I'm not sure if this is going to be on topic here, but for now I just edited it to be one of our book recommendations. We'll see what the community thinks about its on-topicness. – David Zaslavsky♦ Sep 26 '12 at 17:36

## 3 Answers

As a theory of everything includes a theory of all particular things, it would be good if you start by learning about the theories that need to be unified. This means first

• some quantum mechanics,
• something about classical electromagnetism,
• something about special and general relativity,

then

• some quantum field theory,
• something about quantum electrodynamics,
• something about the standard model.

So you should look at the recommendations for introductions to these subjects available at our Book recommendations. Unless you are content with such books as ''The elegant universe'' http://en.wikipedia.org/wiki/The_Elegant_Universe where you learn the buzzwords without a deeper understanding.

These sorts of questions are hard to answer unless there is some understanding about the level of math and physics the asker actually has. Probably one of the most irritating aspects of trying to learn physics is that there is a lot of literature out there that completely ignores any discussion of the Lagrangian and Hamiltonian, which are so fundamental to physics formulations (in fact I have an entire college physics text that doesn't use either term once, which is mind-numbing and infuriating in retrospect, and I even had a physics grad argue blue in the face that the Hamiltonian was not $\mathcal{H}=T+V$). If a person has not been exposed previously to these concepts, then it is very difficult to have a coherent conversation about physics, let alone the Theory of Everything.

Assuming you have at least this level of introductory understanding, the next place I would start is to get an understanding of algebra, Lie groups and the standard model. The best introductory paper I have ever read is A Simple Introduction to Particle Physics by Robinson, Bland, Cleaver and Dittman of Baylor University. I would focus especially on Part II - Algebraic Foundations. The reason this is critical is that although the Theory of Everything is not some sort of crude algebra (despite attempts by some to make it so), modern physics is understood in the language of algebra, and even more importantly a supersymmetry algebra which makes use of Grassmann numbers for the fermionic fields.

Another good introductory text which is worth exploring is the book Lie Groups, Physics and Geometry by Robert Gilmore. I would pay special attention to chapter one, as there is an excellent discussion of why the general polynomial of degree 5 or more cannot be solved using radicals. It also gives an excellent initial exposition of Lie groups and their classification as simple vs. solvable. In any case, if you want to master Theories of Everything, it is the language of algebra that needs to be mastered first.

- I particularly like the links to the Robinson et al. paper and the book about Lie groups and geometry for myself +1, but I'm not sure if this is what the asker had in mind too, since these resources are rather technical ...
– Dilaton Sep 27 '12 at 16:24 1 @Dilaton Probably true, I am just at the point where I think people need to be realistic about the level of effort they are going to have to invest in order to get a good understanding to discuss TOEs. Most popular discussions really barely touch on some of the more difficult concepts. Algebra is really not too difficult; it's just that we don't introduce it early enough in the curriculum for people to make the mental connections. – Hal Swyers Sep 27 '12 at 18:50 Of course the Hamiltonian is T + V, but I would argue that that is a horrible definition to use for the Hamiltonian--fundamentally, it is the Legendre transformation of the Lagrangian, replacing time derivatives with their canonical momenta. – Jerry Schirmer Oct 30 '12 at 13:18

The book Out of this World, written by Stephen Webb, is a good introduction if you have really no idea what modern fundamental physics is up to. It is the one that made me excited about this stuff for the first time several years ago, and the excitement still holds :-). It gently starts by explaining why symmetries are important in physics, followed by an overview of QM and GR, what particle physics generally is about, and the standard model. Then the key ideas behind GUTs (Grand Unified Theories), supersymmetry, and extra dimensions leading to supergravity as a first unified theory including gravity get introduced. The second part focuses on explaining string and M-theory (other approaches are briefly mentioned too) and some topics, such as black holes, the holographic principle, and cosmology, that can be addressed by it. The book builds up the wisdom it explains in a logical and systematic manner, and it is written in an absorbing style that made it difficult for me to put down. Historical notes about when which ideas and concepts were discovered by whom are included, and the way the narrator tells the story made me think, when reading it for the first time, that these are all nice people who do awesome and cool things. -
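To spell out the Legendre-transform point raised in the comment thread above (a standard classical-mechanics identity, stated here for a single degree of freedom as an illustrative gloss, not as part of any answer):

$$H(q,p) = p\,\dot q - L(q,\dot q), \qquad p = \frac{\partial L}{\partial \dot q},$$

and only for Lagrangians of the familiar form $L = \tfrac{1}{2}m\dot q^2 - V(q)$ does this reduce to $H = \tfrac{p^2}{2m} + V(q) = T + V$. For velocity-dependent potentials (a charged particle in a magnetic field, say) the Legendre-transform definition is the one that survives, which is why insisting on $H = T + V$ as the definition leads people astray.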
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9598476886749268, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/28265?sort=votes
## Proving Hodge decomposition without using the theory of elliptic operators?

In the common Hodge theory books, the authors usually cite other sources for the theory of elliptic operators, as in the book on Hodge theory by Claire Voisin, where you find on page 128 the Theorem 5.22:

Let P : E → F be an elliptic differential operator on a compact manifold. Assume that E and F are of the same rank, and are equipped with metrics. Then Ker P is a finite-dimensional subspace of the smooth sections of E and Im P is a closed subspace of finite codimension of the smooth sections of F, and we have an L²-orthogonal direct sum decomposition: smooth sections of E = Ker P ⊕ Im P* (where P* is the formal L²-adjoint of P).

In the case of Hodge theory, we consider the elliptic self-adjoint operator Δ, the Laplacian (or: Hodge-Laplacian, or Laplace-Beltrami operator). A proof of this theorem is in Wells' "Differential Analysis on Complex Manifolds", Chapter IV - but it takes about 40 pages, which is quite some effort!

Now that I'm learning the theory of elliptic operators (in part, because I want to patch this gap in my understanding of Hodge theory), I wonder if this "functional analysis" is really always necessary. Do you know of any class of complex manifolds (most likely some restricted class of complex projective varieties) where you can get the theorem above without using the theory of elliptic operators (or at least, where you can simplify the proofs so much that you don't notice you're working with elliptic operators)?

Maybe the general theorem really requires functional analysis (I think so), but the Hodge decomposition might follow from other arguments. I would be very happy to see some arguments proving special cases of Hodge decomposition on, say, Riemann surfaces. I would be even happier to hear why this is implausible (this would motivate me to learn more about these fascinating elliptic differential operators). If this ends up being argumentative and subjective, feel free to use the community wiki hammer. -

## 4 Answers

The hard part of the proof of the Hodge decomposition (which is where the serious functional analysis is used) is the construction of the Green's operator. In Section 1.4 of Lange and Birkenhake's "Complex Abelian Varieties", they prove the Hodge decomposition for complex tori using an easy Fourier series argument to construct the Green's operator. - 2 Moreover, studying tori isn't a diversion from the general case; the Sobolev theory can be done neatly on tori using Fourier series, and then transplanted to other manifolds. This is the line taken by Griffiths & Harris, who I think give a very good account of the Hodge theorem. Wells sets things up in the generality needed by Atiyah-Singer - for Hodge theory, this is overkill. – Tim Perutz Jun 15 2010 at 22:15 Thank you for the reference to Lange and Birkenhake, I didn't know this book. It is exactly what I was looking for: a class of Kähler manifolds where we are able to prove Hodge decomposition more easily. This example might be useful to see more directly what's going on (in the proof). – Konrad Voelkel Jun 16 2010 at 17:19

This is the final revision and probably my final post for a long time (deadlines).
I wanted to write a more definitive answer, about how to approach the Hodge theorem, since I've been thinking about it for a while. This is a bit long. So the short summary is that there is no really easy path, but each is beautiful in its own way.

Method 1 (orthogonal projections): This is the standard proof, although there are many variations which can be found in Griffiths-Harris, Warner, Wells, ... Here's a very rough idea. The basic problem is to show that the space $closed(X)$ of closed $C^\infty$ forms on a compact Riemannian manifold $X$ is a direct sum of the space of exact forms $exact(X)$ and the space of harmonic forms $harm(X)$. A closed form is harmonic if and only if it is orthogonal to every exact form (easy), so one might try to first prove that $$closed(X)= exact(X)\oplus exact(X)^\perp$$ and then identify the latter with $harm(X)$. Since these are infinite dimensional, the decomposition isn't automatic. However, one can make it work by using $L^2$ forms and applying Hilbert space methods to obtain: $$\overline{closed(X)}= \overline{exact(X)}\oplus \overline{exact(X)}^\perp$$ But at the end one wants to come back to $C^\infty$ forms, and here is where the magic of elliptic operators comes in. The basic result which makes this work is the regularity theorem: a weak solution of an elliptic equation, e.g. Laplace's equation, is in fact a true $C^\infty$ solution. The space on the right of the second decomposition is therefore $harm(X)$, and this is all one needs. As you said, the full details entail quite a bit of work, but one can look at standard books on Riemann surfaces (Farkas-Kra, Forster, Narasimhan...) for some instructive special cases.

Method 2 (Heat Equation): The heuristic is that if you think of the closed form as an initial temperature, governed by the heat equation, it should evolve toward a harmonic steady state. The nice thing is that this can be solved explicitly on Euclidean space, and this gives a good (short time) approximation for the general case. If I may make a shameless plug, I wrote up an outline in chapter 8 of my notes http://www.math.purdue.edu/~dvb/preprints/book.pdf

Method 3 (Deligne-Illusie): This is really an amplification of Kevin Lin's answer. One important consequence of the Hodge theory is the degeneration of the Hodge-to-de Rham spectral sequence $$H^j(X,\Omega_X^i) \Rightarrow H^{i+j}(X,\Omega_X^\bullet) = H^{i+j}(X,\mathbb{C})$$ when $X$ is smooth and projective, along with Kodaira vanishing. The first algebraic proof of this was due to Faltings (on the way to Hodge-Tate). Deligne and Illusie gave a comparatively elementary proof. Although, as Ravi Vakil commented, it is not the best way to first learn this stuff. Nor does it give the full Hodge decomposition. However, for people who want to go this route, an introduction aimed at students can be found in the book by Hélène Esnault and the late Eckart Viehweg.

Addendum (added June 24): I wanted to briefly address the question of how much of the Hodge decomposition can be understood algebraically. One can define algebraic de Rham cohomology with its Hodge filtration coming from the spectral sequence above. What is missing is a purely algebraic description of the "Betti lattice" $image [H^i(X,\mathbb{Z})\to H^i(X,\mathbb{C})]$ and the fact that the conjugate filtration $\overline{F}$ and $F$ are opposed in Deligne's sense. The first issue already seems serious. Even if $X$ is defined over $\mathbb{Q}$, a basis for the lattice typically involves transcendentals.
This is already clear in the simplest example $H^1(\mathbb{A}^1\setminus\{0\})$: the lattice is spanned by $[\frac{dz}{2\pi iz}]$. - @Donu: so does your final paragraph mean that getting decomposition from degeneration isn't evident to you either (as it is not to me)? – Boyarsky Jun 17 2010 at 18:42 1 @Boyarsky: I think you are correct. I think Hodge decomposition does not follow from degeneration. I think there is no purely algebraic proof of Hodge decomposition... – Kevin Lin Jun 17 2010 at 20:43 You can, however, use Hironaka and Hodge theory for compact complex algebraic varieties to get a mixed Hodge structure on the cohomology of noncompact singular complex varieties :-) – Konrad Voelkel Jun 18 2010 at 7:17 I guess Kevin Lin answered the question, but the existence of a (mixed) Hodge structure on cohomology cannot be proved by pure algebra at present. – Donu Arapura Jun 18 2010 at 19:06 I agree with the suggestion of starting with Riemann surfaces. The higher dimensional proof is more technical and involved, but involves no essential new ideas. – Deane Yang Jun 19 2010 at 1:39

Deligne-Illusie prove degeneration of the Hodge-de Rham spectral sequence for smooth projective varieties using purely algebraic methods. Then probably the corresponding result in the analytic category follows by GAGA or some such thing. - But one has to prove that complex conjugation splits the Hodge filtration. Is that obvious just from knowing the dimensions add up correctly (which is what we'd get from GAGA)? – Boyarsky Jun 15 2010 at 16:21 6 This is great, but hard, and unfortunately seems not an ideal alternate first route into Hodge theory. – Ravi Vakil Jun 15 2010 at 16:52 4 Plus, to prove GAGA you need to know that the cohomology groups of a sheaf over a compact manifold are finite-dimensional. I've only seen this proved as an application of some hardcore functional analysis. – Gunnar Magnusson Jun 16 2010 at 8:08 2 Just to point out the reference: Voisin mentions this in her Hodge Theory book, too, on page 206. She also mentions GAGA there but the complex conjugation property doesn't seem to follow from this (otherwise she would have mentioned that). – Konrad Voelkel Jun 16 2010 at 16:54

The Hodge decomposition can be proved, in a very nice, abstract-functional-analysis setting, on so-called Hilbert complexes. Brüning and Lesch wrote an excellent paper on the topic in J. Funct. Anal., first developing the theory on arbitrary Hilbert complexes, and then discussing the application to elliptic complexes. -
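To indicate how the Fourier-series shortcut mentioned in the first answer works (a sketch with sign conventions chosen here, not Lange-Birkenhake's notation): on the torus $T = \mathbb{R}^n/2\pi\mathbb{Z}^n$, expand a smooth function as $f(x) = \sum_{k\in\mathbb{Z}^n} \hat f(k)\, e^{i\langle k,x\rangle}$, so that $\Delta f = -\sum_k |k|^2 \hat f(k)\, e^{i\langle k,x\rangle}$. The Green's operator simply divides by the symbol on the non-constant modes:

$$Gf = -\sum_{k\neq 0} \frac{\hat f(k)}{|k|^2}\, e^{i\langle k,x\rangle}, \qquad \Delta\, Gf = f - \hat f(0),$$

i.e. $f$ decomposes as its harmonic part (here: its mean value) plus something in the image of $\Delta$. Smoothness of $Gf$ is immediate because $\hat f(k)$ decays faster than any power of $|k|$, so dividing by $|k|^2$ does no harm: this is elliptic regularity made visible directly on the coefficients.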
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.93568354845047, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/05/17/egoroffs-theorem/?like=1&source=post_flair&_wpnonce=2c354f29b9
# The Unapologetic Mathematician

## Egoroff's Theorem

Let's look back at what goes wrong when a sequence of functions doesn't converge uniformly. Let $X$ be the closed unit interval $\left[0,1\right]$, and let $f_n(x)=x^n$. Pointwise, this converges to a function $f$ with $f(x)=0$ for $0\leq x<1$, and $f(1)=1$. This convergence can't be uniform, because the uniform limit of a sequence of continuous functions is continuous. But things only go wrong at the one point, and the singleton $\{1\}$ has measure zero. That is, the sequence $f_n$ converges almost everywhere to the function with constant value $0$.

The convergence still isn't uniform, though, because we still have a problem at $\{1\}$. But if we cut out any open patch and only look at the interval $\left[0,1-\epsilon\right]$, the convergence is uniform. We might think that this is "uniform a.e.", but we have to cut out a set of positive measure to make it work. The set can be as small as we want, but we can't get uniformity by just cutting out $\{1\}$.

However, what we've seen is a general phenomenon expressed in Egoroff's Theorem: If $E\subseteq X$ is a measurable set of finite measure, and if $\{f_n\}$ is a sequence of a.e. finite-valued measurable functions converging a.e. on $E$ to a finite-valued measurable function $f$, then for every $\epsilon>0$ there is a measurable subset $F$ with $\mu(F)<\epsilon$ so that $\{f_n\}$ converges uniformly to $f$ on $E\setminus F$. That is, if we have a.e. convergence we can get to uniform convergence by cutting out an arbitrarily small part of our domain.

First off, we cut out a set of measure zero from $E$ so that $\{f_n\}$ converges pointwise to $f$. Now we define the measurable sets

$\displaystyle E_n^m=\bigcap\limits_{i=n}^\infty\left\{x\in X\bigg\vert\lvert f_i(x)-f(x)\rvert<\frac{1}{m}\right\}$

As $n$ gets bigger, we're taking the intersection of fewer and fewer sets, and so $E_1^m\subseteq E_2^m\subseteq\dots$. Since $\{f_n\}$ converges pointwise to $f$, eventually the difference $\lvert f_i(x)-f(x)\rvert$ gets down below every $\frac{1}{m}$, and so $\lim_nE_n^m\supseteq E$ for every $m$. Since $\mu(E)$ is finite, we conclude that $\lim_n\mu(E\setminus E_n^m)=0$. And so for every $m$ there is an $N(m)$ so that

$\displaystyle\mu(E\setminus E_{N(m)}^m)<\frac{\epsilon}{2^m}$

Now let's define

$\displaystyle F=\bigcup\limits_{m=1}^\infty\left(E\setminus E_{N(m)}^m\right)$

This is a measurable set contained in $E$, and countable subadditivity tells us that

$\displaystyle\mu(F)=\mu\left(\bigcup\limits_{m=1}^\infty\left(E\setminus E_{N(m)}^m\right)\right)\leq\sum\limits_{m=1}^\infty\mu\left(E\setminus E_{N(m)}^m\right)<\sum\limits_{m=1}^\infty\frac{\epsilon}{2^m}=\epsilon$

We can calculate

$\displaystyle E\setminus F=E\cap\bigcap\limits_{m=1}^\infty E_{N(m)}^m$

And so given any $m$ we take $n\geq N(m)$. Then for any $x\in E\setminus F$ we have $x\in E_n^m$, and thus $\lvert f_n(x)-f(x)\rvert<\frac{1}{m}$. Since we can pick this $n$ independently of $x$, the convergence on $E\setminus F$ is uniform.
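As a quick numerical illustration of the opening example (a throwaway Python sketch; the value of $\epsilon$ and the sample values of $n$ are arbitrary choices):

```python
import numpy as np

# Egoroff's theorem in action for f_n(x) = x^n on E = [0,1]:
# removing F = (1 - eps, 1], a set of measure eps, makes the convergence
# uniform, since the sup of x^n over [0, 1 - eps] is (1 - eps)^n -> 0.
eps = 0.01
for n in (10, 100, 1000):
    print(f"n = {n:4d}:  sup |f_n - 0| on [0, 1-eps] = {(1 - eps) ** n:.3e}")

# By contrast, the sup of x^n over [0, 1) equals 1 for every n (let x -> 1),
# so deleting only the measure-zero set {1} can never give uniformity.
x = np.linspace(0, 1, 1_000_001)[:-1]   # a fine grid inside [0, 1)
print((x ** 100).max())                 # stays close to 1
```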
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 51, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.912735641002655, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Compass_and_straightedge
# Compass and straightedge constructions

Creating a regular hexagon with a ruler and compass

Construction of a regular pentagon

Compass-and-straightedge or ruler-and-compass construction is the construction of lengths, angles, and other geometric figures using only an idealized ruler and compass. The idealized ruler, known as a straightedge, is assumed to be infinite in length, to have no markings on it, and to have only one edge. The compass is assumed to collapse when lifted from the page, so may not be directly used to transfer distances. (This is an unimportant restriction, as this may be achieved via the compass equivalence theorem.) More formally, the only permissible constructions are those granted by Euclid's first three postulates. Every point constructible using straightedge and compass may be constructed using compass alone.

A number of ancient problems in plane geometry impose this restriction. The most famous straightedge-and-compass problems have been proven impossible in several cases by Pierre Wantzel, using the mathematical theory of fields. In spite of existing proofs of impossibility, some persist in trying to solve these problems.[1] Many of these problems are easily solvable provided that other geometric transformations are allowed: for example, doubling the cube is possible using geometric constructions, but not possible using straightedge and compass alone.

Mathematician Underwood Dudley has made a sideline of collecting false ruler-and-compass proofs, as well as other work by mathematical cranks, and has collected them into several books.

## Compass and straightedge tools

A compass

The "compass" and "straightedge" of compass and straightedge constructions are idealizations of rulers and compasses in the real world:

• The compass can be opened arbitrarily wide, but (unlike some real compasses) it has no markings on it. Circles can only be drawn using two existing points, which give the centre and a point on the circle. The compass collapses when not used for drawing; it cannot be used to copy a length to another place.
• The straightedge is infinitely long, but it has no markings on it and has only one edge, unlike ordinary rulers. It can only be used to draw a line segment between two points or to extend an existing line.

The modern compass generally does not collapse, and several modern constructions use this feature. It would appear that the modern compass is a "more powerful" instrument than the ancient compass. However, by Proposition 2 of Book 1 of Euclid's Elements, no computational power is lost by using such a collapsing compass; there is no need to transfer a distance from one location to another. Although the proposition is correct, its proofs have a long and checkered history.[2]

Each construction must be exact. "Eyeballing" it (essentially looking at the construction and guessing at its accuracy, or using some form of measurement, such as the units of measure on a ruler) and getting close does not count as a solution.

Each construction must terminate. That is, it must have a finite number of steps, and not be the limit of ever closer approximations.
Stated this way, compass and straightedge constructions appear to be a parlour game, rather than a serious practical problem; but the purpose of the restriction is to ensure that constructions can be proven to be exactly correct, and is thus important both to drafting (design by both CAD software and traditional drafting with pencil, paper, straight-edge and compass) and to the science of weights and measures, in which exact synthesis from reference bodies or materials is extremely important.

One of the chief purposes of Greek mathematics was to find exact constructions for various lengths; for example, the side of a pentagon inscribed in a given circle. The Greeks could not find constructions for three problems:

• Squaring the circle: Drawing a square the same area as a given circle.
• Doubling the cube: Drawing a cube with twice the volume of a given cube.
• Trisecting the angle: Dividing a given angle into three smaller angles all of the same size.

For 2000 years people tried to find constructions within the limits set above, and failed. All three have now been proven under mathematical rules to be impossible in general (angles with certain values can be trisected, but not all possible angles).

## The basic constructions

All compass and straightedge constructions consist of repeated application of five basic constructions using the points, lines and circles that have already been constructed. These are:

• Creating the line through two existing points
• Creating the circle through one point with centre at another point
• Creating the point which is the intersection of two existing, non-parallel lines
• Creating the one or two points in the intersection of a line and a circle (if they intersect)
• Creating the one or two points in the intersection of two circles (if they intersect).

For example, starting with just two distinct points, we can create a line or either of two circles (in turn, using each point as centre and passing through the other point). If we draw both circles, two new points are created at their intersections. Drawing lines between the two original points and one of these new points completes the construction of an equilateral triangle.

Therefore, in any geometric problem we have an initial set of symbols (points and lines), an algorithm, and some results. From this perspective, geometry is equivalent to an axiomatic algebra, replacing its elements by symbols. Probably Gauss first realized this, and used it to prove the impossibility of some constructions; only much later did Hilbert find a complete set of axioms for geometry.

## Constructible points and lengths

Trisecting a segment with ruler and compass.

### Formal proof

There are many different ways to prove something is impossible. A more rigorous proof would be to demarcate the limit of the possible, and show that to solve these problems one must transgress that limit. Much of what can be constructed is covered in intercept theory.

We could associate an algebra to our geometry using a Cartesian coordinate system made of two lines, and represent points of our plane by vectors. Finally we can write these vectors as complex numbers. Using the equations for lines and circles, one can show that the points at which they intersect lie in a quadratic extension of the smallest field F containing two points on the line, the center of the circle, and the radius of the circle. That is, they are of the form $x+y{\sqrt{k}}$, where x, y, and k are in F.
Since the field of constructible points is closed under square roots, it contains all points that can be obtained by a finite sequence of quadratic extensions of the field of complex numbers with rational coefficients. By the above paragraph, one can show that any constructible point can be obtained by such a sequence of extensions. As a corollary of this, one finds that the degree of the minimal polynomial for a constructible point (and therefore of any constructible length) is a power of 2. In particular, any constructible point (or length) is an algebraic number, though not every algebraic number is constructible (i.e. the relationship between constructible lengths and algebraic numbers is not bijective); for example, $\sqrt[3]{2}$ is algebraic but not constructible.

## Constructible angles

There is a bijection between the angles that are constructible and the points that are constructible on any constructible circle. The angles that are constructible form an abelian group under addition modulo 2π (which corresponds to multiplication of the points on the unit circle viewed as complex numbers). The angles that are constructible are exactly those whose tangent (or equivalently, sine or cosine) is constructible as a number. For example, the regular heptadecagon is constructible because

$\cos{\left(\frac{2\pi}{17}\right)} = -\frac{1}{16} \; + \; \frac{1}{16} \sqrt{17} \;+\; \frac{1}{16} \sqrt{34 - 2 \sqrt{17}} \;+\; \frac{1}{8} \sqrt{ 17 + 3 \sqrt{17} - \sqrt{34 - 2 \sqrt{17}} - 2 \sqrt{34 + 2 \sqrt{17}} }$

as discovered by Gauss.[3]

The group of constructible angles is closed under the operation that halves angles (which corresponds to taking square roots). The only angles of finite order that may be constructed starting with two points are those whose order is either a power of two, or a product of a power of two and a set of distinct Fermat primes. In addition there is a dense set of constructible angles of infinite order.

## Compass and straightedge constructions as complex arithmetic

Given a set of points in the Euclidean plane, selecting any one of them to be called 0 and another to be called 1, together with an arbitrary choice of orientation, allows us to consider the points as a set of complex numbers. Given any such interpretation of a set of points as complex numbers, the points constructible using valid compass and straightedge constructions alone are precisely the elements of the smallest field containing the original set of points and closed under the complex conjugate and square root operations (to avoid ambiguity, we can specify the square root with complex argument less than π). The elements of this field are precisely those that may be expressed as a formula in the original points using only the operations of addition, subtraction, multiplication, division, complex conjugate, and square root, which is easily seen to be a countable dense subset of the plane. Each of these six operations corresponds to a simple compass and straightedge construction. From such a formula it is straightforward to produce a construction of the corresponding point by combining the constructions for each of the arithmetic operations. More efficient constructions of a particular set of points correspond to shortcuts in such calculations.

Equivalently (and with no need to arbitrarily choose two points) we can say that, given an arbitrary choice of orientation, a set of points determines a set of complex ratios given by the ratios of the differences between any two pairs of points.
The set of ratios constructible using compass and straightedge from such a set of ratios is precisely the smallest field containing the original ratios and closed under taking complex conjugates and square roots.

For example, the real part, imaginary part and modulus of a point or ratio z (taking one of the two viewpoints above) are constructible, as these may be expressed as

$\mathrm{Re}(z)=\frac{z+\bar z}{2}\;$

$\mathrm{Im}(z)=\frac{z-\bar z}{2i}\;$

$\left | z \right | = \sqrt{z \bar z}.\;$

Doubling the cube and trisection of an angle (except for special angles, such as any φ such that φ/6π is a rational number with denominator the product of a power of two and a set of distinct Fermat primes) require ratios which are the solution to cubic equations, while squaring the circle requires a transcendental ratio. None of these are in the fields described, hence no compass and straightedge construction for these exists.

## Impossible constructions

The following three construction problems, whose origins date from Greek antiquity, were considered impossible in the sense that they could not be solved using only the compass and straightedge. With modern mathematical methods this "consideration" of the Greek mathematicians can be proved to be correct. The problems themselves, however, are solvable, and the Greeks knew how to solve them without the constraint of working only with straightedge and compass.

### Squaring the circle

Main article: Squaring the circle

The most famous of these problems, squaring the circle, otherwise known as the quadrature of the circle, involves constructing a square with the same area as a given circle using only straightedge and compass. Squaring the circle has been proven impossible, as it involves generating a transcendental number, that is, ${\sqrt{\pi}}$. Only certain algebraic numbers can be constructed with ruler and compass alone, namely those constructed from the integers with a finite sequence of operations of addition, subtraction, multiplication, division, and taking square roots. The phrase "squaring the circle" is often used to mean "doing the impossible" for this reason. Without the constraint of requiring solution by ruler and compass alone, the problem is easily solvable by a wide variety of geometric and algebraic means, and has been solved many times in antiquity.

### Doubling the cube

Main article: Doubling the cube

Doubling the cube: using only a straightedge and compass, construct the side of a cube that has twice the volume of a cube with a given side. This is impossible because the cube root of 2, though algebraic, cannot be computed from integers by addition, subtraction, multiplication, division, and taking square roots. This follows because its minimal polynomial over the rationals has degree 3. This construction is possible using a straightedge with two marks on it and a compass.

### Angle trisection

Main article: Angle trisection

Angle trisection: using only a straightedge and a compass, construct an angle that is one-third of a given arbitrary angle. This is impossible in the general case. For example, though the angle of π/3 radians (60°) cannot be trisected, the angle 2π/5 radians (72° = 360°/5) can be trisected. This problem is also easily solved when a straightedge with two marks on it is allowed (a neusis construction).

## Constructing regular polygons

Main article: Constructible polygon

Construction of a square.

Some regular polygons (e.g. a pentagon) are easy to construct with straightedge and compass; others are not.
This led to the question: Is it possible to construct all regular polygons with straightedge and compass? Carl Friedrich Gauss in 1796 showed that a regular n-sided polygon can be constructed with straightedge and compass if the odd prime factors of n are distinct Fermat primes. Gauss conjectured that this condition was also necessary, but he offered no proof of this fact, which was provided by Pierre Wantzel in 1837.[4]

## Constructing with only ruler or only compass

It is possible (according to the Mohr–Mascheroni theorem) to construct anything with just a compass if it can be constructed with a ruler and compass, provided that the given data and the data to be found consist of discrete points (not lines or circles). It is impossible to take a square root with just a ruler, so some things that cannot be constructed with a ruler can be constructed with a compass; but (by the Poncelet–Steiner theorem) given a single circle and its center, they can be constructed.

## Extended constructions

### Markable rulers

Archimedes and Apollonius gave constructions involving the use of a markable ruler. This would permit them, for example, to take a line segment, two lines (or circles), and a point; and then draw a line which passes through the given point and intersects both lines, and such that the distance between the points of intersection equals the given segment. This the Greeks called neusis ("inclination", "tendency" or "verging"), because the new line tends to the point. In this expanded scheme, any distance whose ratio to an existing distance is the solution of a cubic or a quartic equation is constructible. It follows that, if markable rulers and neusis are permitted, the trisection of the angle (see Archimedes' trisection) and the duplication of the cube can be achieved; the quadrature of the circle is still impossible. Some regular polygons, like the heptagon, become constructible; and John H. Conway gives constructions for several of them;[5] but the 11-sided polygon, the hendecagon, is still impossible, as are infinitely many others.

When an angle trisector is also permitted, there is a complete description of all regular polygons which can be constructed, including the above-mentioned regular heptagon, the triskaidecagon (13-gon) and the enneadecagon (19-gon).[6] It is open whether there are infinitely many primes p for which a regular p-gon is constructible with ruler, compass and an angle trisector.

### Origami

Main article: Huzita–Hatori axioms

The mathematical theory of origami is more powerful than compass and straightedge construction. Folds satisfying the Huzita–Hatori axioms can construct exactly the same set of points as the extended constructions using a compass and a marked ruler. Therefore origami can also be used to solve cubic equations (and hence quartic equations), and thus solve two of the classical problems.[7]

### The extension field

In abstract terms, using these more powerful tools of either neusis using a markable ruler or the constructions of origami extends the field of constructible numbers to a larger subfield of the complex numbers, which contains not only the square root, but also the cube roots, of every element. The arithmetic formulae for constructible points described above have analogies in this larger field, allowing formulae that include cube roots as well. The field extension generated by any additional point constructible in this larger field has degree a multiple of a power of two and a power of three, and may be broken into a tower of extensions of degree 2 and 3.
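As a sanity check on the nested-radical formulas appearing above, here is a short numerical verification of Gauss's expression for $\cos(2\pi/17)$ (a throwaway Python script, not part of the article):

```python
from math import cos, pi, sqrt

# Evaluate Gauss's nested-radical formula for cos(2*pi/17) and compare
# it with the trigonometric value; the two agree to double precision.
s = sqrt(17)
a = sqrt(34 - 2 * s)        # this radical appears twice in the formula
b = sqrt(34 + 2 * s)
gauss = (-1 + s + a) / 16 + sqrt(17 + 3 * s - a - 2 * b) / 8
print(gauss, cos(2 * pi / 17))     # both print 0.9324722294...
assert abs(gauss - cos(2 * pi / 17)) < 1e-12
```

Since the expression uses only rational arithmetic and square roots, it sits inside a tower of quadratic extensions of the rationals, which is exactly what constructibility of the heptadecagon requires.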
## Computation of binary digits

In 1998 Simon Plouffe gave a ruler and compass algorithm that can be used to compute binary digits of certain numbers.[8] The algorithm basically involves the repeated doubling of an angle and becomes physically impractical after about 20 binary digits.

## See also

• Constructible number
• Constructible polygon
• Geometrography
• Interactive geometry software may allow the user to create and manipulate ruler-and-compass constructions.
• List of interactive geometry software, most of them show compass and straightedge constructions
• Mohr–Mascheroni theorem
• Poncelet–Steiner theorem

## References

1. Underwood Dudley (1983), "What To Do When the Trisector Comes", The Mathematical Intelligencer 5 (1): 20–25.
2. Godfried Toussaint, "A new look at Euclid's second proposition", The Mathematical Intelligencer 15 (3) (1993), pp. 12–24.
3. Kazarinoff, Nicholas D. (2003). Ruler and the Round. Mineola, N.Y.: Dover. pp. 29–30. ISBN 0-486-42515-0.
4. Wantzel, Pierre (1837). "Recherches sur les moyens de reconnaître si un problème de géométrie peut se résoudre avec la règle et le compas". Journal de Mathématiques Pures et Appliquées 2: 366–372.
5. Conway, John H. and Richard Guy: The Book of Numbers.
6. Gleason, Andrew: "Angle trisection, the heptagon, and the triskaidecagon", Amer. Math. Monthly 95 (1988), no. 3, 185–194.
7. Row, T. Sundara (1966). Geometric Exercises in Paper Folding. New York: Dover.
8. Simon Plouffe (1998). "The Computation of Certain Numbers Using a Ruler and Compass". Journal of Integer Sequences 1. ISSN 1530-7638.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9138073921203613, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/152319/law-of-cosines-distance-formula-proof
# Law of Cosines Distance Formula Proof

So I'm trying to understand a law of cosines proof that involves the distance formula, and I'm having trouble. I've included the proof below from Wikipedia that I'm trying to follow. What I'm having trouble understanding is the way they define the triangle point A. I've always been taught that cosine represents $\frac{adj}{hyp}$, so I'm not sure what cosine represents outside the context of a right triangle. After I understand this I can follow the proof; I'm just trying to understand how $b \cos\gamma,\ b \sin\gamma$ represent the x and y coordinates with a generic angle $\gamma$. Any other resources or advice would be appreciated.

$A = (b \cos\gamma,\ b \sin\gamma),\ B = (a,\ 0),\ \text{and}\ C = (0,\ 0)\,.$

By the distance formula, we have

$c = \sqrt{(a - b \cos\gamma)^2 + (0 - b \sin\gamma)^2}\,.$

Now, we just work with that equation:

$$\begin{align} c^2 & {} = (a - b \cos\gamma)^2 + (- b \sin\gamma)^2 \\ c^2 & {} = a^2 - 2 a b \cos\gamma + b^2 \cos^2 \gamma + b^2 \sin^2 \gamma \\ c^2 & {} = a^2 + b^2 (\sin^2 \gamma + \cos^2 \gamma) - 2 a b \cos\gamma \\ c^2 & {} = a^2 + b^2 - 2 a b \cos\gamma\,. \end{align}$$

- – robjohn♦ Jun 1 '12 at 4:16 – Joe Jun 1 '12 at 4:16 1 I actually think I've figured it out; writing out your question really helps you learn. I think I was just setting up my triangles wrong, and I noticed a right triangle can be formed that makes sense. – Math_Illiterate Jun 1 '12 at 4:22

## 1 Answer

$c$ is the length of the line $AB$, i.e. from $(b \cos\gamma,\ b \sin\gamma)$ to $(a,\ 0)$, which has horizontal component $a - b \cos\gamma$ and vertical component $0 - b \sin\gamma$. So using Pythagoras, $c = \sqrt{(a - b \cos\gamma)^2 + (0 - b \sin\gamma)^2}$. Then expand, and simplify using Pythagoras again in $\sin^2 \gamma + \cos^2 \gamma =1$. -
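A quick numerical check may also help (Python; the sample side lengths and angles are arbitrary choices of mine). It verifies that with $A = (b\cos\gamma, b\sin\gamma)$, $B = (a, 0)$, $C = (0, 0)$, the distance formula reproduces the law of cosines for acute, right, and obtuse angles alike, which is exactly why the unit-circle definitions of sine and cosine, rather than the right-triangle ratios, are the ones used here:

```python
from math import cos, sin, hypot, pi

# C at the origin, B = (a, 0) on the x-axis, A = (b cos g, b sin g):
# the angle g at C can be anything, not just an acute angle.
a, b = 3.0, 2.0
for g in (0.3, pi / 2, 2.5):                 # acute, right, obtuse
    c = hypot(a - b * cos(g), b * sin(g))    # |AB| by the distance formula
    print(c * c, a * a + b * b - 2 * a * b * cos(g))   # agree to rounding
```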
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.917787492275238, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/71593/deriving-fourier-series-using-complex-numbers-introduction
# Deriving Fourier series using complex numbers - introduction

So this is the follow-up thread to the one I asked before, but you don't need to read the other one for this to make sense. If you want to, read PZZ's answer: link to the thread.

So I know that there exists a basis of $L^2$ which is a set of functions of the form $e^{inx}$. It turns out that this is an orthonormal basis. Also, given any $f \in L^2$, there exists a sequence of complex numbers $(c_n)$ such that $$f = \sum_{n \in \mathbb{Z}}{c_ne^{inx}}$$ It turns out these sequences lie in the vector space $l^2$. What I am confused about is that in my lecture notes, I have the following derivation. Suppose $f = \sum_{n \in \mathbb{Z}}{c_ne^{inx}}$ is the Fourier series of $f :[-\pi,\pi] \rightarrow \mathbb{C}$. Then by definition: $$\begin{align} c_n &= \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-inx} dx \\ c_{-n} &= \overline{c_n} = \frac{1}{2\pi} \int_{-\pi}^{\pi}f(x)e^{inx} dx \end{align}$$ (The integral formula for $c_{-n}$ holds in general; the identity $c_{-n} = \overline{c_n}$ is valid when $f$ is real-valued.) And so we can rewrite the Fourier series as follows: $$\sum_{n=-\infty}^{\infty}c_n e^{int} = c_0 + \sum_{n=1}^{\infty}\left( c_ne^{int}+c_{-n}e^{-int} \right)$$ And this is where my problem is. If someone could explain how the single sum is equal to a constant plus a sum starting at $n = 1$, it would be appreciated. I am basically looking for a "summary" of what Fourier series are. I've googled countless pdfs and lectures but they all start off with the definition that $$f(t) = c_0 + \sum a_n \cos{nt} + b_n \sin{nt}.$$ I can derive that using my above definition (sub $c_n = a_n + ib_n$) but I'd like to know where it's coming from and what Fourier series actually are. - – Tyler Hilton Oct 11 '11 at 1:25 "If someone could explain how the one single sum series is equal to a constant + sum of series with starting $n = 1$, it would be appreciated." - $\sum\limits_{n=-\infty}^\infty \alpha_n=\left(\sum\limits_{n=-\infty}^{-1} \alpha_n\right)+\alpha_0+\left(\sum\limits_{n=1}^\infty \alpha_n\right)=\alpha_0+\sum\limits_{n=1}^\infty (\alpha_{-n}+\alpha_n)$. – J. M. Oct 11 '11 at 1:37

## 1 Answer

I'm not quite sure what you're asking...but...first, in the world of complex variables trigonometric functions and exponential functions all unify. Thus by moving to complexes we have a nice uniform way of dealing with sine and cosine. If $c_n=a_n+ib_n$, then $$c_ne^{int}+c_{-n}e^{-int} = c_ne^{int}+\bar{c_{n}}e^{-int} =$$ $$(a_n+ib_n)(\cos(nt)+i\sin(nt))+(a_n-ib_n)(\cos(nt)-i\sin(nt)) = 2a_n\cos(nt)-2b_n\sin(nt)$$ So dealing with trig functions or exponentials is just a matter of notation/taste.

Now to address your question as to what these series are for... Recall that an analytic function is equal to its Taylor series: $$f(x) = \sum\limits_{k=0}^\infty \frac{f^{(k)}(a)}{k!}(x-a)^k$$ If we stop this sum, say at $N$, we have $$f(x) \approx \sum\limits_{k=0}^N \frac{f^{(k)}(a)}{k!}(x-a)^k$$ which is a polynomial approximation of $f(x)$. Polynomials are nice. Taylor polynomials give us easily understood approximations of our function.

Now Fourier series do essentially the same thing for periodic functions. If we stop the summation at some $N$, we get a Fourier polynomial. This is a nice approximation made up of "easy" to understand trig functions.

From a theoretic viewpoint, Taylor series are wonderful because you can treat analytic functions sort of like polynomials. Fourier series allow one to treat nice periodic functions sort of like trig functions. -
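To see the "trig analogue of a Taylor polynomial" picture concretely, here is a small numerical sketch (Python/NumPy; the test function $f(x) = x$ and the truncation order are arbitrary choices of mine):

```python
import numpy as np

# Approximate c_n = (1/2pi) * integral of f(x) e^{-inx} dx by a Riemann
# sum on a uniform grid, then form the Fourier partial sum S_N.
x = np.linspace(-np.pi, np.pi, 20_000, endpoint=False)
dx = x[1] - x[0]
f = x                                   # the function being expanded
N = 25
S = np.zeros_like(x, dtype=complex)
for n in range(-N, N + 1):
    c_n = np.sum(f * np.exp(-1j * n * x)) * dx / (2 * np.pi)
    S += c_n * np.exp(1j * n * x)

# f extends 2pi-periodically with a jump at x = pi; away from that jump
# the partial sum is already a decent approximation, improving with N.
interior = np.abs(x) < 2.5
print(np.max(np.abs(S.real[interior] - f[interior])))
```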
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419097304344177, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/93877/list
## Return to Question

### Revision 2: Filled out the definition, as it was incomplete

I have been looking at hyperelliptic curves over an algebraically closed field $k$ of characteristic two, with a view towards finding a basis for the vector space of holomorphic differentials. To do this I have viewed the curves as the function field $k(x,y)$, originally restricting to those defined by $$y^2 - y = f(x), \ f(x)\in k[x].$$ After this, I thought that to generalise to all hyperelliptic curves I should allow $f(x)$ to be any rational function. However, looking at the literature it seems like the definition is instead: A hyperelliptic curve of genus $g$ ($g\geq 1$) is an equation of the form $$y^2 - h(x)\, y = f(x),\qquad f(x), h(x)\in k[x],$$ where the degree of $h(x)$ is at most $g$, and $f(x)$ is a monic polynomial of degree $2g+1$, with no element of $k\times k$ satisfying the equation and both of its partial derivatives. I don't see what was wrong with my initial intuition, so if anyone could tell me, or explain why the definition given is correct, I would be much obliged.

edit - to give fuller definition

### Revision 1: Hyperelliptic curves over characteristic two fields

I have been looking at hyperelliptic curves over an algebraically closed field $k$ of characteristic two, with a view towards finding a basis for the vector space of holomorphic differentials. To do this I have viewed the curves as the function field $k(x,y)$, originally restricting to those defined by $$y^2 - y = f(x), \ f(x)\in k[x].$$ After this, I thought that to generalise to all hyperelliptic curves I should allow $f(x)$ to be any rational function. However, looking at the literature it seems like the definition is instead $$y^2 - g(x) y = f(x),\ f(x),g(x)\in k[x].$$ I don't see what was wrong with my initial intuition, so if anyone could tell me, or explain why the definition given is correct, I would be much obliged.
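A quick computation (my own remark, not part of the question) suggests why the $h(x)$ term cannot be dispensed with in characteristic two. Writing the affine equation as $F(x,y) = y^2 - h(x)\,y - f(x) = 0$, the partial derivatives are

$$\frac{\partial F}{\partial y} = 2y - h(x) = h(x), \qquad \frac{\partial F}{\partial x} = -h'(x)\,y - f'(x),$$

since $2 = 0$ and $-1 = 1$ in $k$. If $h \equiv 0$, then $\partial F/\partial y$ vanishes identically; and because $f$ is monic of odd degree $2g+1$, its derivative $f'$ is a nonzero polynomial of degree $2g \geq 2$, so it has a root $x_0$ in the algebraically closed field $k$, and the point $(x_0, y_0)$ with $y_0^2 = f(x_0)$ is then singular. So a smooth affine model needs $h \neq 0$, and $y^2 - y = f(x)$ is just the special case $h = 1$ (an Artin–Schreier equation).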
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9775058627128601, "perplexity_flag": "head"}
http://www.ams.org/samplings/feature-column/fcarc-cusa
# How Not to Square the Circle

Nicholas of Cusa was attacking a problem dating back to the ancient Greeks. The solution would have made him famous forever...

Tony Phillips, Stony Brook University

### Introduction

In 1965 my late friend and colleague John Stallings wrote "How not to prove the Poincaré conjecture." This work appeared in the Proceedings of the Wisconsin Topology Seminar and is still available on John's Berkeley website. It begins with the declaration: "I have committed the sin of falsely proving Poincaré's Conjecture. But that was in another country; and besides, until now no one has known about it." It continues with the exposition of the main ideas relating the conjecture to statements in algebra, and is certainly what Stephen Miller had in mind when he wrote, in the AMS Notices, after John's death, "His 1960s papers on the 3-dimensional Poincaré Conjecture are both brilliant and hilarious at the same time."

In 1445 Nicholas of Cusa wrote De Geometricis Transmutationibus (On Geometric Transmutations); my account is based on a recent translation into French of all of Nicholas' mathematical works: Nicolas de Cues, Les Écrits mathématiques by Jean-Marie Nicolle (Honoré Champion, Paris, 2007). This was the first of Cusa's writings in which he addressed the problem of squaring the circle. Literally, squaring the circle means devising the straightedge-and-compass construction of a square whose area equals that of a given circle. This means a construction relating a segment of length 1 (the radius of the circle) to a segment of length $\sqrt{\pi}$ (the side of the square). Nicholas's plan was to start from an equilateral triangle and construct an isoperimetric circle; this is the content of the First Premise in De Geometricis Transmutationibus. If the triangle had perimeter 1, the circle would have diameter $1/{\pi}$. Then the composition of two more standard straightedge and compass constructions could start from that diameter and generate first a segment of length $\sqrt{1/\pi}$, and from that one a segment of the reciprocal length $\sqrt{\pi}$.

Examples of straight-edge and compass arithmetic: Left: square root. A segment $AB$ of length $x$ is extended by a segment $BC$ of length 1. Choose one of the points $D$ where the (green) circle with diameter $AC$ intersects the perpendicular through $B$. Then by plane geometry $DB^2 = AB\cdot BC = x$, so $DB$ has length $\sqrt{x}$. Right: reciprocal. The construction starts with a segment $EF$ of length 2 extended by $FG$ of length $\frac{1}{2}$. A circle (green) is constructed with $EG$ as diameter. For any $x$ between $1/2$ and 2, for example $\sqrt{1/\pi}$, a circle (blue) of radius $x$ is drawn with center $F$. Choose one of the intersection points $X$ of the two circles and draw the line through $X$ and $F$. It will intersect the green circle at a second point $Y$; the length $y$ of the segment $FY$ will be the reciprocal of $x$, since by standard plane geometry $XF\cdot FY = EF\cdot FG = 1$. Other circles, lines and points used in the constructions are shown in black.

### Nicholas of Cusa

Nicholas of Cusa (1401-1464) was one of the leading intellectual figures in early 15th-century Europe.
He is often described as a transitional figure between the Middle Ages and the Renaissance, and in fact he was personally involved in one of the great events that mark that transition: Pope Eugene IV sent him to Constantinople in 1437 as part of a delegation to negotiate the participation of the Eastern Orthodox hierarchy in the Council of Florence. They came, with an entourage of distinguished Greek scholars who stayed, and lectured, in Florence, contributing to the surge of interest in humanistic learning which led to the new age.

Nicholas' principal occupations were ecclesiastical politics and administration (he was named Cardinal in 1449) and, relatedly, theology/philosophy. Those were tumultuous times for the Church; Nicholas was at the center both of bitter jurisdictional controversies and of intense disputation about the exact wording of dogmatic texts, where the placement of a comma could assume cosmic importance. In those days philosophy, theology and natural science were closely linked: the physical structure of the universe had deep theological implications. Nicholas' energetic and erudite mind, in a priori meditation, led him to scientific insights that turned out to be prophetic. For example, he understood that the earth, the sun and the moon were objects moving through space; and he rejected the idea that all orbits had to be circular or even that the universe had a center (De Docta Ignorantia, Book II). Here he was a predecessor of Kepler (who referred to him as "divine") and of Giordano Bruno.

Nicholas' interest in mathematics seems to have stemmed from its status as an impregnable logical system. He believed that by testing his philosophical theories in mathematics he could produce convincing evidence of their validity. He outlines the parallelism between geometry and theology in De Circuli Quadratura, dated July 1450. "Transport yourself by assimilation from these mathematics to theology. ... Just as the circle is perfection in a figure, since any perfection of figures is worked into it, its surface contains all the surface of all figures and has nothing in common with all the other figures, but is absolutely one and simple in itself; likewise absolute eternity is the form of all forms ... having nothing in common with any other form. And whatever the figure of the circle therein may be, since it has neither beginning nor end, it has resemblance with eternity ... . ... Likewise, if a triangle wanted to triangulate the circle, or a square to square it and so forth for the other polygons, thus also intellectual nature wants to understand [God]."

### The First Premise and its "proof"

Nicholas of Cusa's First Premise: $a$ is the center of the equilateral triangle $bcd$. "You divide the side $bc$ into four equal parts which you mark $e, f, g$: I assert that, if one extends the line drawn from $a$ to $e$ by its fourth, which gives $ah$, this will be the radius of the circle whose circumference is equal to the three sides of the triangle."

One of the thought schemata Nicholas devised for use in theology was the "coincidence of opposites." Here is how he applied the principle to the proof of his First Premise. The construction involves a parameter, namely the position of the point $e$ on the line $cb$. Nicholas observes that when $e$ is at the midpoint $f$ the length of the segment $ah$ is smaller than the desired radius, and that when $e$ is at $b$ the length is larger.
He applies the principle: ubi est dare magis et minus, quod ibi sit dare aequale (where one can give a greater and a lesser, one can also give an equal; essentially the Intermediate Value Theorem) and concludes correctly that for some intermediate position $x$ the length $ah$ must be exactly equal to that radius, "and that is the point $e$ equidistant between $b$ and $f$." The last statement is made with no justification. The construction is in fact plausible: suppose the sides of the triangle have length 1. Then $ef = \frac{1}{4}$; similarity of triangle $abf$ with a half-equilateral triangle, and the Pythagorean theorem, yield $af = \frac{1}{2\sqrt{3}}$; so $ae = \sqrt{\frac{7}{48}}$, and $ah = (5/4)\sqrt{\frac{7}{48}}$; the First Premise states that $2\pi\cdot ah = 3$, which implies $\pi = \frac{6}{5}\sqrt{\frac{48}{7}} = 3.1423...$ . This value, which Nicholas could have calculated but never mentions explicitly, was within the bounds $[\frac{223}{71}, \frac{22}{7}] = [3.14084... , 3.14285...]$ established by Archimedes. Therefore, until better approximations to $\pi$ were available, there was no way to prove Nicholas's construction wrong, even though there were obvious gaps in his proof.

### Later developments: Things get worse

Nicholas circulated copies of his work among his friends, who included Paolo Toscanelli (1397-1482), a Florentine astronomer and physician. He had been Nicholas' classmate, and they remained good friends for life. Toscanelli wrote back with objections. To us, now, it is clear that there was no way the argument could be repaired. Nicholas' solution was to devise a different, and considerably more complicated, construction.

The diagram for Nicholas of Cusa's second quadrature construction, from Quadratura Circuli, 1450. The construction starts from a triangle $cde$, superimposes an isoperimetric square $ilkm$ and yields $rq$ as the radius of the isoperimetric circle.

Nicholas would have done better to stay with his first construction. The new one was reprinted and minutely analyzed by Regiomontanus (Johannes Müller, 1436-1476), who showed that the implied value for $\pi$ was outside the Archimedean bounds (Nicolle calculates it as 3.154); this is part of a 60-odd page appendix to his De triangulis omnimodis, dated 1464, published in 1533. There Regiomontanus takes up all of Nicholas' constructions one by one and "does the math" (Nunc ad numeros descendendum), using his knowledge of trigonometry to show "that Nicholas' approximations to $\pi$ were --except one-- not even within the limits established by Archimedes," according to Menso Folkerts, who characterizes Regiomontanus as "a gifted student of Archimedes," and Nicholas of Cusa as "an amateur in mathematics." The one exception is presumably the First Premise above.

### The moral of the story

Nicholas of Cusa was attacking a problem dating back to the ancient Greeks. The solution would have made him famous forever, and might even have helped bolster his side in theological disputations. No one knew at the time that squaring the circle is impossible: the proof requires calculus, which was 200 years away; and even then it was not discovered until 1882.
### Later developments: Things get worse

Nicholas circulated copies of his work among his friends, who included Paolo Toscanelli (1397-1482), a Florentine astronomer and physician. Toscanelli had been Nicholas' classmate, and they remained good friends for life. He wrote back with objections. To us, now, it is clear that there was no way the argument could be repaired. Nicholas' solution was to devise a different, and considerably more complicated, construction.

The diagram for Nicholas of Cusa's second quadrature construction, from Quadratura Circuli, 1450. The construction starts from a triangle $cde$, superimposes an isoperimetric square $ilkm$ and yields $rq$ as the radius of the isoperimetric circle.

Nicholas would have done better to stay with his first construction. The new one was reprinted and minutely analyzed by Regiomontanus (Johannes Müller, 1436-1476), who showed that the implied value for $\pi$ was outside the Archimedean bounds (Nicolle calculates it as 3.154); this is part of a 60-odd page appendix to his De triangulis omnimodis, dated 1464, published in 1533. There Regiomontanus takes up all of Nicholas' constructions one by one and "does the math" (Nunc ad numeros descendendum), using his knowledge of trigonometry to show "that Nicholas' approximations to $\pi$ were --except one-- not even within the limits established by Archimedes," according to Menso Folkerts, who characterizes Regiomontanus as "a gifted student of Archimedes," and Nicholas of Cusa as "an amateur in mathematics." The one exception is presumably the First Premise above.

### The moral of the story

Nicholas of Cusa was attacking a problem dating back to the ancient Greeks. The solution would have made him famous forever, and might even have helped bolster his side in theological disputations. No one knew at the time that squaring the circle is impossible: the proof requires calculus, which was 200 years away, and even then the impossibility was not established until 1882.

John Stallings was also attacking a famous problem: 50 years old, a very long time in modern mathematics. In this case the problem was not impossible, but the methods that led to its solution lay far in the future. Richard Hamilton's introduction of the Ricci flow, which led to Grigori Perelman's ultimate victory, came out in 1982, some 17 years after Stallings wrote "How not to prove the Poincaré Conjecture." But Stallings discovered his error by himself, before publishing, whereas Nicholas seems to have believed until the end that he had squared the circle, but perhaps had not been able to find the right argument to substantiate his claim. Here is how Stallings ends his story: "... I was unable to find flaws in my 'proof' for quite a while, even though the error is very obvious. It was a psychological problem, a blindness, an excitement, an inhibition of reasoning by an underlying fear of being wrong. Techniques leading to the abandonment of such inhibitions should be cultivated by every honest mathematician."

### Why circle-squaring is impossible

We will see that any length occurring in a compass-and-straightedge construction starting from length one must be an algebraic number, i.e. it must be a root of a polynomial with integer coefficients. Considerably more intricate is the proof that $\pi$, and therefore $\sqrt{\pi}$, is transcendental, i.e. not algebraic. Some references are given here.

A random compass-straightedge construction: all the coordinates of the vertices produced by the construction are of a special form: they are obtained from 1 by composing a finite number of operations, which can be arithmetic (sum, product, quotient, etc.) or the extraction of a square root. For future use, let's call the set of these numbers S. In this example, the construction starts with the vertices $O = (0,0)$ and $A = (1,0)$; the line they span is the $x$-axis. The circle of center $O$ and of radius $OA$ intersects the circle of center $A$ and of radius $OA$ at $B = (\frac{1}{2}, \frac{\sqrt{3}}{2})$, the $x$-axis at $C = (-1,0)$ and the $y$-axis (the perpendicular bisector of $AC$, constructed as usual by two circles and a line) at $E = (0,1)$. The circle of center $E$ and radius $EA$ intersects the line through $O$ and $B$ at $D = (\frac{\sqrt{3} + \sqrt{7}}{4}, \frac{3+\sqrt{21}}{4})$. The circle of center $A$ and radius $AD$ intersects the $x$-axis at $F = (1 + AD, 0) = (1 + \frac{1}{2}\sqrt{14 + 2\sqrt{21} - 2\sqrt{3} - 2\sqrt{7}}, 0)$.

As the construction continues, the number of nestings of radicals inside radicals tends to rise, but the numbers always have this general form. They are clearly algebraic, since the radicals can be peeled off by repeated squaring and rearranging. In fact these constructible numbers form a special class of algebraic numbers: those that can be reached from the rational numbers by a finite number of quadratic extensions, i.e. by arithmetic operations and taking square roots a finite number of times. To show squaring the circle is impossible, "algebraic" is sufficient; but the other impossibilities (duplicating the cube, trisecting the angle) require this additional information.
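Here is a quick symbolic check of the coordinates of $D$ and of the length $AD$ (an editorial illustration, not part of the original column; it assumes the sympy library):

```python
import sympy as sp

x = sp.symbols('x')
# Intersect the line through O and B (y = sqrt(3)*x) with the circle of
# center E = (0, 1) and radius EA = sqrt(2):
roots = sp.solve(x**2 + (sp.sqrt(3)*x - 1)**2 - 2, x)
xD = max(roots, key=lambda r: r.evalf())     # keep the intersection with x > 0
D = (xD, sp.expand(sp.sqrt(3)*xD))
print(D)                                     # the point D quoted in the text

AD2 = sp.expand((D[0] - 1)**2 + D[1]**2)     # squared distance from A = (1, 0)
print(AD2)                                   # 7/2 + sqrt(21)/2 - sqrt(3)/2 - sqrt(7)/2
print(sp.sqrt(AD2).evalf())                  # 1.8980..., so F = (2.8980..., 0)
```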
• To see why this works in general, note first that if points $P$ and $P'$ have their coordinates in S, then by the Pythagorean theorem their distance $PP' = r$ must also belong to S. So the circle of radius $PP'$ about $P$, say, has the equation $(x-p_1)^2 + (y - p_2)^2 = r^2$. Another circle constructed from two points with coordinates in S will have a similar equation, say $(x-q_1)^2 + (y - q_2)^2 = s^2$. All these coefficients lie in S. The coordinates of the intersection points of the two circles (if they intersect) will be the pairs $(x,y)$ satisfying both equations. From the first equation we can write $y = \pm\sqrt{r^2 -(x - p_1)^2} + p_2$. Substituting this value in the second equation, isolating the radical and squaring yields a polynomial equation in $x$; it looks as if it might have degree 4, but the higher powers cancel and what remains is a quadratic equation with coefficients in S. The quadratic formula involves only arithmetic and a square root, so the solutions it produces will again belong to S. For the intersection of a circle and a line no cancellation is needed; the equation is quadratic; and for the intersection of two lines it is linear.
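To make this bookkeeping concrete, here is a small numerical version of the circle-circle case (an added sketch, not from the original column). Every step is arithmetic plus a square root, which is why the intersection coordinates stay in S:

```python
import math

def circle_circle(p, r, q, s):
    """Intersections of the circles of centers p, q and radii r, s."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    d = math.hypot(dx, dy)                    # distance between the centers
    if d == 0 or d > r + s or d < abs(r - s):
        return []                             # coincident centers or no intersection
    a = (d*d + r*r - s*s) / (2*d)             # offset of the chord midpoint from p
    h = math.sqrt(max(r*r - a*a, 0.0))        # half the length of the common chord
    mx, my = p[0] + a*dx/d, p[1] + a*dy/d     # midpoint of the common chord
    return [(mx - h*dy/d, my + h*dx/d), (mx + h*dy/d, my - h*dx/d)]

# The two unit circles about O = (0,0) and A = (1,0) from the example:
print(circle_circle((0, 0), 1.0, (1, 0), 1.0))
# [(0.5, 0.866...), (0.5, -0.866...)]: the point B and its mirror image
```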
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 98, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9565595984458923, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/22138/simple-model-of-the-solar-system-parameters-accuracy
# Simple model of the solar system. Parameters? Accuracy?

I was thinking of making a simple 2D model of the solar system, with planets moving along ellipses like

$$x(t) = k_x \sin(t + k_t) (\sin(k_\phi) + \cos(k_\phi))$$
$$y(t) = k_y \cos(t + k_t) (\cos(k_\phi) - \sin(k_\phi))$$

and, for earth at least, an angle that some longitude (say the Greenwich Meridian) is facing in the $xy$ plane:

$$d(t) = k_d t + k_e$$

or something equally minimal. Two questions:

• Where can I find the appropriate constants in the most cut-and-paste-able form?
• How long will this kind of model be accurate for? (If I want to use it to vaguely look in the right part of the sky for a particular planet.)

- 2 Search for the word "planetarium". There exist computer models as well as real-world analogues. You have to realize that when more than two bodies are involved in a gravitational solution, only numerical approximations can work, and only for a while. For the solar system I do not think your model will stay true for long. You would be better served by a free planetarium on the web. – anna v Mar 9 '12 at 15:55
1 Well that's no fun ;) – Lucas Mar 9 '12 at 17:00
Your ellipses have the sun at the center, and not at the foci. You should at least use equations that put the sun at a focus of the ellipse and get the angular velocity correct. – Peter Shor Mar 9 '12 at 22:40
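For what it's worth, here is a minimal sketch of the sun-at-a-focus model the comments point toward, solving Kepler's equation by Newton iteration. The orbital elements below are rough illustrative values for Earth, not authoritative constants; tables of Keplerian elements (semi-major axis, eccentricity, period, and so on) are published, e.g., by JPL:

```python
import math

def position(t, a, e, T, t0=0.0):
    """Planet position in its orbital plane at time t, with the sun at a focus.
    a: semi-major axis, e: eccentricity, T: period, t0: perihelion time."""
    M = 2 * math.pi * (t - t0) / T               # mean anomaly
    E = M                                        # solve Kepler: E - e*sin(E) = M
    for _ in range(20):                          # Newton iteration
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1 - e * e) * math.sin(E)
    return x, y

# Rough Earth-like values: a in AU, T in days.
print(position(100.0, a=1.0, e=0.0167, T=365.25))
```

Since planet-planet perturbations are ignored, such a model slowly drifts, but it should stay adequate for roughly locating a planet in the sky for years.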
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9215190410614014, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/2598/discussion-of-the-rovellis-paper-on-the-black-hole-entropy-in-loop-quantum-grav/2919
# Discussion of Rovelli's paper on the black hole entropy in Loop Quantum Gravity

In a recent discussion about black holes, space_cadet provided me with the following paper of Rovelli: Black Hole Entropy from Loop Quantum Gravity, which claims to derive the Bekenstein-Hawking formula for the entropy of the black hole. Parts of his derivation seem strange to me, so I hope someone will be able to clarify them.

All of the computation hangs on the notion of states that are distinguishable by an outside observer. It's not clear to me how one decides which of the states are distinguishable and which are not. Indeed, Rovelli mentions a different paper that assumes a different condition and derives an incorrect formula. It seems to me that Rovelli's concept of distinctness was arrived at either accidentally or a posteriori in order to derive the correct entropy formula. Is the concept of distinguishable states discussed somewhere more carefully?

After this assumption is made, the argument proceeds to count the number of ordered partitions of a given number (representing the area of the black hole), and this count can easily be seen to be exponential by combinatorial arguments, leading to the proportionality of the area and the entropy. But it turns out that the constant of proportionality is wrong (roughly 12 times smaller than the correct B-H constant). Rovelli says that this is because a number of issues were not addressed. A correct computation of the area would also need to take into account the effect of nodes intersecting the horizon. It's not clear to me that addressing this would not spoil the proportionality even further (instead of correcting it). Has a more proper derivation of the black hole entropy been carried out?

- About the proportionality constant, keep in mind that the Rovelli paper is 14 years old, when LQG was still in its infancy. Anyway, I'll get back to you in greater detail in an answer. – user346 Jan 7 '11 at 15:44
@space_cadet: oh, I didn't notice the date, thanks for pointing that out. So I guess all of the problems have been sorted out already and I am looking forward to reading newer papers on the topic :-) – Marek Jan 7 '11 at 17:17
I know that Ashtekar has published something more recently than that Rovelli paper. Maybe later tonight, I'll go and look it up. – Jerry Schirmer Jan 7 '11 at 20:46
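An editorial aside on the counting step mentioned in the question (a toy illustration added here, not part of the thread): the number of ordered partitions, i.e. compositions, of $n$ into positive parts is $2^{n-1}$, so such counts are exponential in $n$. Restricting the allowed part sizes, as the actual spin counting does, changes the base of the exponential but not its exponential character, which is what yields an entropy proportional to the area:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def compositions(n):
    """Number of ordered partitions of n into positive integer parts."""
    return 1 if n == 0 else sum(compositions(n - k) for k in range(1, n + 1))

for n in range(1, 11):
    print(n, compositions(n))     # 1, 2, 4, 8, ...: equal to 2**(n-1)
```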
## 4 Answers

Dear Marek, it has been shown that the paper by Rovelli was invalid for lots of reasons, including those related to yours. First of all, as you hint, it is incorrect to treat the interior and exterior of the black hole asymmetrically, because the location of the event horizon may only be determined a posteriori - after a star collapses. So there's no qualitative difference between the interior and the exterior. It follows that in the "real LQG", there would also be an entropy coming from the interior, which would be volume-extensive. No one has ever shown that this term is absent; the absence is just wishful thinking, so the proportionality law to the surface is just the result of an omission.

However, even if one removes the interior by hand, Rovelli's paper was shown to be incorrect. The numerical constant turned out to be wrong, and newer calculations showed that even with the assumption that the black hole entropy comes from the horizon - which could make the area law for the entropy tautological - the actual calculable entropy is not proportional to the area at all.

The corrections to Rovelli's paper - showing that his neglect of the higher spins etc. was invalid - appeared e.g. in

http://arxiv.org/abs/gr-qc/0407051
http://arxiv.org/abs/gr-qc/0407052

If you're looking for papers that show that it suddenly makes sense, you will be disappointed. Quite on the contrary, it has been shown that none of the early dreams that LQG could produce the right black hole entropy has worked out. This is also particularly self-evident in the case of the quasinormal modes that were hypothesized to know about the "right" unnatural value of the Immirzi parameter - a multiplicative discrepancy in the Rovelli-like calculations. I showed that for the Schwarzschild case the result really contained $\ln(3)/\sqrt{2}$ and similar right things, but we also showed with Andy Neitzke - and with many other people who followed - that the number extracted for other black holes is totally different and excludes the heuristic conjecture. So today, it's known that the relationship supported by the same Immirzi parameter on "both sides" was actually wrong on both sides, not just one.

There is no calculation of an area-extensive entropy in LQG or any other discrete model of quantum gravity, for that matter.

Best wishes, Lubos

- 1 Thank you again Luboš! Indeed, it was my suspicion that LQG actually doesn't work at all for a long time, but that was just based on other people's (you included) reports, so I wanted to get acquainted with LQG myself so that I could see the contradictions first-hand. I am sorry to state that even for a theoretical physics student it hasn't taken more than a few hours (actual time devoted to reading a few basic papers and thinking) to arrive at that conclusion. – Marek Jan 14 '11 at 14:16
@Lubos you have a follower! @Marek aren't you also from the Czech republic? Not that that would bias your opinion in any way, I'm sure :) Anyhow your feelings seem to have shifted radically from what I saw reflected in your earlier questions on LQG and from your reactions to some of my answers. Maybe someday you'll feel less regret over having spent a few hours learning LQG. Fingers crossed ;) – user346 Jan 14 '11 at 18:30
Thanks, space_cadet! Your positive words are appreciated. ;-) By the way, judging by the name Marek, which looks purely Czech - our version of Marc - I would also say that Marek is my countryman, but I honestly don't know. There may be another nation that spells it the same way. – Luboš Motl Jan 14 '11 at 18:57
@space_cadet: sure, things have changed precisely because I learned things I hadn't known before (and I don't regret that at all). Actually, I still like the LQG approach from the mathematical point of view, the connection with spin networks etc. There seem to be some interesting ideas. But if it can be shown (as seems to be the case) that LQG is not a physical theory, then there's probably nothing more to talk about on this site, right? – Marek Jan 14 '11 at 19:18
@space_cadet: as for my origin: almost correct, I am from Slovakia ;-) But I definitely don't follow anyone. Unless by "follow" you mean respecting an answer of an established physicist who has, moreover, backed up his argument with papers :-) – Marek Jan 14 '11 at 19:21

Dear Marek. The distinction between distinguishable and indistinguishable microstates is the following. For an observer outside the BH, two microstates are distinguishable if they can affect the future evolution of the observer differently. Two microstates with a different geometry of the horizon are distinguishable.
Instead, if the geometry differs only inside the horizon, there is no way the outside observer can be affected by the difference. Why is this relevant for the entropy? Because the entropy is a quantity that characterizes the heat exchanges with a system. These exchanges are determined by the number of different distinguishable microstates the system can be in, and not by the total number of states. If a system has a part which is completely isolated, including thermally, then its states are irrelevant for the thermodynamical behavior of the system.

Does this mean that the entropy depends on which observer sees it? Yes, of course, but this is well known. The entropy depends a lot on the observer; for instance, it depends on the macroscopic quantities chosen to describe the system. A system has an entropy only after you specify how you are looking at it, namely which macroscopic quantities you use to describe it. Then the entropy is determined by the number of states with those macroscopic parameters fixed.

Yes, the story of BH entropy in Loop Gravity has evolved a great deal since that paper of mine, and many more things have been understood. I think that the BH counting in LQG is a success, but I also think that the problem is not resolved, and the situation is still perplexing. I am not convinced by the idea that the solution is just fixing a parameter to make it come out right. If anybody is interested in what I think today about the black hole entropy calculations in LQG, the place to look is my very recent review http://fr.arxiv.org/abs/1012.4707, which is written for a large audience, and where I try to assess the state of the field, including the BH entropy problem.

Since Lubos Motl often writes about my work and about LQG, I think I should at least say what I think about his criticisms. What Lubos says about loop quantum gravity is complete nonsense. He does not know the field at all, and just keeps repeating that everything in LQG is wrong, presenting technical arguments that sound informed, but are always empty.

Best to everybody, Carlo Rovelli

- "... the sky opened up and a chorus of angels appeared from the heaven" Welcome to Physics.SE @Carlo :) Some things are best heard from the horse's mouth, so to speak. – user346 Jan 27 '11 at 4:09
On a more serious note, with all due respect, I would suggest an edit to remove your somewhat more personal comments about @Lubos. While, morally, you are entitled to defend your work in the strongest terms possible, I think such personal opinions are not needed to support your answer :) – user346 Jan 27 '11 at 4:13
OK, space_cadet, you convinced me. I have edited away all personal considerations. – Carlo Rovelli Jan 28 '11 at 14:55
Thanks @Carlo. There is indeed a great deal of misinformation on LQG on this site. Hopefully your arrival should change that for the better! – user346 Jan 28 '11 at 16:28

[This was intended as a comment on Lubos' answer above, but grew too big to stay a comment.]

(@Lubos) It is well understood that the horizon is, by definition, a trapping surface. Consequently external observers can gain no information about anything that happens in the interior once the trapping surface is formed. This is not an understanding peculiar to LQG. That is in fact what makes the results of LQG more robust in the end. You state that: "There is no calculation of an area-extensive entropy in LQG or any other discrete model of quantum gravity, for that matter."
An easy counterexample to that statement, for instance, is Srednicki's 1993 PRL "Entropy and Area" (which has 359 citations so far). This paper shows that this entropy-area relation is a very universal aspect of plain old quantum field theory, with no inputs whatsoever from loops or strings. Also, the papers you cite (by Domagala, Lewandowski and Meissner), while they fix an error in Rovelli's work, are not intended to negate the basic procedure of counting states associated with quanta of area, but to reinforce it. So you may hate or love that specific paper by Rovelli, but that does not change the validity of the rest of the vast amount of work done on this topic in LQG. For a comprehensive bibliography I suggest looking up the references in Ashtekar and Lewandowski's 2005 "LQG: Status Report" paper and doing arXiv searches for papers by Alejandro Corichi and collaborators.

The fact that black hole entropy should be determined solely by counting the microscopic surface states of the horizon (and not those of the bulk interior) is something we know from Bekenstein and Hawking's work based on semiclassical QFT. Any microscopic theory, based on loops or strings or whatever, must ultimately yield the same results under coarse graining. LQG does this in a simple and natural way. The key lies in the notion of the area operator, which by itself is a construction natural to, and shared by, any theory of quantum geometry. Rovelli's paper is one of the earliest (with Kirill Krasnov, Baez and Ashtekar being among the other pioneers) to outline the general notion. It is significant for these reasons.

Please allow me to stress that in no way am I trying to cast doubt on your (@Lubos') work with quasinormal modes and such. I have yet to properly understand that calculation, and I also do not claim to have a universal understanding of all the work on black hole entropy, from the loop perspective or otherwise. My hope is simply to refute the notion "that LQG actually doesn't work at all"! This statement is unfounded, and far more evidence than simply noting the error in Rovelli's paper is needed to back up such claims. Needless to say, there are errors in the early papers on quantum mechanics, general relativity and string theory. Do those mistakes imply that any of these frameworks "doesn't work at all"?

Edit: There are some very recent papers which are hopefully big steps toward resolving the black hole entropy question in LQG, and should be of interest to some of the readers here: Detailed black hole state counting in loop quantum gravity (published in PRD) and Statistical description of the black hole degeneracy spectrum.

Edit (v2): There are some persistent misunderstandings, as reflected in the comments, about the nature of the Ashtekar formulation. Let me restate, as I mentioned below, that Ashtekar's variables are nothing more than a canonical transformation which leads to a simpler form of the ADM constraints. There are no assumptions about area quantization and such which go into the picture at this stage. Area and volume quantization is the outgrowth of natural considerations regarding quantum geometry. These were undertaken in the mid-90s, seven or eight years after Ashtekar's original papers. Perhaps the single best and most comprehensive reference for the Ashtekar variables, and more generally the complete framework of canonical quantum gravity, is Thomas Thiemann's habilitation thesis.

- 3 Dear space_cadet, I wasn't making any controversial statement.
Your "counterexample" is not a counterexample because I only spoke about discrete models of quantum gravity and there is nothing discrete whatsoever about Mark Srednicki's paper. It's a standard massless field. ... I don't know what you mean by "reinforcing a procedure etc.". My statement was merely that the result of the procedure disagrees with the value required by gravity. This fact may be obscured but it is a fact: LQG doesn't work. Science is not a business "it doesn't matter". Falsification in science kills a conjecture. – Luboš Motl Jan 14 '11 at 19:02 4 It is not at all obvious that to "quantize" geometry you need quantized versions of area and volume. One reason to have doubts about that notion is that you should try to define things in an operational and gauge-invariant way. It's not clear to me how I would measure tiny areas or volumes of order Planck size. It sounds like a suspiciously local question, and for quite general reasons one should have doubts about sharp definitions of extremely local quantities in theories of quantum gravity. – Matt Reece Jan 14 '11 at 22:43 3 And as a general rule of thumb, when making a statement along the lines of "any theory of quantum gravity must do X," it is useful to ask yourself "does string theory do X?" I'm not aware of any stringy version of an "area operator," and for the reasons alluded to in my last comment I doubt that one exists. – Matt Reece Jan 14 '11 at 22:45 4 LQG was "man-made" - the very Ashtekar field redefinition, trying to argue that a bulk SU(2) gauge field "is the same thing" as a bulk gravity, was derived from the assumption that the areas should be quantized - which they're not. It's a wrong initial guess, a lethal bomb in the very pillars of LQG that can be identified as the culprit of all the contradictions between LQG and gravity. In proper science, like string theory, one makes many fewer arbitrary assumptions - the careful analyses of the theory teach us the answers. – Luboš Motl Jan 20 '11 at 9:24 5 Lubos is obviosuly wrong in saying that "the Ashtekar field redefinition was derived from the assumption that the areas should be quantized". The Ashtekar field definition was made in 1986, almost ten years earlier anybody even thought about area quantization (1994)!! Maybe Lubos thinks that Ashtekar reads the future! – Carlo Rovelli Jan 29 '11 at 6:22 show 13 more comments By the way, the quantization of areas, as explained elsewhere, directly contradict special relativity. If you pick a near null surface in the Minkowski space, even though its coordinate differences may be macroscopic, its proper area can be arbitrarily small (but positive). This is implied by relativity because it is the Lorentz transform of a tiny spacelike (or mixed) area. In LQG, the proper area will be essentially the number of intersections of the area with the spin network - it can clearly never go to zero for near-null surfaces, implying a maximum violation of Lorentz symmetry. – Luboš Motl Jan 20 '11 at 9:27 that is related to?: http://arxiv.org/pdf/gr-qc/0411101v1.pdf ...One such candidate is loop quantum gravity which leads to a discrete structure of the geometry of space. This discreteness can be expected to lead to small-scale corrections of dispersion relations, just as the atomic structure of matter modifies continuum dispersion relations once the wave length becomes comparable to the lattice size. 
There have been several studies already which derive modified dispersion relations motivated from particular properties of loop quantum gravity... ...The difficulty lies in the fact that loop quantum gravity is very successful in providing a completely non-perturbative and background independent quantization of general relativity which makes it harder to re-introduce a background such as Minkowski space over which a perturbation expansion could be performed... -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9498383402824402, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/34154/list
The symmetric square of a genus $2$ curve is the blow-up of a 2-torus (its Jacobian) at one point, namely the point of the canonical divisor class. A nice example for Hilbert schemes.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7743594646453857, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/231979/a-probability-question-about-winning-the-casino
# A probability question about winning at the casino

Assume A has \$10 and he goes to a casino to play a fair game, with a 50 percent chance of winning each round. Each time he bets \$1, and if he wins he gets \$1. He decides to leave the casino once he has won \$5. What is the probability that he will win the \$5?

I have tried the gambler's ruin model, but the answer does not seem quite reasonable. Using that model, if A wants to win \$3 with a capital of \$10, then we can assume the casino has \$3, and the probability of winning the \$3 is $\frac{10}{13}$. This gives the player a higher probability than the casino, which I find counterintuitive. I am not sure whether I am doing this right.

- You mean this is a gambler's ruin problem? – Mathematics Nov 7 '12 at 7:20
If ever there was one. – Did Nov 7 '12 at 7:21
But I don't quite get it: if you consider that the casino has capital 5, then it means that we have probability $\frac{1}{2}$ to win; but if now I want to win 3 dollars, the probability becomes $\frac{7}{10}$. Isn't it quite strange that the winning probability is greater than the casino's? – Mathematics Nov 7 '12 at 7:24
Also, consider if A wants to win \$10 this time: the probability is still $\frac{1}{2}$, the same as for winning \$5, which makes the gambler's ruin model feel strange to me. – Mathematics Nov 7 '12 at 8:00
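As an added check (not part of the original thread), a short simulation reproduces the gambler's-ruin values: in a fair game, the chance of reaching $a+b$ starting from $a$, before hitting $0$, is $\frac{a}{a+b}$.

```python
import random

def p_win(start, target, trials=100_000):
    """Monte Carlo estimate of the chance that a fair +/-1 gambler starting
    at `start` reaches `target` before going broke."""
    wins = 0
    for _ in range(trials):
        money = start
        while 0 < money < target:
            money += random.choice((-1, 1))
        wins += (money == target)
    return wins / trials

print(p_win(10, 13))   # ~ 10/13 = 0.769..., winning $3 with capital $10
print(p_win(10, 15))   # ~ 10/15 = 0.666..., winning $5 with capital $10
```

The simulation confirms $\frac{10}{13}$ and gives $\frac{2}{3}$ (not $\frac{1}{2}$) for winning \$5; the player's edge comes from risking a capital larger than the amount he hopes to win.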
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9639137983322144, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/142497-separation-variables-pdes.html
# Thread:

1. ## Separation of Variables for PDEs

A couple of problems:

1. $yX'Y'+XY=0$
2. $yX'Y+xXY'=0$

where we are searching for $u(x,y)=X(x)Y(y)$. What's throwing me off is the little $x$ and $y$ in these. I have no problem separating something like $X'Y+3XY'=0$, for example. Thanks!

2. For the first one, divide by $X' Y$ and rearrange to get $\frac{X'}{X}=-\frac{Y}{yY'}=\lambda$. Now they're separated. For the second one, divide by what? $xX$, maybe $xY'$, $xyY'$; keep trying different combinations until you separate the variables.

3. I know HOW to separate them. I don't know where to go from there, because I don't know how to deal with both little $x$/$y$ and big $X$/$Y$.

4. Little $x$ and $y$ are variables and big $X$ and $Y$ are functions. So take for example the first part of the first one: $-\frac{Y}{yY'}=\lambda$. Rearranging, I get $Y'+\frac{1}{\lambda y} Y=0$, which you can solve for $Y(y)$ by finding an integrating factor. Same goes for the other ones in $X$.

5. Thanks... care to walk me through it? This isn't a HW problem; I'm preparing for an exam, so I need to see how it's done. I had about a 2-year break from calculus, so there's some basic stuff I forget how to do (like integrating factors).

6. Can I ask a question about the first one? Originally Posted by brisbane: 1. $yX'Y'+XY=0$. Did you really mean $yX'Y'+XY=0$ or $yX'Y+XY'=0$?

7. No, the problem is correct as written. Please, just a clear walkthrough? I'm pretty sure this is a straightforward problem.

8. If $y X' Y' + X Y = 0$ then $\frac{y Y'}{Y} = - \frac{X}{X'}$. Since the LHS is only a function of $y$ and the RHS only a function of $x$, each must be constant. Thus, $\frac{y Y'}{Y} = - \frac{X}{X'} = \lambda$.

1) $\frac{y Y'}{Y} = \lambda$. Separate: $\frac{dY}{Y} = \frac{\lambda \, dy}{y}$, so $\ln Y = \lambda \ln y + \ln c_1$. Thus, $Y = c_1 y^{\lambda}$.

2) $- \frac{X}{X'} = \lambda$. Separate: $\frac{dX}{X} = - \frac{dx}{\lambda}$, so $\ln X = - \frac{1}{\lambda} x + \ln c_2$. Thus, $X = c_2 e^{-x/\lambda}$.

Then multiply $X Y$ and combine your constants into a single constant.

9. (Quoting Danny's solution above.) You can also let $Y = y^r$ and substitute into the equation to find $r$. Whatever you find best!
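To tie the thread together, here is a small symbolic check (an addition, assuming the sympy library, and reading problem 1 as the PDE $y\,u_{xy} + u = 0$ for $u = X(x)Y(y)$):

```python
import sympy as sp

x, y, lam, c = sp.symbols('x y lambda c', positive=True)

u = c * y**lam * sp.exp(-x/lam)      # u = X(x)*Y(y) with the factors found above
pde = y * sp.diff(u, x, y) + u       # y*X'*Y' + X*Y = 0 reads y*u_xy + u = 0
print(sp.simplify(pde))              # prints 0: the separated solution works
```

The same check with $u = c\,e^{\lambda(x^2 - y^2)/2}$ verifies problem 2, read as $y\,u_x + x\,u_y = 0$.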
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 46, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335631728172302, "perplexity_flag": "head"}