http://mathoverflow.net/questions/101282/trees-on-omega
Trees on $\omega$

I am going to give a construction of a tree on $\omega$ that at first appears as though it is well founded. However, this tree cannot be well founded because, using the rank function on finite sequences from $\omega$ into the ordinals, $\phi_T(u) = \sup\{\phi_T(u^\frown\langle x\rangle)+1 \mid u^\frown\langle x\rangle \in T\}$, we get $\phi_T(\emptyset)=\omega^\omega$. Because $\phi_T$ is onto $\omega^\omega$ we get that $T$ is of size continuum, which is impossible. The construction proceeds as follows:

1) Create a tree $T_0$ s.t. $\phi_{T_0}(\emptyset)=\omega$ by having a branch of length $n$ for all $n\in\omega$. Note $T_0$ is well founded.

2) Create a tree $T_1$ s.t. $\phi_{T_1}(\emptyset)=\omega+\omega$ by having level 1 branches $u_i$ s.t. $\phi(u_i)=\omega+i$. Note $T_1$ is well founded (each $T[u_i]=\{v\in T \mid v \text{ is compatible with } u_i\}$ is well founded).

3) Similarly create trees $T_n$ s.t. $\phi_{T_n}(\emptyset)=\omega\cdot n$.

4) Create a tree $T_\omega$ with level 1 branches $T_n$ so that $\phi_{T_\omega}(\emptyset)=\omega\cdot\omega$. Note that $T_\omega$ is well founded because each of the branches is well founded.

5) Similarly create trees $T_{\omega\cdot n}$ s.t. $\phi_{T_{\omega\cdot n}}(\emptyset)=\omega^n$.

6) Finally, using the $T_{\omega\cdot n}$ as level 1 branches, make a tree $T$ s.t. $\phi_T(\emptyset)=\omega^\omega$. It would seem that this $T$ is well founded (each $T_{\omega\cdot n}$ is well founded). However this is impossible because the set of finite sequences from $\omega$ to $\omega$ is countable.

Is it the case that all of the infinite branches of this tree are undefinable? Am I missing something? Is it not the case that $\sup\{\omega^n \mid n\in\omega\}$ is $\omega^\omega$?

- 2 You are confusing ordinal and cardinal exponentiation. The supremum of the ordinals $\omega^n$ is a countable ordinal, $\omega^\omega$ in the sense of ordinal exponentiation. Unfortunately, the same notation is sometimes used for cardinal exponentiation, and the cardinal exponential $\omega^\omega$ has the cardinality of the continuum. – Andreas Blass Jul 4 at 5:41

1 Answer

You are considering ordinal arithmetic. There $\omega^{\omega}$ is the limit of the $\omega^{n}$'s, so it is a countable ordinal.
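The rank function in the question is easy to experiment with on finite well-founded trees. Here is a small illustrative sketch (Python, finite trees only, so `max` stands in for the transfinite supremum; the tree encoding as a prefix-closed set of tuples is just one convenient choice):

```python
def rank(tree, node=()):
    """Rank of a node in a finite well-founded tree.

    The tree is a set of tuples (finite sequences) closed under prefixes;
    the children of u are the one-step extensions of u that lie in the tree.
    rank(u) = sup{ rank(child) + 1 }, with sup of the empty set equal to 0.
    For finite trees the supremum is just a max; for the trees T_0, T_1, ...
    in the question the values become infinite ordinals."""
    children = [t for t in tree
                if len(t) == len(node) + 1 and t[:len(node)] == node]
    if not children:
        return 0
    return max(rank(tree, c) + 1 for c in children)

# T_0 truncated to branches of length 1..4: the root has rank 4.
# (The full T_0, with a branch of every finite length, has rank omega.)
T0_truncated = {(i,) * k for i in range(1, 5) for k in range(1, i + 1)}
T0_truncated.add(())          # include the root
print(rank(T0_truncated))     # -> 4
```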
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9261390566825867, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?s=0fafef472dd170ba7824385878d2ba30&p=3849559
Physics Forums

## What is supersymmetry

I've read in the Wikipedia supersymmetry page that supersymmetry is a symmetry which relates fermions to bosons. Can we say that this means there is a one-to-one correspondence between fermions and bosons, and that we can substitute a fermion with its corresponding boson and vice versa to get a similar interaction? Thanks.

Recognitions: Science Advisor

Supersymmetry is an extension of the standard model which says that for each fundamental particle that we know of, there exists a corresponding supersymmetric particle. Specifically, for each fundamental fermion there is a supersymmetric boson, while for each fundamental boson there is a supersymmetric fermion. To date no such particles have been found. Examples: the electron's partner is called the selectron, while the photon's partner is called the photino.

Quote by mathman
Supersymmetry is an extension of the standard model which says that for each fundamental particle that we know of, there exists a corresponding supersymmetric particle. Specifically, for each fundamental fermion there is a supersymmetric boson, while for each fundamental boson there is a supersymmetric fermion.

To elaborate just a little, if the universe were exactly supersymmetric you could perform this switcharoo and swap each fundamental particle with its superpartner and the universe would stay exactly the same. However there are many experimental reasons why this cannot be the case, so the solution is to assume the world is only supersymmetric at very high energies, and that there is a supersymmetry-breaking mechanism which gives all the superpartners a large mass at low energies so that they would not have been seen in any current experiments.

Is the symmetry breaking which causes the electromagnetic and weak forces to emerge from the electroweak force the same thing as you mentioned, kurros? Thanks to both.

Quote by Shyan
Is the symmetry breaking which causes the electromagnetic and weak forces to emerge from the electroweak force the same thing as you mentioned, kurros? Thanks to both.

No, it is a separate thing, although somewhat similar in that there is a symmetry of the fundamental theory which needs to be broken. The electroweak unification scale is around the terascale, i.e. 10^3 GeV, whereas the scale at which SUSY breaking is often theorised to occur is at something more like the grand unification scale, way up at 10^19 GeV or so usually.

So up to now, considering GUTs: after the big bang, our universe is supersymmetric until it cools down to $10^{19} \ GeV$. At this energy, supersymmetry breaks and as a result the electroweak and strong forces become distinct. Then when the energy reaches $10^3 \ GeV$, the electroweak force divides into the electromagnetic and weak forces and we have three fundamental interactions. Right?

Something like that, as far as I know. I am not an expert on GUTs though :). Actually I think the breaking of the GUT group is generally separate from the SUSY breaking (the SUSY breaking occurring at an even higher scale I think), but I am not very familiar with it, and there are various different GUTs that all do things a bit differently. Actually, there is a nice connection between the SUSY breaking and electroweak symmetry breaking that I neglected to mention.
In the Standard Model there is a parameter in the Higgs potential, often called $\mu^2$, which gives the potential a non-zero minimum if it is negative, and the fact that the minimum is non-zero is what causes electroweak symmetry breaking to occur. In SUSY models it is possible for this parameter to start out positive at high scales (so the Higgs potential has no non-zero minimum and no symmetry breaking is possible), and then, due to radiative corrections that arise from the existence of the massive superpartners, this parameter can be dynamically flipped to negative as you go down in energy scale, pushing the minimum of the Higgs potential out to non-zero values and triggering electroweak symmetry breaking. This is a pretty cool thing to have happen. It is called radiative electroweak symmetry breaking, or "REWSB" if you want to read more about it.
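For readers who want the statement about the sign of $\mu^2$ spelled out, here is the standard schematic picture, written for a single real scalar field rather than the full Higgs doublet, so the coefficients are only indicative. The tree-level potential is
$$V(\phi) = \mu^2\phi^2 + \lambda\phi^4, \qquad \lambda > 0.$$
If $\mu^2 > 0$ the only minimum is at $\phi = 0$ and no symmetry breaking occurs. If $\mu^2 < 0$, solving $V'(\phi) = 2\mu^2\phi + 4\lambda\phi^3 = 0$ gives non-zero minima at $\phi = \pm\sqrt{-\mu^2/(2\lambda)}$, so the field acquires a vacuum expectation value and the symmetry is broken. REWSB is the observation that renormalization-group running can take $\mu^2$ from positive at a high scale to negative at low scales, turning the first situation into the second.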
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9203683137893677, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/8134/existence-of-projective-resolutions-in-abelian-categories/8138
## Existence of projective resolutions in abelian categories

It is a standard result of elementary homological algebra that every $R$-module $A$ has a projective resolution. It is often said that the category of $R$-modules has "enough projectives." In which other categories is this also true? In particular, is it true for abelian categories?

-

## 3 Answers

Among the standard examples of abelian categories without enough projectives, there are

1. the categories of sheaves of abelian groups on a topological space (as VA said), or sheaves of modules over a ringed space, or quasi-coherent sheaves on a non-affine scheme;
2. the categories of comodules over a coalgebra or coring.

No abelian category where the functors of infinite product are not exact can have enough projectives. In Grothendieck categories (i.e. abelian categories with exact functors of small filtered colimits and a set of generators) there are always enough injectives, but possibly not enough projectives.

Among the standard examples of abelian categories with enough projectives, there are

1. the category of functors from a small category to an abelian category with enough projectives (as VA said), or the category of additive functors from any small additive category to the category of abelian groups (this class of examples includes the categories of modules over any rings);
2. the category of pseudo-compact modules over a pseudo-compact ring (see Gabriel's dissertation);
3. the category of contramodules over a coalgebra or coring (see Eilenberg-Moore, "Foundations of relative homological algebra").

-

Way too optimistic: in many abelian categories there are not enough projectives (and in the dual category there are not enough injectives). The most standard example is sheaves of abelian groups on a topological space X. For most X, this category does not have enough projectives. See for example this question where this was discussed: http://mathoverflow.net/questions/5378/when-are-there-enough-projective-sheaves-on-a-space-x/5470#5470

On the positive side: if A has enough projectives and I is a small category then the category $A^I$ (i.e. the category of functors $F:I\to A$) has enough projectives (assuming arbitrary sums exist in A). In particular, the category of complexes in A has enough projectives (taking $I=\mathbb Z$ with arrows $d_n:n\to n+1$ satisfying $d_{n+1}\circ d_n=0$). See this question: http://mathoverflow.net/questions/6776/how-to-construct-pair-of-adjoint-functors-from-category-a-to-category-adcategor/6836#6836

All of these are abelian categories.

- ... assuming A is abelian of course. – VA Dec 7 2009 at 22:40

A simple example of an abelian category that does not have enough projectives is the category of finite abelian groups. In fact, it contains neither non-trivial projective objects nor non-trivial injective objects.
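To make the last example concrete, here is a short check that even $\mathbb{Z}/p$ fails to be projective (or injective) in the category of finite abelian groups; this is standard and not specific to the thread. Consider the quotient map $\pi:\mathbb{Z}/p^2 \to \mathbb{Z}/p$. If $\mathbb{Z}/p$ were projective, the identity map $\mathrm{id}:\mathbb{Z}/p\to\mathbb{Z}/p$ would lift to some $f:\mathbb{Z}/p\to\mathbb{Z}/p^2$ with $\pi\circ f=\mathrm{id}$. But every homomorphism $\mathbb{Z}/p\to\mathbb{Z}/p^2$ lands in the unique subgroup of order $p$, namely $p\,\mathbb{Z}/p^2=\ker\pi$, so $\pi\circ f=0\neq\mathrm{id}$. Hence $\mathbb{Z}/p$ is not projective; a similar argument applied to the inclusion $\mathbb{Z}/p\hookrightarrow\mathbb{Z}/p^2$ (any map $\mathbb{Z}/p^2\to\mathbb{Z}/p$ kills the subgroup of order $p$) shows it is not injective either.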
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8744308948516846, "perplexity_flag": "head"}
http://mathoverflow.net/questions/9754/magic-trick-based-on-deep-mathematics/19911
## Magic trick based on deep mathematics

I am interested in magic tricks whose explanation requires deep mathematics. The trick should be one that would actually appeal to a layman. An example is the following: the magician asks Alice to choose two integers between 1 and 50 and add them. Then add the largest two of the three integers at hand. Then add the largest two again. Repeat this around ten times. Alice tells the magician her final number $n$. The magician then tells Alice the next number. This is done by computing $(1.61803398\cdots) n$ and rounding to the nearest integer. The explanation is beyond the comprehension of a random mathematical layman, but for a mathematician it is not very deep. Can anyone do better?

- 6 Please make this community wiki? – Theo Johnson-Freyd Dec 25 2009 at 22:47
28 I am informed that Persi Diaconis is the correct person to answer this question. – Sam Nead Dec 26 2009 at 0:09
15 I have discussed this question with Persi. He could not come up with anything significant (though he did not think about it very long). – Richard Stanley Dec 26 2009 at 16:30
10 I've also heard Persi talk about this subject, and my guess is that he would say that the requirements of "deep mathematics" and "would actually appeal to a layman" are nearly incompatible in practice. – Mark Meckes Dec 27 2009 at 13:54
2 I don't think they should be incompatible: the deep mathematics are the reason the trick works; you don't have to understand them to be stunned by the trick! – Sam Derbyshire Jan 17 2010 at 17:06

## 42 Answers

"The best card trick", an article by Michael Kleber. Here is the opening paragraph: "You, my friend, are about to witness the best card trick there is. Here, take this ordinary deck of cards, and draw a hand of five cards from it. Choose them deliberately or randomly, whichever you prefer--but do not show them to me! Show them instead to my lovely assistant, who will now give me four of them: the 7 of spades, then the Q of hearts, the 8 of clubs, the 3 of diamonds. There is one card left in your hand, known only to you and my assistant. And the hidden card, my friend, is the K of clubs."

- 12 Martin Gardner gave an interesting variant of this in one of his books, where the volunteer also gets to choose which 4 of the 5 cards the assistant hands to the mathematician. This seems like only 4!=24 pieces of information to convey one of 48 cards: the extra bit is whether the assistant passes the cards right side up or upside down. – David Speyer Feb 2 2010 at 13:19
3 That's a really nifty trick... I wonder if I can find an "assistant" to help me run this soon! – Gwyn Whieldon May 16 2010 at 7:48

This was fascinating for me. Somehow the man takes a bagel and with one cut arrives at two pieces that are interlocked. Whether this qualifies as "magic" I dunno (it's hard to say once the trick's been explained), but it sure seems like it to me. It doesn't hurt that I love bagels, and have the opportunity to perform this with friends/family/non-math people and can teach a little about problems/topology/counter-intuitive facts about the universe.
- 8 I was amused by the connected bagel that resulted when my friend cut along a Mobius strip instead of a full-twisted strip. – Elizabeth S. Q. Goodman Feb 11 2010 at 6:50

Five unrelated items:

## Mobius strip

One of the best mathematical tricks is what happens when you cut a Mobius strip in the middle. (Look here) (And what happens when you cut it again, and when you cut it not in the middle.) This is truly mind boggling and magicians use it in their acts. And it reflects deep mathematics.

## Diaconis mind reading trick

I also heard from Mark Goresky this description of a mathematically based card game: "Mark described a card trick of Diaconis where he takes a deck of cards, gives it to a person at the end of the room, lets this person “cut” the deck and replace the two parts, then asks many other people to do the same, and then asks people to take one card each from the deck. Next Diaconis tries to read the minds of the five people with the last cards by asking them to concentrate on the cards they have. To help him a little against noise coming from other minds he asks those with black cards to step forward. Then he guesses the cards each of the five people have. Mark said that Diaconis likes to perform this magic with a crowd of magicians since it violates the basic rule: “never let the cards out of your control”. This trick is performed (with a reduced deck of 32 cards) based on a simple linear feedback shift register. Since all the operations of cutting and pasting amount to cyclic permutations, the 5 red/black bits are enough to tell the cyclic shift and no genuine mind reading is required." I think there is a paper by Goresky and Klapper about a version of this magic and relations to shift registers.

## The Link Illusion

I heard a wonderful magic trick from Nahva De Shalit. You tie a string between the two hands of two people and link the two strings. The task is to get unlinked. This ties in with what I heard from Eric Demaine about the main principle behind many puzzles (some of which he manufactured with his father when he was six!).

## Symmetry Illusion

Sometimes things are not as symmetric as they may look.

## commutators-based magic

(I heard this from Eric Demaine and from Shahar Mozes.) If we hang a picture (or boxing gloves) with one nail, once the nail falls so does the picture. If we use two nails then ordinarily if one nail falls the picture still hangs there. Mathematics can come to the rescue for the following important task: use five nails so that if any one nail falls so does the picture.

- 4 They stand up if they have a red card and stay seated if they have a black card. That'll tell him (for example) that he's looking at the sequence of cards corresponding to 01100 or 11001, etc. He has the cyclic order of the cards he handed out memorized, and just reads off the corresponding card names. – Gwyn Whieldon May 16 2010 at 0:31

-

I saw this trick demonstrated at a math camp once. When it works, it is extremely impressive to non-mathematicians and mathematicians alike. Have a volunteer shuffle a deck of cards, select a card, show it to the audience, and shuffle it back into the deck. Take the deck from him, and fling all of the cards into the air. Grab one as it falls, and ask the volunteer if it is his card. 1 in 52 times (this is the deep mathematics part), the card you grab will be the card the volunteer selected. Even most statisticians should be amazed at this feat.
Just make sure you never perform this trick twice to the same audience. - 1 This is brilliant! – Qiaochu Yuan Jun 15 2010 at 10:15 12 xkcd.com/628 – Ryan Reich Nov 7 2011 at 20:34 The following trick uses some relatively deep mathematics, namely cluster algebras. It will probably impress (some) mathematicians, but not very many laypeople. Draw a triangular grid and place 1s in some two rows, like the following except you may vary the distance between the 1s: ````1 1 1 1 1 1 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1 1 1 1 1 1 ```` Now choose some path from the top row of 1s to the bottom row and fill it in with 1s also, like so: ````1 1 1 1 1 1 1 1 . . . . . . 1 . . . . . 1 . . . . . . 1 . . . . . . 1 . . . . 1 1 1 1 1 1 1 ```` Finally, fill in all of the entries of the grid with a number such that for every 2 by 2 "subsquare" ```` b a d c ```` the condition $ad-bc=1$ is satisfied, or equivalently, that $d=\frac{bc+1}{a}$. You can easily do this locally, filling in one forced entry after another. For example, one might get the following: ````1 1 1 1 1 1 1 1 2 3 2 2 1 . 1 5 5 3 1 . 1 2 8 7 1 . . 1 3 11 2 1 . . 1 4 3 1 . 1 1 1 1 1 1 1 ```` The "trick" is that every entry is an integer, and that the pattern of 1s quickly repeats, except upside-down. If you were to continue to the right (and left), then you would have an infinite repeating pattern. This should seem at least a bit surprising at first because you sometimes divide some fairly large numbers, e.g. $\frac{5\cdot 11+1}{8} = 7$ or $\frac{7\cdot 3+1}{11} = 2$ in the above picture. Of course, the larger the grid you made initially, the larger the numbers will be, and the more surprising the exact division will be. Incidentally, if anyone can provide a reference as to why this all works, I'd love to see it. I managed to prove that all of the entries are integers, and that they're bounded, and so there will eventually be repetition. However, the repetition distance is actually a simple function of the distance between the two rows of 1, which I can't prove. - 2 The fact you are looking for is that, in a finite type cluster algebra, if you mutate each vertex once in the order given by the orientation of the Dynkin diagram, the resulting operation has period h+2, where h is the Coxeter number. See front.math.ucdavis.edu/0111.3053 – David Speyer Feb 2 2010 at 13:28 6 I'm not sure what level you want an answer on. My preferred proof for this particular case is to notice that the numbers you are getting are the Plucker coordinates of a point in G(2,n), and that presentation makes it obvious that they will be periodic modulo n. – David Speyer Feb 2 2010 at 13:31 show 1 more comment A late addition: The Fold and One-Cut Theorem. Any straight-line drawing on a sheet of paper may be folded flat so that, with one straight scissors cut right through the paper, exactly the drawing falls out, and nothing else. Houdini's 1922 book Paper Magic includes instructions on how to cut out a 5-point star with one cut. Martin Gardner posed the general question in his Scientific American column in 1960. For the proof, see Chapter 17 of Geometric Folding Algorithms: Linkages, Origami, Polyhedra. We include instructions for cutting out a turtle, which, in my experience, draws a gasp from the audience. :-) - Persi Diaconis and Ron Graham just published Magical Mathematics. The book contains a plethora of magic tricks rooted in deep mathematics. 
- 2 Dear Sami, Welcome to MO – Gil Kalai Nov 7 2011 at 19:42 The coffee mug trick Give a coffee mug (full if you're brave) to someone and ask them to rotate 360 degrees without spilling the (real or imaginary) coffee, so that their hand ends up in the same position. This is impossible, so you get to smirk while they contort themselves and become more and more baffled (this works better with more than one person since it turns into a kind of "competition") Finally, take the cup and show that while it's impossible to turn it once (as has been "proven"), it's possible to turn it twice (!) and end up in the same position. Has to do with the fundamental group of SO(3) being $\mathbb{Z}/2\mathbb{Z}$, and when we require the cup to stay upright we end with a non-trivial loop. - 1 Sometimes called the "plate trick" or the "belt trick". – Sam Nead Oct 11 2010 at 8:54 1 And then, you pretend that the mug is an electron and your arm tracks its spin. – Elizabeth S. Q. Goodman Nov 8 2011 at 7:04 1 An easier solution than the plate trick: Stand up, pick up the mug, and walk in a circle around it. – Sam Nead Apr 7 2012 at 14:44 This trick exploits the thinness of coins. http://www.howtodotricks.com/easy-coin-magic-trick.html - 1 I remember this - a nice trick! Very similar to betting someone "I can walk through this piece of paper." [Holding up a piece of letter paper]. So this is a real mathematical trick; it uses the interaction between length and area. Is there a way to make a deeper version? – Sam Nead Jan 17 2010 at 14:38 You can use hamming codes to guess a number with lying allowed. For example, here is a way to guess a number 0-15 with 7 yes-or-no questions, and the person being questioned is allowed to lie once. (The full cards are here). - Here is a card trick from Edwin Connell's Elements of Abstract and Linear Algebra, page 18 (it can be found online). I always do this trick to my undergraduate number theory class in the first minutes of the first day. A few weeks later, after they've learned some modular arithmetic, we come back to the trick to see why it works. I quote from Connell: "Ask friends to pick out seven cards from a deck and then to select one to look at without showing it to you. Take the six cards face down in your left hand and the selected card in your right hand, and announce you will place the selected card in with the other six, but they are not to know where. Put your hands behind your back and place the selected card on top, and bring the seven cards in front in your left hand. Ask your friends to give you a number between one and seven (not allowing one). Suppose they say three. You move the top card to the bottom, then the second card to the bottom, and then you turn over the third card, leaving it face up on top. Then repeat the process, moving the top two cards to the bottom and turning the third card face up on top. Continue until there is only one card face down, and this will be the selected card." When I do this trick, I always use big magician's cards (much easier for an audience to see), but a regular deck works too. To get to the trick faster, I skip the first part and just pick 7 cards myself, showing them all the cards so they see nothing is funny (like two ace of spades or something). I then spread the cards in one hand face-down and let a student pick one and show it to everyone else but me before I take it back face down. 
When the student is showing the cards to the class I move the rest of the cards behind me so that before I get the card back I already have the rest behind my back. You need to make sure students at the side of the room won't be able to see what you're doing behind your back (namely, putting the mystery card on the top of the deck), so stand close to the board. Practice this with yourself many times first to be sure you can do it without screwing up. The hard part is remembering to keep the last card you reached in the count on the top of the deck; that same card will be used when you start the count in the next round. If you stick it on the bottom before counting off cards again then you'll mess everything up. For instance, if someone picks the number 3 then I start counting from the top of the deck and say (with hand movements in brackets) "One [put it under], two [put it under], three [turn it over, put it on top FACE UP and stop]. This [show face-up card to everyone] is not your card. [Put it back face-up on top] One [now put it under], two [put it under], three [turn over and put on top FACE UP and stop]. This etc. etc." Connell advises telling people to pick up a number from 1 to 7 but not allow 1. In practice there's no need to tell people not to pick 1. They never do (it's never happened to me). They don't pick 7 either. And if they did pick 1, well, just turn over the top card and you're done! Again, that never really happens. - Here's an example of a magic trick that works with high probability, based on a careful analysis of the riffle shuffle, in which an audience member performs a number of riffle shuffles and then moves a single card, and the magician guesses which card has been moved. - Audience asked to choose an integer from 0 to 1000. Ask to give remainder when divided by 7, 11, and 13 respectively. Magician gives original integer by Chinese Remainder Theorem. Works because 7×11×13=1001. - 1 You mean 13, not 3. 13x7x11 = 1001. – Jason DeVito Dec 26 2009 at 3:41 2 I've changed the 3 to a 13. @Jason: since the post is Community Wiki, you could have changed the 3 to a 13 as well. – Anton Geraschenko♦ Dec 26 2009 at 6:56 show 1 more comment Here is a general trick that you can use to make yourself look like you have an amazing memory. Start with a finite abelian group $(G,+)$ in which you are comfortable doing arithmetic. Be sure to know the sum $$g^* = \sum_{g \in G} g.$$ Take a set $S$ of $|G|$ physical objects with an easily computable set isomorphism $$\varphi : S \longrightarrow G.$$ Allow your audience to remove one random element from $a \in S$ and then shuffle $S$ without telling you what $a$ is. [Shuffling means we need $G$ to be abelian.] Now inform your audience that you are going to look briefly at each remaining element of $S$ and remember exactly which elements you saw, and determine by process of elimination which element of $S$ was removed. Now glace through all the remaining elements of $S$ one by one and keep a "running total" to compute $$\varphi(a) = g^* - \sum_{s \in S-{a}} \varphi(s).$$ Finally apply $\varphi^{-1}$ and obtain $a.$ Note that $\varphi$ is not "canonical" in the sense there are definitely choices to be made. On the other hand in should be "natural" in the sense that you should be very comfortable saying $s = \varphi(s).$ The prototypical example is to take $G$ to be $\Bbb Z / 13 \Bbb Z \times V_4,$ $S$ to be a standard deck of 52 cards, and $\varphi(s)$ to be $( \text{rank}(s) , \text{suit}(s) )$. 
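For the prototypical example just described ($G=\Bbb Z / 13 \Bbb Z \times V_4$, $S$ a 52-card deck), here is a small sketch of the bookkeeping. The encoding of ranks and suits as integers is just one convenient choice, not the only one:

```python
import random

RANKS = range(13)   # 0..12 standing for A, 2, ..., K
SUITS = range(4)    # 0..3 for the four suits, viewed as V_4 = Z/2 x Z/2
DECK = [(r, s) for r in RANKS for s in SUITS]

def missing_card(seen):
    """Recover the single card removed from a full 52-card deck.

    phi(card) = (rank mod 13, suit in V_4).  The sum of phi over the whole
    deck is the identity (0, 0): the ranks sum to 4 * 78 = 0 (mod 13) and
    the suits XOR to 0.  So the removed card is the inverse of the running
    total of the cards actually seen: ranks add mod 13, suits combine by XOR."""
    rank_total, suit_total = 0, 0
    for r, s in seen:
        rank_total = (rank_total + r) % 13
        suit_total ^= s
    return ((-rank_total) % 13, suit_total)

# Quick check: remove a random card, shuffle the rest, recover it.
random.seed(1)
deck = DECK[:]
random.shuffle(deck)
removed = deck.pop()
assert missing_card(deck) == removed
print("removed card recovered:", removed)
```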
- Peter Suber writes: By the way, the single best knot trick I've ever found is at pp. 98-99 of Louis Kauffman's On Knots, a mathematical treatise listed below with the books on knot theory. I'm sure you've seen the trick in which someone ties an overhand knot by crossing their arms before picking up the cord, and then uncrossing them. Kauffman shows you how to do the same trick without crossing your arms first. The version of this trick in Ashley #2576 and Budworth 1977 [p. 151] is not nearly as good. Work out how it is possible for yourselves! A link to the book is here. [Edit: This magic trick does not rely on mathematics -- instead it violates an important mathematical fact, that the trefoil is not unknotted! The Chinese rings have a similar feel, but the mathematics violated (linking number) is less deep.] - 3 Here's a video of that trick: math.toronto.edu/~drorbn/Gallery/KnottedObjects/… Actually, I've tried to reproduce this trick many many times, but I've never succeeded. The trick also seems to imply that the trefoil knot is trivial, which is weird... – Kevin Lin Dec 26 2009 at 17:26 show 5 more comments Magician: "Here is a deck of 27 cards. Select one, memorize it, put it back and shuffle at libitum. Now name a number between 1 and 27 inclusive (=: N)." Then the magician deals the cards face up into three heaps. You have to tell him in which heap the selected card lies, and he quickly ramasses the three heaps. This is done three times, then he hands you the deck, and you have to count N cards from its back. The N'th card is flipped over, and it turns out to be the card you have originally selected. - Not so much a magic trick as a math trick, in that I can prove it works in theory but I have never tried it in practice. Take a very long one-dimensional frictionless billiard table, with a wall at one end. Away from the wall, place a billiard ball with mass $10^{2n}$ for $n$ positive. Between that ball and the wall, place another billiard ball with mass $1$. Then start the heavy ball rolling slowly towards the light one. Of course, they bounce, setting the light one traveling quickly towards the wall, which it bounces off, and then it hits the heavy ball, etc., until all the momentum from the heavy ball has been transferred and it starts rolling away. Assume that all collisions are perfectly elastic. Then at the end of the day, there will be finitely many collisions. Indeed, the number of collisions will calculate the digits of $\pi$, in the sense that there will be $\lfloor \pi \times 10^n \rfloor$ collisions. I prefer this method of calculating $\pi$ much better than the probabilistic one. - 4 This reminds me of a joke that ends in a mathematician explaining his solution to a real world problem starting with "let $C$ be a spherical chicken..." – Mariano Suárez-Alvarez Jan 4 2010 at 4:07 2 Another trick for calculating $\pi$;, observed by David Boll (home.comcast.net/~davejanelle/mandel.html) and proven by Aaron Klebanoff (home.comcast.net/~davejanelle/mandel.pdf), is the following. Let $z_0 = 0$; then let $z_j = z_{j-1}^2 + c$ where $c = -.75 + \epsilon i$ for some small number $epsilon$. Then for $k \lt \pi/\epsilon + O(1)$, $z_k$ is in a circle of radius 2 around the origin; for larger $k$ it's not. (Boll came across this while investigating the Mandelbrot set. There are other points near the boundary of the set that behave similarly.) 
– Michael Lugo Jan 4 2010 at 16:39 show 8 more comments Ask someone to lay out the 52 cards in a deck, face up, in 4 rows of 13 cards each, in any order the person wants. Then you can always pick 13 cards, one from each column, in such a way as to get exactly one card of each denomination (that is, one ace, one deuce, ..., one king). As a trick, it's not up there with sawing a woman in half, but its explanation does require Hall's Marriage Theorem. - 3 Actually, Hall's Marriage Theorem has a constructive version: the augmenting-paths algorithm for finding a perfect matching in a bipartite graph, which runs in polynomial time. The existence of this algorithm might help explain why the problem isn't so hard in practice... – Scott Aaronson Jul 13 2010 at 4:35 show 2 more comments Two persons, A and B, perform this trick. The public (or one from the public) chooses two natural numbers and give A the sum and B the product. A and B will ask each other, alternatively, the only single question "Do you know the numbers?" answering only yes or no until both find the numbers. There is a strategy such that for any input and only doing this, A and B will manage to find the original numbers. I have never seen magicians actually performing this, but is perfectly doable. This was a problem in the shortlist of the proposed problem for some international mathematical olympiad. Unfortunately I don't remember which. If someone remembers or finds it. Tell us please. i would also like to know. - I gave a talk about card shuffling to a general audience recently and wanted to memorise a "random-looking" deck so as to motivate a correct definition of what it means for a deck to be random. Most magicians actually use memory tricks to learn off the deck but I thought it would be much cleverer to order the cards in the obvious way, and then find a recursive sequence of length 52 containing all of 1 to 52. In the end, caught for time I settled on using the Collatz recursive relation with seed 18 --- this allowed me to name off 21 distinct cards effortlessly and when I held up the deck prior to the demonstration, the audience voted that the deck was random. Can anyone think of a suitable recursive sequence with the desired property? We can either take a random-looking order and a "regular" recursive sequence but I think it would be much better to find an easy to compute recursive sequence that "looks random" when using a more canonical order simply because if we can remember a "random looking order" we're pretty much going to have to remember the whole deck --- the problem I'm exactly trying to avoid. PS: I did one of the simpler Diaconis tricks. A deck is riffle shuffled three times, the top card shown to the audience, inserted into the deck, and after laying the cards out on the table the top card can be easily recovered by looking at the descents. The key is that the order of the deck is known beforehand --- a simple demonstration that three shuffles does not suffice to mix up a deck of cards (with respect to variation distance). - I forgot the historical name for this and I'm pretty sure this is classical and well-known. Consider a circular disk and remove an interior circular region, not necessarily concentric. In this annulus we play the following game. Start at any point $p_{1}$ of the outer boundary and draw a line through this point which is tangent to the inner circle. This line intersects the outer circle at another point $p_2$. Now repeat the same procedure with $p_2$ and get $p_3$. 
Iterating this procedure ad infinitum we either conclude that these sequence of points are periodic or not. What's true is that the periodicity or lack of it is independent of the starting point $p_1$. I believe there is a proof involving Lefschetz fixed point theorem involving the torus but any details on this and the history of this is more than welcome. - 1 I believe you are referring to Poncelet's theorem. mathworld.wolfram.com/PonceletsPorism.html – Gjergji Zaimi Dec 26 2009 at 0:50 11 It is hard to see how this idea can be turned into an actual trick. It requires either infinite accuracy (if the procedure is done by drawing on paper) or lots of complicated computation. Moreover, generically there is no periodicity, and the audience will not be very impressed by the prediction of non-periodicity. – Richard Stanley Dec 26 2009 at 16:37 1 I'm not sure I agree. Choosing the circles so that the periodicity is very low, say 3 or 4, and letting a computer do the calculation at the touch of a button for audience-member-chosen initial points (a bit of a pain but definitely something you can accomplish nowadays), you can definitely turn this into something pretty interactive and fun. – Emilio Pisanty Jun 22 at 11:27 show 1 more comment So two points of note. I did not read all the posts above in detail but did do a search for the Faro Shuffle and got no results... So: This is a shuffle where all the cards interweave absolutely perfectly (so a perfect riffle shuffle). There's quite a lot of maths behind this. For instance, 8 shuffles takes you back to the order you started shuffling the cards in. Martin Gardner talked about this a bit in at least one of his SA columns. The problem with the faro shuffle is it takes a long long time to learn... personally well over a year, and that was with the benefit of having been a practicing amateur magician for along time. Still if interested the book to look for is The Collected Works of Alex Elmsley, this really lays the foundations for mathematical faro work... Another trick I came across whilst working towards an Ergodic Theory exam uses the Birkhoff Ergodic Theorem at its core. You can read about it in these notes: http://www.maths.manchester.ac.uk/~cwalkden/ergodic-theory/lecture22.pdf Owen. - Here is a simple trick based on group theory. Ask a person to choose four numbers from 1 to 9 and write them in a row on a piece of paper. Pause for a moment and then write a number on a piece of paper without letting the other person see what it is. Turn the paper over and place it on the table. Now ask the person to choose two of the numbers from the list and put a line though them. Ask the person to compute a*b + a + b and put it in the list to replace the two chosen numbers. Continue to do this until there is only one remaining number. Turn over the paper and show that the numbers match. The simplest way of explaining this is to show that a * b + a + b is isomorphic to multiplication using the transform T(x) = x + 1. (a*b + a + b) + 1 = (a + 1)(b + 1). If we denote the operation a * b + a + b as a & b, this means that a & b is commuative and associative, just as multiplication is. For any list of numbers ai, the final number can be computed as the (a1 + 1)(a2 + 1)...(an + 1) - 1. - Here's another Fibonacci trick, from Benjamin & Quinn's "Proofs that really count". The magician hands a volunteer a sheet of paper with a table whose rows are numbered from one to ten, plus a final row for the total. 
She asks him to fill in the first two rows with his favorite two positive integers. She then asks him to fill in row three with the sum of the first two rows, row four with the sum of row two and row three, etcetera... She then hands him a calculator and asks him to add up all ten numbers together. Before he's able to finish that, the magician has a quick look at the sheet of paper and announces the total. The magician then asks the volunteer to divide row 10 by row 9, and cut up the answer to the second decimal digit. The volunteer performs the division and says: 1.61. And the magician: "Now turn over the paper and look what I've written". The paper says: "I predict the number 1.61". The first part of the trick uses the following well-known Fibonacci identity: $$\sum_{i=1}^nF_i=F_{n+2}-1$$ Indeed, call $x$ the number in row 1 and $y$ the number in row 2. Then for $n \geq 3$, the number in row $n$ is $F_{n-2} x+F_{n-1} y$, where $F_n$ is the $n$-th Fibonacci number. So the number in row 7 is $F_5 x + F_6 y=5x+8y$ and the total is $$x+y+\sum_{i=3}^{10} (F_{i-2} x+F_{i-1} y)= F_{10} x + F_{11} y=55x+88y$$ by the Fibonacci identity mentioned at the beginning. Therefore all the magician has to do to find the total is multiply row 7 by the number 11. The second part of the trick uses an inequality for the freshman sum ;-) of two fractions. That is, given positive fractions $\frac{a}{b}$ and $\frac{c}{d}$ such that $\frac{a}{b}<\frac{c}{d}$ we have: $$\frac{a}{b} < \frac{a+c}{b+d} < \frac{c}{d}$$ Just note that the number in row 9 is $13x+21y$ while the number in row 10 is $21x+34y$. Hence: $$1.615 \dots =\frac{21x}{13x} < \frac{21x+34y}{13x+21y} < \frac{34y}{21y}=1.619 \dots$$ - Apart from tricks based on numbers, there are topological objects whose properties can seem quite magical, like the Möbius strip or the unknot. E.g. take a standard page of paper, show that it has two sides (number them with a pen, show that any straight pen path meets a boundary). Next, cut out a long strip from it (not needed of course, but adds to the drama), and ask the audience "and how many sides does this have?". They reply "two". Then you put the the two small ends of the strip together to form a ring and you ask "and now, how many sides?", they still reply "two!". At this point do a little diversion, like putting a pair of scissors on the table saying out loud "I'll use this in a minute". Now do a half-twist with the strip before putting the small ends together and ask again "for the last time people, how many sides?". They answer "twoo!!", and you say "the magic has worked people, there's only one side!" (you show that now the pen paths along the long direction never meet a boundary and come back). Most laymen are quite bemused. Now do two half-twists and ask again, some won't dare an answer... - 1 Have an assistant cut a cylinder in half along its median circle. Then cut a Mobius strip along its median. 
– Douglas Zare Jan 17 2010 at 23:08

You may ask the person to encode something by RSA; then you decode it (you have the private key). OR: ask them to divide two 40-digit integers and give you the decimal result to 100 digits; you then use continued fractions to find the original fraction (reduced). OR: ask them to compute pq and pr where p, q, r are prime; you then find p, q, r by the Euclidean algorithm (not very deep, but it's the best I've got).

-

The coffee mug trick is also called the Philippine Wine Trick and should be related to the Dirac String Trick, which you can find by a web search, for example here and also in my presentation Out of Line, where rotations in 3-space are related to the projective plane.

A knot trick, I am not sure you would call it magic, has been shown to children and academics in many places. It requires a pentoil knot of width 20" made of copper tubing, about 7mm diameter (made by a university workshop), shown in the following diagram: It also needs some nice flexible boating rope. The rope is wrapped round the $x,y$ pieces according to the rule $$R=xyxyxy^{-1}x^{-1}y^{-1}x^{-1}y^{-1}$$ and the ends tied together, as in the following picture: A member of the audience is then invited to come up and manipulate the loop of rope off the knot, starting by turning it upside down. This justifies the rule $R=1$. Of course the rule is the relation for the fundamental group of the complement of the pentoil, which can, for the right audience, be deduced from the relations at each crossing given by the diagram and can be easily demonstrated with the knot and rope. It is also of interest to have a copper trefoil around to compare the relations. One warning: the use of rope does not really model the fundamental group, so be careful with a demo for the figure eight knot! I did the demo for one teenager and he said: "Where did you get that formula?" This demo knot has been well travelled, for many different types of audience; on one occasion the airline lost my luggage with the rope and I had to ask the taxi from the airport to stop at a hardware store for me to buy some clothesline. I devised this trick for an undergraduate course in knot theory in the late 1970s.

-

Place $K$ face-down cards on a table, blindfold yourself, and ask a volunteer for a number $1 < n < K$. Allow them to flip $n$ random cards face up. Cover the cards with an opaque box that has two holes for you to put your hands in, and claim that you can split the cards into 2 stacks, each with the same number of face-up cards. Based on a well known logic puzzle: http://usna.edu/Users/physics/mungan/_files/documents/Scholarship/CoinPuzzle.pdf I modified the process to make it harder for the audience to figure out what you did, and used cards so that they will not think that you did it by differentiating the surfaces of the coins.

-

Here's a couple of well-known simple topology tricks: Tie the ends of a long enough piece of rope to your wrists, while wearing a loosely fitting jacket or sweatshirt. With your arms tied like that, take the jacket off your back and put it back on inside out. It's easier to figure out how to do it than to explain it in words, so I'll skip the explanation. The more risque version is to tie the ankles and do the trick with pants. The other one I haven't tried, but maybe it can be done at a party if you have a stick and some plasticine around.
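Going back to the blindfolded card-splitting trick above: the invariant that makes it work (count out any $n$ cards as one stack and flip every card in that stack) is easy to test mechanically. A minimal sketch, assuming nothing beyond the Python standard library:

```python
import random

def split_and_flip(cards, n):
    """cards: list of booleans (True = face up).
    Take any n cards (here: the first n) as one stack and flip them all."""
    flipped_stack = [not c for c in cards[:n]]   # flipping swaps face up/down
    rest = cards[n:]
    return flipped_stack, rest

# Check the invariant: after the flip, both stacks contain the same number
# of face-up cards, whatever the original layout was.
random.seed(0)
for _ in range(1000):
    K = random.randint(5, 60)          # total cards
    n = random.randint(2, K - 1)       # number of face-up cards
    cards = [True] * n + [False] * (K - n)
    random.shuffle(cards)
    a, b = split_and_flip(cards, n)
    assert sum(a) == sum(b)
print("invariant holds in all trials")
```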
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 64, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9476609826087952, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=578682
Physics Forums ## How to generate an equation for estimating maximum error? Hi, I am hoping to generate an equation that will describe a maximum error in estimation of angles. I would like an advice on the type of math/methods/devices (e.g., differential equations?) that I could use to solve this. Here's the problem: Imagine a square with an oblique line inside the square. Lets say the oblique line touches the left bottom corner of the square. Of interest is the error in estimation of angles between the oblique and the left side of the square. I will refer to it as the angle of an oblique. By error I mean the difference between the true angle and the estimated angle. If the length of two parallel sides of a square is misestimated relative to the other two sides, then the angle of an oblique will also be misestimated in a very specific way. Here's the twist: the misestimation of angles will vary depending on the physical angle between an oblique and the left side of a square. If one of the sides of a square is misestimated by less than 50 %, for example, the maximum error will presumably occur for angles that are about 40 degrees to the left side of the square. I would like to generate a function that will show (1) when the maximum error should occur (i.e., at which angle of an oblique?), (2) what is the maximum error (e.g., angle is overestimated by 12 degrees; therefore error = +12 degrees). Maximum error will depend on percent error in misestimation of lengths of a square, and the physical angle of an oblique. I hope this is clear enough. Thank you for your help! Recognitions: Science Advisor Quote by PatternSeeker If the length of two parallel sides of a square is misestimated relative to the other two sides, then the angle of an oblique will also be misestimated in a very specific way. Here's the twist: the misestimation of angles will vary depending on the physical angle between an oblique and the left side of a square. You need to explain exactly how the estimated angle depends on the estimated sides of the square. ( and do you mean "square" or "quadrilateral"?) For example if the vertices of the figure are given counterclockwise as ABCE with the line passing through A and hitting side BC, how is the estimate of the length BC used to calculate the angle? Does the estimator assume all sides of the square have length equal to his estimate for BC? Thank you for your reply, Stephen. This question is about a square that is perceived as a rectangle. For example, an observer may underestimate sides "AE" and "BC" but accurately estimates sides "AB" and "EC." An observer does not assume that all sides of the square have length equal to his estimate for BC. A line passing through A and hitting side BC will be at a larger angle from side "AE" in a rectangle above than a square. This is because a component of the line parallel to side "AE" will be underestimated by the same amount (in % error) as the side of the square. As an analogy, consider what happens with a diamond inscribed within a square seen as a rectangle. The diamond will look squashed in the latter. Consequently, the angle between its side and the adjacent side of the square will be overestimated. ## How to generate an equation for estimating maximum error? Hey PatternSeeker and welcome to the forums. This might sound anal, but it might help if you give us a simple diagram outlining what you are trying to measure. 
In the graphic show all the variables and then if you can use an arrow to point to the 'angle' and if necessary work out the relationships between the sides that specify the angle in terms of them. Thanks, chiro. I attached a picture which I hope will clarify things. The angle of interest is angle alpha. Any suggestions will be much appreciated! Thank you for welcoming me to the forum! I hear good things about it. Attached Thumbnails Recognitions: Science Advisor PatternSeeker, You have not clearly described the experiment that collects this data. For example, does the person being tested report "I think angle alpha = 65 degrees"? Or are you deducing the person's opinion about angle alpha from his opinion about other dimensions of the diagram. For example, does the person report "I think AE = 3" or do they report "I think AE = 3/5 of EC?". If the person does not report a perception of the angle, I think calling the 65 degrees a "perceived" angle is misleading. If you are computing the angle, how are you computing it? My speculation is that you know the actual dimensions of AP and RP. When the subject reports his guess g about distance $\overline{AE}$ , you assume all vertical distances are shrunk by the factor $\frac{g}{\overline{AE}}$. So you compute the "perceived" angle as $\alpha' = \arctan ( \frac{ \frac{g}{\overline{AE}}(\overline{RP})}{\overline{AP}})$. Is that what you do? Hi Stephen, Thank you for your reply. I used the term "perceived" for angles that would be estimated by observers. In this example, they would report angle as 65 degrees. Real angles are physical angles. The data I put up on this portal are made-up. I used the formulas such as the one you gave to generate these hypothetical data. % error in perception of length of stimuli is derived from a theory. Just as a reminder, I would like a suggestion about how to develop an equation that will produce an answer to where the maximum error (perceived - real angles) should occur. For example, assume that the % error in misestimation of lengths is - 40%. At what real angle between two edges of stimuli should the maximum error in estimated angle occur? I can certainly generate lists of values to figure this out, but I need a formula that will give me a single answer. I provided more details in the attached picture, and in previous posts. Thank you for your attention and suggestions about how to clarify this problem. I hope to hear from you again. Any further suggestions will be appreciated. Recognitions: Science Advisor Can we agree that the problem is: Given $\lambda > 0$ find angle $\alpha$ that maximizes $f(r) = | \arctan(r) - \arctan( \lambda r) |$ where $r = tan(\alpha)$. Hi Stephen, That may be so. Can you just tell me what λ stands for? Is it % error? In translation I would state the problem as: given that the % error in estimated lengths is less than zero, find physical angle α that maximizes the change between the estimated angle α' and the physical angle α. I'm not sure if that's exactly what you wrote in the mathematical language :) Please clarify. Recognitions: Science Advisor For a "40% underestimate", $\lambda$ would be 0.60. (I try to avoid using the terminology "percent" whenever possible. It often introduces confusion.) Just glancing at the problem, I think the angle of maximum error is $\arctan( \frac{1}{\sqrt{\lambda}})$, but I must go do something now. I'll look at it closer later. Stephen, I think you got it! Except that it's not arctan(1√λ), but that formula subtracted from 90 degrees. 
How did you arrive at that solution? I will try to figure this out on my own, but if you could clarify further, that would be great. Thank you so much.

Recognitions: Science Advisor

It's one of those calculus problems where you find extrema of a function by setting its derivative equal to zero and solving for the values that make that happen. For $f(r) = | \arctan(r) - \arctan( \lambda r) |$,

$f'(r) = \frac{1}{1 + r^2} - \frac{\lambda}{1 + \lambda^2 r^2 }$ if $\lambda < 1$.

If you're looking for a maximum, you must verify that a given solution of $f'(r) = 0$ produces a maximum instead of a minimum or an inflection point. You also have to check whether the endpoints of the range of $r$ are extrema. In this case $r = 0$ is a minimum. Since angles (in mathematics) are usually measured counterclockwise from a horizontal reference line, I computed the angle that way.

Hi Stephen, I appreciate your work. Can you just explain to me exactly how you got $f(r)=|\arctan(r)-\arctan(\lambda r)|$ where $r=\tan(\alpha)$? If this is for a reported angle minus physical angle, wouldn't the equation read $\arctan(\lambda r)-\arctan(r)$? Why did you use arctangents the way you did? A few more steps would be helpful.

Recognitions: Science Advisor

Suppose we have right triangle ABC with hypotenuse AC, and let $\alpha = \angle BAC$. Let $r = \frac{BC}{AB}$. Then $\alpha = \arctan(r)$. If $BC$ is multiplied by a factor of $\lambda$ then $r$ is multiplied by a factor of $\lambda$ and $\angle BAC$ becomes $\alpha' = \arctan( \lambda r)$. So the absolute value of the change in angle is $|\arctan(r) - \arctan(\lambda r) |$.

Thanks, Stephen!
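For anyone reading along, the result is easy to verify by machine. A small sketch, assuming SymPy and NumPy (which the thread itself does not use):

```python
import sympy as sp
import numpy as np

r, lam = sp.symbols('r lam', positive=True)
f = sp.atan(r) - sp.atan(lam * r)      # error in the angle; >= 0 when lam < 1
print(sp.solve(sp.diff(f, r), r))      # expect the critical point r = 1/sqrt(lam)

# Numerical check for a 40% underestimate (lam = 0.6):
g = sp.lambdify(r, f.subs(lam, 0.6), 'numpy')
alphas = np.linspace(0.01, np.pi / 2 - 0.01, 100000)   # physical angle alpha
errors = g(np.tan(alphas))
print(np.degrees(alphas[np.argmax(errors)]))           # about 52.2 degrees
print(np.degrees(np.arctan(1 / np.sqrt(0.6))))         # same: arctan(1/sqrt(0.6))
```

For a 40% underestimate ($\lambda = 0.6$) this puts the worst case at about 52 degrees from the unchanged reference side, i.e. roughly 38 degrees from the other side, consistent with PatternSeeker's earlier remark that the maximum error should show up around 40 degrees.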
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9333502650260925, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/148900/finding-the-space-between-two-vectors
# Finding the space between two vectors? Could someone explain why the formula: $$\theta = \cos^ {-1}(a \cdot b)$$ provides the angle between vector $a$ and vector $b$? All the online resources seem to explain how to find the angle, but not why the method works. Resources I looked at: http://chemistry.about.com/od/workedchemistryproblems/a/scalar-product-vectors-problem.htm http://www.euclideanspace.com/maths/algebra/vectors/angleBetween/index.htm http://en.m.wikipedia.org/wiki/Dot_product - How do you define the dot product? – mixedmath♦ May 23 '12 at 18:31 You can take this as a definition of angle. In any case, this is the law of cosines. – Qiaochu Yuan May 23 '12 at 18:33 Multiplying corresponding entries of the vectors and then summing the products. – user26649 May 23 '12 at 18:33 @QiaochuYuan I suspected it to have something to do with the cosine law, but I don't see any mention of the third side. Using cosine law, wouldn't $\theta = \cos^{-1}\left(\frac{c^2 - a^ 2 - b^2}{-2ab}\right)$ – user26649 May 23 '12 at 18:39 2 The vectors must be unit vectors. Otherwise it's $\cos^{-1}\left( \dfrac{a.b}{\|a\| \|b\|} \right)$ – Robert Israel May 23 '12 at 18:41 show 1 more comment ## 3 Answers The dot product can be derived from the cosine law. $$c^2=a^2+b^2-2ab\cos(C)$$ where C is the angle between $a$ and $b$. If you consider $a$ and $b$ as your vectors, then side $c$ can be represented as $(b-a)$. So (noting that I'm talking about distances) $$|b-a|^2=|a|^2+|b|^2-2|a||b|cos(C)$$ $${(b_x-a_x)}^2+{(b_y-a_y)}^2={a_x}^2+{a_y}^2+{b_x}^2+{b_y}^2-2|a||b|cos(C)$$ $$({b_x}^2-2a_xb_x+{a_x}^2)+({b_y}^2-2a_yb_y+{a_y}^2)={a_x}^2+{a_y}^2+{b_x}^2+{b_y}^2-2|a||b|cos(C)$$ cancel out the squared terms and you get $$-2(a_xb_x+a_yb_y)=-2|a||b|cos(C)$$ $$a_xb_x+a_yb_y=|a||b|cos(C)$$ $$a\cdot b=|a||b|cos(C)$$ which gives you a nice way of getting the angle between vectors if you only have their components, by the simple rearrangement: $$C=\arccos(\frac{a\cdot b}{|a||b|})$$ It's less obvious why this works in the general dimensional case (and not just 2D), but a good place to start is to notice that any two vectors can always be put in a plane, and showing that expressing the $(x,y)$ of the plane in terms of the higher-dimensional vector components, then applying this calculation makes all the math come out okay. - Thank You Robert! – user26649 May 23 '12 at 19:05 A combination of linear algebra and geometry can be useful. If $R$ is a rotation matrix, it satisfies $R^TR=I_n$, where $I_n$ is the $n\times n$ identity matrix. Assume $v,w$ are unit vectors. We compute $$Rv\cdot Rw=(Rv)^T(Rw)=v^T(R^TR)w=v^TI_nw=v^Tw=v\cdot w$$ using matrix transpose properties. Thus the dot product is rotation-invariant. Rotations act transitively, which for our purposes means that given $v,w$ we can find a rotation $R$ so that $Rv=e_1$ is the first basis unit vector. After that we rotate around the $x$-axis so that $Rw$ becomes a vector on the $xy$-plane (while $Rv=e_1$ remains unchanged), and we are reduced to the case of $\Bbb R^2$. Here, $$(1,0)\cdot(\cos\theta,\sin\theta)=\cos\theta,$$ as desired (where $\theta$ is the angle between the second vector and the $x$-axis, or equivalently $e_1$). - We know that $$\cos(\alpha-\beta)=\cos(\alpha)\cos(\beta)+\sin(\alpha)\sin(\beta)$$ Now let $\alpha$ be the angle between $a$ and the $x$-axis, and $\beta$ between $b$ and the $x$-axis. 
From the definition of sine and cosine you get: $$\cos(\alpha-\beta)=\frac{a_1}{|a|}\frac{b_1}{|b|}+\frac{a_2}{|a|}\frac{b_2}{|b|}=\frac{a\cdot b}{|a||b|}$$ Since the angle between $a$ and $b$ is $\theta=\alpha-\beta$, you get $$\theta=\arccos\left(\frac{a\cdot b}{|a||b|}\right)$$ QED -
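A short numerical illustration of the formula discussed above (added here as an example, not part of the original page; it assumes NumPy):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([-2.0, 5.0])

# theta = arccos( a.b / (|a| |b|) ); clip guards against round-off pushing the
# ratio slightly outside [-1, 1]
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Cross-check against the difference of the two polar angles, as in the last answer
# (for these particular vectors the difference already lies in [0, pi])
alpha, beta = np.arctan2(a[1], a[0]), np.arctan2(b[1], b[0])
print(np.degrees(theta), np.degrees(abs(alpha - beta)))   # the two values agree
```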
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9198631048202515, "perplexity_flag": "head"}
http://mathoverflow.net/questions/39882?sort=oldest
## Product of Borel sigma algebras ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) If $X$ and $Y$ are separable metric spaces, then the Borel $\sigma$-algebra $B(X \times Y)$ of the product is the the $\sigma$-algebra generated by $B(X)\times B(Y)$. I am embarrassed to admit that I don't know the answers to: Question 1. What is a counterexample when $X$ and $Y$ are non separable? Question 2. If $X$ is an uncountable discrete metric space, does $B(X)\times B(X)$ generated the Borel $\sigma$-algebra on $X \times X$? Question 3. If $X$ and $Y$ are metric spaces with $X$ separable, does $B(X)\times B(Y)$ generated the Borel $\sigma$-algebra on $X \times Y$? - ## 4 Answers Q1. Discrete spaces with cardinal > c ... then the diagonal is a Borel set, but not in the product sigma-algebra. This also answers Q2 (no) but not Q3. - 1 I think he does mean $>c$. The assertion being made is exercise 29 in the Radon measures chapter of Folland's real analysis text. – Keenan Kidwell Sep 24 2010 at 18:25 Thanks, Jerry & Keenan. Simple but nice. What is the answer to Q2 when $X$ has cardinality the continuum? – Bill Johnson Sep 24 2010 at 18:34 If $X$ is of size $c$, say $X=\mathbb R$, then the complement of the diagonal in $X^2$ is the union of countably many rectangles. This follows from the fact that $\mathbb R^2$ has a countable basis for the topology consisting of rectangles. Now, we are interested in the discrete topology on $X$, but the fact remains, the diagonal in $X^2$ is in the product $\sigma$-algebra. – Stefan Geschke Sep 24 2010 at 18:35 Of course, this does not show that the product $\sigma$-algebra is the same as the $\sigma$-algebra on the product. It just shows that the diagonal does not distinguish the two algebras. – Stefan Geschke Sep 24 2010 at 18:39 Here is an accesible proof: david.efnet-math.org/?p=16 – Michael Greinecker Sep 24 2010 at 19:31 show 1 more comment ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The answer to question 3 is yes. At least according to Lemma 6.4.2 of the second volume of Bogachev's book "Measure Theory". He requires both spaces to be Hausdorff and one of them to have a countable base. They need not be metric spaces. - Oh, yes, Byron. This is simple enough that this simpleton should have seen the argument. Thanks, and sorry that I can only accept one answer. – Bill Johnson Sep 24 2010 at 19:00 3 No problem. I really didn't do anything except follow my own advice: when you have an "unusual" measure theory question, try looking in Bogachev. His book is an impressive achievement. – Byron Schmuland Sep 24 2010 at 19:05 Yes, it is very nice. I had never looked at it before reading your answer. Thanks again. – Bill Johnson Sep 24 2010 at 19:08 To close a gap: From the answer of Gerald Edgar, we know that the answer to the second question is no if the spaces involved have cardinality larger than $\mathfrak{c}$. This leaves open what happens when they do have cardinality $\mathfrak{c}$. The answer is yes under the continuum hypothesis, and in general it holds that $2^{\omega_1}\otimes 2^{\omega_1}=2^{\omega_1\times\omega_1}$. This was shown in B. V. Rao, On discrete Borel spaces and projective sets Bull. Amer. Math. Soc. Volume 75, Number 3 (1969), 614-617. In Bogachev's remarkable book, it can be found as Proposition 3.10.2. 
- This should probably rather be a comment to Michael Greinecker's answer, but I do not have the necessary privileges. Michael Greinecker's answer leaves open what happens with a continuum-sized discrete space when one does not assume the continuum hypothesis. Arnold W. Miller showed in section 4 of On the length of Borel hierarchies that it is consistent relative to ZFC that no universal analytic set $U \subset [0,1] \times [0,1]$ belongs to the product $\sigma$-algebra $\mathcal{P}[0,1] \otimes \mathcal{P}[0,1]$. Combined with Rao's result mentioned by Michael Greinecker, this shows that $2^{\mathfrak{c} \times \mathfrak{c}} = 2^{\mathfrak{c}} \otimes 2^\mathfrak{c}$ is independent of ZFC. See my answer to Universally measurable sets of $\mathbb{R}^2$ on math.stackexchange.com for related results and more details and references. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9178448915481567, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/135795-integral-sec-3-theta-d-theta.html
# Thread:

1. ## Integral of sec^3 (theta) d(theta)

How do I integrate sec^3 (theta) d(theta)? I am doing another problem and this is the step I am stuck on. I know all my previous steps are correct because math software told me, but I'd like to be able to do this manually. Any help would be greatly appreciated! Thanks in advance!

2. Originally Posted by s3a How do I integrate sec^3 (theta) d(theta)? I am doing another problem and this is the step I am stuck on. I know all my previous steps are correct because math software told me, but I'd like to be able to do this manually. Any help would be greatly appreciated! Thanks in advance! 1) express sec as cos 2) multiply numerator and denominator by cos(theta) 3) the denominator becomes an even power of cos, so express it as sin 4) substitute sin(theta) = t 5) use partial fractions 6) integrate

3. I substitute for u instead of t because I am more familiar with it, and I am at Integral of 1/(1-u^2)^2 du but am a bit lost now. Could you show me what to do with the partial fractions? I don't know if the following is right: 1 = A(u+1)(u-1)^2 + B(u-1)^2 + C(u+1)^2 * (u-1) + D(u+1)^2

4. Originally Posted by s3a How do I integrate sec^3 (theta) d(theta)? I always thought the method of parts was a neat way to do it ... $\int \sec{t} \cdot \sec^2{t} \, dt$ $u = \sec{t}$ ... $du = \sec{t}\tan{t} \, dt$ $dv = \sec^2{t} \, dt$ ... $v = \tan{t}$ $\int \sec^3{t} \, dt = \sec{t}\tan{t} - \int \tan^2{t}\sec{t} \, dt$ $\int \sec^3{t} \, dt = \sec{t}\tan{t} - \int (\sec^2{t}-1)\sec{t} \, dt$ $\int \sec^3{t} \, dt = \sec{t}\tan{t} - \int (\sec^3{t}-\sec{t}) \, dt$ $\int \sec^3{t} \, dt = \sec{t}\tan{t} - \int \sec^3{t} \, dt + \int \sec{t} \, dt$ $2\int \sec^3{t} \, dt = \sec{t}\tan{t} + \int \sec{t} \, dt$ $2\int \sec^3{t} \, dt = \sec{t}\tan{t} + \ln|\sec{t}+\tan{t}| + C$ $\int \sec^3{t} \, dt = \frac{1}{2}\left[\sec{t}\tan{t} + \ln|\sec{t}+\tan{t}|\right] + C$

5. Just a question: what if I had a higher power, like Integral (sec^5(x)) dx? Is there any easy method for that?
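As a sanity check, the closed form found by parts above can be compared against numerical integration (this snippet is an added illustration, not from the thread; it assumes SciPy). Regarding the follow-up question about higher powers: the same integration-by-parts argument gives the standard reduction formula $\int \sec^n{t}\,dt = \frac{\sec^{n-2}{t}\,\tan{t}}{n-1} + \frac{n-2}{n-1}\int \sec^{n-2}{t}\,dt$, so sec^5 reduces to sec^3, which reduces to sec.

```python
import numpy as np
from scipy.integrate import quad

def F(t):
    """Antiderivative 1/2 [ sec t tan t + ln|sec t + tan t| ] derived above."""
    sec, tan = 1.0 / np.cos(t), np.tan(t)
    return 0.5 * (sec * tan + np.log(abs(sec + tan)))

a, b = 0.2, 1.2   # stay inside (-pi/2, pi/2), where sec^3 is continuous
numeric, _ = quad(lambda t: 1.0 / np.cos(t) ** 3, a, b)
print(numeric, F(b) - F(a))   # the two values agree (difference ~ 1e-12)
```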
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9339536428451538, "perplexity_flag": "middle"}
http://gowers.wordpress.com/2010/09/
# Gowers's Weblog Mathematics related discussions ## Archive for September, 2010 ### Polymath3 now active September 30, 2010 After a long initial discussion period, Polymath3, a project on Gil Kalai’s blog that aims to solve the polynomial Hirsch conjecture, has now started as an active research project. There is already quite a lot of material on his blog, and soon some of it should have migrated to a wiki, which will be a good place to get up to speed on the basics. I will update this post from time to time with news of how the project is going. Something that is maybe worth pointing out is that although the problem looks at first as though you need to know all sorts of facts about convexity, there is a very nice purely combinatorial statement (about set systems) that would imply the conjecture. So there is no excuse not to think about it … Posted in News, polymath | 1 Comment » ### Is the Tricki dead? September 24, 2010 This is a post I’ve been meaning to write for some time. As most readers will know, at the very end of 2008 Alex Frolkin, Olof Sisask and I started the Tricki, a wiki-like website where people could post articles about mathematical techniques. The hope was that after the site had reached some level of completeness, it would be possible to take a mathematics research problem (or subproblem) and search efficiently for known techniques that were likely to be relevant. It would be doing something a little different from Wikipedia mathematics articles, which concentrate more on what I like to think of as “things with names”. For instance, if you suspect that discrete Fourier analysis is likely to be a useful tool for your problem, then you can type “discrete Fourier analysis” into Google and find many links, including to a Wikipedia article that contains many of the basic facts. But what if it doesn’t occur to you that discrete Fourier analysis is what you need (even though it in fact is)? The idea was that, using easily identifiable features of your problem, you would be able to follow links to ever more specific Tricki pages until you would reach a page that explained when discrete Fourier analysis was useful and how it was used. In general, the whole site would be about how to do mathematics rather than about lists of facts. (more…) Posted in Mathematics on the internet | 46 Comments » ### EDP21 — restrictions on possible proofs September 21, 2010 The situation we are now in with EDP is roughly this. We have a statement that would imply EDP and that seems easier to prove because it is a reasonably simple existential statement (there exists a decomposition with certain properties) rather than a universal statement (every $\pm 1$ function has unbounded discrepancy on HAPs). However, the existential statement seems itself to be hard, despite one’s initial expectation that it ought to be easier. So what do we do? One tool that we have at our disposal is duality, which is what we used to convert the problem to an existential one in the first place. Now obviously we don’t want to apply duality twice and end up with the original problem, but, perhaps surprisingly, there are ways that applying duality twice could be useful. Here are two such ways. The first is that you prove that a certain kind of decomposition would be sufficient to prove EDP. Then you argue that if such a decomposition exists, then a more restricted kind of decomposition must also exist. Dualizing again, one ends up with a new discrepancy problem that is different from the original one (though it will imply it). 
The second way is this: if it is not easy to write down a decomposition that works, then one wants to narrow down the search space. And one way of doing that is to prove rigorously that certain kinds of decompositions do not exist. And an efficient way of doing that is to use duality: that is, one finds a function with low discrepancy on the class of sets that one was hoping to use for the decomposition. Since this class is restricted, solving the discrepancy problem is easier than solving EDP (but this time it doesn’t imply EDP). We have already had an example of the first use of dualizing twice. In this post I want to give in detail an example of the second. (more…) Posted in polymath5 | 93 Comments » ### Are these the same proof? September 18, 2010 I have no pressing reason for asking the question I’m about to ask, but it is related to an old post about when two proofs are essentially the same, and it suddenly occurred to me while I was bathing my two-year-old son. Consider the problem of showing that the product of any $k$ consecutive positive integers (or indeed any integers, not that that is a significant extension) is divisible by $k!.$ I think the proof that most experienced mathematicians would give is the slick one that $n(n+1)\dots(n+k-1)$ divided by $k!$ is $\binom {n+k-1}k,$ and so is the number of ways of choosing $k$ objects from $n+k-1$ objects. Since the latter must be an integer, so must the former. One might argue that this is not a complete proof because one must show that $\binom{n+k-1}k$ really is the number of ways of choosing $k$ objects from $n+k-1,$ but that is not hard to do. (more…) Posted in Somewhat philosophical | 41 Comments » ### EDP20 — squares and fly traps September 10, 2010 I think this will be a bit long for a comment, so I’ll make it a post instead. I want to try to say as clearly as I can (which is not 100% clearly) what we know about a certain way of constructing a decomposition of the identity on $\mathbb{Q}.$ Recall from the last post or two that what we want to do is this. Define a square in $\mathbb{N}\times\mathbb{N}$ to be a set of the form $[r,s]^2,$ where by $[r,s]$ I mean the set of all positive integers $n$ such that $r\leq n\leq s.$ Let us identify sets with their characteristic functions. We are trying to find, for any constant $C,$ a collection of squares $S_1,\dots,S_k$ and some coefficients $\lambda_1,\dots,\lambda_k$ with the following properties. • $C\sum_{i=1}^k|\lambda_i|\leq\sum_{i=1}^k\lambda_it_i,$ where $S_i=[r_i,s_i]^2$ and $t_i=(s_i-r_i+1)$ is the number of points in the interval that defines $S_i,$ or, more relevantly, the number of points in the intersection of $S_i$ with the main diagonal of $\mathbb{N}\times\mathbb{N}.$ • Let $f(x,y)=\sum_i\lambda_iS_i(x,y).$ Then for any pair of coprime positive integers $a,b$ we have $\sum_{n=1}^\infty f(na,nb)=0.$ The second condition tells us that the off-diagonal elements of the matrix you get when you convert the decomposition into a matrix indexed by $\mathbb{Q}_+$ are all zero, and the first condition tells us that we have an efficient decomposition in the sense that we care about. In my previous post I showed why obtaining a collection of squares for a constant $C$ implies that the discrepancy of an arbitrary $\pm 1$ sequence is at least $C^{1/2}.$ In this post I want to discuss some ideas for constructing such a system of squares and coefficients. 
I’ll look partly at ideas that don’t work, so that we can get a sense of what constraints are operating, and partly at ideas that might have a chance of working. I do not guarantee that the latter class of ideas will withstand even five minutes of serious thought: I have already found many approaches promising, only to dismiss them for almost trivial reasons. [Added later: the attempt to write up even the half promising ideas seems to have killed them off. So in the end this post consists entirely of half-baked ideas that I'm pretty sure don't work. I hope this will lead either to some new and better ideas or to a convincing argument that the approach I am trying to use to create a decomposition cannot work.] (more…) Posted in polymath5 | 63 Comments » ### A Disappearing Number on in London September 10, 2010 I said in my post about the fourth day of the ICM that if you got the chance to see Simon McBurney’s play A Disappearing Number then you should. Well, I have just learned that it has a short run in London coming up — from today to the 25th of this month. If you open their West End Leaflet you will find some information about the play, and tickets can be booked at this page. ### EDP19 — removing some vagueness September 6, 2010 In the comments on EDP18 we are considering a certain decomposition problem that can be understood in its own right. At various points I have asserted that if we can find a decomposition of a particular kind then we will have a positive solution to EDP. And at various points in the past I have even sketched proofs of this. But I think it would be a good idea to do more than merely sketch a proof. So in this post I shall (I hope) give a completely rigorous derivation of EDP from the existence of an appropriate decomposition. (Well, I may be slightly sketchy in places, but only about details where it is obvious that they can be made precise.) I shall also review some material from earlier posts and comments, rather than giving links. Representing diagonal matrices First, let me briefly look again at how the ROD (representation of diagonal) approach works. If $P$ and $Q$ are HAPs, I shall write $P\otimes Q$ for the matrix $A$ such that $A_{xy}=1$ if $(x,y)\in P\times Q$ and 0 otherwise. The main thing we need to know about $P\times Q$ is that $\langle x,(P\otimes Q)x\rangle=(\sum_{i\in P}x_i)(\sum_{j\in Q}x_j)$ for every $x.$ Suppose now that $D$ is a diagonal matrix with diagonal entries $d_1,\dots,d_n$ and that we can write it as $\sum_r\lambda_rP_r\otimes Q_r,$ where each $P_r$ and each $Q_r$ is a HAP. Then $\sum_r\lambda_r(\sum_{i\in P_r}x_i)(\sum_{j\in Q_r}x_j)=\sum_kd_kx_k^2.$ If $\sum_r|\lambda_r|=c\sum_kd_k$ and $x_k=\pm 1$ for every $k,$ then it follows that there exists $r$ such that $|\sum_{i\in P_r}x_i||\sum_{j\in Q_r}x_j|\geq c^{-1},$ and from that it follows that there is a HAP $P$ such that $|\sum_{i\in P}x_i|\geq c^{-1/2}.$ So if we can make $c$ arbitrarily small, then EDP is proved. (more…) Posted in polymath5 | 6 Comments » ### EDP18 — apparently P does not equal NP September 3, 2010 The title of this post is meant to serve various purposes. First and foremost, it is a cheap trick designed to attract attention. Secondly, and relatedly, it is a nod to the amusing events of the last week or so. [Added later: they were from the last week or so when I wrote that sentence.] But there is a third reason that slightly excuses the first two, which is that the current state of play with the EDP project has a very P$\ne$NP-ish feel to it. 
Indeed, that has been the case for a while: we are trying to find a clever decomposition of a diagonal matrix, which is a difficult search problem, even though we can be fairly confident that if somebody came up with a good candidate for a decomposition, then checking that it worked would be straightforward. And just in case that is not true, let’s make it trivially true by saying that we are searching for a decomposition that can easily be checked to work. If it can easily be checked to work, then it can easily be checked that it can easily be checked to work (the algorithm being to try to check it and see whether you succeed). But now I want to air a suggestion that reduces the search problem to another one that has similar properties but may be easier. A brief word also on why I am posting again on EDP despite the fact that we are nowhere near 100 comments on the previous post. The main reason is that, now that the rate of commenting has slowed to a trickle, it is far from clear that the same rules should apply. I think the 100-comment rule was a good sufficient condition for a new post, but now I think I want to add a couple more: if there is something to say and quite a long time has elapsed since the previous post, or if there is something to say that takes a while to explain and is not a direct continuation of the current discussion, then it seems a good idea to have a new post. And both these conditions apply. [Added later: this is a rather strange post written over a few weeks during which my thoughts on the problem were constantly changing. So everything that I say, particularly early on, should be taken with a pinch of salt as I may contradict it later. One approach to reading the post might be to skim it, read the very end a bit more carefully, and then refer back to the earlier parts if you want to know where various ideas came from.] (more…) Posted in polymath5 | 101 Comments » ### ICM2010 — final post September 2, 2010 The previous post was the final post in the sense of being the last post describing my experience of the ICM. But here I’ll just quickly collect together a few bits of information that it might be handy to have in the same place. I’ll start with links to the recordings of all the talks I have described that were recorded. (You can find these, and all the other talks, by going to the ICM website, but my experience is that they are organized in a rather irritating way: on one page you have a schedule but no links to videos, and on a separate page you have links to lots of videos but are not told which link is to which talk.) Then I’ll collect together my favourite quotes from my four days at the congress. Finally, I’ll give a collection of links. If anyone has any suggestions for possible additions to this page, I’ll be happy to consider them. Talks discussed on this blog Opening ceremony Part I (This starts with a close-up of Kevin O’Bryant, includes about 15 minutes before the ceremony started, which allows you to hear, not very well, the Indian music that was going on, and gets up to just before the announcement of the Fields medallists.) Opening ceremony Part II (This takes you from the announcement of the Fields medals to Martin Grötschel’s amusing discussion of impact factors.) 
Opening ceremony Part III (The last ten minutes, starts in the middle of Grötschel’s talk and includes his demonstration of the IMU page with all ICM proceedings on it) Laudationes Part I (Starts with twenty minutes of empty stage — the result of the laudationes starting late — and gives you all of Furstenberg on Lindenstrauss and the beginning of Arthur on Ngo) Laudationes Part II (The rest of Arthur on Ngo, then almost all of Kesten on Smirnov) Laudationes Part III (The rest of Kesten on Smirnov, then H-T Yau on Villani. Ends with a shot of the audience while Kalai gets ready to start talking about Spielman.) Laudationes Part IV (Gil Kalai’s talk with the introduction cut off, and the first half or so of Varadhan’s Abel lecture.) (more…) Posted in ICM2010, News | 11 Comments » ### ICM2010 — fourth day September 1, 2010 I’ve entitled this post “fourth day” in an attempt to encourage myself to write less and get this account finished: with each passing day I find that more has slipped out of my mind (for instance, there are several hours of this day that I no longer remember anything about), and in any case the fourth day of a nine-day conference that ended last week is hardly hot news any more. Having said that, I have tried the trick with several previous posts in this sequence and been forced to change their titles. Yet again the organizers gave the first slot of the day to a speaker I couldn’t bear to miss — David Aldous, one of the world’s very top probabilists. So yet again I arrived exhausted at the convention centre. Incidentally, here is a photo (from the second day, as it happens) that shows what arriving at the convention centre looked like. If you look closely you’ll see that there is a dramatic gender imbalance: that is because the “ladies” had been told to go to a different queue. At first I was extremely surprised by this, but there was a simple reason for the segregation: the male queue had a male frisker and the female queue had a female frisker. You can also just make out the airport-like metal-detecting cuboid skeletons we had to walk through on entering the building. (more…) Posted in ICM2010, News | 10 Comments »
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 57, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9623103141784668, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/63200-different-arrangments-number-ways.html
# Thread:

1. ## different arrangements: number of ways

Eight different colored blocks are in a box. How many different color arrangements taking eight blocks at a time can be made on a table? (Note: Order makes a different arrangement.)

2. ## different arrangements.. Help pleeez?

If I understand this correctly, there are eight blocks, and all of them will be used each time - it's just a question of what order they get used in. So there are 8 blocks you could pull the first time out. There are 7 blocks left that you could pull the second time out. Then, there are 6 blocks left that you could pull the third time out. There are 5 blocks left that you could pull the fourth time out. Etc. The idea is that the number of arrangements should be 8*7*6*5*4*3*2*1 = 40,320, based on the above. - Steve J

3. There are 8 different blocks and so the number of arrangements of 8 blocks possible is $8!$

4. ya'll are the best...
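A two-line check of this count (added as an illustration only; it uses the Python standard library):

```python
import math
from itertools import permutations

print(math.factorial(8))                        # 40320
print(sum(1 for _ in permutations(range(8))))   # 40320, counted explicitly
```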
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331755638122559, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/273489/how-to-find-the-c-satisfied-left-y-left-t-right-right-ct
How to find the C satisfied $\left| y\left( t\right) \right| < Ct$ $a\left( t\right)$ is a continuity function on $R^{+}$ (x>0), $y\left( t\right)$ is a function satisfied the equation $y''+a\left( t\right)y=0$, if $\int _{0}^{+\infty }t|a\left( t\right)|dt < \infty$,prove there exist a C>0 (C is a constant),such that if $t\in R^{+}$ and t is large enough, $|y\left( t\right)|\leq Ct$. I don't know where to start ,how to use the differential equation - 1 Write $y$ as an integral of $y'$. Then integrate by parts to create $y''$ there. Then use the equation. – user53153 Jan 9 at 14:03 @PavelM.Thanks, I will try it later. – frame99 Jan 9 at 14:05 – Babak S. Jan 9 at 14:24
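The claimed bound can be illustrated numerically before attempting a proof. The coefficient a(t) = 1/(1+t)^3 below is my own choice of example satisfying $\int t|a(t)|\,dt<\infty$, and the snippet assumes NumPy and SciPy; it only suggests the linear growth bound, it is not a proof.

```python
import numpy as np
from scipy.integrate import solve_ivp

a = lambda t: 1.0 / (1.0 + t) ** 3          # integrable against t, so the hypothesis holds

def rhs(t, u):                               # u = (y, y')
    return [u[1], -a(t) * u[0]]

sol = solve_ivp(rhs, (0.0, 1.0e4), [1.0, 1.0],
                rtol=1e-9, atol=1e-12, dense_output=True)
t = np.linspace(1.0, 1.0e4, 2000)
y = sol.sol(t)[0]
print(np.max(np.abs(y) / t))                 # stays bounded, consistent with |y(t)| <= C t
```

One standard route to an actual proof, following the hint in the comments: integrating $y''=-a(t)y$ twice gives $|y(t)| \le |y(1)| + |y'(1)|\,t + \int_1^t (t-s)|a(s)||y(s)|\,ds$; dividing by $t$ and applying Grönwall's inequality together with $\int s|a(s)|\,ds<\infty$ yields $|y(t)|\le Ct$.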
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8695200085639954, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/2416-urgent-algebra.html
# Thread:

1. ## URGENT - algebra

f(x) = x^3 - 2x^2 + ax + b, where a and b are constants. When f(x) is divided by (x-2), the remainder is 1. When f(x) is divided by (x+1), the remainder is 28. a) Find the value of a and the value of b. b) Show that (x-3) is a factor of f(x). Thanks a lot!

2. This is about the factor and remainder theorem: the remainder when f(x) is divided by (x-a) is just f(a). In this case, you're being told f(2)=1, f(-1)=28. That means that 2^3 - 2(2^2) + 2a + b = 1 and (-1)^3 - 2(-1)^2 - a + b = 28. This is a pair of simultaneous linear equations in a and b, namely 2a+b=1, b-a=31, with solution a=-10, b=21. Now you're asked about f divided by (x-3), so look at f(3) = 3^3 - 2(3^2) - 10(3) + 21 = 0. So the remainder on division by (x-3) is zero, and f is divisible by (x-3).

3. thanks a lot!!

4. Originally Posted by devilicious f(x) = x^3 - 2x^2 + ax + b, where a and b are constants. When f(x) is divided by (x-2), the remainder is 1. When f(x) is divided by (x+1), the remainder is 28. a) Find the value of a and the value of b. b) Show that (x-3) is a factor of f(x). Thanks a lot! Hello, I presume that you know how to do long division: $\left( x^3-2x^2+ax+b \right)/(x-2)=x^2+a \ remainder\ b+2a$ $\left( x^3-2x^2+ax+b \right)/(x+1)=x^2-3x+(a+3)$, with remainder b-a-3. According to the text of your problem you'll get a system of two linear equations: $\left\{\begin{array}{cc}2a+b=1\\-a+b-3=28\end{array}\right.$ You get a = -10 and b = 21. So your equation now reads: $f(x)=x^3-2x^2-10x+21=(x-3) \cdot (x^2+x-7)$, which shows that f(x) is divisible by (x-3). Greetings EB
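The numbers above are easy to verify with a computer algebra system (added illustration; assumes SymPy):

```python
from sympy import symbols, factor

x = symbols('x')
f = x**3 - 2*x**2 - 10*x + 21   # a = -10, b = 21

print(f.subs(x, 2))    # 1  -> remainder on division by (x - 2)
print(f.subs(x, -1))   # 28 -> remainder on division by (x + 1)
print(f.subs(x, 3))    # 0  -> so (x - 3) is a factor
print(factor(f))       # (x - 3)*(x**2 + x - 7)
```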
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9297773241996765, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/11567/gcdbx-1-by-1-b-z-1-b-gcdx-y-z-1/11636
# $\gcd(b^x - 1, b^y - 1, b^ z- 1,…) = b^{\gcd(x, y, z,…)} -1$ [duplicate] Possible Duplicate: Number theory proving question? Dear friends, Since $b$, $x$, $y$, $z$, $\ldots$ are integers greater than 1, how can we prove that $$\gcd (b ^ x - 1, b ^ y - 1, b ^ z - 1 ,\ldots)= b ^ {\gcd (x, y, z, .. .)} - 1$$ ? Thank you! Paulo Argolo - – Timothy Wagner Nov 23 '10 at 20:09 1 @Timothy: ah, you're right. I should've checked for duplicates. – Qiaochu Yuan Nov 23 '10 at 20:20 ## marked as duplicate by Qiaochu Yuan, Aryabhata, Robin Chapman, KennyTMNov 25 '10 at 20:07 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. ## 2 Answers It suffices to prove it for two terms, that is, $\gcd(a^n - 1, a^m - 1) = a^{\gcd(n,m)} - 1$. The basic idea is that we can use the Euclidean algorithm on the exponents, as follows: if $n > m$, then $$\gcd(a^n - 1, a^m - 1) = \gcd(a^n - 1, a^n - a^{n-m}) = \gcd(a^{n-m} - 1, a^m - 1).$$ So we can keep subtracting one exponent from the other until we get $\gcd(n, m)$ as desired. Another way to look at this computation is to write $d = \gcd(a^n - 1, a^m - 1)$ and note that $$a^n \equiv 1 \bmod d, a^m \equiv 1 \bmod d \Rightarrow a^{nx+my} \equiv 1 \bmod d$$ from which it readily follows, as before, that $a^{\gcd(n,m)} \equiv 1 \bmod d$, so $d$ dividess $a^{\gcd(n,m)} - 1$. On the other hand, $a^{\gcd(n, m)} - 1$ also divides $d$. What's really nice about this result is that it holds both for particular values of $a$ and also for $a$ as a variable, e.g. in a polynomial ring with indeterminate $a$. You can readily deduce several seemingly nontrivial results from this; for example, the sequence defined by $a_0 = 2, a_n = 2^{a_{n-1}} - 1$ is a sequence of pairwise relatively prime integers, from which it follows that there are infinitely many primes. By working only slightly harder you can deduce that in fact there are infinitely many primes congruent to $1 \bmod p$ for any prime $p$. - Hint $\$ The simple equivalences demonstrated below prove that both sides of your equation have the same common divisors $\rm\:d\:,\:$ therefore they have the same greatest common divisor. $\ \$ QED $$\begin{eqnarray}\rm\ \ mod\,\ d\!:\ \ b^M,\:b^N\equiv 1&\iff&\rm ord(b)\ |\ M,N\iff ord(b)\ |\ (M,N)\iff b^{\,(M,N)}\equiv 1\\ \rm i.e.\ \ \ d\ |\ b^M\!-\!1,\:b^N\!-\!1\ &\iff&\rm\ d\ |\ b^{\,(M,N)}\!-\!1,\qquad\ \ \, where \rm\quad (M,N)\, :=\, gcd(M,N) \end{eqnarray}$$ Note $\$ The conceptual structure at the heart of this simple proof is the ubiquitous order ideal. $\$ See my post here for more on this and the more familiar additive form of a denominator ideal. More generally $\rm\ gcd(f(m), f(n))\ =\ f(gcd(m,n))\ \ \ if\ \ \ f(n)\ \equiv\ f(n\!-\!m)\ \ (mod\ f(m)),\:$ and $\rm\: f(0)\ =\ 0.\$ See my post here for a simple inductive proof. In fact there is a q-analog: the result also holds true for polynomials $\rm\ \ f(n)\, =\, (x^n\!-\!1)/(x\!-\!1),\$ and $\rm\ x\to 1\$ yields the integer case (Bezout identity) - see my post here for a simple proof. -
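A brute-force check of the identity over small bases and exponents (added illustration only; Python standard library):

```python
from functools import reduce
from math import gcd

def gcd_all(nums):
    """gcd of an arbitrary collection of integers."""
    return reduce(gcd, nums)

def identity_holds(b, exps):
    # gcd(b^x - 1, b^y - 1, ...) == b^gcd(x, y, ...) - 1
    return gcd_all([b**e - 1 for e in exps]) == b**gcd_all(exps) - 1

print(all(identity_holds(b, (x, y, z))
          for b in range(2, 8)
          for x in range(2, 10)
          for y in range(2, 10)
          for z in range(2, 10)))   # True
```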
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356833696365356, "perplexity_flag": "head"}
http://faculty.virginia.edu/austen/blog/files/tag-statistical-mechanics.html
# Austen Lamacraft ## condensed matter and atomic physics statistical mechanics # Congratulations Yifei! 03/11/11 09:48 In August my student, Yifei Shi, had his first preprint out on the arXiv, concerning the fascinating statistical mechanics of bosons that can form single particle and pair condensates. A few days ago we found out that this work will be appearing in PRL shortly. Congratulations on your first paper, Yifei! Tags: statistical mechanics, topological defects, critical phenomena # Strings and things 09/04/11 10:32 Our PRL on the statistical mechanics of 2D polar condensates just came out this week. This is the work of my former postdoc Andrew James (now at Brookhaven), showing that a spin-1 Bose gases has an interesting phase diagram in two dimensions driven by the interplay of two different types of topological defects: vortices and strings. In general the order parameter in a spin-1 condensate is a complex three component vector. For the polar case, which corresponds to spin-spin interactions of antiferromagnetic sign (the situation prevailing in 23Na), this vector is restricted to be a real vector multiplied by a phase $$\phi=\mathbf{n} e^{i\theta}$$. This parametrization has some redundancy: $$(\mathbf{n},\theta)$$ and $$(-\mathbf{n},\theta+\pi)$$ describe the same state. An immediate consequence of this is that the elementary vortex in a polar condensate has only a $$\pi$$ winding of the phase, and thus half the circulation quantum, of a vortex in a regular superfluid. It must, however, coincide with a disclination in the vector $$\mathbf{n}$$. Below you can see a picture of a pair of such half-vortex / disclination defects, where the blue arrows indicate the phase $$\theta$$ and the red arrows the vector $$\mathbf{n}$$ The main point of our paper is that, once you turn on a magnetic field, the quadratic Zeeman effect creates an easy axis anisotropy that causes the $$\mathbf{n}$$ to align either parallel or antiparallel to the field. Thus in the picture above the red arrows mostly lie horizontally. However, they still have to reverse going around the center of each of the defects, but now this reversal is confined to a string of a well-defined thickness and energy per length (or tension) set by the field. The Kosterlitz-Thouless transition mediated by the vortices and the Ising transition mediated by the strings fit together in an interesting way. We’ll have more to say about systems with this kind of phase diagram soon! Tags: spinor condensates, statistical mechanics, topological defects
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9137721657752991, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/108635-idempotent-matrix.html
# Thread: 1. ## Idempotent matrix This involves a bit of distribution theory but I think the solution involves more linear algebra than statistics so I'm posting here. Question: A is an idempotent matrix. If $X \sim N (0, I_k )$ (where X is a n dimensional vector). The rank of A is m. Show that $X'AX \sim \chi ^2_m$. (Hint: Use the fact that A's eigenvalues are either 0 or 1) My attempt: I'm thinking I have to diagonalise A? So let A = SDS' where D is a diagonal matrix with A's eigenvalues on its diagonal. So $X'AX = X'SD^{\frac{1}{2}}D^{\frac{1}{2}}S'X$ Let $T = D^{\frac{1}{2}}S'X$ Then E(T) = 0 and $Var(T) = D^{\frac{1}{2}}S'SD^{\frac{1}{2}}$ ?? I think I'm not going anywhere with this. 2. Originally Posted by WWTL@WHL This involves a bit of distribution theory but I think the solution involves more linear algebra than statistics so I'm posting here. Question: A is an idempotent matrix. If $X \sim N (0, I_k )$ (where X is a n dimensional vector). The rank of A is m. Show that $X'AX \sim \chi ^2_m$. (Hint: Use the fact that A's eigenvalues are either 0 or 1) My attempt: I'm thinking I have to diagonalise A? So let A = SDS' where D is a diagonal matrix with A's eigenvalues on its diagonal. So $X'AX = X'SD^{\frac{1}{2}}D^{\frac{1}{2}}S'X$ Let $T = D^{\frac{1}{2}}S'X$ Then E(T) = 0 and $Var(T) = D^{\frac{1}{2}}S'SD^{\frac{1}{2}}$ ?? I think I'm not going anywhere with this. What is N(0,I_k)? What does " X ~ N(0, I_k) " mean? What is X_m? A characteristic fucntion of some set? Later you write about E(T) and Var(T)...what are these?? Perhaps this is related to probability or something, but this section is about algebra, and the above looks chinese to me. Tonio
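To address the statistics side of the question: in this context, X ~ N(0, I_k) means X is a standard multivariate normal vector, E(T) and Var(T) are its mean vector and covariance matrix, and χ²_m is the chi-square distribution with m degrees of freedom, so this is a distribution-theory exercise rather than pure algebra. The claim can be illustrated by simulation. The sketch below is not from the thread; it assumes NumPy/SciPy, takes A symmetric as well as idempotent (which is implicit in the diagonalisation A = SDS' attempted above), and builds A as an orthogonal projection of rank m.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, m = 6, 3

# A symmetric idempotent matrix of rank m: orthogonal projection onto a random m-dim subspace
Q, _ = np.linalg.qr(rng.normal(size=(k, m)))
A = Q @ Q.T                                   # A @ A == A and trace(A) == m

X = rng.normal(size=(100_000, k))             # rows are independent N(0, I_k) vectors
q = np.einsum('ij,jk,ik->i', X, A, X)         # quadratic form X' A X for each row

# Kolmogorov-Smirnov comparison with the chi-square(m) distribution
print(stats.kstest(q, 'chi2', args=(m,)))     # tiny KS statistic: matches chi^2_m closely
```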
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9192397594451904, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/313/how-to-collect-result-continuously-interruptible-calculation-when-running-para?answertab=active
# How to collect result continuously (interruptible calculation) when running parallel calculations? This is the most common pattern to compute a table of results: ````Table[function[p], {p, parameters}] ```` (regardless of how it's implemented, it could be a `Map`) The problem with this is that if the calculation is interrupted before it's finished, the partial results will be lost. We can do this in a safely interruptible way like so: ````Do[AppendTo[results, {p, function[p]}], {p, parameters}] ```` If this calculation is interrupted before it's finished, the intermediate results are still preserved. We can easily restart the calculation later, for those parameter values only for which `function[]` hasn't been run yet. Question: What is the best way to achieve this when running calculations in parallel? Assume that `function[]` is expensive to calculate and that the calculation time may be different for different parameter values. The parallel jobs must be submitted in a way to make best use of the CPU. The result collection must not be shared between the parallel kernels as it may be a very large variable (i.e. I don't want as many copies of it in memory as there are kernels) Motivation: I need this because I want to be able to make my calculations time constrained. I want to run the function for as many values as possible during the night. In the morning I want to stop it and see what I got, and decide whether to continue or not. Notes: I'm sure people will mention that `AppendTo` is inefficient and is best avoided in a loop. I think this is not an issue here (considering that the calculations run on the subkernels and `function[]` is expensive). It was just the simplest way to illustrate the problem. There could be other ways to collect results, e.g. using a linked list, and flattening it out later. `Sow`/`Reap` is not applicable here because they don't make it possible to interrupt the calculation. About the long running time: The most expensive part of the calculations I'm running are in C++ and called through LibraryLink, but they still take a very long time to finish. - ## 4 Answers Regarding using Sow instead of AppendTo, you may find this trick useful: ````Last[Last[Reap[CheckAbort[Do[Pause[0.1]; Sow[x], {x, 30}], ignored]]]] ```` (Try running this and aborting it partway through. It runs for 3 seconds due to the `Pause[0.1]` commands.) Do is used instead of Table, and the results are returned with Sow. The CheckAbort catches when you abort your computation partway through and does the useful tidying up (in this case, returning something, anything, to the enclosing Reap). You can combine this with a version of Sow that always run on the master kernel: ````SetSharedFunction[ParallelSow]; ParallelSow[expr_] := Sow[expr] ```` (Tangentially related blog post I did: http://blog.wolfram.com/2011/04/20/mathematica-qa-sow-reap-and-parallel-programming/) Then you could use this parallelized version: ````In[3]:= Last[ Last[Reap[ CheckAbort[ParallelDo[Pause[0.1]; ParallelSow[x], {x, 30}], ignored]]]] Out[3]= {6, 1, 7, 2, 8, 3, 9, 4, 10, 5, 16, 11, 17, 12, 18, 13, 19, \ 14, 20, 15, 21, 26, 22, 27, 23, 28, 24, 29, 25, 30} ```` However, as you can see, the results come in in an unpredictable order so something slightly cleverer is in order. 
Here is one way (probably not the best but the first thing I thought of): ````In[5]:= Catch[ Last[Last[ Reap[CheckAbort[ Throw[ParallelTable[Pause[0.1]; ParallelSow[x], {x, 30}]], ignored]]]]] Out[5]= {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, \ 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30} ```` The Throw is used to jump outside the Reap if the ParallelTable finishes. (Getting messy!) To be safe this should be wrapped up in a function and tags (a.k.a. the optional second argument) should be used on the Throw, Catch, Sow, Reap. - It's not a problem if the results come in in an unpredictable order. All I need is to run `function` for each value of `p`, I can always correct the order later if it necessary (but in my practical problem it probably won't be) – Szabolcs Jan 20 '12 at 8:49 Andrew, it is not completely clear to me why `ParallelSow` will always run `Sow` on the master kernel if it is set as a shared function. It is not obvious from looking at the documentation of `SetSharedFunction`. Do shared functions always get evaluated on the master kernel, even when called from a parallel kernel? – Szabolcs Jan 20 '12 at 17:28 2 – Andrew Moylan Jan 20 '12 at 17:55 This single piece of information just solved (or simplified) many more parallel-related problems I had! – Szabolcs Jan 20 '12 at 20:13 Does `SetSharedVariable` act the same way? I objected to the other answer here because I thought that after doing `SetSharedVariable[var]`, each kernel would have a separate copy of `var`(meaning `$KernelCount + 1` copies in memory). Is this really the case? If not, does `var` get transferred to a parallel kernel at every access, then transferred back (meaning temporary duplication of `var`)? It's important to underatnd this to be able to optimize memory usage and performance. – Szabolcs Jan 20 '12 at 23:58 Here's another possible way using dynamic, first the serial version: ````timeLeft[start_, frac_] := With[ {past = AbsoluteTime[] - start}, If[frac == 0 || past < 1, "-", Floor[past/frac-past]] ]; SafeMap[func_, list_] := DynamicModule[ {len, size, abortedQ, lastresult, starttime, n}, len = Length[list]; size = 0; abortedQ = False; lastresult; starttime = AbsoluteTime[]; n = 0; Monitor[ Table[ If[TrueQ[abortedQ], $Aborted, lastresult = func[list[[n]]]; size += ByteCount[lastresult]; lastresult ], {n, Range[len]} ], Refresh[Panel@Column[{ ProgressIndicator[n/len, ImageSize -> 350], Row[{Button["Abort", abortedQ = True], Grid[{{"Element", "Memory (kB)", "Time left (s)"}, {StringForm["``/``", n, len], ToString @ NumberForm[size/10.^3, {3, 1}], ToString @ timeLeft[starttime, n/len]} }, Spacings -> {1, 1}, ItemSize -> {10, 1}, Dividers -> Center ]}, Spacer[5]] }], UpdateInterval -> 0.5, TrackedSymbols -> {} ] ] ] ```` Try running the following example: ````results = SafeMap[(Pause[1];Plot[x^2-#^2==0,{x,0,10}])&, Table[i, {i,10}]] ```` The panel displays the current element being evaluated as well as the total memory used thus far, and the estimated time remaining. If you click "abort" you get the partially generated list. 
For parallelization, only a minor change is needed: ````Clear[SafeMap]; SafeMap[func_, list_, ker_:$KernelCount] := DynamicModule[ {len, bag, size, lastresults, starttime, n, results, t}, len = Length[list]; size = 0; starttime = AbsoluteTime[]; results = {}; SetSharedVariable[results, size]; Monitor[ t = Table[ParallelSubmit[{i}, With[{r = func[list[[i]]]}, size += ByteCount[r]; AppendTo[results, {i, r}]]], {i, Range[len]}]; CheckAbort[WaitAll[t], AbortKernels[]]; SortBy[results, First] , Dynamic@Refresh[Panel @ Column[{ ProgressIndicator[Length[results]/len, ImageSize -> 350], Row[{Button["Abort", AbortKernels[]], Grid[{{"Element", "Memory (kB)", "Time left (s)"}, {StringForm["``/``", Length[results], len], ToString @ NumberForm[size/10.^3, {3, 1}], ToString @ timeLeft[starttime, Length[results]/len]} }, Spacings -> {1, 1}, ItemSize -> {10, 1}, Dividers -> Center ]}, Spacer[5]] }], UpdateInterval -> 0.1, TrackedSymbols -> {} ] ] ] ```` This allows you to abort by CMD+. or by pressing the button. For example ````SafeMap[(Pause[1]; #^2)&, Table[i, {i, 30}], 4] ```` - Nice progress indicator! :-) Can you make it work for parallel calculations? (The main problem when I asked the question was to preserve all results computed so far when using several subkernels, e.g. `ParallelTable`, `ParallelMap`, `ParallelDo`, etc.) – Szabolcs May 24 '12 at 19:39 I amended the function to use ParallelSubmit but I think the other functions wouldn't be difficult to use instead. – M.R. May 24 '12 at 22:10 @Szabolcs can you think of other questions like this where a nice GUI interface would be appropriate? – M.R. May 24 '12 at 22:15 In this answer, I have implemeted Abort-able `Table` based on `Do` loops and `Reap`-`Sow`, using a technique similar to what is described in the answer of @Andrew (look at the bottom of the post for the second, more compact implementation). It seems that all you have to do is take that code and replace `Do` with `ParallelDo`. - I'd forgotten about that answer! Have you ever thought about compiling your own Mathematica recipe book? – Simon Jan 20 '12 at 23:57 1 @Simon Yes, I thought of it - but I still wanted to avoid the recipe format. My idea is to select the best "recipes" and include them in my new book, but in a way which would not take them out of context. Many of these code snippets represent complex code and / or concepts, so in a recipe format I will either have to not explain them properly, or repeat the same rather complex and / or long explanations over and over. This is one of the major challenges of writing an advanced book with the goal of making it accessible, I think. – Leonid Shifrin Jan 21 '12 at 9:09 @Simon By the way, I checked the code and the situtation is not as simple as I thought. My suggestion will also have to use Andrew's `ParallelSow`, and even then, is quite sow for some reason. I will look into that as time permits. – Leonid Shifrin Jan 21 '12 at 9:11 Have you tried `ParallelDo`? Here's an example implementation: First, we need a function to simulate a lengthy calculation. `f` randomly generates a number $0<p<1$; if the number is $>0.5$ it calls `Abort[]`, otherwise it returns the number afterwards. 
````f := If[# > .5, Abort[], #] &@RandomReal[] ```` Generate some dummy data (not sure whether `SetSharedVariable` is necessary, at least it doesn't hurt), ````data = ConstantArray[0, 10] SetSharedVariable[results]; results = {}; ```` Launch the calculation and return results, ````ParallelDo[AppendTo[results, f], {i, 10}] results ```` The program aborts the calculation (almost) every time, and then prints the values calculated. Note that this method may become very ineffective for longer result lists, as they're stored as fixed-size arrays internally. In that case. - I'd prefer not sharing the result collection between kernels as it may be a very large data structure. – Szabolcs Jan 20 '12 at 0:06 1 Then you'll have to create a result variable for each individual kernel and combine them afterwards. What you want to have is a single object filled by multiple kernels after all, so it has to be shared somehow by them. – David Jan 20 '12 at 0:08 The single object that holds the result doesn't really need to be shared between kernels (i.e. I don't want all of them to have a full copy of it, as it happens in the `SetSharedVariable` method you describe). I would simply like to return results from each kernel, and let the main kernel add them to the result collection. It is not quite trivial how to do this. I'll edit my question and clarify about not sharing the result collection (should have done it before) – Szabolcs Jan 20 '12 at 0:14 (Actually if I do this in practice and leave the calculation running indefinitely, I'll need to protect against running out of memory.) – Szabolcs Jan 20 '12 at 0:18 – Szabolcs Jan 20 '12 at 0:29 show 1 more comment lang-mma
http://mathoverflow.net/questions/77071/generalized-beilinson-spectral-sequences/77109
Generalized Beilinson spectral sequences

Assume we are working on $\mathbb{P}^n$ for some $n\geq 1$ and we have a coherent sheaf $F$ on it. Then there are two (well known?) spectral sequences $E_r^{p,q}$ with $E_1$-term:

$E_1^{p,q}=H^q(\mathbb{P}^n,F(p))\otimes \Omega^{-p}(-p)$

$E_1^{p,q}=H^q(\mathbb{P}^n,F\otimes \Omega^{-p}(-p))\otimes O_{\mathbb{P}^n}(p)$

both converging to $F$. Here $\Omega^{p}=\wedge^p((T_{\mathbb{P}^n})^{*})$, see e.g. Okonek/Spindler/Schneider Ch.2 §3. In special cases these sequences lead to a monad description for $F$, i.e. a complex $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$, which is exact at $A$ and $C$, such that $F$ is the cohomology of this complex. The main ingredient of the proof of this fact is the existence of a Koszul resolution for the diagonal $\Delta\subset \mathbb{P}^n\times\mathbb{P}^n$.

Now assume we have an additional "structure" sheaf $R$ of noncommutative rings or algebras on $\mathbb{P}^n$, such that $F$ is also an $R$-module. Is there a generalization or a way to adjust these spectral sequences that also uses the extra structure as an $R$-module? Maybe there are more general Koszul resolutions which one can use here? Everything in a more noncommutative setting. Maybe there is something like this in the literature?

One case I'm especially interested in is that of maximal orders on the projective plane. That is, $R$ is a sheaf of noncommutative algebras, say of rank $4$, which is an Azumaya algebra $\mathcal{A}$ on the complement of a (smooth) divisor $D\subset \mathbb{P}^2$, such that the generic algebra $R_\eta$ is a nontrivial quaternion algebra. So we have a trace pairing $tr: R\otimes R \rightarrow O_{\mathbb{P}^2}$ which is nondegenerate away from $D$. For every point $p\in D$ the module $R_p$ is a maximal $O_p$-order in the generic stalk $R_\eta$.

1 Answer

The answer depends very strongly on your algebra $R$. For example, a particular case is when $R = O + L$ (where $L$ is a line bundle) and the multiplication is given by a map $L^2 \to O$ (given by a divisor $D$); then the category $Coh(P^n,R)$ is equivalent to $Coh(X)$, where $X$ is the double covering of $P^n$ ramified in $D$. And the homological properties of $D^b(Coh(X))$ very strongly depend on $D$.

- Thanks. That's very interesting, but also disappointing, if this already gets so difficult in such an "easy" example. I added the type of algebras I'm interested in in the main post, but it seems they are too complex to hope that anything I expected could work. The fact that $Coh(P^n,R)$ is equivalent to $Coh(X)$: does this follow from the fact that if $f: X \rightarrow P^n$ is the double cover defined by $L$, then $f_{*}O_X=R$? – TonyS Oct 4 2011 at 15:49
- Yes, precisely. – Sasha Oct 4 2011 at 18:09
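For reference, the Koszul resolution of the diagonal alluded to above is the Beilinson resolution (the standard statement, as in Okonek/Spindler/Schneider):

$0\rightarrow \mathcal{O}(-n)\boxtimes\Omega^{n}(n)\rightarrow\cdots\rightarrow \mathcal{O}(-1)\boxtimes\Omega^{1}(1)\rightarrow \mathcal{O}_{\mathbb{P}^n\times\mathbb{P}^n}\rightarrow \mathcal{O}_{\Delta}\rightarrow 0,$

the Koszul complex of the canonical section of $\mathcal{O}(1)\boxtimes T_{\mathbb{P}^n}(-1)$ vanishing on $\Delta$; pulling back $F$ along one projection, tensoring with this resolution, and pushing forward along the other projection produces the two spectral sequences above.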
http://boundary.com/blog
Dennis Callaghan on the Changing World of IT Monitoring Posted by Boundary on May 17th, 2013 Dennis Callaghan Dennis Callaghan is a senior analyst on 451 Research’s Infrastructure Computing for the Enterprise (ICE) team, leading the firm’s coverage of application and Internet performance management, service-level monitoring and management, and IT asset and service management. Follow Dennis on Twitter at @DennisCallaghan. What are the biggest challenges for small and large companies today in the area of IT operations management? What comes back to us from the research we do, whether it’s about product trends or talking to customers, there are two things that stand out. First, the nature of distributed IT environments means that you need a good discovery plan in place to find out what is the infrastructure underpinning the applications, and where that infrastructure is located. Then, whether the application is running in traditional environments or in the cloud, you need to diagnose the performance issue and the impact on the end-user―what they are experiencing. What performance metrics are companies most interested in tracking these days? Response time is a key measure. How long do you have to wait to get a response or result from doing a transaction or a task? It’s always important to look at backend metrics like CPU and I/O but mostly, you need to understand how users are affected. And if you’re running in the cloud, you really need to understand latency. With the cloud and distributed environments, have these metrics changed or increased in number? The metrics themselves haven’t changed, but it’s more that the infrastructure is different and the environment is more complex. A decade or so ago, companies were primarily looking at the app server, the Java server. Now, they are also looking at network service levels, database performance, the storage array, the web server and they also need to get some decent end user metrics at the browser level. So there are a lot of different areas that people are looking at to get a more complete view of app performance, and when there’s an issue, they have to be able to triage it and figure out how to best remedy it. The reason behind this drive for a more comprehensive view is partly the demand for instant results. As an example, Google search now serves up results when you start typing a word into the search box. Yet the larger issue is that you have so many different systems interfacing with each other today. A few weeks ago, an IT end user  told me that he has several different systems that are exchanging process flows and he has no way to monitor the transaction performance between those systems, just within them. That is a challenge a lot of people are dealing with. How are the tools evolving to meet these needs – and do you see the continuation of the need for many tools a.k.a. Best of Breed environment? There are always going to be new issues for products to solve, and vendors tend to specialize in covering different layers of the infrastructure so having five or six tools is very common. Companies also don’t want to trust their environment to just one vendor. People often want an integrated console to see all this information in one place, but still, using multiple consoles is pretty common. There have been standardization drives to allow performance management tools from different vendors to work together, but they haven’t really gone anywhere. WSDM—Web Services Distributed Management—hasn’t been heard from in almost seven years now. 
Are large companies switching away from their legacy APM and monitoring tools? On a grand scale, no, not really. I think that’s true when they are launching development projects with new technologies such as Ruby on Rails and PhP. But they are not throwing out systems from CA, BMC, HP or IBM―at least not yet. But these one-off projects and proof of concepts with some of the newer vendors can certainly lead to that transition down the road. Why not? Aren’t these tools poorly suited for new, distributed environments? It has to do with the investment they’ve made in the legacy systems but also it has a lot to do with the back-end applications that the tools are monitoring. Believe it or not, mainframes are still very common especially in the Fortune 500. And the incumbent technologies are the best ones to monitor those environments. Yet the startups are beating incumbents on new technology clients that are running modern, distributed environments. For instance, AppDynamics (a Boundary partner) is the main management tool for Hotels.com and also has Netflix as a customer. So that’s how the market is being segmented right now. What area of innovation is hot right now in the space of application and network monitoring? We are very high on the SaaS model. The new vendors are all SaaS-based. It is just so much easier to get up and running and maintain the software that way. Boundary does a lot of big data handling that is offloaded to the cloud, which is another wonderful benefit for a customer. Lots of VC dollars are being poured into this space right now and we are only at the tip of the iceberg. The Big 4 have all introduced new SaaS offerings because they have to in order to compete, but they are going to be playing catch-up for a while. Boundary helps Netradar launch mobile performance app Posted by Boundary on May 16th, 2013 Netradar is a global mobile network measurement and analysis service and smart phone app designed by researchers at Aalto University in Finland. Netradar delivers data on mobile performance to users through a smartphone app (Android, iOS, Windows Phone, Symbian and Meego). Their app has more than 30,000 installs to date. Users can see an aggregated map of network throughput between different carriers and various statistics about devices and networks. Arttu Tervo, a developer with Netradar, re-built the server architecture in early 2013 using Amazon Web Services primarily, to prepare for growth in users and features. How Boundary Is Helping Network planning and potential cost reduction:  Tervo deployed Boundary to help make design decisions as he was rewriting Netradar. “Using Boundary helped me detect potential issues in the system where there would be over-use of the network,” he observes. “Seeing that ahead of time was very important so that we can optimize our system and save on our service costs.” He estimates that Boundary helped conserve 50 percent of the required internal data transfer amounts needed to run the platform. Development agility: Boundary is also helping the Netradar team with rapid development processes, through enabling a simple method to test new features in the staging environment for their performance-readiness. “With the help of automated Capistrano deployment events to Boundary, we can see the effect of even the smallest change on network usage immediately,” Tervo remarks.  
“It hasn’t been this easy in my previous projects.”

“The most important benefit of Boundary to Netradar so far is the ability to detect possible performance issues before we make any changes in the production system. As we grow and scale our service around the globe, we will benefit from Boundary’s trending views showing how much data we transferred over our customer base, which will help us plan better for our AWS usage.” — Arttu Tervo, developer with Netradar

Approximate Heavy Hitters – The SpaceSaving Algorithm

Posted by Ben Linsay on May 14th, 2013

Here at Boundary we do analytics a little bit differently than the standard “store it and batch it” approach. We’ve built a streaming analytics system that processes data as it comes to us in real time. Given our first-hand experience we think streaming algorithms are really cool and insanely practical. Fortunately they’re heavily researched. Unfortunately they’re also relatively new in the professional world, which means that when talking about these algorithms, the divide between the academic world and industry can feel pretty large. We thought we’d do our part to help bridge that gap by publishing a few blog posts about cool streaming algorithms we’ve come across.

Defining the Problem

The problem that inspired this post can be stated pretty simply. Given a single pass over an infinite stream of data, find the most frequently occurring items in the stream. This is a pretty common problem, and one we encounter at Boundary. It turns out that the provably most efficient solution to this problem scales badly (we’ll come back to this). This is also a common enough problem that there’s a lot of research being done on trying to get around this restriction by finding an approximate solution. Approximation algorithms are awesome – they’re almost always far more space efficient or computationally efficient than the exact solution to the problem they’re trying to solve, in exchange for some (hopefully bounded) form of error in the solution. Depending on the problem and the kind of error it introduces, using an approximate algorithm can feel like getting something for nothing.

We’ll be looking at an approximation algorithm called SpaceSaving that tries to solve this problem in the minimum amount of space. It turns out, the algorithm is able to give you an approximate solution with bounded error (the bound on error is determined ahead of time) in a fixed amount of space. That’s pretty awesome. Graham Cormode and Marios Hadjieleftheriou published an excellent overview of some known approximate solutions to this problem in particular. If this blog post is even mildly interesting to you, I highly recommend this paper, and will be using some terminology from it during this post.

We need to spend a few minutes on definitions before diving in: We’re going to assume that our infinite stream of data contains elements of some infinite set. We’ll call the set of items we’ve seen so far $A$. So, at any point in time, we’ve seen exactly $|A|$ distinct elements. We’re going to use $N$ to refer to the length of the stream so far. This is the total number of items seen so far, not the total number of distinct items seen so far, so $N$ is always at least as large as $|A|$. We also need a formal, mathematical definition of the problem. Finding the most frequently occurring items means finding items that occur more often than some threshold.
This threshold is usually defined as some fraction of $N$ – this bounds the solution space relative to the size of the stream without forcing a solution to include exactly $k$ items (for some arbitrary choice of $k$). Given that, the problem can be formalized as finding all items in a stream whose frequency is greater than $p N$ for some $0 < p < 1$. The approximate version of the problem introduces some error $0 < \epsilon < 1$ and asks for all items with frequency greater than $(p - \epsilon)N$. In English, we’re trying to find items whose frequency accounts for more than a given percent of the stream, and we’re going to allow some error around what the actual percentage is. That is, if we specify that we’d like to see items that each account for at least 10% of the stream, and 2% error, it’s acceptable to include items that account for only 8% of the stream. We’ll borrow some nomenclature from the overview paper I mentioned above, and refer to this as “the frequent items problem”. Frequent items are also referred to in the literature as “heavy hitters”. We’ll throw that name around too.

It’s pretty common for items in a stream to have some kind of explicit value attached. The canonical example is a stream of cash-register transactions, where the world’s unluckiest cashier has to record how much money the store made in an infinite number of transactions. We can tweak our definition of the frequent items problem to cover this by letting items in a stream have non-negative weights. If you see an item $e_1$ with weight $w_1$, that’s basically equivalent to seeing item $e_1$ exactly $w_1$ times in a row. There’s more rigor you can apply, especially if you’d like to talk about negative weights, but we’ll ignore all of that for this blog post.

Actually estimating the frequency of individual items can be formalized as finding an estimate $f'$ of an item’s frequency $f$ such that $|f - f'| < \epsilon N$. This says that estimates are allowed to be incorrect, but the difference between the estimate and the true value has to be bounded by a fraction of the length of the stream so far. The error-term $\epsilon$ here is the same as the $\epsilon$ above. Again, we’ll borrow some nomenclature, and refer to this as “the frequency estimation problem”.

Finally, we’re going to be pragmatic about what infinity means here: really really really big. Annoyingly, we’re going to define it as big enough that a solution involving more computing power or more storage just won’t work – the stream is big enough that you can never buy enough RAM, or can’t just add another node to your cluster-computing setup of choice. Think about something like the Twitter firehose or a high-volume stream of netflow data, or something with equal volume that never stops.

Solving the Problem Exactly

Let’s also spend a moment and look at the exact solution for counting frequent items, so that we have a baseline for understanding what we gain/lose by using an approximate algorithm like SpaceSaving. In pseudo-code the simplest way to find the most frequent items in a finite data-set looks something like:

```
weights = {} // Initialize a weight counter for all items to 0
for (item, weight) in data_set:
    weights[item] += weight
sort_by_weight(weights)
heavy_hitters = take_largest_10(weights)
```

In English, this keeps a running sum of weights per-item, and eventually sorts the list of observed items by weight.
There are a few things to point out. The exact solution to the problem requires keeping track of every item seen so far so that you can sum weights together. This means that the exact approach takes at least $O(|A|)$ space. This approach also needs to do a lookup for every item in the stream to increment the corresponding counter, and then needs to sort the list of all items, which takes $O(N) + O(|A|\log(|A|))$ time. This can be done a little more efficiently by using a priority queue instead of sorting the entire list in place – if you’re after the top-$k$ elements of a stream, you can finish in $O(N) + O(|A|\log(k))$ time. Since the time-bounds on this approach are fairly reasonable, we’re going to leave them alone for now. They will act as a nice baseline later, though, so don’t forget about them entirely.

However, the space bounds for this approach are a little more problematic. Scaling linearly with the number of distinct items in the stream, $O(|A|)$, isn’t really going to cut it; $|A|$ is only bounded by $N$, the length of the stream so far, which we defined so that it can be arbitrarily large. Our machine is going to start running out of available memory as our stream gets larger and larger and larger. This is the point at which you say “just add more RAM” and I remind you that we (in)conveniently defined the problem so that wasn’t an option. We’ll have to find another approach.

Wait, seriously, that works?

Invariably, while explaining the idea for this blog post to folks, someone would say – “wait, can’t you just keep around a fixed size priority queue and update it as you see new items? Just kick out the small things as you go”. As it turns out, this doesn’t end up working; a frequently occurring item with low weight can get accidentally ignored, as can a frequent item that doesn’t show up until well into the stream, since they’ll never be able to displace the smallest item in the priority queue. However it turns out that, with a bit of a tweak and a lot of math, an eerily similar approach ends up working.

Like we mentioned above, the SpaceSaving algorithm devised by Metwally et al. (and the accompanying StreamSummary data structure) solves both the frequent items problem and the frequency estimation problem using a fixed amount of space. In a little more detail, SpaceSaving is a deterministic, counter-based algorithm for solving the frequent items problem with fixed error guarantees relative to $N$. On heavily skewed data sets, the algorithm performs extremely well and the error bounds can be made extremely tight (the paper does an excellent job of showing the difference in error when the algorithm is applied to Zipfian data, but I’m not going to cover that here).

So, what’s the secret? SpaceSaving works by keeping exact `(item, count)` pairs for the first $m$ distinct items (we’ll discuss what $m$ is shortly) observed in the stream. Subsequent items, if they’re already monitored, increment the `count` in the proper pair. If an incoming item isn’t already being monitored, it replaces the `item` in the `(item, count)` pair with the smallest `count`, and then increments its `count` as usual. Any replaced item also needs an error associated with it – the first $m$ items are counted exactly, so their error counts are set to 0. Whenever an item replaces the previous minimum, it could have been seen anywhere between 1 and `min(count)+1` times, so its error counter gets set to `min(count)`, which is the most that item’s count may have been overestimated by.
In pseudo-code, SpaceSaving looks like this:

```
counts = { } // An empty map of item to count
errors = { } // An empty map of item to error count

for (item, weight) in stream:
    if len(counts) < m:
        counts[item] += weight
    else:
        if item in counts:
            counts[item] += weight
        else:
            prev_min = item_with_min_count(counts)
            counts[item] = counts[prev_min] + weight
            errors[item] = counts[prev_min]
            counts.remove_key(prev_min)
```

The algorithm has a few nice properties right off the bat:

• It’s deterministic. The same input in the same order always produces the same results. The algorithm doesn’t introduce any randomness.
• Every item in the stream increments some counter, so the sum of all counters will be equal to the current size of the stream, $N$.
• With a predetermined $\epsilon$ for the frequent items problem, the algorithm uses space inversely proportional to the error. In other words, the space required is $O(\frac{1}{\epsilon})$. This is where the $m$ mentioned above comes from: the algorithm will solve the frequent items problem with the given error if $m$ is chosen to be larger than $\frac{1}{\epsilon}$.

Whoa. Those are all pretty cool guarantees. Determinism is particularly cool for an approximate algorithm, but the fixed size and the sum of the counters always being $N$ is kind of confusing. The $m$ items that the algorithm tracks aren’t all going to be the most frequent items. This is where the error counts from the algorithm come in. An item is a frequent element if the `count - error` reported by the algorithm is larger than $pN$. From above, remember that $pN$ is just some fraction of the number of items seen so far, so this says that an item monitored by the algorithm is a frequent item if the counts that we can guarantee are larger than the cutoff. Conservative and correct. This wouldn’t be quite so useful, except that the authors also prove that any item whose true frequency is at least `min(count)` is monitored by the algorithm. The combination of these two properties is what makes SpaceSaving an approximate solution to the frequent items problem, and not just something weird you can do to your data.

This is also where the error $\epsilon$ comes back into play. Even though it monitors $m$ items, it doesn’t guarantee they’re all frequent items. In fact, it might report items that are not-quite-frequent enough, or might not report borderline items if their error terms are high enough. That’s a pretty cool algorithm.

The authors also spend part of the paper proposing a StreamSummary data structure that efficiently maintains (item, count, error) counters and tracks the minimum (there’s an implementation here). In exchange for using slightly more space, you can do the same thing with an implementation based on a priority queue and a hash-table to check whether or not an item is being monitored. This ends up being less space-efficient than StreamSummary in exchange for better insert/update and query times. We’ve been playing with an implementation in that style, and might post about it at some point in the future.

The only downside of SpaceSaving/StreamSummary seems to be approximating the individual frequencies of items. The way $\epsilon$ is defined, the error in the individual frequencies is relative to the number of items seen so far, and not relative to the frequencies themselves. This means that the error on the frequency counts for individual elements can be arbitrarily large, relative to the true frequency of the actual element.
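To make the update and query rules above concrete, here is a small illustrative Python sketch (a toy, not Boundary production code: the class name, the dict-based bookkeeping, and the linear scan for the minimum counter are choices made here for brevity, where the paper's StreamSummary structure would be used in earnest):

```
# Illustrative SpaceSaving sketch: at most m (item, count, error) triples are kept.
class SpaceSaving:
    def __init__(self, m):
        self.m = m          # number of monitored counters, roughly 1/epsilon
        self.counts = {}    # item -> estimated count
        self.errors = {}    # item -> maximum possible overestimate
        self.total = 0      # N, the total weight seen so far

    def update(self, item, weight=1):
        self.total += weight
        if item in self.counts:
            self.counts[item] += weight
        elif len(self.counts) < self.m:
            self.counts[item] = weight
            self.errors[item] = 0
        else:
            # Evict the item with the smallest count and take over its counter.
            victim = min(self.counts, key=self.counts.get)
            min_count = self.counts.pop(victim)
            self.errors.pop(victim)
            self.counts[item] = min_count + weight
            self.errors[item] = min_count

    def heavy_hitters(self, p):
        # Report items whose guaranteed count (count - error) exceeds p * N.
        threshold = p * self.total
        return {item: (count, self.errors[item])
                for item, count in self.counts.items()
                if count - self.errors[item] > threshold}

ss = SpaceSaving(m=5)
for token in "a b a c a b d a e f a b".split():
    ss.update(token)
print(ss.heavy_hitters(p=0.2))
```

On this short, skewed stream it prints `a` and `b` (each with error 0) as the only items whose guaranteed counts clear the 20% threshold.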
In practice, this unbounded relative error can mean an outrageous amount of error relative to the true frequency of an individual element. We haven’t thrown an implementation into production here at Boundary because of this – we’d like to be able to guarantee our customers that the flow-data we report to them is correct. Despite this downside, we think SpaceSaving (and StreamSummary) is particularly cool because it’s so simple: no hashing is involved and the correctness proofs are relatively accessible. It’s also a perfect example of the kind of tradeoff you have to make when dealing with an approximate algorithm. By giving up on error bounds on individual item frequencies, you gain bounded error on frequent items relative to $N$ and get to solve the problem in a fixed amount of space.

Bibliography

We linked to the following papers above: There’s much more excellent reading to be done on heavy hitter algorithms, and probabilistic algorithms in general: the gang at Aggregate Knowledge has done a short post on Count-Min Sketch, which solves the same heavy-hitters problem by taking a bloom-filter-esque approach.

Opscode and Boundary: Visibility Plus Change Automation

Posted by Boundary on May 10th, 2013

We spoke with Bryan Hale, VP of Online Services at cloud infrastructure automation provider Opscode, to learn how Opscode customers benefit from Chef’s integration with Boundary, and to get his opinion on the future of IT monitoring. Before Opscode, Hale worked at VC firm Draper Fisher Jurvetson (DFJ), and in the Corporate Development group at salesforce.com. Follow Bryan on Twitter: @halebr.

Boundary: What are the top problems that your product solves for customers?

Hale: Opscode Chef is an open source configuration management and IT automation framework whose corporate contributors include Facebook, Google, Rackspace, Dell, HP and hundreds of other leading technology companies. Chef is flexible enough to be wielded in any number of different use cases, but two of the most popular are:

• Server Configuration Management: Chef saves operations engineers time and money by making sure that every system in a given environment is built and maintained exactly as it needs to be, in a fully automated fashion: every package is installed, file written, service turned on and so forth.
• Continuous Delivery: Chef’s templates (“cookbooks” and “recipes”) can be managed exactly like code, allowing development and operations teams to quickly and consistently deliver enhancements to both applications and the infrastructure that supports applications.

Boundary: How did your product’s integration with Boundary come about, and how do customers benefit?

Hale: Many of our customers have highlighted the fact that Opscode and Boundary are working on complementary technologies that, when working in concert, can deliver an awful lot of value. Both Chef and Boundary are optimized for public, private and hybrid clouds, where visibility and instantaneous response are essential. The new Boundary event management service and API allows Chef to send richer information into Boundary, enabling IT operations teams to instantly see the detailed status of each automation request.

Boundary: From your view, how is the world of monitoring, and monitoring technologies, evolving?

Hale: It’s no secret that core IT infrastructure is undergoing a massive shift towards systems that are immediately and abundantly available and that need to handle rapid change. This has a wide set of consequences for the world of management and monitoring.
To keep up, monitoring tools face a tall order. They must keep up with a greater number of potential issues within environments that have more scale and complexity than ever before. Offerings such as Boundary, with very fine levels of granularity and a SaaS delivery model, are purpose-built for this new world. AppDynamics and Boundary: The Full-Circle View of Application Health Posted by Boundary on May 10th, 2013 We grabbed a few minutes of time from Steve Burton, director of technology evangelism at our partner, AppDynamics. The San Francisco-based application monitoring company is breaking new ground in application performance, and we are happy to be working with their team to bring integrated benefits to our customers. AppDynamics is focused on helping customers get to the root cause of Web application issues in complex environments faster– and in that they share a common goal with Boundary. Boundary: What are the top problems that your product solves for customers? Burton: If we are looking at where applications were just five years ago, it was pretty simple for customers to manage a few instances of Weblogic, Tomcat or Oracle. Back then applications were relatively static with not a lot of change, and so monitoring solutions just needed someone to tell them what components to monitor. Today, apps have become more distributed and with more components, distributed business logic and the mainstreaming of agile development, it’s now much harder to manage them with constant change. The older legacy APM solutions just don’t fit because you can’t keep re-configuring them every time your application changes. When a customer’s application slows down or crashes, they can find the root cause within minutes, not hours or days, using our software. We help customers solve complex problems quickly through simplifying how they monitor their production applications. Take Netflix, which has thousands of servers and hundreds of SOA apps behind its video service. They are using AppDynamics to resolve issue across all those apps. We can go down to the line of code and give them the visibility they need to find out what’s causing the slowdowns. That makes it a lot easier for people to stream “Arrested Development” any time they want to… Burton: We have an on-premise version as well as a SaaS version, and we also work with managed service providers that are deploying our SaaS product within their private cloud. Even though the two products are exactly the same, 70% of our business is still from the on-premise version. The SaaS business is growing, but in a lot of large companies, especially insurance and finance sectors, they don’t want data or products outside their firewall. There’s still a need or desire for control. Boundary: Tell us how your product’s integration with Boundary came about, and how this helps your customers? Burton: We are big on innovation and disruptive technology. So when Boundary launched 14 months ago, we saw that they were addressing the same problems as us but from a different perspective. Monitoring application performance from the network layer is disruptive because it’s bringing the network perspective to application performance. In addition, Boundary is up and running in 10 or 15 minutes and it’s very lightweight, which makes it time-to-value identical to AppDynamics. Ultimately, this integration gives customers the inside and outside view of their applications. It’s a 360-degree view of apps—and with this greater visibility you have a lot fewer blind spots. 
Our product does some serious deep drilling into applications and business transactions, and the events it collects appear as annotations in Boundary. If for instance a web transaction doesn’t complete, the customer can see this event in Boundary and drill into AppDynamics to see exactly what happened.

Boundary: How is the world of monitoring, and monitoring technologies, evolving?

Burton: Consumerization is playing a greater part. The apps on your iPhone can be downloaded in seconds and are simple to install and use. That may seem like an unfair bar for something as complicated as enterprise software, but the reality is, users are expecting an “iPhone app” experience from their enterprise software vendors. Software like AppDynamics and Boundary have to be that easy to install and use. In addition, application monitoring previously required domain expertise to bring any value. Today though, products are doing much more of the heavy lifting. Power users of monitoring technology used to be developers and architects, but today it’s Ops and application support teams. They don’t have that deep hardware and software expertise. And that’s fine because the entire troubleshooting process is becoming automated. In our latest release, we introduced something called Application Run Book Automation, which is a technology that orchestrates the workflows and provides automatic fixes for issues, based on policies. ZDNet called it “on-the-fly” fixing, and that’s very accurate.

Boundary: What do customers get most frustrated about when they’re trying to resolve performance issues?

Burton: I think the biggest hassle is the fact that performance issues come and go with no warning. A tiny percentage of developers, maybe 1%, performance test the code before releasing it. So the operations people are most always reactive when issues come up. They might be using log file data but that only gives about 10% of the picture on application availability. So not having complete visibility into an application’s behavior is frustrating. We think that’s something that we are doing a great job of solving for customers. Legacy APM users are really fed up with the visibility problem and they are moving off these old platforms in droves. Legacy APM vendors are becoming tired, obsolete and are on the verge of extinction. That’s good news for companies like AppDynamics and Boundary.
http://math.stackexchange.com/questions/tagged/arithmetic+computer-science
# Tagged Questions 1answer 61 views ### Euclidean Division to avoid need for floating point arithmetic In simple terms (that Google has been unable to provide the answer), is there an approach to dividing a whole integer by a quotient & remainder? As a specific example, ... 0answers 95 views ### Square root using simple arithmetic shift, inversion etc Suppose we have a function which is sampled by a sampling time 10ms. This function comes in to the computer, then this computer should calculate square root (for every sampling time) from that ... 0answers 113 views ### 1/3+2/3 in double precision When I add 1/3 and 2/3 in double precision, I ended up with $1.\boxed{111\ldots1}1\times2^{-1}$, where the boxed part is the 52-bit mantissa. By the rounding to even rule, I should round it up, right? ... 2answers 313 views ### Do calculators have floating point error? As a programmer, we have been told about floating points errors on computer. Do Calculators have floating point error too? Example. ... 2answers 107 views ### Using base-2 numbers: $(1010001)_{2}/(11)_{2}=?$ I wanna solve this simple equation using base-2 number system. $(1010001)_{2}/(11)_{2}=?$ I can't remember how to do that, normally I would start with $101/11$ but what should I do this base-2 ... 2answers 219 views ### bitwise operations so my question is what is the order of operations for bitwise operators << & | and also to see if my logic is right with the problem below (x03 << x08)+ x00 = 300 ...
http://mathhelpforum.com/differential-geometry/142415-functional-analysis-question.html
# Thread:

1. ## Functional Analysis question

I can't work this out: show $c_0$ (with the usual sup norm) is not a Hilbert space. My main problem stems from using the $c_0$ space, which I don't fully grasp. I know the method for this type of problem, so I'm really looking for a suggestion as to what to let x and y equal and what their norms should look like.

2. Originally Posted by blimp

I can't work this out: show $c_0$ (with the usual sup norm) is not a Hilbert space. My main problem stems from using the $c_0$ space, which I don't fully grasp. I know the method for this type of problem, so I'm really looking for a suggestion as to what to let x and y equal and what their norms should look like.

The way to show results like this is to use the parallelogram identity. You can choose almost any two elements of the space to see that they do not satisfy the identity. The easiest choice would be to take for example x to be the sequence in $c_0$ having a 1 for its first coordinate and 0 for every other coordinate; and take y to be the sequence having a 1 for its second coordinate and 0 for every other coordinate. Then x, y, x+y and x–y all have $c_0$-norm 1.
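Spelling out the final step of the suggested argument: with that choice of x and y the parallelogram identity fails, since

$\|x+y\|^2 + \|x-y\|^2 = 1 + 1 = 2 \neq 4 = 2\left(\|x\|^2 + \|y\|^2\right),$

so the sup norm on $c_0$ cannot come from an inner product, and hence $c_0$ with this norm is not a Hilbert space.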
http://math.stackexchange.com/questions/114518/commutativity-of-convolution-in-higher-dimensions/114825
# Commutativity of Convolution in higher dimensions

I have a basic question about how to show that convolution in dimension $n$ is commutative - or maybe it is rather a question about change of variables. So on $\mathbb{R}$ I know how to show commutativity: We have the definition \begin{equation} f \ast g := \int_{-\infty}^\infty f(u-t)g(t) \, dt \end{equation} Now by changing the variable $t \mapsto s = u - t$ I get \begin{equation} f \ast g := - \int_{\infty}^{-\infty} f(s)g(u - s) \, ds \end{equation} And so the minus sign helps me to switch the boundary and get back to the original form. Now, in more than one dimension the change of variables involves the absolute value of the determinant of the Jacobian, so how do I switch the boundary in this case (i.e. reverse the order of the limits)? Thanks very much!

- In higher dimension, you make the change $s=u-t$, whose absolute value of the Jacobian is $1$, and $\mathbb R^n$ is mapped to $\mathbb R^n$. – Davide Giraudo Feb 28 '12 at 19:12
- @DavideGiraudo: but the assignment $s = u - t$ changes the sign of the limits, how do I revert this change? Or is there no direction of integration in the higher dimensions? Sorry if that question sounds too dumb, I have very little knowledge of higher dimensional calculus. – harlekin Feb 28 '12 at 20:38
- If you don't want to work with high dimensional integrals, you can treat them as a succession of simple integrals, and you do the substitution $s_i:=u_i-t_i$ for each component (but you have to use Fubini's theorem). – Davide Giraudo Feb 28 '12 at 21:39
- ok, thanks for the help! – harlekin Feb 28 '12 at 23:09

## 1 Answer

We can see, thanks to the formula of change of variables in $\mathbb R^n$, that the substitution $s=u-t$ maps $\mathbb R^n$ to $\mathbb R^n$, and the absolute value of the Jacobian is $1$. If you don't want to use this formula, but only apply it to one-dimensional integrals, then write $u=(u_1,\ldots,u_n)$ and $t=(t_1,\ldots,t_n)$. Then $$f\star g(u)=\int_{\mathbb R^{n-1}}\int_{\mathbb R}f(u_1-t_1,\ldots,u_{n-1}-t_{n-1},u_n-t_n)g(t_1,\ldots,t_{n-1},t_n)\,dt_n\,dt_1\ldots dt_{n-1}$$ and putting $s_n=u_n-t_n$ we get $$f\star g(u)=\int_{\mathbb R^{n-1}}\int_{\mathbb R}f(u_1-t_1,\ldots,u_{n-1}-t_{n-1},s_n)g(t_1,\ldots,t_{n-1},u_n-s_n)\,ds_n\,dt_1\ldots dt_{n-1},$$ then switch the integrals and continue this process (write it by induction if you don't find this rigorous).
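For completeness, the direct multivariable substitution can also be written out (this just restates the first observation of the answer): with $s=u-t$ the Jacobian matrix is $\partial s/\partial t=-I_n$, so $|\det(-I_n)|=1$, the map $t\mapsto u-t$ is a bijection of $\mathbb R^n$ onto itself, and since the Lebesgue integral over $\mathbb R^n$ carries no orientation there are no limits to reverse:

$$(f\ast g)(u)=\int_{\mathbb R^n}f(u-t)\,g(t)\,dt=\int_{\mathbb R^n}f(s)\,g(u-s)\,ds=(g\ast f)(u).$$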
http://math.stackexchange.com/questions/129186/integrating-with-vector-coefficients-in-maple/129269
# Integrating with vector coefficients in Maple

I'm trying to use Maple to do something like this integral: $\displaystyle\int \frac{a\mu-b}{||a\mu-b||^3} \mathrm{d}\mu$ where $a, b$ are vectors and $\mu$ is a scalar I'm integrating by. But I can't figure out how to tell Maple to treat them like vectors, and specifically have it figure out how to distribute the dot products in the denominator (calculate the squared length of a vector by dotting it with itself). Right now I'm solving this specific integral like this:

`int((a*u-b)/sqrt(C*u^2-2*u*D+E)^3, u)`

I've basically been multiplying out the denominator into dot products, and treating the dot products as separate variables (eg: $a \cdot a = C, a \cdot b = D, b \cdot b = E$). But this adds a lot of bookkeeping and I'm tired of doing it like this. The equations I'm trying to integrate are becoming increasingly complex and I'd like the computer to handle more of the work. What's the proper way to solve this integral in Maple? Alternatively, is there a way to get Mathematica to do this integration?

## 2 Answers

This seems more like a programming than a math issue, so it might be better placed on stackoverflow (or a forum like www.mapleprimes.com). I suppose that you intend to map the integration action over each element of Vector a*mu-b, is that right? I don't quite understand what is the difficulty about computing the denominator. Sure, you could use that expression involving several dot-products. Or you could just take the 2-norm. It helps if you know something about mu and the entries of Vectors `a` and `b`, even if it's only that all are real-valued. Below, I use the assumption that all are positive (and thus implicitly real), with a pairwise relationship to `mu`.

````
restart:
N := 2: # or 3, 4...
a := Vector(N, symbol = A):
b := Vector(N, symbol = B):
with(LinearAlgebra): # to pick up LinearAlgebra:-Norm
myassumptions := seq(mu*A[i] > B[i], i = 1 .. N), positive: # up to you

sol1 := map(int, (a*mu - b)/(Norm(a*mu - b, 2)^3), mu) assuming myassumptions;
sol2 := int~((a*mu - b)/(sqrt((a*mu - b).(a*mu - b))^3), mu) assuming myassumptions;
sol3 := map(int, (a*mu - b)/(sqrt(a.a*mu^2 - 2*a.b*mu + b.b)^3), mu) assuming myassumptions;
sol4 := int~((a*mu - b)/(Norm(a*mu - b, 2)^3), mu) assuming myassumptions;

sol1 - sol2;
sol1 - sol3;
sol1 - sol4;
````

The last variation above is using the relatively new syntax wherein the `~` symbol makes the `int` command act elementwise over the Vector a*mu-b. What do you plan on doing with the results of this? Is it going to be symbolic, or (floating-point) numeric?

- Thanks, you actually showed me a number of things I didn't know you could do. Using Norm isn't great, because it seems to create a lot of csgn functions that aren't necessary at all (the Norm function does $|x_i|^2$). The sqrt form gets me closer to what I'm after. But even using sqrt, it breaks out the dot products into components instead of keeping them as $a \cdot b$, which would be preferable. And the solution it gives me has the X and Y terms explicitly stated, instead of keeping things in terms of the $a, b$ vectors. – Jay Lemmon Apr 9 '12 at 0:05

If you want a numeric result, I think the following in Mathematica will get you close to where you want to be. `a` and `b` can be of arbitrary dimensions:

````
NIntegrate[(a u - b)/Norm[a u - b]^3, {u, lowerLimit, upperLimit}]
````

If you do not want the 2 Norm, then `Norm[a,n]` will give you the nth Norm.
In three dimensions, symbolically, given

````
a = {a1, a2, a3}
b = {b1, b2, b3}
Integrate[(a u - b)/Sqrt[(a u - b)^2]^3, u]
````

Mathematica produces

{-(1/(a1 Sqrt[(b1-a1 u)^2])), -(1/(a2 Sqrt[(b2-a2 u)^2])), -(1/(a3 Sqrt[(b3-a3 u)^2]))}

- There is a Mathematica StackExchange on which wiser people than I might give you alternate solutions to your question. – image_doctor Apr 8 '12 at 11:02
- I'm actually looking for a symbolic result. :/ I'll look at the Mathematica exchange. – Jay Lemmon Apr 9 '12 at 0:06
http://mathoverflow.net/questions/78621/kahler-structure-on-a-complex-reductive-group/78670
## Kähler structure on a complex reductive group

Let $G$ be a complex reductive group, and $K$ a maximal compact subgroup (such that $K_{\mathbb{C}}=G$). By the polar decomposition theorem one has that, as manifolds, $G\cong T^*K$. The inherited symplectic structure is compatible with the complex structure, making $G$ into a Kähler manifold. On the other hand $G$ is a smooth affine variety, and therefore inherits a Kähler structure from any embedding in an affine space. The ring of regular functions of $G$ is described by the algebraic Peter-Weyl theorem, and affine embeddings are of course just given by choices of generators. Can one obtain the Kähler structure coming from $T^*K$ by any of these affine embeddings?

- How about any embedding? – Reimundo Heluani Oct 19 2011 at 23:29
- My formulation was slightly ambiguous, but that was the question I had intended. I've rephrased the question - thanks for the comment. – Johan Oct 20 2011 at 8:32
- I meant something different and I might be wrong, but given such an embedding, G also inherits a Kähler structure coming from affine space as well. – Reimundo Heluani Oct 20 2011 at 10:08

## 3 Answers

Isn't the answer no in the very simplest case? If $K$ is the circle group, then the Kähler structure on the cotangent bundle makes it metrically a cylinder `$R \times S^{1}$`. I believe this cylinder cannot be isometrically embedded in `$C^n$` (apply the maximum modulus principle to the derivative of the map).

- Ooops, my bad, didn't see the exponential when I tried this example. Should remove my answer. – Reimundo Heluani Oct 20 2011 at 17:56
- Thanks Peter and Reimundo, the problem is much clearer to me now. Indeed it seems like the Kähler structure coming from the polar decomposition (which needs the choice of an invariant metric) is never compatible with an affine embedding. In the paper "Phase Space Bounds for Quantum Mechanics on a Compact Lie Group" by Brian Hall this Kähler structure is discussed a bit more; he shows it provides the unique "adapted complex structure" on $T^*K$ determined by the choice of the metric on $K$. Reimundo: no need to apologize, your answer made me think of something else, I will write it below. – Johan Oct 20 2011 at 23:15
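To spell out the parenthetical argument in the first answer a little (a sketch; using Liouville's theorem is one way to run the suggested maximum-modulus argument): write $w=t+i\theta$, so that $z=e^{w}$ identifies $G=\mathbb{C}^{*}$ with the flat cylinder $\mathbb{R}\times S^{1}\cong T^{*}S^{1}$ carrying the metric $dt^{2}+d\theta^{2}$. A Kähler structure induced from an affine embedding would give a holomorphic isometric immersion $F=(f_{1},\dots,f_{n})\colon G\to\mathbb{C}^{n}$ for this flat metric, i.e. $\sum_{i}|f_{i}'(w)|^{2}\equiv 1$. Each $f_{i}'$ then lifts to a bounded entire function of $w$, hence is constant by Liouville, so each $f_{i}$ is affine in $w$; periodicity in $\theta$ forces the linear parts to vanish, making $F$ constant and contradicting the isometry condition.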
Consider the following two (left) actions of $K$ on $G$: $\mathcal{L}_k(g)=kg$ and `$\mathcal{R}_k(g)=gk^{-1}$`, for $k\in K$ and $g\in G$. Suppose $G$ has a symplectic form $\omega$, for which both actions, $\mathcal{L}$ and $\mathcal{R}$ are Hamiltonian, with moment maps `$\mu_{\mathcal{L}}$` and `$\mu_{\mathcal{R}}$`. Since the actions commute the moment map for one is invariant for the other. Since the actions are free $K$ is Lagrangian and both moment-maps map onto open subsets in `$\mathbb{k}^*$`. Consider now $G$ as a $K$-principal bundle by means of the action $\mathcal{L}$. This bundle is of course locally trivial, and it is not hard to see that one can in fact use `$\mu_{\mathcal{R}}$` as the quotient map. Moreover this gives a symplectomorphism from $G$ to `$K\times \mu_{\mathcal{R}}(G)\subset T^*K$`. If `$\mu_{\mathcal{R}}(G)$` contains $0$ this provides you with the local symplectomorphism guaranteed by the Lagrangian neighbourhood theorem; if `$\mu_{\mathcal{R}}$` is surjective it gives a global symplectomorphism $G\cong T^*K$. This is the case for the Kähler structure provided by the polar decomposition and a choice of a metric, but also for at least a fair amount of the Kähler structures coming from affine embeddings. -
http://unapologetic.wordpress.com/2007/08/27/enriched-categorical-constructions/?like=1&source=post_flair&_wpnonce=420e8d64d7
# The Unapologetic Mathematician ## Enriched Categorical Constructions We’re going to need to talk about enriched functors with more than one variable, so we’re going to need an enriched analogue of the product of two categories. Remember that the product $\mathcal{C}\times\mathcal{D}$ of two categories has the product of the object-classes as its objects, and it has pairs of morphisms for its morphisms. That is, the hom-set $\hom_{\mathcal{C}\times\mathcal{D}}((C_1,D_1),(C_2,D_2))$ is the product $\hom_\mathcal{C}(C_1,C_2)\times\hom_\mathcal{D}(D_1,D_2)$. Of course, in the enriched setting we no longer have hom-sets to work with. So we’ll keep the same definition for the objects of our product category, but we’ll replace the definition of the hom-objects: $\hom_{\mathcal{C}\times\mathcal{D}}((C_1,D_1),(C_2,D_2))=\hom_\mathcal{C}(C_1,C_2)\otimes\hom_\mathcal{D}(D_1,D_2)$ Now we can use the associativity and commutativity of our monoidal category $\mathcal{V}$ (remember we’re assuming it’s symmetric now) to move around factors like this: $(\hom_\mathcal{C}(C_2,C_3)\otimes\hom_\mathcal{D}(D_2,D_3))\otimes(\hom_\mathcal{C}(C_1,C_2)\otimes\hom_\mathcal{D}(D_1,D_2))\rightarrow$ $(\hom_\mathcal{C}(C_2,C_3)\otimes\hom_\mathcal{C}(C_1,C_2))\otimes(\hom_\mathcal{D}(D_2,D_3)\otimes\hom_\mathcal{D}(D_1,D_2))$ at which point we can use the composition in each category to give a composition of the original pairs. To get an identity, we use $\mathbf{1}\cong\mathbf{1}\otimes\mathbf{1}$ and then hit the left copy of $\mathbf{1}$ with the identity morphism for the object $C\in\mathcal{C}$ and the right copy with the identity morphism for $D\in\mathcal{D}$. What about the opposite category? Well, it works pretty much the same as before. We just define $\hom_{\mathcal{C}^\mathrm{op}}(A,B)=\hom_\mathcal{C}(B,A)$. For an identity, we just use the same $\mathbf{1}\rightarrow\hom_\mathcal{C}(C,C)=\hom_{\mathcal{C}^\mathrm{op}}(C,C)$ as before. Actually, these same constructions apply to functors. If we have functors $F:\mathcal{C}\rightarrow\mathcal{C}'$ and $G:\mathcal{D}\rightarrow\mathcal{D}'$, we can assemble them into a functor $F\otimes G:\mathcal{C}\otimes\mathcal{D}\rightarrow\mathcal{C}'\otimes\mathcal{D}'$. Just define $F\otimes G(C\otimes D)=F(C)\otimes G(D)$, and use a similar definition for the morphisms. Also, given $F:\mathcal{C}\rightarrow\mathcal{D}$ we get a functor $F^\mathrm{op}:\mathcal{C}^\mathrm{op}\rightarrow\mathcal{D}^\mathrm{op}$. Now we know we have a 2-category $\mathcal{V}\mathbf{-Cat}$ of categories enriched over $\mathcal{V}$. Since a 2-category is a category enriched over categories, we can pass to the underlying category $\mathcal{V}\mathbf{-Cat}_0$ of enriched categories. It turns out that all the foregoing discussion gives this category some nice, familiar structure. The product of two enriched categories turns out to be weakly associative. Also, remember from our discussion of the underlying category that we have a $\mathcal{V}$-category $\mathcal{I}$. This behaves like a weak identity for the product. That is, when we equip $\mathcal{V}\mathbf{-Cat}_0$ with this product and identified object, it turns out to be a monoidal category! Even better, it’s symmetric — $\mathcal{C}\otimes\mathcal{D}\cong\mathcal{D}\otimes\mathcal{C}$. And what is the opposite category but a duality on this category? So now we can define contravariant enriched functors, as well as functors of more than one variable. 
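Packaging the rearrangement above into a single formula (with $\sigma$ the symmetry of $\mathcal{V}$ and associativity constraints suppressed), the composition morphism of the product $\mathcal{V}$-category $\mathcal{C}\otimes\mathcal{D}$ is the composite

$\hom_\mathcal{C}(C_2,C_3)\otimes\hom_\mathcal{D}(D_2,D_3)\otimes\hom_\mathcal{C}(C_1,C_2)\otimes\hom_\mathcal{D}(D_1,D_2)\xrightarrow{1\otimes\sigma\otimes 1}\hom_\mathcal{C}(C_2,C_3)\otimes\hom_\mathcal{C}(C_1,C_2)\otimes\hom_\mathcal{D}(D_2,D_3)\otimes\hom_\mathcal{D}(D_1,D_2)\xrightarrow{\circ_\mathcal{C}\otimes\circ_\mathcal{D}}\hom_\mathcal{C}(C_1,C_3)\otimes\hom_\mathcal{D}(D_1,D_3)$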
As usual, you should go back and try to think of these definitions in terms of ordinary categories ($\mathbf{Set}$-categories) as well as $\mathbf{Ab}$-categories. Incidentally, if you want to run ahead a bit, try working out how natural transformations fit into the picture. It turns out that the 2-category $\mathcal{V}\mathbf{-Cat}$ is an example of an even deeper structure I haven’t defined yet: it’s a symmetric monoidal 2-category with duals.

[UPDATE]: Excuse me.. I should have said that $\mathcal{V}\mathbf{-Cat}$ is a symmetric monoidal 2-category with a duality involution rather than “with duals”, and similarly for the underlying category. I blame my inattention on being stuck around the house all day waiting for repairmen to come by to put the dishwasher they left in my living room last Friday into the dishwasher-sized hole they put in my kitchen. Basically, the “duality involution” means that the opposite of the opposite $\mathcal{V}$-category is the original $\mathcal{V}$-category back again, and that the opposite of a tensor product is the tensor product of the opposites.

Posted by John Armstrong | Category theory
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 32, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9151478409767151, "perplexity_flag": "head"}
http://ams.org/bookstore?fn=20&arg1=cbmsseries&ikey=CBMS-78
The Polynomial Identities and Invariants of $$n \times n$$ Matrices

A co-publication of the AMS and CBMS.

CBMS Regional Conference Series in Mathematics, 1991; 55 pp; softcover
Number: 78
ISBN-10: 0-8218-0730-7
ISBN-13: 978-0-8218-0730-9
List Price: US$27
Member Price: US$21.60
All Individuals: US$21.60
Order Code: CBMS/78

The theory of polynomial identities, as a well-defined field of study, began with a well-known 1948 article of Kaplansky. The field has since developed along two branches: the structural, which investigates the properties of rings that satisfy a polynomial identity; and the varietal, which investigates the set of polynomials in the free ring that vanish under all specializations in a given ring. This book is based on lectures delivered during an NSF-CBMS Regional Conference, held at DePaul University in July 1990, at which the author was the principal lecturer. The first part of the book is concerned with polynomial identity rings. The emphasis is on those parts of the theory related to $$n\times n$$ matrices, including the major structure theorems and the construction of certain polynomial identities and central polynomials for $$n\times n$$ matrices. The ring of generic matrices and its center is described. The author then moves on to the invariants of $$n\times n$$ matrices, beginning with the first and second fundamental theorems, which are used to describe the polynomial identities satisfied by $$n\times n$$ matrices. One of the exceptional features of this book is the way it emphasizes the connection between polynomial identities and invariants of $$n\times n$$ matrices. Accessible to those with background at the level of a first-year graduate course in algebra, this book gives readers an understanding of polynomial identity rings and invariant theory, as well as an indication of problems and research in these areas.

Reviews

"This monograph provides an excellent overview of the subject and can serve nonexperts as an introduction to the field and serve experts as a handy reference." -- Mathematical Reviews

Contents:
• Polynomial identity rings
• The standard polynomial and the Amitsur-Levitzki theorem
• Central polynomials
• Posner's theorem and the ring of generic matrices
• The center of the generic division ring
• The Capelli polynomial and Artin's theorem
• Representation theory of the symmetric and general linear groups
• The first and second fundamental theorems of matrix invariants
• Applications of the first and second fundamental theorems
• The Nagata-Higman theorem and matrix invariants
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8845608234405518, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/99661?sort=oldest
## pull backs (and tensor product) in algebraic K-theory

In the context of algebraic (equivariant) K-theory (more specifically, in the context of Chriss and Ginzburg's book *Representation Theory and Complex Geometry*) I would like to know if I have the correct intuition for the following. Let $N$ be a smooth variety and $\pi:M\to N$ a vector bundle (or really $M$ a smooth variety and $\pi$ any flat map). Let $\mathcal{F},\mathcal{G}$ be two locally free sheaves on $N$, and write `$[\mathcal{F}]$` for the element represented by $\mathcal{F}$ in the (zero-th) K-group. Is it true that `$\pi^*([\mathcal{F}]\otimes[\mathcal{G}]) = [\pi^* (\mathcal{F})]\otimes[\pi^* (\mathcal{G})]$` in $K(M)$, the zero-th K-group? To what extent is this true, provided that we must keep $M,N$ smooth so that the tensor products are defined? Specifically is it true provided only one of the sheaves is locally free?

EDIT: We must be careful here, as the definition of tensor product of modules isn't quite sufficient to define tensor product in K-theory. Perhaps we should expand the question to a question about the notion of pull back in $K$-theory. Essentially we build the tensor product of modules in 2 steps. First we have the exact construction: for $\mathcal{F},\mathcal{G}$ modules over $X,Y$ respectively, we can form the outer tensor product of $\mathcal{F},\mathcal{G}$ on $X\times Y$. To do this let `$p_i$` be the $i$-th projection and set, $$\mathcal{F}\boxtimes \mathcal{G} = p_1^*\mathcal{F}\otimes_{\mathcal{O}_{X\times Y}}p_2^*\mathcal{G}$$ As the projection is a flat map the pullback functor $p_i^*$ is exact, and hence the above formula gives rise to a well defined map $K(X)\times K(Y)\to K(X\times Y)$. In the case $X=Y$ we also have the diagonal embedding, $\Delta:X_\Delta\to X\times X$, and hence we can pull back the outer product of two sheaves to the diagonal, `$\mathcal{F}\otimes\mathcal{G}:=?\Delta^*(p_1^*\mathcal{F}\otimes_{\mathcal{O}_{X\times Y}}p_2^*\mathcal{G})$`. BUT `$\Delta^*$` need not be exact! In fact it is only a closed immersion even if we assume $X$ is smooth, so even then the diagonal map is not flat! To make matters worse, if $X$ is singular the higher derived functors `$R^i(\Delta^*)$` need not vanish for $i>>0$, so the typical trick of taking as the definition of `$\Delta^*$` an alternating sum of the higher derived functors doesn't work. Thus Chriss and Ginzburg's definition of the pull back under a closed immersion is somewhat ad hoc. Perhaps the better question to ask here is for someone to describe the geometry of this construction, and give some intuition for it in simple cases. (i.e. does transversality simplify things some?)

Their definition goes as follows. Let $f:Y\hookrightarrow X$ be a closed immersion of smooth quasi-projective varieties. For any coherent sheaf $\mathcal{F}$ on $X$ there is a finite locally free resolution $F^\bullet$ of $\mathcal{F}$ (Hilbert's syzygy theorem), $$\cdots \to F^1\to F^0 \to \mathcal{F}\to 0.$$ (For me, the following is the strange part) The sheaf `$f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^i$` is a coherent sheaf of `$\mathcal{O}_Y \cong f_*(\mathcal{O}_Y)$`-modules (i.e. it is annihilated by the ideal `$\mathcal{I}_Y$`). Thus, the homology sheaves of the complex, ```$$ \cdots \to f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^1\to f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^0\to 0 $$``` are coherent sheaves of `$\mathcal{O}_Y$`-modules. Finally(!)
we define, ```$$ f^*([\mathcal{F}]) = \sum (-1)^i[\mathcal{H}_i(f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^\bullet)] $$``` It seems clear (but I have a hard time trusting) that for $\mathcal{F}$ locally free we have an equality with the old notion of pullback, ```$$ f^*([\mathcal{F}]) = [\mathcal{O}_Y\otimes_{f^{-1}(\mathcal{O}_X)}f^{-1}(\mathcal{F})], $$``` so the intuition for the pull back of tensor products problem mentioned at the beginning is that we can simply use the fact that the composition of pull backs is the pull back of the composition for locally free sheaves. Is this the right intuition? In practice the resolution construction might be overkill. Suppose $i:Z\hookrightarrow X$ is another closed immersion of smooth quasi-projective varieties, and suppose that $Z$ and $Y$ intersect transversally. Then can we say something without using resolutions about the pullback of the structure sheaf `$f^*([i_*\mathcal{O}_Z])$` to $Y$? (i.e. is it the obvious thing, `$[\mathcal{O}_{Y\cap Z}]$`?) - Why should tensor products require smoothness? – Will Sawin Jun 15 at 0:53 What's $\mathcal H_i$? – Will Sawin Jun 15 at 20:52 In the formula above, `$\mathcal{H}_i(f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^\bullet)$` stands for the $i$-th homology group of the complex. Explicitly, $$\mathcal{H}_i(f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^\bullet) = ker(f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^i\to f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^{i-1})/image(f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^{i-1}\to f_*(\mathcal{O}_Y)\otimes_{\mathcal{O}_X}F^i)$$ – Rob Denomme Jun 15 at 21:27 oops, make that second `$F^{i-1}$` an `$F^{i+1}$` – Rob Denomme Jun 15 at 21:37 @Rob, I'm not sure my response completely answers your question, but this is partly because it's a little hard to parse what you're asking. Could you tighten up the question a little to clarify? Thanks! – Dave Anderson Jun 15 at 23:00

## 2 Answers

There are a couple issues here. I think the main confusion is between K-theory of vector bundles ($K^\circ X$) and K-theory of coherent sheaves ($K_\circ X$). In Chriss-Ginzburg, I think they often assume $X$ is smooth, in which case the two are isomorphic. The isomorphism comes from resolving a coherent sheaf by vector bundles, and taking the alternating sum. The former is always a ring under tensor product, and pullback is indeed the naive one. As you say, pullback is not exact in general, but since tensor product with a locally free sheaf is exact, pullback on K-theory of vector bundles is well-defined, for any morphism and any spaces. For coherent sheaves, there are pullbacks for some classes of morphisms. The easiest case is when $X$ and $Y$ are smooth, so that there's an isomorphism with K-theory of vector bundles, and there's a pullback for any morphism $f: X \to Y$. More generally, the intuition is that a pullback $f^*: K_\circ Y \to K_\circ X$ should be defined by $$f^*[\mathcal{F}] = \sum (-1)^i [Tor^Y_i(\mathcal{O}_X,\mathcal{F})].$$ This makes sense whenever $f$ is a perfect morphism, i.e., $\mathcal{O}_X$ has a finite resolution by flat $f^{-1}\mathcal{O}_Y$-modules, because in that case the Tor sheaves are zero for all but finitely many $i$. In particular, it does work when $f$ is flat, or a regular embedding. (A word of warning: these Tor sheaves are not computed using the pushforward $f_*\mathcal{O}_X$ in general, although that does work when $f$ is a closed embedding. The correct definition is to cover $Y$ and $X$ by affines, construct the Tor locally, and glue --- see EGA III.6.)
The answer to your last question is yes: if $Y$ and $Z$ are transversally intersecting subvarieties of a smooth variety $X$, with $Y\cap Z = W$, then $[\mathcal{O}_Y]\cdot [\mathcal{O}_Z] = [\mathcal{O}_W]$, because transversality implies vanishing of the higher Tor's. -

We can say more. These are naturally isomorphic as sheaves, with no conditions on $M$ and $N$. It is easiest to see this in the affine case, $N=\textrm{Spec} A$, $M=\textrm{Spec} B$, $\mathcal F$ and $\mathcal G$ the sheafification of the $A$-modules $F$ and $G$. Then your equation translates to $B \otimes_A (F \otimes_A G) = (B \otimes_A F)\otimes_B (B\otimes_A G)$ which is true because tensor product is commutative and associative and $B \otimes_B B=B$. This gives us a local isomorphism between the two sheaves on every affine open set which clearly restrict to the same isomorphism on their intersection, meaning that they glue together into a global isomorphism of sheaves. - Thanks for the response, this is exactly the intuition one wants to use, pull back of composition is the composition of the pull backs. The tensor product in algebraic K-theory isn't so easily constructed due to the failure of diagonal embedding to be flat, see the discussion added above for more details, and so I wonder how well this intuition is suited to the problem after this consideration is made. – Rob Denomme Jun 15 at 20:15 1 Ah. I don't know much about K-theory, so I didn't see the difficulty with exactness. I will think about the discussion and see if I can figure out the answer. – Will Sawin Jun 15 at 20:51
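A worked instance of the transversality statement above, added here as an illustration (it is not part of the original thread). Take $X=\mathbb{A}^2$ with coordinates $x,y$, let $Y=\{x=0\}$ and $Z=\{y=0\}$, which meet transversally in the single point $W$ (the origin). The Koszul resolution of $\mathcal{O}_Z$,

$$0\to\mathcal{O}_X\xrightarrow{\ \cdot y\ }\mathcal{O}_X\to\mathcal{O}_Z\to 0 ,$$

tensored with $\mathcal{O}_Y$ becomes the complex $0\to\mathcal{O}_Y\xrightarrow{\ \cdot y\ }\mathcal{O}_Y\to 0$. Since multiplication by $y$ is injective on $\mathcal{O}_Y\cong k[y]$, the only surviving homology sits in degree zero: $\mathcal{T}or_0^{X}(\mathcal{O}_Y,\mathcal{O}_Z)\cong k[y]/(y)\cong\mathcal{O}_W$, and all higher Tor's vanish. So the alternating sum collapses to $f^*([i_*\mathcal{O}_Z])=[\mathcal{O}_{Y\cap Z}]$ with no correction terms, exactly as in $[\mathcal{O}_Y]\cdot[\mathcal{O}_Z]=[\mathcal{O}_W]$.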
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 4, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9397539496421814, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/187819/sound-triangulation
# Sound Triangulation

Hello guys I would appreciate it if someone could help me out here. I have been asked to create a simple visualization for an assignment. I am required to triangulate the source of an artillery cannon after a shot has been fired using 3 towers (microphones). The problem I am having is how one would go about calculating the source of the sound (artillery). I assume that one would use these variables to compute the solution for the problem: the speed of sound (roughly 330 meters per second), the distance between the towers, and the time each of the towers logged after the sound was heard. - Do the towers know when the shot was fired (perhaps the muzzle flash was seen), or just the differences in time between the arrivals of the sound? – Henry Aug 28 '12 at 12:59 Just the differences in time between the arrivals of the sound can be detected – J03Fr0st Aug 28 '12 at 14:50

## 2 Answers

Your initial ideas are good. If we take the speed of sound at 330 meters per second and we assume that it is emitted from the point of the cannon equally in all directions after the shot has been fired (and there are no buildings or other interfering objects between our towers and the cannon) then we can imagine a circle, starting at the unknown point of the cannon, with a radius increasing at the rate of 330 meters per second as the sound wave. I will imagine, at first, that we also know the precise time that the cannon was fired. Label the towers $A$, $B$, and $C$ for simplicity. Now if tower A received the sound wave (say) 1 second after the shot was fired, we would know that the cannon was exactly 330 meters away. Therefore we can draw a circle centered at this tower which has a radius of 330 meters, knowing that our cannon is somewhere on this circle. If tower $B$ received the sound of the cannon being fired 1.5 seconds after the shot was fired, we could draw a circle of 495 meters (1.5 times 330) centered at tower $B$, and we would know that the cannon had to be somewhere on this circle as well. The circle at tower $B$ would intersect the circle at tower $A$ in at most two places, so we have limited the location of the cannon to at most two points. Finally the third tower $C$ may receive the signal 2 seconds after the shot was fired, allowing us to draw a circle centered at tower $C$ with radius 660 meters. This would pass through one of the two points of intersection between the circles at $A$ and $B$, and that would be the point where the cannon was located.

If we didn't know the time that the cannon was fired, we could still work out the location. Let the location of the cannon be the point $(x_0,y_0)$. Then when the shot is fired, a circle representing the sound centered on this location expands outward, its radius increasing at $330m/s$. Let $r$ be the (unknown) distance from the tower $A$ (which we will assume is the first tower to hear the shot) to the cannon. Then letting the position of $A$ be denoted as $(a_x,a_y)$, we have that $(a_x-x_0)^2+(a_y-y_0)^2=r^2$. Note that this is an equation with three unknowns, $x_0, y_0,$ and $r$ - we know $a_x$ and $a_y$. Now assume that tower $B$ hears the shot $b_s$ seconds later. Then tower $B=(b_x,b_y)$ is on a circle $(b_x-x_0)^2+(b_y-y_0)^2=(r+330b_s)^2$. Similarly, we have that when tower $C=(c_x,c_y)$ hears the shot $c_s$ seconds after the first tower, $(c_x-x_0)^2+(c_y-y_0)^2=(r+330c_s)^2$.
Therefore we have a system of three polynomial equations in three unknowns: $$(a_x-x_0)^2+(a_y-y_0)^2=r^2$$ $$(b_x-x_0)^2+(b_y-y_0)^2=(r+330b_s)^2$$ $$(c_x-x_0)^2+(c_y-y_0)^2=(r+330c_s)^2$$ If the sound reached all three towers at the same time (and therefore the towers were necessarily not collinear) then we would know the point of the cannon precisely by the problem of Apollonius. If not, hopefully solving this system of equations yields an answer (there are probably some physical constraints which make it so that a unique answer can be provided in most, if not all, circumstances - for instance, I imagine the towers not being collinear will be important). - U are good. Very detailed explanation. Thank U – J03Fr0st Aug 28 '12 at 14:52 @J03Fr0st You're welcome! I've always been curious about triangulation, it was fun to explore it. – Michael Boratko Aug 28 '12 at 15:47 1 If the sound reached all three towers at exactly the same time, then they came from the circumcentre. – Henry Aug 28 '12 at 20:22

From the time it takes for the sound to reach each tower you get the distance of the towers to the source since you know the speed of sound. The source can then be located as the intersection of three circles. The solution above requires that the times be known precisely. If there is an error interval for the times, then the source will be in the intersection of three annuli. -
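A minimal numerical sketch of the system above, added as an illustration (not part of the original answers). The tower positions and the delays `b_s`, `c_s` are made-up example values, chosen to be roughly consistent with a source near $(200, 150)$; `scipy.optimize.fsolve` is used to solve the three circle equations for $(x_0, y_0)$ and the unknown distance $r$ to the first tower:

```python
# Solve the three circle equations numerically (all numbers below are assumed example values).
from scipy.optimize import fsolve

V = 330.0                                  # speed of sound, m/s
A = (0.0, 0.0)                             # tower positions (assumed)
B = (1000.0, 0.0)
C = (0.0, 800.0)
b_s, c_s = 1.709, 1.303                    # arrival delays of B and C after A, in seconds (assumed)

def equations(u):
    x0, y0, r = u
    return [
        (A[0] - x0) ** 2 + (A[1] - y0) ** 2 - r ** 2,
        (B[0] - x0) ** 2 + (B[1] - y0) ** 2 - (r + V * b_s) ** 2,
        (C[0] - x0) ** 2 + (C[1] - y0) ** 2 - (r + V * c_s) ** 2,
    ]

x0, y0, r = fsolve(equations, (100.0, 100.0, 300.0))   # initial guess for (x0, y0, r)
print(f"estimated source: ({x0:.0f}, {y0:.0f}), {r:.0f} m from tower A")
```

With these numbers the solver should land close to $(200, 150)$ with $r \approx 250$ m; in a real setup one would also check the other intersection point and, as the second answer notes, replace the exact circles by annuli when the timings carry uncertainty.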
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9720789790153503, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/265637/understand-variant-kinds-of-generating-functions
# Understanding various kinds of generating functions?

I found there are various kinds of generating functions in Wikipedia. I would like to understand why (the purpose) and how these concepts were created. For the "how" part, given a sequence $(a_n)$,

• the ordinary generating function is defined as the $(a_n)$-weighted version of the Taylor expansion of $(1-x)^{-1}$ at $x=0$;

• the exponential generating function is defined as the $(a_n)$-weighted version of the Taylor expansion of $e^x$ at $x=0$.

I was wondering if the following two kinds can be viewed as $(a_n)$-weighted versions of the Taylor expansions of some functions at some points:

• The Poisson generating function of a sequence $(a_n)$ is $$\operatorname{PG}(a_n;x)=\sum _{n=0}^{\infty} a_n e^{-x} \frac{x^n}{n!} = e^{-x}\, \operatorname{EG}(a_n;x)\,.$$ If we ignore $a_n$, $\operatorname{PG}(a_n;x)$ seems to expand $1$ by writing it as $1=e^{-x} e^x$ and expanding the second factor by the exponential generating function.

• The Lambert series of a sequence $(a_n)$ is $$\operatorname{LG}(a_n;x)=\sum _{n=1}^{\infty} a_n \frac{x^n}{1-x^n}.$$

Can the following two kinds be viewed as weighted versions of some kinds of expansions of some functions at some points?

• The Bell series of a sequence $(a_n)$ is an expression in terms of both an indeterminate $x$ and a prime $p$ and is given by $$\operatorname{BG}_p(a_n;x)=\sum_{n=0}^\infty a_{p^n}x^n.$$

• Formal Dirichlet series are often classified as generating functions, although they are not strictly formal power series. The Dirichlet series generating function of a sequence $a_n$ is $$\operatorname{DG}(a_n;s)=\sum _{n=1}^{\infty} \frac{a_n}{n^s}.$$

Thanks and regards! -
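(An aside added here, not part of the original question.) For the Lambert series, at least, expanding each term as a geometric series makes the kind of "expansion" it encodes explicit: it is an ordinary power series whose coefficients are divisor sums,

$$\operatorname{LG}(a_n;x)=\sum_{n=1}^{\infty} a_n \sum_{k=1}^{\infty} x^{nk}=\sum_{m=1}^{\infty}\Big(\sum_{d\mid m} a_d\Big)x^m ,$$

so each $\frac{x^n}{1-x^n}$ is the geometric expansion of $\frac{1}{1-x}$ with $x$ replaced by $x^n$ (minus its constant term), weighted by $a_n$.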
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9567761421203613, "perplexity_flag": "head"}
http://mathoverflow.net/questions/24763/advice-on-changing-topic-for-thesis-problem
## Advice on changing topic for thesis problem

I am starting to find my thesis problem neither meaningful nor interesting, and too technical. As I learn more mathematics I am finding myself attracted to other topics, and I only started on my initial problem with my advisor because I could understand it at first. My problem is a very "good" one, fulfilling many criteria mentioned by mathematicians:

• It is pitched at exactly the right difficulty.
• It involves learning a big machine.
• Its solution leads to further work and questions.

Should I:

1. Find a new problem? (I'm sure I can't think of such a "good" problem as the one given to me by my advisor.)
2. Stick with the same problem and only do new problems after graduation?
3. Anything else?

- 2 Of course, it's never too late to change. Working on a problem which is of no interest to you is a waste of time. – Wadim Zudilin May 15 2010 at 15:06 13 Do you have a history of losing interest in things when you come to an understanding of them? That would be my first concern with your question as it would indicate a larger problem. I switched dissertation topics $n$ times for $n\geq 4$. I seem to keep on coming back to those problems -- I didn't lose interest in them, either other people solved them or I didn't know what to do. Switching topics isn't a horrible thing since you get to learn a variety of topics in the process, provided you have the time and suitable topics to work on. – Ryan Budney May 15 2010 at 15:11 34 As a rule, I don't see any harm in changing a problem if it doesn't "fit", but you really need to have this discussion with your advisor. – Donu Arapura May 15 2010 at 15:11 4 Something else to ask yourself, since you said the solution "leads to further work and questions", do you think any of these are more interesting? – Karl Schwede May 15 2010 at 16:20 4 How far along are you? The answer to your question may depend a lot on this information. – Felipe Voloch May 16 2010 at 3:45

## 5 Answers

I would suggest talking to your advisor. Just tell him exactly what you are telling us: you think that your thesis problem is neither meaningful nor interesting, and too technical. If he responds with some kind of rude behavior, then I think it's better to change subject (and advisor). But my feeling is that you can have a constructive discussion with him. He must first understand what you don't like in your subject. Then he may propose a less technical question or a different angle of attack on the problem. He may explain the relevance of some techniques that may appear boring at first sight, but that you will find both enjoyable and powerful as soon as you master them. Just as many mathematical courses don't reveal all their depth in the first sessions, there may be many wonders that await you in the next years of your PhD. But some tedious work may be in order before reaching that point. - Yes, I agree with that answer. – André Henriques May 18 2010 at 19:03

Sure, work on something else. In fact, work on something else as well even if you love your thesis problem. -

My piece of advice is not only for mathematics: it's valid for everything in life.
If you ever dream of doing something else than what you're currently doing, go try it out! It will enrich you. And if it doesn't work out, then you can always go back to what you were doing previously (the latter is not always true, but most of the time it is). - I say stick with the problem and learn to love it. Since mathematics is so interconnected I'm sure you'll find your way. It would help to learn about the history and context of the problem. Why did people invent this big machine? How does it relate to cool things in math that you do like? Suppose you would change topic, how would this be different? You can always say something is too technical and not of interest. It's far better to take your topic and find a piece that does seem interesting and is not too technical (when looked from the right angle). [EDIT: Douglas S. Stones]: Doing a Ph.D. requires hard work over a long period of time. The typical candidate needs to meet fairly high expectations, without understanding what these expectations are (e.g. what makes a thesis well-written?). It's very easy to become demoralised (as I became at some points during my candidature). It's not going to be possible to complete a Ph.D. without determination. So I also recommend the stick-with-it approach (although I do not know the particulars of your situation). Furthermore, there's nothing stopping a Ph.D. candidate from studying other topics alongside their thesis topic (in fact, I think this should be encouraged to some degree). - My advice: find someone who knows you in person, and knows more about your situation to talk to about this. The best would be if you felt comfortable discussing it with your advisor, but another professor you know, or some sort of graduate subchair, or even an older graduate student would also be good. You just aren't going to get anything but the most vague advice that's not very well-suited to your situation from strangers on the internet. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9751397371292114, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/24731/does-the-current-acceleration-of-universe-imply-that-our-universe-is-open
# Does the current acceleration of the universe imply that our universe is open?

Does the current acceleration of the universe imply that our universe is open? If the universe is closed, from Friedmann's equation, the acceleration of the universe wouldn't be possible, would it? (Of course, except for the very early time of the inflation era.) - 1 The answer is yes. – Ron Maimon May 2 '12 at 16:05

## 2 Answers

The universe is currently believed to be flat, and therefore open. The data from WMAP shows the universe to be flat to within 0.5%. I think strictly speaking "open" means $\Omega < 1$ rather than $\Omega \le 1$ but the flat universe certainly isn't closed. If dark energy behaves like a cosmological constant then I don't think it's possible to observe acceleration in a closed universe obeying the FLRW metric. However little is known about dark energy and it has been suggested that it may vary with time. Andrei Linde has suggested that the acceleration may reverse in time causing a Big Crunch (sorry the link is a bit vague but I couldn't find anything definitive about Linde's suggestion). Later: courtesy of Google see http://arxiv.org/abs/astro-ph/0301087 for an article by Kallosh and Linde about the Big Crunch. Later still: it looks as if no-one else is going to answer, so I'll add a note. Assuming Luboš is right (and he knows vastly more than me!) you can get acceleration in a closed Friedmann universe, so the fact we observe acceleration doesn't necessarily show the universe is open. However, as mentioned in my first paragraph the WMAP data shows the universe is flat so we don't need to observe acceleration to know it isn't closed. - Apologies, I don't think your answer to the OP's question is right. The acceleration of the expansion is possible both for open and closed spatial slices as long as one correctly includes the dark energy term in $\Omega$ as well. It's clear that because we don't really know what the $k$ of the slices is, being close to flat, but we do know that the acceleration is positive, and the acceleration is a continuous function of the spatial curvature, we can't possibly be able to say whether the spatial curvature is positive or negative. The Universe may be just much larger than the visible one. – Luboš Motl May 2 '12 at 16:06 Hi Luboš, can you clarify that for me; maybe post an answer. I didn't think the FLRW metric with $\Omega < 1$ could have an accelerating phase because if $\Omega < 1$ the mass will always dominate over the cosmological constant. If the universe isn't FLRW then obviously all bets are off and it could do anything. – John Rennie May 2 '12 at 16:29 Oops, that should be $\Omega > 1$ of course – John Rennie May 3 '12 at 14:27

No. A universe that is accelerating can be open, closed, or flat in principle. The latter attributes are determined by how much matter/energy is in the universe relative to the critical value. If the energy is greater than critical then the universe is closed, if it's less than critical the universe is open, and if the energy is equal to critical the universe is flat. On the other hand, the condition for the universe to accelerate is for it to have a component with negative enough pressure (in technical terms, this is entirely obtained from the second Friedmann equation that gives the acceleration of the characteristic scale of the universe, which is not required when talking about flat/open/closed). [That said, it happens that current cosmological data strongly favor a universe that is flat.
But this is mostly independent of the statement that the universe is accelerating.] Without dark energy an open/flat (closed) universe implies one that is expanding forever (ends in a Big Crunch). But with dark energy that drives the acceleration of the universe, this "geometry is destiny" link is broken. In particular, the future of the expansion can be arbitrary, as it depends on the future behavior of dark energy. However, if the dark energy continues to accelerate the universe as it is doing now, then the universe will expand - and also accelerate - forever. - The answer is just yes. – Ron Maimon Jun 3 '12 at 18:30 -1: Although you end up saying correct things, you have to stop yourself from making something simple complicated. There is acceleration, the universe is open, end of story. – Ron Maimon Jun 3 '12 at 18:49
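For reference, here are the two FLRW (Friedmann) equations in their usual form; this is an editorial addition summarizing the standard textbook statement behind the answers above, with the common sign and unit conventions:

$$\left(\frac{\dot a}{a}\right)^{2}=\frac{8\pi G}{3}\rho-\frac{kc^{2}}{a^{2}}+\frac{\Lambda c^{2}}{3},\qquad\frac{\ddot a}{a}=-\frac{4\pi G}{3}\left(\rho+\frac{3p}{c^{2}}\right)+\frac{\Lambda c^{2}}{3}.$$

The spatial curvature $k$ appears only in the first equation, while the observed acceleration is a statement about the second. So $\ddot a>0$ by itself tells you that some component has sufficiently negative pressure (or that $\Lambda>0$), not what the sign of $k$ is; pinning down $k$ requires comparing the energy density against the critical value, which is what the flatness measurements quoted above do.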
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9592711329460144, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/157047/axiomatic-approach-to-polynomials/157089
Axiomatic approach to polynomials?

I only know the "constructive" definition of $\mathbb K [x]$, via the space of finite sequences in $\mathbb K$. It essentially says that a polynomial is its coefficients. Is there a way to define polynomials "axiomatically"? I just don't know a more suitable word; what I mean is something like how the real numbers can be defined as a complete ordered field, as opposed, for example, to repeating decimals. If there is no such definition, then why? Why is it necessary to say what a polynomial is in itself, instead of saying what can be done with it? Maybe there are axiomatic definitions for some specific kinds of polynomials? - 1 – Jack Schmidt Jun 11 '12 at 17:24 @JackSchmidt if you are referring to Qiaochu Yuan's answer, then I'm not good enough at category theory to deal with the "universal property" and derive the other definition of polynomials without introducing their coefficients. Otherwise I don't see how the reference is connected to my question. – Yrogirg Jun 11 '12 at 17:35 4 Polynomials have lots of uses. Polynomials with coefficients in a commutative ring R have the nice property of being able to be evaluated in every R-algebra, and the evaluation being uniquely determined by the value chosen for x. The polynomial ring is the unique R-algebra with the property that every R-algebra homomorphism from it to an R-algebra is uniquely determined by where it sends x, and x can be sent anywhere. – Jack Schmidt Jun 11 '12 at 17:37 In other words, polynomials over a commutative ring R can be defined as things that can be evaluated in R-algebras (not just R, which was the point of a few of the answers). – Jack Schmidt Jun 11 '12 at 17:38 @Yrogirg: Very interesting question! Have never seen it done, but it could be of interest even for polynomials over the natural numbers. – André Nicolas Jun 11 '12 at 18:31

4 Answers

Short answer: (commuting) polynomials with coefficients from a (commutative, associative, unital) ring R are just things that can be clearly evaluated in any ring containing R. Some definitions: I'll describe the definition a few times, so you can see how things like this work. In this answer all rings and algebras are commutative, associative, and unital. (1) Basically, `PolynomialRing(R)` is an R-algebra T and a special element t of T, but T has an extra method called `Evaluate(S,s)` that takes another R-algebra S and an element s of S and produces the unique R-algebra homomorphism from T to S that sends t to s. The existence of the homomorphism is more "what you can do with T" but the uniqueness of the homomorphism is important to define the unique (up to R-algebra isomorphism) polynomial ring over R. Similarly $\mathbb{R}$ is an ordered field, so we know what we can do with it: we can add, subtract, multiply, divide, and compare. We need "complete" to know we have the right field (and it gives us a few other neat operations like sup and inf, so that limits exist). The evaluation homomorphism tells us what we can do with the algebra, and the uniqueness of the evaluation homomorphism tells us we have the right algebra (and gives us a few other neat category-theory-like operations like adjoints, so that something like limits exist). (2) Given a ring R the polynomial ring with coefficients R is an R-algebra T with a distinguished element t such that for every R-algebra S and element s of S there is a unique R-algebra homomorphism from T to S that sends t to s. The homomorphism is called “evaluating the polynomial at $t=s$”.
If $(T,t)$ and $(X,x)$ are two such polynomial rings, then there is a unique R-algebra homomorphism from T to T that sends t to t (namely the identity), but there is also the composition of the homomorphisms from T to X and X to T sending t to x and x to t. Hence the composition is the identity, and both are isomorphisms. (3) Given a ring R the polynomial ring with coefficients R in n indeterminates is an R-algebra with distinguished elements $t_1,t_2,\dots,t_n$ such that for every R-algebra S with element $s_1,s_2,\dots,s_n$ there is a unique R-algebra homomorphism from T to S sending $t_1,t_2,\dots,t_n$ to $s_1,s_2,\dots,s_n$ (in that order). A sequence $s_1,s_2,\dots,s_n$ is a function from the numbers $1,2,\dots,n$ to S. One doesn't have to use numbers for variable identifiers. Any set X can work. (4) Given a ring R and a set X, the polynomial ring with coefficients R and indeterminates X is an R-algebra T containing X as a subset such that for any R-algebra S and function from X to S, there is a unique R-algebra homomorphism from T to S agreeing with the function on X. What are algebras? In case you don't already like R-algebras or rings, let me describe them in simple terms. Begin with the case that R is a field. An R-algebra is either of the following equivalent ideas (choose one that makes sense): (1) An R-vector space S with a multiplication that is associative, commutative, and unital, and that works well with scalar multiplication: $(rv)(w) = r(vw) = v(rw)$ for r in R, and v, w in the vector space. (2) A ring S that contains R as a subring (where the multiplicative identity of S is the same as the one in R). An R-algebra homomorphism is just a function that preserves everything: $f(v+w) = f(v)+f(w)$, $f(rv) = r f(v)$, and $f(vw) = f(v) f(w)$. Notice that $f(r)=r$ if you take viewpoint 2. Since R-algebra homomorphisms preserve addition and multiplication and coefficients, they allow us to evaluate any "polynomial" where by polynomial I mean a recipe for combining ring elements using addition, multiplication, and scalar multiplication, like "Take a ring element, square it, add 3, multiply by the original element, and add 5", better known as $x\mapsto (x^2+3)x+5 = x^3 + 3x + 5$. The right hand side is just a polynomial like we know, and R-algebra homomorphisms allow us to say that if x is replaced by a specific algebra element, then we know what the entire polynomial should be (just follow the recipe). So polynomial rings just have to "exist" in some sense: they are just recipes for combining elements of algebras (recipes of adding and multiplying). The surprising thing is that they exist in a very simple sense: they are themselves R-algebras. The recipes themselves can be added and multiplied. Proving that they exist as R-algebras requires some goofy things like sequences (just like proving the existence of a complete ordered field required goofiness like cauchy sequences or dedekind cuts), but working with them as recipes doesn't require anything like that at all. Incidentally, the definition of algebra for a general commutative ring R is not much different: replace "vector space" with "module" and "S contains R as a subring" with "has a ring homomorphism from R into S". CGT aside On the off chance you like computational group theory: a common way of representing elements of "polynomial groups" (called free groups) is by "monomials" (groups only have multiplication, so no adding allowed) or "words" (like strings in CS). 
However, words can be a little limiting in practical computations, and there are other data-types used including "straight line programs" which are quite literally recipes for multiplying stuff together. "Take element # 1 and multiply it by element #2 placing the result in element #1. Take element # 1 and multiply it by element #1 and place the result in element # 1." or more briefly $[a,b] \mapsto [(ab)^2, b]$. These recipes can often be stored in space logarithmic in the space required for general words. They can also speed up some calculations by recording a particular efficient "addition chain" to produce a list of group elements (multiplying group elements sometimes takes a few seconds per multiplication, so it is important not to waste them). A lot of times one works with recipes as if they are just recipes, but it is occasionally important to know that the recipes themselves form a group and that there is a unique group homomorphism (evaluation) that takes the formal "element #1" to a specific element of a specific group. - Below is a sketch of one way to "axiomatize" the polynomial ring $\rm\,R[x]\:$ over a ring $\rm\:R.$ • $\rm\:R[x]\:$ is a commutative ring generated by $\rm\:R\:$ and $\rm x\not\in R$ • $\rm\:f = g\:$ in $\rm\:R[x]\iff$ $\rm\:f = g\:$ is provable from the ring axioms (and equalities of $\rm\:R$) Said informally, $\rm\:R[x]\:$ is the "freest" possible ring obtained by hypothesizing only that it is a ring containing $\rm\:R\:$ and $\rm\:x.\:$ In particular, $\rm\:R[x]\:$ contains no more elements than those that can be generated by iteratively applying ring operations to $\rm\:x\:$ and elements of $\rm\:R,\:$ and $\rm\:R[x]\:$ contains no more equalities other than those that can be deduced by the ring axioms (and equalities in $\rm\:R).\:$ Now, since $\rm\:R[x]\:$ satisfies all ring axioms, the usual proofs show that every element is equal to one in standard normal form $\rm\:r_0 + r_1\, x +\cdots+ r_n x^n.\:$ To show that no two distinct normal forms are equal one can use van der Waerden's trick (cf. 2nd-last post here) which here amounts to using a regular representation. Alternatively, more semantically, but less effectively, one can view the elements of $\rm\:R[x]\:$ as identities of $\rm\:R$-algebras, which immediately yields the universal mapping property of $\rm\:R[x].$ These are specializations of various techniques of constructing free algebras in equationally defined classes of algebras. One can find discussion of such in most treatises on universal algebra. For a particularly readable (and comprehensive) introduction see George Bergman's (freest!) textbook An Invitation to General Algebra and Universal Constructions. For convenience I excerpt below my linked Mathoverflow remarks on van der Waerden's trick, etc - which may be of interest to the OP and others. See also this answer on the motivation for formal vs. functional polynomials. How do you interpret the indeterminate "$\rm x$" in ring theory from the set theory viewpoint? How do you write down $\rm\:R[x]\:$ as a set? Is it appropriate/correct to just say that $$\rm R[x] = \{f:R\to R \mid \exists n\in\mathbb N,\ and\ c_i \in R\ \ such\ that\ \ f(x) = c_0+c_1 x+\cdots+c_nx^n\}$$ This appears to be a very analytic definition. Is there a better definition that highlights the algebraic aspect of the set of polynomials? 
$\ \ \$ (quoted from a Mathoverflow question) The OP does explicitly reveal some motivation, namely he seeks to understand how to construct $\rm\:R[x]\:$ set-theoretically and to better understand the algebraic conception of polynomial rings. Such issues are not only of interest to students. For example, Pete L. Clark's answer refers to his notes on commutative algebra - where he discusses such topics at much greater length than do most algebra textbooks. There, while discussing various constructions of $\rm\:R[x],\:$ he remarks: However it is tedious to verify associativity. Over the years I have developed a slogan: if you are working hard to show that some binary operation is associative, you are missing part of a bigger picture. Unfortunately this is not such a great test case for the slogan: I do not know of a truly snappy conceptual proof of the associativity of multiplication in a polynomial ring. -- Pete L. Clark, Commutative Algebra, Sec. 4.3, p. 38. In fact there is a "bigger picture", including a construction that achieves what he seeks. Such topics are probably not well-known to those who have not studied universal algebra. But they certainly deserve to be better known due to the fact that they provide deeper conceptual and foundational insight. [...] I should emphasize that my motivation differs from that in Pete's notes. My goal is not primarily to find a construction of $\rm\:R[x]\:$ that is simplest (e.g. with easily verified ring axioms). Rather, my motivation has further pedagogical aims. Namely, I desire a construction that is faithful to the intended application of $\rm\:R[x]\:$ as a universal/generic object (e.g. recall my universal proofs of determinant identities by canceling $\rm\:det(A)\:$ for generic $\rm A).\:$ With that goal in mind, one may motivate quite naturally the construction of $\rm\:R[x]\:$ as a quotient $\rm\:T/Q\:$ of the absolutely-free ring term algebra $\rm\:T = R\{x\}.\:$ For $\rm\:T/Q\:$ to satisfy the desired universal mapping property it is obvious what the congruence Q must be: it must identify two terms $\rm\:s(x), t(x)\:$ precisely when they are identical under all specializations into rings, i.e. when $\rm\:s(x) = t(x)\:$ is an identity of rings. So, e.g. $\rm\:mod\ Q\:$ we have $\rm\:1*x = x,\ \ x*(x+1) = x*x+x.\:$ In particular $\rm\:T/Q\:$ is a ring since it satisfies all (instances of) ring identities (esp. the ring axioms). Next we show that the standard sum-of-monomials representation yields a normal form for elements of $\rm\:T/Q,\:$ i.e. every element of $\rm\:T/Q\:$ is uniquely represented by some such normalized polynomial of $\rm\:T.\:$ Existence is trivial: simply apply the ring axioms to reduce a representative to normal polynomial form. It is less trivial to prove uniqueness, i.e. that distinct normal forms represent distinct elements of $\rm\:T/Q.\:$ For this there is a common trick that often succeeds: exploit a convenient representation of the ring. Here a regular representation does the trick. This method is called the "van der Waerden trick" since he employed it in constructions of group coproducts (1948) and Clifford algebras (1966). Notice that this development is pleasingly conceptual: $\rm\:R[x]\:$ is constructed quite naturally as the solution to a universal mapping problem - a problem which is motivated by the desire to be able to perform generic proofs, as in said proofs of determinant identities. Everything is well-motivated - nothing is pulled out of a hat. 
The same construction of free algebras works much more generally, e.g. for any class of algebras that admit a first-order equational axiomatization. Although there are also a few other known methods to construct such free algebras, this method is the most natural pedagogically and constructively. Indeed, this is the way most computer algebra systems implement free algebras. The difficulty lies not so much in the construction of the free algebra but, rather, in inventing normal-form algorithms so that one may compute effectively in such free algebras. Although this is trivial for rings and groups, for other algebras it can be quite difficult - e.g. the free modular lattice on $5$ generators has undecidable word problem, i.e. no algorithm exists for deciding equality. Of course much work has been done trying to discover such normal form algorithms, e.g. google Knuth-Bendix completion, Bergman's diamond lemma. Ditto for algorithms for computing in quotients of free algebras, i.e. algebras presented by generators and relations, e.g. Grobner bases, Todd-Coxeter, etc. It is worth emphasizing that Van der Waerden's trick comes in handy in many similar situations. For example, see Alberto Gioia's Master's thesis (under Hendrik Lenstra) Normal forms in combinatorial algebra, 2009. It is also discussed in George Bergman's classic paper[1] on the Diamond Lemma, and in his beautiful textbook [2] on universal algebra. [1] Bergman, G. The diamond lemma for ring theory, Advances in Mathematics 29 (1978) 178-218. [2] Bergman, G. An Invitation to General Algebra and Universal Constructions. - 1 Does your link for "van der Waerden's trick" go to the place you meant it to? – MJD Jun 11 '12 at 20:20 1 @Mark Yes, see the second-last post there (which cannot be linked to directly). I have added an excerpt for convenience. – Gone Jun 11 '12 at 20:28 I don’t know whether @Yrogirg will be satisfied with this description, fairly close to what he rather rejected, but one may also view the polynomials in one variable over $R$ as the monoid ring of $\mathbb{N}$ over $R$, where by $\mathbb{N}$ I mean the set of nonnegative integers under addition. That is, the free $R$-module with basis the set $\mathbb{N}$, the combination of monomials arising from addition in $\mathbb{N}$. In the same way, the “ring of Dirichlet polynomials” over $R$, the set of finite formal expressions $\sum_na_nn^{-t}$, where the $a_n$ are in $R$, is the monoid ring of ${\mathbb{N}}^+$ over $R$, where now ${\mathbb{N}}^+$ is the set of positive integers under multiplication. - Edit: As Bill Dubuque has pointed out the following is very similar to the standard approach reffered by the OP. Anyway the following definition differes from the said standard definition because it present the polynomial ring as a module with a basis with additional structure satisfying axioms. Edit2: I've interpretated axiomatization as a presentation of a structure as family of sets and functions on these sets satisfying some axioms, i.e. presentation via (possible multi-sorted) first-order theories. Here a possible axiomatization for polynomial ring in one variable. Let $R$ be a commutative unital ring, then a polynomial ring over $R$ is just a $R$-module, which we denote as $R[x]$, having a countable basis indexed by $\mathbb N$, with a $R$-bilinear map $b \colon R[x] \times R[x] \to R[x]$ such that for each $n,m \in \mathbb N$ if $x^n$ and $x^m$ are respectively the $n$-th and $m$-th elements of the basis then $b(x^n,x^m)=x^{n+m}$. 
This definition easily extends to polynomial rings in $n$ variables. - This is essentially equivalent to the standard approach, which the OP already knows. – Gone Jun 11 '12 at 21:01 @BillDubuque Maybe I am mistaken, but this seems to me to be the only possible axiomatic approach to the definition of polynomial rings. – ineff Jun 11 '12 at 21:04 The first answer above gives a definition which is based on the universal property of the polynomial ring, which clearly is not a first-order sentence and requires quantifying over the sets of $R$-algebras and $R$-linear maps. – ineff Jun 11 '12 at 21:08 The second definition instead makes use of a condition expressing that no non-trivial relations hold; the problem is that this is a meta-theoretic statement. – ineff Jun 11 '12 at 21:13 My definition just expands on the definition, presenting a polynomial ring as an $R$-module with additional structure satisfying some axioms. Clearly my presentation is not completely formal, but it can be elaborated into a more axiomatic definition. – ineff Jun 11 '12 at 21:17
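To make the "recipes that can be evaluated in any algebra" picture from the first answer concrete, here is a minimal Python sketch (my own illustration, not from the thread). A polynomial is stored as a dictionary of coefficients, the recipes themselves can be added and multiplied, and `evaluate` plays the role of the unique evaluation homomorphism sending $x$ to a chosen element of whatever algebra it lives in:

```python
# Minimal sketch: a polynomial as a coefficient dictionary (exponent -> coefficient).
# `evaluate` follows the recipe sum_e c_e * s**e using only the target algebra's
# operations; `one` is that algebra's multiplicative identity.
class Poly:
    def __init__(self, coeffs):
        self.coeffs = {e: c for e, c in coeffs.items() if c != 0}

    def __add__(self, other):          # recipes can be added...
        out = dict(self.coeffs)
        for e, c in other.coeffs.items():
            out[e] = out.get(e, 0) + c
        return Poly(out)

    def __mul__(self, other):          # ...and multiplied, so the recipes form a ring themselves
        out = {}
        for e1, c1 in self.coeffs.items():
            for e2, c2 in other.coeffs.items():
                out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
        return Poly(out)

    def evaluate(self, s, one=1):      # the evaluation homomorphism x |-> s
        total = 0 * one
        for e, c in self.coeffs.items():
            term = one
            for _ in range(e):         # term = s**e, built with the algebra's multiplication
                term = term * s
            total = total + c * term
        return total

p = Poly({3: 1, 1: 3, 0: 5})           # the recipe x^3 + 3x + 5
print(p.evaluate(2))                   # 19, evaluated in the integers
print(p.evaluate(1j))                  # (5+2j), evaluated in the complex numbers
```

The `one` argument is only there so the same recipe can be evaluated in algebras whose identity is not the number `1` (an identity matrix, say); it is the "unique R-algebra homomorphism determined by where it sends x" point in code form.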
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 98, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9349614977836609, "perplexity_flag": "head"}
http://gauss.vaniercollege.qc.ca/pwiki/index.php/Uncertainties_in_Measurement
# Uncertainties in Measurement

Kreshnik Angoni

BRIEF SURVEY OF UNCERTAINTY IN PHYSICS LABS

### FIRST STEP: VERIFYING THE VALIDITY OF RECORDED DATA

The drawing of graphs during lab measurements is a practical way to estimate quickly:

a) whether the measurements confirm the expected behaviour predicted by the theory;

b) whether any of the recorded data were measured in a wrong way and must be excluded from further data treatment.

Example: We drop an object from a window and, from free fall model calculations, we expect it to hit the ground after 2 sec. To verify our prediction, we measure this time several times and record the following results:

```1.99s, 2.01s, 1.89s, 2.05s, 1.96s, 1.99s, 2.68s, 1.97s, 2.03s, 1.95s
```

(Note: 3-5 measurements is a minimum acceptable number of data for estimating a parameter, i.e. repeat the measurement 3-5 times.)

To check out those data we include them in a graph (fig.1). From this graph we can see that:

a) The fall time seems to be constant and very likely ~2s. So, in general, we have acceptable data.

b) The seventh measurement seems too far from the other results and this might be due to an abnormal circumstance during its measurement. To eliminate any doubt, we exclude this value from the following data analysis. We have enough other data to work with. Our remaining data are:

```1.99s, 2.01s, 1.89s, 2.05s, 1.96s, 1.99s, 1.97s, 2.03s, 1.95s.
```

### SECOND STEP: ORGANIZING RECORDED DATA IN A TABLE

Include all data in a table organized in such a way that some cells are ready to receive the uncertainty calculation results. In our example, we are looking to estimate a single parameter “T”, so we have to reserve (at least) two cells for its average and its absolute uncertainty.

Table 1

| T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 | T9 | Tav | ΔT |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-----|----|
| 1.99s | 2.01s | 1.89s | 2.05s | 1.96s | 1.99s | 1.97s | 2.03s | 1.95s |     |    |

### THIRD STEP: CALCULATIONS OF UNCERTAINTIES

The true value of the parameter is unknown. We use the recorded data to find an estimation of the true value and the uncertainty of this estimation.

#### Three particular situations for uncertainty estimations

A] - We measure a parameter several times and always get the same numerical value.

Example:

```We measure the length of a table three times and we get L= 85cm and a little bit more or less.
```

This happens because the smallest unit of the meter stick is 1cm and we cannot be precise about what portion of 1cm is the quantity “a little bit more or a little bit less”. In such situations we use “the half-scale rule”, i.e. the uncertainty is equal to half of the smallest unit available for measurement. In our example ΔL= ±0.5cm and the result of measurement is reported as L= (85.0 ± 0.5)cm.

-If we use a meter stick with smallest available unit 1mm, we are going to have a more precise result, but even in this case there is an uncertainty. Suppose that we always get the length L= 853mm. Being aware that there is always a parallax error (eye position) when reading both ends, one may get ΔL= ±1mm or ±2mm (and even ±3mm) depending on the measurement circumstances. In this situation, it is suggested to accept 1 or 2 units of measurement.

```The result is reported as L= (853 ± 1)mm.
```

Our estimation for the table length is 853mm. Also, our measurements show that the true length is between 852 and 854mm. The uncertainty interval is (852, 854)mm.
```The absolute uncertainty of estimation is ΔL= ± 1mm ```

Now, suppose that, using the same meter stick, we measure the length of a calculator and of a room and find Lcalc = (14.0 ± 0.5) cm and Lroom = (525.0 ± 0.5) cm. In the two cases we have the same absolute uncertainty ΔL = ±0.5 cm, but we are conscious that the length of the room is measured more precisely. The precision of a measurement is estimated by the portion of the uncertainty that falls on one unit of the measured parameter. In practice, it is estimated by the relative error:

```$\varepsilon = \frac{\Delta L}{\bar{L}}\times 100\%$. ```

A smaller relative error means a higher precision of measurement. In our length measurements, we have:

```$\varepsilon_{calc} = \frac {0.5}{14}\times 100\% = 3.57\%$ ```

```$\varepsilon_{room} = \frac {0.5}{525}\times 100\% = 0.095\%$ ```

We see that the room length is measured much more precisely (about 38 times more precisely).

Note: Do not confuse precision with accuracy! A measurement is accurate if the uncertainty interval contains an expected (known) value, and not accurate if it does not contain it.

B] - We measure a parameter several times and always get different numerical values.

Example: We drop an object from a window and we measure the time it takes to hit the ground. We find the different time values recorded in Table 1. In cases like this, we have to calculate the average value and the absolute uncertainty based on statistical methods.

B.1) We estimate the value of the parameter by the average of the measured data. In the case of our example:

```$\bar{T}= \frac{1}{n}\sum_{i=1}^nT_i = \frac{1}{9}\sum_{i=1}^9T_i = \frac{1}{9}[1.99+2.01+1.89+2.05+1.96+1.99+1.97+2.03+1.95] = 1.982 s$ ```

B.2) To estimate how far the true value can be from the average, we use the spread of the measured data. A first way to estimate the spread is by use of the mean deviation, i.e. the “average distance” of the data from their average value. In the case of our example we get:

```$\Delta T = \frac{1}{n}\sum_{i=1}^n \left |T_i - \bar{T}\right | = \frac{1}{9}\sum_{i=1}^9 \left |T_i - 1.982\right | = 0.035 s$ ```

Now we can say that the real value of the fall time is inside the uncertainty interval (1.947, 2.017) s, i.e. between Tmin = 1.947 s and Tmax = 2.017 s, with average value 1.982 s. Taking into account the rules of significant figures and rounding off, the result is reported as T = (1.98 +/- 0.04) s.

Another (statistically better) estimate of the spread is the standard deviation1 of the data. Based on our example data we get:

```$\sigma T = \sqrt{\frac{\sum_{i=1}^n \left (T_i - \bar{T}\right )^2}{n-1}} = \sqrt{\frac{\sum_{i=1}^9 \left (T_i - 1.982\right )^2}{8}} = 0.047 s$ ```

The result is reported as T = (1.98 +/- 0.05) s.

B.3) For spread estimation, the standard deviation is the better estimate of the absolute uncertainty. This is because a larger interval of uncertainty means a more “conservative” but at the same time more reliable estimate. Note that we get ΔT = +/- 0.05 s when using the standard deviation and ΔT = +/- 0.04 s when using the mean deviation. Also, the relative error (relative uncertainty) calculated from the standard deviation is bigger. In our example, the relative uncertainty of the measurements is:

```$\varepsilon = \frac{\sigma T}{\bar{T}} \times 100\% = \frac{0.047}{1.982} \times 100\% = 2.4\%$ (when using the standard deviation) ```

```$\varepsilon = \frac{\Delta T}{\bar{T}} \times 100\% = \frac{0.035}{1.982} \times 100\% = 1.8\%$ (when using the mean deviation) ```

Note: We will accept that our measurement is precise enough if the relative uncertainty “$\varepsilon$” is smaller than 10%.

If the relative uncertainty is > 10%, we may proceed by:

• Discarding the data point that is shifted the most from the average value
• Increasing the number of data points by repeating the measurement more times
• Improving the measurement procedure
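Before moving on to propagated quantities, here is a minimal cross-check of the numbers above. The short Python sketch below is not part of the original lab text (the variable names are ours); it simply recomputes the average, mean deviation, standard deviation and relative uncertainties from the nine retained fall times:

```python
# Fall-time data from Table 1 (with the outlier 2.68 s already excluded)
times = [1.99, 2.01, 1.89, 2.05, 1.96, 1.99, 1.97, 2.03, 1.95]

n = len(times)
t_avg = sum(times) / n                                             # B.1: ~1.982 s

mean_dev = sum(abs(t - t_avg) for t in times) / n                  # B.2: ~0.035 s
std_dev = (sum((t - t_avg) ** 2 for t in times) / (n - 1)) ** 0.5  # ~0.047 s

eps_mean = mean_dev / t_avg * 100                                  # ~1.8 %
eps_std = std_dev / t_avg * 100                                    # ~2.4 %

print(f"T_av = {t_avg:.3f} s")
print(f"mean deviation = {mean_dev:.3f} s, relative uncertainty = {eps_mean:.1f} %")
print(f"standard deviation = {std_dev:.3f} s, relative uncertainty = {eps_std:.1f} %")
```

Running it reproduces the values quoted in B.1–B.3 above.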
C] - Estimation of uncertainties for calculated quantities (uncertainty propagation).

Very often, we use the experimental data recorded for some parameters, together with a mathematical expression, to estimate the value of a given parameter of interest (POI). As we estimate the measured parameters with an uncertainty, it is clear that the estimate of the POI will have some uncertainty, too. The average of the POI is calculated from the averages of the measured parameters and the formula that relates the POI to them. Meanwhile, the uncertainty of the POI estimate is calculated by using the Max-Min method. This method calculates the limits of the uncertainty interval, POImin and POImax, by using the formula relating the POI to the other parameters and combining their limit values in such a way that the result is the smallest or the largest possible.

Example: To find the volume of a rectangular pool with constant depth, we measure its length, its width and its depth and then we calculate the volume by using the formula V = L*W*D. Assume that our measurement results are L = (25.5 ± 0.5)m, W = (12.0 ± 0.5)m, D = (3.5 ± 0.5)m. In this case, the average estimate for the volume is Vav = 25.5 * 12.0 * 3.5 = 1071.0 m3. This estimate of the volume is associated with an uncertainty calculated by the Max-Min method as follows:

```$V_{min} = L_{min} * W_{min} * D_{min} = 25 * 11.5 * 3 = 862.5 m^3$ ```

```$V_{max} = L_{max} * W_{max} * D_{max} = 26 * 12.5 * 4 = 1300.0 m^3$ ```

So, the uncertainty interval for the volume is (862.5, 1300.0) and the absolute uncertainty is:

```$\Delta V = \frac{V_{max} - V_{min}}{2} = \frac{1300.0 - 862.5}{2} = 218.7 m^3$ ```

while the relative error is:

```$\varepsilon_v = \frac{218.7}{1071.0} \times 100\% = 20.42\%$ ```

Note: When applying the Max-Min method to calculate the uncertainty, one must pay attention to the mathematical expression that relates the POI to the measured parameters.

Example: You measure the period of an oscillation and you use it to calculate the frequency (POI). As $f = \frac{1}{T}$, $f_{av} = \frac{1}{T_{av}}$, the Max-Min method gives

```$f_{min} = \frac{1}{T_{max}}$ and $f_{max} = \frac{1}{T_{min}}$. ```

If $z = x - y$, then

```$z_{av} = x_{av} - y_{av}$ and $z_{max} = x_{max} - y_{min}$ and $z_{min} = x_{min} - y_{max}$. ```

Note 2: Another way to calculate POIav is by use of the formula

```$POI_{av} = \frac{POI_{max} + POI_{min}}{2}$ ```

after finding the limits of its uncertainty interval.

1 The standard deviation can be calculated directly in Excel and in many calculators.
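A minimal sketch of the Max-Min propagation for the pool-volume example above (Python; the variable layout is ours, not part of the original text):

```python
# Pool volume V = L * W * D, with the measured averages and uncertainties
L, dL = 25.5, 0.5   # m
W, dW = 12.0, 0.5   # m
D, dD = 3.5, 0.5    # m

V_avg = L * W * D                        # 1071.0 m^3
V_min = (L - dL) * (W - dW) * (D - dD)   # 862.5 m^3
V_max = (L + dL) * (W + dW) * (D + dD)   # 1300.0 m^3

dV = (V_max - V_min) / 2                 # 218.75 m^3 (quoted as 218.7 in the text)
eps = dV / V_avg * 100                   # ~20.4 %

print(f"V = ({V_avg:.1f} +/- {dV:.1f}) m^3, relative error = {eps:.2f} %")

# For a POI like f = 1/T the roles of the limits swap, as the note above explains:
# f_min comes from T_max and f_max comes from T_min.
T, dT = 1.98, 0.05                       # the fall-time result from section B
f_min, f_max = 1 / (T + dT), 1 / (T - dT)
```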
### HOW TO PRESENT THE RESULT OF UNCERTAINTY CALCULATIONS

You must provide the average, the absolute uncertainty and the relative uncertainty. So, for the last example, the result of the uncertainty calculations should be presented as follows:

```$V = (1071.0 \pm 218.7) m^3$ , $\varepsilon = 20.42\%$. ```

Note: Uncertainties must be quoted to the same number of decimal digits as the average value. The use of [scientific notation] helps to prevent confusion about the number of significant figures.

Example: If the calculations generate, say, A = (0.03456789 ± 0.00245678), this should be presented after being rounded off (keep 1, 2 or at most 3 digits after the decimal point):

```$A = (3.5 \pm 0.2) \times 10^{-2}$ or $A = (3.46 \pm 0.25) \times 10^{-2}$ ```

### HOW TO CHECK IF TWO QUANTITIES ARE EQUAL

This question appears essentially in two situations:

1. We measure the same parameter by two different methods and want to verify if the results are equal.
2. We use measurements to verify if a theoretical expression is right.

In the first case, we have to compare the estimates $A \pm \Delta A$ and $B \pm \Delta B$ of the “two parameters”. The second case can easily be transformed into the first case by denoting the left side of the expression A and the right side B. Then, the procedure is the same.

Example: We want to verify if the thin lens equation $\frac{1}{p} + \frac{1}{q} = \frac{1}{f}$ is right. For this we denote $\frac{1}{p} + \frac{1}{q} = A$ and $\frac{1}{f} = B$.

Rule: We will consider that the quantities A and B are equal2 if their uncertainty intervals overlap.

2 They should be in the same units.

### WORKING WITH GRAPHS

We use graphs to check theoretical expressions or to find the values of physical quantities.

Example: We find theoretically that the oscillation period of a simple pendulum is $T = 2\pi\sqrt{L/g}$ and we want to verify it experimentally. For this, as a first step, we prefer to get a linear relationship between two quantities we can measure; in our case the period T and the length L. So, we square both sides of the relationship:

```$T^2 = 4\pi^2 \frac{L}{g}$ ```

Then, after noting $T^2 = y$ and $L = x$, we get the linear expression $y = a x$ where $a = \frac{4\pi^2}{g}$. So, we have to verify experimentally if there is such a relation between $T^2$ and L. Note that if this is verified, we can use the experimental value of a to calculate the free-fall constant: $g = \frac{4 \pi^2}{a}$.

Assume that, after measuring the period several times for a given pendulum length, calculating the average values and uncertainties for y (= T2), and repeating this for a set of different values of the length x (L = 1, ..., 6 m), we get the data shown in Table 1.

Table 1 (the measured lengths L with the corresponding averages and uncertainties of T2; the numerical values are not reproduced here)

At first, we graph the average data. We see that they are aligned on a straight line, as expected. Then, we use Excel to find the best linear fit for our data, and we ask this line to pass through the origin (X = 0, Y = 0) because this is predicted by the theoretical formula. We get a straight line with:

```$a_{av} = 4.065$. ```

Using our theoretical formula we calculate the estimate

```$g_{av} = 4\pi^2/a_{av} = 4\pi^2/4.065= 9.70$ ```

which is not far from the expected value 9.8. Next, we add the uncertainties to the graph and draw the best linear fits with maximum/minimum slope that pass through the origin. From these graphs we get:

```$a_{min}= 3.635$ and $a_{max}= 4.202$. ```

So, we get:

```$g_{min} = 4 \pi^2/a_{max} = 4 \pi^2/4.202= 9.38$ and $g_{max} = 4 \pi^2/a_{min} = 4 \pi^2/3.635= 10.85$. ```

This way, by using the graphs we:

• proved experimentally that our relation between T and L is right;
• found that our measurements are accurate, because the uncertainty interval (9.38, 10.85) for “g” does include the officially accepted value $g = 9.8m/s^2$;
• found the absolute error Δg = $(10.85-9.38)/2=0.735 m/s^2$.

The relative error is ε = (0.735/9.70)×100% = 7.6%, which means an acceptable (ε < 10%) precision of measurement.
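The slope-to-g conversion in this example is easy to script as well. The sketch below (Python; it starts from the three fitted slopes quoted above, since the underlying data table is not reproduced) recovers the value of g and its uncertainty; small differences from the numbers in the text come from intermediate rounding there:

```python
import math

# Slopes of the fitted lines through the origin, read off the T^2-vs-L graphs
a_avg, a_min, a_max = 4.065, 3.635, 4.202

g_avg = 4 * math.pi ** 2 / a_avg   # ~9.71 m/s^2
g_min = 4 * math.pi ** 2 / a_max   # g = 4*pi^2/a decreases as the slope a increases,
g_max = 4 * math.pi ** 2 / a_min   # so a_max gives g_min and a_min gives g_max

dg = (g_max - g_min) / 2           # ~0.73 m/s^2
eps = dg / g_avg * 100             # ~7.5 %

print(f"g = ({g_avg:.2f} +/- {dg:.2f}) m/s^2, relative error = {eps:.1f} %")
```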
### ABOUT ACCURACY AND PRECISION

(Figure: understanding accuracy and precision by the distribution of hits on a dartboard.)

The discussion about accuracy appears only during a calibration procedure. As a rule, before using a method (or device) for measurements, one should make sure, by measurements, that it does produce accurate results in the range of expected values for the parameter under study. During such a procedure one knows in advance the “officially accepted value” which is expected to be the measurement result. In any other case, the result of the measurement is not known in advance, and it makes no sense to talk about the accuracy. Meanwhile, during any kind of measurement one must report the precision.

So, we will refer to accuracy only in those labs that deal with an officially accepted value for a given parameter, such as the free-fall acceleration "g", the Planck constant "h", etc. In principle, an experimental result is accurate if the “average of the data” fits the officially accepted value. We will consider that our experiment is “accurate enough” if the officially accepted value falls inside the uncertainty interval of the estimated parameter; otherwise we will say that the result is inaccurate.
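To close, the overlap rule from the section "HOW TO CHECK IF TWO QUANTITIES ARE EQUAL" above can be written as a one-line test. This is only an illustrative sketch (the function name is ours), applied here to the pendulum result and the accepted value of g:

```python
def intervals_overlap(a, da, b, db):
    """True if the uncertainty intervals (a - da, a + da) and (b - db, b + db) overlap."""
    return abs(a - b) <= da + db

# Pendulum result g = (9.70 +/- 0.735) m/s^2 versus the accepted value 9.8 m/s^2
print(intervals_overlap(9.70, 0.735, 9.8, 0.0))   # True -> the measurement is accurate
```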
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 46, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8883378505706787, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/28268/do-you-read-the-masters/28274
## Do you read the masters? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I often hear the advice, "Read the masters" (i.e., read old, classic texts by great mathematicians). But frankly, I have hardly ever followed it. What I am wondering is, is this a principle that people give lip service to because it sounds good, but which is honored in the breach more than in the observance? If not, which masterworks have you found to be most enlightening? To keep the question focused, let me lay down some ground rules. 1. List only papers/books from the 19th century or earlier. I recognize that this is an arbitrary cutoff but I want to draw a line somewhere. 2. It must be something that you personally have read in its entirety (or almost in its entirety). I'm not really interested in secondhand evidence ("So-and-so says that X is a must-read"). 3. You must have acquired important mathematical insights (not just historical insights) from the paper/book that you feel that you would never have acquired had you restricted your reading to 20th-century and 21st-century literature. It's not enough, for my purposes, that you found the paper/book "interesting" but not really essential. If possible, briefly describe these insights in your response. [Edit: In response to a comment that suggested that I have set the bar impossibly high, let me violate one of my own ground rules and point to this discussion on the $n$-Category Cafe that gives some secondhand examples. That discussion should also help to clarify what I am asking for more examples of.] - 17 Gauss' Disquisitiones proves quadratic reciprocity by induction on primes. So it is possible to prove theorems about primes by induction, which seems very counterintuitive at first. Such an inductive technique was recently used to great effect in the Khare-Wintenberger proof of Serre's conjecture on the modularity of 2-dimensional mod $p$ Galois representations. – Boyarsky Jun 15 2010 at 16:17 14 I'm a big advocate of "reading the masters", but I usually interpret this as meaning reading papers from, say, 1900 to 1950. But this is largely because of my research interests (ie geometric topology -- I'm learned a lot by reading Dehn, Nielsen, Alexander, Reidemeister, ...). If I were more interested in things like number theory, then it would make more sense to go back and read pre-1900 sources. – Andy Putman Jun 15 2010 at 16:38 17 I don't agree with your interpretation of the word "master". If, say, I were interested in learning Morse theory, then surely reading Bott (1923-2005) would qualify as "reading a master". – Faisal Jun 15 2010 at 16:48 20 Your rules 1 and 3 set the bar almost impossibly high: valuable mathematical insights contained in a paper 110 years old at least have in all likelihood permeated the mathematical community by now, so that requiring that you would never have acquired them by reading more recent sources seems almost contradictory. Anyway, I have always had an even broader understanding of the saying than A.Putman: I understand it as "read the original research material, not the derivative works". And I think this is a great advice! – Olivier Jun 15 2010 at 17:06 20 When I was a young student, I read the advice of Abel (in a second-hand source) about reading the masters and not the pupils, and I applied it. However, I did not take this to mean "read the ancients". 
Rather, I think it applies in general to try and read original papers in which insights are developed and new points of view presented, rather than simply read text-book accounts or later expository presentations. Interpreted this way, I have certainly followed this maxim. However, the masters were all 20th century, with one or two exceptions. – Emerton Jun 15 2010 at 23:22 show 13 more comments ## 17 Answers I agree 100% with Igor and Andrew L., on the benefit of reading the creator's version of the same thing available from later expositors. I have gained mathematical insights from reading Euclid, Archimedes, Riemann, Gauss, Hurwitz, Wirtinger, as well as moderns like Zariski.... on topics I already thought I understood. Just Euclid's use of the word "measures" for "divides" finally made clear to me the elementary argument that the largest number dividing 2 integers is also the smallest positive number one can measure using both of them. This is clear thinking of (commensurable) measuring sticks, since by translating it is obvious the set of lengths that one can so measure are equally spaced, hence the smallest one would measure them all. I was unaware also that Euclid's characterization of a tangent line to a circle was not just that it is perpendicular to the radius, but is the only line meeting the circle locally once and such that changing its angle ever so little produces a second intersection, i.e. Newton's definition of a tangent line. It is said Newton read Euclid just before giving his own definition. I did not realize until reading Archimedes that the "Cavalieri principle" follows just from the definition of the Riemann integral, without needing the fundamental theorem of calculus. I.e. it follows just from the definition of a volume as a limit of approximating slices, and was known to Archimedes. Hence one can conclude all the usual volume formulas for pyramids, cones, spheres, even the bicylinder, just by starting from the decomposition of a cube into three right pyramids, applying Cavalieri to vary the angle of the pyramid, then approximating and using Cavalieri. It is an embarrassment to me that I had thought the volume of a bicylinder a more difficult calculus problem that that for a sphere, when it follows immediately from comparing horizontal slices of a double square based pyramid inscribed in a cube. I.e. by Cavalieri and the Pythagorean theorem, the volume of a sphere is the difference between the volumes of a cylinder and an inscribed double cone. The same argument shows the volume of a bicylinder is the difference between the volumes of a cube and an inscribed double square based pyramid. This led to an intuitive understanding of the simple relation between the volumes of certain inscribed figures that I then noticed had been recently studied by Tom Apostol. I realized this summer that this allows a computation of the volume of the 4 dimensional ball. I.e. this ball results from revolving half a 3 ball, hence can be calculated by revolving a cylinder and subtracting the volume of revolving a cone. Since Archimedes knew the center of gravity of both those solids he knew this. 
Having read everywhere that Hurwitz' theorem was that the maximum number of automorphisms of a Riemann surface of genus g is 84(g-1), I had a difficult proof that the maximum number in genus 5 is 192, using Jacobians, Prym varieties, and classifications of representations of planar groups, until Macbeath referred me to Hurwitz' original paper where a complete list of the possible orders was easily given: 84(g-1), 48(g-1),....I subsequently explained this easy argument to some famous mathematical figures. Sometime later a more complicated such example for which Macbeath himself was usually credited was found also to occur in the 19th century literature. Having studied Riemann surfaces all my life, but unable to read German well, I thought I had acquired some grasp of the Riemann Roch theorem, in particular I thought Riemann had given only an inequality l(D) ≥ 1-g + deg(D). When the translation from Kendrick press became available, I learned he had written down a linear map whose kernel computed l(D), and the estimate derived from the fundamental theorem of linear algebra. The full equality also follows, but only if one can compute the cokernel as well. That cokernel of course was already shown by him to be what we now call H^1(D). Hence Riemann's original theorem was the so called "index" version of RR. Since he expressed his map in terms of path integrals, it was natural to evaluate those integrals by residue calculus as Roch did. This is explained in my answer to "why is Riemann Roch [not precisely] an index problem?" Although there are many fine modern expositions of Riemann Roch, the most insightful perhaps being that in the chapter on Riemann surfaces in Griffiths and Harris, I had not seen how simple it was until reading Riemann. Perhaps this is only historical knowledge, but reading Riemann one sees that he also knew completely how to prove (index) Riemann Roch for algebraic plane curves, without appealing to the questionable Dirichlet principle, hence the usual impression that a rigorous proof had to await later arguments of Clebsch, Hilbert, or Brill and Noether, is incorrect. Reading Wirtinger's 19th century paper on theta functions, even though unfortunately for me only available in the original German, I learned that when a smooth Riemann surface acquires a singularity, the elementary holomorphic differential with a non zero period around that vanishing cycle, becomes meromorphic, and that period becomes the residue at the singuklar point. At last this explains clearly why one defines "dualizing differentials" as one does, in algebraic geometry. Once as grad student in Auslander's algebraic geometry class, I vowed to try out Abel's advice and read the master Zariski's paper on the concept of a simple point. I was very discouraged when several hours passed and I had managed only a few pages. Upon returning to class, Auslander began to pepper us with questions about regular local rings. I found out how much I had learned when I answered them all easily until he literally told me to be quiet, since I obviously knew the subject cold. (To be honest, I did not know the very next question he posed, but I was off the hook.) In my answer to a question about where to learn sheaf cohomology I have given an example of insight only contained in Serre's original paper. The sense of wonder and awe one gets upon reading people like Riemann or Euler, is also quite wonderful. 
Any student who has struggled to compute the sum of the even powers of the reciprocals of natural numbers 1/n^2k, will be amazed at Euler's facile accomplishment of this for many values of k. Calculus students estimating π by the usual series to 3 or 4 places will also be impressed at his scores of correct digits. On the other hand, anyone using a modern computer can detect an actual error in his expansion of π, I forget where, in the 214th place? but an error which was already noticed long ago. As you can see these are elementary examples hence from a fairly naive and uneducated person, myself, who has not at all plumbed the depth of many original papers. But these few forays have definitely convinced me there is a benefit that cannot be gained elsewhere, as these exposures can transform the understanding of ordinary mortals closer to that of more knowledgeable persons, at least in a narrow vein. So while it might be thought that only the strongest mathematicians can attempt these papers, my advice would be that reading such masters may be even more helpful to us average students. As a remark on criterion 2 of the original question, I find it is not at all necessary to read all of a paper by a master to get some insight. One word in Euclid enlightened me, and before the translation came out, I had already gained most of my understanding of Riemann's argument for RR just from reading the headings of the paragraphs. I learned a proof of RR for plane curves from reading only the introduction to a paper of Fulton. A single sentence of Archimedes, that a sphere is a cone with vertex at the center and base equal to the surface, makes it clear the volume is 1/3 the surface area. Moreover this shows the same ratio holds for a bicylinder, whereas the area of a bicylinder is considered so difficult we do not even ask it of calculus students. So one should not be discouraged by the difficulty of reading all of a masters' paper, although of course it wouldn't hurt. A remark on the definition of master, versus creator. There are cases where a later master re - examines an earlier work and adds to it, and in these cases it seems valuable to read both versions. In addition to examples given above of Newton generalizing Euclid and Mumford using Hilbert, perhaps Mumford's demonstration of the power of Grothendieck's Riemann Roch theorem in calculating inavriants of moduli space of curves is relevant. A related question occurs in many cases since the classical arguments of the "ancients" are preserved but only in classical texts such as Van der Waerden in algebra, and newer books have found slicker methods to avoid them. E.g. the method of LaGrange resolvents is useful in Galois theory for proving an extension of prime degree in characteristic zero is radical. There are faster less precise methods of showing this such as Artin/Dedekind's method of independence of characters, but the older method is useful when trying to use Galois theory to actually write down solution formulas of cubics and quartics. Thus today we often have an intermediate choice of reading modern expositions which reproduce the methods of the creators, or ones that avoid them, sometimes losing information. (This is discussed in the math 843-2 algebra notes on my web page, where, being a novice, I give all competing methods of proof.) - 8 Thank you very much for this. 
– Emerton Jan 13 2011 at 5:08 Other items, inspired by perusing some modern books, include the fact that Riemann proves "Lebesgue's criterion" for Riemann integrability on the page after he gives his definition of the integral, years before Lebesgue's birth. In the section on surface topology Riemann also uses the "Steinitz exchange" argument to prove invariance of the cardinality of a minimal set of homology generators, some 17 years before Steinitz' birth. – roy smith Jan 15 2011 at 15:32 2 I just read Euler's explanation of "Cardano's" cubic formula and it looked easy for the first time. x = (a+b) is always a solution of the special cubic x^3 = 3abx + (a^3+b^3). But every cubic x^3=fx+g has this form since f = 3ab, and g = a^3+b^3, determine the sum and product of a^3 and b^3, hence determine t = a^3, by solving the quadratic t^2 - gt + f^3/27 = 0. Then taking any cube root of a^3 gives a, and then b= f/3a. This also shows why you always need complex numbers to get all three roots this way, even when they are real, since you need all three cube roots of a^3. – roy smith May 31 2011 at 23:02 2 Dear Roy, I am pleased to see that you mention reading Zariski. Reading parts of his papers (including the one on simple points of varieties) was something I also did when trying to learn algebraic geometry. (Incidentally, his report on coherent sheaf cohomology from the 1950s remains one of the best summaries of the subject that I know of.) Best wishes, Matthew – Emerton Oct 1 2011 at 3:51 1 Dear Matthew, Thank you. One testimony to the timeless value of Zariski's work is that apparently you were not yet born when I got the same boost from that paper on simple points. I did not then understand well the sheaf theory report. Thank you for the tip to reconsider it! – roy smith Oct 2 2011 at 3:53 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. In algebraic number theory, the existence of a Frobenius element at any prime $p$ in a Galois extension $K/{\mathbf Q}$ is crucial. That is, for any prime ideal $\mathfrak p$ lying over $p$ in $K$ there is some $\sigma \in {\rm Gal}(L/K)$ which looks like the $p$-th power map mod $\mathfrak p$: $$\sigma(\alpha) \equiv \alpha^p \bmod \mathfrak p$$ for all $\alpha$ in the integers of $K$. (This can be jazzed up to the relative case, but I'll keep the base field as $\mathbf Q$ for simplicity here.) In any modern reference I have seen which shows the existence of $\sigma$, first the decomposition field is introduced in order to make a reduction to the case where the base field is the decomposition field. But if you look at the original proof by Frobenius (1896) it is different, using multivariable polynomials in an interesting way and there is no decomposition field. The argument fits in one page; see http://www.math.uconn.edu/~kconrad/blurbs/gradnumthy/frobeniuspf.pdf, where I consider a fairly general setup using the method of Frobenius. This nice proof by Frobenius has been completely forgotten, even though it handles the general case. (Frobenius himself worked with base field ${\mathbf Q}$.) What is the mathematical insight here? That you can prove this theorem without having to mention decomposition fields (which also makes it easier for students new to the subject to follow the proof). 
I found this essential when teaching a course on algebraic number theory since it meant I did not have to introduce decomposition fields in the lectures at all; they could safely be left to homework assignments, if I so chose. The proof is also a nice illustration of the usefulness of multivariable polynomials, especially considering that a lot of basic algebraic number theory only requires polynomials in one variable. - Another "old school" proof of this result is in Hilbert's Zahlbericht, which has been translated into english. – Noah Snyder Jun 15 2010 at 22:41 1 To add detail to Noah's comment, the proof in the English translation of the Zahlbericht is within the proof of Theorem 69 on pp. 82--83; the notation s_1,...,s_M used there refers to all the elements of the Galois group (M is the degree of the Galois extension of Q). The Zahlbericht's overarching influence is probably the reason nobody remembers the proof by Frobenius. – KConrad Jun 15 2010 at 23:13 "Read the masters" should not be taken as blanket advice, because some masters are much easier to read, or more congenial to modern mathematicians, than others. Some 19th century works that I have learned from are: 1. Dirichlet/Dedekind. Dirichlet's Lectures on Number Theory, edited and supplemented by Dedekind, are very clear and inspiring. They cover everything from the basics up to Dirichlet's own breakthroughs on class numbers and primes in arithmetic progressions. 2. Dedekind's Theory of Algebraic Integers. Dedekind wrote this because he was disappointed with the initial response to his theory of ideals. He goes to great pains to motivate the theory from the problem of unique prime factorization (using the now standard example of $\mathbb{Z}[\sqrt{-5}]$). 3. Poincare's papers on automorphic functions. Whether or not you want to know about automorphic functions, these papers are a great introduction to hyperbolic geometry, fuchsian groups, and Kleinian groups. Like Dedekind, Poincare writes very clearly and simply. Disclaimer. These are all books that I translated, so naturally I think they are good. If you can read them in the original language they are probably even better. I should add that I came to these books after being disappointed with certain 20th century books, which seemed to me too terse, unmotivated, and abstract. If you haven't had this experience, then you probably won't enjoy 19th century books. - 1 Dirichlet's Lectures on Number Theory is really excellent. – Noah Snyder Jun 16 2010 at 4:00 2 Could you provide a reference to one, maybe two specific papers of Poincaré you would consider introductory (w.r.t. automorphic forms)? – Konrad Voelkel Jun 16 2010 at 10:28 2 Konrad, the best two to start with are Poincare's first and second papers in Acta Mathematica, Volume I (1882), pp. 1-62 and pp. 193-294. In my translation, Poincare's Papers on Fuchsian Functions, they are entitled "Theory of Fuchsian Groups" (which contains the basic geometry and group theory) and "On Fuchsian Functions" (which introduces automorphic forms and functions). – John Stillwell Jun 16 2010 at 11:16 There is more than one reason to read "masters". One such reason is field-specific and can be phrased as "read the latest work right before a scientific revolution" (standard example is the large body of work by Cayley, Sylvester, Gordan, etc., in the pre-Hilbert classical invariant theory). Often such results are more powerful in very specific cases of interest. Another practical reason to read "masters" is to avoid embarrassment. 
Lots of (mostly minor) results are not mentioned in later treatises, so a number of people rediscover these results because they are either too lazy to read, or simply assume that "masters" couldn't have possibly be so smart to figure out these results back then... When going through the references in writing this survey, I read all 80 pages of J.J. Sylvester, A constructive theory of partitions, arranged in three acts, an interact and an exodion, Amer. J. Math. 5 (1882), 251–330. As a result, I discovered that a number of recent results were already proved there, sometimes by leaders in the field (let me not name them here - see the survey). - Which Gordon is that? – Abdelmalek Abdesselam Jul 9 2010 at 3:51 en.wikipedia.org/wiki/Paul_Albert_Gordan – Igor Pak Jul 9 2010 at 6:19 2 ah... Gordon with an `a' – Abdelmalek Abdesselam Jul 10 2010 at 18:40 Riemann's original paper Über die Anzahl der Primzahlen unter einer gegebenen Grösse (On the Number of Primes Less Than a Given Magnitude), 1859, is definitely a master well worth reading. In just 8 or so pages he shows how useful the zeta function is for questions about the primes, proves the functional equation, the explicit formula, and makes several deep and far-reaching conjectures (all proven except one infamous example). This is the paper which (arguably) began the extremely fruitful method of applying complex analysis to number theoretic questions. It lacks details in some places, but it contains a lot of invaluable motivation and exposition. It certainly helped me to understand why complex analysis is so useful, and how one might discover these connections for himself. EDIT: Just so you have no excuse, here's a link to an English translation: http://www.maths.tcd.ie/pub/HistMath/People/Riemann/Zeta/EZeta.pdf (Remember that he writes $tt$ for $t^2$ and $\Pi(s-1)$ for $\Gamma(s)$). - I interpret "read the masters" as advice to learn a theory from those who created it. Often they are mathematicians of higher caliber than those who follow, and hence they offer unique insights missed by later expositions; insights missed either because they were never understood, or as they are considered common knowledge, the "new normal". Examples I have in mind are Thurston's "Notes", Wall's and Browder's books on surgery theory, Gromov's "Hyperbolic groups". - I certainly have read a lot of classics, and have learned a lot (mostly about historical developments) even from Euclid's elements. When it comes to research mathematics, I'd at least like to mention that Weil got the idea for the Weil conjectures from reading Gauss's articles on biquadratic residues. As an example more in line with the question let me add that, in a letter to Goldbach dated April 15, 1749, Euler mentions that he has found, after quite some effort, a parametrized solution of the equation $xyz(x+y+z) = a$. Elkies found a way of deriving Euler's solution and found a simpler one, using methods from modern algebraic geometry. How Euler found his solution is still open. - I enjoyed reading Gauss's Allgemeine Auflösung der Aufgabe die Theile einer gegebenen Fläche auf einer andern gegebnen Fläche so abzubilden, dass die Abbildung dem Abgebildeten in den kleinsten Theilen ähnlich wird. This work was written in 1822 and deals with conformal mappings between the globe and the plane. I dare say I actually learned some mathematics from the experience too. 
- While I feel it's certainly worthwhile to read the masters (by which, I mean the initial works that created entire fields of mathematics by their founders), my reasoning is somewhat different then most. Reading the masters is really more for conceptual depth than actual mathematical enlightenment. There's a myth surrounding Abel's dictum that stems from the unreadability of the masters like Gauss as a measure of their nearly inhuman brilliance. This is a fallacy. The reason the masters are so difficult to read is because we are catching them with their pants down in the act of creation, i.e. they are groping towards the right notation and terminology, but aren't quite there yet. For example, it's pretty clear Riemann in his doctoral lecture was trying to explain the need for higher dimensional spaces that went beyond familiar three dimensional space ("multiply extended quantities") which preserved all the familiar properties of the usual Euclidean spaces, i.e. Kleinian transformations and calculus in local neighborhoods. The problem was without either linear algebra or the fundamentals of topology, it was next to impossible to express this idea clearly and precisely. He just ends up babbling on about what's needed. But all the same, Riemann recognized what was needed even if how to express it correctly was beyond his ability. A more recent and readily available example will clarify this further: One of my favorite books is Hassler Whitney's Geometric Integration Theory. I have friends in differential geometry who tell me it's a dinosaur, that his proof of the de Rham theorem is incredibly coarse and tedious. Yes, it is — but it has the advantage of being a DIRECT proof from the construction of simplexes on the boundary of an embedded manifold. I love the book because although Whitney's ideas were old fashioned, they were incredibly powerful IDEAS that allow us to tackle the subject concretely and with an amazing amount of insight. THAT'S what we get from reading the masters — their insight and depth of understanding that allows us to see beyond the machinery into why things are defined as they are. - 3 This post is entirely due to Andrew L. All I did was minor edits (mostly adding in spaces, a couple paragraph breaks). – Charles Staats Jun 16 2010 at 23:24 Thanks,Charles. I have proper spacing when I type it,but for some reason,it doesn't carry over when posted.I have no idea what I'm doing wrong yet. – Andrew L Jun 17 2010 at 0:58 Spivak's "comprehensive introduction to differential geometry" has reproductions of a few of Riemann's papers, and I think (and he may even say) that this is the reason he included them. Obviously, the book contains all the technical details one needs to learn basic differential geometry, but Riemann's discussions can be intuitively enlightening, even though they may look technically archaic now. Though you do say the Masters are more "difficult" to read--I would argue this isn't always the case; some times they can be much easier, particularly when careful rigor can obscure what's going on. – jeremy Jun 17 2010 at 1:44 2 @jeremy I agree-and I cite as good examples Banach's original treatise on functional analysis,Kuratowski's 2 volume treatise on point set topology and Cathedothorey and Hahn's treatises on real variables. – Andrew L Jun 17 2010 at 3:16 I am no big shot in reading mathematical papers, but there are a few examples where I think "Reading the masters" is really worthwhile: 1. 
I read the theory of irrationals from Hardy's Pure Mathematics and found mention of Dedekind's original and concise version "Stetigkeit und Irrationale Zahlen". I read the English translation of this paper and I must say it is better than anything we can find in modern books on real analysis simply because it is easier to comprehend and appreciate. I somehow find this approach to irrationals the simplest requiring least mathematical machinery like you can teach this to a 15 year old kid with just the working knowledge of rationals. 2. Next I read lot of proofs of Jacobi's Triple product identity from wikipedia and other sources, but none of them matches the simplicity of the one given in Fundamenta Nova of Jacobi. He just multiplies the factors on one side and gets the infinite series on the other side. Plain simple multiplication like 2x3 = 6. In fact his whole theory of Elliptic functions as presented in Fundamenta Nova is based on integral transformation theory and is much easier to introduce than the modern modular form approach. 3. Another example is Lambert's proof of irrationality of pi based on the continued fraction expansion of tan(x). The real gem is not the irrationality of pi, but the beautiful formula for tan(x) (much more beautiful than Taylor's series for sin(x) and cos(x)). Compared to his proof, the proof of irrationality of pi by Ivan Niven is quite short and simple to grasp, but is highly non-obvious. 4. Best of all examples is the theory of periods in Disquisitiones Arithmeticae by Gauss. The entire proof of construction of regular polygons based on theory of periods is not to be found in modern texts at least in the form accessible to first year undergraduates. This theory and its application to constructions of polygons is so exciting and awe inspiring. No parallel in modern papers. In my view the modern authors have made it a habit to work in too abstract terms so that only a post-graduate student of mathematics can understand the papers. There are few exceptions definitely and these are the ones I read online, but as a majority the books/papers on maths are increasingly becoming inaccessible to anyone other than a mathematics researcher. - Bolyai: Appendix, the first account of non-Euclidean geometry by one of its inventors. If nothing else, the beauty, clarity and brevity of exposition alone make this a must-read. - Although your rule #1 would bar the following, I don't think anybody would disagree with me if I call Grothendieck a master. I cannot say I have read the whole of EGA, but I did go through most of volume 1 and a big chunk of volume 2, and got a lot out of it. The clarity of exposition is superb. If only it had examples... - 5 @Alberto: there is an example buried in there! If only you waded through the dense super-general blow-ups in the final section of EGA II (the one badly written section) you'd have encountered an example in Remark 8.3.9: group schemes as examples of representable functors, with emphasis on the example GL_1 (so really killing a fly with a sledgehammer). – Boyarsky Jun 16 2010 at 0:22 1 Ha!..I can't even read Hartshrone... – Changwei Zhou Jun 16 2010 at 4:08 2 @Pencil: some parts of EGA (I, II, some parts of III) are much more readable than Hartshorne, in my opinion. Later on the level of generality makes it difficult to follow without losing sight of the applicability of the results to "everyday" situations. 
– Alberto García-Raboso Jun 16 2010 at 9:33 1 @Mariano: are you mixing up the crazy section 8 of EGA I (on "Chevalley schemes", the local ring business) with section 7.4 of EGA II (on algebraic curves, for which the only explicit curve is the projective line)? I think all explicit examples in EGA are only there as counterexamples to weakening hypotheses of results. For example see IV$_2$, 4.5.12(ii), 5.6.11; IV$_3$, 6.15.2 (actual equations in 3 unknowns), 14.1.5, 15.2.4; IV$_4$, 18.7.7. @Alberto: the trick to IV is reading backwards from interesting theorems. – BCnrd Jun 17 2010 at 4:59 1 In regard to Grothendieck's Tohoku, I still recall after several decades, his advice that "il est prudent" to assume that HOM(A,B) and HOM(X,Y) are disjoint unless A=X and B = Y! – roy smith Jan 17 2011 at 1:04 show 3 more comments Whether or not Alfredo Capelli's papers about the Capelli identity fit the rubric of "old, classic texts by great mathematicians" is open to debate. What is true is that they offer a very clear and surprisingly modern perspective on the "first fundamental theorem of invariant theory for the general linear group" (the terminology is due to Hermann Weyl, who used Capelli's method in his book "Classical groups", but in an oblique and essentially incomprehensible way). In particular, Capelli introduced the universal enveloping algebra of $\mathfrak{gl}_n$ and its center and computed the action of the special central elements that he constructed on the polynomial algebra over matrices, deriving the Gordan – Capelli decomposition (or ($GL_n, GL_m$)-duality). Roger Howe, beginning in the late 1980s, had produced the only faithful modern account of Capelli's approach that I was aware of at the time that I read Capelli's papers. Of course, since it was Roger who introduced me to this area, I read his 20th century exposition first! P.S. From the wording of the question, I get an impression that the rules have been rigged in order to confirm the favored hypothesis. I have several more worthy examples of "read the masters", but I feel as if I would need to argue the case more than I care to. - Indeed, the Poincare-Birkhoff-Witt theorem was first proved by Capelli (for gl_n). – Abdelmalek Abdesselam Jul 9 2010 at 3:50 He did, absolutely, and on the top of that, he proved the Harish-Chandra isomorphism (also for $\mathfrak{gl}_n$), although almost certainly not Kostant's theorem that $U(\mathfrak{gl}_n)$ is free over its center. What really shocked me is that the work of Capelli on the universal enveloping algebra is not even mentioned in either Hawkins' or Borel's books on the history of Lie theory. – Victor Protsak Jul 9 2010 at 4:26 Let me add the link to Capelli's article: digizeitschriften.de/main/dms/img/… – Abdelmalek Abdesselam Aug 4 2010 at 14:30 Boole, George (1854), An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, Macmillan Publishers, 1854. Reprinted with corrections, Dover Publications, New York, NY, 1958. - I have read Gauss Disquisitiones Arithemeticae (Tr AA Clarke SJ) and Euclid's elements (tr & commentary by Heath) - but both well after engaging with the fields they covered. To an extent the subject had moved on, and later insights had provided better definitions, postulates, generalisations, primitive concepts and theorems. But it did become clear in the reading what had motivated further development. 
The sheer ingenuity required by Euclid (reread over Christmas) to do arithmetic is a joy to behold, even though we would do it very differently now, and rather tedious on a second reading, Some of the foundational subtleties are also glossed over or rather taken for granted in more modern treatments. With both there is also a kind of pedagogical simplicity by which you arrive suddenly and without apparent effort at a significant result - I guess this is what constitutes true mastery. - Charles Sanders Peirce — Beginning with volumes 3 and 4 of his Collected Papers and covering pretty much everything he wrote on logic and mathematics that I could get my hands on. - 1. Aristotle, “The Categories”, Harold P. Cooke (trans.), pp. 1–109 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, UK, 1938. 2. Aristotle, “On Interpretation”, Harold P. Cooke (trans.), pp. 111–179 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, UK, 1938. 3. Aristotle, “Prior Analytics”, Hugh Tredennick (trans.), pp. 181–531 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, UK, 1938. ## Animadversion 1 One of the reasons to “Read the Masters!” is that you almost always learn how different their actual intellectual contexts, motivations, and reasoning were from what you tend to find in the reports of $2$nd, $3$rd, and $n$th hand sources. In the case of Aristotle, one of the first shocks — that I still distinctly remember — was discovering that he was a far less binary, dichotomous, or dualistic thinker than all my previous readings and teachers had told me. This has a bearing that goes far beyond the purely historical interest to the substantive issue of how deductive reasoning proper relates to what was later described as "inductive" and "abductive" inference. ## Animadversion 2 I call it “mathematics” when I see hints of form that inform and rule the appearances in view. The test of a “practically essential” source, ancient or modern, is much like the test of a chemical catalyst — it is not that we'd never get the desired product by any other reaction pathway, but that we'd be highly unlikely to get it anywhere near as easily in our lifetime. It is very often the forms that permeate our current airs of knowledge that we, like the proverbial fish in water, can hardly see for all their pervasion. ## Animadversion 3 Another reason to study our mathematical organon in embryo is that it makes it easier to see the early integuments and initial embeddings of topics that grow detached and remote from each other as they develop. By way of example, here's a draft of an essay I started on the precursors of category theory. - 9 What mathematical insights were gained from Aristotle not readily found in more modern sources (in accordance with rule #3 above)? – Boyarsky Jun 15 2010 at 16:46 Re: Boyarsky –– I added my response above as it was too long to fit here. – Jon Awbrey Jun 15 2010 at 17:22 6 I would classify this interest as philosophical (or maybe psychological or sociological), but not mathematical. – Alexander Woo Jun 15 2010 at 21:48 3 Re: Alexander Woo –– The request was, "You must have acquired important mathematical insights (not just historical insights) from the [source] that you feel that you would never have acquired had you restricted your reading to 20th-century and 21st-century literature". The word "feel" asks for a personal probability, and I gave my own best estimate. 
I have noticed that different folks use different charts for mapping the territories of logic, mathematics, philosophy, and science. Until we find an atlas for the manifold each of us is probably stuck with using his own coordinate system. – Jon Awbrey Jun 15 2010 at 23:54 I added some remarks above on the test of mathematical substance and the "never would have got it any other way" criterion for an essential source. – Jon Awbrey Jun 16 2010 at 13:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9526935815811157, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/45650/dimension-of-fractals
# Dimension of fractals I would like to know is it possible to generate a fractal in the plane with dimension higher than 2? If that is possible, please could you explain the intuition behind that? If it is not possible, is there some proof for that? Thank you in advance. Best regards, - ## 2 Answers Meaning of "dimension"? If you mean Hausdorff dimension, then NO. If $A \subset B$, then $\dim A \le \dim B$. And $\dim \mathbb R^2 = 2$. - Thank you for the answer. Yes, it is Hausdorff dimension. There are a few examples that puzzle me. Please let me introduce one of them, the one at kaziprst.com/fractal.gif. Assume $A$ is divided into two parts, and each of the parts is replaced by $B$ -- that is a fractal rule. If my understanding is correct, according to en.wikipedia.org/wiki/Fractal_dimension#Specific_definitions, $N(l) = 8$, and $l = 2$, implying $D = 3$. Please, could you point where I am making a mistake in reasoning? – Slobodan Jun 22 '11 at 19:10 1 Does that wiki mention the open set condition? That is what fails in your example. Without it, the simple dimension formula fails. Or, stated another way, the 8 parts that result will have considerable overlap. – GEdgar Jun 23 '11 at 0:13 I think I got what you mean. Thank you. Would you consider the following as a reasonable explanation why $\dim_H(X) \le d$ for $X \subset \mathbb{R}^d$: Hausdorff dimension does not count overlaps. Consider two fractals in $\mathbb{R}^2$ that produce the same drawings: the first one $F_1$ without overlaps, and the second one $F_2$ with overlaps. Then $\dim_H(F_1) = \dim_H(F_2)$, but clearly $\dim_H(F_1) \le 2$. Please, could you tell me what would be dimension of fractal that I gave as the example? Thank you. – Slobodan Jun 27 '11 at 1:21 Oh, I have forgotten to ask -- in order to computer Hausdorff dimension by using $N(l)$ stuff, we can only do that when fractal drawing does not overlap itself? – Slobodan Jun 27 '11 at 1:39 Or when there is minimal overlap, as specified in the "open set condition". – GEdgar Aug 1 '11 at 23:32 This is related to the concept of Hausdorff Dimension. Google this and be welcomed to a cool world of great stuff!! - – ncmathsadist Jun 16 '11 at 2:19
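For readers puzzled by the comment thread, the formula being applied there is the similarity dimension $D = \log N / \log l$ for a set built from $N$ copies of itself scaled down by a factor $l$; it equals the Hausdorff dimension only under the open set condition (essentially non-overlapping copies), which is exactly what fails in the example discussed above. A small illustrative computation (Python, standard textbook examples):

```python
import math

def similarity_dimension(n_copies, scale_factor):
    """D = log(N) / log(l) for a self-similar set made of N copies scaled by 1/l.
    This equals the Hausdorff dimension only under the open set condition."""
    return math.log(n_copies) / math.log(scale_factor)

print(similarity_dimension(3, 2))   # Sierpinski triangle: ~1.585
print(similarity_dimension(4, 3))   # Koch curve: ~1.262
print(similarity_dimension(8, 2))   # the construction from the comments: the formula
                                    # gives 3, but the 8 copies overlap heavily, so 3 is
                                    # not the Hausdorff dimension of the resulting plane set
```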
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945472002029419, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/ramsey-theory+permutations
# Tagged Questions

### Algorithm to find a permutation that contains the fewest possible monotone subsequences of length $k$

Fix natural numbers $k,n$, with $k<n$. I want to find a permutation in $S_n$ that contains the fewest monotone (increasing or decreasing) subsequences of length $k$. For example the permutation ...
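The snippet above is cut off, but the problem it poses can be made concrete with a small brute-force search. One classical fact worth keeping in mind (not stated in the snippet itself): by the Erdős–Szekeres theorem, every permutation of more than $(k-1)^2$ elements contains at least one monotone subsequence of length $k$, so for $n > (k-1)^2$ the minimum count cannot be zero. An illustrative, exponential-time sketch in Python:

```python
from itertools import combinations, permutations

def count_monotone(perm, k):
    """Count increasing or decreasing subsequences of length k in perm."""
    count = 0
    for idx in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idx]
        if vals == sorted(vals) or vals == sorted(vals, reverse=True):
            count += 1
    return count

def best_permutation(n, k):
    """Exhaustively find a permutation of 0..n-1 with the fewest monotone
    length-k subsequences (only feasible for very small n)."""
    return min(permutations(range(n)), key=lambda p: count_monotone(p, k))

print(best_permutation(5, 3))   # a length-5 permutation minimising monotone triples
```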
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8975542187690735, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/6772/could-a-very-long-password-theoretically-eliminate-the-need-for-a-slow-hash
# Could a very long password theoretically eliminate the need for a slow hash? Before I provide details, I want to clarify that I am not looking to implement this practically, but I'm only asking to get a better understanding. The way I currently understand it, we use slow hashes for passwords because there are too few possibilities in passwords. This means people could calculate all the potential passwords (or a whole lot of them) fairly quickly when using a performance hash, and find what made the digest. E.g. if in an imaginary world where digests are 3 bytes and slow hashes last a decade my password is "abb" and it results in digest "x3x", someone who got this hash could start retrieving my password by hashing "aaa", then "aab", then "abb" and see that the result is "x3x", and so my password must be "abb". With a slow hash, he wouldn't have time to calculate all three results. This makes sense for short passwords, but I thought the strength in modern block ciphers lied in the key space. For Two-fish for example, it can have a key from any of 2^256 different variations. Trying all those variations would just be insane, and so it can't be done (for now). Given these two things, wouldn't the strength of a performance hash as a password storage tool increase with each letter? Alphanumeric letters cover 32 to 127 in ASCII, so that's 6 bits entropy per character (I think). If that's true, would a password of 43 characters be secure when hashed with say, SHA 256? - ## 4 Answers Despite the impracticalities of using a 43-char password, I would say yes, such a long password would be secure when hashed with just SHA256. Assuming 127 possible ascii characters, a password of 8 characters would require an attacker to search about 2^56 possibilites (viable), whereas a password of 43 characters would require searching about 2^300 possibilities (infeasible, considering that 2^128 for encryption using a 128-bit key is infeasible). Having such a long password would make a slow-hash redundant. With the case of a 43-character password, I doubt that rainbow tables would be issue either. - 3 Only if each character of the password is randomly (and independently) selected from the relevant alphabet, which is typically not the case for humans. Of course, a 43-character password does provide a huge safety margin, but some people will still use "123123123..." – Thomas Mar 21 at 5:04 Yeah, I suppose so. But as was made clear, this question was only ever in a hypothetical context, not a practical one. It's unreasonable to ask any user to remember and/or input a password of 43 chars, as such no user will use "123123123...". – hunter Mar 22 at 4:34 1 "would" use, then. But even if the question was hypothetical, your entropy estimates do require the above condition (which I feel is important to state clearly, as this is a common question in cryptography), which is not achievable by humans (for instance, many people would use a sequence of words for such a long password, dropping the entropy to maybe 2-3 bits per character). Unless in the hypothetical scenario, humans are also robots, in which case fair enough. But no matter what humans choose, yes, ultimately such a long password would be almost certainly secure no matter what. – Thomas Mar 22 at 5:19 1 Yep, I know where you're coming from - you make a good point. – hunter Mar 22 at 5:26 The "slowness" of a hash is general measured by it's work factor. 
The underlying cryptographic hash function itself is rarely very slow, so the password hash function built on top of it artificially slows the crypto function down via iteration. For each hash derivation, there are a certain number of iterations that have to be performed. For example, a typical bcrypt hash may use a work factor of 12 (aka, $2^{12}$ iterations), meaning the attacker does $2^{12}$ times as much work as they would have if there were no iterations used. To eliminate the need for the slow password function, account for the desired work factor of attacking the password in the password itself. If you want to match $2^{12}$ iterations worth of "slowness", add $2^{12}$ = 12 bits of work to the password itself. (12 bits, this rounds to the equivalent of 2 random alphanumeric ASCII characters.) The result is the same amount of brute-force work for the attacker. - The answer is generally yes because hash functions are collision-resistant. But an attacker doesn't really need to find the exact password a user has entered, you only need to find a preimage that will give you the same output as the password. This of course prohibits him from using that password to get access to another website the victim is using (assuming a healthy password managment practice is in place). Finding a collision basically means that "abc" may well have the same output as "supercalifragilisticexpialidocious" in which case the point of having a long password is moot. Good news however, the chances of finding a collision in most of the algorithms used (or should be used) in the real word are negligible. But a hash algorithm with a long password, especially when used with salting (whose whole purpose is to increase the attacker's search space) should definitely be more secure for practical applications. - – rath Mar 21 at 4:20 It's not a very long password which matters, but a very random password -- length is just needed in order to make enough room for randomness. We need slow hashing because passwords have relatively low entropy: it is possible to enumerate potential passwords with a good chance of hitting the password chosen by an average user. Slow hashing tries to make such an exhaustive search more expensive. If you use passwords with high enough entropy (ideally 128 bits, in practice 80 bits or so are enough), then exhaustive search in the password ceases to be a viable strategy for the attacker, and hashing no longer needs to be slow. You don't have to make passwords very long to get a lot of entropy; if you use 64 possible signs (uppercase and lowercase letters, digits, and a couple of other signs), then 15 characters are enough to achieve a very comfortable 90-bit entropy, provided that you generate 15 totally random characters. If you generate your password as a concatenation of meaningful words, then you will need a much longer password to reach such an entropy level (but maybe a long sequence of words would be easier to remember). When passwords have very high entropy, we tend not to call then "passwords", but keys. A "password" is something which fits in a normal human brain. -
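To make the arithmetic in the answers above concrete, here is a small Python sketch (not part of the original thread; the alphabet sizes, the iteration count and the toy chained-SHA-256 "slow hash" are illustrative assumptions only, and a real system should use a dedicated construction such as bcrypt, scrypt or PBKDF2):

```python
import hashlib
import math
import os

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Entropy of a password whose characters are chosen uniformly at random."""
    return length * math.log2(alphabet_size)

# The numbers quoted in the first answer (assuming 127 possible ASCII characters):
print(entropy_bits(127, 8))    # ~55.9 bits  -> roughly the "2^56" that is searchable
print(entropy_bits(127, 43))   # ~300.5 bits -> the "2^300" that is out of reach

# The work-factor argument from the second answer: 2^12 iterations buy ~12 bits,
# which is about two extra random alphanumeric characters (62-character alphabet).
print(12 / math.log2(62))      # ~2.0

# A toy "slow hash" built by chaining SHA-256 2^12 times. This only illustrates
# where the attacker's extra factor comes from; it is NOT a real KDF.
def toy_slow_hash(password: bytes, salt: bytes, iterations: int = 2**12) -> bytes:
    digest = salt + password
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

print(toy_slow_hash(b"hunter2", os.urandom(16)).hex())
```

The point of the last function is only to show that the iteration count multiplies the attacker's cost by the same factor that two additional random alphanumeric characters would.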
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9446516633033752, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/46773/list
## Return to Answer

3 minor changes.

This is to summarize what were discussed in the comments, so the title will not be listed as unanswered. The linear subspace $S$ of $c_0(\mathbb{Z})$ is equal to the convolution product of two copies of $\ell^2(\mathbb{Z})$. More precisely, $\lbrace a_n \rbrace$ is in $S$ if and only if there exist two sequences $\lbrace b_n \rbrace$ and $\lbrace c_n \rbrace$ in $\ell^2(\mathbb{Z})$ such that $$a_n=\sum_{k=-\infty}^\infty b_k c_{n-k}$$ for all $n$. This follows since every function in $L^1[0,2\pi]$ is a product of two functions in $L^2[0,2\pi]$, and that for any functions $f,g$ in $L^2[0,2\pi]$ one has, by Parseval identity, $$\frac{1}{2\pi}\int_{-\pi}^\pi f(x)g(x)e^{-inx}\,dx=\frac{1}{2\pi}\int_{-\pi}^\pi f(x)\overline{h(x)}e^{-inx}\,dx=\sum_{k=-\infty}^\infty \hat{f}(k) \hat{g}(n-k)$$ where $h(x)=\overline{g(x)}$. (One also uses that the mapping that maps each $f$ in $L^2[0,2\pi]$ to its Fourier coefficient sequence in $\ell^2(\mathbb{Z})$ is a surjective isomorphic isometry.)

2 added 78 characters in body

This is to summarize what were discussed in the comments, so the title will not be listed as unanswered. The linear subspace $S$ of $c_0(\mathbb{Z})$ is equal to the convolution product of two copies of $\ell^2(\mathbb{Z})$. More precisely, $\lbrace a_n \rbrace$ is in $S$ if and only if there exist two sequences $\lbrace b_n \rbrace$ and $\lbrace c_n \rbrace$ in $\ell^2(\mathbb{Z})$ such that $$a_n=\sum_{k=-\infty}^\infty b_k c_{n-k}$$ for all $n$. This follows since every function in $L^1[0,2\pi]$ is a product of two functions in $L^2[0,2\pi]$, and that for any functions $f,g$ in $L^2[0,2\pi]$ one has $$\frac{1}{2\pi}\int_{-\pi}^\pi f(x)g(x)e^{-inx}\,dx=\frac{1}{2\pi}\int_{-\pi}^\pi f(x)\overline{h(-x)}e^{-inx}\,dx=\sum_{k=-\infty}^\infty \hat{f}(k) \hat{h}(n-k)$$ where $h(x)=\overline{g(-x)}$. (One also uses that the mapping that maps each $f$ in $L^2[0,2\pi]$ to its Fourier coefficient sequence is an isomorphic isometry.)

1

This is to summarize what were discussed in the comments, so the title will not be listed as unanswered. The linear subspace $S$ of $c_0(\mathbb{Z})$ is equal to the convolution product of two copies of $\ell^2(\mathbb{Z})$. More precisely, $\lbrace a_n \rbrace$ is in $S$ if and only if there exist two sequences $\lbrace b_n \rbrace$ and $\lbrace c_n \rbrace$ in $\ell^2(\mathbb{Z})$ such that $$a_n=\sum_{k=-\infty}^\infty b_k c_{n-k}$$ for all $n$. This follows since every function in $L^1[0,2\pi]$ is a product of two functions in $L^2[0,2\pi]$, and that for any functions $f,g$ in $L^2[0,2\pi]$ one has $$\frac{1}{2\pi}\int_{-\pi}^\pi f(x)\overline{g(-x)}e^{-inx}\,dx=\sum_{k=-\infty}^\infty \hat{f}(k) \hat{g}(n-k).$$ (One also uses that the mapping that maps each $f$ in $L^2[0,2\pi]$ to its Fourier coefficient sequence is an isomorphic isometry.)
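A quick numerical sanity check of the identity in the final revision is easy to run (this is not part of the revision history; the truncation level $N$, the sample count and the random coefficients are arbitrary illustrative choices). Build two trigonometric polynomials from finitely supported sequences $b,c$ and compare the $n$-th Fourier coefficient of their product with the convolution $\sum_k b_k c_{n-k}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                      # coefficients supported on -N..N
ks = np.arange(-N, N + 1)
b = rng.normal(size=2 * N + 1) + 1j * rng.normal(size=2 * N + 1)
c = rng.normal(size=2 * N + 1) + 1j * rng.normal(size=2 * N + 1)

def trig_poly(coeffs):
    # f(x) = sum_k coeffs[k] e^{ikx}, so that hat{f}(k) = coeffs[k]
    return lambda x: sum(a * np.exp(1j * k * x) for a, k in zip(coeffs, ks))

f, g = trig_poly(b), trig_poly(c)

# left-hand side: n-th Fourier coefficient of f*g, as a Riemann sum over one period
n = 3
M = 4096
x = 2 * np.pi * np.arange(M) / M
lhs = np.mean(f(x) * g(x) * np.exp(-1j * n * x))

# right-hand side: the convolution sum_k b_k c_{n-k}
rhs = sum(b[k + N] * c[(n - k) + N] for k in ks if -N <= n - k <= N)

print(abs(lhs - rhs))      # essentially zero, ~1e-13
```

The printed difference is at machine-precision level, consistent with the displayed identity.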
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9625808000564575, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/259330/finding-a-covering-space-for-p-times-p
# Finding a covering space for $P \times P$

Let $P$ be a real projective plane. Since the fundamental group of $P \times P$ is $Z_2 \times Z_2$ (an abelian group with 5 subgroups), there exist five covering spaces. What is the explicit covering corresponding to $\langle(1,1)\rangle$? (I found all other 4 cases.) -

## 1 Answer

I don't know that this "presentation" is too much more familiar, but the covering space you're interested in is diffeomorphic to $S^2\times S^2/\sim$ where $(x,y) \sim \pm(x,y)$. The projection map sends $[(x,y)]$ to $([x],[y])$. Now, $S^2\times S^2/\sim$ is diffeomorphic to another relatively well known space: the Grassmannian of unoriented $2$-planes in $\mathbb{R}^4$. This space is not homotopy equivalent to any of the other covering spaces of $\mathbb{R}P^2\times\mathbb{R}P^2$. Its fundamental group is $\mathbb{Z}/2\mathbb{Z}$, so it could only potentially be homotopy equivalent to $\mathbb{R}P^2\times S^2$ or $S^2\times\mathbb{R}P^2$. On the other hand, both of these second two spaces are nonorientable, while $S^2\times S^2/\sim$ is orientable. -
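For completeness (this bookkeeping is standard covering-space theory and not part of the original exchange), the five subgroups of $\pi_1(P\times P)\cong Z_2\times Z_2$ match the five connected covers as follows; the deck group of the universal cover $S^2\times S^2$ acts by the antipodal map on each factor:

$$\begin{array}{ll} \text{subgroup } H\subseteq Z_2\times Z_2 & \text{cover with } \pi_1\cong H\\ \hline 0 & S^2\times S^2\\ \langle(1,0)\rangle & \mathbb{R}P^2\times S^2\\ \langle(0,1)\rangle & S^2\times \mathbb{R}P^2\\ \langle(1,1)\rangle & (S^2\times S^2)/\bigl((x,y)\sim(-x,-y)\bigr)\\ Z_2\times Z_2 & \mathbb{R}P^2\times \mathbb{R}P^2 \end{array}$$

The last row is the space itself, and the fourth row is the quotient described in the answer, i.e. the Grassmannian of unoriented $2$-planes in $\mathbb{R}^4$.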
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9263458251953125, "perplexity_flag": "head"}
http://mathoverflow.net/questions/70416?sort=oldest
## Reference for decomposition in invariants and derived subgroup in a semidirect product of abelian groups ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $A$ and $B$ be finite abelian groups with coprime order, and let $G=A\rtimes{}B$ be a semidirect product, via any action. Let $C\subseteq{}A$ be the subgroup of the elements of $A$ which are fixed by the action of $B$, so that $C=Z(G)\cap{}A$. Then we have $$A = C \oplus G'.$$ Is there a quick reference for this fact? Please note that i'm NOT asking for a proof of this simple (and well known i guess?) fact, i just need a reference to quickly point to in a note, to avoid making it cumbersome. Unless there is a one-line proof that i missed. Thanks for the attention! - 2 I don't know what you would consider a oneliner but the usual proof in the context of ordinary representation theory works. We get an action of $e=1/|B|\sum_{b\in B}b$ on $A$ and it gives the projection on $C$. – Torsten Ekedahl Jul 15 2011 at 14:58 Yeah, this is the proof i had in my mind. But i would feel guilty for not adding some verification that the kernel of the projection is exactly the derived subgroup, is it also immediate? without having to talk at all about irreducible representations? I prefer to be quoting some one else's proof, so that if he skips the details i will not feel guilty for him ;) – Maurizio Monge Jul 15 2011 at 17:44 Yes, it is immediate, we have $e^2=e$ and $be=e$. The first shows that $A$ is the direct sum of the image and the kernel of $e$, the second that the image lies in $C$ and and it is clear that $e$ is the identity on $C$. – Torsten Ekedahl Jul 15 2011 at 19:06 My question was: why is the kernel exactly $G'$? $G'$ is clearly contained, but why does equality hold? (again, i know how to prove this, just trying to understand why it is supposed to be obvious) – Maurizio Monge Jul 15 2011 at 22:27 The complement has no quotient on which $B$ acts trivially, which means that it is spanned by elements of the form $ba-a$ which are commutators in $G$ so the complement is contained in $G'$. On the other hand $G$ modulo the complement is clearly commutative so that $G'$ is contained in it. – Torsten Ekedahl Jul 16 2011 at 14:30 show 1 more comment ## 1 Answer Theorem 2.3 in Chapter 5 of the book "Finite Groups" by Daniel Gorenstein states that if $A$ is a $p'$-group of automorphisms of an abelian $p$-group $P$, then $P = C_P(A) \times [P,A]$ (all groups here are assumed to be finite). You can deduce your result easily from this. - Thanks, the reference is perfect! – Maurizio Monge Jul 15 2011 at 17:47
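For readers who would rather see the one-line argument from the comments spelled out than cited, here is a sketch expanding Torsten Ekedahl's remarks above (it also uses the standard fact that $G'\subseteq A$, which holds because $G/A\cong B$ is abelian). Write $A$ additively and set $$e(a)=\frac{1}{|B|}\sum_{b\in B} b\cdot a,$$ which makes sense because $|A|$ and $|B|$ are coprime, so multiplication by $|B|$ is invertible on $A$. One checks $b\cdot e(a)=e(a)$ for all $b\in B$, hence $e^2=e$, so $A=\operatorname{im}e\oplus\ker e$. The relation $b\cdot e(a)=e(a)$ gives $\operatorname{im}e\subseteq C$, and $e$ is the identity on $C$, so $\operatorname{im}e=C$. Every $a\in\ker e$ satisfies $a=\frac{1}{|B|}\sum_{b\in B}(a-b\cdot a)$, and each $a-b\cdot a$ lies in $G'$ (up to sign it is the commutator $bab^{-1}a^{-1}$ written additively), so $\ker e\subseteq G'$. Conversely $G/\ker e\cong C\times B$ is abelian, since $B$ acts trivially on $C$, so $G'\subseteq\ker e$. Hence $A=C\oplus G'$.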
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507415890693665, "perplexity_flag": "head"}
http://mathoverflow.net/questions/36734/orthonormal-basis-for-non-separable-inner-product-space
## Orthonormal basis for non-separable inner-product space ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Suppose X is an inner product space, with Hilbert space completion H (actually, I'm interested in the real scalar case, but I doubt there's any difference). If H is separable, then so is X, and I can find a (countable or finite) orthonormal basis of H inside X. Indeed, start with some countable subset Y of X which is dense in H. Then, by induction, we can move to a linearly independent subset of Y, and then apply Gram–Schmidt, again by induction. The point (to me, anyway) is that at any stage, we never take limits, and so we never leave X. Now, what happens if H is not assumed separable? I've tried to use a Zorn's Lemma argument, but I keep end up wanting to take limits (or, rather, infinite sums) which gives me an orthonormal basis (in the generalised, non-countable, sense) in H, but I cannot ensure that it's in X. Am I just missing something obvious, or is there a slight technicality here...? - 2 Something here sounds fishy. If $X$ is an incomplete inner product space and $H$ is its completion then an orthonormal basis for $H$ which consists of elements of $X$ is in particular an orthonormal basis for $X$, but some incomplete inner product spaces (which are necessarily not separable) do not have an orthonormal basis. – Mark Schwarzmann Aug 26 2010 at 9:53 Ah, well that would give a counter-example for sure! Do you have a reference? – Matthew Daws Aug 26 2010 at 9:57 2 Ah, Google comes to the rescue: secure.wikimedia.org/wikipedia/en/wiki/… – Matthew Daws Aug 26 2010 at 10:03 Mark: if you write that up into an answer, I'll accept it (as it was news to me that non-separable (incomplete) inner-product spaces might fail to have an o.n. basis. – Matthew Daws Aug 26 2010 at 10:07 1 Sorry, last commment. If you access, a better reference is jstor.org/stable/2318908 – Matthew Daws Aug 26 2010 at 10:12 show 1 more comment ## 4 Answers This is Problem 54 in Halmos' "A Hilbert Space Problem Book". However, I think this is a concrete counterexample. [Please let me know if not viewable.] - As Mark hasn't typed his comment into an answer, I'm accepting this. Thanks all. – Matthew Daws Aug 28 2010 at 12:24 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. - On the arXiv this (2010-09-09) morning: 1009.1441 [ps, pdf, other] Title: Inner product space with no ortho-normal basis without choice. Authors: Saharon Shelah Primary Subject: math.LO We prove in ZF that there is an inner product space, in fact, nicely definable with no orthonormal basis. - I don't think we can do that unless you can make sense out of uncountable sums. The Gram-Schmidt algorithm cannot transform a basis into an orthogonal one unless the original basis has no limit ordinal in its well-ordering. For example, take X as the space of square summable sequences. We can construct a Hamel basis by adding vectors to the set of standard basis vectors (1 at one position and 0 everywhere else). Obviously any non-zero vector in X cannot be orthogonal to every standard basis vector, so the Hamel basis cannot be made orthogonal. (In other words, if these standard basis vectors are considered the "first" vectors in our basis, the least upper bound of all standard basis vectors cannot be orthogonalized.) 
This shows that it may not be possible just to have an uncountable set of orthogonal vectors. My argument is not comprehensive. It might be the case that some special choices of the first "countably many" vectors may lead to a valid construction of an uncountable set of orthogonal vectors. However, there may be nice spaces that have the property you mentioned. I believe the following is an example: Take X as a space of functions $f:R \to R$ such that $f^{-1}(0)$ is the complement of a countable set and $\sum_{f(x) \ne 0} f(x)^2$ is finite. X is pretty much like the space of square-summable sequences, but each sequence is indexed by a real number instead of a positive integer. We define standard basis vectors as functions that are 1 at only one point and 0 everywhere else. [I believe] these standard basis vectors form a complete orthogonal basis in your sense. -
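As a purely illustrative aside (not part of the thread, and silent on the non-separable issue the answers address), the finite-dimensional step that the question's induction repeats looks like this numerically; each pass uses only finitely many inner products and no limits:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a finite list of vectors, skipping dependent ones.

    Every step is a finite computation: projections onto the vectors
    produced so far, followed by one normalization.
    """
    basis = []
    for v in vectors:
        w = v - sum(np.vdot(u, v) * u for u in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-12:          # drop vectors that are (numerically) dependent
            basis.append(w / norm)
    return basis

rng = np.random.default_rng(1)
vecs = [rng.normal(size=5) for _ in range(7)]      # 7 vectors in R^5
onb = gram_schmidt(vecs)
print(len(onb))                                     # 5
print(np.round([[np.vdot(u, v) for v in onb] for u in onb], 10))  # identity matrix
```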
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9177252650260925, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/15978/how-can-i-understand-the-result-mathematica-returned-from-dsolve
# How can I understand the result Mathematica returned from DSolve? I have solved an ODE using DSolve[], but I have a problem with understanding the solution. In general the solution is in the form: InverseFunction[[many expressions using # and #1]&][g x+C[1]] g is constant What does #1 and & mean and what does mean InverseFunction in this context? PS: The solution is: $\text{InverseFunction}\left[\frac{40 g \text{$\#$1} s^{2/5}+\frac{3 Q^{3/5} \left(\left(1+\sqrt{5}\right) \sqrt[5]{B} g n^{9/5}-\left(-1+\sqrt{5}\right) \sqrt[5]{Q} s^{9/10}\right) \log \left(\frac{1}{2} \left(-1+\sqrt{5}\right) \sqrt[5]{B} \sqrt[5]{n} \sqrt[5]{Q} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}+B^{2/5} \sqrt[5]{s} \text{$\#$1}^{2/3}+n^{2/5} Q^{2/5}\right) \sqrt[10]{s}}{B^{4/5} n^{6/5}}+\frac{3 Q^{3/5} \left(\left(-1+\sqrt{5}\right) g n^{9/5} \sqrt[5]{B}+\left(1+\sqrt{5}\right) \sqrt[5]{Q} s^{9/10}\right) \log \left(\frac{1}{2} \left(1+\sqrt{5}\right) \sqrt[5]{B} \sqrt[5]{n} \sqrt[5]{Q} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}+B^{2/5} \sqrt[5]{s} \text{$\#$1}^{2/3}+n^{2/5} Q^{2/5}\right) \sqrt[10]{s}}{B^{4/5} n^{6/5}}+\frac{3 Q^{3/5} \left(\left(1+\sqrt{5}\right) \sqrt[5]{Q} s^{9/10}-\left(-1+\sqrt{5}\right) \sqrt[5]{B} g n^{9/5}\right) \log \left(-\frac{1}{2} \left(1+\sqrt{5}\right) \sqrt[5]{B} \sqrt[5]{n} \sqrt[5]{Q} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}+B^{2/5} \sqrt[5]{s} \text{$\#$1}^{2/3}+n^{2/5} Q^{2/5}\right) \sqrt[10]{s}}{B^{4/5} n^{6/5}}-\frac{3 Q^{3/5} \left(\left(1+\sqrt{5}\right) g n^{9/5} \sqrt[5]{B}+\left(-1+\sqrt{5}\right) \sqrt[5]{Q} s^{9/10}\right) \log \left(-\frac{1}{2} \left(-1+\sqrt{5}\right) \sqrt[5]{B} \sqrt[5]{n} \sqrt[5]{Q} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}+B^{2/5} \sqrt[5]{s} \text{$\#$1}^{2/3}+n^{2/5} Q^{2/5}\right) \sqrt[10]{s}}{B^{4/5} n^{6/5}}+\frac{6 \left(\sqrt{10-2 \sqrt{5}} \sqrt[5]{B} g n^{9/5} Q^{3/5} \sqrt[10]{s}-\sqrt{2 \left(5+\sqrt{5}\right)} Q^{4/5} s\right) \tan ^{-1}\left(\frac{4 \sqrt[5]{B} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}-\left(-1+\sqrt{5}\right) \sqrt[5]{n} \sqrt[5]{Q}}{\sqrt{2 \left(5+\sqrt{5}\right)} \sqrt[5]{n} \sqrt[5]{Q}}\right)}{B^{4/5} n^{6/5}}+\frac{6 \left(\sqrt{10-2 \sqrt{5}} g n^{9/5} Q^{3/5} \sqrt[10]{s} \sqrt[5]{B}+\sqrt{2 \left(5+\sqrt{5}\right)} Q^{4/5} s\right) \tan ^{-1}\left(\frac{4 \sqrt[5]{B} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}+\left(-1+\sqrt{5}\right) \sqrt[5]{n} \sqrt[5]{Q}}{\sqrt{2 \left(5+\sqrt{5}\right)} \sqrt[5]{n} \sqrt[5]{Q}}\right)}{B^{4/5} n^{6/5}}+\frac{12 \left(\sqrt[5]{B} g n^{9/5} Q^{3/5} \sqrt[10]{s}-Q^{4/5} s\right) \log \left(\sqrt[5]{n} \sqrt[5]{Q}-\sqrt[5]{B} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}\right)}{B^{4/5} n^{6/5}}-\frac{6 \left(\sqrt{2 \left(5+\sqrt{5}\right)} \sqrt[5]{B} g n^{9/5} Q^{3/5} \sqrt[10]{s}-\sqrt{10-2 \sqrt{5}} Q^{4/5} s\right) \tan ^{-1}\left(\frac{4 \sqrt[5]{B} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}-\left(1+\sqrt{5}\right) \sqrt[5]{n} \sqrt[5]{Q}}{\sqrt{10-2 \sqrt{5}} \sqrt[5]{n} \sqrt[5]{Q}}\right)}{B^{4/5} n^{6/5}}-\frac{6 \left(\sqrt{2 \left(5+\sqrt{5}\right)} g n^{9/5} Q^{3/5} \sqrt[10]{s} \sqrt[5]{B}+\sqrt{10-2 \sqrt{5}} Q^{4/5} s\right) \tan ^{-1}\left(\frac{4 \sqrt[5]{B} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}+\left(1+\sqrt{5}\right) \sqrt[5]{n} \sqrt[5]{Q}}{\sqrt{10-2 \sqrt{5}} \sqrt[5]{n} \sqrt[5]{Q}}\right)}{B^{4/5} n^{6/5}}-\frac{12 \left(g n^{9/5} Q^{3/5} \sqrt[10]{s} \sqrt[5]{B}+Q^{4/5} s\right) \log \left(\sqrt[5]{B} \sqrt[10]{s} \sqrt[3]{\text{$\#$1}}+\sqrt[5]{n} \sqrt[5]{Q}\right)}{B^{4/5} n^{6/5}}}{40 s^{7/5}}\&\right]\left[g x+c_1\right]$ - 1 # is explained here 
reference.wolfram.com/mathematica/ref/Slot.html first example answers your question about #1 and #2. For pure function please see reference.wolfram.com/mathematica/tutorial/PureFunctions.html which explains it well. For InverFunction please see reference.wolfram.com/mathematica/ref/InverseFunction.html – Nasser Dec 9 '12 at 5:24 ## 1 Answer Some DEs are more simple to solve for the dependent variable rather than the independent variable, for example $$\frac{dy}{dx} = y \quad\implies\quad \log(y)=x+c$$ from which you can obtain the solution for $y$ in terms of $x$ by using an inverse function, in this case $y=\exp(x+c)$. Not all examples are this easy to invert, so Mathematica sometimes has to leave the solution written in terms of `InverseFunction`. The `#` and `&` are part of Mathematica's pure (or anonymous) function notation. In particular `&` occurs at the end of a pure function and `#=#1` represents the first slot of the function. For example ````(#^2 + 1&) ```` is equivalent to ````Function[{x}, x^2 + 1] ```` and acts upon its arguments like any other function ````(#^2 + 1&)[t] == Function[{x}, x^2 + 1][t] == t^2 + 1 ```` So, your DE must have yielded a complicated algebraic expression $f(y)=x+c$ that needs to be solved for the variable that you are interested in, $y=f^{(-1)}(x+c)$, which Mathematica can only perform symbolically using `InverseFunction`. - lang-mma
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8672526478767395, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/76838?sort=newest
A question about quotient singularity Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) If a finite group G acts on a smooth variety X over complex number field and the fixed locus of G is smooth subvariety of codimension 1, will the resulting quotient variety be smooth? What will happen if the fixed locus has lager codimension ? thanks. - Try $X=\mathbb{C}$ and $G=\{1,-1\}$ acting multiplicatively – Michael Sep 30 2011 at 10:49 3 Michael, your example is smooth, but $\mathbb{C}^2/\pm 1$ is not. – Donu Arapura Sep 30 2011 at 11:38 @Donu: you're absolutely right! My mistake was in thinking of $\mathbb{C}$ as $\mathbb{R}^2$. – Michael Sep 30 2011 at 17:49 @Donu and Michael, about the example $\mathbb C^2/ \pm$: It looks to me like $G$ is generated by a pseudo reflection, so by the answer below the quotient should be smooth. And indeed the ring of invariants is $\mathbb C[x,y^2]$, which is a polynomial ring. Am I missing something? – Drew Jan 11 at 17:19 1 Answer In general, for a finite group $G$ acting faithfully on a smooth variety $X$, whether or not the quotient is smooth is determined by the Chevalley-Shephard-Todd theorem: For $x \in X$, let $G_x\subset G$ be the stabilizer of $x$. Then a necessary and sufficient condtion for the quotient to be smooth is that each $G_x$ should be generated by pseudoreflections i.e. elements which fix pointwise a codimension $1$ subvariety of $X$ containing $x$. In particular, one cannot just look at the fixed locus of $G$ to determine whether the quotient is smooth. It could well be empty but the quotient could still be singular. - Just to repeat what Ulrich said, it is not enough to look at the fixed locus, i.e., the set of points fixed by every element of $G$. You have to look at the points with nontrivial stabilizer, i.e., the set of points fixed by at least one non-identity element. – Jason Starr Sep 30 2011 at 12:54 Thanks,this is extremely useful. – strygwyr Sep 30 2011 at 14:37
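To make the contrast raised in the comments explicit (this worked example is standard and not part of the original thread): take $G=\{\pm 1\}$ acting on $\mathbb{C}^2$ by $(x,y)\mapsto(-x,-y)$. The only point with nontrivial stabilizer is the origin, and $-\mathrm{id}$ fixes only that point, so it is not a pseudoreflection; the invariant ring is $$\mathbb{C}[x,y]^{G}=\mathbb{C}[x^2,xy,y^2]\cong\mathbb{C}[u,v,w]/(uw-v^2),$$ the quadric cone, which is singular at the origin. By contrast, the involution $(x,y)\mapsto(x,-y)$ fixes the line $y=0$ pointwise, hence is a pseudoreflection; its invariants form the polynomial ring $\mathbb{C}[x,y^2]$ and the quotient is smooth. This is exactly the distinction behind Drew's comment about $\mathbb{C}/\pm$ versus $\mathbb{C}^2/\pm$ in the earlier exchange.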
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.904728889465332, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3888796
Physics Forums ## Irreducible Brillouin Zone When energy levels for a lattice are constructed the Bloch wave vector is evaluated along the edges of the irruducible zone. Like $\Gamma$ - $X$ - $M$ path for a square lattice. I wonder why the calculation is NOT performed for values within the zone? And how the energy corresponding to an arbitrary vector within the zone but not laying at the boundary can be obtained from a band-gap diagrams (plotted for example in $\Gamma$ - $X$ - $M$ coordinates) Thanks in advance PhysOrg.com physics news on PhysOrg.com >> Promising doped zirconia>> Nanocrystals grow from liquid interface>> New insights into how materials transfer heat could lead to improved electronics You're quite free to plot the band structure along any path you like. Much of the time we're only concerned about the behavior at the min and max of the bands, which only requires plotting in a few directions. You might want to know the band gaps, effective masses, and anisotropies near theses extrema. If, for some reason, you're especially concerned about what's happening off the band path, you'll just have to do a calculation of that system yourself (or call authors). Incidentally, there is a theorem that says it's possible to reconstruct the band structure from the band eigenvalues only at the gamma point. From k.p theory I think. You can look it up. Thank you! That's quite interesting that we can reconstruct the band structure from only information at $\Gamma$ Still people plot the diagrams in the particular paths along irreducible zones. You say Quote by sam_bell Much of the time we're only concerned about the behavior at the min and max of the bands, which only requires plotting in a few directions. Why these are more important? Thanks again ## Irreducible Brillouin Zone Quote by trogvar Why these are more important? These give you the most elementary information about the material such as whether it is a metal, semiconductor or insulator (determined by band gap, which is the energy difference between the maximum of the valence band and the minimum of the conduction band) and whether it has a direct or indirect gap (determined by whether the minimum and maximum occur at the same k-point). This is what the band structure is mainly used for. If calculated using the density-functional theory (usually the case) it would be risky trying to determine more subtle effects from the band structure due to the inaccuracies of the method. Recognitions: Gold Member Science Advisor Staff Emeritus Quote by trogvar When energy levels for a lattice are constructed the Bloch wave vector is evaluated along the edges of the irruducible zone. Like $\Gamma$ - $X$ - $M$ path for a square lattice. I wonder why the calculation is NOT performed for values within the zone? The $\Gamma$-point is at the zone center, and the $\Gamma-X$ line, for instance, does span a region of reciprocal space "within the zone". Recognitions: Gold Member Science Advisor Quote by trogvar That's quite interesting that we can reconstruct the band structure from only information at $\Gamma$ The technique is called analytic continuation, and it only works if you have a large number of wave functions at the center of zone. You get a decent approximation if you have 16 or more wave functions and all the coupling constants between them. 
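As an illustration of the point that one is free to evaluate the band at any $k$, the $\Gamma$-$X$-$M$ path being only the conventional choice, here is a minimal Python sketch for a square lattice. The nearest-neighbour tight-binding dispersion and the values of the hopping $t$ and lattice constant $a$ are illustrative assumptions, not something taken from the thread:

```python
import numpy as np

a, t = 1.0, 1.0   # lattice constant and hopping, in illustrative units

def band(kx, ky):
    """Nearest-neighbour tight-binding band for a square lattice."""
    return -2.0 * t * (np.cos(kx * a) + np.cos(ky * a))

# High-symmetry points of the square-lattice Brillouin zone
G = np.array([0.0, 0.0])
X = np.array([np.pi / a, 0.0])
M = np.array([np.pi / a, np.pi / a])

def path(p, q, n=50):
    """n k-points on the straight segment from p to q."""
    return [p + (q - p) * s for s in np.linspace(0.0, 1.0, n)]

kpts = path(G, X) + path(X, M) + path(M, G)
energies = [band(kx, ky) for kx, ky in kpts]
print(min(energies), max(energies))        # ~ -4t at Gamma, ~ +4t at M

# Nothing stops us from evaluating an arbitrary interior point as well:
print(band(0.3 * np.pi / a, 0.7 * np.pi / a))
```

For this simple band the minimum sits at $\Gamma$ and the maximum at $M$, which is why plotting along the high-symmetry path already shows the extrema that, as the replies note, one usually cares about.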
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9220247268676758, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/tagged/simulations?sort=active&pagesize=15
# Tagged Questions The simulations tag has no wiki summary. 0answers 49 views ### Problems with exact Heston simulations I am just wondering if there is any problem with the so-called "exact" Heston simulations? So far what I have seen are the good things about it, what are the disadvantages? Because if it is so ... 0answers 40 views ### Credit spreads vs default events dependence Reading this note it strikes me that credit spreads and defaults seem not to be commonly modeled jointly (e.g. more or less directly in structural models), but at best with some kind of "ex post" ... 2answers 143 views ### Simulation of GBM I have a question regarding the simulation of a GBM. I have found similar questions here but nothing which takes reference to my specific problem: Given a GBM of the form \$dS(t) = \mu S(t) dt + ... 1answer 285 views ### How to simulate a Merton Jump Diffusion process? I am talking about the Merton Jump Diffusion model, on this page, where they give the following formula: $$dS_t = \mu S_t dt + \sigma S_t dW_t + (\eta-1) dq$$ where $W_t$ is a standard brownian ... 1answer 1k views ### How to simulate stock prices with a Geometric Brownian Motion? I want to simulate stock price paths with different stochastic processes. I started with the famous geometric brownian motion. I simulated the values with the following formula: ... 3answers 602 views ### How to account for transaction costs in a simulated market environment? I am simulating a market for my trading system. I have no ask-bid prices in my dataset and use adjusted close for both buy and sell price. To account for this I plan to use a relative transaction ... 1answer 377 views ### Michaud's Resampled Efficient Frontier - Out of Sample Simulation Testing I will be putting ALL my account points on bounty to whoever answers this question [if your answer is crap but it's the only answer, you're getting the 165 points]. You will have to wait 2 days or so ... 2answers 447 views ### When to use Monte Carlo simulation over analytical methods for options pricing? I've been using Monte Carlo simulation (MC) for pricing vanilla options with non-lognormal underlyings returns. I'm tempted to start using MC as my primary option-valuating technique as I can get ... 1answer 118 views ### transaction size and liquidity in simulation of US stocks i am developing a simulation trading in US stocks. i have 1 transaction a day per stock, assumed for simplicity to be executed at the daily closing price. in order to determine a reasonable maximal ... 2answers 334 views ### How to simulate cointegrated prices Is there any simple way to simulate cointegrated prices? 3answers 209 views ### Are there any standard techniques for adding realistic synthetic microstructure noise to a price series? This may seem like a strange question, but for my particular application we need to actually add synthetic microstructure noise to real time charts. The signal should still be representative of the ... 3answers 399 views ### Literature on generating synthetic time series for testing I have some market data (daily time series) for bond prices and CDS indices and I would like to generate synthetic versions of these which are statistically "similar" for testing trading strategies. ... 1answer 511 views ### How to simulate correlated assets for illustrating portfolio diversification? 
I have seen multiple instances where people try to explain the diversification effects of having assets with a certain level of correlation, especially in the "most diversified portfolio" literature. ... 6answers 824 views ### How to generate a random price series with a specified range and correlation with an actual price? I want to generate a mock price series. I want it to be within a certain range and have a defined correlation with the original price series. If I choose, say, oil, I want as many time series which ... 3answers 374 views ### How to test for and how to simulate price rise/fall asymmetry in the stock market One of the stylized facts of financial time series seems to be a fundamental asymmetry between smooth upward movements over longer periods of time followed by abrupt declines over relatively shorter ... 1answer 262 views ### Simulating conditional expectations There is a multidimensional process X defined via its SDE (we can assume that its a diffusion type process), and lets define another process by $g_t = E[G(X_T)|X_t]$ for $t\leq T$. I would like to ... 1answer 240 views ### How to reduce variance in a Cox-Ingersoll-Ross Monte Carlo simulation? I am working out a numerical integral for option pricing in which I'm simulating an interest rate process using a Cox-Ingersoll-Ross process. Each step in my Monte Carlo generated path is a ... 1answer 397 views ### Monte carlo portfolio risk simulation My objective is to show the distribution of a portfolio's expected utilities via random sampling. The utility function has two random components. The first component is an expected return vector ... 1answer 294 views ### Enhancing Monte-Carlo convergence (crude method) I am currently doing a project involving Monte-Carlo method. I wonder if there is papers dealing with a "learning" refinement method to enhance the MC-convergence, example : Objective : estimate of ... 2answers 446 views ### Is Walk Forward Analysis a good method to estimate the edge of a trading system? Do you think Walk Forward Analysis is a good method to estimate the predictability or edge of a trading system? Are there similar methods to know (estimate) how much alpha can capture an algo (in the ... 1answer 129 views ### What tradeoff is there to using an accurate estimate with a large confidence interval? I am working on calibrating a Heston model from simulated historical stock data. After obtaining an accurate estimate of the model parameters I found very large 95% confidence intervals for these ... 2answers 662 views ### Simulating Returns I'll start this off with a rather broad question: I am trying to simulate returns of a large number of assets within a portfolio of different classes - equity and fixed income in a first step, say 100 ... 1answer 616 views ### Valuing Total Return Swaps In my quest for simulated data, I am trying to generate prices for Total Return Swaps by calculating the NPVs of the fixed and floating leg. My problem: Given the fixed leg, how do I set the spread on ...
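Several of the excerpts above ask how to simulate the geometric Brownian motion $dS_t=\mu S_t\,dt+\sigma S_t\,dW_t$. As a hedged illustration (the parameter values are arbitrary, and this is only the standard exact discretisation, not an answer lifted from any of the listed questions):

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Exact discretisation of dS = mu*S*dt + sigma*S*dW on a uniform grid."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1
    )
    return s0 * np.exp(log_paths)

paths = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, T=1.0, n_steps=252, n_paths=10000)
print(paths.shape)              # (10000, 253)
print(paths[:, -1].mean())      # close to 100*exp(0.05) ~ 105.1
```

Because the log-increments are simulated exactly, the sample mean of the terminal values should be close to $S_0 e^{\mu T}\approx 105.1$, which is a cheap sanity check on the implementation.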
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8979619741439819, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/198628-cooling-law-problem.html
# Thread: 1. ## The Cooling Law Problem A temperature probe is placed in a saucepan filled with hot water, and the temperature is recorded at 30 second intervals for 5 minutes. The ambient room temperature is 18.5°. The data is recorded in the table. | | | | | | | | | | | | | |------------------|----|------|------|------|------|------|------|------|----|------|----| | Time (t) | 0 | 0.5 | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 | | Temperature (oC) | 98 | 91.5 | 87.5 | 84 | 80.5 | 78 | 75.5 | 73 | 71 | 69.5 | 68 | Note: For all following questions all working out must be shown as justification a) Use the standard exponential regression model (y=abx) to develop an equation in this form for Temperature as a function of Time (min). What temperature does this predict after 30 minutes? This raises a worrying aspect of fitting regression models to data. An exponential relationship seems plausible, and yet the suggested one has limited predictive power. Indeed other regression models fit the data well, but are not helpful as predictors for the situation (see below). c) Develop a model by creating a new list of 'excess' temperatures (T - 18.5), and again doing the standard exponential regression on (T - 18.5) versus t. What temperature does this predict after 30 mins? How does this compare with the model from part a) d) From an analysis of your answers in part a), b) and c), justify when it would be best to add the milk to the cup of black coffee to give the best chance of a nice hot cup of coffee if the phone rings : before answering the phone or after the brief phone call has ended. 2. ## Re: The Cooling Law Problem Originally Posted by nabey1 A temperature probe is placed in a saucepan filled with hot water, and the temperature is recorded at 30 second intervals for 5 minutes. The ambient room temperature is 18.5°. The data is recorded in the table. | | | | | | | | | | | | | |------------------|----|------|------|------|------|------|------|------|----|------|----| | Time (t) | 0 | 0.5 | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 | | Temperature (oC) | 98 | 91.5 | 87.5 | 84 | 80.5 | 78 | 75.5 | 73 | 71 | 69.5 | 68 | Note: For all following questions all working out must be shown as justification a) Use the standard exponential regression model (y=abx) to develop an equation in this form for Temperature as a function of Time (min). What temperature does this predict after 30 minutes? Okay, you have to determine two values, a and b, so you need two equations. Typically, it is best two use two distant points on the graph, here, x= 0, y= 98 and x= 5, y= 68. That is, solve $98= ab^0$ and [itex]68= ab^5[/itex] for a and b. Once you have found a and b, determine $ab^{30}$.
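Carrying out the two-point calculation suggested in the reply, together with the excess-temperature variant from part c), takes only a few lines of Python. (This follows the reply's simplification of using just the first and last readings; a least-squares exponential regression over all eleven data points would give slightly different constants.)

```python
# Two-point fit of T = a*b^t through (t, T) = (0, 98) and (5, 68)
a = 98.0
b = (68.0 / 98.0) ** (1.0 / 5.0)
print(a * b**30)                      # ~ 11 degrees: raw model after 30 minutes

# Part c): fit the *excess* temperature T - 18.5 instead, then add 18.5 back
ambient = 18.5
a_ex = 98.0 - ambient                 # 79.5
b_ex = ((68.0 - ambient) / a_ex) ** (1.0 / 5.0)
print(ambient + a_ex * b_ex**30)      # ~ 23 degrees: excess-temperature model
```

Note that the raw model predicts a temperature below the ambient 18.5°, which is exactly the "limited predictive power" the question warns about, while the excess-temperature model levels off sensibly above room temperature.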
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8866101503372192, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/57848?sort=oldest
## P vs. NP resistant problems ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) According to Stephen Cook on wikipedia, http://en.wikipedia.org/wiki/P_versus_NP_problem ...it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. Example problems may well include all of the CMI prize problems. This opinion seems to suggest that if in fact P = NP, then even the most notoriously difficult problems in mathematics would be essentially trivialized. So I am wondering are there any problems that are resistant to becoming 'easy' even if P = NP? For example, according to Cook, none of the seven Clay Mathematics Institute Millennium problems is an example of this. - 1 It's not quite clear to me what you are asking. If P=NP with a reasonable bound on the size of the polynomial (an important qualification) then any reasonable length proof can be discovered in a reasonable amount of time. So a problem will be resistant only if it has either no proof or only very long proofs. But I'm finding it hard to interpret Cook's remark about the Clay problems as well, so perhaps I've misunderstood something. – gowers Mar 8 2011 at 16:12 1 I think Steve didn't mean to imply that solutions to all the CMI problems will have short proofs, but only that they are examples of problems whose solutions (if found) can (in principle) be presented as the kind of formal proof to which the corollary of P = NP applies. – Emil Jeřábek Mar 8 2011 at 16:36 7 FYI, the implications of P=NP on proofs and mathematical practice has been discussed extensively on MO: mathoverflow.net/questions/47954/… – Thierry Zell Mar 8 2011 at 16:43 2 Find the right formalization in which to make a proof short, but still meaningfully helpful, is a mathematics problem that does not fit into this "find a proof" category. More generally, finding the right definitions isn't covered. – Kevin O'Bryant Mar 8 2011 at 18:20 4 I asked a version of this question (quoting an early Cook paper) at CS Theory: "If P=NP, could we obtain proofs of Goldbach's Conjecture etc.?" cstheory.stackexchange.com/questions/2800/… . The knowledgeable answers were illuminating (to me). – Joseph O'Rourke Mar 8 2011 at 19:30 show 1 more comment ## 3 Answers Maybe I should start by saying that the quote from Cook is convincing. If a useful polynomial time algorithm for NP complete problem will be found then a computer will be able to give us quickly proofs for theorems (admitting not too long proofs) that we are interested to prove (and may eventually prove), as well as much harder questions that we are uninterested to prove and it seems that we will never be able to prove. (Is the shortest proof of FLT has an even number of characters?) This automatic ability to prove may lead to much more understanding of mathematical theorems and phenomena. We can explore if a specific direction to a proof works, try all sort of lemmas. explore surprising connections, etc. This picture gives good reason to believe that $P \ne NP$ but there are even better reasons for that. There are various reasons to believe that $NP \ne P$ and indeed one reason is that various tasks that look intractable will immediately look much easier compared to what we experience and expect. The connection with proving specific mathematical theorems looks artificial from various reasons. 
Usually, to transform a mathematical task into a decision problem to which the NP=?P problem is relevant we need to add a statement like "Is there a proof for RH with less than n pages". This addition makes the original problem much harder. We have a proof for FLT but probably we will never be able to answer the question "what is the smallest number of characters in a proof of FLT?". Fortunately we find the later question uninteresting. So overall Cook's statement can be seen as a provocative agument for why $NP \ne P$, which has some merit, But I dont think it offers any useful connection, One argument against the connection of real life mathematical proofs and the NP/P gap goes as follows. The NP/P problem is about the effort needed to find a proof compared to the effort needed to verify a proof. Now think about ourselves as computational devices and about this gap for cases of proven theorems. Try to estimate the amount of effort that it takes you to verify a proof which capture n journal pages (or n words) compared to finding such a proof. Is it superlinear in n? more than Quadratic in n? This gap (sometimes referred to as the creativity gap) does not seem similar to the gap between finding a proof and verifying a proof in the NP/P theory (say, the gap between finding a hamiltonian cycle in a large graph or verifying that a certain list of vertices and edges form a Hamiltonian cycle.) We can also talk bout the human effort needed to produce an n-page (or n characters) proof. This effot is on average (for cases of success) probably monotone in n, perhaps superlinear in n but there is no reason to expect it to be exponential in n. Just to make the main point clear: Deciding mathematical problems including famous ones appears to be by far easier than solving NP-complete problems, and therefore the $NP \ne P$ by itself seems to offer little explanation for the difficulty in solving mathematical problems. But computational complexity insights do give some understanding of this difficulty. Does the fact that there are mathematical statements that are undecidable give some explanation why some mathematical conjectures are so hard to prove? (Well, it gives some indirect explanation of a sort, but not a real useful connection.) It is an interesting question why proving mathematical conjectures does not seem to be computationally intractable at least for surprisingly many cases where people succeeded. (Also undecidability enters the scene rather rarely.) I am not aware of a very good answer to this question. It may have something to do with what we regard as "interesting" in mathematics, to the nature of mathematical understanding, and to the highly structured nature of mathematical problems. - what is the difference between "s there a proof for RH with less than n pages" and "what is the smallest number of characters in a proof of FLT?"? Cant we run the program log(n) times and find the smallest number of characters of a proof of FLT if n is a reasonable number? – unknown (google) Feb 13 at 20:01 The first is a decision problem (with yes/no answer) and the second is not, but as you correctly said there is a simple reduction from the first to the second. – Gil Kalai Feb 14 at 12:58 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. 
Preamble: I'm going to make this CW, since there is a good chance that I will misrepresent the exact argument, since I'm working mostly from what I've read here. If you see something in need of fixing, please take advantage of the CW mode. In particular, most of what I am writing about already appears in Andreas Blass's answer linked below in some form or another, I'm just rewriting it in an attempt to cross some t's and so on. ## The Answer The argument in this answer runs as follows: fix yourself a length for your target proof, $n$. Then if $P=NP$, you will be able to tell in polynomial time whether your statement has a proof of length at most $n$, because that's an $NP$ kind of statement (here, I find it really helpful to think of $NP$ in terms of existential quantification). So $P=NP$ would not be enough to know if there is a proof of any length, but presumably if the proof length is very long, you can't really claim to understand it (think four color theorem). On the other hand, if you know there is a short proof, you might be tempted to find it exhaustively. (And, as Andreas Blass points out in the answer I linked to, this would give you the first available proof in some lexicographical ordering, not necessarily the most enlightening one.) [Added: As Daniel Litt points out in the comments, having the program terminate with a "Yes" is enough to prove that the theorem is true. I still imagine we might want a more explicit proof, though obviously this is just an opinion and others definitely disagree.] Note that this ought to apply to any statement that you can formalize, so the answer to your original question would be more or less "No", there are no truly hard statements, just some that have longer shortest proof than others. (That should include statements for which the shortest proof is astronomically long, by the way, but again, those are beyond comprehension.) ## Why I don't really buy it [Added: The quote says that this possibility ...would transform mathematics, but I don't really imagine that it would revolutionize the way we do mathematics in the short term, for the reasons explained below.] I've played fast and loose with the setting so far, and this is where I believe the statement is a lot weaker than it appears at first. The way I see it, there are two ways of going about this: 1. Encode your theorem and proof in some fixed axiom system. But then, your proof may have to contain a huge chunk of already-known mathematics. So you would have to pick a very large $n$, which is unproductive since you're only interested in a very long proof with very little that's new. 2. Find a way to encode in "currently known mathematics" if possible. On top of the massive overhead that it implies, the biggest issue I see is that your answer would just be a snapshot at time T. You might not be able to write a proof of GRH in 30 pages in 2011, but in 2012, you will have hundred of thousands of pages of fresh math that you can use but don't have to count against your own page total. That's why I don't see Cook's argument as having a real practical impact, cute as it may be. If I'm overlooking something major, please don't hesitate to correct this. Again, computational complexity is not really my field. - Thierry, what do you mean when you say that you dont really buy it. – Gil Kalai Mar 8 2011 at 21:52 Once your program tells you there is a proof of length $n$, you've already found a proof of your statement (of length polynomial in $n$). 
Namely, the proof that your program is correct, coupled with the computation it has done to tell you there is a proof of length $n$. So there is no reason to exhaustively search for a proof once your computation is done; you already have one. But of course finding understandable proofs is important, so this isn't the end of the game. – Daniel Litt Mar 8 2011 at 22:05 Thanks Daniel. I'll put that in, but feel free to correct if necessary. – Thierry Zell Mar 8 2011 at 22:18 1 @Jeremy: The language consists of pairs $(\text{theorem}, 1^n)$ such that theorem" admits a proof of length less than or equal to $n$. This is clearly in NP. – Daniel Litt Mar 9 2011 at 3:41 1 @Jeremy: There is no systematic way to bound the length of a statement's proof; if there were, it would be easy to solve the halting problem. – Daniel Litt Mar 9 2011 at 21:16 show 4 more comments Richard Borcherds gave an example in another thread, of a statement that is (almost) obviously true, but very hard to prove: chess is not a forced win for black. The issue is that (generalized) chess is PSPACE-hard (formalizing the idea of a forced win requires(?) a series of alternating quantifiers, one for each move), and showing a forced win or draw for either side would seem to require surveying the entire game tree which is enormous. So this is almost certainly outside NP. (On the other hand, even P != PSPACE is still unknown). -
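To make the "verifying is cheap, searching is expensive" gap discussed above concrete, here is a toy Python sketch. It uses satisfiability of a small CNF formula as a stand-in for "does this statement have a proof of length at most $n$" (both are NP-style searches with easy certificate checking); the particular formula is an arbitrary example, not anything from the discussion:

```python
from itertools import product

# A formula in conjunctive normal form: each clause is a list of literals,
# where literal +i means variable i and -i means its negation.
cnf = [[1, -2, 3], [-1, 2], [2, 3], [-3, -1]]
n_vars = 3

def verify(assignment):
    """Checking a proposed certificate is easy: linear in the formula size."""
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in cnf
    )

def search():
    """Finding a certificate by brute force may take up to 2^n checks."""
    for bits in product([False, True], repeat=n_vars):
        if verify(bits):
            return bits
    return None

print(verify((True, True, False)))   # cheap to check one candidate
print(search())                      # exponentially many candidates in the worst case
```

Here `verify` runs in time linear in the formula, while `search` may have to try all $2^{n}$ assignments; that asymmetry between checking and finding is the one the $P$ versus $NP$ question is about.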
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9620510339736938, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/256694/how-to-calculate-center-point-of-geo-coordinates
# How to calculate center point of geo coordinates?

How can I calculate the center point of N lat/lon pairs, if the distance between them is at most 30m? Assume the altitude is the same for all points.

- First you need to define what you mean by "center". There are many different such definitions in use, especially ones invented by cities that want to have the center of such-and-such country or continent X inside their territory. – Henning Makholm Dec 12 '12 at 0:22
- I will try. I have to get a single coordinate from GPS. Unfortunately it does not have a built-in averaging function, so it gives me coordinates with a certain error. I can however collect dozens of coordinates for that exact position. But I don't know how to average these coordinates into a single one which can more or less smooth out the GPS error. – Pablo Dec 12 '12 at 0:26
- If all the points are within 30 meters of each other, just averaging the latitudes and longitudes will be very close to the "true" geodesic center for any reasonable definition of "center". At least as long as (i) you are not near the poles, and (ii) your points don't span across the 180° meridian. – Rahul Narain Dec 12 '12 at 0:45
- @Rahul Narain you can answer so I can accept it, if you would like. – Pablo Dec 12 '12 at 10:14

## 2 Answers

For completeness (I know this is pretty late; no need to change your accepted answer): You have $n$ points on the globe, given as latitude $\phi_i$ and longitude $\lambda_i$ for $i=1,\ldots,n$ (adopting Wikipedia's notation). Consider a Cartesian coordinate system in which the Earth is a sphere centered at the origin, with $z$ pointing to the North pole and $x$ crossing the Equator at the $\lambda=0$ meridian. The 3D coordinates of the given points are
$$\begin{align} x_i &= r\cos\phi_i\cos\lambda_i,\\ y_i &= r\cos\phi_i\sin\lambda_i,\\ z_i &= r\sin\phi_i, \end{align}$$
(compare spherical coordinates, which uses $\theta=90^\circ-\phi$ and $\varphi=\lambda$). The centroid of these points is of course
$$(\bar x,\bar y,\bar z) = \frac1n \sum (x_i, y_i, z_i).$$
This will not in general lie on the unit sphere, but we don't need to actually project it to the unit sphere to determine the geographic coordinates its projection would have. We can simply observe that
$$\begin{align} \sin\phi &= z/r, \\ \cos\phi &= \sqrt{x^2+y^2}/r, \\ \sin\lambda &= y/(r\cos\phi), \\ \cos\lambda &= x/(r\cos\phi), \end{align}$$
which implies, since $r$ and $r\cos\phi$ are nonnegative, that
$$\begin{align} \bar\phi &= \operatorname{atan2}\left(\bar z, \sqrt{\bar x^2+\bar y^2}\right), \\ \bar\lambda &= \operatorname{atan2}(\bar y, \bar x). \end{align}$$
So yes, the code you linked to does appear to switch the latitude and longitude in the output. You should submit a patch to the author.

If your points are no more than 30m apart, then doing any trigonometric computations will likely introduce more errors than it avoids. So I'd say simply treat the coordinates as coordinates on a plane, and average them to get the centroid. In order to avoid issues with the 180° meridian, you might want to pick one point as reference and ensure that all the others don't differ in longitude by more than 180°. If they do, then add or subtract 360° to remedy that. The end result might be outside the [-180°, 180°] range but can be adjusted by another 360° if desired. Near the poles, the computed longitude will likely have a great deal of uncertainty due to input distributed over a wide range of values. But this only corresponds to the fact that at the poles, large differences in longitude correspond to small differences in distance, so nothing wrong there. If you are even closer to the pole, there might be situations where the geodesics between your data points would be badly approximated by the planar interpretation. Roughly speaking, the computation would connect them along a parallel while the most direct connection might be a lot closer to the pole. But I'd expect such effects to only matter once you are within 100m or so of the pole, so probably they are not worth the effort, as even the badly computed result isn't completely off, like it would be for the 180° meridian case.
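A minimal Python sketch of the first answer's recipe (convert to 3D unit vectors, average, convert back with atan2); the function name and the sample coordinates below are just illustrative choices, not part of either answer:

```python
import math

def geographic_midpoint(points):
    """Centroid-on-the-sphere of (lat, lon) pairs given in degrees:
    average the corresponding 3D unit vectors, then recover the angles with atan2."""
    x = y = z = 0.0
    for lat, lon in points:
        phi, lam = math.radians(lat), math.radians(lon)
        x += math.cos(phi) * math.cos(lam)
        y += math.cos(phi) * math.sin(lam)
        z += math.sin(phi)
    n = len(points)
    x, y, z = x / n, y / n, z / n
    return math.degrees(math.atan2(z, math.hypot(x, y))), math.degrees(math.atan2(y, x))

# A cluster of GPS fixes a few metres apart (made-up values):
print(geographic_midpoint([(52.52000, 13.40490), (52.52010, 13.40520), (52.51995, 13.40500)]))
```

For points within ~30 m this agrees with the plain lat/lon average from the second answer to many decimal places, except near the poles or across the 180° meridian.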
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9323098063468933, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/70503/stalks-of-structure-sheaf-of-fibre-product
## Stalks of structure sheaf of fibre product?

What can I say about it? Can I say the stalks equal the tensor products of the corresponding factors' stalks? Thanks!

- You don't say what your context is (ringed spaces, schemes,...). If it is schemes then no, as the stalk of the product is a local ring and the tensor product of local rings may not be local. – Torsten Ekedahl Jul 16 2011 at 14:26
- Thank you very much! – MZWang Jul 17 2011 at 1:13

## 2 Answers

Let $X,Y$ be $S$-schemes. Then a point of $X \times_S Y$ corresponds to a pair of points $x \in X, y \in Y$ lying over the same $s \in S$ together with a prime ideal $\mathfrak{p} \subseteq \mathcal{O}_{X,x} \otimes_{\mathcal{O}_{S,s}} \mathcal{O}_{Y,y}$ which restricts to the maximal ideals in $\mathcal{O}_{X,x}$ resp. $\mathcal{O}_{Y,y}$. The stalk of the structure sheaf in this point is the localization of the tensor product: $\mathcal{O}_{X \times_S Y,(x,y,\mathfrak{p})} = (\mathcal{O}_{X,x} \otimes_{\mathcal{O}_{S,s}} \mathcal{O}_{Y,y})_{\mathfrak{p}}$.

There are at least two ways to prove these statements:

a) Use the universal property of $\text{Spec}(K)$ for a field $K$ to get the points and then use the universal property of $\text{Spec}(R)$ for a local ring $R$ to get their stalks. So this assumes, of course, that you already know that the fiber product exists, but you can recover the description of the elements and the stalks just by using the universal property!

But actually, b) you can construct the fiber product as above, also more generally in the category of locally ringed spaces. I've written this up here.

Now your actual question seems to be: As Hartshorne chapter III.9.2 claims, an $\mathcal{O}_X$-module $\mathcal{F}$ (which need not be quasi-coherent) has flatness that is stable under base change. But the stalks are not the tensor products, so how can I prove the claim?

The statement is the following: If $f : X \to Y, Y' \to Y$ are morphisms, and $\mathcal{F}$ is a module over $X$ which is flat over $f$, then the pullback of $\mathcal{F}$ to $X \times_Y Y'$ is flat over $X \times_Y Y' \to Y'$. I am pretty sure that Hartshorne understands $\mathcal{F}$ to be quasi-coherent here. Otherwise the sketch of proof also does not make sense. But it is also true in general: Pick a point in $X \times_Y Y'$, thus a triple $(x,y',\mathfrak{p})$ as described above. Let $y$ be the underlying point in $Y$. Now $\mathcal{F}_{x}$ is flat over $\mathcal{O}_{Y,y}$. By commutative algebra (base change of flat modules), it follows that $\mathcal{F}_x \otimes_{\mathcal{O}_{Y,y}} \mathcal{O}_{Y',y'}$ is flat over $\mathcal{O}_{Y',y'}$. Again by commutative algebra (localizations are flat), $(\mathcal{F}_x \otimes_{\mathcal{O}_{Y,y}} \mathcal{O}_{Y',y'})_{\mathfrak{p}}$ is flat over $\mathcal{O}_{Y',y'}$. But this is exactly the stalk of the pullback of $\mathcal{F}$ in the given point $(x,y',\mathfrak{p})$.

- Thanks! This answer is exactly what I want. – MZWang Jul 20 2011 at 8:42

As Hartshorne chapter III.9.2 claims, an $\mathcal{O}_X$-module $\mathcal{F}$ (which need not be quasi-coherent) has flatness that is stable under base change. But the stalks are not the tensor products, so how can I prove the claim?

- You should have added this to your question above (via the Edit function). Also check out the FAQ.
But never mind :) – Martin Brandenburg Jul 18 2011 at 9:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9086591601371765, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/22096/does-no-cloning-theorem-implies-a-no-comparison-theorem/22403
# Does the no-cloning theorem imply a no-comparison theorem?

I was reading about the no-cloning theorem and it prompted a thought experiment: if there were a way to compare quantum states (for being equal), then you could build a pseudo-cloning machine that searches through quantum states until it finds an equal one. So I think there could be something like a No-Comparison theorem. Does something like that exist?

## 3 Answers

You can indeed test two quantum states for being equal, but the results are not 100% guaranteed accurate: you measure the eigenvalue of the SWAP operator (which swaps the two quantum states). If they are equal, then you have a 100% chance of getting the +1 eigenvalue. If they are orthogonal, then you have a 50% chance of getting either the +1 or the -1 eigenvalue. This test (a) destroys the quantum state if you test it against a state that it is not equal to and (b) only yields the correct answer half the time if the answer is "no". These two drawbacks mean that you cannot use it to clone. However, this is still a very useful test as a subroutine in designing some quantum algorithms. I don't know whether anybody has proved a theorem saying that you cannot test equality better than the swap test, but it is definitely true, as the OP speculates, that there is no perfect test for equality of quantum states.

- Your recipe applies to testing of two either orthogonal or coinciding states, doesn't it? Otherwise the sharp jump from 100% to 50% as the one state turns from being equal to slightly unequal to the other becomes disturbing. – Slaviks Mar 15 '12 at 13:54
- That's right ... fixed. – Peter Shor Mar 15 '12 at 15:19

The question boils down to whether it is possible to compare a given unknown state with a known state without disturbing it. The answer is clearly no, because doing so is equivalent to determining the complete state. The measure used to say how close a state is to a given state is fidelity. If $\psi$ is the unknown arbitrary state, and $\phi$ is a known state, then the fidelity of $\psi$ with respect to $\phi$ is $|\langle\psi|\phi\rangle|^2$, which is $Tr[|\psi\rangle\langle\psi|\ |\phi\rangle\langle\phi|]$ (that is, the expectation value of $|\phi\rangle\langle\phi|$, if you like). If it were possible to measure this quantity using a single copy of $\psi$, it would mean that it is possible to measure the expectation value of any observable, since every hermitian operator is a sum of such projections, weighted with the eigenvalues. And since measurement is just a comparison, finding the closest state to a given state is equivalent to measuring its fidelity with every state. So, just like cloning, comparison is also impossible.

You can make use of entanglement to make a measurement. Check out this article on QND (Quantum Non-demolition measurements) for a start. physorg article

- But that has little to do with what the question is asking, has it? – leftaroundabout Mar 9 '12 at 14:43
- How exactly can you obtain a result of your operation without a measurement? – Antillar Maximus Mar 9 '12 at 14:49
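To make the statistics in the accepted answer concrete, here is a small NumPy sketch (not a circuit simulation, only the outcome probability the swap test would produce); the function name is mine:

```python
import numpy as np

def swap_test_plus_probability(psi, phi):
    """Probability of measuring the +1 eigenvalue of SWAP on |psi>|phi>:
    (1 + |<psi|phi>|^2) / 2 for normalized states."""
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    return 0.5 * (1.0 + abs(np.vdot(psi, phi)) ** 2)

print(swap_test_plus_probability(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # 1.0  (equal states)
print(swap_test_plus_probability(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.5  (orthogonal states)
```

The 100% and 50% figures quoted in the answer are exactly the two extreme outputs of this formula.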
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9526767730712891, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/33414/whats-up-in-this-spawar-video
# What's up in this SPAWAR video? Here is a video presentation of infrared recordings of anomalous heating in a deuterium palladium cell: ( youtube video) (see also this presentation if you want more detail, and have time). There are two papers referenced in the link, which give more detail. You can see with your own eyes sporadic localized bursts of energy, which show up as white flashes in the camera (I would like to encourage people to read the linked papers, which include thermal photos with a different color map, and also include piezoelectric detection of bursts) Excluding cold fusion, what could possibly be causing this? Please try to account for the nature of the bursts--- the small radius, the qualitative amount of energy released, etc. Just to be clear, I think the answer is nothing, that it is cold fusion, but I would like to see what explanations people come up with. I will also point out that this research group has been politically shut down recently. - ## 2 Answers Without knowing anything about the experiment nor the camera, I would suggest that what is shown in the video is a combination of shot noise and aliasing due to a poor choice of gradient mapping. Note that the gradient bar at the bottom of the frame jumps from a fairly deep red (actually darker than precedent tones) to pure white in one increment - this may result in small fluctuations in temperature appearing much more visually significant than they are. I had a go at my own gradient maps (in Photoshop) to show off this effect - I tried to go for something similar to their gradient mapping, which for all I know is hardwired into the camera: Something to consider. I read an article recently about injudicious gradient mapping overemphasising tiny differences in geospatial and medical datasets (and also obscuring important distinctions in other scenarios). It's a good read and I'll link it when I find it again. - 1 Shot noise seems like a reasonable explanation to me. All I see there is a grainy image. Also, I must say that the quality of the entire presentation seems quite low. No mention of any control experiments or characterization of their detectors, no quantitative analysis of these "hot spots", no correlative measurements between their camera/piezo detector. It is very unconvincing to me. – user2963 Aug 4 '12 at 12:25 2 @RonMaimon I have a ton of respect for your physics knowledge and intelligence, but do you mind if I ask what experience you have with experimental science? This seems easily explainable as measurement noise to me. Their imaging system is low resolution and has a lot of variance, which is visible in the sections of the line profile which lie outside the electrode - the level of variance does not look substantially smaller than the center. – user2963 Aug 5 '12 at 22:14 1 – user2963 Aug 5 '12 at 22:14 1 Can you give a physical explanation for the anisotropy of the white segments? – user2963 Aug 5 '12 at 22:15 1 Finally, I'd like to reiterate that I think this is very poor science, even if the effect is real: these kinds of doubts could easily by reduced by showing some basic control experiments, such as imaging with the current off, or using a resistive heater to warm the electrode without the reaction and comparing. – user2963 Aug 5 '12 at 22:16 show 4 more comments I'm an experimental electrochemist. The problem with experiments such as those mentioned above is that they lack the necessary details to reproduce it, so that we can verify it or improve upon it. 
In the first video, a paper linked is here: http://www.lenr-canr.org/acrobat/SzpakSpolarizedd.pdf They vaguely mention a "negatively polarized Pd/D$_2$O system". Then they refer us to Figure 6 for more "complete" experimental details. Let's ignore all the piezo stuff. We have a potentiostat (what kind?), with electrodes (an unspecified counterelectrode and a vague Pd/D film working electrode without mention of preparative conditions), in an undefined electrodeposition solution with unknown concentrations or purities of reagents, and without any mention of the voltage/settings applied on the potentiostat during the course of the experiment. Moreover, they do not tell us about the thermal scanner, even the make or model. There is a mention of other journals, but a quick glance at the few I could easily access online did not illuminate any of these details better. Was the container even cleaned for trace impurities?

In an excellent paper you will find all these experimental details and more. In a field which is highly contested among leaders in electrochemistry, you are already held to a higher bar of scrutiny, and as such greater care must be taken in reporting your results for any serious scholarly interest. Lacking these important details makes the entire paper worthless. If this were an undergrad's first lab report given to me, then I wouldn't even award a D-. This is especially true when people in cold fusion report that it is very sensitive to experimental details/preparation. This "paper" also suffers because it does not address why this could not be due to Joule heating, a more likely scenario that cannot be judged without knowing the voltages or currents involved.

If you can find a paper (with the PDF) that has a complete experimental section, then e-mail it to me and I'll take a closer look. I would honestly really love for cold fusion to be true and to see if it could work, but the work I've seen doesn't even hold a candle to the care you'll find in a Letter published in the Journal of the American Chemical Society, and therefore I cannot spend my time trying to reproduce these results.

- This is a nice honest answer. I'll try to dig up the review and link it here. I think you are not fair to these folks, they have published many of their results in journals, and the reagents and concentrations for the experiment are widely available in the CF field, in the SPAWAR published papers. The field is neglected, and the papers tend to look bad, but that's not a way to evaluate experimental results--- you need to look at the data, even if it looks crappy, and see if there is an anomalous effect. – Ron Maimon Jan 22 at 7:08
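To illustrate the gradient-mapping point made in the first answer, here is a hedged matplotlib sketch using purely synthetic data (nothing to do with the actual SPAWAR footage): the same frame of flat sensor noise is rendered once with a smooth colormap and once with a map that jumps from a mid-dark tone straight to pure white, which makes random pixels look like localized "hot spots":

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

rng = np.random.default_rng(0)
frame = 30.0 + 0.3 * rng.standard_normal((120, 160))   # a flat ~30 C scene plus camera noise

smooth = plt.cm.inferno                                  # perceptually ordered gradient
# A gradient whose last step jumps from dark red directly to pure white:
colors = np.vstack([plt.cm.inferno(np.linspace(0.0, 0.55, 15)), [1.0, 1.0, 1.0, 1.0]])
jumpy = ListedColormap(colors)

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
for ax, cmap, title in zip(axes, (smooth, jumpy), ("smooth colormap", "jump-to-white colormap")):
    ax.imshow(frame, cmap=cmap, vmin=29.0, vmax=31.0)
    ax.set_title(title)
    ax.axis("off")
plt.show()
```

The underlying data are identical in both panels; only the mapping changes how "anomalous" the frame looks.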
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9577409029006958, "perplexity_flag": "middle"}
http://cstheory.stackexchange.com/questions/11797/alan-turings-contributions-to-computer-science
# Alan Turing's Contributions to Computer Science Alan Turing, one of the pioneers of (theoretical) computer science, made many seminal scientific contributions to our field, including defining Turing machines, the Church-Turing thesis, undecidability, and the Turing test. However, his important discoveries are not limited to the ones I listed. In honor of his 100th Birthday, I thought it would be nice to ask for a more complete list of his important contributions to computer science, in order to have a better appreciation of his work. So, what are Alan Turing's important/influential contributions to computer science? - 1 +1 I was thinking about asking a similar question. – Kaveh♦ Jun 24 '12 at 4:42 2 would like some Q like this but this forum, seems appropos on one level but is ironically not the best place. the problem is that, inevitably, research level CS has vastly expanded/moved anywhere beyond what Turing studied in the decades since he contributed. therefore a Turing history related Q would have to be phrased very carefully to fit in here. already you have listed his major contributions in the question, so what is left to answer with? contributions not in the list? they would be somewhat obscure and not as important... – vzn Jun 24 '12 at 14:29 1 see also this related q/a about whether Turing machines influenced creation of later automata models in CS. the current highest rated answer by jeffe remarkably asserts that there was not a historical connection, ie later researchers who invented key CS automata models were verifiably not directly inspired by Turing! – vzn Jun 24 '12 at 14:53 – vzn Jun 24 '12 at 15:00 1 Thanks for the pointers. Btw, I thought we had agreed that history of TCS is on topic for this site, hence the tag. As for Turing's other contributions, perhaps some are still important, just not world-changing. – Lev Reyzin Jun 24 '12 at 15:09 ## 8 Answers I did not know of these until recently. 1) The LU decomposition of a matrix is due to Turing! Considering how fundamental LU decomposition is, this is one contribution that deserves to be highlighted and known more widely (1948). 2) Turing was the first to come up with a "paper algorithm" for chess. At that point, the first digital computers were still being built (1952). Chess programming has had an illustrious set of people associated with it, with Shannon, Turing, Herb Simon, Ken Thompson, etc. The last two won the Turing Award. And Simom, of course, won the Nobel as well. (Shannon came up with a way to evaluate a chess position in 1948.) - 3 I didn't know about the LU decomposition result. That's cool ! Is there a reference ? – Suresh Venkat♦ Jun 25 '12 at 2:59 1 Suresh, I have added the reference to LU decomposition. – V Vinay Jun 25 '12 at 9:21 1 It is not true that Turing wrote the first chess program, this honor seems to go to Konrad Zuse, the inventor of the first computer. He wrote a simple chess program 'on paper' as a benchmark for his Plankalkuel, the first high-level programming language. See here and here. Sorry, no good english language descriptions of this work seem to exist. – Martin Berger Jun 25 '12 at 11:15 As mentioned in the question, Turing was central to defining algorithms and computability, thus he was one of the people that helped assemble the algorithmic lens. However, I think his biggest contribution was viewing science through the algorithmic lens and not just computation for the sake of computation. 
During WW2 Turing used the idea of computation and electro-mechanical (as opposed to human) computers to help create the Turing–Welchman bombe and other tools and formal techniques for doing crypto-analysis. He started the transformation of cryptology, the art-form, into cryptography, the science, that Claude Shannon completed. Alan Turing viewed cryptology through algorithmic lenses.

In 1948, Turing followed his interest in the brain to create the first learning artificial neural network. Unfortunately his manuscript was rejected by the director of the NPL and not published (until 1967). However, it predated both Hebbian learning (1949) and Rosenblatt's perceptrons (1957) that we typically associate with being the first neural networks. Turing foresaw the foundation of connectionism (still a huge paradigm in cognitive science) and computational neuroscience. Alan Turing viewed the brain through algorithmic lenses.

In 1950, Turing published his famous Computing machinery and intelligence and launched AI. This had a transformative effect on Psychology and Cognitive Science, which continue to view cognition as computation on internal representations. Alan Turing viewed the mind through algorithmic lenses.

Finally in 1952 (as @vzn mentioned) Turing published The Chemical Basis of Morphogenesis. This has become his most cited work. In it, he asked (and started to answer) the question: how does a spherically symmetric embryo develop into a non-spherically symmetric organism under the action of symmetry-preserving chemical diffusion of morphogens? His approach in this paper was very physics-y, but some of the approach did have an air of TCS; his paper made rigorous qualitative statements (valid for various constants and parameters) instead of quantitative statements based on specific (in some fields: potentially impossible to measure) constants and parameters. Shortly before his death, he was continuing this study by working on the basic ideas of what was to become artificial life simulations, and a more discrete and non-differential-equation treatment of biology. In a blog post I speculate on how he would develop biology if he had more time. Alan Turing started to view biology through algorithmic lenses.

I think Turing's greatest (and often ignored) contribution to computer science was showing that we can glean great insight by viewing science through the algorithmic lens. I can only hope that we honour his genius by continuing his work.

One lesser-known contribution is the Good-Turing estimator for estimating the fraction of a population "not yet seen" when taking samples. This is used in biodiversity.

This question is a lot like asking for Newton's contributions to physics, or Darwin's to biology! However, there's an interesting aspect to the question that many commenters have already seized on: namely that, besides the enormous contributions that everyone knows, there are plenty of smaller contributions that most people don't know about --- as well as many insights that we think of as more "modern," but that Turing demonstrated in various remarks that he understood perfectly well. (Incidentally, the same is true of Newton and Darwin.) A few examples I like (besides the ones mentioned earlier): In "Computing Machinery and Intelligence," Turing includes a quite-modern discussion of the benefits of randomized algorithms: It is probably wise to include a random element in a learning machine.
A random element is rather useful when we are searching for a solution of some problem. Suppose for instance we wanted to find a number between 50 and 200 which was equal to the square of the sum of its digits, we might start at 51 then try 52 and go on until we got a number that worked. Alternatively we might choose numbers at random until we got a good one. This method has the advantage that it is unnecessary to keep track of the values that have been tried, but the disadvantage that one may try the same one twice, but this is not very important if there are several solutions. The systematic method has the disadvantage that there may be an enormous block without any solutions in the region which has to be investigated first, Now the learning process may be regarded as a search for a form of behaviour which will satisfy the teacher (or some other criterion). Since there is probably a very large number of satisfactory solutions the random method seems to be better than the systematic. It should be noticed that it is used in the analogous process of evolution. Turing was also apparently the first person to use a digital computer to search for counterexamples to the Riemann Hypothesis -- see here. Besides the technical results from Turing's 1939 PhD thesis (mentioned by Lev Reyzin), that thesis is extremely notable for introducing the concepts of oracles and relativization into computability theory. (Some people might wish Turing had never done that, but I'm not one of them! :-D ) Finally, while this is basic, it seems no one has yet mentioned the proof of the existence of universal Turing machines --- that's a distinct contribution from defining the Turing machine model, formulating the Church-Turing Thesis, or proving the unsolvability of the Entscheidungsproblem, yet arguably the most "directly" relevant of any of them to the course of the computer revolution. - Turing's paper on Checking a large routine which was presented at a conference in Cambridge in 1949 antedates formal reasoning about programs as developed by Floyd and Hoare by nearly two decades. The paper is only three pages long and contains the idea of using invariants to prove properties of programs and well-foundedness to prove termination. How can one check a routine in the sense of making sure it is right? In order that the man who checks should not have too difficult a task, the programmer should make a number of definite assertions which can be checked individually, and from which the correctness of the whole program easily follows. - So Turing invented unit testing :) – Lev Reyzin Jun 26 '12 at 23:53 Not in that paper. He is presenting a static method to prove functional correctness and termination. – Vijay D Jun 27 '12 at 18:50 Turing was interested in and did some seminal work in chemical reaction-diffusion patterns. this area of research has expanded substantially since he started investigating it. it has been shown to have ties to computability eg is in a sense "Turing complete" [1]. the chemical reactions can be modeled with complex nonlinear differential equations so in a sense it has been shown that nonlinear differential equations with enough complexity can simulate Turing machines. 
stemming from his 1951 paper "chemical basis of morphogenesis" [4] [1] chemical kinetics is Turing universal by Magnasco in PRL 97 [2] Turing structures in simple chemical reactions [3] Turing patterns in linear chemical reaction systems with nonlinear cross diffusion by Franz [4] chemical basis of morphogenesis, wikipedia - Here's another one I found on Scott Aaronson's blog (and the Q+A is taken from there): In his Ph.D. thesis, Turing studied the question ($F_α$ is a theory): Given a Turing machine $M$ that runs forever, is there always an ordinal α such that $F_α$ proves that $M$ runs forever? Turing proved: Given any Turing machine $M$ that runs forever, there is an encoding of its axioms ($F_{\omega+1}$) that proves that $M$ runs forever. Unfortunately, the definitions & technical details are harder to summarize, but the linked-to blog post does a good job of explaining them. - here is a broad, highly researched/detailed 9p online survey/retrospective of Turing's specific and more general/longrange contributions in the Notices of the American Mathematical Society by SB Cooper for the 100th anniversary, Incomputability after Alan Turing. some other contributions mentioned in this survey: • Rounding-off errors in matrix processes paper, 1948. influential in numerical analysis and scientific computation in the theory of computation • unpublished 1948 National Physical Laboratory report Intelligent Machinery describes an early connectionist model, similar and contemporaneous with the famous McCulloch and Pitts neural nets. • points out Turing's analysis and theory of morphogenesis can be regarded as the early intellectual foundation of massive (and still ongoing/active) later theory in self-organization and emergent phenomena. (etc) - – vzn May 2 at 22:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9596232175827026, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/16669/hydrogen-2p-probability-density-question?answertab=active
# Hydrogen 2P Probability Density Question I'd like to calculate the probability density for Hydrogen in the $|2,1,1\rangle$ and $|2,1,-1\rangle$ states. There is an $\exp(\pm i\phi)$ term attached to the wave function for these states. It seems to me that $|\Psi|^2$ in this case would be the same for $|2,1,1\rangle$ and $|2,1,-1\rangle$ since $|\exp(i\phi)| = 1$ with the same being true for $|\exp(-i\phi)|$. Where am I going wrong here? Also, since I'm working in spherical coordinates the PD should be $r^2|\Psi|^2$, right (this will result in those classic energy orbital diagram functions)? - ## 2 Answers Why did you think you were doing something wrong? The phase factor does indeed become irrelevant when you calculate the probability density. As for the factor of $r^2$: the proper way to interpret $|\Psi|^2$ is that, when integrated over some region, it gives the probability of the electron being found in that region: $$P(\text{e in }V) = \iiint_V|\Psi|^2\mathrm{d}^3V$$ This definition means that $|\Psi|^2$ matches up with the mathematical definition of a probability density function. So your probability density is just $|\Psi|^2$, pretty much by definition. However, if you do this in spherical coordinates, you will get a factor of $r^2$ (and $\sin\theta$) from the measure of integration, namely $\mathrm{d}^3V = r^2\sin\theta\;\mathrm{d}r\;\mathrm{d}\theta\;\mathrm{d}\phi$.. - Thanks for the detailed answer. I thought I was going wrong because the plots of the PD would have been the same. My goal in calculating those PDs is plotting them in 3D, so two of the same plots wouldn't have been meaningful. – nick_name Nov 7 '11 at 22:02 1 Indeed, it's redundant to draw both $nlm$ and $nl,-m$ because the probability densities are the same. That doesn't mean that the states are the same. The phase does contain physical information. Its change in space tells you about the momentum. So by looking at the phase, one may say that the electron is orbitting in one way (counter-clockwise) for $m=1$ and in the other way (clockwise) for $m=-1$. The two wave functions are orthogonal to each other - completely different, mutually exclusive states. – Luboš Motl Nov 7 '11 at 22:20 @LubošMotl That was a solid explanation. Wavefunction orthogonality is a great way of tying it all together. – nick_name Nov 7 '11 at 23:24 1 Thanks for your interest, @nick_name. I should have mentioned the simplest example: $C\exp(ipx)$ is a plane wave and the probability density is constant all over the space, independently of $p$. However, that doesn't mean that the functions are the same for all $p$: they're moving with different velocities. Their Fourier transforms, delta-function in momentum space, have different probability densities (for various momenta), located at different points. The angular-momentum case is just a wrapped, curved version of the same principle. – Luboš Motl Nov 8 '11 at 7:32 You are right, the $\phi$-dependence disappears from the probability of this state. The probability is symmetric with respect to the reflections. The differential probability in spherical coordinates is determined as $$dw=|\Psi(\vec{r})|^2 dV=|\Psi(\vec{r})|^2\cdot r^2dr \cdot sin\theta d\theta \cdot d\phi$$ You can enjoy the 3D visualizations and even rotate them with your mouse with this applet: http://www.falstad.com/qmatom/ - I love those visualisations from Falstad. In my on-going training as a physicist, they've all been useful at some time. – Kasper Meerts Nov 7 '11 at 23:13
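A small NumPy check of the point made in the answers above, using the textbook 2p wavefunctions in atomic units (Bohr radius = 1); the overall Condon–Shortley sign is included only for completeness and, as discussed, drops out of $|\Psi|^2$:

```python
import numpy as np

def psi_2p(r, theta, phi, m):
    """Hydrogen psi_{2,1,m} for m = +1 or -1, in atomic units."""
    R21 = r * np.exp(-r / 2.0) / (2.0 * np.sqrt(6.0))
    Y1m = np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(1j * m * phi)
    return (-1.0 if m == 1 else 1.0) * R21 * Y1m

rng = np.random.default_rng(1)
r     = rng.uniform(0.0, 10.0, 1000)
theta = rng.uniform(0.0, np.pi, 1000)
phi   = rng.uniform(0.0, 2.0 * np.pi, 1000)

print(np.allclose(np.abs(psi_2p(r, theta, phi, +1)) ** 2,
                  np.abs(psi_2p(r, theta, phi, -1)) ** 2))   # True: identical densities
```

If you integrate this density over a region in spherical coordinates, the $r^2\sin\theta$ factor enters only through the volume element, exactly as the answers explain.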
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9523347020149231, "perplexity_flag": "head"}
http://mathoverflow.net/questions/46354?sort=oldest
## Methods to prove that a subset generates the whole group

What kind of methods exist to prove that a subset of a finitely generated abelian group G generates G?

- How is the group specified (e.g. matrices, words with rewrite rules, operators on a space, etc.)? – Victor Miller Nov 17 2010 at 19:30

## 1 Answer

If you know the generators, represent each of the elements of your set as a linear combination of generators. Form a (non-square) matrix where row number $i$ consists of the coefficients of element number $i$ in your set. Perform the integer Gauss elimination procedure (i.e. you are allowed to switch two rows, and to subtract/add one row from/to another row). Eventually you will get a matrix in row echelon form. Look at how many 1's you have on the diagonal. If the number of 1's is the same as the number of generators, your set generates the whole group. Otherwise the answer is "no".

Edit: I forgot one more transformation in the Gauss elimination procedure: switching columns (these correspond to re-orderings of the set of generators of the group). Without it, the pivotal numbers will not be on the diagonal.

- The problem is that we have an INFINITE group, which is finitely generated. – Meriton Ibraimi Nov 17 2010 at 14:37
- Yes, I assumed the group was infinite. So your set and the matrix can be infinite also (infinite number of rows, but each row is finite). Do I need to explain how to perform the Gauss elimination procedure on such a matrix? It is usually explained in the proofs that every f.g. Abelian group is a direct product of cyclic groups (or, which is almost the same, every f.g. module over a PID is a sum of cyclic modules). – Mark Sapir Nov 17 2010 at 14:45
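A sketch of how the answer's recipe can be automated in the special case $G \cong \mathbb{Z}^n$ (free abelian); for a group with torsion you would instead reduce the combined generator-and-relation matrix, e.g. via its Smith normal form. An equivalent criterion to "all pivots equal 1" is that the gcd of the maximal minors is 1; the function name is mine:

```python
from itertools import combinations
from math import gcd
from sympy import Matrix

def generates_Zn(rows):
    """True iff the given integer row vectors generate all of Z^n,
    where n is the number of columns (G assumed free abelian of rank n)."""
    M = Matrix(rows)
    k, n = M.shape
    if k < n:
        return False
    g = 0
    for idx in combinations(range(k), n):
        g = gcd(g, int(M.extract(list(idx), list(range(n))).det()))
        if g == 1:
            return True
    return g == 1

print(generates_Zn([[2, 1], [1, 1]]))   # True:  determinant 1
print(generates_Zn([[2, 0], [0, 2]]))   # False: generates an index-4 subgroup
```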
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8918028473854065, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/143957-finite-dimensional-vector-space.html
# Thread: 1. ## Finite Dimensional Vector Space Let V, W be finite dimensional vector spaces over a field k and let Z subset of W be a subspace. Let T : V -> W be a linear map. Prove that dim( T^(-1) ( Z ) ) <= dim V - dim W + dim Z 2. Originally Posted by ques Let V, W be finite dimensional vector spaces over a field k and let Z subset of W be a subspace. Let T : V -> W be a linear map. Prove that dim( T^(-1) ( Z ) ) <= dim V - dim W + dim Z your inequality is not correct. it should be $\geq$ instead of $\leq.$ let $T^{-1}(Z)=X$ and define the map $S: V \longrightarrow W/Z$ by $S(v)=T(v) + Z.$ clearly $\ker S = X$ and thus $V/X \cong S(V)=T(V)/Z.$ hence $\dim V - \dim X = \dim V/X = \dim T(V)/Z = \dim T(V) - \dim Z \leq \dim W - \dim Z.$ therefore $\dim X \geq \dim V - \dim W + \dim Z.$
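Not a proof, but a quick numerical sanity check of the corrected inequality $\dim T^{-1}(Z) \ge \dim V - \dim W + \dim Z$ over $\mathbb{R}$, using the fact that $T^{-1}(Z)$ is the null space of $AT$ for any matrix $A$ whose null space is $Z$; the dimensions and random data below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
dimV, dimW, dimZ = 7, 5, 2

T = rng.standard_normal((dimW, dimV))    # a random linear map V -> W
B = rng.standard_normal((dimW, dimZ))    # columns span the subspace Z of W

# Rows of A span the orthogonal complement of Z, so null(A) = Z.
U, _, _ = np.linalg.svd(B)
A = U[:, dimZ:].T

dim_preimage = dimV - np.linalg.matrix_rank(A @ T)   # dim of T^{-1}(Z) = null(A T)
print(dim_preimage, ">=", dimV - dimW + dimZ)         # here: 4 >= 4, with equality when T is surjective
```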
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7355271577835083, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/57593-affine-varieties.html
# Thread:

1. ## Affine varieties

(1) An integral domain $A$ is a principal ideal domain if every ideal $I$ of $A$ is principal, that is of the form $I = (a)$; show directly that the ideals in a PID satisfy the a.c.c. (ascending chain condition).

(2) Show that an integral domain $A$ is a UFD if and only if every ascending chain of principal ideals terminates, and every irreducible element of $A$ is prime.

No idea! Any help would be appreciated, thanks!

2. Originally Posted by shadow_2145
(1) An integral domain $A$ is a principal ideal domain if every ideal $I$ of $A$ is principal, that is of the form $I = (a)$; show directly that the ideals in a PID satisfy the a.c.c. (ascending chain condition).
(2) Show that an integral domain $A$ is a UFD if and only if every ascending chain of principal ideals terminates, and every irreducible element of $A$ is prime.
No idea! Any help would be appreciated, thanks!
You'll find a complete and easy-to-understand solution to your problem here.
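Since the linked solution is not reproduced in this extraction, here, added for reference, is the standard argument for (1). Let $(a_1) \subseteq (a_2) \subseteq \cdots$ be an ascending chain of ideals in the PID $A$, and let $I = \bigcup_n (a_n)$. Any two elements of $I$ lie in a common $(a_n)$, so $I$ is an ideal, hence $I = (a)$ for some $a \in A$. That $a$ lies in some $(a_N)$, so $(a) \subseteq (a_N) \subseteq (a_{N+1}) \subseteq \cdots \subseteq I = (a)$, and the chain is stationary from the $N$-th term on. For (2), the two stated conditions are exactly what makes the usual UFD proof go through: the a.c.c. on principal ideals gives existence of factorizations into irreducibles, and "irreducible implies prime" gives the standard uniqueness (cancellation) argument.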
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9115760922431946, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/19915/using-a-histogram-to-estimate-class-label-densities-in-a-tree-learner
# Using a histogram to estimate class label densities in a tree learner

In a sequential (on-line) tree algorithm, I'm trying to estimate class label densities using a histogram. The algorithm grows a tree by generating test functions at each decision node, which correspond to the features in the data set. That is to say, each feature is used to create a test of the form $g(x) > 0$ that's used to decide the left/right propagation of samples down each node of the binary tree. If the result of that test exceeds some threshold $\theta$, then samples are propagated down the right branch, otherwise they're propagated down the left branch.

When training the tree, I have to choose the best test to use at each decision node, which is accomplished using a quality measurement $Q(R_j, s)$ (the equation itself is an image in the original post and is not reproduced here), where $R_{jls}$ and $R_{jrs}$ are the left and right partitions made by test $s$, which correspond to the data greater than or less than the threshold $\theta$, $p_i^j$ is the label density of class $i$ in node $j$ (or $jls/jrs$ for the label densities of class $i$ in the left or right child nodes), $K$ is the total number of classes, and $|x|$ denotes the number of samples in a partition. In other words, the gain with respect to a test $s = g(x)$ depends on the entropy of class labels within that feature. This requires knowing the class label density at each decision node.

So, my uncertainty lies in how this can be accomplished with a histogram. A histogram for each feature would have N bins of size range/N, where range is the difference between the max and min values that each feature takes. If we don't have a priori knowledge of this range, we can keep track of the max/min values as we get more training data. This histogram would track the number of samples falling within each range, but it tells nothing about the class labels.

Another option is to have a histogram for each class, for each feature, for each node. So, you'd be keeping track of the number of samples having a particular class label given a range of feature values (each bin). This solution seems to be more in line with the above equation, since we need the label densities for each class and for each node, but I just want to verify that I'm on the right track.

For reference, see part 2.1.2 of this paper. Saffari et al., 2009. On-line Random Forests. Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference. DOI 10.1109/ICCVW.2009.5457447

## 1 Answer

I didn't understand very well what you want to do with histograms. If you have a set of records $R_j$ and you are able to generate a split $s$, you can compute $Q(R_j,s)$ easily. Given the split, you directly have the two subsets of records $R_{jls}$ and $R_{jrs}$, and you need only to compute the probability distribution across classes over these three sets. More specifically you only need:
$$p_i^j \ \ i=1,\dots,K$$
$$p_i^{jls} \ \ i=1,\dots,K$$
$$p_i^{jrs} \ \ i=1,\dots,K$$
And you can compute these values by counting the number of records of each class in a particular set of records:
$$p_i^j = \frac{|R^i_j|}{|R_j|}$$
$$p_i^{jls} = \frac{|R^i_{jls}|}{|R_{jls}|}$$
$$p_i^{jrs} = \frac{|R^i_{jrs}|}{|R_{jrs}|}$$
Where, for example, $R_j^i$ are the records of class $i$ in the set $R_j$.

You only need to keep track of the records in a node, which is associated to a set of records, and be able to find a split. If you have all the records in a particular node you might also be able to find splits, for example with an exhaustive search across all possible splits for each feature. If you have a record $(\mathbf{x},y)=(x_1,\dots,x_n,y)$, you put it in a different set according to a test on the feature $x_i$ just by looking at whether
$$x_i < \vartheta$$
(if $x_i$ is a continuous parameter). When you have a split you can compute $Q$; then, by varying $\vartheta$ and varying the feature, you can find the best split according to the $Q$ measure.
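The exact $Q(R_j,s)$ in the question is an image that did not survive extraction, but it is an entropy-based gain of the usual form; the sketch below (variable names are my own) shows that, as the answer says, it only needs per-class counts on each side of the threshold. Replacing the exact counts with per-class, per-feature histogram bin counts, as in the question's second option, changes nothing structurally.

```python
import numpy as np

def entropy(labels, n_classes):
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    p = counts / max(counts.sum(), 1.0)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def split_quality(feature, labels, theta, n_classes):
    """Entropy gain of the test g(x) = x - theta > 0, computed from class counts only."""
    right = feature > theta
    left = ~right
    n = len(labels)
    children = (left.sum() / n) * entropy(labels[left], n_classes) \
             + (right.sum() / n) * entropy(labels[right], n_classes)
    return entropy(labels, n_classes) - children

x = np.array([0.10, 0.20, 0.15, 0.80, 0.90, 0.85])
y = np.array([0, 0, 0, 1, 1, 1])
print(split_quality(x, y, theta=0.5, n_classes=2))   # 1.0: a perfectly separating test
```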
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9312764406204224, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2007/03/06/exact-sequences/?like=1&source=post_flair&_wpnonce=c1c9c39881
# The Unapologetic Mathematician ## Exact Sequences A sequence of groups is just a list of groups with homomorphisms going down the list: $...\rightarrow G_{n-1}\rightarrow G_n\rightarrow G_{n+1}\rightarrow ...$. We’ll use $d_n$ to refer to the homomorphism from $G_n$ to $G_{n+1}$. We say that a sequence is “exact” if the image of $d_{n-1}$ is the kernel of $d_n$ for each $n$. What does this mean? First off, if we start with an element of $G_{n-1}$ and hit it with $d_{n-1}$ we get something in ${\rm Im}(d_{n-1})$. Since this is the same as ${\rm Ker}(d_n)$, if we now apply $d_n$ we go to the identity element of $G_{n+1}$. That is, the composition $d_n\circ d_{n-1}$ is always the trivial homomorphism sending all of $G_{n-1}$ to the identity in $G_{n+1}$. On the other hand, if we have an element in the kernel of $d_n$ it’s also in the image of $d_{n-1}$. This means that if an element gets sent to the identity in the next group, it must have come from an element in the previous group. So why should we care? Well, a number of different things can be very nicely said with exact sequences. If we write $\mathbf1$ for the group containing only one element, we can set up a sequence: ${\mathbf1}\rightarrow G\rightarrow^fH$. What does it mean for this sequence to be exact? Well there’s only one homomorphism from $\mathbf1$ to any group, and its image is just the identity element in $G$. So the kernel of $f$ is trivial — $f$ is a monomorphism. Now let’s flip the diagram over to $G\rightarrow^fH\rightarrow{\mathbf1}$. There’s only one homomorphism possible from any group to $\mathbf1$, and its kernel is the whole domain. This means that the image of $f$ has to be all of $H$ — $f$ is an epimorphism. Let’s put these two together to get a sequence ${\mathbf1}\rightarrow N\rightarrow^\iota G\rightarrow^\pi H\rightarrow{\mathbf1}$. Exactness at $N$ means that $\iota$ is a monomorphism, which we can think of as describing a copy of $N$ sitting inside $G$. Exactness at $H$ means that $\pi$ is an epimorphism. What does exactness at $G$ mean? The image of $\iota$ is that copy of $N$, which has to also be the kernel of $\pi$. That is, $H$ is (isomorphic to) $G/N$. We call any sequence of this form a “short exact sequence”. Remember that the First Isomorphism Theorem tells us that we can factor any homomorphism into an epimorphism from the domain onto a quotient, followed by a monomorphism putting that quotient into the codomain. We can use that here to weave any exact sequence out of short exact sequences. Here is the (really cool) diagram: Each of the diagonal lines is a short exact sequence, and as it says each (nontrivial) group off the main line is the image of one of the homomorphisms on the line and the kernel of the next. We can also write an exact sequence ${\mathbf1}\rightarrow G\rightarrow H\rightarrow{\mathbf1}$. This just says that the homomorphism between $G$ and $H$ is an isomorphism. It’s really nice when this shows up in the middle of a longer exact sequence. If we can show that $G_{n-1}$ and $G_{n+2}$ are both trivial the sequence looks like $...\rightarrow{\mathbf1}\rightarrow G_n\rightarrow G_{n+1}\rightarrow{\mathbf1}\rightarrow ...$, so $G_n$ and $G_{n+1}$ are immediately isomorphic. Another way exact sequences show up is in describing the structure of a group. We know that every group is a quotient of a free group. That is, there is some free group $F_1$ so that $F_1\rightarrow G\rightarrow{\mathbf1}$ is exact. 
Then the kernel of this projection is another group, so it's the quotient of another free group $F_2$. Now the sequence $F_2\rightarrow F_1\rightarrow G\rightarrow{\mathbf1}$ is exact. This is the presentation of $G$ by generators and relations. But the homomorphism from $F_2$ to $F_1$ might have a nontrivial kernel — there might be relations between the relations. In that case we can describe those relations as the quotient of another free group $F_3$: $F_3\rightarrow F_2\rightarrow F_1\rightarrow G\rightarrow{\mathbf1}$ is exact. We can keep going like this to construct an exact sequence called a "free resolution of $G$". It's particularly nice if the process terminates at some point, giving a sequence ${\mathbf1}\rightarrow F_n\rightarrow F_{n-1}\rightarrow ...\rightarrow F_2\rightarrow F_1\rightarrow G\rightarrow{\mathbf1}$. A free resolution of a group that has only finitely many terms gives a lot of information about the structure of $G$.

## 3 Comments »

1. And welcome to my own area of research! These free resolutions are pretty near the objects I study. I prefer free resolutions with group algebras, and not groups. Easier to think about. Comment by | March 7, 2007 | Reply

2. Oh surely. But of course I haven't said what an algebra is.. yet! If nothing else, this whole project is reminding me how much I do know. I should be able to keep this going for quite a while yet. Comment by | March 7, 2007 | Reply

3. i enjoy the interactive session but im a masters student currently studying on delay differential equations.please i will be glad to have any basic knowledge and materials that would help in my study.thanks Comment by Esther Oluwaseun | April 25, 2012 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 62, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493197202682495, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/convention+education
# Tagged Questions

### Complete instead of Complex, Irregular instead of Imaginary (4 answers, 93 views)
Will the terms complex and imaginary ever be replaced? At least within beginning classes? I imagine it is more of a kind of hazing into the "mathematician's club" to allow the terms to confuse ...

### How is each factor of an expression called? (1 answer, 45 views)
$$mc^2.$$ is called an expression. Correct me if I'm wrong. I'd like to see this expression as $$m * c^2.$$ Here, one of the expression's factors is $m$. Is there a general name for the factors of an ...

### No radical in the denominator — why? (5 answers, 1k views)
Why do all school algebra texts define simplest form for expressions with radicals to not allow a radical in the denominator. For the classic example, $1/\sqrt{3}$ needs to be "simplified" to ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9161068201065063, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/93851-2-sided-alternative-hypothesis.html
# Thread:

1. ## 2-sided alternative hypothesis

Assuming the rest of my work is correct for this question, I am having trouble on part c. I don't know if I have the right formula and, if I do, I don't know how to calculate $t_{\frac{0.025}{2}}$.

Let X equal the length of wood blocks manufactured. Assume the distribution of X is $N(\mu,\sigma^{2})$. The greatest length is 7.5 inches. We shall test the null hypothesis $H_{0}: \mu=7.5$ against a 2-sided alternative hypothesis using 10 observations.

a) Define the test statistic and critical region for an $\alpha=0.05$ significance level.
test statistic: $t=\frac{\bar{x}-7.5}{\frac{s}{\sqrt{10}}}$
critical region: $|t|=\frac{|\bar{x}-7.5|}{\frac{s}{\sqrt{10}}} \ge t_{\frac{\alpha}{2}}(10-1)=2.262$

b) Calculate the value of the test statistic and give your decision using the following data (n=10): 7.65 7.60 7.65 7.70 7.55 7.55 7.40 7.40 7.50 7.50
$\bar{x}=\frac{75.5}{10}=7.55$, $s^{2}=0.01056$, s=0.10274, t=1.539
1.539 is not greater than 2.262; therefore, we fail to reject $H_{0}: \mu=7.5$.

c) Is $\mu=7.50$ contained in a 95% confidence interval for $\mu$?
$\bar{x} \pm t_{\frac{0.025}{2}}(n-1)(\frac{s}{\sqrt{n}})$
As of now, I am using my values of $\bar{x}=7.55$, s=0.10274 and n=10. Thank you for helping!

2. You would only need to know $t_{0.025/2}$ if you were calculating a 97.5% confidence interval for $\mu$.

3. I didn't understand that either. My book seems to use that for the two-sided hypotheses when finding the confidence interval. Is there a different formula I should use? Or the same without dividing $\alpha$ by 2. The critical region equation for hypotheses (one mean, variance unknown) written in the book for H1: $\mu$ does not equal $\mu_{0}$ is $|\bar{x}-\mu_{0}| \ge t_{\frac{\alpha}{2}}(n-1)\frac{s}{\sqrt{n}}$

4. Originally Posted by larz
I didn't understand that either. My book seems to use that for the two-sided hypotheses when finding the confidence interval. Is there a different formula I should use? Or the same without dividing $\alpha$ by 2. The critical region equation for hypotheses (one mean, variance unknown) written in the book for H1: $\mu$ does not equal $\mu_{0}$ is $|\bar{x}-\mu_{0}| \ge t_{\frac{\alpha}{2}}(n-1)\frac{s}{\sqrt{n}}$
$\alpha = 0.05$ in your problem
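A quick check of parts (a)–(c) with SciPy; note the critical value being discussed is $t_{\alpha/2}(9) = t_{0.025}(9) \approx 2.262$, i.e. `stats.t.ppf(0.975, df=9)`, not $t_{0.025/2}$:

```python
import numpy as np
from scipy import stats

x = np.array([7.65, 7.60, 7.65, 7.70, 7.55, 7.55, 7.40, 7.40, 7.50, 7.50])
n, mu0, alpha = len(x), 7.5, 0.05

xbar, s = x.mean(), x.std(ddof=1)
t = (xbar - mu0) / (s / np.sqrt(n))
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

lo, hi = xbar - t_crit * s / np.sqrt(n), xbar + t_crit * s / np.sqrt(n)
print(round(t, 3), round(t_crit, 3))      # 1.539 2.262 -> fail to reject H0
print(round(lo, 4), round(hi, 4))         # about (7.4765, 7.6235): mu0 = 7.50 lies inside
```

Failing to reject at the 5% level and the 95% confidence interval containing 7.50 are two ways of saying the same thing, which answers part (c).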
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9137001633644104, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/13072/stokes-theorem-etc-for-non-hausdorff-manifolds/
## Stokes' theorem etc., for non-Hausdorff manifolds

### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)

This question is prompted by another one. I want to motivate the definition of a scheme for people who know about manifolds (smooth, or complex analytic). So I define a manifold in the following way.

Defn: A smooth $n$-manifold is a pair $(X, \mathcal{O}_X)$, where $X$ is a topological space and $\mathcal{O}_X$ is a sheaf of rings on it, such that every point $x \in X$ has a neighborhood $U_x$ which is homeomorphic to an open set $V_x$ in $\mathbb{R}^n$ and $\mathcal{O}_X$ restricted to $U_x$ is isomorphic to the sheaf of rings of smooth functions on $V_x$ and its open subsets.

This agrees with the usual definition using charts and atlases, for all except the requirement that a manifold is a (separable) Hausdorff space. But indeed it seems that many things in differential topology can be proven without using the Hausdorff property. In a fleeting conversation in a brief encounter, a person, whom I shall refer to as O., informed me that even Stokes' theorem can be done this way. But I am unable to ask O. again about this, as he is not physically around. If the above is true, then this is a really good point to mention when introducing schemes to a person who knows about differentiable manifolds.

So my main question: Is it true that Stokes' theorem for smooth manifolds can be proven without the Hausdorff condition on the manifold? If so, is it done in any well-known reference?

Aside question: What are some crucial propositions/theorems in differential topology that use the Hausdorff condition, except those involving imbedding in some $\mathbb{R}^m$ for high enough $m$, for which of course Hausdorffness is a necessary condition (together with separability)?

Tom Church answers below that partitions of unity do not work, for instance on the example of the line with the double point. However I believe that one can still make sense of integration of differential forms even with such pathologies, because by introducing a measure for instance, we can ignore sets of measure $0$.

-

2 @Yemon: the same "line with a double origin" example given by Hartshorne works as an example of a non-Hausdorff manifold. – Mariano Suárez-Alvarez Jan 26 2010 at 22:31

1 I feel like removing the Hausdorff restriction is a pretty useless way to generalize manifolds. Instead, suppose that every point has a nbhd homeomorphic to a hilbert space, or a banach space, or even a frechet space. These are actually contexts where we can prove useful theorems cf. Lang's book on manifolds. One motivation I've heard for scheme theory is that we'd rather have a good category of bad spaces than a bad category of good spaces. Dropping the assumption of hausdorffness doesn't make our spaces category any nicer. – Harry Gindi Jan 27 2010 at 1:43

1 Remember that hausdorff spaces are nicer categorically than general topological spaces because they are algebraic with respect to monadicity. Dropping the assumption of Hausdorffness actually makes things worse rather than better. – Harry Gindi Jan 27 2010 at 1:44

5 Non-Hausdorff manifolds arise quite naturally in certain contexts. For example, if X is a foliated manifold the leaf space of X (e.g. the quotient of X defined by the equivalence relation x ~ y if and only if x and y lie on a common leaf) need not be Hausdorff. But it is otherwise a manifold (by the definition of a foliation).
I have a colleague who at one point described their main field of study as 1-dimensional (non-Hausdorff) manifolds (these arising as the leaf spaces of certain codimension 1 foliations of three-manifolds). – Emerton Jan 27 2010 at 2:44 4 I should modify my comment to be less categorical, and rather say that certain foliated manifolds have leaf space given by a non-Hausdorff manifold. – Emerton Jan 27 2010 at 13:32 show 8 more comments ## 2 Answers The existence of flows in the direction of a vector field seems to require Hausdorff; indeed, consider the vector field $\frac{\partial}{\partial x}$ on the line-with-two-origins. We have no global existence of a flow for any positive t, even if we make our space compact (that is, considering the circle-with-one-point-doubled). If the nonexistence of the flow is not visibly clear, consider instead the real line with the interval [0,1] doubled. Also, partitions of unity do not exist; for example, in the line with two origins, take the open cover by "the line plus the first origin" and "the line plus the second origin". There is no partition of unity subordinate to this cover (the values at each origin would have to be 1). For me, a basic example of the beauty of this function-theoretic approach is the definition of a vector field as a derivation $D\colon C^\infty(M)\to C^\infty(M)$. The proof that such a derivation defines a vector field hinges upon the fact that $Df$ near a point p only depends on $f$ near the point p. To prove this fact you use the fineness of your sheaf $\mathcal{O}_X$, i.e. the existence of partitions of unity. (It is true though that the failure of fineness in the non-Hausdorff case is of a different sort and might not break this particular theorem.) I feel that the existence of partitions of unity, and the implications thereof, is one of the basic fundamentals of approaching smooth manifolds through their functions; more importantly, a good handle on how partitions of unity are used is important to understand the differences that arise when the same approach is extended to more rigid functions (holomorphic, algebraic, etc.). Now that the question has been edited to ask specifically about Stokes' theorem, let me say a bit more. Stokes' theorem will be false for non-Hausdorff manifolds, because you can (loosely speaking) quotient out by part of your manifold, and thus part of its homology, without killing all of it. For the simplest example, consider dimension 1, where Stokes' theorem is the fundamental theorem of calculus. Let $X$ be the forked line, the 1-dimensional (non-Hausdorff) manifold which is the real line with the half-ray $[0,\infty)$ doubled. For nonnegative $x$, denote the two copies of $x$ by $x^\bullet$ and $x_\bullet$, and consider the submanifold $M$ consisting of $[-1,0) \cup [0^\bullet,1^\bullet] \cup [0_\bullet,1_\bullet]$. The boundary of $M$ consists of the three points $[-1]$ (with negative orientation), $[1^\bullet]$ (with positive orientation), and $[1_\bullet]$ (with positive orientation); to see this, just note that every other point is a manifold point. Consider the real-valued function on $X$ given by "$f(x)=x$" (by which I mean $f(x^\bullet)=f(x_\bullet)=x$). Its differential is the 1-form which we would naturally call $dx$. Now consider $\int_M dx$; it seems clear that this integral is 3, but I don't actually need this. Stokes' theorem would say that $\int_M dx=\int_M df = \int_{\partial M}f=f(1^\bullet)+f(1_\bullet)-f(-1)=1+1-(-1)=3$. 
This is all fine so far, but now consider the function given by $g(x)=x+10$. Since $dg=dx$, we should have $\int_M dx=\int_M dg=\int_{\partial M}g=g(1^\bullet)+g(1_\bullet)-g(-1)=11+11-9=13$. Contradiction. It's possible to explain this by the nonexistence of flows (instead of $df$, consider the flux of the flow by $\nabla f$). But also note that Stokes' theorem, i.e. homology theory, is founded on a well-defined boundary operation. However, without the Hausdorff condition, open submanifolds do not have unique boundaries, as for example $[-1,0)$ inside $X$, and so we can't break up our manifolds into smaller pieces. We can pass to the Hausdorff-ization as Andrew suggests by identifying $0^\bullet$ with $0_\bullet$, but now we lose additivity. Recall that $M$ was the disjoint union of $A=[-1,0)$ and $B=[0^\bullet,1^\bullet] \cup [0_\bullet,1_\bullet]$. So in the quotient $\partial [A] = [0]-[-1]$ and $\partial [B] = [1^\bullet]-[0]+[1_\bullet]-[0]=[1^\bullet]+[1_\bullet]-2[0]$, which shows that $\partial [M]\neq \partial [A]+\partial [B]$. This is inconsistent with any sort of Stokes formalism. Finally, I'd like to point out that Stokes' theorem aside, even rather nice non-Hausdorff manifolds can be significantly more complicated than we might want to deal with. One nice example is the leaf-space of the foliation of the punctured plane by the level sets of the function $f(x,y)=xy$. The leaf-space looks like the union of the lines $y=x$ and $y=-x$, except that the intersection has been blown up to four points, each of which is dense in this subset. In general, any finite graph can be modeled as a non-Hausdorff 1-manifold by blowing up the vertices, and in higher dimensions the situation is even more confusing. So for any introductory explanation, I would strongly recommend requiring Hausdorff until the students have a lot more intuition about manifolds. - I am a little unhappy about "if we make our space compact": if you accept the algebro-geometric point of view, compactness includes the Hausdorff property, just like complete schemes have to be separated. – t3suji Jan 27 2010 at 0:19 5 Compactness has not included Hausdorffness for ages, just as no one says today preschéma. – Mariano Suárez-Alvarez Jan 27 2010 at 0:57 1 I think that the theorem relating derivations to vector fields still goes through. As I recall, all one needs is the existence of a single bump function around every point, and you still get this as it's true locally in R^n. – Jason DeVito Jan 27 2010 at 1:19 Lack of partition of unity is a serious problem. However I suppose we can make sense of integration of differential forms still, by transforming the situation to that of integrating a measure, wherein we can ignore sets of measure 0. – Anweshi Jan 27 2010 at 11:35 I have edited my question a little bit, in order to focus on Stokes' theorem. I hope you don't mind. – Anweshi Jan 27 2010 at 11:38 show 1 more comment ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. If you are a little bit careful as to what you mean by a "non-Hausdorff manifold" then I believe that Stokes' theorem will go through. The crucial issue is that the smooth functions cannot detect the fact that the manifold is not Hausdorff. Thus when you try to integrate a smooth function over the manifold, the function that you are integrating does not know that the manifold was not Hausdorff. 
Thus the fact that partitions of unity don't work doesn't matter because they do work "up to Hausdorffness". Thus if "non-Hausdorff manifold" essentially means "becomes a manifold upon Hausdorffification" then you ought to be alright. I can't think of a counterexample off the top of my head, but that doesn't mean that one doesn't exist (i.e. an example of a "locally Euclidean space" that does not quotient down to a "Hausdorff locally Euclidean space" - ignore paracompactness for this, that's cheating). A slightly tangential point is that if you have a "locally Euclidean space" that is not Hausdorff then you probably have the wrong topology on it. Take, again, the double pointed line. You probably think that the correct topology on this is the one with basis either $(a,b)$ or $(-a,a)\setminus \{0\}\cup \{\ast\}$. Wrong! By doing so, you are artificially imposing the condition that $0$ and $\ast$ can be distinguished without justifying that assertion. If you try by differential topological means (i.e. not topological, but differential topological) to separate $0$ and $\ast$ you find that you cannot tell the difference between them. So the correct topology has basis $(a,b)$ with $a$ and $b$ of the same sign, and $(-a,a) \cup \{\ast\}$. That is, the induced topology on the subset $\{0,\ast\}$ is the indiscrete topology. At this point I should come clean (especially in the light of Emerton's comment to the original question) and say that when thinking of things that are like-but-not-manifolds then I think of Froelicher spaces. So a non-Hausdorff manifold really means a non-Hausdorff Froelicher space with nice local properties. And for Froelicher spaces, the difference between Hausdorff and non-Hausdorff is extremely small. But if what you are interested in is smooth functions, then that's the right view to take. And if you are integrating, then you are using smooth functions. Of course, in other contexts you may want to remember more structure than just what the smooth functions can detect, but that's a different story. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417856931686401, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/175477/if-f-mathbb-q-to-mathbb-q-is-a-homomorphism-prove-that-fx-0-for-all-x/175480
# If $f: \mathbb Q\to \mathbb Q$ is a homomorphism, prove that $f(x)=0$ for all $x\in\mathbb Q$ or $f(x)=x$ for all $x$ in $\mathbb Q$. If $f: \mathbb Q\to \mathbb Q$ is a homomorphism, prove that $f(x)=0$ for all $x\in\mathbb Q$ or $f(x)=x$ for all $x$ in $\mathbb Q$. I'm wondering if you can help me with this one? - 13 Homomorphism of groups, rings, fields? – countinghaus Jul 26 '12 at 15:02 – Martin Sleziak Jul 26 '12 at 15:09 BTW, ring homomorphisms are often asked to be unital ($f(1) = 1$). If it is the convention you adopt, $0$ is not a morphism $\mathbb Q \to \mathbb Q$. And if it isn't, the property wou want to prove is false (see for example $x \mapsto 2x$.) – PseudoNeo Jul 26 '12 at 15:12 2 @PseudoNeo: Wrong. $x\mapsto 2x$ does $1\cdot 1=1\mapsto 2\neq 2\cdot 2$. – tomasz Jul 26 '12 at 15:20 @tomasz : You're right, I even read a question about non-unital morphisms $n\mathbb Z \to m\mathbb Z$ only few hours ago... – PseudoNeo Jul 26 '12 at 17:23 ## 4 Answers Any multiplicative homomorphism must send $1$ to an idempotent. The only idempotents of $\mathbb{Q}$ are $0$ and $1$ (the roots of $x^2-x=x(x-1)$), so $\varphi(1)=0$ or $\varphi(1)=1$. If $\varphi(1)=0$, then $\varphi(a) = \varphi(1a) = \varphi(1)\varphi(a)=0$, so $\varphi(x)=0$ for all $x$. If $\varphi(1)=1$ and the map is additive, then prove inductively that $\varphi(n)=n$ for all positive integers, hence for all integers; deduce that $\varphi(q) = q$ for all $q$. - Thank you . It was of much help as well :) – Mirna Jul 27 '12 at 18:34 I understand that we're looking at rationals as a ring (as a group it is obviously false). Pick an arbitrary homomorphism $\varphi:\mathbf Q\to \mathbf Q$. Put $e:=\varphi(1)$. If $e=0$, then for any $x\in \mathbf Q$ we have $\varphi(x)=\varphi(1)\varphi(x)=0\cdot\varphi(x)=0$, so $\varphi$ is zero. If $e\neq 0$, then $e\cdot e=\varphi(1)\varphi(1)=\varphi(1\cdot 1)=e$, so $e=1$ (since $e\neq 0$ and $0,1$ are the only solutions of the equation $x^2-x=0$). Furthermore, for any $\frac{p}{q}$ we have $1+\ldots+1=p\cdot 1= p=\frac{p}{q}+\ldots \frac{p}{q}=q\cdot \frac{p}{q}$, so $e+\ldots +e=p\cdot e= q\cdot \varphi(\frac{p}{q})$, so $\varphi(\frac{p}{q})=e\cdot \frac{p}{q}=\frac{p}{q}$ and we're done (for negative $p$ we can use $\frac{p}{q}=(-1)\cdot \frac{-p}{q}$ so $\varphi(\frac{p}{q})=-e\cdot\varphi(\frac{-p}{q})=\frac{p}{q}$). - As groups, it is also true. – Steve D Jul 26 '12 at 17:07 @tomasz I'm new at this "forum" and I do appreciate your answer, but I'm kind of confused, as I thought it would be easier to solve. Can you please tell if your answer belongs to the f(x)= 0 or f(x)=x? Thanks, – Mirna Jul 26 '12 at 17:36 1 @Mirna: this deals with both cases at once. If $f(1) = 0$ then $f$ is identically 0 (sentence starting 'If $e = 0$, ...') - the rest shows that if $f(1) \neq 0$ then $f$ is the identity. – Kris Jul 26 '12 at 17:53 @SteveD: How so? x→ax is a group homomorphism of Q into Q for any a∈Q. For a≠0 it is even an automorphism. Or, if you meant the multiplicative *semi*group, then I believe the homomorphisms are induced by injections of the set of prime numbers within itself (as a set), and absolute value (so there's a lot of nontrivial ones around, many of them automorphisms, even many more than in the additive group case). – tomasz Jul 26 '12 at 23:29 @tomasz: Yes, sorry, I tried to delete my comment after posting it, I simply meant that all (non-trivial) homomorphisms are isomorphisms. 
– Steve D Jul 26 '12 at 23:33

If $f$ is a homomorphism of rings, we know the kernel of $f$ has to be an ideal of $\mathbb{Q}$, but as $\mathbb{Q}$ is a field, the only ideals of $\mathbb{Q}$ are $0$ and $\mathbb{Q}$ itself.

-

3 It might be my ignorance in ring theory, but it seems to me you still need to show that there's no nontrivial embedding of $\mathbf Q$ into itself for this to be complete. For example, if instead of $\mathbf Q$ you used a field with nontrivial automorphisms, your argument would be clearly insufficient. – tomasz Jul 26 '12 at 15:24

2 Noting that the prime field of $\mathbb{Q}$ is itself is sufficient, so if $\varphi$ is $1-1$ it is the $Id$ otherwise $\varphi\equiv 0$ because the kernel is an ideal. – Belgi Jul 26 '12 at 15:38

There is also a related result that can be useful (see Atiyah–Macdonald): If $A$ is a field, then every homomorphism of $A$ into a nonzero ring $B$ is injective. So, you can conclude that necessarily $f(1)=1$ (otherwise it is the zero homomorphism), then $f(n)=n$ for $n$ an integer, and the result follows, since $f(n/m)=f(n)f(m)^{-1}=n/m$.

-

This is not an answer, but a comment. (But I think you can't comment yet, so I won't downvote it for now.) – tomasz Aug 7 '12 at 13:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343393445014954, "perplexity_flag": "head"}
http://mathoverflow.net/questions/69117/cohomological-dimension-of-mathcalb-n/69119
## Cohomological dimension of $\mathcal{B}_n$ ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) What is the cohomological dimension of the braid group $\mathcal{B}_n$ on n-strands ? A reference would be appreciated. - An obvious upper bound would be $2n$ for $B_n$, by looking at the classifying space. I think you could even take an upper bound of $n-1$. – Steve D Jun 29 2011 at 15:26 There is a tag already for cohomological-dimension, so I figured we may as well use it – David White Jun 29 2011 at 16:30 ## 6 Answers The other answers are correct, but I wanted to point out a quick way to see that $B_n$ has cohomological dimension $n-1$. One obtains a lower bound of $n-1$ since $\mathbb{Z}^{n-1}$ is a subgroup of $B_n$. Take $n-1$ disjoint non-isotopic loops forming a pants decomposition of the $n$ punctured plane, then Dehn twists about these give a subgroup isomorphic to $\mathbb{Z}^{n-1}$. For an upper bound, one may use the fact that the moduli space of $n$ points in $\mathbb{C}$ (normalized to have sum $=0$) is a Stein manifold of complex dimension $n-1$, and therefore has a spine of dimension $n-1$. The fundamental group of this space is $B_n$. This space is equivalent to the space of monic polynomials of degree $n$ with zero trace (coefficient of degree $n-1=0$) and non-zero discriminant, which is how one may see that it is Stein (actually, an affine variety). The fact that the moduli space is a $K(B_n,1)$ follows from Teichmuller theory, or one may pass to the finite-sheeted cover of $n$ marked points, and see that this is an iterated surface bundle, and therefore its universal cover is contractible. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. It's $n-1$. See Harer, The virtual cohomological dimension of the mapping class group of an orientable surface, Inventiones Mathematicae, Volume 84, Number 1, 157-176. - There is a more geometric(?) method to obtain an upper bound of the cohomological dimension. Recall that the braid group $B_n$ is the fundamental group of the complement of the complexification of an essential hyperplane arrangement `$\mathcal{A}_{n-1}$` in an $(n-1)$-dimensional vector space $\mathfrak{h}_{n-1}$ divided out by the action of the symmetric group $\Sigma_n$ of $n$ letters. Namely ```$$ B_n = \pi_1((\mathfrak{h}_{n-1}\otimes\mathbb{C} - \cup_{H\in \mathcal{A}_{n-1}} H\otimes\mathbb{C})/\Sigma_n). $$``` Since the complement is known to be $K(\pi,1)$, it serves as the classifying space of $B_n$. In general, for any real hyperplane arrangement $\mathcal{A}$, Salvetti constructed a cell complex $\mathrm{Sal}(\mathcal{A})$ which is homotopy equivalent to the complement of the complexification. Furthermore if $\mathcal{A}$ is essential in an $n$ dimensional vector space, then $\dim\mathrm{Sal}(\mathcal{A}) = n$. Thus the cohomological dimension of $B_n$ is bounded by $n-1$. - I also recommend M. Korkmaz' survey on low-dimensional homology of Mapping Class Groups: Korkmaz, Mustafa(TR-MET) Low-dimensional homology groups of mapping class groups: a survey. (English summary) Turkish J. Math. 26 (2002), no. 1, 101–114. 57M05 (20J05 57M07 57N05) - The homology of braid groups with rational and finite field coefficients (all with constant action) was computed by Cohen in the appendix to chapter III of "The homology of iterated loop spaces" LNM 533. 
- I will sadly have to answer to this old post to elaborate on the lower bound part of Agol's answer. I would be grateful if someone could turn this into a comment. It is in fact quite easy to find a subgroup of $P_n$ isomorphic to $\mathbb Z^{n-1}$. It is generated by the full twist of the first $k$ strands, where $k= 2,3,\dots, n$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9130279421806335, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/82975/integrating-without-complex-analysis/164241
# Integrating without complex analysis How can I evaluate this integral (without complex analysis)? $$\int_{-\infty}^\infty\sinh [x(1-b)] \exp(iax) dx\qquad a, b\in \mathbb R$$ Thanks. - 1 The way you have written this, it looks like $\sinh[a(1-b)]$ is just a constant which you can take outside of the integral. In this case the integral $\int_{-\infty}^{\infty} \exp(iax) dx$ does not converge. Maybe you meant to write something else. – Aleks Vlasev Nov 17 '11 at 9:03 Did you intend to have an $x$ somewhere in the $\sinh$? Yeah, what Aleks said ;-) – robjohn♦ Nov 17 '11 at 9:04 @AleksVlasev: Thanks, very well spotted! It's late and I'm losing it, anyway, I have edited it now. – peake Nov 17 '11 at 9:10 @robjohn: You are very right! :-) – peake Nov 17 '11 at 9:11 4 $\sinh(x)\to\pm\infty$ as $x\to\pm\infty$ so the integral doesn't converge, no complex analysis needed :-). – robjohn♦ Nov 17 '11 at 9:12 show 1 more comment ## 1 Answer This is the Fourier transform of $\sinh$, which can be broken down as a sum of exponentials. Because the Fourier transform is a linear operator, your integral is a sum of Fourier transforms of exponentials, that is a sum of Lorentzian functions (as it's shown for instance here). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9278088212013245, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/10846/octonionic-unitary-group/10856
## Octonionic Unitary Group? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Hi all. I was wondering if anyone has any references on work related to the Octonionic Unitary group. I would imagine that such a group would be generated by Octonionic skew-Hermitian matrices (at the lie-algebra level) in analogy to the Complex Unitary and the Quaternion Unitary (aka Compact Symplectic) groups. Some hints of the construction appear on page 28 of the paper "The Octonions" by Baez. http://arxiv.org/abs/math/0105155 Thanks - ## 3 Answers Instead of asking for a group of matrices, one could try to think about the automorphism group of projective space plus its metric. For R^n, this is PO(n). For C^n, it's PU(n) x Z_2 (semidirect product), where the Z_2 is coming from complex conjugation. For H^n, it's PU(n,H) x SO(3) (again semidirect), where the SO(3) is really inner automorphisms of H. For OP^2, it's E_6, as I recall (not a semidirect product!). Even defining OP^2 is nonobvious. Daniel Allcock has a paper in which he shows the equivalence of several peoples' definitions. There doesn't seem to be any reasonable candidate for OP^3, but some have speculated that it should be the "Monster manifold" whose associated CFT is the moonshine module. - 3 I believe that the automorphism group of OP^2 is F_4, as F_4/Spin(9) = OP^2 (though I'm VERY not certain about this). More on exactly what OP^2 is can be found here: mathoverflow.net/questions/1922/… – Jason DeVito Jan 5 2010 at 23:42 2 The group $E_6$ is the group of collineations (aka projective transformtions) of $\mathbb{OP}^2$ while the group $F_4$ is the group of isometries since there is a naturally defined Riemannian metric on $\mathbb{OP}^2$. – robot May 15 2012 at 11:46 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Such a thing would not be a group, because the octonions are not associative. For example, the octonionic unitary group of dimension 1 would just be the unit octonions, which form S^7, the sphere in R^8. This is of course not a group. This is not to say the algebraic structure you get is not interesting, it is just not a group. - My reading of the reference supplied by the OP was that Baez explains precisely how to get around this issue. – Kevin Buzzard Jan 6 2010 at 11:11 As others have pointed out, the nonassociativity of the octonions prevents one from constructing a group. For example, any subgroup of the octonions lives inside of a quaternion subalgebra. Having said that, the Clifford algebra $Cl(\mathbb{R}^7)$ has two inequivalent irreducible representations which are each as real vector spaces isomorphic to the octonions (i.e., they are eight-dimensional) and provided that we identify $\mathbb{R}^7$ with the imaginary octonions, the action of $Cl(\mathbb{R}^7)$ is given by left and right octonionic multiplications. This is analogous to what happens with $Cl(\mathbb{R}^3)$ substituting octonion for quaternion in what I said above. Now the Spin group $Spin(3)$ is the one-dimensional quaternionic unitary group and lives naturally inside $Cl(\mathbb{R}^3)$, so one could think of the group $Spin(7)$ as being the analogue of the one-dimensional octonionic unitary group. 
By the same token, and given the low-dimensional isomorphisms $$Spin(2,1) \cong SL(2,\mathbb{R})$$ $$Spin(3,1) \cong SL(2,\mathbb{C})$$ $$Spin(5,1) \cong SL(2,\mathbb{H})$$ one would be tempted to think of $Spin(9,1)$ as $SL(2,\mathbb{O})$, even though such a group as written does not exist. -
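As a purely illustrative aside (not part of the original question or answers): the non-associativity that the answers point to as the obstruction is easy to exhibit numerically via the Cayley–Dickson construction described in Baez's paper cited above. The sketch below builds the reals, complexes, quaternions and octonions by repeated doubling and shows that a random quaternion associator vanishes (up to floating-point error) while a random octonion associator does not. The multiplication convention used is one of several equivalent Cayley–Dickson conventions, and all names here are arbitrary.

```python
import random

# Cayley-Dickson doubling: an element of the next algebra is a pair (a, b) of
# elements of the previous one; plain floats (the reals) are the base case.
# One doubling per level: R -> C -> H (quaternions) -> O (octonions).

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def sub(x, y):
    return add(x, neg(y))

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    # (a, b)(c, d) = (ac - d*b, da + bc*)  -- one common Cayley-Dickson convention
    return (sub(mul(a, c), mul(conj(d), b)), add(mul(d, a), mul(b, conj(c))))

def rand_elem(level):                      # level 2 = quaternion, level 3 = octonion
    if level == 0:
        return random.uniform(-1.0, 1.0)
    return (rand_elem(level - 1), rand_elem(level - 1))

def flatten(x):
    return [x] if not isinstance(x, tuple) else flatten(x[0]) + flatten(x[1])

def associator_size(level):
    x, y, z = (rand_elem(level) for _ in range(3))
    assoc = sub(mul(mul(x, y), z), mul(x, mul(y, z)))   # (xy)z - x(yz)
    return max(abs(v) for v in flatten(assoc))

random.seed(1)
print("quaternion associator:", associator_size(2))   # ~1e-16: associative
print("octonion associator:  ", associator_size(3))   # clearly nonzero: not associative
```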
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445899724960327, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Quicksort
# Quicksort

Visualization of the quicksort algorithm (figure): the horizontal lines are pivot values.

• Class: Sorting algorithm
• Worst case performance: O(n^2)
• Best case performance: O(n log n)
• Average case performance: O(n log n)
• Worst case space complexity: O(n) auxiliary (naive), O(log n) auxiliary (Sedgewick 1978)

Quicksort, or partition-exchange sort, is a sorting algorithm developed by Tony Hoare that, on average, makes O(n log n) comparisons to sort n items. In the worst case, it makes O(n^2) comparisons, though this behavior is rare. Quicksort is often faster in practice than other O(n log n) algorithms.[1] Additionally, quicksort's sequential and localized memory references work well with a cache. Quicksort is a comparison sort and, in efficient implementations, is not a stable sort. Quicksort can be implemented with an in-place partitioning algorithm, so the entire sort can be done with only O(log n) additional space used by the stack during the recursion.[2]

## History

The quicksort algorithm was developed in 1960 by Tony Hoare while in the Soviet Union, as a visiting student at Moscow State University. At that time, Hoare worked in a project on machine translation for the National Physical Laboratory. He developed the algorithm in order to sort the words to be translated, to make them more easily matched to an already-sorted Russian-to-English dictionary that was stored on magnetic tape.[3]

## Algorithm

Quicksort is a divide and conquer algorithm. Quicksort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sub-lists. The steps are:

1. Pick an element, called a pivot, from the list.
2. Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
3. Recursively apply the above steps to the sub-list of elements with smaller values and separately the sub-list of elements with greater values.

The base cases of the recursion are lists of size zero or one, which never need to be sorted.

### Simple version

In simple pseudocode, the algorithm might be expressed as this:

```
function quicksort('array')
    if length('array') ≤ 1
        return 'array'  // an array of zero or one elements is already sorted
    select and remove a pivot value 'pivot' from 'array'
    create empty lists 'less' and 'greater'
    for each 'x' in 'array'
        if 'x' ≤ 'pivot' then append 'x' to 'less'
        else append 'x' to 'greater'
    return concatenate(quicksort('less'), 'pivot', quicksort('greater'))  // two recursive calls
```

Full example of quicksort on a random set of numbers (figure): the shaded element is the pivot. It is always chosen as the last element of the partition. However, always choosing the last element in the partition as the pivot in this way results in poor performance ($O(n^2)$) on already sorted lists, or lists of identical elements. Since sub-lists of sorted / identical elements crop up a lot towards the end of a sorting procedure on a large set, versions of the quicksort algorithm which choose the pivot as the middle element run much more quickly than the algorithm described in this diagram on large sets of numbers.

Notice that we only examine elements by comparing them to other elements. This makes quicksort a comparison sort. This version is also a stable sort (assuming that the "for each" method retrieves elements in original order, and the pivot selected is the last among those of equal value).
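For concreteness, the pseudocode above translates almost line for line into a runnable (if inefficient) implementation. The following Python sketch is an added illustration, not part of the original article; like the pseudocode it copies the list on every call, so it uses O(n) extra space, and it takes the middle element as pivot in line with the remark in the caption above about sorted inputs.

```python
def quicksort(values):
    """Simple out-of-place quicksort, mirroring the pseudocode above."""
    if len(values) <= 1:
        return values                          # zero or one element: already sorted
    mid = len(values) // 2
    pivot = values[mid]                        # middle element; see "Choice of pivot" below
    rest = values[:mid] + values[mid + 1:]     # "select and remove" the pivot
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)

print(quicksort([3, 9, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9, 9]
```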
The correctness of the partition algorithm is based on the following two arguments:

• At each iteration, all the elements processed so far are in the desired position: before the pivot if less than the pivot's value, after the pivot if greater than the pivot's value (loop invariant).
• Each iteration leaves one fewer element to be processed (loop variant).

The correctness of the overall algorithm can be proven via induction: for zero or one element, the algorithm leaves the data unchanged; for a larger data set it produces the concatenation of two parts, elements less than the pivot and elements greater than it, themselves sorted by the recursive hypothesis.

An example of quicksort (figure). In-place partition in action on a small list (figure): the boxed element is the pivot element, blue elements are less or equal, and red elements are larger.

### In-place version

The disadvantage of the simple version above is that it requires O(n) extra storage space, which is as bad as merge sort. The additional memory allocations required can also drastically impact speed and cache performance in practical implementations. There is a more complex version which uses an in-place partition algorithm and can achieve the complete sort using O(log n) space (not counting the input) on average (for the call stack). We start with a partition function:

```
// left is the index of the leftmost element of the subarray
// right is the index of the rightmost element of the subarray (inclusive)
// number of elements in subarray = right-left+1
function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]  // Move pivot to end
    storeIndex := left
    for i from left to right - 1  // left ≤ i < right
        if array[i] <= pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]  // Move pivot to its final place
    return storeIndex
```

This is the in-place partition algorithm. It partitions the portion of the array between indexes left and right, inclusively, by moving all elements less than `array[pivotIndex]` before the pivot, and the equal or greater elements after it. In the process it also finds the final position for the pivot element, which it returns. It temporarily moves the pivot element to the end of the subarray, so that it doesn't get in the way. Because it only uses exchanges, the final list has the same elements as the original list. Notice that an element may be exchanged multiple times before reaching its final place. Also, in case of pivot duplicates in the input array, they can be spread across the right subarray, in any order. This doesn't represent a partitioning failure, as further sorting will reposition and finally "glue" them together. This form of the partition algorithm is not the original form; multiple variations can be found in various textbooks, such as versions not having the storeIndex. However, this form is probably the easiest to understand.
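A direct Python transcription of this partition function (again an added illustrative sketch, not from the article) behaves the same way; the sample call shows the pivot value 4 landing at its final index, with everything less than or equal to it on the left.

```python
def partition(array, left, right, pivot_index):
    """Partition array[left..right] (inclusive) around array[pivot_index] in place.

    Elements <= the pivot value end up before it, the rest after it;
    returns the pivot's final index."""
    pivot_value = array[pivot_index]
    array[pivot_index], array[right] = array[right], array[pivot_index]   # move pivot to end
    store_index = left
    for i in range(left, right):                                          # left <= i < right
        if array[i] <= pivot_value:
            array[i], array[store_index] = array[store_index], array[i]
            store_index += 1
    array[store_index], array[right] = array[right], array[store_index]   # pivot to final place
    return store_index

a = [7, 2, 9, 4, 3, 8, 6]
print(partition(a, 0, len(a) - 1, 3), a)   # 2 [2, 3, 4, 6, 7, 8, 9]
# (this particular input happens to end up fully sorted; in general only the
#  pivot is guaranteed to be in its final place)
```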
Once we have this, writing quicksort itself is easy:

```
function quicksort(array, left, right)
    // If the list has 2 or more items
    if left < right
        // See "Choice of pivot" section below for possible choices
        choose any pivotIndex such that left ≤ pivotIndex ≤ right
        // Get lists of bigger and smaller items and final position of pivot
        pivotNewIndex := partition(array, left, right, pivotIndex)
        // Recursively sort elements smaller than the pivot
        quicksort(array, left, pivotNewIndex - 1)
        // Recursively sort elements at least as big as the pivot
        quicksort(array, pivotNewIndex + 1, right)
```

Each recursive call to this quicksort function reduces the size of the array being sorted by at least one element, since in each invocation the element at pivotNewIndex is placed in its final position. Therefore, this algorithm is guaranteed to terminate after at most n recursive calls. However, since partition reorders elements within a partition, this version of quicksort is not a stable sort.

### Implementation issues

#### Choice of pivot

In very early versions of quicksort, the leftmost element of the partition would often be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already sorted arrays, which is a rather common use-case. The problem was easily solved by choosing either a random index for the pivot, choosing the middle index of the partition or (especially for longer partitions) choosing the median of the first, middle and last element of the partition for the pivot (as recommended by R. Sedgewick).[4][5]

Selecting a pivot element is also complicated by the existence of integer overflow. If the boundary indices of the subarray being sorted are sufficiently large, the naïve expression for the middle index, (left + right)/2, will cause overflow and provide an invalid pivot index. This can be overcome by using, for example, left + (right-left)/2 to index the middle element, at the cost of more complex arithmetic. Similar issues arise in some other methods of selecting the pivot element.

#### Optimizations

Two other important optimizations, also suggested by R. Sedgewick, commonly acknowledged and widely used in practice, are:[6][7][8]

• To make sure at most O(log N) space is used, recurse first into the smaller half of the array, and use a tail call to recurse into the other.
• Use insertion sort, which has a smaller constant factor and is thus faster on small arrays, for invocations on such small arrays (i.e. where the length is less than a threshold t determined experimentally). This can be implemented by leaving such arrays unsorted and running a single insertion sort pass at the end, because insertion sort handles nearly sorted arrays efficiently. A separate insertion sort of each small segment as they are identified adds the overhead of starting and stopping many small sorts, but avoids wasting effort comparing keys across the many segment boundaries, which keys will be in order due to the workings of the quicksort process. It also improves the cache use.

#### Parallelization

Like merge sort, quicksort can also be parallelized due to its divide-and-conquer nature. Individual in-place partition operations are difficult to parallelize, but once divided, different sections of the list can be sorted in parallel. The following is a straightforward approach: If we have $p$ processors, we can divide a list of $n$ elements into $p$ sublists in O(n) average time, then sort each of these in $\textstyle O\left(\frac{n}{p} \log\frac{n}{p}\right)$ average time.
Ignoring the O(n) preprocessing and merge times, this is linear speedup. If the split is blind, ignoring the values, the merge naïvely costs O(n). If the split partitions based on a succession of pivots, it is tricky to parallelize and naïvely costs O(n). Given O(log n) or more processors, only O(n) time is required overall, whereas an approach with linear speedup would achieve O(log n) time for overall. One advantage of this simple parallel quicksort over other parallel sort algorithms is that no synchronization is required, but the disadvantage is that sorting is still O(n) and only a sublinear speedup of O(log n) is achieved. A new thread is started as soon as a sublist is available for it to work on and it does not communicate with other threads. When all threads complete, the sort is done. Other more sophisticated parallel sorting algorithms can achieve even better time bounds.[9] For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in O(log n) time on a CRCW PRAM with n processors by performing partitioning implicitly.[10] ## Formal analysis ### Average-case analysis using discrete probability Quicksort takes O(n log n) time on average, when the input is a random permutation. Why? For a start, it is not hard to see that the partition operation takes O(n) time. In the most unbalanced case, each time we perform a partition we divide the list into two sublists of size 0 and $n-1$ (for example, if all elements of the array are equal). This means each recursive call processes a list of size one less than the previous list. Consequently, we can make $n-1$ nested calls before we reach a list of size 1. This means that the call tree is a linear chain of $n-1$ nested calls. The $i$th call does $O(n-i)$ work to do the partition, and $\textstyle\sum_{i=0}^n (n-i) = O(n^2)$, so in that case Quicksort takes $O(n^2)$ time. That is the worst case: given knowledge of which comparisons are performed by the sort, there are adaptive algorithms that are effective at generating worst-case input for quicksort on-the-fly, regardless of the pivot selection strategy.[11] In the most balanced case, each time we perform a partition we divide the list into two nearly equal pieces. This means each recursive call processes a list of half the size. Consequently, we can make only $\log n / \log 2$ nested calls before we reach a list of size 1. This means that the depth of the call tree is $\log n / \log 2$. But no two calls at the same level of the call tree process the same part of the original list; thus, each level of calls needs only O(n) time all together (each call has some constant overhead, but since there are only O(n) calls at each level, this is subsumed in the O(n) factor). The result is that the algorithm uses only O(n log n) time. In fact, it's not necessary to be perfectly balanced; even if each pivot splits the elements with 75% on one side and 25% on the other side (or any other fixed fraction), the call depth is still limited to $\log n/ \log (4/3)$, so the total running time is still O(n log n). So what happens on average? If the pivot has rank somewhere in the middle 50 percent, that is, between the 25th percentile and the 75th percentile, then it splits the elements with at least 25% and at most 75% on each side. If we could consistently choose a pivot from the two middle 50 percent, we would only have to split the list at most $\log n/ \log (4/3)$ times before reaching lists of size 1, yielding an O(n log n) algorithm. 
When the input is a random permutation, the pivot has a random rank, and so it is not guaranteed to be in the middle 50 percent. However, when we start from a random permutation, in each recursive call the pivot has a random rank in its list, and so it is in the middle 50 percent about half the time. That is good enough. Imagine that you flip a coin: heads means that the rank of the pivot is in the middle 50 percent, tail means that it isn't. Imagine that you are flipping a coin over and over until you get k heads. Although this could take a long time, on average only 2k flips are required, and the chance that you won't get $k$ heads after $100k$ flips is highly improbable (this can be made rigorous using Chernoff bounds). By the same argument, Quicksort's recursion will terminate on average at a call depth of only $2(\log n/ \log (4/3))$. But if its average call depth is O(log n), and each level of the call tree processes at most $n$ elements, the total amount of work done on average is the product, O(n log n). Note that the algorithm does not have to verify that the pivot is in the middle half—if we hit it any constant fraction of the times, that is enough for the desired complexity. ### Average-case analysis using recurrences An alternative approach is to set up a recurrence relation for the T(n) factor, the time needed to sort a list of size $n$. In the most unbalanced case, a single Quicksort call involves O(n) work plus two recursive calls on lists of size $0$ and $n-1$, so the recurrence relation is $T(n) = O(n) + T(0) + T(n-1) = O(n) + T(n-1).$ This is the same relation as for insertion sort and selection sort, and it solves to worst case $T(n) = O(n^2)$. In the most balanced case, a single quicksort call involves O(n) work plus two recursive calls on lists of size $n/2$, so the recurrence relation is $T(n) = O(n) + 2T\left(\frac{n}{2}\right).$ The master theorem tells us that T(n) = O(n log n). The outline of a formal proof of the O(n log n) expected time complexity follows. Assume that there are no duplicates as duplicates could be handled with linear time pre- and post-processing, or considered cases easier than the analyzed. When the input is a random permutation, the rank of the pivot is uniform random from 0 to n-1. Then the resulting parts of the partition have sizes i and n-i-1, and i is uniform random from 0 to n-1. So, averaging over all possible splits and noting that the number of comparisons for the partition is $n-1$, the average number of comparisons over all permutations of the input sequence can be estimated accurately by solving the recurrence relation: $C(n) = n - 1 + \frac{1}{n} \sum_{i=0}^{n-1} (C(i)+C(n-i-1))$ Solving the recurrence gives $C(n) = 2n \ln n = 1.39n \log_2 n.$ This means that, on average, quicksort performs only about 39% worse than in its best case. In this sense it is closer to the best case than the worst case. Also note that a comparison sort cannot use less than $\log_2(n!)$ comparisons on average to sort $n$ items (as explained in the article Comparison sort) and in case of large $n$, Stirling's approximation yields $\log_2(n!) \approx n (\log_2 n - \log_2 e)$, so quicksort is not much worse than an ideal comparison sort. This fast average runtime is another reason for quicksort's practical dominance over other sorting algorithms. 
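The recurrence for C(n) is easy to evaluate exactly by computer, which gives a concrete feel for the 1.39 n log₂ n figure. The short script below is an added illustration, not from the article; it uses the equivalent form C(n) = n − 1 + (2/n) Σ_{i<n} C(i), and the closed form quoted in the comment is the standard one for this recurrence.

```python
import math

def average_comparisons(n_max):
    """C(n) = n - 1 + (1/n) * sum_{i=0}^{n-1} (C(i) + C(n-i-1))
            = n - 1 + (2/n) * sum_{i=0}^{n-1} C(i),   with C(0) = 0."""
    c = [0.0] * (n_max + 1)
    prefix = 0.0                                  # running sum of C(0) .. C(n-1)
    for n in range(1, n_max + 1):
        c[n] = n - 1 + 2.0 * prefix / n
        prefix += c[n]
    return c

c = average_comparisons(10**5)
# Closed form: C(n) = 2(n+1)H_n - 4n, whose leading term is 2n ln n ~ 1.39 n log2(n).
for n in (3, 10, 10**3, 10**5):
    print(f"n={n:>7}  C(n)={c[n]:>14.2f}  C(n)/(n log2 n)={c[n] / (n * math.log2(n)):.3f}")
# C(3) = 8/3; the last column creeps up slowly toward 2 ln 2 ~ 1.386,
# because the -4n lower-order term is still visible at these sizes.
```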
### Analysis of Randomized quicksort Using the same analysis, one can show that Randomized quicksort has the desirable property that, for any input, it requires only O(n log n) expected time (averaged over all choices of pivots). However, there exists a combinatorial proof, more elegant than both the analysis using discrete probability and the analysis using recurrences. To each execution of Quicksort corresponds the following binary search tree (BST): the initial pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot of the right half is the root of the right subtree, and so on. The number of comparisons of the execution of Quicksort equals the number of comparisons during the construction of the BST by a sequence of insertions. So, the average number of comparisons for randomized Quicksort equals the average cost of constructing a BST when the values inserted $(x_1,x_2,...,x_n)$ form a random permutation. Consider a BST created by insertion of a sequence $(x_1,x_2,...,x_n)$ of values forming a random permutation. Let C denote the cost of creation of the BST. We have: $C=\sum_i \sum_{j<i}$ (whether during the insertion of $x_i$ there was a comparison to $x_j$). By linearity of expectation, the expected value E(C) of C is $E(C)= \sum_i \sum_{j<i}$ Pr(during the insertion of $x_i$ there was a comparison to $x_j$). Fix i and j<i. The values ${x_1,x_2,...,x_j}$, once sorted, define j+1 intervals. The core structural observation is that $x_i$ is compared to $x_j$ in the algorithm if and only if $x_i$ falls inside one of the two intervals adjacent to $x_j$. Observe that since $(x_1,x_2,...,x_n)$ is a random permutation, $(x_1,x_2,...,x_j,x_i)$ is also a random permutation, so the probability that $x_i$ is adjacent to $x_j$ is exactly $2/(j+1)$. We end with a short calculation: $E(C)=\sum_i \sum_{j<i} 2/(j+1)= O(\sum_i \log i)=O(n \log n).$ ### Space complexity The space used by quicksort depends on the version used. The in-place version of quicksort has a space complexity of O(log n), even in the worst case, when it is carefully implemented using the following strategies: • in-place partitioning is used. This unstable partition requires O(1) space. • After partitioning, the partition with the fewest elements is (recursively) sorted first, requiring at most O(log n) space. Then the other partition is sorted using tail recursion or iteration, which doesn't add to the call stack. This idea, as discussed above, was described by R. Sedgewick, and keeps the stack depth bounded by O(log n).[4][5] Quicksort with in-place and unstable partitioning uses only constant additional space before making any recursive call. Quicksort must store a constant amount of information for each nested recursive call. Since the best case makes at most O(log n) nested recursive calls, it uses O(log n) space. However, without Sedgewick's trick to limit the recursive calls, in the worst case quicksort could make O(n) nested recursive calls and need O(n) auxiliary space. From a bit complexity viewpoint, variables such as left and right do not use constant space; it takes O(log n) bits to index into a list of n items. Because there are such variables in every stack frame, quicksort using Sedgewick's trick requires $O((\log n)^2)$ bits of space. This space requirement isn't too terrible, though, since if the list contained distinct elements, it would need at least O(n log n) bits of space. 
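Putting these pieces together, the following Python sketch (an added illustration, not taken from the article; the names and the cutoff value are arbitrary) combines the in-place partition, a median-of-three pivot with the overflow-safe middle index, Sedgewick's recurse-into-the-smaller-side rule to keep the stack depth O(log n), and an insertion-sort cutoff applied per small segment rather than in one final pass.

```python
import random

SMALL = 16   # cutoff below which insertion sort takes over (threshold t, tuned experimentally)

def insertion_sort(a, left, right):
    for i in range(left + 1, right + 1):
        key, j = a[i], i - 1
        while j >= left and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def median_of_three(a, left, right):
    mid = left + (right - left) // 2           # overflow-safe form of the middle index
    # index of the median of a[left], a[mid], a[right]
    return sorted([(a[left], left), (a[mid], mid), (a[right], right)])[1][1]

def partition(a, left, right, pivot_index):
    pivot = a[pivot_index]
    a[pivot_index], a[right] = a[right], a[pivot_index]
    store = left
    for i in range(left, right):
        if a[i] <= pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[right] = a[right], a[store]
    return store

def quicksort(a, left=0, right=None):
    if right is None:
        right = len(a) - 1
    while right - left + 1 > SMALL:
        p = partition(a, left, right, median_of_three(a, left, right))
        # Recurse into the smaller side, then loop (tail-call style) on the larger,
        # so the recursion depth stays O(log n) even for bad pivot sequences.
        if p - left < right - p:
            quicksort(a, left, p - 1)
            left = p + 1
        else:
            quicksort(a, p + 1, right)
            right = p - 1
    insertion_sort(a, left, right)             # each small segment sorted as it appears

data = [random.randint(0, 999) for _ in range(10_000)]
quicksort(data)
assert data == sorted(data)
```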
Another, less common, not-in-place, version of quicksort uses O(n) space for working storage and can implement a stable sort. The working storage allows the input array to be easily partitioned in a stable manner and then copied back to the input array for successive recursive calls. Sedgewick's optimization is still appropriate.

## Selection-based pivoting

A selection algorithm chooses the kth smallest of a list of numbers; this is an easier problem in general than sorting. One simple but effective selection algorithm works nearly in the same manner as quicksort, except instead of making recursive calls on both sublists, it only makes a single tail-recursive call on the sublist which contains the desired element. This small change lowers the average complexity to linear or O(n) time, and makes it an in-place algorithm. A variation on this algorithm brings the worst-case time down to O(n) (see selection algorithm for more information). Conversely, once we know a worst-case O(n) selection algorithm is available, we can use it to find the ideal pivot (the median) at every step of quicksort, producing a variant with worst-case O(n log n) running time. In practical implementations, however, this variant is considerably slower on average.

## Variants

There are four well-known variants of quicksort:

• Balanced quicksort: choose a pivot likely to represent the middle of the values to be sorted, and then follow the regular quicksort algorithm.

• External quicksort: The same as regular quicksort except the pivot is replaced by a buffer. First, read the M/2 first and last elements into the buffer and sort them. Read the next element from the beginning or end to balance writing. If the next element is less than the least of the buffer, write it to available space at the beginning. If greater than the greatest, write it to the end. Otherwise write the greatest or least of the buffer, and put the next element in the buffer. Keep the maximum lower and minimum upper keys written to avoid resorting middle elements that are in order. When done, write the buffer. Recursively sort the smaller partition, and loop to sort the remaining partition. This is a kind of three-way quicksort in which the middle partition (buffer) represents a sorted subarray of elements that are approximately equal to the pivot.

• Three-way radix quicksort (developed by Sedgewick and also known as multikey quicksort): is a combination of radix sort and quicksort. Pick an element from the array (the pivot) and consider the first character (key) of the string (multikey). Partition the remaining elements into three sets: those whose corresponding character is less than, equal to, and greater than the pivot's character. Recursively sort the "less than" and "greater than" partitions on the same character. Recursively sort the "equal to" partition by the next character (key). Given we sort using bytes or words of length W bits, the best case is O(KN) and the worst case O(2^K N) or at least O(N^2) as for standard quicksort, given for unique keys N < 2^K, and K is a hidden constant in all standard comparison sort algorithms including quicksort. This is a kind of three-way quicksort in which the middle partition represents a (trivially) sorted subarray of elements that are exactly equal to the pivot.

• Quick radix sort (also developed by Powers as an o(K) parallel PRAM algorithm). This is again a combination of radix sort and quicksort but the quicksort left/right partition decision is made on successive bits of the key, and is thus O(KN) for N K-bit keys. Note that all comparison sort algorithms effectively assume an ideal K of O(log N): if K is smaller we can sort in O(N) using a hash table or integer sorting; and if K >> log N but elements are unique within O(log N) bits, the remaining bits will not be looked at by either quicksort or quick radix sort; and otherwise all comparison sorting algorithms will also have the same overhead of looking through O(K) relatively useless bits, but quick radix sort will avoid the worst case O(N^2) behaviours of standard quicksort and quick radix sort, and will be faster even in the best case of those comparison algorithms under these conditions of uniqueprefix(K) >> log N. See Powers [12] for further discussion of the hidden overheads in comparison, radix and parallel sorting.

## Comparison with other sorting algorithms

Quicksort is a space-optimized version of the binary tree sort. Instead of inserting items sequentially into an explicit tree, quicksort organizes them concurrently into a tree that is implied by the recursive calls. The algorithms make exactly the same comparisons, but in a different order.

An often desirable property of a sorting algorithm is stability - that is, the order of elements that compare equal is not changed, allowing controlling order of multikey tables (e.g. directory or folder listings) in a natural way. This property is hard to maintain for in situ (or in place) quicksort (that uses only constant additional space for pointers and buffers, and log N additional space for the management of explicit or implicit recursion). For variant quicksorts involving extra memory due to representations using pointers (e.g. lists or trees) or files (effectively lists), it is trivial to maintain stability. The more complex, or disk-bound, data structures tend to increase time cost, in general making increasing use of virtual memory or disk.

The most direct competitor of quicksort is heapsort. Heapsort's worst-case running time is always O(n log n). But heapsort is assumed to be on average somewhat slower than standard in-place quicksort. This is still debated and in research, with some publications indicating the opposite.[13][14] Introsort is a variant of quicksort that switches to heapsort when a bad case is detected to avoid quicksort's worst-case running time. If it is known in advance that heapsort is going to be necessary, using it directly will be faster than waiting for introsort to switch to it.

Quicksort also competes with mergesort, another recursive sort algorithm but with the benefit of worst-case O(n log n) running time. Mergesort is a stable sort, unlike standard in-place quicksort and heapsort, and can be easily adapted to operate on linked lists and very large lists stored on slow-to-access media such as disk storage or network attached storage. Like mergesort, quicksort can be implemented as an in-place stable sort,[15] but this is seldom done. Although quicksort can be written to operate on linked lists, it will often suffer from poor pivot choices without random access. The main disadvantage of mergesort is that, when operating on arrays, efficient implementations require O(n) auxiliary space, whereas the variant of quicksort with in-place partitioning and tail recursion uses only O(log n) space.
Bucket sort with two buckets is very similar to quicksort; the pivot in this case is effectively the value in the middle of the value range, which does well on average for uniformly distributed inputs.

## Notes

1. Steven S. Skiena (27 April 2011). The Algorithm Design Manual. Springer. p. 129. ISBN 978-1-84800-069-8. Retrieved 27 November 2012.
3. Shustek, L. (2009). "Interview: An interview with C.A.R. Hoare". 52 (3): 38–41. doi:10.1145/1467247.1467261.
4. Sedgewick, Robert (1 September 1998). Algorithms In C: Fundamentals, Data Structures, Sorting, Searching, Parts 1–4 (3rd ed.). Pearson Education. ISBN 978-81-317-1291-7. Retrieved 27 November 2012.
5. Sedgewick, R. (1978). "Implementing Quicksort programs". 21 (10): 847–857. doi:10.1145/359619.359631.
6. qsort.c in GNU libc.
7. Miller, Russ; Boxer, Laurence (2000). Algorithms Sequential & Parallel: A Unified Approach. Prentice Hall. ISBN 978-0-13-086373-7. Retrieved 27 November 2012.
8. David M. W. Powers. "Parallelized Quicksort and Radixsort with Optimal Speedup". Proceedings of the International Conference on Parallel Computing Technologies, Novosibirsk, 1991.
9. McIlroy, M. D. (1999). "A killer adversary for quicksort". Software: Practice and Experience 29 (4): 341–344. doi:10.1002/(SICI)1097-024X(19990410)29:4<341::AID-SPE237>3.3.CO;2-I.
10. Hsieh, Paul (2004). "Sorting revisited". www.azillionmonkeys.com. Retrieved 26 April 2010.
11. MacKay, David (1 December 2005). "Heapsort, Quicksort, and Entropy". users.aims.ac.za/~mackay. Retrieved 26 April 2010.

## References

• Sedgewick, R. (1978). "Implementing Quicksort programs". 21 (10): 847–857. doi:10.1145/359619.359631.
• Dean, B. C. (2006). "A simple expected running time analysis for randomized 'divide and conquer' algorithms". Discrete Applied Mathematics 154: 1–5. doi:10.1016/j.dam.2005.07.005.
• Hoare, C. A. R. (1961). "Algorithm 63: Partition". 4 (7): 321. doi:10.1145/366622.366642.
• Hoare, C. A. R. (1961). "Algorithm 64: Quicksort". 4 (7): 321. doi:10.1145/366622.366644.
• Hoare, C. A. R. (1961). "Algorithm 65: Find". 4 (7): 321–322. doi:10.1145/366622.366647.
• Hoare, C. A. R. (1962). "Quicksort". 5 (1): 10–16. doi:10.1093/comjnl/5.1.10. (Reprinted in Hoare and Jones: Essays in Computing Science, 1989.)
• Musser, D. R. (1997). "Introspective Sorting and Selection Algorithms". Software: Practice and Experience 27 (8): 983–993. doi:10.1002/(SICI)1097-024X(199708)27:8<983::AID-SPE117>3.0.CO;2-#.
• Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Pages 113–122 of section 5.2.2: Sorting by Exchanging.
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 7: Quicksort, pp. 145–164.
• A. LaMarca and R. E. Ladner. "The Influence of Caches on the Performance of Sorting". Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 1997. pp. 370–379.
• Faron Moller. Analysis of Quicksort. CS 332: Designing Algorithms. Department of Computer Science, Swansea University.
• Martínez, C.; Roura, S. (2001). "Optimal Sampling Strategies in Quicksort and Quickselect". 31 (3): 683–705. doi:10.1137/S0097539700382108.
• Bentley, J. L.; McIlroy, M. D. (1993).
"Engineering a sort function". Software: Practice and Experience 23 (11): 1249–1265. doi:10.1002/spe.4380231105.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 54, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8796213269233704, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=291793
Physics Forums

## Triangle inequality with countably infinite terms

In lecture in my real analysis course the other day we were proving that absolute convergence of a series implies convergence. Our professor started off by showing us the wrong way to prove it:

$$\left| \sum_{k=1}^\infty a_k \right| \leq \sum_{k=1}^\infty \left| a_k \right| < \epsilon$$

Then he demonstrated the correct proof, by showing that the sequence of partial sums is a Cauchy sequence and then using the triangle inequality. But this got me thinking: why is the first proof wrong? I definitely agree that the second proof is more solid, but if the triangle inequality is proved by induction, meaning it's true for all natural numbers, isn't that, well, infinite? I was wondering if someone could supply a counterargument or proof by contradiction illustrating why this conclusion is incorrect. Thanks as always.

It's true for any natural number. Infinity isn't a natural number. That about sums it up.

Obviously the conclusion that the triangle inequality holds for an infinite series holds, since you proved it did. But that doesn't mean the logic that you used to make that conclusion is necessarily correct. Here's an example that might elucidate the problem: every sequence has an element of greatest magnitude. This is obviously true for finite sequences $$a_1, a_2, \ldots, a_n$$ and is $$\max_{i=1,\ldots,n} |a_i|.$$ Hence, by your logic, if we have an infinite sequence $$a_1, a_2, \ldots$$ then $$\max_{i=1,2,\ldots} |a_i|$$ exists and is in the sequence. Obvious counterexample: $$a_i = 1-\frac{1}{i}$$
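For reference, here is a sketch of the partial-sums argument the professor's correct proof refers to (reconstructed from the standard textbook argument, not quoted from the course). The point is that the triangle inequality is only ever applied to finitely many terms. Suppose $$\sum_{k=1}^\infty |a_k|$$ converges and let $$\epsilon>0$$. By the Cauchy criterion there is an $$N$$ such that for all $$m>n\geq N$$, $$\sum_{k=n+1}^m |a_k| < \epsilon.$$ The finite triangle inequality then gives $$\left|\sum_{k=n+1}^m a_k\right| \leq \sum_{k=n+1}^m |a_k| < \epsilon,$$ so the partial sums $$s_m=\sum_{k=1}^m a_k$$ form a Cauchy sequence and hence converge. Only once both sides are known to converge does letting $$m\to\infty$$ in $$\left|\sum_{k=1}^m a_k\right| \leq \sum_{k=1}^m |a_k|$$ legitimately yield the inequality for the infinite sums.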
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334055781364441, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/20929/distinguishing-congruence-subgroups-of-the-modular-group/20931
## Distinguishing congruence subgroups of the modular group

This question is something of a follow-up to http://mathoverflow.net/questions/19400/ .

How does one recognise whether a subgroup of the modular group $\Gamma=\mathrm{SL}_2(\mathbb{Z})$ is a congruence subgroup? Now that's too broad a question for me to expect a simple answer, so here's a more specific question. The subgroup $\Gamma_1(4)$ of the modular group is free of rank $2$ and freely generated by $A=\left( \begin{array}{cc} 1&1\\ 0&1 \end{array}\right)$ and $B=\left( \begin{array}{cc} 1&0\\ 4&1 \end{array}\right)$. If $\zeta$ and $\eta$ are roots of unity there is a homomorphism $\phi$ from $\Gamma_1(4)$ to the unit circle group sending $A$ and $B$ to $\zeta$ and $\eta$ respectively. Then the kernel $K$ of $\phi$ has finite index in $\Gamma_1(4)$. How do we determine whether $K$ is a congruence subgroup, and if so what its level is? In this example, the answer is yes when $\zeta^4=\eta^4=1$. There are also examples involving cube roots of unity, and involving eighth roots of unity, where the answer is yes.

I am interested in this example since one can construct a "modular function" $f$, holomorphic on the upper half-plane and meromorphic at cusps, such that $f(Az)=\phi(A)f(z)$ for all $A\in\Gamma_1(4)$. One can take $f=\theta_2^a\theta_3^b\theta_4^c$ for appropriate rationals $a$, $b$ and $c$.

Finally, a vaguer general question. Given a subgroup $H$ of $\Gamma$ specified as the kernel of a homomorphism from $\Gamma$ or $\Gamma_1(4)$ (or something similar) to a reasonably tractable target group, how does one determine whether $H$ is a congruence subgroup?

## 2 Answers

There is one answer in the following paper, along with a nice bibliography of other techniques: MR1343700 (96k:20100) Hsu, Tim (1-PRIN). Identifying congruence subgroups of the modular group. Proc. Amer. Math. Soc. 124 (1996), no. 5, 1351–1359.

- Thanks Andy, that's exactly the sort of thing I was looking for. – Robin Chapman Apr 10 2010 at 16:32

Although Andy has the correct answer, I thought I would point out that there will only be finitely many groups of the type you consider that are congruence subgroups. Since these are abelian (cyclic) covers of your surface, they will contain the universal abelian cover (corresponding to the commutator subgroup of $\Gamma_1(4)$). By "strong approximation", this group will map onto all but finitely many congruence quotients of $SL_2(\mathbb{Z})$, and therefore so will any group containing it. Thus, only finitely many of your groups will be congruence. Another way to see this is to notice that abelian covers have Cheeger constant approaching zero, and therefore first eigenvalue of the Laplacian $\lambda_1$ approaching zero by Buser's inequality. By Selberg's estimate, $\lambda_1\geq 3/16$ for a congruence subgroup. In principle, one could probably get explicit estimates on the index of a congruence abelian cover this way.

- 2 Thanks Ian, that's very interesting; it's a shame I can only tick one response.
– Robin Chapman Apr 11 2010 at 8:16 I don't know a good reference for strong approximation, but in this context there's a very elementary argument one may make - see: ams.org/mathscinet-getitem?mr=1459136 Their argument shows that the group maps onto all but finitely many $SL_2(Z/p)$. This then implies that it maps onto all but finitely many $SL_2(Z/p^k)$, and therefore onto a finite-index subgroup of the pro-congruence completion (I don't know if this is standard terminology). – Agol Apr 11 2010 at 16:15
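A small aside on the finite-index claim in the question, spelled out since both answers use it: if $\zeta$ is a primitive $m$-th root of unity and $\eta$ a primitive $n$-th root of unity, then the image of $\phi$ is the group of $\mathrm{lcm}(m,n)$-th roots of unity, so $[\Gamma_1(4):K]=\mathrm{lcm}(m,n)$; and since the target is abelian, $K$ always contains the commutator subgroup of $\Gamma_1(4)$, which is why the argument above can proceed via abelian covers. For instance, $\zeta=\eta=i$ gives a kernel of index $4$.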
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9253712296485901, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2007/03/10/structures-related-to-groups/?like=1&source=post_flair&_wpnonce=966197e62c
# The Unapologetic Mathematician

## Structures related to groups

We've said a lot about groups, but there are a number of related structures. I've delayed these simpler structures because most of the easiest examples to give are actually groups anyway.

One of the most important is a monoid. A monoid is like a group, but without the requirement that each element have an inverse. Any group is thus a monoid — just forget the fact that there actually are inverses. Like groups, monoids may or may not be commutative. Between monoids we have monoid homomorphisms preserving the composition and identity element.

Many of the constructions for groups work for monoids too. For example, given a set $X$ we can form the free monoid $F(X)$. This is just like the free group, but without adding inverses for the generators. Also similar to the free group, any function from $X$ to the underlying set of a monoid $M$ extends to a unique monoid homomorphism from $F(X)$ to $M$. There are also direct and free products of monoids, which can be defined with the same sorts of universal properties as their analogues for groups.

Another construction that's sort of interesting is the free group on a monoid. Given a monoid $M$, take another copy of it and call it $M'$. The element of $M'$ corresponding to an element $m$ of $M$ is $m'$. Now make a group by putting $M$ together with $M'$, identifying the identity elements $e$ and $e'$, and making $m'$ the inverse of $m$. This group $F(M)$ has the property that any monoid homomorphism from $M$ to the underlying monoid of a group $G$ extends to a unique group homomorphism from $F(M)$ to $G$.

The other related structure to mention is a semigroup. This doesn't even have an identity element — just an associative composition. Again, there are semigroup homomorphisms, products and free products of semigroups, and so on. You should be able to give proper definitions of all these by analogy with monoids and groups. I don't find semigroups to be as interesting as groups, but it's a nice, concise term to have ready when it does come up.

One more structure that's often mentioned in this context is a groupoid. I actually don't want to go into groupoids now because there's a much more natural way to describe them. To those of you who like them, rest assured I'll get there eventually.

Posted by John Armstrong | Algebra, Group theory

## 6 Comments »

1. I know this is probably irrational, but I hate the term "monoid". It bothers me because it doesn't sound at all related to groups, while "semigroup" does. It also breaks the analogy with rings. We don't need special words for "ring without 1" and "ring with 1", because the theories aren't so different that you can't just specify which you mean. The theories of semigroups and monoids just aren't that different. Comment by | March 15, 2007 | Reply

2. Walt, you make a good point. In fact, (switching to high gear) when you consider what the left adjoint to the forgetful functor from monoid objects in a category to semigroup objects is, it's pretty silly. However, the word is there and we're sort of stuck with it. We could no more change from "field" to "body" (like the rest of the world does) than start saying "semigroup with identity". On the other hand, outside of rings I'm hard pressed to come up with a natural semigroup that isn't a monoid already. Monoids do seem to be more prevalent in the wild. Comment by | March 16, 2007 | Reply
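To make the free monoid's universal property concrete, here is a small sketch (an added illustration, not from the post or its comments), with Python tuples standing in for $F(X)$: a word in the generators is a tuple, composition is concatenation, the identity is the empty tuple, and "extending a function $X \to M$ to a homomorphism $F(X) \to M$" is just a fold.

```python
# Illustration: the free monoid F(X) on a set X, modelled as Python tuples under
# concatenation, with the empty tuple as identity.  Any map f from X into a
# monoid (M, op, e) extends uniquely to a monoid homomorphism F(X) -> M.
from functools import reduce

def extend(f, op, e):
    """Return the unique monoid homomorphism F(X) -> M extending f: X -> M."""
    def hom(word):               # word is a tuple of generators, i.e. an element of F(X)
        return reduce(op, (f(x) for x in word), e)
    return hom

# Example: X = {'a', 'b'}, M = (integers, +, 0), f sends 'a' -> 1 and 'b' -> 10.
f = {'a': 1, 'b': 10}.get
hom = extend(f, lambda m, n: m + n, 0)

print(hom(()))                   # 0   (identity goes to identity)
print(hom(('a', 'b', 'a')))      # 12  (concatenation goes to addition)
print(hom(('a',)) + hom(('b', 'a')) == hom(('a', 'b', 'a')))  # True: it is a homomorphism
```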
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9311167001724243, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/259212/postnikov-invariants-cohomology-of-em-spaces
Postnikov invariants & Cohomology of EM spaces I'm in trouble in understanding these two statements in Morita's Geometry of Characteristic Classes book: 1. First: what's up with the "twisted" product $K(\pi_2(X),2)\times_{k^4} K(\pi_3(X),3)\cdots\times \cdots$? How is it defined? 2. Second: Is $\mathbb Q[\iota]$ the polynomial ring in the "variable"/cohomology class $\iota$? And what about $E_\mathbb Q(\iota)$: in which sense it is "the exterior algebra" over $\mathbb Q$? I strongly suspect that the problem is "mine" in the sense that I'm not really into these topics. Any kind of reference is appreciated! - 2 For (2): Yes $\mathbb{Q}[i]$ is the polynomial algebra generated by that class. And an exterior algebra makes sense over any field (with a bit of care at the prime 2). In this case it means something really easy: i^2 = 0, and that's it. – Dylan Wilson Dec 15 '12 at 23:17 2 For (1): All this means is that given a $k$-invariant $k^{n+1}:X_{(n-1)} \to K(\pi_nX,n+1)$, you recover $X_{(n)}$ as the pullback of the path-loop fibration. This has fiber $K(\pi_nX,n)$, so it's a "twisted product" of $X_{(n)}$ with $K(\pi_nX,n)$ (i.e., a fibration). – Aaron Mazel-Gee Dec 16 '12 at 8:11 I think it's slightly better to think of the polynomial algebra and the exterior algebra as both being cases of a free dg commutative algebra on an even and odd degree generator, respectively. Then, the EM spaces give you the same kinds of algebras. – Justin Young Dec 17 '12 at 7:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408825635910034, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/20568/is-there-a-ground-between-set-theory-and-group-theory-algebra/20576
## Is there a ground between Set Theory and Group Theory/Algebra? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) It is well known that there are strong links between Set Theory and Topology/Real Analysis. For instance, the study of Suslin's Problem turns out to be a set theoretic problem, even though it started in topology: namely, whether $\mathbb{R}$ is the only complete dense unbounded linearly ordered set that satisfies the c.c.c. Another instance is when we see that what's behind extending Lebesgue Measure is really the theory of large cardinals, with the introduction of measurable cardinals. Also another example of a real analysis problem that ends up in Set Theory is whether every set of reals is measurable. So the links are clear between Set Theory and Topology/Real analysis. My question is this: are there links, as strong as the ones I roughly described in the last paragraph, between Set Theory and Abstract Algebra? The only example I know of is the Set Theoretic solution to the famous Whitehead Problem by Shelah (namely that if $V=L$ then every Whitehead group is free and if MA+$\neg$CH then there is a Whitehead group which is not free). Can we hope to discover more of these type of links between Set Theory and Abstract Algebra? In contrast, Model Theory seems to be strongly grounded in Abstract algebra. I have seen that Shelah has some papers about uncountable free Abelian groups but he seems to be the only one investigating some areas of Abstract Algebra with the help of Set Theory. So again is there hope for links? - ## 8 Answers Descriptive set theory also has something to say about algebra ... For example, the Higman-Neumann-Neumann Embedding Theorem states that any countable group G can be embedded into a 2-generator group K. In the standard proof of this classical theorem, the construction of the group K involves an enumeration of a set of generators of the group G; and it is clear that the isomorphism type of K usually depends upon both the generating set and the particular enumeration that is used. So it is natural to ask whether there is a more uniform construction with the property that the isomorphism type of K only depends upon the isomorphism type of G. As if ... Assume the existence of a Ramsey cardinal and suppose that G |----> F(G) is a Borel map from the space of countable groups to the space of finitely generated groups such that G embeds into F(G). Then there exists an uncountable set of pairwise isomorphic groups G such that the f.g. groups F(G) are pairwise incomparable with respect to relative constructibility; ie while G, H are isomorphic, F(G) doesn't even lie in the "set-theoretic universe generated by F(H)." - Simon, are you really using the full Ramsey cardinal, or is this something like 0-sharp, or less? Is this property known to have a large cardinal lower bound? – Joel David Hamkins Apr 7 2010 at 1:15 I use $\Sigma^{1}_{3}$ absoluteness for notions of forcing which collapse the continuum to a countable set ... so a Ramsey is a reasonable assumption even if it isn't optimal. The result almost certainly doesn't need large cardinals for its consistency. I would guess it is true if you add lots of Cohen reals. However, I prefer to use the existence of a "small" large cardinal as this shows that the result is true in the actual set-theoretic universe. – Simon Thomas Apr 7 2010 at 1:53 ### You can accept an answer to one of your own questions by clicking the check mark next to it. 
This awards 15 reputation points to the person who answered and 2 reputation points to you. A curious example is the linear ordering on braid groups, first discovered by Patrick Dehornoy as a consequence of a large cardinal axiom. Proofs without set theory were discovered later -- and also earlier, but unpublished, by Thurston -- but Dehornoy believes that intuition from set theory was crucial in his discovery of the result. See his book Braids and Self Distributivity, particularly the Introduction, which is available here. - 3 This work is connected with Richard Laver's work on free left-distributive algebras and Laver tables. The large cardinals enter into it in Laver observation that if j:V_lambda to V_lambda witnesses the I3 large cardinal axiom, then the algebra generated by j under composition and application h(k) = union_alpha h(k|V_alpha) is a free left distributive algebra. There were no other natural presentations of this algebra (except for the formal term algebra), and it was by using the large cardinal properties of this context that Laver, Dehornoy and others could gain insight into the free algebra. – Joel David Hamkins Apr 7 2010 at 12:28 3 It is also interesting to follow up Thurston's idea, which is expounded by Short & Wiest, L'Enseignement Mathematique 46(2000) pp. 279--312 (available online). It turns out to follow quite naturally from the idea of Jakob Nielsen (1927) to represent mappings of surfaces by mappings of the "boundary at infinity" of the universal cover, which is a circle (or a line, if you take the half-plane model of the hyperbolic plane ). This is where the linear order comes from. Knowing this, I am even more amazed that there is a route to the same theorem via large cardinals. – John Stillwell Apr 8 2010 at 0:12 Let me also mention some of the work that's been done on the automorphism tower problem in group theory. This topic was the mentioned in this early classic MO question (from before my time here). The basic situation is that one starts with a group G, and then iteratively computes the automorphism group Aut(G), the automorphism group of THAT group, and so on. The automorphism tower can be continued transfinitely, by taking a direct limit of the natural system of homomorphisms mapping a group element to the corresponding inner automorphism. When the group G is centerless, then every group in the tower is centerless, and the groups can be viewed as growing. The main question is: does the tower ever stop growing? Does one reach a fixed point? Simon Thomas proved that the answer is yes for centerless groups and I proved yes for all groups, by showing that every group leads eventually to a centerless group. But the connections with set theory become very interesting. Simon Thomas and I proved that there can be a group G, whose automorphism tower depends highly on the set-theoretic background, in the sense that there are forcing extensions of the universe in which the very same group G has towers of different height. We can make the tower taller or shorter, as desired. The point is that even if one has the same group G, then the the automorphism group Aut(G) already depends on the set-theoretic background, since one can sometimes add new generic automorphisms by forcing. For example, it is sometimes possible for a complete group (centerless + no outer automorphisms) to gain new outer automorphisms in a forcing extension. This would be an example of a tower increasing from height 0 to height at least 1 (and it might still grow much taller!). 
This phenomenon has now been extended by Gunter Fuchs and Philipp Lücke, who showed that almost any successive up-down pattern is achievable in subsequent forcing extensions, by iterated forcing. The general conclusion is that the automorphism tower of a group, even a finite group, exhibits a fundamentally set-theoretic nature, akin to iteratively computing the power set. Simon Thomas, who I see has posted an answer to this question, is currently writing a book on the automorphism tower problem, and it is excellent, from the preliminary versions I've seen. - Thank you, that is exactly what I was looking for. I did not know of these deep connections. This sounds fascinating. – alephomega Apr 7 2010 at 1:17 1 Is MO really old enough to have classical and modern eras? – Qiaochu Yuan Apr 7 2010 at 1:49 2 Qiaochu, of course, I made a joke. But anyway that question was the reason I first came to MO, because a colleague (Kevin O'Bryant) had noticed it and dropped by my office telling me about it. – Joel David Hamkins Apr 7 2010 at 2:05 The following older MO question is pretty relevant: http://mathoverflow.net/questions/1924/what-are-some-reasonable-sounding-statements-that-are-independent-of-zfc In particular, the highest voted answer (by Daniel Erman) is truly mindboggling: Here's an example from commutative algebra. The projective dimension of a module M is defined as the minimal length of a projective resolution of M. Let S be the ring ℂ[x,y,z] and M be the module ℂ(x,y,z). Then the projective dimension of M is undecidable in ZFC. More specifically, the projective dimension of M is 2 if the continuum hypothesis holds, and it is 3 if the continuum hypothesis fails. - It turns out that when it comes to infinite groups/modules, some algebraic concepts are deeply connected to the underlying set theory (for example, the notion of freeness, the structure of Ext, etc). A good reference for this subject is the book "Almost free modules" by Eklof and Mekler. This book introduces the works of Shelah, Gobel, Eklof and many other important contributors in this field. This research has also led to some interesting developments in "pure" set theory, such as the introduction of black-boxes by Shelah (some diamond-like combinatorial principles which can be proved in ZFC alone, and allow the construction of many interesting algebraic objects). - Shelah is definitely not alone! Here are few set theorists who have done substantial algebraic work. - This is true, I am sorry I forgot them, they are definitely working in this area. Do you know if there are others one working in this area? – alephomega Apr 7 2010 at 0:35 The above list is far from complete! (Apologies to the many that I didn't mention.) – François G. Dorais♦ Apr 7 2010 at 0:39 This is more of a question than a correction, but would one really class Vladimir Pestov as a set theorist? (I was under the impression he was a functional analyst, although I've never asked him about this.) – Yemon Choi Apr 7 2010 at 0:41 Also, are there other results in Group Theory which are independent or which have a purely set theoretic setting? By the way I am actually looking for problems that have exclusively a set theoretic setting (if they exist) and not just examples of "applied set theory" to algebra, like Shelah says. – alephomega Apr 7 2010 at 0:43 3 It has been known for set theorists to show that published "theorems" in group theory are actually independent of ZFC ... 
– Simon Thomas Apr 7 2010 at 0:58 show 3 more comments The subject of Borel Equivalence relation theory involves deep connections between set theory, particularly descriptive set theory, and classification problems in algebra. The principal theme of the subject is to investigate the complexity of various naturally-occuring equivalence relations, such as the isomorphism relation on finitely generated groups, which arise in other areas of mathematics. It turns out that many of these relations can be viewed as Borel relations on a standard Borel space, and they fit into a hierarchy under the concept of Borel reducibility, introduced by Harvey Friedman. I explained a little about the subject in this MO answer. Much of the best work in this subject is characterized by deep connections between set theory and algebra. - You may want to determine which discipline of set theory to link to another discipline. Descriptive Set Theory came (roughly) as a result of foundational issues arising from looking at certain arguments in Topology and Real Analysis, and at some point later ties to Model Theory, Proof Theory, and Recursion Theory were also investigated. If you look at results in Universal Algebra, you will find many links to various foundational disciplines. This is probably the easiest source to find the kinds of links you mention. For example, as a weak parallel to Shelah's Classification Theory, one finds looking at varieties of algebras and considering their spectra, and classifying those which have many models in algebraic terms to those which have few models. People such as Baldwin, Jeong, Jezek, Kearnes, McKenzie, Valeriote, and Wood do work on decidability, the lattice of interpretability types, spectra, and other questions in the context of varieties. There is also algebraic logic, cylindric algebras, and algebras being used to study certain aspects of set theory and logic. You may want to peruse some of that material and then revisit the question of what links you would still like to see.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9502240419387817, "perplexity_flag": "head"}
http://mathoverflow.net/questions/78345/radical-of-f-psl2-p/78393
Radical of $F_p[SL(2,p)]$

Let $G=SL(2,p)$. Does anyone know what the radical of the group algebra $F_p[G]$ is? Does there exist any book/paper where it is calculated? By the radical here I mean the maximal ideal $I$ of $F_p[G]$ such that $I^n=0$ for some $n$.

- Have you looked in Karpilovsky's book on the Jacobson Radical? – Steve D Oct 17 2011 at 15:55
- No, I will try to look in it. Thanks. – Klim Efremenko Oct 17 2011 at 16:50

1 Answer

The answer depends a lot on what kind of description of the radical you ask for. This family of groups of Lie type has been well-studied from the viewpoint of modular representation theory in the defining characteristic (with reference also to the ambient algebraic groups). Even the somewhat degenerate case $p=2$ fits well enough into the general pattern for odd primes. It's easy to work out explicitly the $p$ irreducible modular representations, for instance using the characteristic 0 model of spaces of homogeneous polynomials in two variables having degree $<p$. Work of Brauer and others filled in the structure of their projective covers in the group algebra; these have very few composition factors. So in this special case you can write down as explicitly as you want all the dimensions involved, including the dimension of the radical (and eventually even its Loewy series). Here are some of the fairly straightforward references, though the story gets far more complicated for groups of higher rank and even for larger finite fields than the prime field:

J.E. Humphreys, Representations of $SL(2, p)$. Amer. Math. Monthly 82 (1975), 21–39.

J.E. Humphreys, Projective modules for $SL(2, q)$. J. Algebra 25 (1973), 513–518.

Henning Haahr Andersen, Jens Jørgensen and Peter Landrock, The projective indecomposable modules of $SL(2, p^n)$. Proc. London Math. Soc. (3) 46 (1983), no. 1, 38–52.

ADDED: The general structure of a group algebra (or other finite dimensional algebra) is studied in the traditional way using idempotents in the classical 1962 book by Curtis and Reiner, Representations of Finite Groups and Associative Algebras. The idempotents generate left ideal summands (principal indecomposable modules) and survive in the semisimple quotient when the radical is factored out; sometimes these can be described explicitly, as in the case of symmetric groups. In your example of a family of groups of Lie type, split over the prime field, the structure of the group algebra over that field extends naturally to an algebraic closure where comparison with algebraic groups is possible. In the case of a group algebra, the key information tends to involve the representation theory of the group over the given field. It may or may not be helpful to look for generators of the radical (as a two-sided ideal), but I don't know of any substantial results for this family of groups. The dimensions and module structure are transparent, however.

- Thanks for your answer. I still do not understand how it looks. Is it possible to write down elements of $F[G]$ that generate the radical? – Klim Efremenko Oct 17 2011 at 23:45
- This is what my first sentence is about. See my added comments. – Jim Humphreys Oct 18 2011 at 12:17
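To make the first step of the answer concrete, here is the standard picture it refers to, stated for reference rather than quoted from the papers above. The $p$ irreducible modules of $SL(2,p)$ in characteristic $p$ are the symmetric powers $L(d)=\mathrm{Sym}^d(V)$ of the natural $2$-dimensional module $V$ (equivalently, the homogeneous polynomials of degree $d$ in two variables) for $0\le d\le p-1$, of dimensions $1,2,\dots,p$; they are absolutely irreducible and defined over $F_p$. Since the semisimple quotient $F_p[G]/J$ therefore has dimension $\sum_{k=1}^{p}k^2$, a routine consequence is
$$\dim_{F_p} J\bigl(F_p[SL(2,p)]\bigr)=p(p^2-1)-\frac{p(p+1)(2p+1)}{6},$$
e.g. $1$ for $p=2$ and $10$ for $p=3$; the finer Loewy structure is what the Brauer-type results and the references above describe.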
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9044486880302429, "perplexity_flag": "head"}
http://nrich.maths.org/4716
# Harmonic Triangle

##### Stage: 3 Challenge Level:

This is the start of the harmonic triangle:

\begin{array}{ccccccccccc} & & & & &\frac{1}{1} & & & & & \\ & & & & \frac{1}{2} & & \frac{1}{2} & & & & \\ & & & \frac{1}{3} & &\frac{1}{6} & & \frac{1}{3} & & & \\ & & \frac{1}{4} & &\frac{1}{12} & & \frac{1}{12} & & \frac{1}{4} & & \\ & \frac{1}{5} & & \frac{1}{20} & & \frac{1}{30} & & \frac{1}{20} & & \frac{1}{5} & \\ \frac{1}{6} & & \frac{1}{30} & & \frac{1}{60} & & \frac{1}{60} & & \frac{1}{30} & & \frac{1}{6}\\ & & & & & \ldots& & & & & \end{array}

Each fraction is equal to the sum of the two fractions below it. Look at the triangle above and check that the rule really does work. Can you work out the next two rows?

The $n$th row starts with the fraction $\frac{1}{n}$. We can continue the first diagonal ($\frac{1}{1}$, $\frac{1}{2}$, $\frac{1}{3}$, $\frac{1}{4}$, and so on) using this rule.

Take a look at the second diagonal: $\frac{1}{2}$, $\frac{1}{6}$, $\frac{1}{12}$, $\frac{1}{20}$, and so on. What do you notice about the numerators and denominators of these fractions? Can you prove the pattern will continue? What about the third and fourth diagonals?
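If you want to check your answer for the next two rows, the stated rule can be turned into a recipe for building row $n$ from row $n-1$: since each entry is the sum of the two entries below it, the entry to the right of a known entry is the entry above-left minus the known entry, and each row starts with $\frac{1}{n}$. The short script below is a computational aid added here, not part of the original problem page.

```python
# Computational aid (not part of the original problem): build rows of the harmonic
# triangle from the rule "each fraction is the sum of the two below it", rearranged
# as  T(n, k+1) = T(n-1, k) - T(n, k)  with  T(n, 1) = 1/n.
from fractions import Fraction

def harmonic_rows(n_rows):
    rows = [[Fraction(1, 1)]]
    for n in range(2, n_rows + 1):
        prev = rows[-1]
        row = [Fraction(1, n)]
        for k in range(1, n):
            row.append(prev[k - 1] - row[k - 1])
        rows.append(row)
    return rows

for row in harmonic_rows(8):
    print('  '.join(str(x) for x in row))
# Rows 7 and 8 of the printout are the "next two rows" asked for above, and the
# second diagonal comes out as 1/2, 1/6, 1/12, 1/20, 1/30, ...
```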
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7274736762046814, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Reflection_(physics)
# Reflection (physics)

Image: The reflection of Mount Hood in Mirror Lake.

Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection the angle at which the wave is incident on the surface equals the angle at which it is reflected. Mirrors exhibit specular reflection.

In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors.

## Reflection of light

Image: Double reflection: the sun is reflected in the water, which is reflected in the paddle.

Reflection of light is either specular (mirror-like) or diffuse (retaining the energy, but losing the image) depending on the nature of the interface. Furthermore, if the interface is between a dielectric and a conductor, the phase of the reflected wave is retained; if instead the interface is between two dielectrics, the phase may be retained or inverted, depending on the indices of refraction.

A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the reflection actually occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass.

Image: Diagram of specular reflection.

In the diagram, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, θi, and the angle of reflection, θr. The law of reflection states that θi = θr, or in other words, the angle of incidence equals the angle of reflection.

In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected, and how much is refracted in a given situation. Total internal reflection of light from a denser medium occurs if the angle of incidence is above the critical angle.

Total internal reflection is used as a means of focusing waves that cannot effectively be reflected by common means. X-ray telescopes are constructed by creating a converging "tunnel" for the waves. As the waves interact at low angle with the surface of this tunnel they are reflected toward the focus point (or toward another interaction with the tunnel surface, eventually being directed to the detector at the focus). A conventional reflector would be useless as the X-rays would simply pass through the intended reflector.

When light reflects off a material denser (with higher refractive index) than the external medium, it undergoes a polarity inversion.
In contrast, a less dense, lower refractive index material will reflect light in phase. This is an important principle in the field of thin-film optics.

Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic.

Image: Refraction of light at the interface between two media.

### Laws of reflection

Image: An example of the law of reflection.

Main article: Specular reflection

If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows:

1. The incident ray, the reflected ray and the normal to the reflecting surface at the point of incidence lie in the same plane.
2. The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes with the same normal.
3. The reflected ray and the incident ray are on opposite sides of the normal.

These three laws can all be derived from the Fresnel equations.

#### Mechanism

In classical electrodynamics, light is treated as an electromagnetic wave governed by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms (or oscillations of electrons, in metals), causing each particle to radiate a small secondary wave in all directions, like a dipole antenna. All these waves add up to give specular reflection and refraction, according to the Huygens–Fresnel principle.

In the case of a dielectric such as glass, the electric field of the light acts on the electrons in the material, and the moving electrons generate fields and become new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light; the backward radiation is what we see reflected from the surface of transparent materials. This radiation comes from everywhere in the glass, but it turns out that the total effect is equivalent to a reflection from the surface.

In metals, electrons with no binding energy are called free electrons, and their number density is very large. When these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is $\pi$, so the forward radiation cancels the incident light within a skin depth, and the backward radiation is just the reflected light.

Light–matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book QED: The Strange Theory of Light and Matter.

### Diffuse reflection

Image: General scattering mechanism which gives diffuse reflection by a solid surface.

Main article: Diffuse reflection

When light strikes the surface of a (non-metallic) material it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g. the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material) and by its surface, if it is rough. Thus, an 'image' is not formed. This is called diffuse reflection.

The exact form of the reflection depends on the structure of the material.
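To make the specular law above concrete, and to contrast it with the diffuse case just described, here is a small added illustration (not text from the article) using the standard vector form of the law of reflection: for an incident direction d and unit surface normal n, the specularly reflected direction is r = d − 2(d·n)n, which keeps d, n and r in one plane and preserves the angle to the normal (laws 1–3 above); a diffusely reflecting surface instead scatters the energy over many outgoing directions.

```python
# Added illustration (standard vector form of the law of reflection, not from the article):
# reflect an incident direction d off a surface with unit normal n via r = d - 2 (d . n) n.
import math

def reflect(d, n):
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

def angle_to_normal(v, n):
    dot = abs(sum(vi * ni for vi, ni in zip(v, n)))
    norm = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot / norm))

d = (1.0, -1.0, 0.0)          # ray travelling down onto a horizontal mirror
n = (0.0, 1.0, 0.0)           # unit normal of the mirror
r = reflect(d, n)

print(r)                                             # (1.0, 1.0, 0.0): the ray goes back up
print(angle_to_normal(d, n), angle_to_normal(r, n))  # both 45.0: angle in = angle out
```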
One common model for diffuse reflection is Lambertian reflectance, in which the light is reflected with equal luminance (in photometry) or radiance (in radiometry) in all directions, as defined by Lambert's cosine law. The light sent to our eyes by most of the objects we see is due to diffuse reflection from their surface, so that this is our primary mechanism of physical observation.[1] ### Retroreflection Working principle of a corner reflector Main article: Retroreflector Some surfaces exhibit retroreflection. The structure of these surfaces is such that light is returned in the direction from which it came. When flying over clouds illuminated by sunlight the region seen around the aircraft's shadow will appear brighter, and a similar effect may be seen from dew on grass. This partial retro-reflection is created by the refractive properties of the curved droplet's surface and reflective properties at the backside of the droplet. Some animals' retinas act as retroreflectors, as this effectively improves the animals' night vision. Since the lenses of their eyes modify reciprocally the paths of the incoming and outgoing light the effect is that the eyes act as a strong retroreflector, sometimes seen at night when walking in wildlands with a flashlight. A simple retroreflector can be made by placing three ordinary mirrors mutually perpendicular to one another (a corner reflector). The image produced is the inverse of one produced by a single mirror. A surface can be made partially retroreflective by depositing a layer of tiny refractive spheres on it or by creating small pyramid like structures. In both cases internal reflection causes the light to be reflected back to where it originated. This is used to make traffic signs and automobile license plates reflect light mostly back in the direction from which it came. In this application perfect retroreflection is not desired, since the light would then be directed back into the headlights of an oncoming car rather than to the driver's eyes. ### Multiple reflections When light reflects off a mirror, one image appears. Two mirrors placed exactly face to face give the appearance of an infinite number of images along a straight line. The multiple images seen between two mirrors that sit at an angle to each other lie over a circle.[2] The center of that circle is located at the imaginary intersection of the mirrors. A square of four mirrors placed face to face give the appearance of an infinite number of images arranged in a plane. The multiple images seen between four mirrors assembling a pyramid, in which each pair of mirrors sits an angle to each other, lie over a sphere. If the base of the pyramid is rectangle shaped, the images spread over a section of a torus.[3] ### Complex conjugate reflection Light bounces exactly back in the direction from which it came due to a nonlinear optical process. In this type of reflection, not only the direction of the light is reversed, but the actual wavefronts are reversed as well. A conjugate reflector can be used to remove aberrations from a beam by reflecting it and then passing the reflection through the aberrating optics a second time. ## Other types of reflection ### Neutron reflection Materials that reflect neutrons, for example beryllium, are used in nuclear reactors and nuclear weapons. In the physical and biological sciences, the reflection of neutrons off of atoms within a material is commonly used to determine the material's internal structure. 
### Sound reflection Sound diffusion panel for high frequencies When a longitudinal sound wave strikes a flat surface, sound is reflected in a coherent manner provided that the dimension of the reflective surface is large compared to the wavelength of the sound. Note that audible sound has a very wide frequency range (from 20 to about 17000 Hz), and thus a very wide range of wavelengths (from about 20 mm to 17 m). As a result, the overall nature of the reflection varies according to the texture and structure of the surface. For example, porous materials will absorb some energy, and rough materials (where rough is relative to the wavelength) tend to reflect in many directions—to scatter the energy, rather than to reflect it coherently. This leads into the field of architectural acoustics, because the nature of these reflections is critical to the auditory feel of a space. In the theory of exterior noise mitigation, reflective surface size mildly detracts from the concept of a noise barrier by reflecting some of the sound into the opposite direction. ### Seismic reflection Seismic waves produced by earthquakes or other sources (such as explosions) may be reflected by layers within the Earth. Study of the deep reflections of waves generated by earthquakes has allowed seismologists to determine the layered structure of the Earth. Shallower reflections are used in reflection seismology to study the Earth's crust generally, and in particular to prospect for petroleum and natural gas deposits. ## See also Wikimedia Commons has media related to: Reflection Wikimedia Commons has media related to: Reflections ## References 1. Mandelstam, L.I. (1926). "Light Scattering by Inhomogeneous Media". Zh. Russ. Fiz-Khim. Ova. 58: 381. 2. M. Iona (1982). "Virtual mirrors". Physics Teacher 20: 278. Bibcode:1982PhTea..20..278G. doi:10.1119/1.2341067. 3. I. Moreno (2010). "Output irradiance of tapered lightpipes". JOSA A 27 (9): 1985. Bibcode:2010JOSAA..27.1985M. doi:10.1364/JOSAA.27.001985.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.910446286201477, "perplexity_flag": "middle"}
http://crntaylor.wordpress.com/2011/07/07/sicp-1-1-exercises/
Chris Taylor Math, coding, finance, geekery # SICP 1.1 – Exercises July 7, 2011 in Coding, Programming, SICP | Tags: coding, exercises, lisp, programming, scheme, sicp The first exercise is merely to evaluate some Lisp expressions manually, and then check them in the interpreter. Easy and not too interesting, but I did say that I’d do every exercise, so I guess I’ll plough ahead. This is good time to talk about the interpreter I’m using. I initially toyed with downloading the MIT Scheme interpreter, which is a minimalist product that’s designed to be used alongside this course. I also thought about downloading Emacs – you can get implementations with a built-in Lisp interpreter. But I’ve been getting along quite happily with a combination of Vim and TextMate (for Mac OS X) for the past few months, and I’m not sure that learning a new text editor is necessary right now. I also wanted to do some more advanced coding in Lisp (what’s the point of learning a new language if I’m not going to use it?) so I wanted something a bit meatier than MIT Scheme. DrRacket I eventually settled on Racket, a bells-and-whistles implementation of Scheme that includes a lot of extra features out of the box – lots of nice data structures, the ability to create GUIs and manipulate graphics, and lots more. It comes with its own interpreter, DrRacket, which is functional and pretty easy to use. A nice touch is that you can load any definitions file into the interpreter to give you a whole load of pre-defined functions – and if the first line of the definitions file is `#lang <language-name>` then you will only use the features of that language. For example, the first line of the definitions file I’m using for this project is `#lang scheme` so I don’t have access to all the funky procedures defined in Racket – just the core Scheme language. On with the exercises. ### Exercise 1.1 What is the result printed by the interpreter in response to each of the following expressions? ```> 10 10 ``` Numerals have a value equal to the number they represent. ```> (+ 5 3 4) 12 ``` This is equivalent to 5 + 3 + 4. Note that the `+` procedure can take multiple arguments. I haven’t learned how to define a function with a variable number of arguments yet, but hopefully I will soon! ```> (- 9 1) 8 ``` This is equivalent to 9 – 1. ```> (/ 6 2) 3 ``` Equivalent to 6 / 2. Note that we get an integer result. ```> (+ (* 2 4) (- 4 6)) 6 ``` Equivalent to (2 * 4) + (4 – 6) = 8 + (-2). Note that there is no need for operator precedence in Scheme, because we always include the parentheses! I wonder if a syntax for Scheme could be defined that allowed you to leave out parentheses when the meaning was clear, and had rules of operator precedence instead. For example, the K programming language that I use at work has simple right-to-left order of evaluation. This can lead to very code and lets you easily write one-line expressions that are very powerful, but has many gotchas: for example, `2*4+5` evaluates to `18` in K, rather than `13`. ```> (define a 3) ``` Whether or not `define` statements return a value is implementation-dependent. My interpreter doesn’t give a return value. However, we have now bound the value 3 to the variable a. ```> (define b (+ a 1)) ``` Again, there is no return value, but we have bound the value of (+ a 1) to the variable b, so that b now has the value 4. ```> (+ a b (* a b)) 19 ``` This is equivalent to a + b + (a * b) where a = 3 and b = 4. 
```
> (if (and (> b a) (< b (* a b))) b a)
4
```

This procedure says "if (b > a) and (b < a * b) then return b, else return a", which becomes "if (4 > 3) and (4 < 12) then return 4, else return 3" so it returns 4.

```
> (+ 2 (if (> b a) b a))
6
```

First evaluate the `if` statement. It says "if (b > a) return b, else return a" which becomes (by substitution) "if 4 > 3 then return 4, else return 3", so it returns 4. We then evaluate `(+ 2 4)`, which returns 6.

```
> (* (cond ((> a b) a) ((< a b) b) (else -1)) (+ a 1))
16
```

We first evaluate the `cond` statement. It evaluates to 4, since the first predicate is false but the second one is true. The second argument `(+ a 1)` evaluates to 4 also, and finally `(* 4 4)` evaluates to 16.

### Exercise 1.2

Translate the following expression into prefix form:

$\dfrac{5 + 4 + (2 - (3 - (6 + 4/5)))}{3(6 - 2)(2 - 7)}$

I reckon we have:

```
> (/ (+ 5 4 (- 2 (- 3 (+ 6 (/ 4 5))))) (* 3 (- 6 2) (- 2 7)))
-37/150
```

That return value was a surprise to me! I had assumed that when applying the division operator to two integers, Scheme would either perform integer division (like Java) or return a floating point number (like Python). Apparently it has a built-in rational data type. Which is nice.

### Exercise 1.3

Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers.

Here's a pretty ugly answer, using the built-in `min` function:

```
> (define (f a b c)
    (- (+ (* a a) (* b b) (* c c))
       (* (min a b c) (min a b c))))
```

Why is it ugly? Well, it needlessly applies `min` twice, and it does four multiplications, two additions and a subtraction, when all that's needed is one addition and two multiplications. How about:

```
> (define (g a b c)
    (cond ((and (<= a b) (<= a c)) (+ (* b b) (* c c)))
          ((<= b c) (+ (* a a) (* c c)))
          (else (+ (* a a) (* b b)))))
```

This looks uglier, but it's more efficient: it never performs more than two multiplications and one addition, and never more than three comparisons (the minimum possible, since you have to sort the arguments so that you know which is the smallest, and this takes three comparisons in the worst case).

If anyone has a prettier and more efficient solution (perhaps one that works for arbitrary lists of arguments?) then I'd like to see it.

## 2 comments

I would have done it like this (in Emacs-Lisp):

(defun square (num) (* num num))

(defun largest (n nums) (last (sort nums '<) n))

(defun sum-square-largest-2 (&rest nums)
  (apply '+ (mapcar 'square (largest 2 nums))))

This creates something that is modular, easy to understand, and easily generalized. Sure, sorting the whole list might be a little inefficient, but that is a premature optimization. You could always replace largest with an efficient selection algorithm if you needed. Pretty-printed version on gist.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9083746075630188, "perplexity_flag": "middle"}
http://www.contrib.andrew.cmu.edu/~ryanod/?p=207
## Analysis of Boolean Functions by Ryan O'Donnell Fall 2012 course at Carnegie Mellon # §1.2: The Fourier expansion: functions as multilinear polynomials The Fourier expansion of a boolean function $f : \{-1,1\}^n \to \{-1,1\}$ is simply its representation as a real, multilinear polynomial. (Multilinear means that no variable $x_i$ appears squared, cubed, etc.) For example, suppose $n = 2$ and $f = {\textstyle \min_2}$, the “minimum” function on $2$ bits: \begin{align*} {\textstyle \min_2}(+1,+1) &= +1, \\ {\textstyle \min_2}(-1,+1) &= -1, \\ {\textstyle \min_2}(+1,-1) &= -1, \\ {\textstyle \min_2}(-1,-1) &= -1. \end{align*} Then ${\textstyle \min_2}$ can be expressed as a multilinear polynomial, \begin{equation} \label{eqn:min2-expansion} {\textstyle \min_2}(x_1,x_2) = -\tfrac12 + \tfrac12 x_1 + \tfrac12 x_2 + \tfrac12 x_1 x_2; \end{equation} this is the “Fourier expansion” of ${\textstyle \min_2}$. As another example, consider the majority function on $3$ bits, $\mathrm{Maj}_3 : \{-1,1\}^3 \to \{-1,1\}$, which outputs the $\pm 1$ bit occurring more frequently in its input. Then it’s easy to verify the Fourier expansion \begin{equation} \label{eqn:maj3-expansion} \mathrm{Maj}_3(x_1,x_2,x_3) = \tfrac{1}{2} x_1 + \tfrac{1}{2} x_2 + \tfrac{1}{2} x_3 – \tfrac{1}{2} x_1x_2x_3. \end{equation} The functions ${\textstyle \min_2}$ and $\mathrm{Maj}_3$ will serve as running examples in this chapter. Let’s see how to obtain such multilinear polynomial representations in general. Given an arbitrary boolean function $f : \{-1,1\}^n \to \{-1,1\}$ there is a familiar method for finding a polynomial which interpolates the $2^n$ values which $f$ assigns to the points $\{-1,1\}^n \subset {\mathbb R}^n$. For each point $a = (a_1, \dots, a_n) \in \{-1,1\}^n$ the indicator polynomial $$1_{\{a\}}(x) = \left(\tfrac{1+a_1x_1}{2}\right)\left(\tfrac{1+a_2x_2}{2}\right) \cdots \left(\tfrac{1+a_nx_n}{2}\right)$$ takes value $1$ when $x = a$ and value $0$ when $x \in \{-1,1\}^n \setminus \{a\}$. Thus $f$ has the polynomial representation $$f(x) = \sum_{a \in \{-1,1\}^n} f(a) 1_{\{a\}}(x).$$ Illustrating with the $f = {\textstyle \min_2}$ example again, we have \begin{align} \qquad\qquad\quad {\textstyle \min_2}(x)\quad=\quad& \left(+1\right) \left(\tfrac{1+x_1}{2}\right)\left(\tfrac{1+x_2}{2}\right)\nonumber\\ \quad+\quad& \left(-1\right) \left(\tfrac{1-x_1}{2}\right)\left(\tfrac{1+x_2}{2}\right) \label{eqn:min2-fourier-computation}\\ \quad+\quad& \left(-1\right) \left(\tfrac{1+x_1}{2}\right)\left(\tfrac{1-x_2}{2}\right)\nonumber\\ \quad+\quad& \left(-1\right) \left(\tfrac{1-x_1}{2}\right)\left(\tfrac{1-x_2}{2}\right)\nonumber \quad=\quad -\tfrac12 + \tfrac12 x_1 + \tfrac12 x_2 + \tfrac12 x_1 x_2. \nonumber \end{align} Let us make two remarks about this interpolation procedure. First, it works equally well in the more general case of real-valued boolean functions, $f : \{-1,1\}^n \to {\mathbb R}$. Second, since the indicator polynomials are multilinear when expanded out, the interpolation always produces a multilinear polynomial. Indeed it makes sense that we can represent functions $f : \{-1,1\}^n \to {\mathbb R}$ with multilinear polynomials: since we only care about inputs $x$ where $x_i = \pm 1$, any factor of $x_i^2$ can be replaced by $1$. We have illustrated that every $f : \{-1,1\}^n \to {\mathbb R}$ can be represented by a real multilinear polynomial; as we will see in Section 3, this representation is unique. 
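A quick check of the $\mathrm{Maj}_3$ expansion above (added here; it is not in the original text): at $x=(+1,+1,-1)$ we have $\mathrm{Maj}_3(x)=+1$, and indeed
$$\tfrac{1}{2}(+1)+\tfrac{1}{2}(+1)+\tfrac{1}{2}(-1)-\tfrac{1}{2}(+1)(+1)(-1) \;=\; \tfrac{1}{2}+\tfrac{1}{2}-\tfrac{1}{2}+\tfrac{1}{2} \;=\; +1,$$
while at $x=(-1,-1,+1)$ we have $\mathrm{Maj}_3(x)=-1$, and
$$\tfrac{1}{2}(-1)+\tfrac{1}{2}(-1)+\tfrac{1}{2}(+1)-\tfrac{1}{2}(-1)(-1)(+1) \;=\; -\tfrac{1}{2}-\tfrac{1}{2}+\tfrac{1}{2}-\tfrac{1}{2} \;=\; -1.$$
The remaining inputs follow because both $\mathrm{Maj}_3$ and the polynomial are symmetric in $x_1,x_2,x_3$ and odd under negating all three inputs.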
The multilinear polynomial for $f$ may have up to $2^n$ terms, corresponding to the subsets $S \subseteq [n]$. We write the monomial corresponding to $S$ as $$x^S = \prod_{i \in S} x_i \tag{with $x^\emptyset = 1$ by convention}$$ and we use the following notation for its coefficient: \begin{equation*} \widehat{f}(S) = \text{coefficient on monomial $x^S$ in the multilinear representation of $f$}. \end{equation*} We summarize this discussion by the Fourier expansion theorem: Theorem 1 Every function $f : \{-1,1\}^n \to {\mathbb R}$ can be uniquely expressed as a multilinear polynomial, \begin{equation} \label{eqn:multilinear-expansion} f(x) = \sum_{S \subseteq [n]} \widehat{f}(S)\,x^S. \end{equation} This expression is called the Fourier expansion of $f$, and the real number $\widehat{f}(S)$ is called the Fourier coefficient of $f$ on $S$. Collectively, the coefficients are called the Fourier spectrum of $f$. As examples, from \eqref{eqn:min2-expansion} and \eqref{eqn:maj3-expansion} we obtain: $$\widehat{{\textstyle \min_2}}(\emptyset) = -\tfrac{1}{2}, \quad \widehat{{\textstyle \min_2}}(\{1\}) = \tfrac{1}{2}, \quad \widehat{{\textstyle \min_2}}(\{2\}) = \tfrac{1}{2}, \quad \widehat{{\textstyle \min_2}}(\{1,2\}) = \tfrac{1}{2};$$ $$\widehat{\mathrm{Maj}_3}(\{1\}),\ \widehat{\mathrm{Maj}_3}(\{2\}),\ \widehat{\mathrm{Maj}_3}(\{3\}) = \tfrac{1}{2}, \quad \widehat{\mathrm{Maj}_3}(\{1,2,3\}) = -\tfrac{1}{2}, \quad \widehat{\mathrm{Maj}_3}(S) = 0 \text{ else.}$$ We finish this section with some notation. It is convenient to think of the monomial $x^S$ as a function on $x = (x_1, \dots, x_n) \in {\mathbb R}^n$; we write it as $$\chi_S(x) = \prod_{i \in S} x_i.$$ Thus we sometimes write the Fourier expansion of $f : \{-1,1\}^n \to {\mathbb R}$ as $$f(x) = \sum_{S \subseteq [n]} \widehat{f}(S)\,\chi_S(x).$$ So far our notation only makes sense when representing the Hamming cube by $\{-1,1\}^n \subseteq {\mathbb R}^n$. The other frequent representation we will use for the cube is ${\mathbb F}_2^n$. We can define the Fourier expansion for functions $f : {\mathbb F}_2^n \to {\mathbb R}$ by “encoding” input bits $0, 1\in {\mathbb F}_2$ by the real numbers $-1,1 \in {\mathbb R}$. We choose the encoding $\chi : {\mathbb F}_2 \to {\mathbb R}$ defined by $$\chi(0_{{\mathbb F}_2}) = +1, \quad \chi(1_{{\mathbb F}_2}) = -1.$$ This encoding is not so natural from the perspective of boolean logic; e.g., it means the function $\min_2$ we have discussed represents logical $\mathrm{OR}$. But it’s mathematically natural because for $b \in {\mathbb F}_2$ we have the formula $\chi(b) = (-1)^b$. We now extend the $\chi_S$ notation: Definition 2 For $S \subseteq [n]$ we define $\chi_S : {\mathbb F}_2^n \to {\mathbb R}$ by $$\chi_S(x) = \prod_{i \in S} \chi(x_i) = (-1)^{\sum_{i \in S} x_i},$$ which satisfies \begin{equation} \label{eqn:chi-character} \chi_S(x+y) = \chi_S(x)\chi_S(y). \end{equation} In this way, given any function $f : {\mathbb F}_2^n \to {\mathbb R}$ it makes sense to write its Fourier expansion as \begin{equation*} f(x) = \sum_{S \subseteq [n]} \widehat{f}(S)\,\chi_S(x). \end{equation*} In fact, if we are really thinking of ${\mathbb F}_2^n$ the $n$-dimensional vector space over ${\mathbb F}_2$, it makes sense to identify subsets $S \subseteq [n]$ with vectors $\gamma \in {\mathbb F}_2^n$. This will be discussed in Chapter 3. 
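Not part of the original post: a small Python sketch that recovers the Fourier coefficients of a function on $\{-1,1\}^n$ numerically. It uses the fact that $\widehat{f}(S)$ equals the average of $f(x)\,x^S$ over the cube, which is consistent with the expansion above because every nonempty monomial $x^S$ averages to zero over $\{-1,1\}^n$.

```python
from itertools import combinations, product

def fourier_coefficients(f, n):
    """Brute-force Fourier coefficients of f : {-1,1}^n -> R.

    The coefficient on x^S is the average of f(x) * prod_{i in S} x_i
    over all 2^n points of the cube.
    """
    cube = list(product([1, -1], repeat=n))
    coeffs = {}
    for size in range(n + 1):
        for S in combinations(range(n), size):
            total = 0.0
            for x in cube:
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            coeffs[S] = total / len(cube)
    return coeffs

min2 = lambda x: min(x)
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(fourier_coefficients(min2, 2))  # {(): -0.5, (0,): 0.5, (1,): 0.5, (0, 1): 0.5}
print(fourier_coefficients(maj3, 3))  # singletons 0.5, the full set (0, 1, 2) gives -0.5, all else 0
```

The printed coefficients match the $\min_2$ and $\mathrm{Maj}_3$ expansions displayed earlier in the section.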
November 4th, 2011 | Tags: Fourier expansion | Category: All chapter sections, Chapter 1: Boolean functions and the Fourier expansion

### 3 comments to §1.2: The Fourier expansion: functions as multilinear polynomials

• Hi Ryan, is there a typo in the definition of the indicator polynomial ? the right hand side has no term involving the variable x at all.
• Fixed, thanks!
• Gil Kalai Dear Ryan best luck with this new project –Gil
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7660205960273743, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/triangle?sort=unanswered
# Tagged Questions 2answers 244 views ### Need a little help understanding this triangle concept. - Thales Theorem The question is: ... 2answers 58 views ### A question on Trigonometry (bisector) If two bisector of a triangular is equal, then it is Isosceles triangular. 2answers 54 views ### Calculating meeting point where line intersects arch How do I find the point $p$ where the arch meets the red line if the angle of the blue are is known and the height of the yellow? 1answer 152 views ### Algebra question about Triangle Interiors I was reading about Triangle Interiors on Wolfram Alpha: http://mathworld.wolfram.com/TriangleInterior.html and they have a simple equation: \mathbf{v} = \mathbf{v}_0 + a\mathbf{v}_1 + ... 1answer 50 views ### Does the orthocenter have any special properties? Each of the commonly known triangle centers I know has some sort of special property. For example: The incenter is the center of the inscribed circle. The circumcenter is the center of the circle ... 1answer 90 views ### Altitude of tetrahedron? I'm really curious to know any relationships between the altitude of a tetrahedron and how the foot of this altitude splits the base triangle. For example if you have a tetrahedron PABC with apex P, ... 1answer 33 views ### P is a point in triangle $ABC$, what is $[APC]$? Moderator Note: This question is part of an ongoing contest on Brilliant.org, and will be unlocked in 1 week. P is a point in triangle $ABC$. The lines $AP$,$BP$, and $CP$ intersect the sides ... 1answer 122 views ### Geometry - optimal 2D mesh between X expendable points Say you have X points on a plane. If you connect two points, you form a line. Connecting three points forms a triangle. A line cannot cross a line, and a smaller triangle cannot be created inside a ... 1answer 72 views ### Prove $(b-c)\sin A+(c-a)\sin B+(a-b)\sin C=0$ Prove the following equation, when you consider it as $BC=a$, $CA=b$, and $AB=c$ in a triangle $ABC$. $(b-c)\sin A+(c-a)\sin B+(a-b)\sin C=0$ 1answer 51 views ### Gergonne Point of a triangle coinciding with other triangle centers I am trying to prove the following: Let $T$ be the Gergonne point (the intersection of the lines that connect the points of tangency of the incircle with the vertices of the triangle) of \$\triangle ... 1answer 38 views ### Triangle $z$-index interpolation between the vertices I got a $2$D triangle, each vertex has a $2$D coordinate with a $z$-index value (NOT a $z$ coordinate!). The $z$-index value indicates whether a vertex lays on, in front of, or behind your screen ... 1answer 152 views ### Squares in a triangle? I've got some trouble... IJKL is a square and B, I, J, C are aligned (alternatively, |IJ| is confounded with |BC|. h is the height of acute $\triangle$ ABC from A to side BC. C1 is the red ... 1answer 113 views ### Barycentric coordinates of a triangle I have to do what described in the picture below. Any ideas on how to do this? 1answer 58 views ### how to find(measure,calculate) the distance (height,length) of an object? I am trying to develope code ,so i need a mathematics help to proceed,please help me to find distance of an object using trigonometry r any applicable maths without using any sensors r external ... 0answers 73 views ### triangles in a grid of $n\times n$ with positive coordinates I need to count the number of triangles formed in a grid of $n\times n$ with positive integer coordinates $(0..n)$. For example for $n = 1$ the answer is 4. 
0answers 253 views ### Euler's Line of a medial triangle I have the following problem with a comment below on the steps that I took so far. Here is the example: Let triangle ABC be any triangle. The midpoints of the sides in Triangle ABC are labeled \$A', ... 0answers 157 views ### Does “triangle” in English exclude degenerate triangles? Just for fun read few problems on the projeteuler.net website. Number 276 found interesting: Consider the triangles with integer sides a, b and c with a ≤ b ≤ c. An integer sided triangle ... 0answers 32 views ### Proving that the circumcenter is the centroid Given a triangle and its centroid, we know that the 3 line segments between the centroid and each of the vertices of the triangle divide the triangle into three smaller triangles. Prove that the ... 0answers 21 views ### maximum length of a scaled vector in a triangle (simplex) Given a triangle (or, in general, a simplex) $T$ and a vector $\vec{s}$, I'd like to compute the quantity $$\max\{|x-y|: x,y\in T, x-y = \alpha \vec{s}, \alpha\in\mathbb{R}\}$$ i.e., the maximum ... 0answers 114 views ### Finding side and angle of isosceles triangle inside two circles I'm having a problem that I'm not sure how to solve (or if it's even possible). It's not homework, just something i'm struggling with for a project. :) Basically, there are two circles, represented ... 0answers 31 views ### Two coloured plane Can you prove that For any two angles $θ,ϕ$ there exists a monochromatic triangle that has angles $θ,ϕ,180−(θ+ϕ)$ in two coloured plane? 0answers 58 views ### How to find the inverse position inside a triangle If i were standing in a triangle - How do i calculate the inverse of my position? Can it be done? It's easy inside a rectangle, but I can't think of how you would do it inside of a triangle. I'm ... 0answers 34 views ### Triangular exponentation logarithm and inverse The generalized formula of triangular exponentation on real numbers field is $x ^ {\triangle y} = \frac {1} {y \cdot B (x, y)} = \frac {\Gamma(x + y)} {\Gamma(x) \cdot \Gamma(y + 1)}$ It's my ... 0answers 326 views ### General formula for computing triangular gaussian quadrature. While this is a simple question, I'm totally lost. Is there any general formula for generation of n-point gaussian quadrature over a triangle? I'll use this formula to generate a variable-point (7, ... 0answers 37 views ### Law sines in Spherical Triangle $\rightarrow$ Law sines in plane triangle Could any one tell me how to estimate or get law of sines in Spherical Triangle to The Law of Sines in Plane Triangle? i.e $\frac{\sin a}{\sin A}=\frac{sin b}{\sin B}=\frac{\sin c}{\sin C}$ to ... 0answers 55 views ### Finding a formula for perimeter of triangles in triangle I hope you are familiar with counting triangles in triangle problem. I've studied it a little recently. Now i want to find a formula for sum of perimeters of this all triangles but i don't know how to ... 0answers 55 views ### Sum of angles in a hyperbolic triangle with one ideal angle I want to calculate the sum of the angles of the triangle formed in the hyperbolic plane from the points $(-1,1), (0,1)$, and $(1,1)$. This forms an angle at the origin which has an infinite slope for ... 0answers 54 views ### Figuring out angles of a second triangle based off of one side of a first My friend and I are developing some image tracking software for a robot we are creating and we have this right here: ... 0answers 80 views ### Unknown depth issue: Triangle, Pyramid, Rotation, Translation, Zoom? 
Edit: Had to delete the 2nd picture Another tricky question. What you can see here is my physical pyramid built with 3 leds which form a triangle in 1 plane and another led in the mid center, about ... 0answers 63 views ### Determining a point in 3D space So given a point, a rotation around the y-axis, a rotation around the x-axis, and a distance, how can one calculate the relative point in space? For example, the beginning coordinates are (0,0,0). ... 0answers 31 views ### Looking for different (analytical) approaches to a problem Given 5 points in the plane any three of which are vertices of a triangle. Prove that among these triangles there is an obtuse triangle. I was able to prove it by examining all possible cases. I ... 0answers 33 views ### Rotate a triangle to next 'visible' side I have a triangle along the y/z axis (I can only see the flat side facing me). How do I rotate it around the x axis so that the next side faces me? 0answers 130 views ### pixels in a projection of a triangle in 3d space onto a 3d plane through a pinhole camera I have a triangle in 3d space. The x and y components of its vertices make a 2d right isoceles triangle. I am projecting it through a pinhole onto a plane. The projected triangles on the plane are now ... 0answers 75 views ### Get value of angle with 45 degrees as maximum and 0 and 90 degrees as minimum I want the calculate the "value" of an angle in such a way that: The angle of 45 degrees corresponds with the maximum value of 1 The angles of 0 and 90 degrees correspond with the minimum value of 0 ... 0answers 129 views ### Uniform Random Points on a triangle using only edge plane normals For a triangle $ABC$ in 3D (each point has x, y, z coordinates) is it possible to generate uniform random points on the triangle from only the following data: Normal of the triangle plane \$N = ... 0answers 161 views ### How to find the last coordinate of an isosceles triangle I'm having some trouble trying to find out how to find the final coordinate on an isosceles triangle. Here's a list of information that I have: The length of the two equal sides (A and B) The angle ... 0answers 177 views ### problem finding a 2D Point in a triangle I have a Triangle with 3 Points - A, B and C and the angle alpha A and B are fixed. C is any point at the side of 'b' Alpha has at A and B the same size I need to find any Point on side 'a' except B ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9045118689537048, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/47531/simulating-the-evolution-of-a-wavepacket-through-a-crystal-lattice?answertab=votes
# Simulating the evolution of a wavepacket through a crystal lattice

I am interested in simulating the evolution of an electronic wave packet through a crystal lattice which does not exhibit perfect translational symmetry. Specifically, in the Hamiltonian below, the frequency of each site $\omega_n$ is not constant. Suppose the lattice is specified by a certain tight-binding Hamiltonian $$H = \sum_n \omega_n a^\dagger_n a_n + t \sum_{<n>} a^\dagger_n a_{n+1} +\text{all nearest neighbor interactions} + \text{h.c.}$$ We prepare a wavepacket, and for simplicity, we express the wavepacket in the Fock basis of each lattice site $$| \psi \rangle = \sum_i |b_1\rangle |b_2\rangle \ldots |b_n\rangle.$$ Thus, there are $b_1$ electrons in the $1$st lattice site. Of course, electrons are fermions and $b_1$ may be either $0$ or $1$.

Suppose we treat this problem purely quantum mechanically. Then we will need to prepare a vector of length $2^n$, which is computationally intractable for any significant $n$. I am interested in physical techniques that may be employed to simplify this problem. Is it possible to attempt the problem in a semiclassical manner?

-

Our FAQ actually disavows computational questions. With your permission I will migrate this to the new Scientific Computation beta site. Of course, you can ask about the physics here notwithstanding that you are planning a computational attack, but this seems to be an implementation question. Or have I mistaken your intent? – dmckee♦ Dec 24 '12 at 21:19

2 I am more interested in the physical techniques that can be used to simplify the problem and hence make it computationally viable. As we know, quantum mechanical simulations on classical computers are often intractable as the computational steps required increase exponentially with the degrees of freedom in the system. – flamearchon Dec 24 '12 at 21:25

2 At the end of the day, I would like to numerically time-step through some differential equation. The question is which differential equation do I solve! – flamearchon Dec 24 '12 at 21:29

1 Ah...thank you for the clarification. This certainly should remain here. – dmckee♦ Dec 24 '12 at 21:45

@flamearchon For the exact method, you either use eigenvalue or direct evolution, and you don't have the symmetry in the Hamiltonian. The other method should only be an approximation. If you get the answer, please post here. – hwlau Dec 28 '12 at 7:18

## 1 Answer

If you use a tight-binding Hamiltonian, it is reasonable to start not from a semiclassical approximation but from the one-particle approximation. In that case, you have an amplitude (complex number) at each site, the state is a complex vector of length $n$, the Hamiltonian is an $n\times n$ (sparse) matrix, and the problem of time evolution and/or eigenstates (for a one-particle state) is solvable for relatively large lattices.

If you are interested in many-particle physics, you may build a model on top of these one-particle states. The details are dependent on what exactly you wish to compute. Unfortunately, I do not know a reference with rigorous transfer from one formulation to another.

-

Yes, usually we don't want to get the exact wavefunction. I think the ground state energy is usually what one wants to compute. Do you have any idea how to do that with some approximation? – hwlau Jan 6 at 9:34
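Not part of the original thread: a minimal sketch of the one-particle approach described in the answer, for a 1-D chain with random site energies. The variable names, the Gaussian initial packet, the disorder model, and the choice $\hbar = 1$ are all my own assumptions, not anything specified in the question.

```python
import numpy as np

# One-particle tight-binding chain (hbar = 1): H[n, n] = omega_n, H[n, n+1] = t_hop.
# The random site energies model the broken translational symmetry in the question.
n_sites = 200
t_hop = 1.0
rng = np.random.default_rng(0)
omega = 0.5 * rng.standard_normal(n_sites)

H = (np.diag(omega)
     + t_hop * np.diag(np.ones(n_sites - 1), 1)
     + t_hop * np.diag(np.ones(n_sites - 1), -1))

# Gaussian wavepacket centred at x0 with carrier momentum k0.
x = np.arange(n_sites)
x0, sigma, k0 = 50, 5.0, np.pi / 2
psi0 = np.exp(-((x - x0) ** 2) / (4 * sigma ** 2)) * np.exp(1j * k0 * x)
psi0 /= np.linalg.norm(psi0)

# Exact evolution via the eigendecomposition of the real symmetric H.
E, V = np.linalg.eigh(H)

def evolve(psi, time):
    return V @ (np.exp(-1j * E * time) * (V.conj().T @ psi))

for time in (0.0, 20.0, 40.0):
    psi = evolve(psi0, time)
    print(time, float(np.sum(x * np.abs(psi) ** 2)))  # mean position of the packet
```

For the genuinely many-body problem the question is ultimately after, this single-particle picture is only the starting point, as the answer notes.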
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9220289587974548, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/184793/graph-theory-and-burnsides-lemma?answertab=votes
# Graph theory and Burnside's lemma

How many non-isomorphic tournaments (directed cliques) are there with $n=5$ vertices?

I'm not sure how to understand isomorphism here. This problem was in the set of problems on Burnside's lemma but it's different than the rest, I think. Normally I was asked to count the number of significantly different colorings of a necklace or a chessboard, which was very nice, and the group for Burnside's lemma was given explicitly - simply all rotations. But here I don't know what the group is, or how to approach this.

Is it very difficult to solve this in general - for $n\in\mathbb{N}$? Will the term "graph automorphism" be useful here? How should I understand it? Because I have it in the next problem.

## 1 Answer

Judging from the question, a lot of the confusion is converting from group actions terminology to graph theory terminology. I'll try to clarify.

Let $\mathcal{A}$ be the set of labelled tournaments on the vertex set $\{1,2,\ldots,5\}$. We know $$|\mathcal{A}|=2^{{5 \choose 2}}=1024$$ since each edge can go in one of two directions. The symmetric group $S_5$ acts on $\mathcal{A}$ by permuting the vertex labels. Pick an arbitrary tournament $T \in \mathcal{A}$ and an arbitrary permutation $\alpha \in S_5$.

• We say that $T$ and $\alpha(T)$ are isomorphic (which we intuitively interpret as meaning structurally equivalent).
• We can define an equivalence relation on the set of $5$-vertex tournaments, with tournaments $A$ and $B$ being equivalent if $\alpha(A)=B$ for some $\alpha \in S_5$. The equivalence classes are called isomorphism classes or orbits. The number of non-isomorphic tournaments is the number of orbits.

Burnside's Lemma gives a formula for the number of orbits under a group action. However, to use it, we need to find a way of counting the number of tournaments $T$ that are "fixed" by each $\alpha \in S_5$.

• If $\alpha(T)=T$, then $\alpha$ is said to be an automorphism of $T$. (Alternatively, we can say that $\alpha$ stabilises $T$.) This is our notion of "fixed".

Thus, to use Burnside's Lemma, for each $\alpha \in S_5$, we need to find the size of $$F_\alpha:=\{T \in \mathcal{A}:\alpha(T)=T\}$$ which gives the number of orbits (the number of non-isomorphic tournaments) as: $$\frac{1}{|S_5|} \sum_{\alpha \in S_5} |F_\alpha|.$$

This might sound horrid (since $|S_5|=120$), but it's made much easier by the observation that $|F_\alpha|$ varies only with the cycle structure of $\alpha$. The possible cycle structures are:

• (1,1,1,1,1),
• (2,1,1,1),
• (2,2,1),
• (3,1,1),
• (3,2),
• (4,1),
• (5).

For each of these cycle structures, e.g. (2,2,1), we can pick a representative, e.g. $\beta=(12)(34)(5)$, and find $|F_\beta|$.

• Hint: The number of permutations with a given cycle structure is given here, for example.
• Hint: Automorphisms of tournaments must have odd order. (Why?) This means we need only consider the cycle structures $(1,1,1,1,1)$, $(3,1,1)$ and $(5)$.

Thus, the question is really asking for $|F_{(1)(2)(3)(4)(5)}|=1024$, $|F_{(123)(4)(5)}|$ and $|F_{(12345)}|$.
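An addendum, not part of the question or answer above: since there are only $2^{10}=1024$ labelled tournaments on $5$ vertices and $120$ permutations, the orbit count can be checked by brute force in a few lines of Python, which is a useful sanity check on the Burnside computation.

```python
from itertools import combinations, permutations, product

# Count non-isomorphic tournaments on 5 vertices by brute force:
# enumerate all orientations of the C(5,2) = 10 edges and keep one
# canonical form per S_5-orbit.
n = 5
pairs = list(combinations(range(n), 2))

def canonical_form(orientation):
    """Lexicographically smallest sorted edge list over all relabellings in S_n."""
    best = None
    for perm in permutations(range(n)):
        edges = []
        for (i, j), forward in zip(pairs, orientation):
            a, b = (perm[i], perm[j]) if forward else (perm[j], perm[i])
            edges.append((a, b))
        key = tuple(sorted(edges))
        if best is None or key < best:
            best = key
    return best

orbits = {canonical_form(bits) for bits in product([True, False], repeat=len(pairs))}
print(len(orbits))  # the number of non-isomorphic 5-vertex tournaments
```

The printed count can then be compared with the value Burnside's Lemma gives from the three fixed-point counts the answer asks for.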
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9348152279853821, "perplexity_flag": "head"}
http://www.sjbrown.co.uk/2012/03/30/sampling-sun-and-sky/
# Simon's Graphics Blog

Work log for ideas and hobby projects.

## Sampling Sun And Sky

with one comment

In this post I will briefly cover how I implemented sampling of external light sources in a path tracing framework, concluding with an observation about sampling multiple external light sources that are non-zero over very different solid angles. I'm going to assume the reader is familiar with path tracing in the Veach framework.

My definition of an external light source, which I've also seen called an "infinite" light source since they are considered to be infinitely far away (and infinitely bright as a result), is as follows:

• Radiance always originates from outside of the scene bounds
• Radiance is a function of world space direction only (not sample position)

A simple example would be a cube map considered to be always centered at the sample point.

## Single Emitter

The way I represent external lighting is as a sum of external emitters. A single external emitter is defined as follows:

• Provides a function that takes a world space direction and returns emitted radiance from that direction
• Provides a function to generate a world space direction and emitted radiance given 2 uniform random variables

Both of these functions also compute the probability density of the direction with respect to solid angle.

There are two techniques in a path tracer that would include radiance from an external emitter:

• Sample the BSDF and evaluate the emitter if there is no geometry in that ray direction
• Sample the emitter and evaluate the BSDF if there is no geometry in that ray direction

Here's a diagram of this for an external emitter that is only non-zero for a small solid angle (where the dotted line is solid):

In this example, BRDF sampling is unlikely to generate samples that hit the non-zero region, so it is essential to use multiple importance sampling between these two techniques to avoid excessive noise. This should all be very familiar to anyone that has implemented such a "next event estimation" path tracer, so let's move on.

## Multiple Emitters

It is common to require more than one external emitter to be active at the same time. For example, I implement the daylight model by Preetham et al as the sum of 2 external emitters:

• A constant emission function for some solid angle in the sun direction (i.e. sunlight)
• A variable emission function over the hemisphere in the "up" direction (i.e. skylight)

A first strategy to handle multiple emission functions might be to pick a single emitter for each sample (using some random variable), adjust the pdf accordingly, and rely on multiple samples to cover all the functions. However, this is quite wasteful of our intersection tests: since these emission functions should be completely additive for a given direction, we can do better by using all functions and multiple importance sampling between them. Note this is a separate application of MIS, used in addition to the MIS between path tracing techniques we already apply during the path tracing algorithm.

So let us consider a new type of external emitter that is simply an array of N external emitters, and consider how to sample it.
We define each emitter as a function of direction:

$f_i(\omega)$

Each emitter has probability density with respect to solid angle of:

$p_i(\omega)$

We also define our discrete probabilities for choosing one of these N emitters as:

$\left\{ s_1, ..., s_N \right\} \qquad \sum_{i=1}^N{s_i} = 1$

We can then define the value and probability density of our combined emitter as:

$f(\omega) = \sum_{i=1}^N{f_i(\omega)}$

$p(\omega) = \sum_{i=1}^N{s_i p_i(\omega)}$

Our sampling function is constructed from the sampling functions for each emitter as follows:

• Pick an emitter according to the probabilities $s_i$
• Sample a world space direction using the sampling function of the chosen emitter
• Compute the radiance and combined pdf using the equations above

This multiple importance sampling step nicely handles the fact that sunlight and skylight are non-zero over very different solid angles: even if we choose the sky emitter and happen to lie within the solid angle of the sun (producing a very large radiance value), the combined pdf will take into account the density of all emitters, resulting in a low-variance estimate.

A sensible approach for choosing the discrete probabilities $s_i$ is to use the proportion of the total power provided by each emitter.

## Multiple Emitters With Restricted Sampling

What I hadn't realised before is that it's perfectly valid to make the discrete probability for one or more of the emitters 0. This can of course result in the combined radiance function (as defined above) being non-zero when the combined pdf is zero. When this situation is detected by calling code while the emitters are being evaluated for some direction, it should be treated as "this direction can only be sampled by BSDF sampling", setting the multiple importance sampling weight to 1 (since there is only one technique for this direction).

This gives us a way to only multiple importance sample between BSDF sampling and emitter sampling for some components of the external lighting, with the remaining components (that have 0 discrete probability) sampled only using BSDF sampling.

Somewhat counter-intuitively, I get lower variance when using this approach with sunlight and skylight. I produced a reference image for this scene using 1024spp, then looked at the convergence speed for discrete probabilities that correspond to the following ratios:

• 50/50 sunlight/skylight
• 90/10 sunlight/skylight
• 100/0 sunlight/skylight (i.e. only the sun emitter can be sampled explicitly)

Here's the reference image, sorry for the lack of shading normals on the Stanford bunny:

Here's a log/log graph of the results for each of the 3 sampling strategies for the external emitters. The x axis is the base-2 logarithm of the sample count, so the data points are at 1, 2, 4, etc. up to 256 samples. The y axis is the base-10 logarithm of RMSE, so lower is better. (At some point I need to implement a perceptual metric, but RMSE will have to do for now.)

The technique with the lowest noise is the technique that doesn't explicitly sample skylight at all, relying only on BSDF sampling to find skylight not within the solid angle of the sun. I think this works out because:

• By always sampling sunlight, we removed a random variable from the system.
• Since BSDF sampling only for skylight is already a good technique, the net effect of removing a random variable is less overall noise.
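Not from the original post: below is a minimal Python sketch of the combined-emitter idea described above, with made-up class and method names and scalar radiance (a real renderer would use spectra). The assumed per-emitter interface is evaluate(direction) -> (radiance, pdf) and sample(u1, u2) -> (direction, radiance, pdf), with pdfs taken with respect to solid angle.

```python
class CombinedEmitter:
    """Sum of N external emitters with discrete selection probabilities s_i."""

    def __init__(self, emitters, selection_probs):
        assert abs(sum(selection_probs) - 1.0) < 1e-6
        self.emitters = emitters
        self.selection_probs = selection_probs  # the s_i above; entries may be 0

    def evaluate(self, direction):
        """Total radiance f(w) and combined pdf p(w) = sum_i s_i * p_i(w)."""
        radiance, pdf = 0.0, 0.0
        for emitter, s in zip(self.emitters, self.selection_probs):
            emitted, density = emitter.evaluate(direction)
            radiance += emitted
            pdf += s * density
        return radiance, pdf

    def sample(self, u_select, u1, u2):
        """Pick one emitter with probability s_i, sample it, report the combined pdf."""
        cumulative = 0.0
        for emitter, s in zip(self.emitters, self.selection_probs):
            cumulative += s
            if s > 0.0 and u_select <= cumulative:
                direction, _, _ = emitter.sample(u1, u2)
                radiance, pdf = self.evaluate(direction)
                return direction, radiance, pdf
        raise RuntimeError("selection probabilities must sum to 1")

# If evaluate() reports non-zero radiance with a combined pdf of 0 (an emitter whose
# s_i is 0), the caller treats that direction as reachable only by BSDF sampling and
# uses an MIS weight of 1 for the BSDF technique, as described in the post.
```

The restricted-sampling trick from the last section is then just a choice of selection_probs such as [1.0, 0.0] for sun and sky respectively.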
Written by Simon Brown

March 30th, 2012 at 10:12 pm

Posted in CUDA, Global Illumination, GPGPU, Rendering

### One Response to 'Sampling Sun And Sky'

1. It's nice. You can also notice that often we only need to sample a hemisphere (when sampling the sky), i.e. we can separate the sky sampling into 2 hemisphere samplings… and use 2 different sampling probabilities too. Of course, it depends on the scene!

krys 6 Nov 12 at 2:59 pm
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8968146443367004, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2010/09/15/modules-2/?like=1&source=post_flair&_wpnonce=f636fc8316
# The Unapologetic Mathematician ## Modules With the group algebra in hand, we now define a “$G$-module” to be a module for the group algebra of $G$. That is, it’s a (finite-dimensional) vector space $V$ and a bilinear map $A:\mathbb{C}[G]\times V\to V$. This map must satisfy $A(\mathbf{e},v)=v$ and $A(\mathbf{g},A(\mathbf{h},v))=A(\mathbf{gh},v)$. This is really the same thing as a representation, since we may as well pick a basis $\{e_i\}$ for $V$ and write $V=\mathbb{C}^d$. Then for any $g\in G$ we can write $\displaystyle A(\mathbf{g},e_i)=\sum\limits_{j=1}^dm_i^je_j$ That is, $A(\mathbf{g},\underbar{\hphantom{X}})$ is a linear map from $V$ to itself, with its matrix entries given by $m_i^j$. We define this matrix to be $\rho(g)$, which must be a representation because of the conditions on $A$ above. Conversely, if we have a matrix representation $\rho:G\to GL_d$, we can define a module map for $\mathbb{C}^d$ as $\displaystyle A(\mathbf{g},v)=\rho(g)v$ where we apply the matrix $\rho(g)$ to the column vector $v$. This must satisfy the above conditions, since they reflect the fact that $\rho$ is a representation. In fact, to define $A$, all we really need to do is to define it for the basis elements $\mathbf{g}\in\mathbb{C}[G]$. Then linearity will take care of the rest of the group algebra. That is, we can just as well say that a $G$-module is a vector space $V$ and a function $A:G\times V\to V$ satisfying the following three conditions: • $A$ is linear in $V$: $A(g,cv+dw)=cA(g,v)+dA(g,w)$. • $A$ preserves the identity: $A(e,v)=v$. • $A$ preserves the group operation: $A(g,A(h,v))=A(gh,v)$. The difference between the representation viewpoint and the $G$-module viewpoint is that representations emphasize the group elements and their actions, while $G$-modules emphasize the representing space $V$. This viewpoint will be extremely helpful when we want to consider a representation as a thing in and of itself. It’s easier to do this when we think of it as a vector space equipped with the extra structure of a $G$-action. ## 11 Comments » 1. For whatever reason, I took a course in Module Theory at Caltech, got lost in the forest, and couldn’t remember by the end what I was learning this FOR. Thanks for being so clear. Maybe I’ll get it better this time around… Comment by | September 15, 2010 | Reply 2. [...] Actions and Representations From the module perspective, we’re led back to the concept of a group action. This is like a -module, but [...] Pingback by | September 16, 2010 | Reply 3. [...] course, this shouldn’t really surprise us. After all, representations of are equivalent to modules for the group algebra; and the very fact that is an algebra means that it comes with a bilinear [...] Pingback by | September 17, 2010 | Reply 4. [...] Between Representations Since every representation of is a -module, we have an obvious notion of a morphism between them. But let’s be explicit about [...] Pingback by | September 21, 2010 | Reply 5. [...] We say that a module is “reducible” if it contains a nontrivial submodule. Thus our examples last time show [...] Pingback by | September 23, 2010 | Reply 6. [...] I’d like to cover a stronger condition than reducibility: decomposability. We say that a module is “decomposable” if we can write it as the direct sum of two nontrivial submodules [...] Pingback by | September 24, 2010 | Reply 7. [...] and Kernels A nice quick one today. Let’s take two -modules and . We’ll write for the vector space of intertwinors from to . This is pretty [...] 
Pingback by | September 29, 2010 | Reply

8. [...] Now that we know that images and kernels of -morphisms between -modules are -modules as well, we can bring in a very general [...]

Pingback by | September 30, 2010 | Reply

9. [...] and Commutant Algebras We will find it useful in our study of -modules to study not only the morphisms between them, but the structures that they [...]

Pingback by | October 1, 2010 | Reply

10. [...] way of looking at it: remember that a representation of a group on a space can be regarded as a module for the group algebra . If we then add a commuting representation of a group , we can actually [...]

Pingback by | November 1, 2010 | Reply

11. [...] that we're interested in concrete actions of Lie algebras on vector spaces, like we were for groups. Given a Lie algebra we define an -module to be a vector space equipped with a bilinear function [...]

Pingback by | September 12, 2012 | Reply
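Returning to the definition in the post above, here is a small worked example (my addition, not part of the original post): take $G=\mathbb{Z}/2=\{e,g\}$ and $V=\mathbb{C}$, and define

$A(e,v)=v \qquad A(g,v)=-v$

Linearity in $V$ is immediate, the identity condition holds by definition, and $A(g,A(g,v))=v=A(gg,v)$ since $gg=e$, so this is a $G$-module. The corresponding matrix representation is the one-dimensional sign representation $\rho(e)=(1)$, $\rho(g)=(-1)$, and extending linearly to the group algebra gives the action $(a\mathbf{e}+b\mathbf{g})\cdot v=(a-b)v$.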
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9230040907859802, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/128946/grp-as-a-reflexive-coreflexive-subcategory-of-mon
# Grp as a reflexive/coreflexive subcategory of Mon

So my question is the statement made in the title: is there a functor F: Mon -> Grp which makes Grp into a (co)reflexive subcategory of Mon?

Thanks in advance

-

1 I don't think the reflection tag was intended in the category-theoretical sense. Perhaps adjoint could be more suitable, since a reflector is an adjoint functor? – Martin Sleziak Apr 7 '12 at 10:16

## 1 Answer

The forgetful functor $U : \mathrm{Grp} \to \mathrm{Mon}$ preserves limits and satisfies the solution set condition, thus has a left adjoint according to the General Adjoint Functor Theorem (see Mac Lane's book for details). You can write it down explicitly: it maps $M$ to the group $G(M)$ which is the free group defined by generators $\underline{m}$, one for each element $m \in M$, subject to the relations $\underline{1}=1$ and $\underline{mn}=\underline{m} \underline{n}$. Thus, elements of $G(M)$ have the form $\underline{m_1} \cdot \underline{m_2}^{-1} \cdot \underline{m_3} \cdot \underline{m_4}^{-1} \cdot \dotsc \cdot \underline{m_n}$. When $M$ is commutative, this is usually called the Grothendieck construction; here every element has the form $\underline{m} \cdot \underline{n}^{-1}$. But as you can see, these kinds of adjoint functors always exist when we just forget some part of algebraic structure.

The forgetful functor also has a right adjoint; it maps a monoid $M$ to its group of units $M^*$. This can be verified directly. But there is also a more general approach. Namely, from the construction of colimits in these categories it is clear that $U$ preserves colimits. Again one can verify the solution set condition (for the dual categories), so that $U$ has a right adjoint $R$. The underlying set of $R(M)$ is then $\cong \hom(\mathbb{Z},R(M)) \cong \hom(U(\mathbb{Z}),M) \cong M^*$. Thus, $R(M)=M^*$. Even if you don't know the Adjoint Functor Theorem, this method lets you calculate the right adjoint when you don't have a good guess.

In summary, $\mathrm{Grp}$ is a reflective subcategory of $\mathrm{Mon}$, the reflector being the "Grothendieck construction", as well as a coreflective subcategory, the coreflector being the group of units construction.

-

Thanks a million, this is awesome – user25470 Apr 7 '12 at 12:55
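To make both constructions concrete (an illustration added here, not part of the original answer): for the additive monoid of natural numbers the reflector gives the integers, $G(\mathbb{N},+)\cong(\mathbb{Z},+)$, with unit of the adjunction the inclusion $\mathbb{N}\hookrightarrow\mathbb{Z}$; for the multiplicative monoid of integers the coreflector gives the group of units, $(\mathbb{Z},\cdot)^*=\{\pm1\}$, with counit the inclusion $\{\pm1\}\hookrightarrow(\mathbb{Z},\cdot)$. The two constructions can differ drastically on the same underlying monoid: $(\mathbb{N},+)^*$ is the trivial group $\{0\}$ even though $G(\mathbb{N},+)$ is all of $\mathbb{Z}$.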
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9295682311058044, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/58286-cube-problem.html
# Thread:

1. ## cube problem

Suppose each edge is 6 inches long. Find the measure of the angle formed by diagonal GB and edge GC.

I can do similar problems but the angle on this one is killing me. Can someone please help me figure it out? thanks sooooo much

2. Hello, daydreembelievr!

Suppose each edge is 6 inches long. Find the measure of the angle formed by diagonal $GB$ and edge $GC$.

Triangle $BCG$ is a right triangle.

Code:
```
          * B
         /|
        / |
       /  |  _
      /   | 6√2
     /    |
    /     |
   / θ    |
G *-------* C
      6
```

$GC$ is an edge of the cube: $GC \:=\:6$

$BC$ is the diagonal of a face: $BC \:=\:6\sqrt{2}$

Then: $\tan\theta\:=\:\frac{6\sqrt{2}}{6} \:=\: \sqrt{2}\quad\Rightarrow\quad \theta \:\approx\:54.7^o$

3. I was having so much trouble visualizing it in 2D, but now that makes a lot of sense. thanks so much
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9401249289512634, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/time?page=4&sort=votes&pagesize=15
# Tagged Questions Time is defined operationally to be that which is measured by clocks. The SI unit of time is the second, which is defined to be 1answer 82 views ### How precisely can we date the recombination? The early universe was hot and opaque. Once it cooled enough, protons and electrons were able to form hydrogen atoms. This made the universe transparent, and was known as recombination. We can see the ... 1answer 120 views ### How to calculate the amount of night time during a flight? I have been asked to find a way to calculate the amount of time that a flight takes during night time. So far, I have the departure latitude and longitude and the time of takeoff, the arrival ... 4answers 483 views ### How to describe a well defined “zero moment” in time Suppose you have to specify the moment in time when a given event occurred, a "zero time". The record must be accurate to the minute, and be obtainable even after thousands of years. All the measures ... 1answer 85 views ### How much time has passed for Voyager I since it left the Earth, 34 years ago? 34 years have passed since Voyager I took off and it's just crossing the solar system, being approximately at 16.4 light-hours away. How much time have passed for itself, though? 2answers 182 views ### Will observers moving on a sphere experience time dilation? A single source of light exists at a fixed point in space relative to two observers. The two observers move on the surface of a shell with a fixed radius with the light source at its centre. They move ... 2answers 110 views ### Calculating Average Velocity I understand that the concept of an average of a data list means finding a certain value 'x', which ensures that the sum of the deviations of the numbers on the left of 'x' and on the right of 'x' ... 1answer 92 views ### How can we know the time frames for events in the early universe? I just finished watching Into the Universe with Stephen Hawking (2010). Specifically the third episode titled 'The Story of Everything.' In the episode Hawking is explaining the mainstream theories ... 1answer 128 views ### When calculating the local apparent sidereal time, which time scale should I use? UT1, UTC, TAI, TDB, or what? I need to determine the time difference between a given observation and the epoch from which certain constants apply. I typically work with the J2000.0 epoch. This is to ... 1answer 286 views ### Laser Coherence Length/Time Scenario: Imagine a collimated beam of white light falling on one refracting face of a prism. Let the light emerging from the second face be focused by a lens onto a screen. Suppose there is linear ... 2answers 159 views ### When moving fast Time slows down Vs speeds up I was watching an old cartoon movie where a scientist makes a gadget, which when bound on the wrist, freezes the movement of the whole world. So, that one may do 100s of things in a single second. ... 3answers 264 views ### Energy-time uncertainty and pair creation Usually, the energy-time analogue of the position-momentum uncertainty relation is quoted as $\Delta E \Delta t \geq \frac{h}{4 \pi}$. This has interpretational issues and such. But, with a suitable ... 1answer 122 views ### Is time the rate at which one moves through space I'll start out with the cliche attempt in a protective shield of my dignity. I am a young highschool kid just eager to learn and understand. If I'm way off or this is already a known idea, or maybe ... 1answer 79 views ### What is the difference between “at all times” and “at any particular time”? 
Morrison writes in "Morrison, Michael A. : Understanding quantum physics : a user's manual" $|\Psi(x,t)|^2 \xrightarrow[x\rightarrow\pm \infty ]{} 0$ at all times t [bound state] \$ ... 1answer 276 views ### What is actually meant by 'sun set' and 'sun rise' times, when taking into account the mirage due to light bending in the atmosphere I’ve heard from the likes of Brian Cox that what we see of the sun during a sunset and sun rise is actually the mirage of the sun. The Sun has actually set/risen and we see it due to the way light is ... 0answers 79 views ### General physics question involving Heisenberg Uncertainty Principle Question: An unstable particle produced in a high-energy collision is measured to have an energy of $483\ \mathrm{MeV}$ and an uncertainty in energy of $84\ \mathrm{keV}$. Use the Heisenberg ... 0answers 76 views ### Do particles travel backward and forward in time? [duplicate] All these classical ideas are pointless and obsolete today, because in quantum mechanics, the particles are completely different objects, defined by quantum motion of fields, not by the location of ... 0answers 50 views ### Wormholes and the illusion of time? I was watching a video on Youtube by Brian Greene, "the illusion of time."http://www.youtube.com/watch?v=j-u1aaltiq4 In that video, he introduces to me the idea of a "brane," or a slice of the ... 0answers 74 views ### “Time” by epistemic subdivision of a closed system [closed] There is the idea that there is no time in a completely closed (thus unobservable) system. Within such a system, a subsystem may be imagined to be split off by some virtual boundary. However, one ... 1answer 464 views ### Relation between comoving distance and conformal time? In cosmology, we have two quantities and I want to understand the physical relation between these two : $\chi = \int_{t_e}^{t_0}c\frac{dt'}{a(t')}$ : the comoving distance with $t_e$ the time at ... 1answer 205 views ### Two identical rockets, time dilation, and possible weirdness [duplicate] Possible Duplicate: Why isn't the symmetric twin paradox a paradox? Suppose there are two identical rockets, each carrying one of two identical clocks and one of two identical ... 0answers 89 views ### Confusion with infinity and time [closed] I have some confusion between with resolving the following situation. I know that no measurable quantity can have a value of infinity. For example, I just wrote something out, but clearly this ... 0answers 2k views ### What's the relationship between mass and time? [closed] [This question has arisen from a wish to understand an end-of-universe scenario: heat death] Are time and mass intrinsically linked? If so, does time "run slower" (whatever that may mean) in a ... 2answers 141 views ### What's the relationship between quantum entanglement and the relativity of time? Apologies in advance for what may be a stupid question from a layman. In reading recently about quantum entanglement, I understood there to be a direct link between entangled particles, even at ... 4answers 164 views ### Bear with me, this is a stupid question. Talking on the phone and time So a few days ago I noticed (im sure we all just know this) that when talking on the phone you receive the messages a few seconds after it is said by the sender. So person A says "hello" to person B. ... 2answers 171 views ### Is there a device that could measure the speed of time? Is there (or can there be) a device that could measure the speed and acceleration of time? 
2answers 160 views ### Imaginary time and string theory Is imaginary time an extra dimension? In other words, are time and imaginary time considered two separate dimensions? If so, does imaginary time appear (as a separate dimension) in string theory (thus ... 1answer 103 views ### What is the maximum time dilation between two objects, if one is standing still and the other is moving at $c$? What is the maximum ratio in the rate of change in time in reference to object $A$ which is standing still and object $B$ which is moving at the speed of light? 3answers 148 views ### Time slowing down problem When someone moves, time slows down for him. Let, a man standing still and another moving very very very fast, this happens for an hour (as measured by the standing man). Time has moved slower for the ... 2answers 264 views ### What isotope has the shortest half life? Question: What isotope has the shortest half life? 1answer 96 views ### Confusion about time shift in special relativity I have never really found a way to comfortably comprehend the idea of time shift even though I know its not the hard part of relativity theory. In that light, can someone point out what is wrong or ... 1answer 172 views ### Does the Earth's revolution around the Sun affect radioactive decay? Premises: The radioactivity is either hastened or slowed inside a fast moving aircraft. Speed of fastest aircraft: 3,529.6 km/h. The earth's revolution is: 107278.87 km/h. The earth's ... 2answers 295 views ### Is it true looking at an object from a great distance with a telescope will show the past version of the object? Once again from my son's workbook. It discussed standing on a planet 65 million light years away from Earth, with an extremely powerful telescope pointed at Earth. It claimed that then you could see ... 1answer 52 views ### Time Dilation in relation to Acceleration What I am looking for is a layman's explanation on the equations required to work out Time Dilation at high speeds including acceleration and deceleration of velocity. Or I would greatly appreciate it ... 1answer 147 views ### The real meaning of time dilation Is this true or false: If A and B have clocks and are traveling at relative velocity to each other, then to B it APPEARS that A's clock moving slower, but A sees his own clock moving at normal speed. ... 1answer 134 views ### Does photon possesses no time to cover any arbitrary distance? Photon travel 8 minutes (with speed $c$) from the sun to reach the earth. Any particle (or space-ship) with velocity $0.99 c$ covers the same distance (93 millions km) within less than 2 minutes ... 1answer 133 views ### Approximate Time Dilation at Rocket Speeds How do you calculate the time dilation effect experienced by a traveler traveling at a relatively low speed? Specifically, how much time dilation would a traveller moving at $v=0.0007 c$ (speed of ... 2answers 78 views ### Resources for current thought on time/spacetime? Are any of the big-name physicists associated with the time in the same way that Hawking and Penrose are associated with black holes? I'm interested in some good books that focus on the topic. 1answer 181 views ### Is relativistic motion equivalent to fluctuating gravitational fields? The theory of relativity makes very precise predictions about how an object's motion through space-time affects the passage of time for both the object and observers in other frames of reference. I ... 1answer 103 views ### Did space and time exist before the Big Bang? 
[duplicate] I accept the Big Bang theory. What I can't understand is how there can be a where or when to the Big Bang if space time did not exist prior to it. Did space and time exist prior to the Big Bang? 1answer 83 views ### Does our local time speed up as the Universe expands? Starting from a simplified radial Freidman Walker metric we have $$ds^2 = -c^2 dt^2 + a(t)^2 dr^2$$ How does one measure one's proper time operationally? One times a light beam along an element of ... 1answer 92 views ### Deriving infinitesimal time dilation for arbitrary motion from Lorentz transformations I'm trying to derive the infinitesimal time dilation relation $dt = \gamma d\tau$, where $\tau$ is the proper time, $t$ the coordinate time, and $\gamma = (1-v(t)^2/c^2)^{-1/2}$ the time dependent ... 4answers 197 views ### The bigger the mass, the more time slows down. Why is this? If I were to stand by a pyramid, which weighs about 20 million tons, I would slow down by a trillion million million million of second. Don't know if that's exactly right, but you get the point. Also, ... 1answer 114 views ### Time Contraction This is my first time posting on this site. I am a computer programmer that stumbled across a physics text book and have a question on special relativity. So firstly, I understand that there is no ... 1answer 2k views ### Finding deceleration and velocity using distance and time A car is moving down a street with no brakes or gas. The car is slowing due to wind resistance and the effect of friction. The road is flat and straight. The only data I have are timings taken at 100m ... 1answer 100 views ### Red shift and time distortion Superman throws a light emitting object away from himself fast enough to notice a red-shift. The object passes through a region in which time runs more slowly. From Superman's perspective, does the ... 1answer 183 views ### If you removed every particle from space…? [closed] I'm trying to find something Einstein (I think) said about time...It was something like.. "If you removed every particle from space and were left with only one pocket watch (clock, timepiece?), time ... 0answers 38 views ### Time ordering and Fermions Having time ordering operator for fermions, should it reverse sign if it swaps operators with opposite spin variable? In other words should $T[c_{t_1,\uparrow}c_{t_2,\downarrow}^\dagger]$ return ... 0answers 27 views ### What is the formula for calculating the length of any given day (sunrise to sunset)? [duplicate] In a specific date what law gives us perfect measurements and how will we measure if latitude is given? 0answers 34 views ### Why there is no operator for time in QM? [duplicate] Is there one central reason why there is no "Time" operator in QM? I know this question has been asked before, but I thought I would try to stimulate some fresh thinking. 0answers 36 views ### Can a black hole actually grow, from the point of view of a distant observer? [duplicate] Possible Duplicate: Black hole formation as seen by a distant observer I've read in several places that from the PoV of a distant observer it will take an infinite amount of time for new ...
http://math.stackexchange.com/questions/112938/when-is-sin-x-an-algebraic-number-and-when-is-it-non-algebraic?answertab=votes
# When is $\sin x$ an algebraic number and when is it non-algebraic? Show that if $x$ is rational, then $\sin x$ is algebraic number when $x$ is in degrees and $\sin x$ is non algebraic when $x$ is in radians. Details: so we have $\sin(p/q)$ is algebraic when $p/q$ is in degrees, that is what my book says. of course $\sin (30^{\circ})$, $\sin 45^{\circ}$, $\sin 90^{\circ}$, and halves of them is algebraic. but I'm not so sure about $\sin(1^{\circ})$. Also is this is an existence proof or is there actually a way to show the full radical solution. One way to get this started is change degrees to radians. x deg = pi/180 * x radian. So if x = p/q, then sin (p/q deg) = sin ( pi/180 * p/q rad). Therefore without loss of generality the question is show sin (pi*m/n rad) is algebraic. and then show sin (m/n rad) is non-algebraic. - For the second part you'll also need to assume $x\ne0$. – Henning Makholm Feb 24 '12 at 17:28 Claim is false: $\sin 0^{\circ}$ and $\sin 0$ (in radians) are both algebraic. – Arturo Magidin Feb 24 '12 at 17:29 4 Hint for the first part: Instead of considering $\sin(\frac{\pi}{180}x)$, view it as the real part of $z=-ie^{\frac{\pi}{180}xi}$ and consider $z^{180q}$ to see that $z$ is algebraic. Then the sine, being $\frac{z}{2}+\frac{\bar z}{2}$, is also algebraic, because the algebraic numbers are closed under addition. – Henning Makholm Feb 24 '12 at 17:31 – user21436 Feb 24 '12 at 17:45 ## 2 Answers $\sin\left(\frac{p}{q}\pi\right)=\sin\left(\frac{p}{q}180^\circ\right)$ is always algebraic for $\frac{p}{q}\in\mathbb{Q}$: Let $$\alpha=e^{\frac{i\pi}{q}}=\cos\frac{\pi}{q}+i\sin\frac{\pi}{q}.$$ Then $\alpha^q+1=0$, i.e. $\alpha$ is an (algebraic) $2q^\text{th}$ root of unity, i.e. it is a root of $x^{2q}-1$. Hence, so is its power $\alpha^p$ and reciprocal/conjugate power, which for $p$ an $q$ in lowest terms are roots of $x^q-(-1)^p=0$. Therefore, so too are $$\cos\frac{p\pi}{q}=\frac{\alpha^p+\alpha^{-p}}{2} \qquad\text{and}\qquad \sin\frac{p\pi}{q}=\frac{\alpha^p-\alpha^{-p}}{2i},$$ by the closure of the algebraic numbers as a field. Ivan Niven gives a nice proof at least that $\sin x$ is irrational for (nonzero) rational $x$. As @Aryabhata points out, the Lindemann-Weierstrass theorem gives us that these values of $\sin$ and $\cos$ are transcendental (non-algebraic), by using the fact that the field extension $L/K$ of $L=\mathbb{Q}(\alpha)$ over $K=\mathbb{Q}$ has transcendence degree 1. - Why is a^q +1 = 0 algebraic? I thought algebraic number means it is the root of a polynomial with integer coefficients. this might be beyond the scope of my knowledge. But your alpha is a root of unity , and is a complex number. – bob thornton Feb 24 '12 at 18:19 I edited my post. Is it clear now? $\alpha$ is a root of $x^q+1=0$ and hence also $x^{2q}-1=0$. – bgins Feb 24 '12 at 18:23 @bob: The relevant polynomial is $x^q + 1$, not $\alpha^q + 1$. As $\alpha$ is a root, and $sin$ can be expressed linearly in $\alpha$, we have that $sin$ is algebraic at that value too. – mixedmath♦ Feb 24 '12 at 18:23 Lindemann-Weierstrass theorem implies that for $\alpha$ non-zero algebraic, $\sin \alpha$ is transcendental. - well sin (pi*x) , pi*x is not algebraic since pi is not algebraic. That is sufficient to prove sin(pi*x) is algebraic? Are you saying that for any 'a' transcendental sin(a) is algebraic? I should qualify 'a' to be a real transcendental. so is sin(e) algebraic? 
– bob thornton Feb 24 '12 at 18:48 @bobthornton: No, I am saying for any $a$ algebraic (and non-zero), $\sin a$ is transcendental. The other portion of your question was answered by bgins. No clue when $a$ is transcendental (except rational multiples of $\pi$). – Aryabhata Feb 24 '12 at 18:58
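As a concrete sanity check of the degree case (not a replacement for the argument above), a computer algebra system can exhibit an explicit integer polynomial satisfied by $\sin$ of a rational number of degrees. This is a rough sketch using SymPy; the particular angles, including the $1^{\circ}$ asked about in the question, are just examples, and the last one takes noticeably longer because its minimal polynomial has much higher degree.

```python
from sympy import sin, pi, Symbol, minimal_polynomial

x = Symbol("x")

# sin(d degrees) = sin(d*pi/180 radians); for rational d this is the sine of a
# rational multiple of pi, so it should be algebraic.  SymPy can exhibit an
# explicit polynomial with integer coefficients that it satisfies.
for d in (30, 45, 1):
    p = minimal_polynomial(sin(d * pi / 180), x)
    print(f"sin({d} deg) is a root of: {p}")
```

For $30^{\circ}$ and $45^{\circ}$ this returns $2x-1$ and $2x^2-1$, matching the familiar radical values; for $1^{\circ}$ it returns a large but perfectly explicit polynomial, which is consistent with the existence argument above.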
http://physics.stackexchange.com/questions/13169/how-can-a-laser-pointer-have-range-of-several-kilometers-in-atmosphere/13224
How can a laser pointer have range of several kilometers in atmosphere? Laser pointers manufacturers claim that some pointers have range of several kilometers. Okay, they use a powerful laser, but that powerful laser usually has power less than one watt. Okay, the laser beam is very focused. But what about atmosphere particles like dust and vapor? Why don't these particles diffuse the beam and make it lose energy? How does it happen that a less than one watt laser can have several kilometers range in atmosphere? - I don't think that the test is carried in the atmosphere. I suspect it is carried in a clean and dry room (possibly, under vacuum) and use mirrors. This is much like the tests for batteries which are carried under ideal conditions (temperature, humidity etc.) – Yotam Aug 4 '11 at 9:10 3 – Leonidas Aug 4 '11 at 13:27 Sure the intervening atmosphere saps energy of the beam and spreads it out, but what do you mean by "range"? The distance at which its holder can see its spot? or the distance at which someone at the other end can see it? – Mike Dunlavey Jul 3 '12 at 18:49 @Mike Dunlavey: Since it's a pointer I guess it's the distance at which the holder can see the spot. – sharptooth Jul 4 '12 at 6:18 4 Answers A laser pointer's energy and lights are concentrated in a very small light cone to reach a quite high intensity. So its labelled Wattage is much smaller than a usual bulb while its light is very strong at one point. And the labled power on laser pointers is the output power. The typical input power of a 50mW pointer is about 0.5 W. This make the difference looks bigger. Actually common laser pointers used by teachers and lecturers are less than 5mW. The farthest distance to see their lights is about dozens of meters. But the laser pointers used by professional and amateur astronomers are much more powerful. You can see the light path caused by Tyndall effect directly as below image(source, though this impressive light path is not generated by a handheld pointer, the scence is similar.) The energy reduction is mainly caused by Rayleigh scattering, which is relatively small compared with laser's intensity. So the beam can easily reach several miles away with the power higher than 50 mW. Update: Georg insist the previous image can not be used to discribe laser pointers. So I add another real pointer picture here. But I have no feeling about distance with this one. - ""The Wattage does not refer to the the amount of light. It only indicates the amount of electrical energy consumed by the device."" This is rubbish! Your idea applies for light bulbs or a electric heater or the motor of Your lawn mover. For lasers the "wattage" is the power of the light coming out! -1 – Georg Aug 6 '11 at 9:24 Yep, I made a mistake here. The labled power on laser pointers is output power. The deviation between input power and output value is conversion efficiency. The typical input power of a 50mW pointer is about 0.5 W. But This is not the point. – gerry Aug 6 '11 at 12:40 Laser pointers are those handheld things used to point. The thing in Your picture is a artificial star to control adaptive optics, which has nothing to do with a pointer! Range of pointers is limited by chep lenses, thats all. – Georg Aug 6 '11 at 13:02 Those laser pointers (there are 5 actually) are used to play as artificial stars. They are more powerful (5W each) than handheld device, but ARE still pointers and have a range of several kilometers. Why do you think they are different ? 
– gerry Aug 6 '11 at 13:24 Because they are not used to point! Read "laser pointer" in wikipedia! – Georg Aug 6 '11 at 13:26 Did you find this question interesting? Try our newsletter email address What about the elephant in the room, called coherence? Laser light and ordinary light differ in the amount of coherence of the beam. Incoherent beams lose intensity as 1/r**2, where r is the distance from the source. Coherent beams in vacuum disperse slowly according to optical equations. In the atmosphere there will be absorption and scattering, as discussed in other answers. Have a look at http://en.wikipedia.org/wiki/Lunar_Laser_Ranging_experiment to see laser light reflected from the moon, as far as distance travelled goes. - As far as I know that's not correct, only beams that illuminate the whole unit sphere ($4\pi$ steradian) lose 'intensity' with $1/r^2$. The difference is in the divergence of the beams, the more convergent the beams are, the less they lose 'intensity' ('intensity' is a bit tricky here because there are various units that can be used). – Tim Aug 8 '11 at 7:30 – anna v Aug 8 '11 at 9:41 You're right, I had some things mixed up. But I fail to see why this does not apply to coherent light. Even laser beams diverge (slightly) and thus decay with $1/r^2$. – Tim Aug 8 '11 at 10:18 Laser beams diverge according to the plan of the optics,as in the link I gave in my answer, plus a bit of scattering and absorption will reduce the intensity a bit more. Note the first formula for intensity has a negative exponential for the fall along the axis. – anna v Aug 8 '11 at 13:06 @annav Thanks for the flag, but I haven't found any traces of voting abuse, and this is not mods role to judge the meritoric aspects of users' voting decisions. – mbq♦ Aug 8 '11 at 19:26 show 4 more comments It is most likely the same reason as why the sky is blue. More specifically, Rayleigh scattering is proportional to the fourth inverse power of the wavelength, so if you choose the wavelength of the laser to be large (i.e. towards the red end of the visible spectrum), then you can suppress Rayleigh scattering significantly. In that case the range of the beam can be very large. - Upshot: the range of a laser is greater because the light is concentrated in a very narrow beam. Without going into much detail of the power of the laser, I think the problem here has to do with the definition of 'intensity'. Wikipedia lists several different units for intensity, the one that most closely matches 'brightness' as perceived by the eye is probably radiance. The units of radiance are $$L = W·sr^{−1}·m^{−2}$$ or watt per steradian per square metre, a 'steradian' being a two-dimensional angle on the sky. Now given a fixed power for a certain light source, the difference between ordinary lighting and a laser are two-fold: 1. The surface ($m^{2}$) of a laser is much smaller (probably a factor of 10) 2. The divergence ($sr$) of the laser beam is also much smaller (factor ~100 or more). Both these factors mean that the radiance for the same power ($W$) a laser beam is at least 1000 times as 'bright' as a regular light source. Since the brightness is higher, the beam will be visible up to a longer distance. It is important to note that this result is regardless of the absorbance or scattering that occurs, which in principle happens for both laser and non-laser beams. - Aha, what about that in vacuum of space? 
The question is here about "reach" in the atmosphere (what ever that means) – Georg Aug 8 '11 at 8:04 In a vacuum (which space is not), light travels more or less uninterruptedly. But because of non-zero divergence of any beam, the 'brightness' will slowly go to zero. – Tim Aug 8 '11 at 8:08 1 – anna v Aug 8 '11 at 9:44 Anna I'm not sure why you think these answers are wrong. Beyond the rayleigh range (on the order of 10 meters for a pointer) a gaussian laser beam still falls in intensity as 1/r^2, the only difference is the smaller divergence angle. – user2963 Oct 19 '11 at 20:56
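To put rough numbers on the divergence comments above (including the remark about the Rayleigh range of a pointer), here is a small back-of-the-envelope script. It is only a sketch: the 532 nm wavelength and 1 mm beam waist are assumed values for a typical green pointer, not figures from the question.

```python
import math

wavelength = 532e-9   # m, assumed green laser pointer
w0 = 1.0e-3           # m, assumed beam waist radius at the aperture

# Gaussian-beam far-field half-angle divergence and Rayleigh range
theta = wavelength / (math.pi * w0)      # rad
z_R = math.pi * w0**2 / wavelength       # m

print(f"divergence ~ {theta*1e3:.2f} mrad, Rayleigh range ~ {z_R:.1f} m")

# Beam radius and relative on-axis intensity at various distances
for z in (100.0, 1000.0, 5000.0):        # m
    w = w0 * math.sqrt(1 + (z / z_R) ** 2)          # Gaussian beam radius
    rel_intensity = (w0 / w) ** 2                    # on-axis intensity ~ (w0/w)^2
    print(f"z = {z:7.0f} m: beam radius ~ {w:6.2f} m, "
          f"on-axis intensity ~ {rel_intensity:.2e} of initial")

# Rayleigh scattering scales as 1/lambda^4, so blue is scattered far more than red
print("blue (450 nm) scatters about", round((650 / 450) ** 4, 1),
      "times more strongly than red (650 nm)")
```

With these assumed numbers the beam is still well under a metre across several kilometres out, which is why a sub-watt pointer remains visible over such distances even though the atmosphere removes some of the light.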
http://math.stackexchange.com/questions/tagged/approximation-theory+integration
# Tagged Questions 2answers 82 views ### Sequence of polynomials converging to zero function Find a sequence of polynomials $(f_n)$ such that $f_n \rightarrow 0$ point wise on $[0,1]$ and $\int_0^1 f_n(x) \rightarrow 3$. Calculate $\int_0^1 \sup_n |f_n(x)| dx$ for this sequence of ... 1answer 178 views ### prove equality with integral and series I am stuck on one question with integral. Help me please to show that with $n=1$ the following is true ... 0answers 132 views ### integral with Bessel function Let $n$ be half an odd integer, say $n=k+1/2, k \in Z$. Let $q\geq 1$. I would like to calculate (or approximate) the following integral \int_0^{\infty}\left(\sqrt{\frac{\pi}{2}}\cdot 1\cdot 3\cdot ...
http://unapologetic.wordpress.com/2007/12/14/my-solution-to-the-xkcd-puzzle/?like=1&source=post_flair&_wpnonce=7b62a7b6bf
# The Unapologetic Mathematician ## My solution to the XKCD Puzzle So here’s how I’ve been thinking about that XKCD puzzle. My approach is essentially an exercise in the techniques covered in John Baez’ Quantum Gravity seminar for 2003-2004. Basically, we consider each possible path from the source — which we’ll assume is at the origin of our lattice — to the target — which will be at lattice coordinates $(a,b)$ — as a series circuit, and the collection of all possible paths is a giant parallel circuit. If a given path has length $n$ it passes through $n$ resistors, and so its total resistance is $n$. Thus we need to form the sum over all paths $\displaystyle\sum\limits_{\gamma}\frac{1}{\mathrm{length}(\gamma)}=\sum\limits_{n=0}^\infty\frac{\#\{\mathrm{length}(\gamma)=n\}}{n}$ and take the reciprocal of this sum. So we’ve got a combinatorial problem: how many lattice paths $\gamma$ have length $n$ and go from the origin to $(a,b)$? Let’s call this number $G_{a,b,n}$ and build the “exponential generating function” $\displaystyle\Gamma_{a,b}(t)=\sum\limits_{n=0}^\infty\frac{G_{a,b,n}}{n!}t^n$ We choose the exponential generating function here rather than the ordinary generating function $\sum G_{a,b,n}t^n$ because of a manipulation we want to do later that looks nicer for exponential generating functions. We’ll go from $\Gamma_{a,b}$ to our desired sum a little later. For now, let’s consider a simpler generating function. Consider the number $U_{a,n}$ of paths on a one-dimensional lattice with length $n$ that start at ${0}$ and end at $a$. First off, if $n<a$ we have no paths that work. Similarly, if $n-a$ is not divisible by $2$ we won’t have any good paths. But when $n=2k+a$, there are generally many different paths. How do we get our hands on one? Choose $k$ steps to take left and the other $k+a$ to take right, and we’ll end up $a$ steps to the right. So we set $U_{a,2k+a}=\binom{2k+a}{k}$ to get the exponential generating function $\displaystyle\Upsilon_a(t)=\sum\limits_{k=0}^\infty\frac{U_{a,2k+a}}{(2k+a)!}t^{2k+a}=\sum\limits_{k=0}^\infty\binom{2k+a}{k}\frac{1}{(2k+a)!}t^{2k+a}=i^{-a}J_a(2it)$ where $J_a(x)$ is a Bessel function of the first kind. Now a path in the two-dimensional lattice is really a mixture of two one-dimensional paths. That is, we break our $n$ steps up into chunks of $n_1$ and $n_2$ steps, respectively, put a one-dimensional path on the $n_1$ chunk ending at $a$, and put a one-dimensional path on the $n_2$ chunk ending at $b$. And it turns out that generating functions are perfect for handling this! $\displaystyle\Gamma_{a,b}(t)=\sum\limits_{n=0}^\infty \frac{G_{a,b,n}}{n!}t^n=\sum\limits_{n=0}^\infty \left(\sum\limits_{n_1+n_2=n}\binom{n}{n_1}U_{a,n_1}U_{b,n_2}\right)\frac{t^n}{n!}=$ $\displaystyle\sum\limits_{n=0}^\infty\sum\limits_{n_1+n_2=n}\frac{U_{a,n_1}}{n_1!}\frac{U_{b,n_2}}{n_2!}t^{n_1+n_2}=\sum\limits_{n_1=0}^\infty\frac{U_{a,n_1}}{n_1!}t^{n_1}\sum\limits_{n_2=0}^\infty\frac{U_{b,n_2}}{n_2!}t^{n_2}=$ $\displaystyle\Upsilon_a(t)\Upsilon_b(t)=i^{-a-b}J_a(2it)J_b(2it)$ This tells us how to find the number $G_{a,b,n}$ of paths of length $n$ that end at $(a,b)$ — it’s just the $n$th derivative of this power series, evaluated at $t=0$. 
That is $\displaystyle G_{a,b,n}=\frac{d^n}{dt^n}\left(i^{-a-b}J_a(2it)J_b(2it)\right)$ Now let’s consider the ordinary generating function for $G_{a,b,n}$: $\displaystyle\sum\limits_{n=0}^\infty G_{a,b,n}t^n$ Here the coefficient of $t^n$ is the number of paths of length $n$ (and thus with resistance $n$) from the origin to $(a,b)$. What we want to calculate is the sum $\sum\frac{G_{a,b,n}}{n}$. To get at this, we’ll divide our power series by $t$ and then integrate with respect to $t$. This will give the series $\sum\frac{G_{a,b,n}}{n}t^n$, which we can then evaluate at $t=1$. Notice that we never have any paths of length zero unless we’re counting up the reciprocal resistance from the origin to itself, which should definitely be infinite anyway, so dividing by $t$ isn’t really a problem. Now I admit there’s one glaring problem here: I don’t know offhand how to go from an exponential generating function to an ordinary generating function, much less how to integrate what we get from doing that to this product of Bessel functions. Any thoughts? [UPDATE]: I was just working on some numerics and there’s a problem here somewhere. The coefficients are off, but by a (sort of) predictable amount. I don’t really have time to fix this now, so I’ll come back with more later. [UPDATE]: Okay, I see what’s wrong now. I’m adjusting what I said above. [UPDATE]: Someone pointed out to me that after all of this I need to do a renormalization just like for the path integrals in quantum field theory. That is, in the above I need to restrict to paths that never hit the same edge twice, which is a harder combinatorial problem than I expected. Still, it’s nifty to see the Bessel functions… ### Like this: Posted by John Armstrong | Uncategorized ## 4 Comments » 1. I don’t think it’s quite so complicated; I seem to recall seeing a solution to a very similar problem (the locations on the grid were different, but everything else was the same) that was only a couple lines. Unfortunately I have no idea where I saw it. Comment by | December 14, 2007 | Reply 2. I’ve also seen simple solutions, but usually they’re for only one choice of terminal point, and I’m not quite sure I buy them. It doesn’t help that I’ve seen multiple “simple” solutions. One in particular for the resistance across a single resistor claims to get 1/2 an ohm in two different ways. The first involves some hand-waving about impedance that I’m not sure I trust, and the second seems to completely neglect all the other closed paths that get back around from one point to the other. And this is among the clearest expositions I could find. Basically, to whatever extent one or another special case I’ve seen turns out to be right, there seems to be almost no concept of “what’s really” going on here, which leads to a true understanding of not only the special case, but of the problem as a whole. My solution, on the other hand, recognizes the basic idea as thinking of the whole grid as a massively parallel circuit, and the calculation to actually be a discretized path-integral. At each step (modulo an error I’m about to fix) it’s clear what we’re counting or adding up, and what relation that part has to the whole. The only problem is that I don’t yet know how to extract an answer from the framework. Comment by | December 14, 2007 | Reply 3. See http://aps.arxiv.org/abs/cond-mat/9909120 for solutions in terms of lattice Green functions for various lattices. For this particular problem the answer would be 4/pi-1/2. Comment by | December 15, 2007 | Reply 4. 
This isn’t particularly useful for the problem, but it’s a nice little thing I found while I was working on it. When a+b has the same parity as n, G_{a,b,n} is just (n choose (a+b+n)/2) (n choose (a-b+n)/2). (Otherwise, it’s zero, of course.) This is because you need to choose exactly (a+b+n)/2 steps in the path to go either up or right, and exactly (a-b+n)/2 steps to go either down or right. Comment by Anton Malyshev | December 18, 2007 | Reply
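The closed form in the last comment is easy to sanity-check by brute force. The sketch below counts nearest-neighbour walks on $\mathbb{Z}^2$ directly and compares with the binomial formula; it only checks the raw counts $G_{a,b,n}$ and says nothing about the renormalization issue raised in the final update.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def walks(a, b, n):
    """Number of length-n nearest-neighbour walks on Z^2 from (0,0) to (a,b)."""
    if n == 0:
        return 1 if (a, b) == (0, 0) else 0
    # last step came from one of the four neighbours of (a,b)
    return sum(walks(a + dx, b + dy, n - 1)
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def closed_form(a, b, n):
    """Binomial formula from the comment: C(n,(a+b+n)/2)*C(n,(a-b+n)/2), else 0."""
    if (a + b + n) % 2:
        return 0
    return comb(n, (a + b + n) // 2) * comb(n, (a - b + n) // 2)

for a, b in ((1, 0), (2, 1), (0, 0)):
    for n in range(9):
        assert walks(a, b, n) == closed_form(a, b, n), (a, b, n)
print("brute-force walk counts agree with the binomial formula")
```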
http://physics.stackexchange.com/questions/15992/problem-with-an-electricity-thermodynamics-assignment
# Problem with an electricity / thermodynamics assignment I've been trying to figure this one out for a while on my own, so I'd like to ask for your help if you could offer some. A heater made out of a wire with a diameter $R = 0.2\text{ mm}$, length $4\pi\text{ m}$ and electrical resistivity of $0.5\times 10^{-6}\ \Omega\;\mathrm{m}$ is connected to a voltage source of $220\text{ V}$, sinked in the water. Which mass of water will it heat up from $20^{\circ}\mathrm{C}$ to $50^{\circ}\mathrm{C}$ in the time of 10 minutes? (C of water = $4200\ \mathrm{J\;kg}/\mathrm{K}$) I know I have the electrical properties of the wire and the thermodynamic properties of the water, but I don't know how to proceed from there. We've been studying electricity and I am not really aware how I can connect it with thermodynamics? - Hi Phystudent and welcome to Physics Stack Exchange! This is actually not a homework help site, it's for conceptual questions about physics, so your question in its original form was not appropriate for the site. I tried to fix it up and make it more focused on the underlying concepts and more generally useful. I'm not sure how successful I was, but I think as it is now, it can perhaps stay open. If you have any problems with the changes I made, you're free to make further edits, or you can add a comment. – David Zaslavsky♦ Oct 21 '11 at 4:22 The three answers below boil down to the same method. – Fingolfin Oct 21 '11 at 22:09 ## 3 Answers You may consider this question from perspective of energy view: the electric energy is consumed by the resistor and convert this energy to the thermal energy (the source of heat that heat up the water). So from this point of view, if you can assume the 100% electric energy converting to thermal energy, and usually, this assumption is right for resistors since there is no other kind of energy that electric energy can convert to, since this is not a motor or a light bulb. Thus you will have the following equation: $Heat = I^2Rt = \frac{U^2}{R}t$ So this amount of heat will be absorbed by water and heat the water up, so $Heat = c_{water}\cdot m\cdot \Delta T$ By solving these two equations, you can get the amount of water ($m_{water}$) you need as $m_{water} = \frac{U^2 t}{R\cdot c\cdot \Delta T}$ where $R$ can be easily calculated as $R=\frac{\rho L}{\pi r^2}$ for this cylindrical resistor. So the key point to connect the electricity to thermal dynamics is the conservation of energy so that energy has to convert from one form (electric energy in your case) to another form (thermal energy or heat in your case), and little or no energy is converted into other form such as light. - nice 'physics' answer! – Nic Oct 21 '11 at 21:01 Find the resistance of the particular wire. Then calculate the power it uses. Assume this power is dissipated as heat. Find how much energy is converted to thermal energy by heat in 10 minutes. Use the equation Q = mc ΔT to find the mass of the water. - find m from this equation, $$mC_p\Delta T = \frac{V^2}{R}t$$ where,$$R=\frac{\rho l}{\pi r^2}$$ - Note that r in the bottom of this equation is the radius, and you were given a diameter. I warn you of this mistake because you called your diameter R. – Fingolfin Oct 21 '11 at 22:08
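For a concrete check of the formulas in these answers, here is the arithmetic with the numbers from the problem (assuming, as above, that all of the electrical energy ends up in the water; note the specific heat should read $4200\ \mathrm{J/(kg\,K)}$).

```python
import math

U = 220.0            # V, source voltage
d = 0.2e-3           # m, wire diameter (so the radius is d/2)
L = 4 * math.pi      # m, wire length
rho = 0.5e-6         # ohm*m, resistivity
t = 10 * 60          # s, ten minutes
c = 4200.0           # J/(kg*K), specific heat of water
dT = 50.0 - 20.0     # K, temperature rise

r = d / 2
R = rho * L / (math.pi * r**2)   # resistance of the cylindrical wire
P = U**2 / R                     # electrical power, all dissipated as heat
Q = P * t                        # heat delivered in 10 minutes
m = Q / (c * dT)                 # mass of water heated by dT

print(f"R = {R:.0f} ohm, P = {P:.0f} W, Q = {Q:.0f} J, m = {m:.2f} kg")
```

The resistance works out to a round number here, which makes this a convenient problem to check by hand before the test.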
http://math.stackexchange.com/questions/238444/show-x-times-s1-is-parallelizable-if-x-admits-a-non-vanishing-normal-vecto
# Show $X \times S^1$ is parallelizable if $X$ admits a non-vanishing normal vector field Let $X \subset \mathbb R^{n+1}$ be an embedded submanifold of codimension 1 such that there is a map $F:X \to \mathbb R^{n+1}$ such that for all $x \in X$, $F(x)$ is in the orthogonal complement of $T_xX$ in $\mathbb R^{n+1}$. Show that $X \times S^1$ admits a global frame. I think we have 2 global normal vector fields on $X \times S^1$ namely $(N,0)$ and $(0,M)$ where $M,N$ are global normal fields of $X, S^1$ respectively. So I guess if $X$ was just embedded in $\mathbb R^2$ then their cross product would work. But I have no idea how to approach this in higher dimensions. Any ideas? - ## 1 Answer At any point $x \in X$, the normal space $N_{x}(X)$ must be $1$-dimensional, so any vector orthogonal to a normal vector must be a tangent vector (since we can write any vector space as the direct sum of a subspace and its orthogonal complement, and the dimension of $T_{x}(X)$ is the dimension of $X$). Let $N(X)$ be the non-vanishing normal vector field, so $\frac{N(x)}{|N(x)|}$ is the unit normal at $x$. Take $v^{j}(x,z)$ for $(x,z) \in X \times \mathbb{S}^1$ to be $$v^{j}(x,z) = \big(e_{j} - \langle \frac{N(x)}{|N(x)|},e_{j} \rangle \frac{N(x)}{|N(x)|}, \langle \frac{N(x)}{|N(x)|},e_{j} \rangle iz \big)$$ where $e_j$ is the $j$th standard basis vector of $\mathbb{R}^{n+1}$, and we embed $\mathbb{S}^1$ into $\mathbb{C}$ so that we can get a tangent vector to $z$ by multiplying by $i$. Note that $$e_{j} - \langle \frac{N(x)}{|N(x)|},e_{j} \rangle \frac{N(x)}{|N(x)|}$$ is a tangent vector because it is orthogonal to $N(x)$, and by our above discussion it must be in the tangent space. Thus $v^j$ is a vector field, so we need to show that $v^{1}, \cdots, v^{n+1}$ are linearly independent at each point. Suppose that there exist $x$, $z$, and $c_j$ not all zero, for which $$\sum c_{j} v^{j}(x,z) = 0$$ Then it must be the case that $$\sum c_{j}e_{j} = \lambda N(x), \,\,\,\,\, \lambda \neq 0$$ since the only way the projection to the orthogonal complement could vanish would be if the vector itself was parallel to $N(x)$ (and $\lambda \neq 0$ because the $e_j$ are linearly independent while the $c_j$ are not all zero). But now, if we take the dot product of both sides of this equality with $\frac{N(x)}{|N(x)|}$, we get $$\sum c_{j} \langle \frac{N(x)}{|N(x)|},e_{j} \rangle = \lambda |N(x)|$$ where the left-hand side is zero by hypothesis, since it is the right coordinate of $\sum c_{j} v^{j}(x,z)$, which was the zero vector. But this implies that $\lambda |N(x)| = 0$, and since $\lambda \neq 0$, we must have that $N(x) = 0$, which is impossible since we assumed $N$ was a non-vanishing normal field. -
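As a numerical sanity check of this construction (not a proof), one can take the simplest example $X = S^2 \subset \mathbb{R}^3$ with unit normal field $N(x) = x$, sample random points of $S^2 \times S^1$, and verify that the three vectors $v^1, v^2, v^3$ are linearly independent there. The sketch below identifies $T_{(x,z)}(S^2 \times S^1)$ with a 3-dimensional subspace of $\mathbb{R}^3 \times \mathbb{R}^2 \cong \mathbb{R}^5$.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_vectors(x, z):
    """v^j(x, z) for X = S^2 in R^3 with unit normal N(x) = x, S^1 in C."""
    n = x / np.linalg.norm(x)          # unit normal at x
    iz = np.array([-z[1], z[0]])       # i*z, a tangent vector to S^1 at z
    vs = []
    for j in range(3):
        e = np.zeros(3)
        e[j] = 1.0
        coef = float(n @ e)            # <N/|N|, e_j>
        tangent_part = e - coef * n    # projection of e_j onto T_x S^2
        vs.append(np.concatenate([tangent_part, coef * iz]))
    return np.array(vs)                # three vectors in R^3 x R^2 = R^5

for _ in range(1000):
    x = rng.normal(size=3)
    x /= np.linalg.norm(x)                         # random point on S^2
    t = rng.uniform(0, 2 * np.pi)
    z = np.array([np.cos(t), np.sin(t)])           # random point on S^1
    assert np.linalg.matrix_rank(frame_vectors(x, z)) == 3

print("v^1, v^2, v^3 were linearly independent at every sampled point")
```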
http://mathhelpforum.com/advanced-applied-math/74644-kalman-filtering-application-3.html
# Thread: 1. Originally Posted by atlove OK, i got what you mean. What we can do now is to learn the knowledge of the system. Otherwise, we need to try and test with the given data to make such a model. Can you show me an example that how to found those variable with enough information? Not really Originally Posted by atlove Is the E(x) purely the mean of state? If i have a matrix $x = \begin{bmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{bmatrix}$ how to find the E(x)? Thank You E(.) is the expectation operator, which gives the mean of its argument. If x is the state E(x) is the expected value of the state. The expectation of a matrix is the matrix of expectations. CB 2. Originally Posted by CaptainBlack Originally Posted by atlove OK, i got what you mean. What we can do now is to learn the knowledge of the system. Otherwise, we need to try and test with the given data to make such a model. Can you show me an example that how to found those variable with enough information? Not really E(.) is the expectation operator, which gives the mean of its argument. If x is the state E(x) is the expected value of the state. The expectation of a matrix is the matrix of expectations. CB Thank You. I am waiting the information about the system. If I get that, i will post here asap. Thanks your reply so long. 3. Dear CaptainBlack, Besides the system knowledge, I want to ask sth. I am getting confused on the innovation part. Is the z(k) in the State estimate update (x=x+K(z-Hx)) measured from the system directly? Since according to the latest matlab code you posted before, you used the z value from the data (function rv=KalmanRun(data)) to find each set of estimation of state. If I want to know the estimation of z in k instant. Am i supposed to use the process equation y(k)=H(k)x(k)+w(k)?
The student is considered doing a great job by tracing the power consumption closely with the value from our simulator. Suggest the student use a moving window of say 10 minutes, i.e. 60 sets of data. During these 10 minutes, he should adjust the five parameters slightly to have a wider dynamical span. Then, he starts to trace the power consumption from the 61st set of data onwards. With 2-61st sets of date, he can then predict the power consumption of the 62nd set of data. P.S. The simulator is a software used to get value of parameters. NAE is a control unit so that i can get the information by my computer through this unit. One thing i don't understand, Why 2-61st sets of date data are necessary to predict the power consumption? Kalman filtering told that we just need the previous estimation of state (t=k-1) and measurement at this instant (t=k) to predict the new estimation of state. 6. Originally Posted by atlove So, z(k) is measured from the system. If i use the esitmation of state (x1...x5) to find the z (x6) by z=Hx+v. What is the meaning of this z? Is it the optimal estiamtion of z (x6) and more accurate than the measure one? Hx is the prediction of the measurement from the current state, v is an unknown noise term, the z that appears in the equations is your x6. CB 7. Originally Posted by atlove So, z(k) is measured from the system. If i use the esitmation of state (x1...x5) to find the z (x6) by z=Hx+v. What is the meaning of this z? Is it the optimal estiamtion of z (x6) and more accurate than the measure one? CaptainBlack, I have got some new information from my consultant. 1. Five variables can be adjusted by the student using rheostats and they are: Condensing water temperature into chiller, adjustable between 20C to 30C Condensing water flow into chiller, adjustable between 50 l/s to 80 l/s Chilled water temperature into chiller, adjustable between 10C to 16C Chilled water temperature out of chiller, adjustable between 6C to 9C Chilled water flow, adjustable btween 40 l/s to 70 l/s 2. Then, the simulator can calculate the power consumption real-time. 3. All these six variables will be available on a 10-second frequency on the NAE so that the student can read them continuously. 4. The student's job is to predict the power consumption continuously by using techniques in Kalam Filtering. 5. The way of calculating the power consumption inside our NAE will be confidential. 6. The student is considered doing a great job by tracing the power consumption closely with the value from our simulator. Suggest the student use a moving window of say 10 minutes, i.e. 60 sets of data. During these 10 minutes, he should adjust the five parameters slightly to have a wider dynamical span. Then, he starts to trace the power consumption from the 61st set of data onwards. With 2-61st sets of date, he can then predict the power consumption of the 62nd set of data. P.S. The simulator is a software used to get value of parameters. NAE is a control unit so that i can get the information by my computer through this unit. One thing i don't understand, Why 2-61st sets of date data are necessary to predict the power consumption? Kalman filtering told that we just need the previous estimation of state (t=k-1) and measurement at this instant (t=k) to predict the new estimation of state. The filter will run from the first data set perfectly well, you just won't have very good estimates untill the system has processed enough data to have settled down. 
As a miniimum you would expect to need 5 data points to estimate 5 coefficients, assuming that the 5 data points constitute a linearly independednt set. By the look of this you should be able to get the filter to work by setting the initial values to be some small values and the covariance to be diagonal with values of the order of 1,000,000 on the diagonal (assuming power is measured in something like watts). CB 8. Originally Posted by CaptainBlack The filter will run from the first data set perfectly well, you just won't have very good estimates untill the system has processed enough data to have settled down. As a miniimum you would expect to need 5 data points to estimate 5 coefficients, assuming that the 5 data points constitute a linearly independednt set. It means that the 2-61st dataset are used to intialize the estimation? Only previous estimation of state are necessary for further estimation? But why not 1-61st dataset? By the look of this you should be able to get the filter to work by setting the initial values to be some small values and the covariance to be diagonal with values of the order of 1,000,000 on the diagonal (assuming power is measured in something like watts). CB How to find that small values, by taking average of data or other else? Is there any problem to assume that to be 0? Is the covariance values larger is better? are you mean: $<br /> \begin{bmatrix} 1000000 & 0 & 0 \\ 0 & 1000000 & 0 \\ 0 & 0 & 100000 \end{bmatrix}<br />$ Besides , he didn't give me any information on the variance to determine R. Can i assume it to be anything? 9. I 've done some modification base on the latest matlab code which plot the curve of estimation of z against the real measured z Innovation Code: ```function [X,P]=KalmanInov(Data,R,X,P) % % Kalman Filter Inovation Processing % % data is a row vector with the independent vars and the depended var % ll=length(Data) H=Data(1:ll-1); %extract the H matrix or vector from the data z=Data(ll); %extract the measurement from the data % Below is the standard Kalman inovation equations in matrix form S=H*P*H'+R; %note this is a scalar K=P*H'*inv(S); X=X+K*(z-H*X); P=(eye(ll-1,ll-1)-K*H)*P;``` Main Kalman filtering running Code: ```function P=KalmanRun(data) % % function to run simulation of the Kalman filter for % MHF question % % data is a matrix each row of which contains the vector of data % % The system model is: % % z=sum(x(i)*aa(i)) + w % % where z is the measurement to be explained, the x's are the % independent variables corresponding to z and w is a scalar zero % mean gaussian noise term. % % Initial values, the state estimate is set to zero, and the covariance % to a diagonal matrix of the right dimension with 10^2 down the diagonal % % the measurement noise variance R (of w in the model) is set to 1. % a=data; sz=size(data); niter=sz(1); %number of rows of data ll=sz(2); %length of each row of data P=eye(ll-1,ll-1).*1000000; %initial covariance of state estimate X=zeros(ll-1,1); %initial state estimate R=0.2; %measurement variance rv=X; % % loop % for idx=1:niter D=data(idx,:); % load the nest line of data into D consists of [x(1),..,x(n),z] [X,P]=KalmanInov(D,R,X,P); % inovate the filter with the data vactor D rv=[rv,X];%accumulate the state estimate end a1=a(1:niter,1:ll-1); b1=rv(1:ll-1,2:niter+1); d1=[]; for idx=1:niter; c1=a1(idx:idx,1:ll-1)*b1(1:ll-1,idx:idx); d1=[d1,c1]; end e1=a(1:niter,ll:ll)'; d1=[d1;e1]; plot([1:niter],d1);``` You may randomly input some data matrix to test. 
$\begin{bmatrix} 1 & 2 & 3 & 4 & 5 & 100 \\ 2 & 3 & 4 & 5 & 6 & 130 \\ 3 & 4 & 5 & 6 & 7 & 155 \\ 4 & 5 & 6 & 7 & 8 & 175 \\ 5 & 6 & 7 & 8 & 9 & 200 \end{bmatrix}$ The picture attached is the result. Can i do that in this way? 10. Originally Posted by atlove It means that the 2-61st dataset are used to initialize the estimation? Only previous estimation of state are necessary for further estimation? But why not 1-61st dataset? How to find those small values, by taking average of data or something else? Is there any problem to assume that to be 0? Are larger covariance values better? Do you mean: $\begin{bmatrix} 1000000 & 0 & 0 \\ 0 & 1000000 & 0 \\ 0 & 0 & 1000000 \end{bmatrix}$ Besides, he didn't give me any information on the variance to determine R. Can i assume it to be anything? Zeros should be OK for the initial state estimate. The diagonal terms of the initial covariance matrix should be as big as they can be without causing numerical problems (so they can probably be bigger without too much trouble but I would not go beyond about 10,000,000,000). You may have to make some assumptions about the measurement variance. One way of doing this is to start by assuming the SD is say 10% of the first measurement. Run the filter over the first block of data then go back and calculate the SD or variance of the residuals about the back predictions from the final state, then rerun the data with that as a basis for the measurement variance. (possibly repeating this procedure if necessary) CB 11. Originally Posted by CaptainBlack Zeros should be OK for the initial state estimate. The diagonal terms of the initial covariance matrix should be as big as they can be without causing numerical problems (so they can probably be bigger without too much trouble but I would not go beyond about 10,000,000,000). You may have to make some assumptions about the measurement variance. One way of doing this is to start by assuming the SD is say 10% of the first measurement. Run the filter over the first block of data then go back and calculate the SD or variance of the residuals about the back predictions from the final state, then rerun the data with that as a basis for the measurement variance. (possibly repeating this procedure if necessary) CB Is the covariance matrix adjusted gradually as the filter is running? For the measurement covariance, do you mean to run the filter to the final state with the assumed SD? Then use the dataset which is used before to find the variance? 12. Dear CaptainBlack, How to find the state (a,b,c,d,e) with a large set of data by kalman filter? for example, if i have 60 sets of data. So that $\begin{bmatrix} y(1) \\ y(2) \\ . \\ . \\ y(60) \end{bmatrix} = \begin{bmatrix} x1(1) & x2(1) & x3(1) & x4(1) & x5(1) \\ x1(2) & x2(2) & x3(2) & x4(2) & x5(2) \\ . & . & . & . & . \\ . & . & . & . & . \\ x1(60) & x2(60) & x3(60) & x4(60) & x5(60) \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \\ e \end{bmatrix}$ 13. Originally Posted by atlove Is the covariance matrix adjusted gradually as the filter is running? The state covariance is adjusted automatically For the measurement covariance, do you mean to run the filter to the final state with the assumed SD? Then use the dataset which is used before to find the variance? Something like that, with a lot of these things one of the best things to do is run it and see what happens. CB 14.
Originally Posted by atlove Dear CaptainBlack, How to find the state (a,b,c,d,e) with a large set of data by kalman filter? for example, if i have 60 sets of data. So that $\begin{bmatrix} y(1) \\ y(2) \\ . \\ . \\ y(60) \end{bmatrix} = \begin{bmatrix} x1(1) & x2(1) & x3(1) & x4(1) & x5(1) \\ x1(2) & x2(2) & x3(2) & x4(2) & x5(2) \\ . & . & . & . & . \\ . & . & . & . & . \\ x1(60) & x2(60) & x3(60) & x4(60) & x5(60) \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \\ e \end{bmatrix}$ You treat each row as a separate measurement with given parameters x1, .. x5 and measurement x6. That is you have 60 updates. CB 15. hello CaptainBlack, With your assistance before, I have built a basic model and programmed it with vb.net already. Later on i will post some results of the estimation produced by my program. Thank you very much. Besides that, I was asked to do some modification to optimize my model. My consultant said that there is a forgetting factor for kalman filtering. What exactly does this mean? Do you have related information?
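For what it is worth: a "forgetting factor" (also called fading memory or exponential forgetting) usually means down-weighting old data so the filter can track slowly drifting coefficients, most simply by inflating the state covariance by $1/\lambda$ (with $0 < \lambda \le 1$) at each step; $\lambda = 1$ recovers the ordinary recursive least-squares/Kalman update used in the MATLAB code earlier in the thread. The snippet below is only an illustrative sketch in Python with made-up data, not a drop-in replacement for that code.

```python
import numpy as np

def rls_update(X, P, h, z, R=1.0, lam=0.98):
    """One recursive-least-squares / Kalman innovation with forgetting factor lam.

    X : current coefficient estimate, shape (n,)
    P : current covariance, shape (n, n)
    h : regressor row for this measurement, shape (n,)
    z : scalar measurement
    """
    P = P / lam                      # exponential forgetting: inflate covariance
    S = h @ P @ h + R                # innovation variance (scalar)
    K = P @ h / S                    # gain
    X = X + K * (z - h @ X)          # state update
    P = P - np.outer(K, h) @ P       # covariance update, (I - K h) P
    return X, P

# Tiny demonstration with synthetic data: z = h . a + noise
rng = np.random.default_rng(1)
true_a = np.array([2.0, -1.0, 0.5])
X = np.zeros(3)
P = np.eye(3) * 1e6
for _ in range(200):
    h = rng.uniform(1, 10, size=3)
    z = h @ true_a + rng.normal(scale=0.1)
    X, P = rls_update(X, P, h, z)
print("estimated coefficients:", np.round(X, 3))
```

With forgetting switched on the estimate keeps adapting to recent data, which is useful if the chiller coefficients drift while the student adjusts the rheostats.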
http://mathoverflow.net/revisions/109722/list
## Return to Answer 2 typo fixed; added 38 characters in body As long as $Z$ is projective or quasi-projective over say $k = \mathbb{C}$, this is fine. This can be generalized, but let me keep it simple for now. The quasi-projective case reduces to the projective case by taking the closure of $Z$ in projective space. Therefore, let's do the projective case, $Z \subseteq X = \mathbb{P}^n_k$. Let $I_Z$ denote the ideal sheaf of $Z$. Consider $$I_Z \otimes O_X(n)$$ for $n \gg 0$. This sheaf is globally generated, and so has no basepoints away from $Z$. For a general section $\gamma \in \Gamma(X, I_Z \otimes O_X(n))$, the hypersurface $H = V(\gamma)$ is therefore smooth away from $Z$ (here I'm using Bertini -- characteristic zero and algebraically closed). Any of these hyperplanes passes through $Z$ by construction. Now, let $d$ denote the codimension of $Z \subseteq X$. Choose $d$ general hyperplanes $H_1, \dots, H_d$ coming from general global sections $\gamma_1, \dots, \gamma_d \in \Gamma(X, I_Z \otimes O_X(n))$. The scheme theoretic intersection $W = H_1 \cap \dots \cap H_d$ satisfies what you want. In fact, $W = Z \cup Y$ where $Y$ is some other irreducible scheme which is smooth away from $Z$ (the fact that $Y$ is irreducible and smooth away from $W$ comes from Bertini's theorem). Ok, how do you know that the irreducible component of $W$ corresponding to $Z$ is reduced? This is also pretty easy. This comes from the fact that $Z$ was reduced in the first place. Indeed, your general sections $\gamma_1, \dots, \gamma_d$ generate the maximal ideal of the stalk at the generic point of $W$ since $I_Z \otimes O_X(n)$ was globally generated. Of course, for arbitrary schemes, there's no hope. $Z$ need not embed in a nonsingular scheme at all.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9460724592208862, "perplexity_flag": "head"}
http://mathoverflow.net/questions/94078/generalizing-the-spectral-radius-of-a-unistochastic-matrix/94094
Generalizing the spectral radius of a unistochastic matrix

Consider a square matrix $A$, and from it construct $B$ whose entries are the squared magnitudes of those in $A$. What can we say about the spectral radius of $B$? I know that for a unitary matrix $A$, $B$ is unistochastic so its spectral radius is 1, but I'm interested in the general case for arbitrary $A$. Also, relatedly, for general $A$, what can be said of the column 1-norms of $B$? And what about the column 1-norms of products of such $B$-type matrices?

## 1 Answer

Well, $B$ is the Hadamard product of $A$ with itself, so if $A$ is nonnegative or psd, its radius is bounded from above by the square of the radius of $A$.
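A small numerical sketch of the two cases above (a unitary $A$ versus an entrywise nonnegative $A$). It assumes NumPy; the matrix sizes and random entries are arbitrary illustrations, not taken from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

def squared_magnitudes(A):
    return np.abs(A) ** 2                 # B with B_ij = |A_ij|^2

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# Unitary A: B is unistochastic, so its spectral radius and its column 1-norms are all 1.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
B = squared_magnitudes(Q)
print(spectral_radius(B), B.sum(axis=0))

# Nonnegative A: here B is the Hadamard product A o A, and rho(B) <= rho(A)^2 as in the answer.
A = rng.random((4, 4))
print(spectral_radius(squared_magnitudes(A)), spectral_radius(A) ** 2)
```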
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9291283488273621, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/133686-calculating-work-print.html
# Calculating WORK...

• March 13th 2010, 08:30 PM
lazypkjaii
Calculating WORK...
I am having a test the day right after spring break, and during this time I don't know where to look for help, so I hope someone can help me with this problem...

The diameter of a circular pool is 24 ft. The side is 5 ft high and the depth of the water is 4 ft. How much work is required to pump all of the water out over the side? Note: density of water is 1000 kg/m^3, gravity = 9.8 m/s^2.

I tried to tackle this problem but I am not sure where I have gone wrong. I set up the equation like this...
dWork = (dForce)(y)
y - the distance from disc to top of the pool
(dWeight)(y)
(dmass)(gravity)(y)
(dVolume)(density)(gravity)(y)
dV = (pi)(r^2)(dy)
dWork = (9.8)(1000)(Pi)(144) the integral from 1 to 5 (y)(5-y)(dy)

Hope I can get some guidance from some of the people in the forum
• March 13th 2010, 09:03 PM
Jhevon
Quote:

Originally Posted by lazypkjaii

First we need to match up our units. Note that $1000 \frac {\text{kg}}{\text{m}^3} = 62.4 \frac {\text{lb}}{\text{ft}^3}$ (remember, lbs is a unit of force, so gravity is included here). Now, I hope you have drawn a diagram; let y = 0 be the bottom of the pool, so that y = 5 is at the top. We will compute the work needed to move an infinitesimal slice of water, of thickness $\Delta y$, at a level $y$ above the bottom of the pool, and then integrate to find the total work.

Now, $W = F \cdot D$, where $F$ is force and $D$ is distance. Note that the distance we want to move this slice will be $D = 5 - y$; it should be easy to see this if you drew your diagram correctly. Now, the force is

$(\text{volume of the slice})(\text{weight of water per unit volume}) = 62.4 \pi (12)^2 \cdot \Delta y$

$= 8985.6 \pi \Delta y$

So that we have $W = 8985.6 \pi (5 - y) \Delta y$

Now, the total work is the sum of the works needed to move all the infinitesimal slices, ranging from a level y = 0 to y = 4 of the pool; hence,

$\text{Total Work } = 8985.6 \pi \int_0^4 (5 - y)~dy$

And note that the answer will be in ft-lbs.
• March 13th 2010, 09:39 PM
lazypkjaii
I am actually confused about how you converted the units in the beginning... while everything else seems clear now. If the teacher had not given the density of H2O, which is 62.4 lbs/ft^3, can we still work this problem out? Thanks a lot for helping me!
• March 13th 2010, 10:12 PM
Jhevon
Quote:

Originally Posted by lazypkjaii

Well, we used conversion factors to go from one to the other. Presumably you know some.
Maybe from lbs to kg or ft to m, etc. (I used both, or you can look it up :D). Which ones do you know? The alternative would probably be to change ft to meters, and then you would have force = volume * density * gravity, and your final answer would be in N·m.
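For readers who want to check the final number, here is a minimal numerical sketch of Jhevon's integral. It assumes NumPy and SciPy are available; the constants (62.4 lb/ft³, radius 12 ft, wall height 5 ft, water depth 4 ft) are taken from the thread.

```python
import numpy as np
from scipy.integrate import quad

rho_g = 62.4      # weight density of water in lb/ft^3 (force per volume, from the thread)
radius = 12.0     # pool radius in ft (the diameter is 24 ft)
wall = 5.0        # height of the side of the pool in ft
depth = 4.0       # depth of the water in ft

# Work to lift the slice at height y over the side: (weight of the slice) * (lift distance)
integrand = lambda y: rho_g * np.pi * radius**2 * (wall - y)
work, _ = quad(integrand, 0.0, depth)

# Closed form from the thread: 8985.6*pi * [5y - y^2/2] from 0 to 4 = 8985.6*pi*12
print(work, 8985.6 * np.pi * 12)   # both are roughly 3.39e5 ft-lb
```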
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9398464560508728, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/278900/question-on-pathological-sine-function
# Question on pathological sine function

Some years ago I came across what was described as a "pathological" function, defined as: $$f(x)=\sum_{k=1}^\infty \frac{1}{k^2}\cdot \sin\left(k!\cdot x\right)$$ It was mentioned (in an article I cannot remember) as something that could not be completely drawn because the partial sums become increasingly "ripply" as new terms are added. I did some experimenting with plotting software and this seems to be the case, but I don't know whether sums of this type are easy to build or whether this is a more special case. Is this series related to any well-known special function? Does anyone have more information on its properties? Thanks in advance, Prospero

- 1 Thomae's function and Dirichlet's nowhere continuous function come to mind. – 000 Jan 14 at 23:30

## 1 Answer

A Fourier representation comes to mind. You can imagine writing $$f(x) = \sum_{j=1}^{\infty} a_j \sin{j x}$$ where $$a_j = \begin{cases} 1/k^2 & j=k! \\ 0 & \mathrm{otherwise} \end{cases}$$ for each $k \in \mathbb{N}$. What function $f(x)$ has such coefficients I cannot say.
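As a rough illustration of the "ripples", here is a small sketch (assuming NumPy; the cutoffs and sample grid are arbitrary) that evaluates a few partial sums. The factorial frequencies grow so fast that only small cutoffs can be resolved on a uniform grid, which is essentially the plotting difficulty the question describes.

```python
import numpy as np
from math import factorial

def partial_sum(x, K):
    """Partial sum sum_{k=1}^{K} sin(k! * x) / k^2 of the series in the question."""
    return sum(np.sin(factorial(k) * x) / k**2 for k in range(1, K + 1))

x = np.linspace(0.0, 2.0 * np.pi, 4000)
for K in (2, 4, 6, 8):
    y = partial_sum(x, K)
    # Each new term only adds amplitude 1/K^2, but it oscillates with frequency K!,
    # so the curve gets "ripplier" even though the series converges uniformly.
    print(K, float(y.min()), float(y.max()))
```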
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9570490717887878, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/26029/minimum-of-two-exponential-variates-whats-wrong-with-this-derivation
# Minimum of two exponential variates: What's wrong with this derivation? Suppose we have $\newcommand{\E}{\mathrm{Exp}} X \sim \E(\lambda)$, $Y \sim \E(\mu)$, and $W = \min(X,Y)$. I know that $W \sim \E(\lambda+\mu)$. I know how to derive it. But, I tried this alternate derivation that gave me a different distribution for $W$, and I still can't figure out what's wrong with it. I started with $\newcommand{\rd}{\,\mathrm d}\renewcommand{\Pr}{\mathbb P}f_W(t) = f_X(t) \Pr(X<Y) + f_Y(t) \Pr(Y<X)$ Now I need $\Pr(X < Y)$. Seems straightforward: $\begin{align} \Pr(X < Y) &= \int_0^\infty [1-F_X(t)] f_Y(t) \rd t \\ &= \int_0^\infty e^{-\lambda t} \mu e^{-\mu t} \rd t \\ &= \frac{\mu}{-(\lambda + \mu)} \left[ e^{-(\lambda + \mu)t} \right]^{\infty}_0\\ &= \frac{\mu}{\mu+\lambda} \>. \end{align}$ And, if I do the same thing for $\Pr(Y < X)$, I get $\Pr(Y < X) = \frac{\lambda}{\mu + \lambda}$. So $\Pr(X < Y)$ and $\Pr(Y < X)$ sum up to 1 as expected. Encouraging. And now I substitute that into my original equation: $\begin{align} f_W(t) &= \frac{\mu}{\mu + \lambda} \lambda e^{- \lambda t} + \frac{\lambda}{\mu + \lambda} \mu e^{- \mu t} \\ &= \frac{\lambda\mu}{\lambda + \mu}\left(e^{-\lambda t} + e^{-\mu t}\right) \>. \end{align}$ That... That's no exponential. Where did I go wrong? - 2 The distribution of $X$ conditional on the fact that it is less than $Y$ is not the same as the distribution of $X$. – deinst Apr 7 '12 at 15:41 ## 1 Answer The distribution of $X$ conditional on the fact that it is less than $Y$ is not the same as the distribution of $X$. This is probably easiest understood if $X$ and $Y$ are identical distributions. Then your mixture $$f_W(t)=f_X(t)P(X<Y)+f_Y(t)P(Y\le X)=f_X(t),$$ and this is obviously not true. - (+1) Good answer. – cardinal Apr 7 '12 at 15:59
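A quick Monte Carlo check (a sketch assuming NumPy; the values of $\lambda$ and $\mu$ are arbitrary choices) makes the problem visible: the empirical density of $W$ matches the $\mathrm{Exp}(\lambda+\mu)$ density, not the mixture derived in the question.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, n = 2.0, 3.0, 10**6

# numpy's exponential sampler takes the scale = 1/rate
w = np.minimum(rng.exponential(1/lam, n), rng.exponential(1/mu, n))

t, h = 0.3, 0.01
empirical = np.mean((w > t - h/2) & (w < t + h/2)) / h
correct = (lam + mu) * np.exp(-(lam + mu) * t)                    # Exp(lam + mu) density
flawed = lam*mu/(lam + mu) * (np.exp(-lam*t) + np.exp(-mu*t))     # density derived in the question

print(empirical, correct, flawed)   # empirical ~ correct (~1.12), while flawed is ~1.15
```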
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9620268940925598, "perplexity_flag": "head"}
http://particle.physics.ucdavis.edu/blog/?tag=ads
# Terning's View of Physics

particle physics and other sundry items…

## Posts Tagged 'AdS'

### The AdS/QCD correspondence: delivery failure

Thursday, November 3rd, 2011

As described previously here, there are good theoretical reasons to think that the so-called AdS/QCD correspondence should provide a poor description of the collisions of strongly interacting particles like the proton, or their internal quarks and gluons. The idea for the correspondence was inspired by string theory, where it can be shown that special (strongly interacting, supersymmetric, scale invariant) theories with gluons can be simply described by calculations on a curved 5D space called anti-de Sitter space, and abbreviated as AdS. While the theory of quantum chromodynamics (QCD) does contain gluons, it is not supersymmetric, not scale invariant, and, it turns out, not strongly interacting enough for the correspondence to work.

The problem can be seen fairly easily in collisions. In QCD, collisions of quarks and gluons tend to produce narrow sprays of particles, known as jets, that look something like this:

Jets of particles. The length of the line shows the energy of the particle.

While in AdS theories the produced particles spread out uniformly in all directions, like this:

spherical spray of particles

Some theorists have shrugged their shoulders about this problem and tried to apply the AdS/QCD correspondence to heavy ion collision data, pointing out that some particular measurements happened to agree with the AdS/QCD prediction. The BackReaction blog points out that the latest data from the LHC again points to the inadequacies of the AdS/QCD correspondence.

The ratio of the probability of finding a jet in lead-lead collisions to the same probability in proton-proton collisions as a function of the momentum of the jet away from the beam line (aka transverse momentum P_T). Image from Thorsten Renk, Slide 17 of this presentation

The data most closely follow a model of ordinary QCD jet production, labelled YaJEM for Yet another Jet Energy-loss Model, rather than the AdS calculation. For the experts: while the jetty description of QCD continues to work at a large number of colors, $N$, the AdS description requires both $N$ and the coupling times the number of colors, $\alpha N$, to be large, and it is the latter condition that fails in the real world.

Further reading:

• Systematics of the charged-hadron $P_T$ spectrum and the nuclear suppression factor in heavy-ion collisions from $\sqrt{s}=200$ GeV to $\sqrt{s} =2.76$ TeV arxiv.org/abs/1103.5308v2
• Pathlength dependence of energy loss within in-medium showers arxiv.org/abs/1010.4116
• The AdS/QCD Correspondence: Still Undelivered arxiv.org/abs/0811.3001

Tags: AdS, jets, LHC, physics, qcd
Posted in physics | 4 Comments »

### Varieties of Particle Jets

Friday, September 18th, 2009

A simulation of a string repeatedly breaking looks similar to the jets of particles found in collisions of quarks and gluons.

"Jets" is the name given to sprays of particles, headed in roughly the same direction, that appear when quarks or gluons collide. Jets turn out to be a useful way to relate experimental results on quarks and gluons with the theory of Quantum Chromodynamics (QCD), which describes the interactions of quarks and gluons. Their usefulness arises partly because the jets can be seen to emerge in a simple way.
The primary particles involved in the scattering have a small probability to emit a new gluon which, it turns out, is most likely to head in the direction of the particle that emitted it. The new gluons have a small probability to emit further gluons, and so on. Iterating this process a few times gives you a jet of quarks and gluons. The gluon emission probability is small because the QCD interaction strength is fairly small in high-energy processes.

It is somewhat surprising that we can also see jets emerge in an entirely different way. It is known that QCD becomes much simpler if we imagine that the number of "colors" of quarks is a large number, N, rather than the small number, 3, that we find in our Universe. For large N, the allowed configurations have a flux-tube, or string, connecting every quark to an anti-quark (to make a meson) or have the strings of N quarks meeting at a point (to make a baryon). This "large N approximation" actually does a pretty good job of describing our world, leading to the oft-repeated quasi-joke that 3 is a large number.

A baryon, like a proton, consists of three quarks connected by three strings which meet at a junction.

Of course such strings can break. This occurs when a quark and anti-quark are created at some point along the string. The energy required to produce the quark and the anti-quark can be offset by the broken string contracting. In this way we can imagine a heavy meson or baryon with a very long (excited) string decaying into lighter "daughter" mesons and baryons made of shorter strings.

The string in a baryon can break to form a new baryon and a new meson.

Imagine producing a quark and an anti-quark in a high energy collision. The quark and the anti-quark would be flying apart in opposite directions with a string stretching between them. Starting with this very excited "meson," we could simulate the repeated breaking of the string and see what comes out. This is a little tricky; the quarks and strings are moving in a complicated way due to all this breaking, but we were able to do it (mainly thanks to Matt Reece's programming skills). The result is that the string tends to break into relatively short bits, which therefore have little rest mass (the mass grows like the string length) and thus lots of kinetic energy, since the total energy has to add up to the initial energy.

Interestingly, the string bits end up mostly going in the directions of the initial quark and anti-quark. This is because in the rest frame of one of the daughter mesons, the subsequent "grand-daughters" are equally likely to go in any direction, but in the rest frame of the lab, the daughter meson was moving rapidly in the direction of the original quark or anti-quark, and the grand-daughters are "thrown" forward in this direction. So we get something that looks very much like a jet. This is just what is shown in the picture at the top of this page.

This is very different from what happens in theories where the interaction strength and N are both large. Such theories can be approximately scale invariant, in which case they are called conformal field theories (or CFT's). CFT's are thought to be described by almost non-interacting particles moving in a five dimensional anti-de Sitter (AdS) space. This is the basis of the AdS/CFT correspondence. It is fairly easy to work out what happens in this case, either using CFT methods or direct simulation in AdS.
Each time an excited meson decays into two lighter mesons, most of the initial energy goes into the rest mass of the daughter particles, so they have very little kinetic energy. This means that there is very little difference between the rest frame of the daughter particle and the rest frame of the lab, so the grand-daughters are equally likely to go in any direction. The result of this type of process is shown below.

When the interaction strength is large enough the jets broaden so much that the events look spherical.

This raises some interesting prospects for the Large Hadron Collider. First, we need fairly precise estimates of standard QCD jets, especially those containing b quarks, so that we can separate out the "old" physics from the new physics, and the stringy picture may be helpful for improving these calculations. Second, in some scenarios the new physics does look like a CFT, in which case the standard types of analysis will not be helpful in teasing out the underlying information. In that case we will need some new ideas in order to uncover the new physics.

(Technical note: in the top and bottom graphic, the length of each line is proportional to the energy of the particle moving in that direction.)

Tags: AdS, jets, LHC, physics, string
Posted in physics | 5 Comments »

### Negative S from AdS

Monday, June 12th, 2006

Veronica Sanz, photo by Johannes Hirn

Hirn and Sanz have examined models of electroweak symmetry breaking in anti-de Sitter space via boundary conditions. They include additional bulk breaking of electroweak symmetry and find that they can change the spectrum of vector and axial vector resonances so as to make the S parameter negative and thus compatible with precision electroweak tests.

Tags: AdS, electroweak, extra dimensions, physics
Posted in physics | Comments Off

### Linear Confinement in AdS

Monday, March 6th, 2006

the rho trajectory from Karch et al.

A new paper by Karch, Katz, Son, and Stephanov attempts to modify the 5D AdS background that is used to model QCD in order to obtain the behaviour of a string state with large spin or excitation number. They find that this can occur in a non-trivial dilaton background.

Tags: AdS, physics, qcd, string
Posted in physics | Comments Off
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9161664843559265, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/82577/how-to-find-the-dimensions-of-a-max-area
# How to find the dimensions of a max area? I am having trouble solving a math problem for a test review. Here it is: What are the dimensions of the largest area of a rectangle with a semicircle on a short side given 80ft of fencing? I am assuming the figure looks like this: +-------------+ + + + + + + + + + + +-------------+ + ^^^^^ a semi circle Can someone please tell me how to solve this problem step by step? - 1 Let the side of the rectangle with the semicircle attached and its opposite side be $a$ and let the other two sides be $b$. Can you see why $Area = ab + \frac{1}{2} \pi (\frac{a}{2})^2$ and $a + 2b + \pi \frac{a}{2} = 80$? After that, can you express the area in terms of just $a$ and find the maximum of the quadratic? – Sp3000 Nov 16 '11 at 5:16 – pedja Nov 16 '11 at 5:18 1 You were very impatient for someone to answer. Now someone has answered - and you have nothing to say? – Gerry Myerson Nov 16 '11 at 12:11 ## 1 Answer We have two variables, the height and width of the rectangle (note that the height of the rectangle determines the radius of the circle). We can write two expressions: one relating these variables to the perimeter of the region, and another relating to the area of the region. Let's call the height $h$ and the width $w$; then the radius of the circle is $h/2$. Then our expressions are $$\text{Perimeter: } 80 = 2w + h + \frac{\pi h}{2}$$ $$\text{Area: } A = wh + \frac{\pi h^2}{8}$$ We want to maximize $A$ using these equations. We can do this by solving the perimeter equation for one of the variables, and then substituting into the area equation; we will end up with area as a function of one variable, which we can then maximize. Solving for w, $$w = 40 - h(\frac{1}{2} + \frac{\pi}{4})$$ and substituting, $$A = -h^2(\frac{\pi + 4}{8}) + 40h$$ Since the problem specifies that the semicircle must be on the short side, we can't simply maximize this function over its whole domain; we need to find a constraint based on this requirement. We can do this by setting $w=h$ and solving for $h$; this will give us the maximum value for $h$ (i.e., making $h \le w$ ensures that $h$ is the short side). We find that this constraint is $h\le \frac{160}{\pi+6}$. Now we can maximize our function for $A$; since it is a parabola, this is relatively simple. The vertex is at $-\frac{b}{2a} = \frac{160}{\pi+4}$, and since our constraint is less than this value, the maximum must be at the maximum allowed by the constraint; that is, the value of $h$ that maximizes $A$ is $h=\frac{160}{\pi+6}$, and since we found this value by setting $w=h$, the value of $w$ is the same. This answer makes sense intuitively, as the maximum area of a figure with fixed perimeter is often found by making it as regular as possible. -
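A quick numerical sanity check of the answer (a sketch assuming NumPy; the grid resolution is arbitrary): maximize the area function over the allowed range of $h$ and compare with $160/(\pi+6)$.

```python
import numpy as np

def area(h):
    w = 40.0 - h * (0.5 + np.pi / 4)      # width from the 80 ft perimeter constraint
    return w * h + np.pi * h**2 / 8       # rectangle plus semicircle

h_cap = 160 / (np.pi + 6)                 # largest h with h <= w (semicircle on the short side)
hs = np.linspace(0.0, h_cap, 100_000)
h_best = hs[np.argmax(area(hs))]

print(h_best, h_cap)                      # the maximizer sits at the endpoint, about 17.5 ft
print(area(h_best))                       # the corresponding maximal area in square feet
```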
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421880841255188, "perplexity_flag": "head"}
http://mathoverflow.net/questions/115100/numerical-evaluation-of-the-petersson-product-of-elliptic-modular-forms
## Numerical evaluation of the Petersson product of elliptic modular forms

It is known how to compute the Fourier expansion of elliptic modular forms using modular symbols, and it is known how to get numerical evaluations of $L$-functions of various types; it's possible to get explicit values on those matters with Sage already. It is also known that when an Eisenstein series is involved, it's possible to relate the Petersson scalar product to $L$-functions, and hence to evaluate them. I have seen that Sage bug about various pairings for modular forms, but it looks more like it's about the pairing between modular forms and modular symbols than the Petersson scalar product.

My question is: do there exist general formulas to compute the Petersson scalar product of two elliptic modular forms numerically?

EDIT (2012-12-23): I insist on the numerically: having an expansion with estimates on the order of the error, with constants which depend on this or that (I'm thinking about those which can be found in chapter 5 of Iwaniec's "Topics in Classical Automorphic Forms" for example), is very nice from a theoretical point of view, but doesn't help when one wants to actually compute with specific forms and to a given precision. In fact, I want to compute various things with the Petersson scalar product, so this question is to check whether I can directly work on them or if I should write something about the matter before.

- 1 for Jacobi forms, see arxiv.org/abs/1009.3198: "a numerical method to compute the Petersson scalar products of Jacobi Forms is developed and discussed in detail." – Carlo Beenakker Dec 1 at 20:24

## 3 Answers

There is a "quick and dirty" way to find the inner product of two cusp forms that are not necessarily Hecke eigenforms. I learned this from Akshay Venkatesh. The formula is that \begin{equation*}\langle f, g \rangle = \lim_{y \rightarrow 0^+} y^k \int_0^{1} f(x+iy) \overline{g(x+iy)} dx, \end{equation*} where $f$ and $g$ are weight $k$ and the inner product is normalized via \begin{equation*} \langle f, g \rangle = \int_{\Gamma \backslash \mathbb{H}} y^k f(z) \overline{g(z) }\frac{1}{V} \frac{dx dy}{ y^2}, \end{equation*} where $V$ is the volume of $\Gamma \backslash \mathbb{H}$.

The philosophy behind the proof is that the horocycle $x+iy: 0 \leq x \leq 1$ equidistributes in the fundamental domain as $y \rightarrow 0$. You can prove the formula by spectrally decomposing $f \overline{g}$. The projection onto the constant eigenfunction gives $\langle f, g\rangle$. The projections onto the cusp forms integrate out to zero. The projection onto the Eisenstein series leaves the constant terms, which are bounded by $\sqrt{y}$, and hence have limit zero as $y$ tends to $0$.

If $f(z) =\sum_n a(n) e(nz)$ and $g(z) = \sum_n b(n) e(nz)$, then of course \begin{equation*} \int_0^{1} f(x+iy) \overline{g(x+iy)} dx = \sum_{n \geq 1} a(n) \overline{b(n)} \exp(-4 \pi n y). \end{equation*}

- This formula looks nice from a theoretical point of view, but having a limit on an integral isn't a good start for numerical evaluation... I'm looking for something like a series expansion with an explicit bound on the remainder when cut. My question is really about practical numerical computation.
– Julien Puydt Dec 2 at 17:35

@Julien : Actually, my guess is that this kind of limit formula will give a good approximation in practice (because of the sum converging exponentially), but that it will not be obvious to bound the error term rigorously. – François Brunault Dec 2 at 21:15

The last expression is the one you want to use, which does not involve an integral. Instead you suppose you have computed Fourier coefficients up to some bound $n \leq N$, and then choose $y$ so that the sum of the tail is quite small, using whatever available bound you have for the Fourier coefficients. It's not the best algorithm, but it's easy to implement. – Matt Young Dec 3 at 1:50

Let $f(z) = \sum a(n) e(n z)$ and $g(z) = \sum b(n) e(n z)$ be holomorphic modular forms of weight $k \in 2 \mathbb{N}$ on $\Gamma = \operatorname{SL}_2(\mathbb{Z})$ whose product decays rapidly. Then \begin{equation} \int_{\Gamma \backslash \mathbb{H}} y^k \overline{f(z)}g(z) ~ \frac{dx ~ d y}{y^2} = 2 \sum_{n \in \mathbb{N} } \frac{ \overline{a(n)} b(n) }{ n^{k-1} } \sum_{d \in \mathbb{N}} \Phi(4 \pi d \sqrt{n}), \end{equation} where \begin{equation} \Phi(y) = 2( \frac{y}{8 \pi})^{k-1} (y K_{k-2}(y) - K_{k-1}(y)). \end{equation} Note that $\Phi(y) \asymp_k y^{k-1/2} e^{-y}$ for $y \gg 1$.

For a general finite index subgroup $\Gamma$ of $\operatorname{SL}_2(\mathbb{Z})$, a correct formula may be obtained by summing the RHS over the cusps $\mathfrak{a}$ of $\Gamma$, weighted by the width $w$ of $\mathfrak{a}$, and taking for $a(n), b(n)$ ($n \in w^{-1} \mathbb{N}$) the Fourier coefficients at $\mathfrak{a}$.

A reference for a general form of such identities is Theorem 5.6 (p.24) in my paper Evaluating modular forms on Shimura curves; see also Example 5.7, Remark 3.5, and the discussion of Sections 5.3--5.6, which includes a detailed comparison with the other approaches mentioned in this thread that I will summarize briefly here.

Pros: no need to Hecke-decompose, unlike the "symmetric square" approach; converges rapidly to the correct value, unlike the vanilla "equidistribution of horocycles" approach; generalizes to non-holomorphic forms lacking a straightforward Hecke decomposition (e.g., certain theta series), although perhaps this feature is not important for your purposes.

Cons: requires the Fourier expansion at every cusp, unlike either approach just mentioned (although the "symmetric square" approach is not devoid of such subtlety, since it requires one to compute the conductor and bad Euler factors of the symmetric square of a newform).

Another approach (specific to the holomorphic case) would be to exploit the connection with period polynomials, for which a search just now turned up this article. One variant of that method also requires knowing Fourier expansions at every cusp, and reduces the problem to evaluating a class of incomplete gamma functions, some of which reduce to K-Bessel functions as above; another requires only that one be able to compute periods of a cusp form $f$ over split geodesics in $\Gamma \backslash \mathbb{H}$, which can apparently be done using the Fourier expansion at only one cusp. Moreover, one can speed up the computation when $f = g$ is an eigenform by exploiting certain rationality results.

- It's easy to reduce to the case of computing the Petersson product of a normalised new eigenform with itself.
Here you can use the fact that the product is equal to the value at s=k of the symmetric square L-function of f, and this you can compute using e.g. Tim Dokchitser's algorithms. Here is a thread from the Sage developers mailing list with example code by Martin Raum: https://groups.google.com/forum/m/#!topic/sage-nt/EkBWOogY8yw For elliptic curves there is also Mark Watkins' Sympow program, which will compute all the symmetric power L-functions.

- Hmmmm... basically decomposing the two forms on an orthogonal basis of normalized eigenforms, thus reducing to an explicit linear combination of squares, which indeed are easy to tackle using existing code? That looks quite promising. – Julien Puydt Dec 2 at 18:40

Well, you can interpret this in two ways. Firstly, it's saying that the Petersson product of $f$ and $g$ is basically $\lim_{s \to k^+} (s - k)^{-1} \sum_{n \ge 1} a_n \overline{b_n} n^{-s}$. This holds for any $f$, $g$ and is somehow in the same ball-park as Matt Young's answer (but with a less rapidly converging series). But secondly, if you restrict to $f=g$ a new eigenform, then you're computing a value of an $L$-function, and there is a well-developed theory for rigorous numerical computation of such $L$-values. – David Loeffler Dec 3 at 9:05
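To make Matt Young's recipe concrete, here is a minimal sketch (assuming NumPy; the coefficient arrays are placeholders, where in practice they would be Fourier coefficients computed with modular-symbols code such as Sage's). It only implements the truncated sum; choosing $y$ and $N$ so that the neglected tail is below the target precision is left to the bounds discussed in the thread.

```python
import numpy as np

def horocycle_sum(a, b, k, y, N):
    """y^k * sum_{n<=N} a(n) * conj(b(n)) * exp(-4*pi*n*y).

    Truncated version of the last displayed sum in Matt Young's answer; its
    y -> 0 limit (with the full sum) gives <f, g> in his normalization.
    a and b are arrays with a[0] = a(1), a[1] = a(2), and so on.
    """
    n = np.arange(1, N + 1)
    return y**k * np.sum(a[:N] * np.conj(b[:N]) * np.exp(-4 * np.pi * n * y))

# Placeholder coefficients, just to exercise the function; with y = 0.01 and N = 500
# the first neglected term already carries a factor exp(-4*pi*501*0.01) ~ 1e-28.
N, k, y = 500, 12, 0.01
a = np.random.rand(N)
b = np.random.rand(N)
print(horocycle_sum(a, b, k, y, N))
```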
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9025833606719971, "perplexity_flag": "head"}
http://en.m.wikipedia.org/wiki/Squeeze_mapping
# Squeeze mapping

r = 3/2 squeeze mapping

In linear algebra, a squeeze mapping is a type of linear map that preserves Euclidean area of regions in the Cartesian plane, but is not a Euclidean motion. For a fixed positive real number r, the mapping (x, y) → (rx, y/r) is the squeeze mapping with parameter r. Since $\{ (u,v) \, : \, u v = \mathrm{constant}\}$ is a hyperbola, if u = rx and v = y/r, then uv = xy and the points of the image of the squeeze mapping are on the same hyperbola as (x,y) is. For this reason it is natural to think of the squeeze mapping as a hyperbolic rotation, as did Émile Borel in 1913, by analogy with circular rotations which preserve circles.

## Group theory

If r and s are positive real numbers, the composition of their squeeze mappings is the squeeze mapping of their product. Therefore the collection of squeeze mappings forms a one-parameter group isomorphic to the multiplicative group of positive real numbers. An additive view of this group arises from consideration of hyperbolic sectors and hyperbolic angles. In fact, the invariant measure of this group is hyperbolic angle.

From the point of view of the classical groups, the group of squeeze mappings is SO+(1,1), the identity component of the indefinite orthogonal group of 2 × 2 real matrices preserving the quadratic form $u^2 - v^2$. This is equivalent to preserving the form xy via the change of basis $x=u+v,\quad y=u-v\,,$ and corresponds geometrically to preserving hyperbolae. The perspective of the group of squeeze mappings as hyperbolic rotation is analogous to interpreting the group SO(2) (the connected component of the definite orthogonal group), preserving the quadratic form $x^2 + y^2$, as being circular rotations.

Note that the "SO+" notation corresponds to the fact that the reflections $u \mapsto -u,\quad v \mapsto -v$ are not allowed, though they preserve the form (in terms of x and y these are x ↦ y, y ↦ x and x ↦ −x, y ↦ −y); the additional "+" in the hyperbolic case (as compared with the circular case) is necessary to specify the identity component because the group O(1,1) has 4 connected components, while the group O(2) has 2 components: SO(1,1) has 2 components, while SO(2) only has 1. The fact that the squeeze transforms preserve area and orientation corresponds to the inclusion of subgroups SO ⊂ SL – in this case SO(1,1) ⊂ SL(2) – of the subgroup of hyperbolic rotations in the special linear group of transforms preserving area and orientation (a volume form). In the language of Möbius transforms, the squeeze transformations are the hyperbolic elements in the classification of elements.

## Literature

A squeeze mapping moves one purple hyperbolic sector to another with the same area. It also squeezes blue and green rectangles.

An early recognition of squeeze symmetry was the 1647 discovery by Grégoire de Saint-Vincent that the area under a hyperbola (concretely, the curve given by xy = k) is the same over [a, b] as over [c, d] when a/b = c/d. This preservation of areas under a hyperbola with hyperbolic rotation was a key step in the development of the logarithm. Formalization of the squeeze group required the theory of groups, which was not developed until the 19th century.

William Kingdon Clifford was the author of Common Sense and the Exact Sciences, published in 1885. In the third chapter on Quantity he discusses area in three sections. Clifford uses the term "stretch" for magnification and the term "squeeze" for contraction.
Taking a given square area as fundamental, Clifford relates other areas by stretch and squeeze. He develops this calculus to the point of illustrating the addition of fractions in these terms in the second section. The third section is concerned with shear mapping as area-preserving.

In 1965 Rafael Artzy listed the squeeze mapping as a generator of planar affine mappings in his book Linear Geometry (p 94). The myth of Procrustes is linked with this mapping in a 1967 educational (SMSG) publication:

Among the linear transformations, we have considered similarities, which preserve ratios of distances, but have not touched upon the more bizarre varieties, such as the Procrustean stretch (which changes a circle into an ellipse of the same area). Coxeter & Greitzer, pp. 100, 101.

Attention had been drawn to this plane mapping by Modenov and Parkhomenko in their Russian book of 1961, which was translated in 1967 by Michael B. P. Slater. It included a diagram showing the squeezing of a circle into an ellipse.

Werner Greub of the University of Toronto includes "pseudo-Euclidean rotation" in the chapter on symmetric bilinear functions of his text on linear algebra. This treatment in 1967 includes in short order both the diagonal form and the form with sinh and cosh. The Mathematisch Centrum Amsterdam published E.R. Paërl's Representations of the Lorentz group and Projective Geometry in 1969. The squeeze mapping, written as a 2 × 2 diagonal matrix, Paërl calls a "hyperbolic screw".

In his 1999 monograph Classical Invariant Theory, Peter Olver discusses GL(2,R) and calls the group of squeeze mappings by the name the isobaric subgroup. However, in his 1986 book Applications of Lie Groups to Differential Equations (p. 127) he uses the term "hyperbolic rotation" for an equivalent mapping. In 2004 the American Mathematical Society published Transformation Groups for Beginners by S.V. Duzhin and B.D. Chebotarevsky, which mentions hyperbolic rotation on page 225. There the parameter r is given as $e^t$ and the transformation group of squeeze mappings is used to illustrate the invariance of a differential equation under the group operation.

## Applications

In studying linear algebra there are purely abstract applications, such as the illustration of the singular-value decomposition or the important role of the squeeze mapping in the structure of 2 × 2 real matrices. These applications are somewhat bland compared to two physical and a philosophical application.

### Corner flow

In fluid dynamics one of the fundamental motions of an incompressible flow involves bifurcation of a flow running up against an immovable wall. Representing the wall by the axis y = 0 and taking the parameter r = exp(t), where t is time, the squeeze mapping with parameter r applied to an initial fluid state produces a flow with bifurcation left and right of the axis x = 0. The same model gives fluid convergence when time is run backward. Indeed, the area of any hyperbolic sector is invariant under squeezing.

For another approach to a flow with hyperbolic streamlines, see the article potential flow, section "Power law with n = 2".

In 1989 Ottino described the "linear isochoric two-dimensional flow" as $v_1 = G x_2 \quad v_2 = K G x_1$ where K lies in the interval [−1, 1]. The streamlines follow the curves $x_2^2 - K x_1^2 = \mathrm{constant}$, so negative K corresponds to an ellipse and positive K to a hyperbola, with the rectangular case of the squeeze mapping corresponding to K = 1.
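As a tiny check of the streamline claim (a sketch assuming NumPy; the starting point, step size, and values of G and K are arbitrary), one can integrate Ottino's flow and watch the quantity $x_2^2 - K x_1^2$ stay constant along the trajectory.

```python
import numpy as np

# Ottino's linear isochoric flow: v1 = G*x2, v2 = K*G*x1.
# Its trajectories should keep x2^2 - K*x1^2 constant; K = 1 is the squeeze/hyperbolic case.
G, K = 1.0, 1.0
x = np.array([1.0, 0.5])                 # arbitrary starting point
dt, steps = 1e-4, 20_000

c0 = x[1]**2 - K * x[0]**2
for _ in range(steps):
    v = np.array([G * x[1], K * G * x[0]])
    x = x + dt * v                       # crude forward-Euler step
print(c0, x[1]**2 - K * x[0]**2)         # nearly equal: the invariant is (approximately) conserved
```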
Stocker and Hosoi (2004) announced their approach to corner flow as follows:

we suggest an alternative formulation to account for the corner-like geometry, based on the use of hyperbolic coordinates, which allows substantial analytical progress towards determination of the flow in a Plateau border and attached liquid threads. We consider a region of flow forming an angle of π/2 and delimited on the left and bottom by symmetry planes.

Stocker and Hosoi then recall H.K. Moffatt's 1964 paper "Viscous and resistive eddies near a sharp corner" (Journal of Fluid Mechanics 18:1–18). Moffatt considers "flow in a corner between rigid boundaries, induced by an arbitrary disturbance at a large distance." According to Stocker and Hosoi,

For a free fluid in a square corner, Moffatt's (antisymmetric) stream function ... [indicates] that hyperbolic coordinates are indeed the natural choice to describe these flows.

### Relativistic spacetime

Select (0,0) for a "here and now" in a spacetime. Light radiant left and right through this central event tracks two lines in the spacetime, lines that can be used to give coordinates to events away from (0,0). Trajectories of lesser velocity track closer to the original timeline (0,t). Any such velocity can be viewed as a zero velocity under a squeeze mapping called a Lorentz boost. This insight follows from a study of split-complex number multiplications and the "diagonal basis" which corresponds to the pair of light lines. Formally, a squeeze preserves the hyperbolic metric expressed in the form xy in a different coordinate system. This application in the theory of relativity was noted in 1912 by Wilson and Lewis (see footnote p. 401 of reference), by Werner Greub in the 1960s, and in 1985 by Louis Kauffman. Furthermore, Wolfgang Rindler, in his popular textbook on relativity, used the squeeze mapping form of Lorentz transformations in his demonstration of their characteristic property (see equation 29.5 on page 45 of the 1969 edition, or equation 2.17 on page 37 of the 1977 edition, or equation 2.16 on page 52 of the 2001 edition).

### Bridge to transcendentals

The area-preserving property of the squeeze mapping has an application in setting the foundation of the transcendental functions natural logarithm and its inverse, the exponential function:

Definition: Sector(a,b) is the hyperbolic sector obtained with central rays to (a, 1/a) and (b, 1/b).

Lemma: If bc = ad, then there is a squeeze mapping that moves the sector(a,b) to sector(c,d).

Proof: Take parameter r = c/a so that (u,v) = (rx, y/r) takes (a, 1/a) to (c, 1/c) and (b, 1/b) to (d, 1/d).

Theorem (Grégoire de Saint-Vincent 1647): If bc = ad, then the quadrature of the hyperbola xy = 1 against the asymptote has equal areas between a and b compared to between c and d.

Proof: An argument adding and subtracting triangles of area ½, one triangle being {(0,0), (0,1), (1,1)}, shows the hyperbolic sector area is equal to the area along the asymptote. The theorem then follows from the lemma.

Theorem (Alphonse Antonio de Sarasa 1649): As area measured against the asymptote increases in arithmetic progression, the projections upon the asymptote increase in geometric sequence. Thus the areas form logarithms of the asymptote index.

For instance, for a standard position angle which runs from (1, 1) to (x, 1/x), one may ask "When is the hyperbolic angle equal to one?" The answer is the transcendental number x = e.
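A one-line numerical check of that last claim (a sketch assuming SciPy): by Saint-Vincent's theorem the hyperbolic angle from (1, 1) to (x, 1/x) equals the area under 1/t between 1 and x, and asking for angle 1 indeed lands on x = e.

```python
import numpy as np
from scipy.integrate import quad

# Area along the asymptote from 1 to e equals the hyperbolic sector area,
# so the hyperbolic angle reaches 1 exactly at x = e.
area, _ = quad(lambda t: 1.0 / t, 1.0, np.e)
print(area)   # ~1.0
```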
A squeeze with r = e moves the unit angle to one between $(e, 1/e)$ and $(e^2, 1/e^2)$, which subtends a sector also of area one. The geometric progression $e, e^2, e^3, \dots, e^n, \dots$ corresponds to the asymptotic index achieved with each sum of areas $1, 2, 3, \dots, n, \dots$, which is a proto-typical arithmetic progression A + nd where A = 0 and d = 1.

## References

• HSM Coxeter & SL Greitzer (1967) Geometry Revisited, Chapter 4 Transformations, A genealogy of transformation.
• Edwin Bidwell Wilson & Gilbert N. Lewis (1912) "The space-time manifold of relativity. The non-Euclidean geometry of mechanics and electromagnetics", Proceedings of the American Academy of Arts and Sciences 48:387–507.
• W. H. Greub (1967) Linear Algebra, Springer-Verlag. See pages 272 to 274.
• Louis Kauffman (1985) "Transformations in Special Relativity", International Journal of Theoretical Physics 24:223–36.
• P. S. Modenov and A. S. Parkhomenko (1965) Geometric Transformations, volume one. See pages 104 to 106.
• J. M. Ottino (1989) The Kinematics of Mixing: stretching, chaos, transport, page 29, Cambridge University Press.
• Walter, Scott (1999) "The non-Euclidean style of Minkowskian relativity", in J. Gray (ed.), The Symbolic Universe: Geometry and Physics, Oxford University Press, pp. 91–127.
• Roman Stocker & A. E. Hosoi (2004) "Corner flow in free liquid films", Journal of Engineering Mathematics 50:267–88.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8765750527381897, "perplexity_flag": "middle"}
http://gambasdoc.org/help/comp/gb.opengl/gl/evalpoint2?en&v3
Gl.EvalPoint2 (gb.opengl)

`Static Sub EvalPoint2 ( I As Integer, J As Integer )`

Generate and evaluate a single point in a mesh.

### Parameters

i
Specifies the integer value for grid domain variable $\mathit{i}$.

j
Specifies the integer value for grid domain variable $\mathit{j}$ (Gl.EvalPoint2 only).

### Description

Gl.MapGrid and Gl.EvalMesh are used in tandem to efficiently generate and evaluate a series of evenly spaced map domain values. Gl.EvalPoint can be used to evaluate a single grid point in the same gridspace that is traversed by Gl.EvalMesh.

Calling Gl.EvalPoint1 is equivalent to calling

```
glEvalCoord1( $i·\Delta \mathit{u}+{\mathit{u}}_{1}$ );
```

where $\Delta \mathit{u}=\frac{\left({\mathit{u}}_{2}-{\mathit{u}}_{1}\right)}{\mathit{n}}$ and $\mathit{n}$, ${\mathit{u}}_{1}$, and ${\mathit{u}}_{2}$ are the arguments to the most recent Gl.MapGrid1 command. The one absolute numeric requirement is that if $\mathit{i}=\mathit{n}$, then the value computed from $\mathit{i}·\Delta \mathit{u}+{\mathit{u}}_{1}$ is exactly ${\mathit{u}}_{2}$.

In the two-dimensional case, Gl.EvalPoint2, let

$\Delta \mathit{u}=\frac{\left({\mathit{u}}_{2}-{\mathit{u}}_{1}\right)}{\mathit{n}}$

$\Delta \mathit{v}=\frac{\left({\mathit{v}}_{2}-{\mathit{v}}_{1}\right)}{\mathit{m}}$

where $\mathit{n}$, ${\mathit{u}}_{1}$, ${\mathit{u}}_{2}$, $\mathit{m}$, ${\mathit{v}}_{1}$, and ${\mathit{v}}_{2}$ are the arguments to the most recent Gl.MapGrid2 command. Then the Gl.EvalPoint2 command is equivalent to calling

```
glEvalCoord2( $i·\Delta \mathit{u}+{\mathit{u}}_{1},j·\Delta \mathit{v}+{\mathit{v}}_{1}$ );
```

The only absolute numeric requirements are that if $\mathit{i}=\mathit{n}$, then the value computed from $\mathit{i}·\Delta \mathit{u}+{\mathit{u}}_{1}$ is exactly ${\mathit{u}}_{2}$, and if $\mathit{j}=\mathit{m}$, then the value computed from $\mathit{j}·\Delta \mathit{v}+{\mathit{v}}_{1}$ is exactly ${\mathit{v}}_{2}$.

### Associated Gets

Gl.Get with argument Gl.MAP1_GRID_DOMAIN
Gl.Get with argument Gl.MAP2_GRID_DOMAIN
Gl.Get with argument Gl.MAP1_GRID_SEGMENTS
Gl.Get with argument Gl.MAP2_GRID_SEGMENTS

### See also

Gl.EvalCoord, Gl.EvalMesh, Gl.Map1, Gl.Map2, Gl.MapGrid

See original documentation on OpenGL website
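The index-to-domain formulas above are easy to sanity-check outside of OpenGL. Here is a small Python sketch (not Gambas; the grid parameters are arbitrary) of the mapping that Gl.EvalPoint2 uses; the actual evaluation of the map itself would still go through the Gl.* calls documented on this page.

```python
def grid_domain(i, j, n, u1, u2, m, v1, v2):
    """Domain point that EvalPoint2(i, j) evaluates, given the MapGrid2 arguments."""
    du = (u2 - u1) / n
    dv = (v2 - v1) / m
    return (i * du + u1, j * dv + v1)

# The stated requirement: i = n and j = m must land exactly on (u2, v2).
print(grid_domain(0, 0, 4, 0.0, 1.0, 5, -1.0, 1.0))   # (0.0, -1.0), the (u1, v1) corner
print(grid_domain(4, 5, 4, 0.0, 1.0, 5, -1.0, 1.0))   # (1.0, 1.0), the (u2, v2) corner
```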
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6241093277931213, "perplexity_flag": "middle"}